From 946b9fc10add5ea3955e3a06e065422f20d06927 Mon Sep 17 00:00:00 2001 From: Sunil Singh Date: Tue, 17 Mar 2026 15:54:04 -0700 Subject: [PATCH] Removing nginx-ingress, updating some phrasing regarding Traefik Ingress usage. Signed-off-by: Sunil Singh --- .../troubleshooting.md | 12 ++--- .../installation-references/tls-settings.md | 5 +- .../amazon-elb-load-balancer.md | 19 +++----- .../rke2-for-rancher.md | 47 +++++-------------- .../other-troubleshooting-tips/rancher-ha.md | 2 +- .../troubleshooting.md | 12 ++--- .../installation-references/tls-settings.md | 5 +- .../amazon-elb-load-balancer.md | 19 +++----- .../rke2-for-rancher.md | 47 +++++-------------- .../other-troubleshooting-tips/rancher-ha.md | 2 +- .../troubleshooting.md | 12 ++--- .../installation-references/tls-settings.md | 5 +- .../amazon-elb-load-balancer.md | 19 +++----- .../rke2-for-rancher.md | 47 +++++-------------- .../other-troubleshooting-tips/rancher-ha.md | 2 +- .../troubleshooting.md | 12 ++--- .../installation-references/tls-settings.md | 5 +- .../amazon-elb-load-balancer.md | 19 +++----- .../rke2-for-rancher.md | 47 +++++-------------- .../other-troubleshooting-tips/rancher-ha.md | 2 +- .../troubleshooting.md | 12 ++--- .../installation-references/tls-settings.md | 5 +- .../amazon-elb-load-balancer.md | 19 +++----- .../rke2-for-rancher.md | 47 +++++-------------- .../other-troubleshooting-tips/rancher-ha.md | 2 +- .../troubleshooting.md | 12 ++--- .../installation-references/tls-settings.md | 5 +- .../amazon-elb-load-balancer.md | 19 +++----- .../rke2-for-rancher.md | 47 +++++-------------- .../other-troubleshooting-tips/rancher-ha.md | 2 +- .../troubleshooting.md | 12 ++--- .../installation-references/tls-settings.md | 5 +- .../amazon-elb-load-balancer.md | 19 +++----- .../rke2-for-rancher.md | 47 +++++-------------- .../other-troubleshooting-tips/rancher-ha.md | 2 +- .../troubleshooting.md | 12 ++--- .../installation-references/tls-settings.md | 5 +- .../amazon-elb-load-balancer.md | 
19 +++----- .../rke2-for-rancher.md | 47 +++++-------------- .../other-troubleshooting-tips/rancher-ha.md | 2 +- .../troubleshooting.md | 12 ++--- .../installation-references/tls-settings.md | 5 +- .../amazon-elb-load-balancer.md | 19 +++----- .../rke2-for-rancher.md | 47 +++++-------------- .../other-troubleshooting-tips/rancher-ha.md | 2 +- .../troubleshooting.md | 12 ++--- .../installation-references/tls-settings.md | 5 +- .../amazon-elb-load-balancer.md | 19 +++----- .../rke2-for-rancher.md | 47 +++++-------------- .../other-troubleshooting-tips/rancher-ha.md | 2 +- .../troubleshooting.md | 12 ++--- .../installation-references/tls-settings.md | 5 +- .../amazon-elb-load-balancer.md | 19 +++----- .../rke2-for-rancher.md | 47 +++++-------------- .../other-troubleshooting-tips/rancher-ha.md | 2 +- .../troubleshooting.md | 12 ++--- .../installation-references/tls-settings.md | 5 +- .../amazon-elb-load-balancer.md | 19 +++----- .../rke2-for-rancher.md | 47 +++++-------------- .../other-troubleshooting-tips/rancher-ha.md | 2 +- 60 files changed, 324 insertions(+), 696 deletions(-) diff --git a/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md b/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md index 881a6d871a0..b35276f8c5a 100644 --- a/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md +++ b/docs/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md @@ -13,7 +13,7 @@ This section describes how to troubleshoot an installation of Rancher on a Kuber Most of the troubleshooting will be done on objects in these 3 namespaces. - `cattle-system` - `rancher` deployment and pods. -- `ingress-nginx` - Ingress controller pods and services. +- `traefik` - Ingress controller pods and services. - `cert-manager` - `cert-manager` pods. 
### "default backend - 404" @@ -115,7 +115,7 @@ Events: Your certs get applied directly to the Ingress object in the `cattle-system` namespace. -Check the status of the Ingress object and see if its ready. +Check the status of the Ingress object and see if it's ready. ``` kubectl -n cattle-system describe ingress @@ -123,12 +123,10 @@ kubectl -n cattle-system describe ingress If its ready and the SSL is still not working you may have a malformed cert or secret. -Check the nginx-ingress-controller logs. Because the nginx-ingress-controller has multiple containers in its pod you will need to specify the name of the container. +Check the `traefik` logs. ``` -kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller -... -W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found +kubectl -n traefik logs -l app=traefik ``` ### No matches for kind "Issuer" @@ -148,7 +146,7 @@ The most common cause of this issue is port 8472/UDP is not open between the nod Once the network issue is resolved, the `canal` pods should timeout and restart to establish their connections. -### nginx-ingress-controller Pods show RESTARTS +### Traefik Pods show RESTARTS The most common cause of this issue is the `canal` pods have failed to establish the overlay network. See [canal Pods show READY `2/3`](#canal-pods-show-ready-23) for troubleshooting. 
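The log check in the hunk above can be sketched as a pair of commands. This is a hedged sketch, not part of the patch: it assumes the Traefik pods carry the Helm chart's standard `app.kubernetes.io/name=traefik` label, and that the namespace may vary by distribution (a stock K3s install places Traefik in `kube-system` rather than a `traefik` namespace).

```shell
# Locate the Traefik controller pods, whatever namespace they run in
# (stock K3s installs Traefik into kube-system):
kubectl get pods -A -l app.kubernetes.io/name=traefik

# Tail the controller logs via the label selector rather than a
# hand-copied pod name; adjust -n to the namespace found above:
kubectl -n kube-system logs -l app.kubernetes.io/name=traefik --tail=100 -f
```

Using a label selector means the command keeps working after the pod is rescheduled and its generated name changes.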
diff --git a/docs/getting-started/installation-and-upgrade/installation-references/tls-settings.md b/docs/getting-started/installation-and-upgrade/installation-references/tls-settings.md index 3e6c745857c..240897b429c 100644 --- a/docs/getting-started/installation-and-upgrade/installation-references/tls-settings.md +++ b/docs/getting-started/installation-and-upgrade/installation-references/tls-settings.md @@ -10,10 +10,7 @@ Changing the default TLS settings depends on the chosen installation method. ## Running Rancher in a highly available Kubernetes cluster -When you install Rancher inside of a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. The possible TLS settings depend on the used ingress controller: - -* nginx-ingress-controller (default for RKE2): [Default TLS Version and Ciphers](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers). -* traefik (default for K3s): [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options). +When you install Rancher in a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. Traefik is the default Ingress for K3s and can also be used with RKE2; refer to [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options) for further information. ## Running Rancher in a single Docker container diff --git a/docs/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md b/docs/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md index f0157a3f6e0..14b5116a3fe 100644 --- a/docs/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md +++ b/docs/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md @@ -8,14 +8,12 @@ title: Setting up Amazon ELB Network Load Balancer This how-to guide describes how to set up a Network Load Balancer (NLB) in Amazon's EC2 service that will direct traffic to multiple instances on EC2. 
-These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on an RKE Kubernetes cluster, three nodes are required. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required. +These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required. This tutorial is about one possible way to set up your load balancer, not the only way. Other types of load balancers, such as a Classic Load Balancer or Application Load Balancer, could also direct traffic to the Rancher server nodes. Rancher only supports using the Amazon NLB when terminating traffic in `tcp` mode for port 443 rather than `tls` mode. This is due to the fact that the NLB does not inject the correct headers into requests when terminated at the NLB. This means that if you want to use certificates managed by the Amazon Certificate Manager (ACM), you should use an ALB. - - ## Requirements These instructions assume you have already created Linux instances in EC2. The load balancer will direct traffic to these nodes. @@ -26,7 +24,7 @@ Begin by creating two target groups for the **TCP** protocol, one with TCP port Your first NLB configuration step is to create two target groups. Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener for port 80, because traffic to port 80 will be automatically redirected to port 443. -Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, the Ingress should redirect traffic from port 80 to port 443. +The Traefik Ingress should redirect traffic from port 80 to port 443. 1. Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started. Make sure to select the **Region** where your EC2 instances (Linux nodes) are created. 1. 
Select **Services** and choose **EC2**, find the section **Load Balancing** and open **Target Groups**. @@ -34,7 +32,7 @@ Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, th :::note -Health checks are handled differently based on the Ingress. For details, refer to [this section.](#health-check-paths-for-nginx-ingress-and-traefik-ingresses) +For details on Traefik Ingress health checks, refer to [this section.](#health-check-paths-for-traefik-ingresses) ::: @@ -167,13 +165,10 @@ After AWS creates the NLB, click **Close**. 6. Click **Save** in the top right of the screen. -## Health Check Paths for NGINX Ingress and Traefik Ingresses +## Health Check Paths for Traefik Ingresses -K3s and RKE Kubernetes clusters handle health checks differently because they use different Ingresses by default. +K3s Kubernetes clusters use Traefik as the default Ingress. -For RKE Kubernetes clusters, NGINX Ingress is used by default, whereas for K3s Kubernetes clusters, Traefik is the default Ingress. +The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **Traefik:** The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **NGINX Ingress:** The default backend of the NGINX Ingress controller has a `/healthz` endpoint. By default `/healthz` is always matched (regardless of Host), and a response from [`ingress-nginx` itself](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212) is always served. 
- -To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` or `/healthz` (for K3s or for RKE clusters, respectively) wherever possible, to get a response from the Rancher Pods, not the Ingress. +To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` wherever possible, to get a response from the Rancher Pods, not the Ingress. diff --git a/docs/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md b/docs/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md index 7a743f731a2..d9cdec4f64c 100644 --- a/docs/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md +++ b/docs/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md @@ -91,7 +91,7 @@ To use this `kubeconfig` file, 1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool. 2. Copy the file at `/etc/rancher/rke2/rke2.yaml` and save it to the directory `~/.kube/config` on your local machine. -3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the NGINX Ingress on ports 80 and 443.) Here is an example `rke2.yaml`: +3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the Traefik Ingress on ports 80 and 443.) 
Here is an example `rke2.yaml`: ```yml apiVersion: v1 @@ -131,39 +131,18 @@ Check that all the required pods and containers are healthy are ready to continu ``` /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s -kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s -kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s -kube-system etcd-rke2-server-1 1/1 Running 0 2m13s -kube-system etcd-rke2-server-2 1/1 Running 0 87s -kube-system etcd-rke2-server-3 1/1 Running 0 56s -kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s -kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s -kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 2m17s -kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s -kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s -kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s -kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s -kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s -kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s -kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s -kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s -kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s -kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s -kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s -kube-system rke2-canal-b9lvm 2/2 Running 0 91s -kube-system rke2-canal-khwp2 2/2 Running 0 2m5s -kube-system rke2-canal-swfmq 2/2 Running 0 105s -kube-system rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s -kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s -kube-system 
rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s -kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s -kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system cloud-controller-manager-my-node-1 1/1 Running 0 5d +kube-system etcd-my-node-1 1/1 Running 0 5d +kube-system helm-install-traefik-crd-z8vsz 0/1 Completed 0 5d +kube-system helm-install-traefik-h6n2q 0/1 Completed 0 5d +kube-system kube-apiserver-my-node-1 1/1 Running 0 5d +kube-system kube-proxy-my-node-1 1/1 Running 0 5d +kube-system kube-scheduler-my-node-1 1/1 Running 0 5d +kube-system rke2-canal-2j4ls 2/2 Running 0 5d +kube-system rke2-coredns-rke2-coredns-5c6b4d5-8f2mz 1/1 Running 0 5d +kube-system rke2-metrics-server-587b78-v9q2s 1/1 Running 0 5d +kube-system traefik-64f54698-m9p2w 1/1 Running 0 2d ``` **Result:** You have confirmed that you can access the cluster with `kubectl` and the RKE2 cluster is running successfully. Now the Rancher management server can be installed on the cluster. 
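The kubeconfig steps in the hunks above can be scripted. This is a sketch only, not part of the patch: `rke2.example.com` is a placeholder for your control-plane load-balancer DNS name, and the `sed` pattern assumes the stock `https://127.0.0.1:6443` server value written by RKE2.

```shell
# Copy the RKE2 kubeconfig off the server node, then repoint the
# server directive from localhost to the load balancer on port 6443
# (rke2.example.com is a placeholder):
mkdir -p ~/.kube
sudo cp /etc/rancher/rke2/rke2.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
sed -i 's#https://127.0.0.1:6443#https://rke2.example.com:6443#' ~/.kube/config

# Confirm the cluster answers through the load balancer:
kubectl get pods -A
```

If the final command lists the pods shown in the example output above, kubectl is reaching the API server through the load balancer rather than a single node.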
diff --git a/docs/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/docs/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 25845cdc87d..854010e4871 100644 --- a/docs/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/docs/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -69,7 +69,7 @@ rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher: ``` -kubectl -n ingress-nginx logs -l app=ingress-nginx +kubectl -n traefik logs -l app=traefik ``` ## Leader Election diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md index 9a391416f4b..172c342d867 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md @@ -9,7 +9,7 @@ title: Rancher Server Kubernetes 集群的问题排查 故障排除主要针对以下 3 个命名空间中的对象: - `cattle-system`:`rancher` deployment 和 Pod。 -- `ingress-nginx`:Ingress Controller Pod 和 services。 +- `traefik`:Ingress Controller Pod 和 services。 - `cert-manager`:`cert-manager` Pod。 ## "default backend - 404" @@ -117,14 +117,12 @@ Events: kubectl -n cattle-system describe ingress ``` -如果 Ingress 对象已就绪,但是 SSL 仍然无法正常工作,你的证书或密文的格式可能不正确。 +If it's ready and the SSL is still not working you may have a malformed cert or secret. -这种情况下,请检查 nginx-ingress-controller 的日志。nginx-ingress-controller 的 Pod 中有多个容器,因此你需要指定容器的名称: +Check the `traefik` logs. 
``` -kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller -... -W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found +kubectl -n traefik logs -l app=traefik ``` ## 没有匹配的 "Issuer" @@ -144,7 +142,7 @@ Error: validation failed: unable to recognize "": no matches for kind "Issuer" i 解决网络问题后,`canal` Pod 会超时并重启以建立连接。 -## nginx-ingress-controller Pod 显示 RESTARTS +## Traefik Pod 显示 RESTARTS 此问题的最常见原因是 `canal` pod 未能建立覆盖网络。参见 [canal Pod 显示 READY `2/3`](#canal-pod-显示-ready-23) 进行排查。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/installation-and-upgrade/installation-references/tls-settings.md b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/installation-and-upgrade/installation-references/tls-settings.md index d8ad91b7d67..077e19eb77b 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/installation-and-upgrade/installation-references/tls-settings.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/getting-started/installation-and-upgrade/installation-references/tls-settings.md @@ -6,10 +6,7 @@ title: TLS 设置 ## 在高可用 Kubernetes 集群中运行 Rancher -当你在 Kubernetes 集群内安装 Rancher 时,TLS 会在集群的 Ingress Controller 上卸载。可用的 TLS 设置取决于使用的 Ingress Controller: - -* nginx-ingress-controller(default RKE2 默认):[默认的 TLS 版本和密码](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers)。 -* traefik(K3s 默认):[TLS 选项](https://doc.traefik.io/traefik/https/tls/#tls-options)。 +When you install Rancher in a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. Traefik is the default Ingress for K3s and can also be used with RKE2; refer to [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options) for further information. 
## 在单个 Docker 容器中运行 Rancher diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md index 6486e4fc4ca..1759e77ad07 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md @@ -4,14 +4,12 @@ title: 设置 Amazon NLB 网络负载均衡器 本文介绍了如何在 Amazon EC2 服务中设置 Amazon NLB 网络负载均衡器,用于将流量转发到 EC2 上的多个实例中。 -这些示例中,负载均衡器将流量转发到三个 Rancher Server 节点。如果 Rancher 安装在 RKE Kubernetes 集群上,则需要三个节点。如果 Rancher 安装在 K3s Kubernetes 集群上,则只需要两个节点。 +这些示例中,负载均衡器将流量转发到三个 Rancher Server 节点。如果 Rancher 安装在 K3s Kubernetes 集群上,则只需要两个节点。 本文介绍的只是配置负载均衡器的其中一种方式。其他负载均衡器如传统负载路由器(Classic Load Balancer)和应用负载路由器(Application Load Balancer),也可以将流量转发到 Rancher Server 节点。 Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量,而不支持 `TLS` 模式。这试因为在 NLB 终止时,NLB 不会将正确的标头注入请求中。如果你想使用由 Amazon Certificate Manager (ACM) 托管的证书,请使用 ALB。 - - ## 要求 你已在 EC2 中创建了 Linux 实例。此外,负载均衡器会把流量转发到这些节点。 @@ -22,7 +20,7 @@ Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量, 配置 NLB 的第一个步骤是创建两个目标组。一般来说,只需要端口 443 就可以访问 Rancher。但是,由于端口 80 的流量会被自动重定向到端口 443,因此,你也可以为端口 80 也添加一个监听器。 -不管使用的是 NGINX Ingress 还是 Traefik Ingress Controller,Ingress 都应该将端口 80 的流量重定向到端口 443。以下为操作步骤: +The Traefik Ingress should redirect traffic from port 80 to port 443. 1. 登录到 [Amazon AWS 控制台](https://console.aws.amazon.com/ec2/)。确保选择的**区域**是你创建 EC2 实例 (Linux 节点)的区域。 1. 
选择**服务** > **EC2**,找到**负载均衡器**并打开**目标组**。 @@ -30,7 +28,7 @@ Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量, :::note -不同 Ingress 的健康检查处理方法不同。详情请参阅[本节](#nginx-ingress-和-traefik-ingress-的健康检查路径)。 +For details on Traefik Ingress health checks, refer to [this section.](#health-check-paths-for-traefik-ingresses) ::: @@ -163,13 +161,10 @@ AWS 完成 NLB 创建后,单击**关闭**。 6. 单击右上角的**保存**。 -## NGINX Ingress 和 Traefik Ingress 的健康检查路径 +## Health Check Paths for Traefik Ingresses -K3s 和 RKE Kubernetes 集群使用的默认 Ingress 不同,因此对应的健康检查方式也不同。 +K3s Kubernetes clusters use Traefik as the default Ingress. -RKE Kubernetes 集群默认使用 NGINX Ingress,而 K3s Kubernetes 集群默认使用 Traefik Ingress。 +The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **Traefik**:默认健康检查路径是 `/ping`。默认情况下,不管主机如何,`/ping` 总是匹配,而且 [Traefik 自身](https://docs.traefik.io/operations/ping/)总会响应。 -- **NGINX Ingress**:NGINX Ingress Controller 的默认后端有一个 `/healthz` 端点。默认情况下,不管主机如何,`/healthz` 总是匹配,而且 [`ingress-nginx` 自身](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212)总会响应。 - -想要精确模拟健康检查,最好是使用 Host 标头(Rancher hostname)加上 `/ping` 或 `/healthz`(分别对应 K3s 和 RKE 集群)来获取 Rancher Pod 的响应,而不是 Ingress 的响应。 +To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` wherever possible, to get a response from the Rancher Pods, not the Ingress. 
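The health-check distinction in the hunks above can be exercised from a shell. This is a sketch only, not part of the patch: `rancher.example.com` stands in for the Rancher hostname and `203.0.113.10` for a node behind the NLB (both placeholders), and it assumes `/ping` is matched by Traefik regardless of Host, as the docs state.

```shell
# Without the Rancher Host header, /ping is answered by the
# Ingress itself, so this only proves Traefik is up:
curl -sk https://203.0.113.10/ping

# With the Rancher hostname as the Host header, the request is
# routed to the Rancher pods, giving a true end-to-end check:
curl -sk -H "Host: rancher.example.com" https://203.0.113.10/ping
```

This is why the docs recommend including the Host header in load-balancer health checks wherever the target-group configuration allows it.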
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md index cc61efd50e4..6b43021aae9 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md @@ -87,7 +87,7 @@ systemctl start rke2-server.service 1. 安装 Kubernetes 命令行工具 [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl)。 2. 复制 `/etc/rancher/rke2/rke2.yaml` 文件并保存到本地主机的 `~/.kube/config` 目录上。 -3. 在 kubeconfig 文件中,`server` 的参数为 localhost。在端口 6443 上将服务器配置为 controlplane 负载均衡器的 DNS(RKE2 Kubernetes API Server 使用端口 6443,而 Rancher Server 将通过 NGINX Ingress 在端口 80 和 443 上提供服务。)以下是一个示例 `rke2.yaml`: +3. 在 kubeconfig 文件中,`server` 的参数为 localhost。在端口 6443 上将服务器配置为 controlplane 负载均衡器的 DNS(RKE2 Kubernetes API Server 使用端口 6443,而 Rancher Server 将通过 Traefik Ingress 在端口 80 和 443 上提供服务。)以下是一个示例 `rke2.yaml`: ```yml apiVersion: v1 @@ -127,39 +127,18 @@ kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces ``` /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s -kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s -kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s -kube-system etcd-rke2-server-1 1/1 Running 0 2m13s -kube-system etcd-rke2-server-2 1/1 Running 0 87s -kube-system etcd-rke2-server-3 1/1 Running 0 56s -kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s -kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s -kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 
2m17s -kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s -kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s -kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s -kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s -kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s -kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s -kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s -kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s -kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s -kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s -kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s -kube-system rke2-canal-b9lvm 2/2 Running 0 91s -kube-system rke2-canal-khwp2 2/2 Running 0 2m5s -kube-system rke2-canal-swfmq 2/2 Running 0 105s -kube-system rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s -kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s -kube-system rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s -kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s -kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system cloud-controller-manager-my-node-1 1/1 Running 0 5d +kube-system etcd-my-node-1 1/1 Running 0 5d +kube-system helm-install-traefik-crd-z8vsz 0/1 Completed 0 5d +kube-system helm-install-traefik-h6n2q 0/1 Completed 0 5d +kube-system kube-apiserver-my-node-1 1/1 Running 0 5d +kube-system kube-proxy-my-node-1 1/1 Running 0 5d +kube-system kube-scheduler-my-node-1 1/1 Running 0 5d +kube-system rke2-canal-2j4ls 2/2 Running 0 5d +kube-system 
rke2-coredns-rke2-coredns-5c6b4d5-8f2mz 1/1 Running 0 5d +kube-system rke2-metrics-server-587b78-v9q2s 1/1 Running 0 5d +kube-system traefik-64f54698-m9p2w 1/1 Running 0 2d ``` **结果**:你可通过使用 `kubectl` 访问集群,并且 RKE2 集群能成功运行。现在,你可以在集群上安装 Rancher Management Server。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/i18n/zh/docusaurus-plugin-content-docs/current/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 23e0ab21787..28d88c69a85 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/current/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/i18n/zh/docusaurus-plugin-content-docs/current/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -65,7 +65,7 @@ rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m 如果访问你配置的 Rancher FQDN 时没有显示 UI,请检查 Ingress Controller 日志以查看尝试访问 Rancher 时发生了什么: ``` -kubectl -n ingress-nginx logs -l app=ingress-nginx +kubectl -n traefik logs -l app=traefik ``` ## Leader 选举 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.10/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.10/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md index b303fc0c1c9..e1af566af0d 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.10/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.10/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md @@ -9,7 +9,7 @@ title: Rancher Server Kubernetes 集群的问题排查 故障排除主要针对以下 3 个命名空间中的对象: - `cattle-system`:`rancher` deployment 和 Pod。 -- `ingress-nginx`:Ingress Controller Pod 和 services。 +- `traefik`:Ingress Controller Pod 和 services。 - `cert-manager`:`cert-manager` Pod。 ## "default backend - 404" @@ -117,14 
+117,12 @@ Events: kubectl -n cattle-system describe ingress ``` -如果 Ingress 对象已就绪,但是 SSL 仍然无法正常工作,你的证书或密文的格式可能不正确。 +If it's ready and the SSL is still not working you may have a malformed cert or secret. -这种情况下,请检查 nginx-ingress-controller 的日志。nginx-ingress-controller 的 Pod 中有多个容器,因此你需要指定容器的名称: +Check the `traefik` logs. ``` -kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller -... -W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found +kubectl -n traefik logs -l app=traefik ``` ## 没有匹配的 "Issuer" @@ -144,7 +142,7 @@ Error: validation failed: unable to recognize "": no matches for kind "Issuer" i 解决网络问题后,`canal` Pod 会超时并重启以建立连接。 -## nginx-ingress-controller Pod 显示 RESTARTS +## Traefik Pod 显示 RESTARTS 此问题的最常见原因是 `canal` pod 未能建立覆盖网络。参见 [canal Pod 显示 READY `2/3`](#canal-pod-显示-ready-23) 进行排查。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.10/getting-started/installation-and-upgrade/installation-references/tls-settings.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.10/getting-started/installation-and-upgrade/installation-references/tls-settings.md index 25f5652fe6e..077e19eb77b 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.10/getting-started/installation-and-upgrade/installation-references/tls-settings.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.10/getting-started/installation-and-upgrade/installation-references/tls-settings.md @@ -6,10 +6,7 @@ title: TLS 设置 ## 在高可用 Kubernetes 集群中运行 Rancher -当你在 Kubernetes 集群内安装 Rancher 时,TLS 会在集群的 Ingress Controller 上卸载。可用的 TLS 设置取决于使用的 Ingress Controller: - -* nginx-ingress-controller(RKE1 和 RKE2 默认):[默认的 TLS 版本和密码](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers)。 -* traefik(K3s 默认):[TLS 选项](https://doc.traefik.io/traefik/https/tls/#tls-options)。 
+When you install Rancher in a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. Traefik is the default Ingress for K3s and can also be used with RKE2; refer to [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options) for further information. ## 在单个 Docker 容器中运行 Rancher diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.10/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.10/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md index 6486e4fc4ca..1759e77ad07 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.10/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.10/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md @@ -4,14 +4,12 @@ title: 设置 Amazon NLB 网络负载均衡器 本文介绍了如何在 Amazon EC2 服务中设置 Amazon NLB 网络负载均衡器,用于将流量转发到 EC2 上的多个实例中。 -这些示例中,负载均衡器将流量转发到三个 Rancher Server 节点。如果 Rancher 安装在 RKE Kubernetes 集群上,则需要三个节点。如果 Rancher 安装在 K3s Kubernetes 集群上,则只需要两个节点。 +这些示例中,负载均衡器将流量转发到三个 Rancher Server 节点。如果 Rancher 安装在 K3s Kubernetes 集群上,则只需要两个节点。 本文介绍的只是配置负载均衡器的其中一种方式。其他负载均衡器如传统负载路由器(Classic Load Balancer)和应用负载路由器(Application Load Balancer),也可以将流量转发到 Rancher Server 节点。 Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量,而不支持 `TLS` 模式。这试因为在 NLB 终止时,NLB 不会将正确的标头注入请求中。如果你想使用由 Amazon Certificate Manager (ACM) 托管的证书,请使用 ALB。 - - ## 要求 你已在 EC2 中创建了 Linux 实例。此外,负载均衡器会把流量转发到这些节点。 @@ -22,7 +20,7 @@ Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量, 配置 NLB 的第一个步骤是创建两个目标组。一般来说,只需要端口 443 就可以访问 Rancher。但是,由于端口 80 的流量会被自动重定向到端口 443,因此,你也可以为端口 80 也添加一个监听器。 -不管使用的是 NGINX Ingress 还是 Traefik Ingress Controller,Ingress 都应该将端口 80 的流量重定向到端口 443。以下为操作步骤: +The Traefik Ingress should redirect traffic from port 80 to port 443. 1. 登录到 [Amazon AWS 控制台](https://console.aws.amazon.com/ec2/)。确保选择的**区域**是你创建 EC2 实例 (Linux 节点)的区域。 1. 
选择**服务** > **EC2**,找到**负载均衡器**并打开**目标组**。 @@ -30,7 +28,7 @@ Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量, :::note -不同 Ingress 的健康检查处理方法不同。详情请参阅[本节](#nginx-ingress-和-traefik-ingress-的健康检查路径)。 +For details on Traefik Ingress health checks, refer to [this section](#health-check-paths-for-traefik-ingresses). ::: @@ -163,13 +161,10 @@ AWS 完成 NLB 创建后,单击**关闭**。 6. 单击右上角的**保存**。 -## NGINX Ingress 和 Traefik Ingress 的健康检查路径 +## Health Check Paths for Traefik Ingresses -K3s 和 RKE Kubernetes 集群使用的默认 Ingress 不同,因此对应的健康检查方式也不同。 +K3s Kubernetes clusters use Traefik as the default Ingress controller. -RKE Kubernetes 集群默认使用 NGINX Ingress,而 K3s Kubernetes 集群默认使用 Traefik Ingress。 +The health check path is `/ping`. By default, `/ping` is always matched regardless of the Host header, and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **Traefik**:默认健康检查路径是 `/ping`。默认情况下,不管主机如何,`/ping` 总是匹配,而且 [Traefik 自身](https://docs.traefik.io/operations/ping/)总会响应。 -- **NGINX Ingress**:NGINX Ingress Controller 的默认后端有一个 `/healthz` 端点。默认情况下,不管主机如何,`/healthz` 总是匹配,而且 [`ingress-nginx` 自身](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212)总会响应。 - -想要精确模拟健康检查,最好是使用 Host 标头(Rancher hostname)加上 `/ping` 或 `/healthz`(分别对应 K3s 和 RKE 集群)来获取 Rancher Pod 的响应,而不是 Ingress 的响应。 +To simulate an accurate health check, it is a best practice to use the Host header (the Rancher hostname) combined with `/ping` wherever possible, so that the response comes from the Rancher Pods rather than from the Ingress.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.10/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.10/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md index cc61efd50e4..6b43021aae9 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.10/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.10/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md @@ -87,7 +87,7 @@ systemctl start rke2-server.service 1. 安装 Kubernetes 命令行工具 [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl)。 2. 复制 `/etc/rancher/rke2/rke2.yaml` 文件并保存到本地主机的 `~/.kube/config` 目录上。 -3. 在 kubeconfig 文件中,`server` 的参数为 localhost。在端口 6443 上将服务器配置为 controlplane 负载均衡器的 DNS(RKE2 Kubernetes API Server 使用端口 6443,而 Rancher Server 将通过 NGINX Ingress 在端口 80 和 443 上提供服务。)以下是一个示例 `rke2.yaml`: +3. 
在 kubeconfig 文件中,`server` 的参数为 localhost。在端口 6443 上将服务器配置为 controlplane 负载均衡器的 DNS(RKE2 Kubernetes API Server 使用端口 6443,而 Rancher Server 将通过 Traefik Ingress 在端口 80 和 443 上提供服务。)以下是一个示例 `rke2.yaml`: ```yml apiVersion: v1 @@ -127,39 +127,18 @@ kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces ``` /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s -kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s -kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s -kube-system etcd-rke2-server-1 1/1 Running 0 2m13s -kube-system etcd-rke2-server-2 1/1 Running 0 87s -kube-system etcd-rke2-server-3 1/1 Running 0 56s -kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s -kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s -kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 2m17s -kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s -kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s -kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s -kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s -kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s -kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s -kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s -kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s -kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s -kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s -kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s -kube-system rke2-canal-b9lvm 2/2 Running 0 91s -kube-system rke2-canal-khwp2 2/2 Running 0 2m5s -kube-system rke2-canal-swfmq 2/2 Running 0 105s -kube-system 
rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s -kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s -kube-system rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s -kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s -kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system cloud-controller-manager-my-node-1 1/1 Running 0 5d +kube-system etcd-my-node-1 1/1 Running 0 5d +kube-system helm-install-traefik-crd-z8vsz 0/1 Completed 0 5d +kube-system helm-install-traefik-h6n2q 0/1 Completed 0 5d +kube-system kube-apiserver-my-node-1 1/1 Running 0 5d +kube-system kube-proxy-my-node-1 1/1 Running 0 5d +kube-system kube-scheduler-my-node-1 1/1 Running 0 5d +kube-system rke2-canal-2j4ls 2/2 Running 0 5d +kube-system rke2-coredns-rke2-coredns-5c6b4d5-8f2mz 1/1 Running 0 5d +kube-system rke2-metrics-server-587b78-v9q2s 1/1 Running 0 5d +kube-system traefik-64f54698-m9p2w 1/1 Running 0 2d ``` **结果**:你可通过使用 `kubectl` 访问集群,并且 RKE2 集群能成功运行。现在,你可以在集群上安装 Rancher Management Server。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.10/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.10/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 23e0ab21787..28d88c69a85 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.10/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.10/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -65,7 +65,7 @@ rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m 如果访问你配置的 Rancher FQDN 时没有显示 UI,请检查 Ingress Controller 日志以查看尝试访问 Rancher 时发生了什么: ``` -kubectl -n ingress-nginx logs -l app=ingress-nginx +kubectl -n traefik logs -l 
app=traefik ``` ## Leader 选举 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.11/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.11/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md index b303fc0c1c9..140bfcda36a 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.11/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.11/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md @@ -9,7 +9,7 @@ title: Rancher Server Kubernetes 集群的问题排查 故障排除主要针对以下 3 个命名空间中的对象: - `cattle-system`:`rancher` deployment 和 Pod。 -- `ingress-nginx`:Ingress Controller Pod 和 services。 +- `traefik`:Ingress Controller Pod 和 services。 - `cert-manager`:`cert-manager` Pod。 ## "default backend - 404" @@ -117,14 +117,12 @@ Events: kubectl -n cattle-system describe ingress ``` -如果 Ingress 对象已就绪,但是 SSL 仍然无法正常工作,你的证书或密文的格式可能不正确。 +If the Ingress object is ready but SSL is still not working, your certificate or secret may be malformed. -这种情况下,请检查 nginx-ingress-controller 的日志。nginx-ingress-controller 的 Pod 中有多个容器,因此你需要指定容器的名称: +Check the Traefik logs: ``` -kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller -...
-W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found +kubectl -n traefik logs -l app=traefik ``` ## 没有匹配的 "Issuer" @@ -144,7 +142,7 @@ Error: validation failed: unable to recognize "": no matches for kind "Issuer" i 解决网络问题后,`canal` Pod 会超时并重启以建立连接。 -## nginx-ingress-controller Pod 显示 RESTARTS +## Traefik Pod 显示 RESTARTS 此问题的最常见原因是 `canal` pod 未能建立覆盖网络。参见 [canal Pod 显示 READY `2/3`](#canal-pod-显示-ready-23) 进行排查。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.11/getting-started/installation-and-upgrade/installation-references/tls-settings.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.11/getting-started/installation-and-upgrade/installation-references/tls-settings.md index 25f5652fe6e..077e19eb77b 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.11/getting-started/installation-and-upgrade/installation-references/tls-settings.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.11/getting-started/installation-and-upgrade/installation-references/tls-settings.md @@ -6,10 +6,7 @@ title: TLS 设置 ## 在高可用 Kubernetes 集群中运行 Rancher -当你在 Kubernetes 集群内安装 Rancher 时,TLS 会在集群的 Ingress Controller 上卸载。可用的 TLS 设置取决于使用的 Ingress Controller: - -* nginx-ingress-controller(RKE1 和 RKE2 默认):[默认的 TLS 版本和密码](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers)。 -* traefik(K3s 默认):[TLS 选项](https://doc.traefik.io/traefik/https/tls/#tls-options)。 +When you install Rancher on a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. Traefik is the default ingress controller for K3s and can also be used with RKE2. Refer to [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options) for more information.
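The Traefik TLS options linked above are configured through Traefik's `TLSOption` custom resource. A minimal sketch follows; the resource name, namespace, and cipher list are illustrative assumptions rather than values from this patch, and older Traefik releases use the `traefik.containo.us/v1alpha1` API group instead of `traefik.io/v1alpha1`:

```yml
# Illustrative TLSOption for the bundled Traefik ingress controller.
# Name, namespace, and cipher suites are example values.
apiVersion: traefik.io/v1alpha1
kind: TLSOption
metadata:
  name: default
  namespace: kube-system
spec:
  minVersion: VersionTLS12
  cipherSuites:
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
```

A `TLSOption` named `default` in Traefik's namespace applies to routes that do not explicitly reference another option.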
## 在单个 Docker 容器中运行 Rancher diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.11/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.11/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md index 6486e4fc4ca..1759e77ad07 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.11/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.11/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md @@ -4,14 +4,12 @@ title: 设置 Amazon NLB 网络负载均衡器 本文介绍了如何在 Amazon EC2 服务中设置 Amazon NLB 网络负载均衡器,用于将流量转发到 EC2 上的多个实例中。 -这些示例中,负载均衡器将流量转发到三个 Rancher Server 节点。如果 Rancher 安装在 RKE Kubernetes 集群上,则需要三个节点。如果 Rancher 安装在 K3s Kubernetes 集群上,则只需要两个节点。 +这些示例中,负载均衡器将流量转发到三个 Rancher Server 节点。如果 Rancher 安装在 K3s Kubernetes 集群上,则只需要两个节点。 本文介绍的只是配置负载均衡器的其中一种方式。其他负载均衡器如传统负载路由器(Classic Load Balancer)和应用负载路由器(Application Load Balancer),也可以将流量转发到 Rancher Server 节点。 Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量,而不支持 `TLS` 模式。这试因为在 NLB 终止时,NLB 不会将正确的标头注入请求中。如果你想使用由 Amazon Certificate Manager (ACM) 托管的证书,请使用 ALB。 - - ## 要求 你已在 EC2 中创建了 Linux 实例。此外,负载均衡器会把流量转发到这些节点。 @@ -22,7 +20,7 @@ Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量, 配置 NLB 的第一个步骤是创建两个目标组。一般来说,只需要端口 443 就可以访问 Rancher。但是,由于端口 80 的流量会被自动重定向到端口 443,因此,你也可以为端口 80 也添加一个监听器。 -不管使用的是 NGINX Ingress 还是 Traefik Ingress Controller,Ingress 都应该将端口 80 的流量重定向到端口 443。以下为操作步骤: +The Traefik Ingress should redirect traffic from port 80 to port 443. 1. 登录到 [Amazon AWS 控制台](https://console.aws.amazon.com/ec2/)。确保选择的**区域**是你创建 EC2 实例 (Linux 节点)的区域。 1. 
选择**服务** > **EC2**,找到**负载均衡器**并打开**目标组**。 @@ -30,7 +28,7 @@ Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量, :::note -不同 Ingress 的健康检查处理方法不同。详情请参阅[本节](#nginx-ingress-和-traefik-ingress-的健康检查路径)。 +For details on Traefik Ingress health checks, refer to [this section](#health-check-paths-for-traefik-ingresses). ::: @@ -163,13 +161,10 @@ AWS 完成 NLB 创建后,单击**关闭**。 6. 单击右上角的**保存**。 -## NGINX Ingress 和 Traefik Ingress 的健康检查路径 +## Health Check Paths for Traefik Ingresses -K3s 和 RKE Kubernetes 集群使用的默认 Ingress 不同,因此对应的健康检查方式也不同。 +K3s Kubernetes clusters use Traefik as the default Ingress controller. -RKE Kubernetes 集群默认使用 NGINX Ingress,而 K3s Kubernetes 集群默认使用 Traefik Ingress。 +The health check path is `/ping`. By default, `/ping` is always matched regardless of the Host header, and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **Traefik**:默认健康检查路径是 `/ping`。默认情况下,不管主机如何,`/ping` 总是匹配,而且 [Traefik 自身](https://docs.traefik.io/operations/ping/)总会响应。 -- **NGINX Ingress**:NGINX Ingress Controller 的默认后端有一个 `/healthz` 端点。默认情况下,不管主机如何,`/healthz` 总是匹配,而且 [`ingress-nginx` 自身](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212)总会响应。 - -想要精确模拟健康检查,最好是使用 Host 标头(Rancher hostname)加上 `/ping` 或 `/healthz`(分别对应 K3s 和 RKE 集群)来获取 Rancher Pod 的响应,而不是 Ingress 的响应。 +To simulate an accurate health check, it is a best practice to use the Host header (the Rancher hostname) combined with `/ping` wherever possible, so that the response comes from the Rancher Pods rather than from the Ingress.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.11/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.11/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md index cc61efd50e4..6b43021aae9 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.11/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.11/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md @@ -87,7 +87,7 @@ systemctl start rke2-server.service 1. 安装 Kubernetes 命令行工具 [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl)。 2. 复制 `/etc/rancher/rke2/rke2.yaml` 文件并保存到本地主机的 `~/.kube/config` 目录上。 -3. 在 kubeconfig 文件中,`server` 的参数为 localhost。在端口 6443 上将服务器配置为 controlplane 负载均衡器的 DNS(RKE2 Kubernetes API Server 使用端口 6443,而 Rancher Server 将通过 NGINX Ingress 在端口 80 和 443 上提供服务。)以下是一个示例 `rke2.yaml`: +3. 
在 kubeconfig 文件中,`server` 的参数为 localhost。在端口 6443 上将服务器配置为 controlplane 负载均衡器的 DNS(RKE2 Kubernetes API Server 使用端口 6443,而 Rancher Server 将通过 Traefik Ingress 在端口 80 和 443 上提供服务。)以下是一个示例 `rke2.yaml`: ```yml apiVersion: v1 @@ -127,39 +127,18 @@ kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces ``` /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s -kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s -kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s -kube-system etcd-rke2-server-1 1/1 Running 0 2m13s -kube-system etcd-rke2-server-2 1/1 Running 0 87s -kube-system etcd-rke2-server-3 1/1 Running 0 56s -kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s -kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s -kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 2m17s -kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s -kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s -kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s -kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s -kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s -kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s -kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s -kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s -kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s -kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s -kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s -kube-system rke2-canal-b9lvm 2/2 Running 0 91s -kube-system rke2-canal-khwp2 2/2 Running 0 2m5s -kube-system rke2-canal-swfmq 2/2 Running 0 105s -kube-system 
rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s -kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s -kube-system rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s -kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s -kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system cloud-controller-manager-my-node-1 1/1 Running 0 5d +kube-system etcd-my-node-1 1/1 Running 0 5d +kube-system helm-install-traefik-crd-z8vsz 0/1 Completed 0 5d +kube-system helm-install-traefik-h6n2q 0/1 Completed 0 5d +kube-system kube-apiserver-my-node-1 1/1 Running 0 5d +kube-system kube-proxy-my-node-1 1/1 Running 0 5d +kube-system kube-scheduler-my-node-1 1/1 Running 0 5d +kube-system rke2-canal-2j4ls 2/2 Running 0 5d +kube-system rke2-coredns-rke2-coredns-5c6b4d5-8f2mz 1/1 Running 0 5d +kube-system rke2-metrics-server-587b78-v9q2s 1/1 Running 0 5d +kube-system traefik-64f54698-m9p2w 1/1 Running 0 2d ``` **结果**:你可通过使用 `kubectl` 访问集群,并且 RKE2 集群能成功运行。现在,你可以在集群上安装 Rancher Management Server。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.11/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.11/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 23e0ab21787..28d88c69a85 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.11/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.11/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -65,7 +65,7 @@ rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m 如果访问你配置的 Rancher FQDN 时没有显示 UI,请检查 Ingress Controller 日志以查看尝试访问 Rancher 时发生了什么: ``` -kubectl -n ingress-nginx logs -l app=ingress-nginx +kubectl -n traefik logs -l 
app=traefik ``` ## Leader 选举 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md index 9a391416f4b..172c342d867 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md @@ -9,7 +9,7 @@ title: Rancher Server Kubernetes 集群的问题排查 故障排除主要针对以下 3 个命名空间中的对象: - `cattle-system`:`rancher` deployment 和 Pod。 -- `ingress-nginx`:Ingress Controller Pod 和 services。 +- `traefik`:Ingress Controller Pod 和 services。 - `cert-manager`:`cert-manager` Pod。 ## "default backend - 404" @@ -117,14 +117,12 @@ Events: kubectl -n cattle-system describe ingress ``` -如果 Ingress 对象已就绪,但是 SSL 仍然无法正常工作,你的证书或密文的格式可能不正确。 +If the Ingress object is ready but SSL is still not working, your certificate or secret may be malformed. -这种情况下,请检查 nginx-ingress-controller 的日志。nginx-ingress-controller 的 Pod 中有多个容器,因此你需要指定容器的名称: +Check the Traefik logs: ``` -kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller -...
-W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found +kubectl -n traefik logs -l app=traefik ``` ## 没有匹配的 "Issuer" @@ -144,7 +142,7 @@ Error: validation failed: unable to recognize "": no matches for kind "Issuer" i 解决网络问题后,`canal` Pod 会超时并重启以建立连接。 -## nginx-ingress-controller Pod 显示 RESTARTS +## Traefik Pod 显示 RESTARTS 此问题的最常见原因是 `canal` pod 未能建立覆盖网络。参见 [canal Pod 显示 READY `2/3`](#canal-pod-显示-ready-23) 进行排查。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/getting-started/installation-and-upgrade/installation-references/tls-settings.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/getting-started/installation-and-upgrade/installation-references/tls-settings.md index d8ad91b7d67..077e19eb77b 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/getting-started/installation-and-upgrade/installation-references/tls-settings.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/getting-started/installation-and-upgrade/installation-references/tls-settings.md @@ -6,10 +6,7 @@ title: TLS 设置 ## 在高可用 Kubernetes 集群中运行 Rancher -当你在 Kubernetes 集群内安装 Rancher 时,TLS 会在集群的 Ingress Controller 上卸载。可用的 TLS 设置取决于使用的 Ingress Controller: - -* nginx-ingress-controller(default RKE2 默认):[默认的 TLS 版本和密码](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers)。 -* traefik(K3s 默认):[TLS 选项](https://doc.traefik.io/traefik/https/tls/#tls-options)。 +When you install Rancher on a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. Traefik is the default ingress controller for K3s and can also be used with RKE2. Refer to [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options) for more information.
## 在单个 Docker 容器中运行 Rancher diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md index 6486e4fc4ca..1759e77ad07 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md @@ -4,14 +4,12 @@ title: 设置 Amazon NLB 网络负载均衡器 本文介绍了如何在 Amazon EC2 服务中设置 Amazon NLB 网络负载均衡器,用于将流量转发到 EC2 上的多个实例中。 -这些示例中,负载均衡器将流量转发到三个 Rancher Server 节点。如果 Rancher 安装在 RKE Kubernetes 集群上,则需要三个节点。如果 Rancher 安装在 K3s Kubernetes 集群上,则只需要两个节点。 +这些示例中,负载均衡器将流量转发到三个 Rancher Server 节点。如果 Rancher 安装在 K3s Kubernetes 集群上,则只需要两个节点。 本文介绍的只是配置负载均衡器的其中一种方式。其他负载均衡器如传统负载路由器(Classic Load Balancer)和应用负载路由器(Application Load Balancer),也可以将流量转发到 Rancher Server 节点。 Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量,而不支持 `TLS` 模式。这试因为在 NLB 终止时,NLB 不会将正确的标头注入请求中。如果你想使用由 Amazon Certificate Manager (ACM) 托管的证书,请使用 ALB。 - - ## 要求 你已在 EC2 中创建了 Linux 实例。此外,负载均衡器会把流量转发到这些节点。 @@ -22,7 +20,7 @@ Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量, 配置 NLB 的第一个步骤是创建两个目标组。一般来说,只需要端口 443 就可以访问 Rancher。但是,由于端口 80 的流量会被自动重定向到端口 443,因此,你也可以为端口 80 也添加一个监听器。 -不管使用的是 NGINX Ingress 还是 Traefik Ingress Controller,Ingress 都应该将端口 80 的流量重定向到端口 443。以下为操作步骤: +The Traefik Ingress should redirect traffic from port 80 to port 443. 1. 登录到 [Amazon AWS 控制台](https://console.aws.amazon.com/ec2/)。确保选择的**区域**是你创建 EC2 实例 (Linux 节点)的区域。 1. 
选择**服务** > **EC2**,找到**负载均衡器**并打开**目标组**。 @@ -30,7 +28,7 @@ Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量, :::note -不同 Ingress 的健康检查处理方法不同。详情请参阅[本节](#nginx-ingress-和-traefik-ingress-的健康检查路径)。 +For details on Traefik Ingress health checks, refer to [this section](#health-check-paths-for-traefik-ingresses). ::: @@ -163,13 +161,10 @@ AWS 完成 NLB 创建后,单击**关闭**。 6. 单击右上角的**保存**。 -## NGINX Ingress 和 Traefik Ingress 的健康检查路径 +## Health Check Paths for Traefik Ingresses -K3s 和 RKE Kubernetes 集群使用的默认 Ingress 不同,因此对应的健康检查方式也不同。 +K3s Kubernetes clusters use Traefik as the default Ingress controller. -RKE Kubernetes 集群默认使用 NGINX Ingress,而 K3s Kubernetes 集群默认使用 Traefik Ingress。 +The health check path is `/ping`. By default, `/ping` is always matched regardless of the Host header, and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **Traefik**:默认健康检查路径是 `/ping`。默认情况下,不管主机如何,`/ping` 总是匹配,而且 [Traefik 自身](https://docs.traefik.io/operations/ping/)总会响应。 -- **NGINX Ingress**:NGINX Ingress Controller 的默认后端有一个 `/healthz` 端点。默认情况下,不管主机如何,`/healthz` 总是匹配,而且 [`ingress-nginx` 自身](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212)总会响应。 - -想要精确模拟健康检查,最好是使用 Host 标头(Rancher hostname)加上 `/ping` 或 `/healthz`(分别对应 K3s 和 RKE 集群)来获取 Rancher Pod 的响应,而不是 Ingress 的响应。 +To simulate an accurate health check, it is a best practice to use the Host header (the Rancher hostname) combined with `/ping` wherever possible, so that the response comes from the Rancher Pods rather than from the Ingress.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md index cc61efd50e4..6b43021aae9 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md @@ -87,7 +87,7 @@ systemctl start rke2-server.service 1. 安装 Kubernetes 命令行工具 [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl)。 2. 复制 `/etc/rancher/rke2/rke2.yaml` 文件并保存到本地主机的 `~/.kube/config` 目录上。 -3. 在 kubeconfig 文件中,`server` 的参数为 localhost。在端口 6443 上将服务器配置为 controlplane 负载均衡器的 DNS(RKE2 Kubernetes API Server 使用端口 6443,而 Rancher Server 将通过 NGINX Ingress 在端口 80 和 443 上提供服务。)以下是一个示例 `rke2.yaml`: +3. 
在 kubeconfig 文件中,`server` 的参数为 localhost。在端口 6443 上将服务器配置为 controlplane 负载均衡器的 DNS(RKE2 Kubernetes API Server 使用端口 6443,而 Rancher Server 将通过 Traefik Ingress 在端口 80 和 443 上提供服务。)以下是一个示例 `rke2.yaml`: ```yml apiVersion: v1 @@ -127,39 +127,18 @@ kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces ``` /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s -kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s -kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s -kube-system etcd-rke2-server-1 1/1 Running 0 2m13s -kube-system etcd-rke2-server-2 1/1 Running 0 87s -kube-system etcd-rke2-server-3 1/1 Running 0 56s -kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s -kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s -kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 2m17s -kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s -kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s -kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s -kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s -kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s -kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s -kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s -kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s -kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s -kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s -kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s -kube-system rke2-canal-b9lvm 2/2 Running 0 91s -kube-system rke2-canal-khwp2 2/2 Running 0 2m5s -kube-system rke2-canal-swfmq 2/2 Running 0 105s -kube-system 
rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s -kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s -kube-system rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s -kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s -kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system cloud-controller-manager-my-node-1 1/1 Running 0 5d +kube-system etcd-my-node-1 1/1 Running 0 5d +kube-system helm-install-traefik-crd-z8vsz 0/1 Completed 0 5d +kube-system helm-install-traefik-h6n2q 0/1 Completed 0 5d +kube-system kube-apiserver-my-node-1 1/1 Running 0 5d +kube-system kube-proxy-my-node-1 1/1 Running 0 5d +kube-system kube-scheduler-my-node-1 1/1 Running 0 5d +kube-system rke2-canal-2j4ls 2/2 Running 0 5d +kube-system rke2-coredns-rke2-coredns-5c6b4d5-8f2mz 1/1 Running 0 5d +kube-system rke2-metrics-server-587b78-v9q2s 1/1 Running 0 5d +kube-system traefik-64f54698-m9p2w 1/1 Running 0 2d ``` **结果**:你可通过使用 `kubectl` 访问集群,并且 RKE2 集群能成功运行。现在,你可以在集群上安装 Rancher Management Server。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 23e0ab21787..28d88c69a85 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.12/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.12/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -65,7 +65,7 @@ rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m 如果访问你配置的 Rancher FQDN 时没有显示 UI,请检查 Ingress Controller 日志以查看尝试访问 Rancher 时发生了什么: ``` -kubectl -n ingress-nginx logs -l app=ingress-nginx +kubectl -n traefik logs -l 
app=traefik ``` ## Leader 选举 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md index 9a391416f4b..172c342d867 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md @@ -9,7 +9,7 @@ title: Rancher Server Kubernetes 集群的问题排查 故障排除主要针对以下 3 个命名空间中的对象: - `cattle-system`:`rancher` deployment 和 Pod。 -- `ingress-nginx`:Ingress Controller Pod 和 services。 +- `traefik`:Ingress Controller Pod 和 services。 - `cert-manager`:`cert-manager` Pod。 ## "default backend - 404" @@ -117,14 +117,12 @@ Events: kubectl -n cattle-system describe ingress ``` -如果 Ingress 对象已就绪,但是 SSL 仍然无法正常工作,你的证书或密文的格式可能不正确。 +If the Ingress object is ready but SSL is still not working, your certificate or secret may be malformed. -这种情况下,请检查 nginx-ingress-controller 的日志。nginx-ingress-controller 的 Pod 中有多个容器,因此你需要指定容器的名称: +Check the Traefik logs: ``` -kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller -...
-W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found +kubectl -n traefik logs -l app=traefik ``` ## 没有匹配的 "Issuer" @@ -144,7 +142,7 @@ Error: validation failed: unable to recognize "": no matches for kind "Issuer" i 解决网络问题后,`canal` Pod 会超时并重启以建立连接。 -## nginx-ingress-controller Pod 显示 RESTARTS +## Traefik Pod 显示 RESTARTS 此问题的最常见原因是 `canal` pod 未能建立覆盖网络。参见 [canal Pod 显示 READY `2/3`](#canal-pod-显示-ready-23) 进行排查。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/getting-started/installation-and-upgrade/installation-references/tls-settings.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/getting-started/installation-and-upgrade/installation-references/tls-settings.md index d8ad91b7d67..077e19eb77b 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/getting-started/installation-and-upgrade/installation-references/tls-settings.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/getting-started/installation-and-upgrade/installation-references/tls-settings.md @@ -6,10 +6,7 @@ title: TLS 设置 ## 在高可用 Kubernetes 集群中运行 Rancher -当你在 Kubernetes 集群内安装 Rancher 时,TLS 会在集群的 Ingress Controller 上卸载。可用的 TLS 设置取决于使用的 Ingress Controller: - -* nginx-ingress-controller(default RKE2 默认):[默认的 TLS 版本和密码](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers)。 -* traefik(K3s 默认):[TLS 选项](https://doc.traefik.io/traefik/https/tls/#tls-options)。 +When you install Rancher on a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. Traefik is the default ingress controller for K3s and can also be used with RKE2. Refer to [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options) for more information.
## 在单个 Docker 容器中运行 Rancher diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md index 6486e4fc4ca..1759e77ad07 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md @@ -4,14 +4,12 @@ title: 设置 Amazon NLB 网络负载均衡器 本文介绍了如何在 Amazon EC2 服务中设置 Amazon NLB 网络负载均衡器,用于将流量转发到 EC2 上的多个实例中。 -这些示例中,负载均衡器将流量转发到三个 Rancher Server 节点。如果 Rancher 安装在 RKE Kubernetes 集群上,则需要三个节点。如果 Rancher 安装在 K3s Kubernetes 集群上,则只需要两个节点。 +这些示例中,负载均衡器将流量转发到三个 Rancher Server 节点。如果 Rancher 安装在 K3s Kubernetes 集群上,则只需要两个节点。 本文介绍的只是配置负载均衡器的其中一种方式。其他负载均衡器如传统负载路由器(Classic Load Balancer)和应用负载路由器(Application Load Balancer),也可以将流量转发到 Rancher Server 节点。 Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量,而不支持 `TLS` 模式。这试因为在 NLB 终止时,NLB 不会将正确的标头注入请求中。如果你想使用由 Amazon Certificate Manager (ACM) 托管的证书,请使用 ALB。 - - ## 要求 你已在 EC2 中创建了 Linux 实例。此外,负载均衡器会把流量转发到这些节点。 @@ -22,7 +20,7 @@ Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量, 配置 NLB 的第一个步骤是创建两个目标组。一般来说,只需要端口 443 就可以访问 Rancher。但是,由于端口 80 的流量会被自动重定向到端口 443,因此,你也可以为端口 80 也添加一个监听器。 -不管使用的是 NGINX Ingress 还是 Traefik Ingress Controller,Ingress 都应该将端口 80 的流量重定向到端口 443。以下为操作步骤: +The Traefik Ingress should redirect traffic from port 80 to port 443. 1. 登录到 [Amazon AWS 控制台](https://console.aws.amazon.com/ec2/)。确保选择的**区域**是你创建 EC2 实例 (Linux 节点)的区域。 1. 
选择**服务** > **EC2**,找到**负载均衡器**并打开**目标组**。 @@ -30,7 +28,7 @@ Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量, :::note -不同 Ingress 的健康检查处理方法不同。详情请参阅[本节](#nginx-ingress-和-traefik-ingress-的健康检查路径)。 +For details on Traefik Ingress health checks, refer to [this section](#health-check-paths-for-traefik-ingresses). ::: @@ -163,13 +161,10 @@ AWS 完成 NLB 创建后,单击**关闭**。 6. 单击右上角的**保存**。 -## NGINX Ingress 和 Traefik Ingress 的健康检查路径 +## Health Check Paths for Traefik Ingresses -K3s 和 RKE Kubernetes 集群使用的默认 Ingress 不同,因此对应的健康检查方式也不同。 +K3s Kubernetes clusters use Traefik as the default Ingress. -RKE Kubernetes 集群默认使用 NGINX Ingress,而 K3s Kubernetes 集群默认使用 Traefik Ingress。 +The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **Traefik**:默认健康检查路径是 `/ping`。默认情况下,不管主机如何,`/ping` 总是匹配,而且 [Traefik 自身](https://docs.traefik.io/operations/ping/)总会响应。 -- **NGINX Ingress**:NGINX Ingress Controller 的默认后端有一个 `/healthz` 端点。默认情况下,不管主机如何,`/healthz` 总是匹配,而且 [`ingress-nginx` 自身](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212)总会响应。 - -想要精确模拟健康检查,最好是使用 Host 标头(Rancher hostname)加上 `/ping` 或 `/healthz`(分别对应 K3s 和 RKE 集群)来获取 Rancher Pod 的响应,而不是 Ingress 的响应。 +To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` wherever possible, to get a response from the Rancher Pods, not the Ingress.
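As a hedged illustration of the health-check distinction described above (the hostname `rancher.example.com` and node address `203.0.113.10` are placeholders, not values from this patch):

```
# Without a Host header, Traefik itself answers /ping (regardless of Host).
curl -sk https://203.0.113.10/ping

# With the Rancher hostname as the Host header, the request is routed through
# the Ingress to the Rancher Pods, which serve their own /ping endpoint.
curl -sk -H "Host: rancher.example.com" https://203.0.113.10/ping
```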
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md index cc61efd50e4..6b43021aae9 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md @@ -87,7 +87,7 @@ systemctl start rke2-server.service 1. 安装 Kubernetes 命令行工具 [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl)。 2. 复制 `/etc/rancher/rke2/rke2.yaml` 文件并保存到本地主机的 `~/.kube/config` 目录上。 -3. 在 kubeconfig 文件中,`server` 的参数为 localhost。在端口 6443 上将服务器配置为 controlplane 负载均衡器的 DNS(RKE2 Kubernetes API Server 使用端口 6443,而 Rancher Server 将通过 NGINX Ingress 在端口 80 和 443 上提供服务。)以下是一个示例 `rke2.yaml`: +3. 
在 kubeconfig 文件中,`server` 的参数为 localhost。在端口 6443 上将服务器配置为 controlplane 负载均衡器的 DNS(RKE2 Kubernetes API Server 使用端口 6443,而 Rancher Server 将通过 Traefik Ingress 在端口 80 和 443 上提供服务。)以下是一个示例 `rke2.yaml`: ```yml apiVersion: v1 @@ -127,39 +127,18 @@ kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces ``` /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s -kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s -kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s -kube-system etcd-rke2-server-1 1/1 Running 0 2m13s -kube-system etcd-rke2-server-2 1/1 Running 0 87s -kube-system etcd-rke2-server-3 1/1 Running 0 56s -kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s -kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s -kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 2m17s -kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s -kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s -kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s -kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s -kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s -kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s -kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s -kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s -kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s -kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s -kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s -kube-system rke2-canal-b9lvm 2/2 Running 0 91s -kube-system rke2-canal-khwp2 2/2 Running 0 2m5s -kube-system rke2-canal-swfmq 2/2 Running 0 105s -kube-system 
rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s -kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s -kube-system rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s -kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s -kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system cloud-controller-manager-my-node-1 1/1 Running 0 5d +kube-system etcd-my-node-1 1/1 Running 0 5d +kube-system helm-install-traefik-crd-z8vsz 0/1 Completed 0 5d +kube-system helm-install-traefik-h6n2q 0/1 Completed 0 5d +kube-system kube-apiserver-my-node-1 1/1 Running 0 5d +kube-system kube-proxy-my-node-1 1/1 Running 0 5d +kube-system kube-scheduler-my-node-1 1/1 Running 0 5d +kube-system rke2-canal-2j4ls 2/2 Running 0 5d +kube-system rke2-coredns-rke2-coredns-5c6b4d5-8f2mz 1/1 Running 0 5d +kube-system rke2-metrics-server-587b78-v9q2s 1/1 Running 0 5d +kube-system traefik-64f54698-m9p2w 1/1 Running 0 2d ``` **结果**:你可通过使用 `kubectl` 访问集群,并且 RKE2 集群能成功运行。现在,你可以在集群上安装 Rancher Management Server。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 23e0ab21787..28d88c69a85 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.13/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.13/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -65,7 +65,7 @@ rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m 如果访问你配置的 Rancher FQDN 时没有显示 UI,请检查 Ingress Controller 日志以查看尝试访问 Rancher 时发生了什么: ``` -kubectl -n ingress-nginx logs -l app=ingress-nginx +kubectl -n traefik logs -l 
app=traefik ``` ## Leader 选举 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.14/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.14/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md index 9a391416f4b..172c342d867 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.14/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.14/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md @@ -9,7 +9,7 @@ title: Rancher Server Kubernetes 集群的问题排查 故障排除主要针对以下 3 个命名空间中的对象: - `cattle-system`:`rancher` deployment 和 Pod。 -- `ingress-nginx`:Ingress Controller Pod 和 services。 +- `traefik`:Ingress Controller Pod 和 services。 - `cert-manager`:`cert-manager` Pod。 ## "default backend - 404" @@ -117,14 +117,12 @@ Events: kubectl -n cattle-system describe ingress ``` -如果 Ingress 对象已就绪,但是 SSL 仍然无法正常工作,你的证书或密文的格式可能不正确。 +If it's ready and SSL is still not working, you may have a malformed certificate or secret. -这种情况下,请检查 nginx-ingress-controller 的日志。nginx-ingress-controller 的 Pod 中有多个容器,因此你需要指定容器的名称: +Check the `traefik` logs. ``` -kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller -...
-W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found +kubectl -n traefik logs -l app=traefik ``` ## 没有匹配的 "Issuer" @@ -144,7 +142,7 @@ Error: validation failed: unable to recognize "": no matches for kind "Issuer" i 解决网络问题后,`canal` Pod 会超时并重启以建立连接。 -## nginx-ingress-controller Pod 显示 RESTARTS +## Traefik Pod 显示 RESTARTS 此问题的最常见原因是 `canal` pod 未能建立覆盖网络。参见 [canal Pod 显示 READY `2/3`](#canal-pod-显示-ready-23) 进行排查。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.14/getting-started/installation-and-upgrade/installation-references/tls-settings.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.14/getting-started/installation-and-upgrade/installation-references/tls-settings.md index d8ad91b7d67..077e19eb77b 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.14/getting-started/installation-and-upgrade/installation-references/tls-settings.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.14/getting-started/installation-and-upgrade/installation-references/tls-settings.md @@ -6,10 +6,7 @@ title: TLS 设置 ## 在高可用 Kubernetes 集群中运行 Rancher -当你在 Kubernetes 集群内安装 Rancher 时,TLS 会在集群的 Ingress Controller 上卸载。可用的 TLS 设置取决于使用的 Ingress Controller: - -* nginx-ingress-controller(default RKE2 默认):[默认的 TLS 版本和密码](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers)。 - -* traefik(K3s 默认):[TLS 选项](https://doc.traefik.io/traefik/https/tls/#tls-options)。 +When you install Rancher in a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. Traefik is the default ingress controller for K3s and can also be used with RKE2. Refer to [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options) for more information.
## 在单个 Docker 容器中运行 Rancher diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.14/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.14/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md index 6486e4fc4ca..1759e77ad07 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.14/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.14/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md @@ -4,14 +4,12 @@ title: 设置 Amazon NLB 网络负载均衡器 本文介绍了如何在 Amazon EC2 服务中设置 Amazon NLB 网络负载均衡器,用于将流量转发到 EC2 上的多个实例中。 -这些示例中,负载均衡器将流量转发到三个 Rancher Server 节点。如果 Rancher 安装在 RKE Kubernetes 集群上,则需要三个节点。如果 Rancher 安装在 K3s Kubernetes 集群上,则只需要两个节点。 +这些示例中,负载均衡器将流量转发到三个 Rancher Server 节点。如果 Rancher 安装在 K3s Kubernetes 集群上,则只需要两个节点。 本文介绍的只是配置负载均衡器的其中一种方式。其他负载均衡器如传统负载路由器(Classic Load Balancer)和应用负载路由器(Application Load Balancer),也可以将流量转发到 Rancher Server 节点。 Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量,而不支持 `TLS` 模式。这试因为在 NLB 终止时,NLB 不会将正确的标头注入请求中。如果你想使用由 Amazon Certificate Manager (ACM) 托管的证书,请使用 ALB。 - - ## 要求 你已在 EC2 中创建了 Linux 实例。此外,负载均衡器会把流量转发到这些节点。 @@ -22,7 +20,7 @@ Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量, 配置 NLB 的第一个步骤是创建两个目标组。一般来说,只需要端口 443 就可以访问 Rancher。但是,由于端口 80 的流量会被自动重定向到端口 443,因此,你也可以为端口 80 也添加一个监听器。 -不管使用的是 NGINX Ingress 还是 Traefik Ingress Controller,Ingress 都应该将端口 80 的流量重定向到端口 443。以下为操作步骤: +The Traefik Ingress should redirect traffic from port 80 to port 443. 1. 登录到 [Amazon AWS 控制台](https://console.aws.amazon.com/ec2/)。确保选择的**区域**是你创建 EC2 实例 (Linux 节点)的区域。 1. 
选择**服务** > **EC2**,找到**负载均衡器**并打开**目标组**。 @@ -30,7 +28,7 @@ Rancher 仅支持使用 Amazon NLB 以 `TCP` 模式终止 443 端口的流量, :::note -不同 Ingress 的健康检查处理方法不同。详情请参阅[本节](#nginx-ingress-和-traefik-ingress-的健康检查路径)。 +For details on Traefik Ingress health checks, refer to [this section](#health-check-paths-for-traefik-ingresses). ::: @@ -163,13 +161,10 @@ AWS 完成 NLB 创建后,单击**关闭**。 6. 单击右上角的**保存**。 -## NGINX Ingress 和 Traefik Ingress 的健康检查路径 +## Health Check Paths for Traefik Ingresses -K3s 和 RKE Kubernetes 集群使用的默认 Ingress 不同,因此对应的健康检查方式也不同。 +K3s Kubernetes clusters use Traefik as the default Ingress. -RKE Kubernetes 集群默认使用 NGINX Ingress,而 K3s Kubernetes 集群默认使用 Traefik Ingress。 +The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **Traefik**:默认健康检查路径是 `/ping`。默认情况下,不管主机如何,`/ping` 总是匹配,而且 [Traefik 自身](https://docs.traefik.io/operations/ping/)总会响应。 -- **NGINX Ingress**:NGINX Ingress Controller 的默认后端有一个 `/healthz` 端点。默认情况下,不管主机如何,`/healthz` 总是匹配,而且 [`ingress-nginx` 自身](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212)总会响应。 - -想要精确模拟健康检查,最好是使用 Host 标头(Rancher hostname)加上 `/ping` 或 `/healthz`(分别对应 K3s 和 RKE 集群)来获取 Rancher Pod 的响应,而不是 Ingress 的响应。 +To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` wherever possible, to get a response from the Rancher Pods, not the Ingress.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.14/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.14/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md index cc61efd50e4..6b43021aae9 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.14/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.14/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md @@ -87,7 +87,7 @@ systemctl start rke2-server.service 1. 安装 Kubernetes 命令行工具 [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl)。 2. 复制 `/etc/rancher/rke2/rke2.yaml` 文件并保存到本地主机的 `~/.kube/config` 目录上。 -3. 在 kubeconfig 文件中,`server` 的参数为 localhost。在端口 6443 上将服务器配置为 controlplane 负载均衡器的 DNS(RKE2 Kubernetes API Server 使用端口 6443,而 Rancher Server 将通过 NGINX Ingress 在端口 80 和 443 上提供服务。)以下是一个示例 `rke2.yaml`: +3. 
在 kubeconfig 文件中,`server` 的参数为 localhost。在端口 6443 上将服务器配置为 controlplane 负载均衡器的 DNS(RKE2 Kubernetes API Server 使用端口 6443,而 Rancher Server 将通过 Traefik Ingress 在端口 80 和 443 上提供服务。)以下是一个示例 `rke2.yaml`: ```yml apiVersion: v1 @@ -127,39 +127,18 @@ kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces ``` /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s -kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s -kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s -kube-system etcd-rke2-server-1 1/1 Running 0 2m13s -kube-system etcd-rke2-server-2 1/1 Running 0 87s -kube-system etcd-rke2-server-3 1/1 Running 0 56s -kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s -kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s -kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 2m17s -kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s -kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s -kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s -kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s -kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s -kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s -kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s -kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s -kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s -kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s -kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s -kube-system rke2-canal-b9lvm 2/2 Running 0 91s -kube-system rke2-canal-khwp2 2/2 Running 0 2m5s -kube-system rke2-canal-swfmq 2/2 Running 0 105s -kube-system 
rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s -kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s -kube-system rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s -kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s -kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system cloud-controller-manager-my-node-1 1/1 Running 0 5d +kube-system etcd-my-node-1 1/1 Running 0 5d +kube-system helm-install-traefik-crd-z8vsz 0/1 Completed 0 5d +kube-system helm-install-traefik-h6n2q 0/1 Completed 0 5d +kube-system kube-apiserver-my-node-1 1/1 Running 0 5d +kube-system kube-proxy-my-node-1 1/1 Running 0 5d +kube-system kube-scheduler-my-node-1 1/1 Running 0 5d +kube-system rke2-canal-2j4ls 2/2 Running 0 5d +kube-system rke2-coredns-rke2-coredns-5c6b4d5-8f2mz 1/1 Running 0 5d +kube-system rke2-metrics-server-587b78-v9q2s 1/1 Running 0 5d +kube-system traefik-64f54698-m9p2w 1/1 Running 0 2d ``` **结果**:你可通过使用 `kubectl` 访问集群,并且 RKE2 集群能成功运行。现在,你可以在集群上安装 Rancher Management Server。 diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-2.14/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/i18n/zh/docusaurus-plugin-content-docs/version-2.14/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 23e0ab21787..28d88c69a85 100644 --- a/i18n/zh/docusaurus-plugin-content-docs/version-2.14/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/i18n/zh/docusaurus-plugin-content-docs/version-2.14/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -65,7 +65,7 @@ rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m 如果访问你配置的 Rancher FQDN 时没有显示 UI,请检查 Ingress Controller 日志以查看尝试访问 Rancher 时发生了什么: ``` -kubectl -n ingress-nginx logs -l app=ingress-nginx +kubectl -n traefik logs -l 
app=traefik ``` ## Leader 选举 diff --git a/versioned_docs/version-2.10/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md b/versioned_docs/version-2.10/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md index 4fa1f09c8f6..e3b73292da3 100644 --- a/versioned_docs/version-2.10/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md +++ b/versioned_docs/version-2.10/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md @@ -13,7 +13,7 @@ This section describes how to troubleshoot an installation of Rancher on a Kuber Most of the troubleshooting will be done on objects in these 3 namespaces. - `cattle-system` - `rancher` deployment and pods. -- `ingress-nginx` - Ingress controller pods and services. +- `traefik` - Ingress controller pods and services. - `cert-manager` - `cert-manager` pods. ### "default backend - 404" @@ -115,7 +115,7 @@ Events: Your certs get applied directly to the Ingress object in the `cattle-system` namespace. -Check the status of the Ingress object and see if its ready. +Check the status of the Ingress object and see if it's ready. ``` kubectl -n cattle-system describe ingress @@ -123,12 +123,10 @@ kubectl -n cattle-system describe ingress If its ready and the SSL is still not working you may have a malformed cert or secret. -Check the nginx-ingress-controller logs. Because the nginx-ingress-controller has multiple containers in its pod you will need to specify the name of the container. +Check the `traefik` logs. ``` -kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller -... 
-W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found +kubectl -n traefik logs -l app=traefik ``` ### No matches for kind "Issuer" @@ -148,7 +146,7 @@ The most common cause of this issue is port 8472/UDP is not open between the nod Once the network issue is resolved, the `canal` pods should timeout and restart to establish their connections. -### nginx-ingress-controller Pods show RESTARTS +### Traefik Pods show RESTARTS The most common cause of this issue is the `canal` pods have failed to establish the overlay network. See [canal Pods show READY `2/3`](#canal-pods-show-ready-23) for troubleshooting. diff --git a/versioned_docs/version-2.10/getting-started/installation-and-upgrade/installation-references/tls-settings.md b/versioned_docs/version-2.10/getting-started/installation-and-upgrade/installation-references/tls-settings.md index bbde2c61560..240897b429c 100644 --- a/versioned_docs/version-2.10/getting-started/installation-and-upgrade/installation-references/tls-settings.md +++ b/versioned_docs/version-2.10/getting-started/installation-and-upgrade/installation-references/tls-settings.md @@ -10,10 +10,7 @@ Changing the default TLS settings depends on the chosen installation method. ## Running Rancher in a highly available Kubernetes cluster -When you install Rancher inside of a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. The possible TLS settings depend on the used ingress controller: - -* nginx-ingress-controller (default for RKE1 and RKE2): [Default TLS Version and Ciphers](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers). -* traefik (default for K3s): [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options). +When you install Rancher in a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller.
Traefik is the default ingress controller for K3s and can also be used with RKE2. Refer to [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options) for more information. ## Running Rancher in a single Docker container diff --git a/versioned_docs/version-2.10/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md b/versioned_docs/version-2.10/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md index f0157a3f6e0..14b5116a3fe 100644 --- a/versioned_docs/version-2.10/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md +++ b/versioned_docs/version-2.10/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md @@ -8,14 +8,12 @@ title: Setting up Amazon ELB Network Load Balancer This how-to guide describes how to set up a Network Load Balancer (NLB) in Amazon's EC2 service that will direct traffic to multiple instances on EC2. -These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on an RKE Kubernetes cluster, three nodes are required. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required. +These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required. This tutorial is about one possible way to set up your load balancer, not the only way. Other types of load balancers, such as a Classic Load Balancer or Application Load Balancer, could also direct traffic to the Rancher server nodes. Rancher only supports using the Amazon NLB when terminating traffic in `tcp` mode for port 443 rather than `tls` mode. This is due to the fact that the NLB does not inject the correct headers into requests when terminated at the NLB. This means that if you want to use certificates managed by the Amazon Certificate Manager (ACM), you should use an ALB.
- - ## Requirements These instructions assume you have already created Linux instances in EC2. The load balancer will direct traffic to these nodes. @@ -26,7 +24,7 @@ Begin by creating two target groups for the **TCP** protocol, one with TCP port Your first NLB configuration step is to create two target groups. Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener for port 80, because traffic to port 80 will be automatically redirected to port 443. -Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, the Ingress should redirect traffic from port 80 to port 443. +The Traefik Ingress should redirect traffic from port 80 to port 443. 1. Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started. Make sure to select the **Region** where your EC2 instances (Linux nodes) are created. 1. Select **Services** and choose **EC2**, find the section **Load Balancing** and open **Target Groups**. @@ -34,7 +32,7 @@ Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, th :::note -Health checks are handled differently based on the Ingress. For details, refer to [this section.](#health-check-paths-for-nginx-ingress-and-traefik-ingresses) +For details on Traefik Ingress health checks, refer to [this section](#health-check-paths-for-traefik-ingresses). ::: @@ -167,13 +165,10 @@ After AWS creates the NLB, click **Close**. 6. Click **Save** in the top right of the screen. -## Health Check Paths for NGINX Ingress and Traefik Ingresses +## Health Check Paths for Traefik Ingresses -K3s and RKE Kubernetes clusters handle health checks differently because they use different Ingresses by default. +K3s Kubernetes clusters use Traefik as the default Ingress. -For RKE Kubernetes clusters, NGINX Ingress is used by default, whereas for K3s Kubernetes clusters, Traefik is the default Ingress. +The health check path is `/ping`.
By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **Traefik:** The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **NGINX Ingress:** The default backend of the NGINX Ingress controller has a `/healthz` endpoint. By default `/healthz` is always matched (regardless of Host), and a response from [`ingress-nginx` itself](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212) is always served. - -To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` or `/healthz` (for K3s or for RKE clusters, respectively) wherever possible, to get a response from the Rancher Pods, not the Ingress. +To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` wherever possible, to get a response from the Rancher Pods, not the Ingress. diff --git a/versioned_docs/version-2.10/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md b/versioned_docs/version-2.10/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md index 7a743f731a2..d9cdec4f64c 100644 --- a/versioned_docs/version-2.10/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md +++ b/versioned_docs/version-2.10/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md @@ -91,7 +91,7 @@ To use this `kubeconfig` file, 1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool. 2. Copy the file at `/etc/rancher/rke2/rke2.yaml` and save it to the directory `~/.kube/config` on your local machine. -3.
In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the NGINX Ingress on ports 80 and 443.) Here is an example `rke2.yaml`: +3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the Traefik Ingress on ports 80 and 443.) Here is an example `rke2.yaml`: ```yml apiVersion: v1 @@ -131,39 +131,18 @@ Check that all the required pods and containers are healthy are ready to continu ``` /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s -kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s -kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s -kube-system etcd-rke2-server-1 1/1 Running 0 2m13s -kube-system etcd-rke2-server-2 1/1 Running 0 87s -kube-system etcd-rke2-server-3 1/1 Running 0 56s -kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s -kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s -kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 2m17s -kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s -kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s -kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s -kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s -kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s -kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s -kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s 
-kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s -kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s -kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s -kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s -kube-system rke2-canal-b9lvm 2/2 Running 0 91s -kube-system rke2-canal-khwp2 2/2 Running 0 2m5s -kube-system rke2-canal-swfmq 2/2 Running 0 105s -kube-system rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s -kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s -kube-system rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s -kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s -kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system cloud-controller-manager-my-node-1 1/1 Running 0 5d +kube-system etcd-my-node-1 1/1 Running 0 5d +kube-system helm-install-traefik-crd-z8vsz 0/1 Completed 0 5d +kube-system helm-install-traefik-h6n2q 0/1 Completed 0 5d +kube-system kube-apiserver-my-node-1 1/1 Running 0 5d +kube-system kube-proxy-my-node-1 1/1 Running 0 5d +kube-system kube-scheduler-my-node-1 1/1 Running 0 5d +kube-system rke2-canal-2j4ls 2/2 Running 0 5d +kube-system rke2-coredns-rke2-coredns-5c6b4d5-8f2mz 1/1 Running 0 5d +kube-system rke2-metrics-server-587b78-v9q2s 1/1 Running 0 5d +kube-system traefik-64f54698-m9p2w 1/1 Running 0 2d ``` **Result:** You have confirmed that you can access the cluster with `kubectl` and the RKE2 cluster is running successfully. Now the Rancher management server can be installed on the cluster. 
diff --git a/versioned_docs/version-2.10/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/versioned_docs/version-2.10/troubleshooting/other-troubleshooting-tips/rancher-ha.md
index 25845cdc87d..854010e4871 100644
--- a/versioned_docs/version-2.10/troubleshooting/other-troubleshooting-tips/rancher-ha.md
+++ b/versioned_docs/version-2.10/troubleshooting/other-troubleshooting-tips/rancher-ha.md
@@ -69,7 +69,7 @@ rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m
 When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher:
 
 ```
-kubectl -n ingress-nginx logs -l app=ingress-nginx
+kubectl -n traefik logs -l app=traefik
 ```
 
 ## Leader Election
diff --git a/versioned_docs/version-2.11/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md b/versioned_docs/version-2.11/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md
index 4fa1f09c8f6..e3b73292da3 100644
--- a/versioned_docs/version-2.11/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md
+++ b/versioned_docs/version-2.11/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md
@@ -13,7 +13,7 @@ This section describes how to troubleshoot an installation of Rancher on a Kuber
 Most of the troubleshooting will be done on objects in these 3 namespaces.
 
 - `cattle-system` - `rancher` deployment and pods.
-- `ingress-nginx` - Ingress controller pods and services.
+- `traefik` - Ingress controller pods and services.
 - `cert-manager` - `cert-manager` pods.
 
 ### "default backend - 404"
@@ -115,7 +115,7 @@ Events:
 
 Your certs get applied directly to the Ingress object in the `cattle-system` namespace.
 
-Check the status of the Ingress object and see if its ready.
+Check the status of the Ingress object and see if it's ready.
 ```
 kubectl -n cattle-system describe ingress
@@ -123,12 +123,10 @@ kubectl -n cattle-system describe ingress
 
 If its ready and the SSL is still not working you may have a malformed cert or secret.
 
-Check the nginx-ingress-controller logs. Because the nginx-ingress-controller has multiple containers in its pod you will need to specify the name of the container.
+Check the `traefik` logs.
 
 ```
-kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller
-...
-W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found
+kubectl -n traefik logs -l app=traefik
 ```
 
 ### No matches for kind "Issuer"
@@ -148,7 +146,7 @@ The most common cause of this issue is port 8472/UDP is not open between the nod
 
 Once the network issue is resolved, the `canal` pods should timeout and restart to establish their connections.
 
-### nginx-ingress-controller Pods show RESTARTS
+### Traefik Pods show RESTARTS
 
 The most common cause of this issue is the `canal` pods have failed to establish the overlay network. See [canal Pods show READY `2/3`](#canal-pods-show-ready-23) for troubleshooting.
diff --git a/versioned_docs/version-2.11/getting-started/installation-and-upgrade/installation-references/tls-settings.md b/versioned_docs/version-2.11/getting-started/installation-and-upgrade/installation-references/tls-settings.md
index bbde2c61560..240897b429c 100644
--- a/versioned_docs/version-2.11/getting-started/installation-and-upgrade/installation-references/tls-settings.md
+++ b/versioned_docs/version-2.11/getting-started/installation-and-upgrade/installation-references/tls-settings.md
@@ -10,10 +10,7 @@ Changing the default TLS settings depends on the chosen installation method.
 ## Running Rancher in a highly available Kubernetes cluster
 
-When you install Rancher inside of a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. The possible TLS settings depend on the used ingress controller:
-
-* nginx-ingress-controller (default for RKE1 and RKE2): [Default TLS Version and Ciphers](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers).
-* traefik (default for K3s): [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options).
+When you install Rancher in a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. Traefik is the default ingress controller for K3s and can also be used with RKE2; refer to [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options) for further information.
 
 ## Running Rancher in a single Docker container
diff --git a/versioned_docs/version-2.11/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md b/versioned_docs/version-2.11/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md
index f0157a3f6e0..14b5116a3fe 100644
--- a/versioned_docs/version-2.11/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md
+++ b/versioned_docs/version-2.11/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md
@@ -8,14 +8,12 @@ title: Setting up Amazon ELB Network Load Balancer
 
 This how-to guide describes how to set up a Network Load Balancer (NLB) in Amazon's EC2 service that will direct traffic to multiple instances on EC2.
 
-These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on an RKE Kubernetes cluster, three nodes are required. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required.
+These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required.
 
 This tutorial is about one possible way to set up your load balancer, not the only way. Other types of load balancers, such as a Classic Load Balancer or Application Load Balancer, could also direct traffic to the Rancher server nodes.
 
 Rancher only supports using the Amazon NLB when terminating traffic in `tcp` mode for port 443 rather than `tls` mode. This is due to the fact that the NLB does not inject the correct headers into requests when terminated at the NLB. This means that if you want to use certificates managed by the Amazon Certificate Manager (ACM), you should use an ALB.
-
-
 
 ## Requirements
 
 These instructions assume you have already created Linux instances in EC2. The load balancer will direct traffic to these nodes.
@@ -26,7 +24,7 @@ Begin by creating two target groups for the **TCP** protocol, one with TCP port
 
 Your first NLB configuration step is to create two target groups. Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener for port 80, because traffic to port 80 will be automatically redirected to port 443.
 
-Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, the Ingress should redirect traffic from port 80 to port 443.
+The Traefik Ingress should redirect traffic from port 80 to port 443.
 
 1. Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started. Make sure to select the **Region** where your EC2 instances (Linux nodes) are created.
 1. Select **Services** and choose **EC2**, find the section **Load Balancing** and open **Target Groups**.
@@ -34,7 +32,7 @@ Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, th
 
 :::note
 
-Health checks are handled differently based on the Ingress. For details, refer to [this section.](#health-check-paths-for-nginx-ingress-and-traefik-ingresses)
+For details on Traefik Ingress health checks, refer to [this section.](#health-check-paths-for-traefik-ingresses)
 
 :::
 
@@ -167,13 +165,10 @@ After AWS creates the NLB, click **Close**.
 
 6. Click **Save** in the top right of the screen.
 
-## Health Check Paths for NGINX Ingress and Traefik Ingresses
+## Health Check Paths for Traefik Ingresses
 
-K3s and RKE Kubernetes clusters handle health checks differently because they use different Ingresses by default.
+K3s Kubernetes clusters use Traefik as the default Ingress.
 
-For RKE Kubernetes clusters, NGINX Ingress is used by default, whereas for K3s Kubernetes clusters, Traefik is the default Ingress.
+The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served.
 
-- **Traefik:** The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served.
-- **NGINX Ingress:** The default backend of the NGINX Ingress controller has a `/healthz` endpoint. By default `/healthz` is always matched (regardless of Host), and a response from [`ingress-nginx` itself](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212) is always served.
-
-To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` or `/healthz` (for K3s or for RKE clusters, respectively) wherever possible, to get a response from the Rancher Pods, not the Ingress.
+To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` wherever possible, to get a response from the Rancher Pods, not the Ingress.
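The health-check guidance above can be turned into a one-off probe from any machine that can reach a node. A sketch only — `rancher.example.com` and `203.0.113.10` are placeholders, not values from these docs — and the function merely prints the `curl` invocation so it can be reviewed before being run against a real node:

```shell
# Print (without executing) the curl command that simulates the NLB health
# check: request /ping with the Rancher hostname as the Host header, so the
# reply comes from the Rancher Pods rather than Traefik's built-in ping
# handler. $1 is the Rancher hostname, $2 a node IP (both placeholders).
health_check_cmd() {
  printf "curl -sk -o /dev/null -w '%%{http_code}' -H 'Host: %s' https://%s/ping\n" "$1" "$2"
}

health_check_cmd rancher.example.com 203.0.113.10
```

Copy the printed command and run it; an HTTP 200 with the Host header set suggests the Rancher Pods behind the Ingress answered, which is the "accurate health check" the section describes.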
diff --git a/versioned_docs/version-2.11/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md b/versioned_docs/version-2.11/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md
index 7a743f731a2..d9cdec4f64c 100644
--- a/versioned_docs/version-2.11/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md
+++ b/versioned_docs/version-2.11/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md
@@ -91,7 +91,7 @@ To use this `kubeconfig` file,
 
 1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool.
 2. Copy the file at `/etc/rancher/rke2/rke2.yaml` and save it to the directory `~/.kube/config` on your local machine.
-3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the NGINX Ingress on ports 80 and 443.) Here is an example `rke2.yaml`:
+3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the Traefik Ingress on ports 80 and 443.) Here is an example `rke2.yaml`:
 
 ```yml
 apiVersion: v1
@@ -131,39 +131,18 @@ Check that all the required pods and containers are healthy are ready to continu
 
 ```
 /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A
-NAMESPACE NAME READY STATUS RESTARTS AGE
-kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s
-kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s
-kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s
-kube-system etcd-rke2-server-1 1/1 Running 0 2m13s
-kube-system etcd-rke2-server-2 1/1 Running 0 87s
-kube-system etcd-rke2-server-3 1/1 Running 0 56s
-kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s
-kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s
-kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 2m17s
-kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s
-kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s
-kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s
-kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s
-kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s
-kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s
-kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s
-kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s
-kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s
-kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s
-kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s
-kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s
-kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s
-kube-system rke2-canal-b9lvm 2/2 Running 0 91s
-kube-system rke2-canal-khwp2 2/2 Running 0 2m5s
-kube-system rke2-canal-swfmq 2/2 Running 0 105s
-kube-system rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s
-kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s
-kube-system rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s
-kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s
-kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s
-kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s
-kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s
+NAMESPACE NAME READY STATUS RESTARTS AGE
+kube-system cloud-controller-manager-my-node-1 1/1 Running 0 5d
+kube-system etcd-my-node-1 1/1 Running 0 5d
+kube-system helm-install-traefik-crd-z8vsz 0/1 Completed 0 5d
+kube-system helm-install-traefik-h6n2q 0/1 Completed 0 5d
+kube-system kube-apiserver-my-node-1 1/1 Running 0 5d
+kube-system kube-proxy-my-node-1 1/1 Running 0 5d
+kube-system kube-scheduler-my-node-1 1/1 Running 0 5d
+kube-system rke2-canal-2j4ls 2/2 Running 0 5d
+kube-system rke2-coredns-rke2-coredns-5c6b4d5-8f2mz 1/1 Running 0 5d
+kube-system rke2-metrics-server-587b78-v9q2s 1/1 Running 0 5d
+kube-system traefik-64f54698-m9p2w 1/1 Running 0 2d
 ```
 
 **Result:** You have confirmed that you can access the cluster with `kubectl` and the RKE2 cluster is running successfully. Now the Rancher management server can be installed on the cluster.
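Step 3 of the kubeconfig hunk above can also be scripted. A sketch, where `my-lb.example.com` is a placeholder for your control-plane load balancer's DNS name (not a value from these docs); the here-string stands in for the copied `rke2.yaml`:

```shell
# Rewrite the `server:` directive of a copied RKE2 kubeconfig so it points
# at the control-plane load balancer on port 6443 instead of localhost.
rewrite_server() {
  sed "s|server: https://127.0.0.1:6443|server: https://$1:6443|"
}

# Sample input standing in for the relevant line of ~/.kube/config.
printf '%s\n' 'server: https://127.0.0.1:6443' | rewrite_server my-lb.example.com
```

Against the real file this would be `rewrite_server my-lb.example.com < /etc/rancher/rke2/rke2.yaml > ~/.kube/config`, matching the copy step described in the hunk.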
diff --git a/versioned_docs/version-2.11/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/versioned_docs/version-2.11/troubleshooting/other-troubleshooting-tips/rancher-ha.md
index 25845cdc87d..854010e4871 100644
--- a/versioned_docs/version-2.11/troubleshooting/other-troubleshooting-tips/rancher-ha.md
+++ b/versioned_docs/version-2.11/troubleshooting/other-troubleshooting-tips/rancher-ha.md
@@ -69,7 +69,7 @@ rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m
 When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher:
 
 ```
-kubectl -n ingress-nginx logs -l app=ingress-nginx
+kubectl -n traefik logs -l app=traefik
 ```
 
 ## Leader Election
diff --git a/versioned_docs/version-2.12/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md b/versioned_docs/version-2.12/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md
index 881a6d871a0..b35276f8c5a 100644
--- a/versioned_docs/version-2.12/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md
+++ b/versioned_docs/version-2.12/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md
@@ -13,7 +13,7 @@ This section describes how to troubleshoot an installation of Rancher on a Kuber
 Most of the troubleshooting will be done on objects in these 3 namespaces.
 
 - `cattle-system` - `rancher` deployment and pods.
-- `ingress-nginx` - Ingress controller pods and services.
+- `traefik` - Ingress controller pods and services.
 - `cert-manager` - `cert-manager` pods.
 
 ### "default backend - 404"
@@ -115,7 +115,7 @@ Events:
 
 Your certs get applied directly to the Ingress object in the `cattle-system` namespace.
 
-Check the status of the Ingress object and see if its ready.
+Check the status of the Ingress object and see if it's ready.
 ```
 kubectl -n cattle-system describe ingress
@@ -123,12 +123,10 @@ kubectl -n cattle-system describe ingress
 
 If its ready and the SSL is still not working you may have a malformed cert or secret.
 
-Check the nginx-ingress-controller logs. Because the nginx-ingress-controller has multiple containers in its pod you will need to specify the name of the container.
+Check the `traefik` logs.
 
 ```
-kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller
-...
-W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found
+kubectl -n traefik logs -l app=traefik
 ```
 
 ### No matches for kind "Issuer"
@@ -148,7 +146,7 @@ The most common cause of this issue is port 8472/UDP is not open between the nod
 
 Once the network issue is resolved, the `canal` pods should timeout and restart to establish their connections.
 
-### nginx-ingress-controller Pods show RESTARTS
+### Traefik Pods show RESTARTS
 
 The most common cause of this issue is the `canal` pods have failed to establish the overlay network. See [canal Pods show READY `2/3`](#canal-pods-show-ready-23) for troubleshooting.
diff --git a/versioned_docs/version-2.12/getting-started/installation-and-upgrade/installation-references/tls-settings.md b/versioned_docs/version-2.12/getting-started/installation-and-upgrade/installation-references/tls-settings.md
index 3e6c745857c..240897b429c 100644
--- a/versioned_docs/version-2.12/getting-started/installation-and-upgrade/installation-references/tls-settings.md
+++ b/versioned_docs/version-2.12/getting-started/installation-and-upgrade/installation-references/tls-settings.md
@@ -10,10 +10,7 @@ Changing the default TLS settings depends on the chosen installation method.
 ## Running Rancher in a highly available Kubernetes cluster
 
-When you install Rancher inside of a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. The possible TLS settings depend on the used ingress controller:
-
-* nginx-ingress-controller (default for RKE2): [Default TLS Version and Ciphers](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers).
-* traefik (default for K3s): [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options).
+When you install Rancher in a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. Traefik is the default ingress controller for K3s and can also be used with RKE2; refer to [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options) for further information.
 
 ## Running Rancher in a single Docker container
diff --git a/versioned_docs/version-2.12/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md b/versioned_docs/version-2.12/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md
index f0157a3f6e0..14b5116a3fe 100644
--- a/versioned_docs/version-2.12/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md
+++ b/versioned_docs/version-2.12/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md
@@ -8,14 +8,12 @@ title: Setting up Amazon ELB Network Load Balancer
 
 This how-to guide describes how to set up a Network Load Balancer (NLB) in Amazon's EC2 service that will direct traffic to multiple instances on EC2.
 
-These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on an RKE Kubernetes cluster, three nodes are required. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required.
+These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required.
 
 This tutorial is about one possible way to set up your load balancer, not the only way. Other types of load balancers, such as a Classic Load Balancer or Application Load Balancer, could also direct traffic to the Rancher server nodes.
 
 Rancher only supports using the Amazon NLB when terminating traffic in `tcp` mode for port 443 rather than `tls` mode. This is due to the fact that the NLB does not inject the correct headers into requests when terminated at the NLB. This means that if you want to use certificates managed by the Amazon Certificate Manager (ACM), you should use an ALB.
-
-
 
 ## Requirements
 
 These instructions assume you have already created Linux instances in EC2. The load balancer will direct traffic to these nodes.
@@ -26,7 +24,7 @@ Begin by creating two target groups for the **TCP** protocol, one with TCP port
 
 Your first NLB configuration step is to create two target groups. Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener for port 80, because traffic to port 80 will be automatically redirected to port 443.
 
-Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, the Ingress should redirect traffic from port 80 to port 443.
+The Traefik Ingress should redirect traffic from port 80 to port 443.
 
 1. Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started. Make sure to select the **Region** where your EC2 instances (Linux nodes) are created.
 1. Select **Services** and choose **EC2**, find the section **Load Balancing** and open **Target Groups**.
@@ -34,7 +32,7 @@ Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, th
 
 :::note
 
-Health checks are handled differently based on the Ingress. For details, refer to [this section.](#health-check-paths-for-nginx-ingress-and-traefik-ingresses)
+For details on Traefik Ingress health checks, refer to [this section.](#health-check-paths-for-traefik-ingresses)
 
 :::
 
@@ -167,13 +165,10 @@ After AWS creates the NLB, click **Close**.
 
 6. Click **Save** in the top right of the screen.
 
-## Health Check Paths for NGINX Ingress and Traefik Ingresses
+## Health Check Paths for Traefik Ingresses
 
-K3s and RKE Kubernetes clusters handle health checks differently because they use different Ingresses by default.
+K3s Kubernetes clusters use Traefik as the default Ingress.
 
-For RKE Kubernetes clusters, NGINX Ingress is used by default, whereas for K3s Kubernetes clusters, Traefik is the default Ingress.
+The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served.
 
-- **Traefik:** The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served.
-- **NGINX Ingress:** The default backend of the NGINX Ingress controller has a `/healthz` endpoint. By default `/healthz` is always matched (regardless of Host), and a response from [`ingress-nginx` itself](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212) is always served.
-
-To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` or `/healthz` (for K3s or for RKE clusters, respectively) wherever possible, to get a response from the Rancher Pods, not the Ingress.
+To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` wherever possible, to get a response from the Rancher Pods, not the Ingress.
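The tls-settings.md hunks above point Traefik users at its TLS options for tuning what the ingress controller terminates. As an illustrative sketch only — the resource name, namespace, and cipher list below are assumptions, not values from these docs — raising the minimum TLS version on K3s's bundled Traefik could look like:

```yml
apiVersion: traefik.io/v1alpha1
kind: TLSOption
metadata:
  name: default          # "default" applies to routers that don't name an option
  namespace: kube-system # assumed location of the bundled Traefik deployment
spec:
  minVersion: VersionTLS12
  cipherSuites:
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    - TLS_AES_128_GCM_SHA256
```

Because the option is named `default`, Traefik applies it to TLS routers that do not explicitly reference another `TLSOption`; see the linked TLS Options page for the supported fields.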
diff --git a/versioned_docs/version-2.12/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md b/versioned_docs/version-2.12/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md
index 7a743f731a2..d9cdec4f64c 100644
--- a/versioned_docs/version-2.12/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md
+++ b/versioned_docs/version-2.12/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md
@@ -91,7 +91,7 @@ To use this `kubeconfig` file,
 
 1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool.
 2. Copy the file at `/etc/rancher/rke2/rke2.yaml` and save it to the directory `~/.kube/config` on your local machine.
-3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the NGINX Ingress on ports 80 and 443.) Here is an example `rke2.yaml`:
+3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the Traefik Ingress on ports 80 and 443.) Here is an example `rke2.yaml`:
 
 ```yml
 apiVersion: v1
@@ -131,39 +131,18 @@ Check that all the required pods and containers are healthy are ready to continu
 
 ```
 /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A
-NAMESPACE NAME READY STATUS RESTARTS AGE
-kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s
-kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s
-kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s
-kube-system etcd-rke2-server-1 1/1 Running 0 2m13s
-kube-system etcd-rke2-server-2 1/1 Running 0 87s
-kube-system etcd-rke2-server-3 1/1 Running 0 56s
-kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s
-kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s
-kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 2m17s
-kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s
-kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s
-kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s
-kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s
-kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s
-kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s
-kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s
-kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s
-kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s
-kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s
-kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s
-kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s
-kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s
-kube-system rke2-canal-b9lvm 2/2 Running 0 91s
-kube-system rke2-canal-khwp2 2/2 Running 0 2m5s
-kube-system rke2-canal-swfmq 2/2 Running 0 105s
-kube-system rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s
-kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s
-kube-system rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s
-kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s
-kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s
-kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s
-kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s
+NAMESPACE NAME READY STATUS RESTARTS AGE
+kube-system cloud-controller-manager-my-node-1 1/1 Running 0 5d
+kube-system etcd-my-node-1 1/1 Running 0 5d
+kube-system helm-install-traefik-crd-z8vsz 0/1 Completed 0 5d
+kube-system helm-install-traefik-h6n2q 0/1 Completed 0 5d
+kube-system kube-apiserver-my-node-1 1/1 Running 0 5d
+kube-system kube-proxy-my-node-1 1/1 Running 0 5d
+kube-system kube-scheduler-my-node-1 1/1 Running 0 5d
+kube-system rke2-canal-2j4ls 2/2 Running 0 5d
+kube-system rke2-coredns-rke2-coredns-5c6b4d5-8f2mz 1/1 Running 0 5d
+kube-system rke2-metrics-server-587b78-v9q2s 1/1 Running 0 5d
+kube-system traefik-64f54698-m9p2w 1/1 Running 0 2d
 ```
 
 **Result:** You have confirmed that you can access the cluster with `kubectl` and the RKE2 cluster is running successfully. Now the Rancher management server can be installed on the cluster.
diff --git a/versioned_docs/version-2.12/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/versioned_docs/version-2.12/troubleshooting/other-troubleshooting-tips/rancher-ha.md
index 25845cdc87d..854010e4871 100644
--- a/versioned_docs/version-2.12/troubleshooting/other-troubleshooting-tips/rancher-ha.md
+++ b/versioned_docs/version-2.12/troubleshooting/other-troubleshooting-tips/rancher-ha.md
@@ -69,7 +69,7 @@ rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m
 When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher:
 
 ```
-kubectl -n ingress-nginx logs -l app=ingress-nginx
+kubectl -n traefik logs -l app=traefik
 ```
 
 ## Leader Election
diff --git a/versioned_docs/version-2.13/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md b/versioned_docs/version-2.13/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md
index 881a6d871a0..b35276f8c5a 100644
--- a/versioned_docs/version-2.13/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md
+++ b/versioned_docs/version-2.13/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md
@@ -13,7 +13,7 @@ This section describes how to troubleshoot an installation of Rancher on a Kuber
 Most of the troubleshooting will be done on objects in these 3 namespaces.
 
 - `cattle-system` - `rancher` deployment and pods.
-- `ingress-nginx` - Ingress controller pods and services.
+- `traefik` - Ingress controller pods and services.
 - `cert-manager` - `cert-manager` pods.
 
 ### "default backend - 404"
@@ -115,7 +115,7 @@ Events:
 
 Your certs get applied directly to the Ingress object in the `cattle-system` namespace.
 
-Check the status of the Ingress object and see if its ready.
+Check the status of the Ingress object and see if it's ready.
 ```
 kubectl -n cattle-system describe ingress
@@ -123,12 +123,10 @@ kubectl -n cattle-system describe ingress
 
 If its ready and the SSL is still not working you may have a malformed cert or secret.
 
-Check the nginx-ingress-controller logs. Because the nginx-ingress-controller has multiple containers in its pod you will need to specify the name of the container.
+Check the `traefik` logs.
 
 ```
-kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller
-...
-W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found
+kubectl -n traefik logs -l app=traefik
 ```
 
 ### No matches for kind "Issuer"
@@ -148,7 +146,7 @@ The most common cause of this issue is port 8472/UDP is not open between the nod
 
 Once the network issue is resolved, the `canal` pods should timeout and restart to establish their connections.
 
-### nginx-ingress-controller Pods show RESTARTS
+### Traefik Pods show RESTARTS
 
 The most common cause of this issue is the `canal` pods have failed to establish the overlay network. See [canal Pods show READY `2/3`](#canal-pods-show-ready-23) for troubleshooting.
diff --git a/versioned_docs/version-2.13/getting-started/installation-and-upgrade/installation-references/tls-settings.md b/versioned_docs/version-2.13/getting-started/installation-and-upgrade/installation-references/tls-settings.md
index 3e6c745857c..240897b429c 100644
--- a/versioned_docs/version-2.13/getting-started/installation-and-upgrade/installation-references/tls-settings.md
+++ b/versioned_docs/version-2.13/getting-started/installation-and-upgrade/installation-references/tls-settings.md
@@ -10,10 +10,7 @@ Changing the default TLS settings depends on the chosen installation method.
## Running Rancher in a highly available Kubernetes cluster -When you install Rancher inside of a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. The possible TLS settings depend on the used ingress controller: - -* nginx-ingress-controller (default for RKE2): [Default TLS Version and Ciphers](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers). -* traefik (default for K3s): [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options). +When you install Rancher in a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. Traefik is the default ingress controller for K3s and can also be used with RKE2. Refer to [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options) for further information. ## Running Rancher in a single Docker container diff --git a/versioned_docs/version-2.13/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md b/versioned_docs/version-2.13/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md index f0157a3f6e0..14b5116a3fe 100644 --- a/versioned_docs/version-2.13/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md +++ b/versioned_docs/version-2.13/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md @@ -8,14 +8,12 @@ title: Setting up Amazon ELB Network Load Balancer This how-to guide describes how to set up a Network Load Balancer (NLB) in Amazon's EC2 service that will direct traffic to multiple instances on EC2. -These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on an RKE Kubernetes cluster, three nodes are required. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required. +These examples show the load balancer being configured to direct traffic to three Rancher server nodes.
If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required. This tutorial is about one possible way to set up your load balancer, not the only way. Other types of load balancers, such as a Classic Load Balancer or Application Load Balancer, could also direct traffic to the Rancher server nodes. Rancher only supports using the Amazon NLB when terminating traffic in `tcp` mode for port 443 rather than `tls` mode. This is due to the fact that the NLB does not inject the correct headers into requests when terminated at the NLB. This means that if you want to use certificates managed by the Amazon Certificate Manager (ACM), you should use an ALB. - - ## Requirements These instructions assume you have already created Linux instances in EC2. The load balancer will direct traffic to these nodes. @@ -26,7 +24,7 @@ Begin by creating two target groups for the **TCP** protocol, one with TCP port Your first NLB configuration step is to create two target groups. Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener for port 80, because traffic to port 80 will be automatically redirected to port 443. -Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, the Ingress should redirect traffic from port 80 to port 443. +The Traefik Ingress should redirect traffic from port 80 to port 443. 1. Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started. Make sure to select the **Region** where your EC2 instances (Linux nodes) are created. 1. Select **Services** and choose **EC2**, find the section **Load Balancing** and open **Target Groups**. @@ -34,7 +32,7 @@ Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, th :::note -Health checks are handled differently based on the Ingress. 
For details, refer to [this section.](#health-check-paths-for-nginx-ingress-and-traefik-ingresses) +For details on Traefik Ingress health checks, refer to [this section.](#health-check-paths-for-traefik-ingresses) ::: @@ -167,13 +165,10 @@ After AWS creates the NLB, click **Close**. 6. Click **Save** in the top right of the screen. -## Health Check Paths for NGINX Ingress and Traefik Ingresses +## Health Check Paths for Traefik Ingresses -K3s and RKE Kubernetes clusters handle health checks differently because they use different Ingresses by default. +K3s Kubernetes clusters use Traefik as the default Ingress. -For RKE Kubernetes clusters, NGINX Ingress is used by default, whereas for K3s Kubernetes clusters, Traefik is the default Ingress. +The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **Traefik:** The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **NGINX Ingress:** The default backend of the NGINX Ingress controller has a `/healthz` endpoint. By default `/healthz` is always matched (regardless of Host), and a response from [`ingress-nginx` itself](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212) is always served. - -To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` or `/healthz` (for K3s or for RKE clusters, respectively) wherever possible, to get a response from the Rancher Pods, not the Ingress. +To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` wherever possible, to get a response from the Rancher Pods, not the Ingress.
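The Host-header best practice above can be sketched as a small probe assembled on the command line. `rancher.example.com` and `203.0.113.10` are placeholder assumptions, not values from this patch; substitute your own Rancher hostname and node IP:

```shell
# Placeholders (assumptions): your Rancher hostname and one backend node IP.
RANCHER_HOSTNAME="rancher.example.com"
NODE_IP="203.0.113.10"

# The Host header makes the Rancher Pods answer instead of Traefik's own /ping.
PROBE="curl -sk -o /dev/null -w '%{http_code}' -H 'Host: ${RANCHER_HOSTNAME}' https://${NODE_IP}/ping"
echo "${PROBE}"
```

Running the printed command against a live node should report the HTTP status of the Rancher health endpoint rather than of the Ingress.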
diff --git a/versioned_docs/version-2.13/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md b/versioned_docs/version-2.13/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md index 7a743f731a2..d9cdec4f64c 100644 --- a/versioned_docs/version-2.13/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md +++ b/versioned_docs/version-2.13/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md @@ -91,7 +91,7 @@ To use this `kubeconfig` file, 1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool. 2. Copy the file at `/etc/rancher/rke2/rke2.yaml` and save it to the directory `~/.kube/config` on your local machine. -3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the NGINX Ingress on ports 80 and 443.) Here is an example `rke2.yaml`: +3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the Traefik Ingress on ports 80 and 443.) 
Here is an example `rke2.yaml`: ```yml apiVersion: v1 @@ -131,39 +131,18 @@ Check that all the required pods and containers are healthy are ready to continu ``` /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s -kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s -kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s -kube-system etcd-rke2-server-1 1/1 Running 0 2m13s -kube-system etcd-rke2-server-2 1/1 Running 0 87s -kube-system etcd-rke2-server-3 1/1 Running 0 56s -kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s -kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s -kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 2m17s -kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s -kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s -kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s -kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s -kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s -kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s -kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s -kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s -kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s -kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s -kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s -kube-system rke2-canal-b9lvm 2/2 Running 0 91s -kube-system rke2-canal-khwp2 2/2 Running 0 2m5s -kube-system rke2-canal-swfmq 2/2 Running 0 105s -kube-system rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s -kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s -kube-system 
rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s -kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s -kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system cloud-controller-manager-my-node-1 1/1 Running 0 5d +kube-system etcd-my-node-1 1/1 Running 0 5d +kube-system helm-install-traefik-crd-z8vsz 0/1 Completed 0 5d +kube-system helm-install-traefik-h6n2q 0/1 Completed 0 5d +kube-system kube-apiserver-my-node-1 1/1 Running 0 5d +kube-system kube-proxy-my-node-1 1/1 Running 0 5d +kube-system kube-scheduler-my-node-1 1/1 Running 0 5d +kube-system rke2-canal-2j4ls 2/2 Running 0 5d +kube-system rke2-coredns-rke2-coredns-5c6b4d5-8f2mz 1/1 Running 0 5d +kube-system rke2-metrics-server-587b78-v9q2s 1/1 Running 0 5d +kube-system traefik-64f54698-m9p2w 1/1 Running 0 2d ``` **Result:** You have confirmed that you can access the cluster with `kubectl` and the RKE2 cluster is running successfully. Now the Rancher management server can be installed on the cluster. 
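Step 3 of the kubeconfig instructions above edits the `server` directive by hand; the same change can be scripted. A minimal sketch, assuming a local copy named `rke2.yaml` and using `my-kubernetes-domain.com` as a placeholder for the control-plane load balancer DNS name:

```shell
# Stand-in for the copied /etc/rancher/rke2/rke2.yaml (a real file also
# carries certificate data and user entries).
cat > rke2.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Rewrite the server directive to point at the load balancer on port 6443.
sed -i 's|https://127.0.0.1:6443|https://my-kubernetes-domain.com:6443|' rke2.yaml

grep 'server:' rke2.yaml
```

This assumes GNU `sed`; on macOS, use `sed -i ''` instead of `sed -i`.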
diff --git a/versioned_docs/version-2.13/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/versioned_docs/version-2.13/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 25845cdc87d..854010e4871 100644 --- a/versioned_docs/version-2.13/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/versioned_docs/version-2.13/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -69,7 +69,7 @@ rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher: ``` -kubectl -n ingress-nginx logs -l app=ingress-nginx +kubectl -n traefik logs -l app=traefik ``` ## Leader Election diff --git a/versioned_docs/version-2.14/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md b/versioned_docs/version-2.14/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md index 881a6d871a0..b35276f8c5a 100644 --- a/versioned_docs/version-2.14/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md +++ b/versioned_docs/version-2.14/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md @@ -13,7 +13,7 @@ This section describes how to troubleshoot an installation of Rancher on a Kuber Most of the troubleshooting will be done on objects in these 3 namespaces. - `cattle-system` - `rancher` deployment and pods. -- `ingress-nginx` - Ingress controller pods and services. +- `traefik` - Ingress controller pods and services. - `cert-manager` - `cert-manager` pods. ### "default backend - 404" @@ -115,7 +115,7 @@ Events: Your certs get applied directly to the Ingress object in the `cattle-system` namespace. -Check the status of the Ingress object and see if its ready. +Check the status of the Ingress object and see if it's ready. 
``` kubectl -n cattle-system describe ingress @@ -123,12 +123,10 @@ kubectl -n cattle-system describe ingress If its ready and the SSL is still not working you may have a malformed cert or secret. -Check the nginx-ingress-controller logs. Because the nginx-ingress-controller has multiple containers in its pod you will need to specify the name of the container. +Check the `traefik` logs. ``` -kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller -... -W0705 23:04:58.240571 7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found +kubectl -n traefik logs -l app=traefik ``` ### No matches for kind "Issuer" @@ -148,7 +146,7 @@ The most common cause of this issue is port 8472/UDP is not open between the nod Once the network issue is resolved, the `canal` pods should timeout and restart to establish their connections. -### nginx-ingress-controller Pods show RESTARTS +### Traefik Pods show RESTARTS The most common cause of this issue is the `canal` pods have failed to establish the overlay network. See [canal Pods show READY `2/3`](#canal-pods-show-ready-23) for troubleshooting. diff --git a/versioned_docs/version-2.14/getting-started/installation-and-upgrade/installation-references/tls-settings.md b/versioned_docs/version-2.14/getting-started/installation-and-upgrade/installation-references/tls-settings.md index 3e6c745857c..240897b429c 100644 --- a/versioned_docs/version-2.14/getting-started/installation-and-upgrade/installation-references/tls-settings.md +++ b/versioned_docs/version-2.14/getting-started/installation-and-upgrade/installation-references/tls-settings.md @@ -10,10 +10,7 @@ Changing the default TLS settings depends on the chosen installation method.
## Running Rancher in a highly available Kubernetes cluster -When you install Rancher inside of a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. The possible TLS settings depend on the used ingress controller: - -* nginx-ingress-controller (default for RKE2): [Default TLS Version and Ciphers](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers). -* traefik (default for K3s): [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options). +When you install Rancher in a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. Traefik is the default ingress controller for K3s and can also be used with RKE2. Refer to [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options) for further information. ## Running Rancher in a single Docker container diff --git a/versioned_docs/version-2.14/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md b/versioned_docs/version-2.14/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md index f0157a3f6e0..14b5116a3fe 100644 --- a/versioned_docs/version-2.14/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md +++ b/versioned_docs/version-2.14/how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md @@ -8,14 +8,12 @@ title: Setting up Amazon ELB Network Load Balancer This how-to guide describes how to set up a Network Load Balancer (NLB) in Amazon's EC2 service that will direct traffic to multiple instances on EC2. -These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on an RKE Kubernetes cluster, three nodes are required. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required. +These examples show the load balancer being configured to direct traffic to three Rancher server nodes.
If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required. This tutorial is about one possible way to set up your load balancer, not the only way. Other types of load balancers, such as a Classic Load Balancer or Application Load Balancer, could also direct traffic to the Rancher server nodes. Rancher only supports using the Amazon NLB when terminating traffic in `tcp` mode for port 443 rather than `tls` mode. This is due to the fact that the NLB does not inject the correct headers into requests when terminated at the NLB. This means that if you want to use certificates managed by the Amazon Certificate Manager (ACM), you should use an ALB. - - ## Requirements These instructions assume you have already created Linux instances in EC2. The load balancer will direct traffic to these nodes. @@ -26,7 +24,7 @@ Begin by creating two target groups for the **TCP** protocol, one with TCP port Your first NLB configuration step is to create two target groups. Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener for port 80, because traffic to port 80 will be automatically redirected to port 443. -Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, the Ingress should redirect traffic from port 80 to port 443. +The Traefik Ingress should redirect traffic from port 80 to port 443. 1. Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started. Make sure to select the **Region** where your EC2 instances (Linux nodes) are created. 1. Select **Services** and choose **EC2**, find the section **Load Balancing** and open **Target Groups**. @@ -34,7 +32,7 @@ Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, th :::note -Health checks are handled differently based on the Ingress. 
For details, refer to [this section.](#health-check-paths-for-nginx-ingress-and-traefik-ingresses) +For details on Traefik Ingress health checks, refer to [this section.](#health-check-paths-for-traefik-ingresses) ::: @@ -167,13 +165,10 @@ After AWS creates the NLB, click **Close**. 6. Click **Save** in the top right of the screen. -## Health Check Paths for NGINX Ingress and Traefik Ingresses +## Health Check Paths for Traefik Ingresses -K3s and RKE Kubernetes clusters handle health checks differently because they use different Ingresses by default. +K3s Kubernetes clusters use Traefik as the default Ingress. -For RKE Kubernetes clusters, NGINX Ingress is used by default, whereas for K3s Kubernetes clusters, Traefik is the default Ingress. +The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **Traefik:** The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served. -- **NGINX Ingress:** The default backend of the NGINX Ingress controller has a `/healthz` endpoint. By default `/healthz` is always matched (regardless of Host), and a response from [`ingress-nginx` itself](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212) is always served. - -To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` or `/healthz` (for K3s or for RKE clusters, respectively) wherever possible, to get a response from the Rancher Pods, not the Ingress. +To simulate an accurate health check, it is a best practice to use the Host header (Rancher hostname) combined with `/ping` wherever possible, to get a response from the Rancher Pods, not the Ingress.
diff --git a/versioned_docs/version-2.14/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md b/versioned_docs/version-2.14/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md index 7a743f731a2..d9cdec4f64c 100644 --- a/versioned_docs/version-2.14/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md +++ b/versioned_docs/version-2.14/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md @@ -91,7 +91,7 @@ To use this `kubeconfig` file, 1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool. 2. Copy the file at `/etc/rancher/rke2/rke2.yaml` and save it to the directory `~/.kube/config` on your local machine. -3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the NGINX Ingress on ports 80 and 443.) Here is an example `rke2.yaml`: +3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the Traefik Ingress on ports 80 and 443.) 
Here is an example `rke2.yaml`: ```yml apiVersion: v1 @@ -131,39 +131,18 @@ Check that all the required pods and containers are healthy are ready to continu ``` /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A -NAMESPACE NAME READY STATUS RESTARTS AGE -kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s -kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s -kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s -kube-system etcd-rke2-server-1 1/1 Running 0 2m13s -kube-system etcd-rke2-server-2 1/1 Running 0 87s -kube-system etcd-rke2-server-3 1/1 Running 0 56s -kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s -kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s -kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 2m17s -kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s -kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s -kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s -kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s -kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s -kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s -kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s -kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s -kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s -kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s -kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s -kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s -kube-system rke2-canal-b9lvm 2/2 Running 0 91s -kube-system rke2-canal-khwp2 2/2 Running 0 2m5s -kube-system rke2-canal-swfmq 2/2 Running 0 105s -kube-system rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s -kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s -kube-system 
rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s -kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s -kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s -kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system cloud-controller-manager-my-node-1 1/1 Running 0 5d +kube-system etcd-my-node-1 1/1 Running 0 5d +kube-system helm-install-traefik-crd-z8vsz 0/1 Completed 0 5d +kube-system helm-install-traefik-h6n2q 0/1 Completed 0 5d +kube-system kube-apiserver-my-node-1 1/1 Running 0 5d +kube-system kube-proxy-my-node-1 1/1 Running 0 5d +kube-system kube-scheduler-my-node-1 1/1 Running 0 5d +kube-system rke2-canal-2j4ls 2/2 Running 0 5d +kube-system rke2-coredns-rke2-coredns-5c6b4d5-8f2mz 1/1 Running 0 5d +kube-system rke2-metrics-server-587b78-v9q2s 1/1 Running 0 5d +kube-system traefik-64f54698-m9p2w 1/1 Running 0 2d ``` **Result:** You have confirmed that you can access the cluster with `kubectl` and the RKE2 cluster is running successfully. Now the Rancher management server can be installed on the cluster. diff --git a/versioned_docs/version-2.14/troubleshooting/other-troubleshooting-tips/rancher-ha.md b/versioned_docs/version-2.14/troubleshooting/other-troubleshooting-tips/rancher-ha.md index 25845cdc87d..854010e4871 100644 --- a/versioned_docs/version-2.14/troubleshooting/other-troubleshooting-tips/rancher-ha.md +++ b/versioned_docs/version-2.14/troubleshooting/other-troubleshooting-tips/rancher-ha.md @@ -69,7 +69,7 @@ rancher rancher.yourdomain.com x.x.x.x,x.x.x.x,x.x.x.x 80, 443 2m When accessing your configured Rancher FQDN does not show you the UI, check the ingress controller logging to see what happens when you try to access Rancher: ``` -kubectl -n ingress-nginx logs -l app=ingress-nginx +kubectl -n traefik logs -l app=traefik ``` ## Leader Election
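The tls-settings.md hunks in this patch link to Traefik's TLS options without showing one. As a rough, hypothetical sketch of what such a resource can look like (the name, namespace, and cipher list below are illustrative assumptions, not Rancher or Traefik defaults), a `TLSOption` can raise the minimum TLS version at the ingress:

```yml
# Hypothetical sketch only; values are assumptions, not shipped defaults.
apiVersion: traefik.io/v1alpha1
kind: TLSOption
metadata:
  name: default
  namespace: kube-system
spec:
  minVersion: VersionTLS12
  cipherSuites:
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
```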