---
title: Container Network Interface (CNI) Providers
description: Learn about Container Network Interface (CNI), the CNI providers Rancher provides, the features they offer, and how to choose a provider for your use case
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/container-network-interface-providers"/>
</head>

## What is CNI?

CNI (Container Network Interface), a [Cloud Native Computing Foundation project](https://cncf.io/), consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of plugins. CNI concerns itself only with the network connectivity of containers and with removing allocated resources when a container is deleted.

Kubernetes uses CNI as an interface between network providers and Kubernetes pod networking.

For more information, visit the [CNI GitHub project](https://github.com/containernetworking/cni).

## What Network Models are Used in CNI?

CNI network providers implement their network fabric using either an encapsulated network model such as Virtual Extensible LAN ([VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan)) or an unencapsulated network model such as Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)).

### What is an Encapsulated Network?

This network model provides a logical Layer 2 (L2) network encapsulated over the existing Layer 3 (L3) network topology that spans the Kubernetes cluster nodes. With this model, you get an isolated L2 network for containers without needing route distribution, at the cost of minimal processing overhead and increased IP packet size caused by the IP header that overlay encapsulation adds. Encapsulation information is distributed over UDP ports between Kubernetes workers, interchanging network control plane information about how MAC addresses can be reached. Common encapsulations used in this network model are VXLAN, Internet Protocol Security (IPSec), and IP-in-IP.

In simple terms, this network model generates a kind of network bridge extended between Kubernetes workers, where pods are connected.

This network model is used when an extended L2 bridge is preferred. It is sensitive to L3 network latencies between the Kubernetes workers. If datacenters are in distinct geolocations, be sure to have low latencies between them to avoid eventual network segmentation.

CNI network providers using this network model include Flannel, Canal, Weave, and Cilium. Calico does not use this model by default, but it can be configured to do so.

### What is an Unencapsulated Network?

This network model provides an L3 network to route packets between containers. This model doesn't generate an isolated L2 network, nor does it incur encapsulation overhead. These benefits come at the cost of Kubernetes workers having to manage any route distribution that's needed. Instead of using IP headers for encapsulation, this network model uses a network protocol such as [BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol) between Kubernetes workers to distribute the routing information needed to reach pods.

In simple terms, this network model generates a kind of network router extended between Kubernetes workers, which provides information about how to reach pods.

This network model is used when a routed L3 network is preferred. It dynamically updates routes at the OS level on the Kubernetes workers, and it is less sensitive to latency.

CNI network providers using this network model include Calico and Cilium. Cilium may be configured with this model, although it is not the default mode.

## What CNI Providers are Provided by Rancher?

### RKE2 Kubernetes clusters

Out-of-the-box, Rancher provides the following CNI network providers for RKE2 Kubernetes clusters: Calico, Canal, Cilium, and Flannel.

You can choose your CNI network provider when you create new Kubernetes clusters from Rancher.

#### Calico

Calico enables networking and network policy in Kubernetes clusters across the cloud. By default, Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads are able to communicate over both cloud infrastructure and on-prem using BGP.

Calico also provides a stateless IP-in-IP or VXLAN encapsulation mode that can be used if necessary. Calico also offers policy isolation, allowing you to secure and govern your Kubernetes workloads using advanced ingress and egress policies.

Kubernetes workers should open TCP port `179` if using BGP, or UDP port `4789` if using VXLAN encapsulation. In addition, TCP port `5473` is needed when using Typha. See [the port requirements for user clusters](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md#networking-requirements) for more details.

:::note Important:

In Rancher v2.6.3, Calico probes fail on Windows nodes upon RKE2 installation. **Note that this issue is resolved in v2.6.4.**

- To work around this issue, first navigate to `https://<rancherserverurl>/v3/settings/windows-rke2-install-script`.

- There, change the current setting: `https://raw.githubusercontent.com/rancher/wins/v0.1.3/install.ps1` to this new setting: `https://raw.githubusercontent.com/rancher/rke2/master/windows/rke2-install.ps1`.

:::

For more information, see the following pages:

- [Project Calico Official Site](https://www.projectcalico.org/)
- [Project Calico GitHub Page](https://github.com/projectcalico/calico)

#### Canal

Canal is a CNI network provider that gives you the best of Flannel and Calico. It allows users to easily deploy Calico and Flannel networking together as a unified networking solution, combining Calico's network policy enforcement with the network connectivity options of both Calico (unencapsulated) and Flannel (encapsulated).

In Rancher, Canal is the default CNI network provider, using Flannel with VXLAN encapsulation.

Kubernetes workers should open UDP port `8472` (VXLAN) and TCP port `9099` (health checks). If using WireGuard, you should also open UDP ports `51820` and `51821`. For more details, refer to [the port requirements for user clusters](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md).

For more information, refer to the [Rancher maintained Canal source](https://github.com/rancher/rke2-charts/tree/main-source/packages/rke2-canal) and the [Canal GitHub Page](https://github.com/projectcalico/canal).

#### Cilium

Cilium enables networking and network policies (L3, L4, and L7) in Kubernetes. By default, Cilium uses eBPF technologies to route packets inside the node and VXLAN to send packets to other nodes. Unencapsulated techniques can also be configured.

Cilium recommends kernel versions greater than 5.2 to leverage the full potential of eBPF. Kubernetes workers should open UDP port `8472` for VXLAN and TCP port `4240` for health checks. In addition, ICMP 8/0 must be enabled for health checks. For more information, check the [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements).

##### Ingress Routing Across Nodes in Cilium

By default, Cilium does not allow pods to contact pods on other nodes. To work around this, enable the ingress controller to route requests across nodes with a `CiliumNetworkPolicy`.

After selecting the Cilium CNI and enabling Project Network Isolation for your new cluster, configure it as follows:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: hn-nodes
  namespace: default
spec:
  endpointSelector: {}
  ingress:
    - fromEntities:
        - remote-node
```

#### Flannel

Flannel is a simple and easy way to configure an L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named `flanneld` on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms, with the default encapsulation being [VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan).

Encapsulated traffic is unencrypted by default. Flannel provides two solutions for encryption:

* [IPSec](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#ipsec), which makes use of [strongSwan](https://www.strongswan.org/) to establish encrypted IPSec tunnels between Kubernetes workers. It is an experimental backend for encryption.
* [WireGuard](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#wireguard), which is a faster-performing alternative to strongSwan.

Kubernetes workers should open UDP port `8472` (VXLAN). See [the port requirements for user clusters](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md#networking-requirements) for more details.

For more information, see the [Flannel GitHub Page](https://github.com/flannel-io/flannel).

## CNI Features by Provider

The following table summarizes the features available for each CNI network provider provided by Rancher.

| Provider | Network Model | Route Distribution | Network Policies | Mesh | External Datastore | Encryption | Ingress/Egress Policies |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Canal | Encapsulated (VXLAN) | No | Yes | No | K8s API | Yes | Yes |
| Flannel | Encapsulated (VXLAN) | No | No | No | K8s API | Yes | No |
| Calico | Encapsulated (VXLAN, IPIP) or Unencapsulated | Yes | Yes | Yes | Etcd and K8s API | Yes | Yes |
| Weave | Encapsulated | Yes | Yes | Yes | No | Yes | Yes |
| Cilium | Encapsulated (VXLAN) | Yes | Yes | Yes | Etcd and K8s API | Yes | Yes |

- Network Model: Encapsulated or unencapsulated. For more information, see [What Network Models are Used in CNI?](#what-network-models-are-used-in-cni)

- Route Distribution: An exterior gateway protocol designed to exchange routing and reachability information on the Internet. BGP can assist with pod-to-pod networking between clusters. This feature is a must for unencapsulated CNI network providers, and it is typically handled by BGP. If you plan to build clusters split across network segments, route distribution is a nice-to-have feature.

- Network Policies: Kubernetes offers functionality to enforce rules about which services can communicate with each other using network policies. This feature is stable as of Kubernetes v1.7 and is ready to use with certain networking plugins.

- Mesh: This feature allows service-to-service networking communication between distinct Kubernetes clusters.

- External Datastore: CNI network providers with this feature need an external datastore for their data.

- Encryption: This feature allows encrypted and secure network control and data planes.

- Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications.

## CNI Community Popularity

<CNIPopularityTable />

## Which CNI Provider Should I Use?

It depends on your project needs. There are many different providers, each with various features and options. There isn't one provider that meets everyone's needs.

Canal is the default CNI network provider. We recommend it for most use cases. It provides encapsulated networking for containers with Flannel, while adding Calico network policies that can provide project/namespace isolation in terms of networking.

## How can I configure a CNI network provider?

Please see [Cluster Options](../reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md) for how to configure a network provider for your cluster. For more advanced configuration options, see how to configure your cluster using a [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md#cluster-config-file-reference).
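To make the specification concrete, a CNI plugin is driven by a small JSON network configuration file. The following is a generic sketch based on upstream CNI conventions (using the reference `bridge` plugin and `host-local` IPAM), not a file that Rancher itself generates; names and subnets are illustrative:

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24"
  }
}
```

The container runtime passes a file like this to the named plugin, which then creates the interface and asks the IPAM plugin for an address.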
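Outside the Rancher UI, the provider can also be selected in the cluster configuration. A minimal sketch for a standalone RKE2 server, assuming the standard `cni` option in `config.yaml`:

```yaml
# /etc/rancher/rke2/config.yaml (RKE2 server)
# Selects Cilium instead of the default Canal.
cni: cilium
```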
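The network configuration that Flannel stores is a small JSON document. A minimal sketch, assuming the default VXLAN backend (the subnet value is illustrative):

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

`flanneld` carves per-host subnet leases out of the `Network` range and forwards traffic using the configured `Backend`.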
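As an illustration of the Network Policies feature above, a minimal Kubernetes `NetworkPolicy` that only allows ingress to pods labeled `app: db` from pods labeled `app: api` might look like the following (the labels and namespace are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: default
spec:
  # Pods this policy applies to.
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
  # Only traffic from pods matching this selector is allowed in.
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
```

A policy like this only takes effect when the cluster's CNI provider supports network policy enforcement, which is what the table column indicates.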
||||
---
title: Deprecated Features in Rancher
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/deprecated-features"/>
</head>

## Where can I find out which features have been deprecated in Rancher?

Rancher publishes deprecated features as part of the [release notes](https://github.com/rancher/rancher/releases) for Rancher on GitHub. Please consult the following patch releases for deprecated features:

| Patch Version | Release Date |
|---------------|---------------|
| [2.13.3](https://github.com/rancher/rancher/releases/tag/v2.13.3) | February 25, 2026 |
| [2.13.2](https://github.com/rancher/rancher/releases/tag/v2.13.2) | January 29, 2026 |
| [2.13.1](https://github.com/rancher/rancher/releases/tag/v2.13.1) | December 18, 2025 |
| [2.13.0](https://github.com/rancher/rancher/releases/tag/v2.13.0) | November 25, 2025 |

## What can I expect when a feature is marked for deprecation?

In the release where functionality is marked as "Deprecated", it is still available and supported, allowing upgrades to follow the usual procedure. Once upgraded, users and admins should start planning to move away from the deprecated functionality before upgrading to the release in which it is marked as removed. The recommendation for new deployments is not to use the deprecated feature.
||||
---
title: General FAQ
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/general-faq"/>
</head>

This FAQ is a work in progress designed to answer the questions most frequently asked about Rancher v2.x.

See the [Technical FAQ](technical-items.md) for frequently asked technical questions.

## Is it possible to manage Azure Kubernetes Services with Rancher v2.x?

Yes. See our [Cluster Administration](../how-to-guides/new-user-guides/manage-clusters/manage-clusters.md) guide for the Rancher features available on AKS, as well as our [documentation on AKS](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-aks.md).

## Does Rancher support Windows?

Yes. Rancher supports Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/use-windows-clusters.md).

## Does Rancher support Istio?

Yes. Rancher supports [Istio](../integrations-in-rancher/istio/istio.md).

## Will Rancher v2.x support Hashicorp's Vault for storing secrets?

As of Rancher v2.9, Rancher [supports authentication with service account tokens](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/jwt-authentication.md), which is used by Vault and other integrations.

## Does Rancher v2.x support RKT containers as well?

At this time, we only support Docker.

## Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and registered Kubernetes?

Out-of-the-box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico, and Weave. Always refer to the [Rancher Support Matrix](https://rancher.com/support-maintenance-terms/) for details about what is officially supported.

## Are you planning on supporting Traefik for existing setups?

We don't currently plan on providing embedded Traefik support, but we're still exploring load-balancing approaches.

## Can I import OpenShift Kubernetes clusters into v2.x?

Our goal is to run any Kubernetes cluster. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet.

## Is Longhorn integrated with Rancher?

Yes. Longhorn is integrated with Rancher v2.5 and later.
||||
---
title: Installing and Configuring kubectl
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/install-and-configure-kubectl"/>
</head>

`kubectl` is a CLI utility for running commands against Kubernetes clusters. It's required for many maintenance and administrative tasks in Rancher 2.x.

## Installation

See [kubectl Installation](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for installation on your operating system.

## Configuration

When you create a Kubernetes cluster with RKE2 or K3s, the kubeconfig file is stored at `/etc/rancher/rke2/rke2.yaml` or `/etc/rancher/k3s/k3s.yaml`, depending on your chosen distribution. These files are used to configure access to the Kubernetes cluster.

Test your connectivity with `kubectl` and see if you can get the list of nodes back.

```shell
kubectl get nodes
NAME              STATUS   ROLES                      AGE   VERSION
165.227.114.63    Ready    controlplane,etcd,worker   11m   v1.10.1
165.227.116.167   Ready    controlplane,etcd,worker   11m   v1.10.1
165.227.127.226   Ready    controlplane,etcd,worker   11m   v1.10.1
```
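For example, to point `kubectl` at the kubeconfig written by RKE2, you can export `KUBECONFIG` in your shell (substitute the K3s path if you use K3s):

```shell
# Use the kubeconfig written by RKE2; for K3s, use /etc/rancher/k3s/k3s.yaml instead.
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
echo "$KUBECONFIG"
# → /etc/rancher/rke2/rke2.yaml
```

Alternatively, pass the path per invocation with `kubectl --kubeconfig <path>`.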
||||
---
title: Rancher is No Longer Needed
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/rancher-is-no-longer-needed"/>
</head>

This page answers questions about what happens if you don't want Rancher anymore, if you don't want a cluster to be managed by Rancher anymore, or if the Rancher server is deleted.

## If the Rancher server is deleted, what happens to the workloads in my downstream clusters?

If Rancher is ever deleted or unrecoverable, all workloads in the downstream Kubernetes clusters managed by Rancher will continue to function as normal.

## If the Rancher server is deleted, how do I access my downstream clusters?

The capability to access a downstream cluster without Rancher depends on the type of cluster and the way that the cluster was created. To summarize:

- **Registered/Imported clusters:** The cluster is unaffected, and you can access it using the same methods that you used before the cluster was registered into Rancher.
- **Hosted Kubernetes clusters:** If you created the cluster in a cloud-hosted Kubernetes provider such as EKS, GKE, or AKS, you can continue to manage the cluster using your provider's cloud credentials.
- **Rancher provisioned clusters:** To access an [RKE2/K3s cluster](../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md), the cluster must have the [authorized cluster endpoint](../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) enabled, and you must have already downloaded the cluster's kubeconfig file from the Rancher UI. With this endpoint, you can access your cluster with kubectl directly instead of communicating through the Rancher server's [authentication proxy](../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#1-the-authentication-proxy). For instructions on how to configure kubectl to use the authorized cluster endpoint, refer to the section about directly accessing clusters with [kubectl and the kubeconfig file](../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster). These clusters will use a snapshot of the authentication as it was configured when Rancher was removed.

## What if I don't want Rancher anymore?

:::note

The previously recommended [System Tools](../reference-guides/system-tools.md) has been deprecated since June 2022.

:::

If you [installed Rancher on a Kubernetes cluster](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md), remove Rancher by using the [Rancher Cleanup](https://github.com/rancher/rancher-cleanup) tool.

Uninstalling Rancher in high-availability (HA) mode will also remove all `helm-operation-*` pods and the following apps:

- fleet
- fleet-agent
- rancher-operator
- rancher-webhook

Custom resource definitions (CRDs) and custom namespaces will still need to be removed manually.

If you installed Rancher with Docker, you can uninstall Rancher by removing the single Docker container that it runs in.

Imported clusters will not be affected by Rancher being removed. For other types of clusters, refer to the section on [accessing downstream clusters when Rancher is removed](#if-the-rancher-server-is-deleted-how-do-i-access-my-downstream-clusters).

## What if I don't want my registered cluster managed by Rancher?

If a registered cluster is deleted from the Rancher UI, the cluster is detached from Rancher, leaving it intact and accessible by the same methods that were used to access it before it was registered in Rancher.

To detach the cluster:

1. In the upper left corner, click **☰ > Cluster Management**.
2. Go to the registered cluster that should be detached from Rancher and click **⋮ > Delete**.
3. Click **Delete**.

**Result:** The registered cluster is detached from Rancher and functions normally outside of Rancher.

## What if I don't want my hosted Kubernetes cluster managed by Rancher?

At this time, there is no functionality to detach these clusters from Rancher. In this context, "detach" means removing Rancher components from the cluster and managing access to the cluster independently of Rancher.

The capability to manage these clusters without Rancher is being tracked in [this issue](https://github.com/rancher/rancher/issues/25234).

For information about how to access clusters if the Rancher server is deleted, refer to [this section](#if-the-rancher-server-is-deleted-how-do-i-access-my-downstream-clusters).
||||
---
title: Security FAQ
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/security"/>
</head>

## Is there a Hardening Guide?

The Hardening Guide is located in the main [Security](../reference-guides/rancher-security/rancher-security.md) section.

## Have hardened Rancher Kubernetes clusters been evaluated by the CIS Kubernetes Benchmark? Where can I find the results?

We have run the CIS Kubernetes Benchmark against a hardened Rancher Kubernetes cluster. The results of that assessment can be found in the main [Security](../reference-guides/rancher-security/rancher-security.md) section.

## How does Rancher verify communication with downstream clusters, and what are some associated security concerns?

Communication between the Rancher server and downstream clusters is performed through agents. Rancher uses either a registered certificate authority (CA) bundle or the local trust store to verify communication between Rancher agents and the Rancher server. Using a CA bundle for verification is stricter, as only certificates based on that bundle are trusted. If TLS verification against an explicit CA bundle fails, Rancher may fall back to using the local trust store to verify future communication. Any CA within the local trust store can then be used to generate a valid certificate.

As described in [Rancher Security Update CVE-2024-22030](https://www.suse.com/c/rancher-security-update/), under a narrow set of circumstances, malicious actors can take over Rancher nodes by exploiting the behavior of Rancher CAs. For the attack to succeed, the malicious actor must generate a valid certificate from either a valid CA in the targeted Rancher server or from a valid registered CA. The attacker also needs to either hijack or spoof the Rancher server-url as a preliminary step. Rancher is currently evaluating Rancher CA behavior to mitigate against this and any similar avenues of attack.
||||
@@ -0,0 +1,184 @@
|
||||
---
|
||||
title: Technical FAQ
|
||||
---
|
||||
|
||||
<head>
|
||||
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/technical-items"/>
|
||||
</head>
|
||||
|
||||
## How can I reset the administrator password?
|
||||
|
||||
Docker install:
|
||||
|
||||
```
|
||||
$ docker exec -ti <container_id> reset-password
|
||||
New password for default administrator (user-xxxxx):
|
||||
<new_password>
|
||||
```
|
||||
|
||||
Kubernetes install (Helm):
|
||||
|
||||
```
|
||||
$ KUBECONFIG=./kube_config_cluster.yml
|
||||
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher --no-headers | head -1 | awk '{ print $1 }') -c rancher -- reset-password
|
||||
New password for default administrator (user-xxxxx):
|
||||
<new_password>
|
||||
```
|
||||
|
||||
## I deleted/deactivated the last admin, how can I fix it?
|
||||
|
||||
Docker install:
|
||||
|
||||
```
|
||||
$ docker exec -ti <container_id> ensure-default-admin
|
||||
New default administrator (user-xxxxx)
|
||||
New password for default administrator (user-xxxxx):
|
||||
<new_password>
|
||||
```
|
||||
|
||||
Kubernetes install (Helm):
|
||||
|
||||
```
|
||||
$ KUBECONFIG=./kube_config_cluster.yml
|
||||
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- ensure-default-admin
|
||||
New password for default administrator (user-xxxxx):
|
||||
<new_password>
|
||||
```
|
||||
|
||||
## How can I enable debug logging?
|
||||
|
||||
See [Troubleshooting: Logging](../troubleshooting/other-troubleshooting-tips/logging.md)
|
||||
|
||||
## My ClusterIP does not respond to ping
|
||||
|
||||
ClusterIP is a virtual IP, which will not respond to ping. Best way to test if the ClusterIP is configured correctly, is by using `curl` to access the IP and port to see if it responds.
|
||||
|
||||
## Where can I manage Node Templates?
|
||||
|
||||
Node Templates can be accessed by opening your account menu (top right) and selecting `Node Templates`.
|
||||
|
||||
## Why is my Layer-4 Load Balancer in `Pending` state?
|
||||
|
||||
The Layer-4 Load Balancer is created as `type: LoadBalancer`. In Kubernetes, this needs a cloud provider or controller that can satisfy these requests, otherwise these will be in `Pending` state forever. More information can be found on [Cloud Providers](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md) or [Create External Load Balancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/)
|
||||
|
||||
## Where is the state of Rancher stored?
|
||||
|
||||
- Docker Install: in the embedded etcd of the `rancher/rancher` container, located at `/var/lib/rancher`.
|
||||
- Kubernetes install: default location is in the `/var/lib/rancher/rke2` or `/var/lib/rancher/k3s` directories of the respective RKE2/K3s cluster created to run Rancher.
|
||||
|
||||
## How are the supported Docker versions determined?
|
||||
|
||||
We follow the validated Docker versions for upstream Kubernetes releases. The validated versions can be found under [External Dependencies](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md#external-dependencies) in the Kubernetes release CHANGELOG.md.
|
||||
|
||||
## How can I access nodes created by Rancher?
|
||||
|
||||
SSH keys to access the nodes created by Rancher can be downloaded via the **Nodes** view. Choose the node which you want to access and click on the vertical ⋮ button at the end of the row, and choose **Download Keys** as shown in the picture below.
|
||||
|
||||

|
||||
|
||||
Unzip the downloaded zip file, and use the file `id_rsa` to connect to you host. Be sure to use the correct username (`rancher` or `docker` for RancherOS, `ubuntu` for Ubuntu, `ec2-user` for Amazon Linux)
|
||||
|
||||
```
|
||||
$ ssh -i id_rsa user@ip_of_node
|
||||
```
|
||||
|
||||
## How can I automate task X in Rancher?
|
||||
|
||||
The UI consists of static files, and works based on responses of the API. That means every action/task that you can execute in the UI, can be automated via the API. There are 2 ways to do this:
|
||||
|
||||
* Visit `https://your_rancher_ip/v3` and browse the API options.
|
||||
* Capture the API calls when using the UI (Most commonly used for this is [Chrome Developer Tools](https://developers.google.com/web/tools/chrome-devtools/#network) but you can use anything you like)
|
||||
|
||||
## The IP address of a node changed, how can I recover?
|
||||
|
||||
A node is required to have a static IP configured (or a reserved IP via DHCP). If the IP of a node has changed, you will have to remove it from the cluster and add it again. After it is removed, Rancher will update the cluster to the correct state. If the cluster is no longer in `Provisioning` state, the node is removed from the cluster.
|
||||
|
||||
When the IP address of the node changed, Rancher lost connection to the node, so it will be unable to clean the node properly. See [Cleaning cluster nodes](../how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes.md) to clean the node.
|
||||
|
||||
When the node is removed from the cluster, and the node is cleaned, you can add the node to the cluster.
|
||||
|
||||
## How can I add more arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster?
You can add more arguments/binds/environment variables via the respective [RKE2 Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md#cluster-configuration) or [K3s Config File](../reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md#cluster-configuration).
## How do I check if my certificate chain is valid?
Use the `openssl verify` command to validate your certificate chain:
:::tip

Configure `SSL_CERT_DIR` and `SSL_CERT_FILE` to a dummy location to make sure the OS-installed certificates are not used when verifying manually.

:::
```
SSL_CERT_DIR=/dummy SSL_CERT_FILE=/dummy openssl verify -CAfile ca.pem rancher.yourdomain.com.pem
rancher.yourdomain.com.pem: OK
```
If you receive the error `unable to get local issuer certificate`, the chain is incomplete. This usually means that an intermediate CA certificate issued your server certificate. If you already have this certificate, you can use it in the verification of the certificate as shown below:
```
SSL_CERT_DIR=/dummy SSL_CERT_FILE=/dummy openssl verify -CAfile ca.pem -untrusted intermediate.pem rancher.yourdomain.com.pem
rancher.yourdomain.com.pem: OK
```
If you have successfully verified your certificate chain, include the needed intermediate CA certificates in the server certificate to complete the certificate chain for any connection made to Rancher (for example, by the Rancher agent). The order of the certificates in the server certificate file should be the server certificate itself first (contents of `rancher.yourdomain.com.pem`), followed by the intermediate CA certificate(s) (contents of `intermediate.pem`):
```
-----BEGIN CERTIFICATE-----
%YOUR_CERTIFICATE%
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
%YOUR_INTERMEDIATE_CERTIFICATE%
-----END CERTIFICATE-----
```
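As a sketch, a combined file in that order can be produced by concatenating the individual PEM files (filenames match the examples above; the output filename is illustrative):

```
# Server certificate first, then the intermediate(s)
cat rancher.yourdomain.com.pem intermediate.pem > fullchain.pem
```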
If you still get errors during verification, you can retrieve the subject and the issuer of the server certificate using the following command:
```
openssl x509 -noout -subject -issuer -in rancher.yourdomain.com.pem
subject= /C=GB/ST=England/O=Alice Ltd/CN=rancher.yourdomain.com
issuer= /C=GB/ST=England/O=Alice Ltd/CN=Alice Intermediate CA
```
## How do I check `Common Name` and `Subject Alternative Names` in my server certificate?
Although technically only an entry in `Subject Alternative Names` is required, having the hostname both in `Common Name` and as an entry in `Subject Alternative Names` gives you maximum compatibility with older browsers and applications.
Check `Common Name`:
```
openssl x509 -noout -subject -in cert.pem
subject= /CN=rancher.my.org
```
Check `Subject Alternative Names`:
```
openssl x509 -noout -in cert.pem -text | grep DNS
DNS:rancher.my.org
```
## Why does it take 5+ minutes for a pod to be rescheduled when a node has failed?
This is due to a combination of the following default Kubernetes settings:
* kubelet
  * `node-status-update-frequency`: Specifies how often kubelet posts node status to master (default 10s)
* kube-controller-manager
  * `node-monitor-period`: The period for syncing NodeStatus in NodeController (default 5s)
  * `node-monitor-grace-period`: Amount of time which we allow running Node to be unresponsive before marking it unhealthy (default 40s)
  * `pod-eviction-timeout`: The grace period for deleting pods on failed nodes (default 5m0s)
See [Kubernetes: kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) and [Kubernetes: kube-controller-manager](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/) for more information on these settings.
In Kubernetes v1.13, the `TaintBasedEvictions` feature is enabled by default. See [Kubernetes: Taint based Evictions](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#taint-based-evictions) for more information.
* kube-apiserver (Kubernetes v1.13 and up)
  * `default-not-ready-toleration-seconds`: Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
  * `default-unreachable-toleration-seconds`: Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
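With taint-based evictions, these timeouts can also be overridden per pod by setting an explicit toleration. As a sketch (the toleration keys and fields are standard Kubernetes pod spec; the pod name, image, and 60-second value are illustrative), a pod that should be evicted sooner from an unreachable node could specify:

```
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  tolerations:
  # Evict this pod 60s after its node becomes unreachable,
  # instead of the 300s apiserver default
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60
  containers:
  - name: app
    image: nginx
```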
## Can I use keyboard shortcuts in the UI?
Yes, most parts of the UI can be reached using keyboard shortcuts. For an overview of the available shortcuts, press `?` anywhere in the UI.