Merge pull request #2031 from sunilarjun/rke1-removal

RKE1 removal/updates - /getting-started pages
This commit is contained in:
Sunil Singh
2025-10-16 15:13:58 -07:00
committed by GitHub
46 changed files with 54 additions and 2428 deletions

View File

@@ -1,51 +0,0 @@
---
title: Dockershim
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-requirements/dockershim"/>
</head>
The Dockershim is the CRI compliant layer between the Kubelet and the Docker daemon. As part of the Kubernetes 1.20 release, the [deprecation of the in-tree Dockershim was announced](https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/). For more information on the deprecation and its timelines, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed).
RKE clusters now support the external Dockershim to continue leveraging Docker as the CRI runtime. We now use the upstream open source external Dockershim announced by [Mirantis and Docker](https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/) to ensure RKE clusters can continue to leverage Docker.
RKE2 and K3s clusters use an embedded containerd as a container runtime and are not affected.
To enable the external Dockershim in versions of RKE before 1.24, configure the following option.
```
enable_cri_dockerd: true
```
Starting with Kubernetes 1.24, the option above defaults to `true`.
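In context, the option sits at the top level of the RKE `cluster.yml`. A minimal sketch (node address, user, and roles below are placeholders, not values from this document):

```yaml
# cluster.yml — minimal sketch; node values are placeholders
nodes:
  - address: 203.0.113.10
    user: rancher
    role: [controlplane, etcd, worker]
enable_cri_dockerd: true
```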
For users looking to use another container runtime, Rancher has the edge-focused K3s and datacenter-focused RKE2 Kubernetes distributions that use containerd as the default runtime. Imported RKE2 and K3s Kubernetes clusters can then be upgraded and managed through Rancher going forward.
## FAQ
<br/>
Q: Do I have to upgrade Rancher to get Rancher's support of the upstream external Dockershim replacement?
A: Upstream support of the Dockershim replacement `cri_dockerd` begins for RKE with Kubernetes 1.21. You will need to be on a version of Rancher that supports RKE with Kubernetes 1.21. See our support matrix for details.
<br/>
Q: I am currently on RKE with Kubernetes 1.23. What happens when upstream finally removes Dockershim in 1.24?
A: The Dockershim in RKE will continue to work through Kubernetes 1.23. For information on the timeline, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed). Starting in 1.24, RKE enables `cri_dockerd` by default and will continue to do so in later versions.
<br/>
Q: What are my other options if I don't want to depend on the Dockershim or `cri_dockerd`?
A: You can use a runtime like containerd with Kubernetes that does not require Dockershim support. RKE2 or K3s are two options for doing this.
<br/>
Q: If I am already using RKE1 and want to switch to RKE2, what are my migration options?
A: Today, you can stand up a new cluster and migrate workloads to a new RKE2 cluster that uses containerd. For details, see the [RKE to RKE2 Replatforming Guide](https://links.imagerelay.com/cdn/3404/ql/5606a3da2365422ab2250d348aa07112/rke_to_rke2_replatforming_guide.pdf).
<br/>

View File

@@ -1,27 +0,0 @@
---
title: Installing Docker
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-requirements/install-docker"/>
</head>
Docker must be installed on nodes where the Rancher server will be installed, whether with Helm on an RKE cluster or as a single Docker container. Docker is not required for RKE2 or K3s clusters.
There are a couple of options for installing Docker. One option is to refer to the [official Docker documentation](https://docs.docker.com/install/) about how to install Docker on Linux. The steps will vary based on the Linux distribution.
Another option is to use one of Rancher's Docker installation scripts, which are available for every version of upstream Docker that Kubernetes supports.
For example, this command could be used to install on one of the main Linux distributions, such as SUSE Linux Enterprise or Ubuntu:
```bash
curl https://releases.rancher.com/install-docker/<version-number>.sh | sh
```
Consult the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix) to match a validated Docker version with your operating system and version of Rancher. Although the support matrix lists validated Docker versions down to the patch version, only the major and minor version of the release are relevant for the Docker installation scripts.
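Since only the major and minor version matter for the script name, the mapping from a support matrix entry to a script URL can be sketched as follows (the version string `20.10.21` is an illustrative value, not taken from the matrix):

```bash
# Derive the install-script name from a validated Docker version.
validated="20.10.21"               # illustrative support matrix entry
script_version="${validated%.*}"   # strip the patch component -> "20.10"
echo "https://releases.rancher.com/install-docker/${script_version}.sh"
```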
Note that the following sysctl setting must be applied:
```bash
net.bridge.bridge-nf-call-iptables=1
```
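To make this setting persist across reboots, it can be placed in a sysctl drop-in file (the file name below is illustrative):

```
# /etc/sysctl.d/90-rancher.conf (file name is illustrative)
net.bridge.bridge-nf-call-iptables = 1
```

It can then be applied without a reboot using `sudo sysctl --system`.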

View File

@@ -1,6 +1,6 @@
---
title: Installation Requirements
description: Learn the node requirements for each node running Rancher server when youre configuring Rancher to run either in a Docker or Kubernetes setup
description: Learn the node requirements for each node running Rancher server when you're configuring Rancher to run in a Kubernetes setup
---
<head>
@@ -33,9 +33,7 @@ If you install Rancher on a hardened Kubernetes cluster, check the [Exempting Re
All supported operating systems are 64-bit x86. Rancher should work with any modern Linux distribution.
The [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) lists which OS and Docker versions were tested for each Rancher version.
Docker is required for nodes that will run RKE clusters. It is not required for RKE2 or K3s clusters.
The [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) lists which OS versions were tested for each Rancher version.
The `ntp` (Network Time Protocol) package should be installed. This prevents errors with certificate validation that can occur when the time is not synchronized between the client and server.
@@ -47,7 +45,7 @@ If you plan to run Rancher on ARM64, see [Running on ARM64 (Experimental).](../.
### RKE2 Specific Requirements
RKE2 bundles its own container runtime, containerd. Docker is not required for RKE2 installs.
RKE2 bundles its own container runtime, containerd.
For details on which OS versions were tested with RKE2, refer to the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions).
@@ -61,12 +59,6 @@ If you are installing Rancher on a K3s cluster with **Raspbian Buster**, follow
If you are installing Rancher on a K3s cluster with Alpine Linux, follow [these steps](https://rancher.com/docs/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup) for additional setup.
### RKE Specific Requirements
RKE requires a Docker container runtime. Supported Docker versions are specified in the [Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/) page.
For more information, see [Installing Docker](install-docker.md).
## Hardware Requirements
The following sections describe the CPU, memory, and I/O requirements for nodes where Rancher is installed. Requirements vary based on the size of the infrastructure.
@@ -155,40 +147,13 @@ These requirements apply to hosted Kubernetes clusters such as Amazon Elastic Ku
(*): Large deployments require that you [follow best practices](../../../reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md) for adequate performance.
### RKE
The following table lists minimum CPU and memory requirements for each node in the [upstream cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md).
Please note that a highly available setup with at least three nodes is required for production.
| Managed Infrastructure Size | Maximum Number of Clusters | Maximum Number of Nodes | vCPUs | RAM |
|-----------------------------|----------------------------|-------------------------|-------|-------|
| Small | 150 | 1500 | 4 | 16 GB |
| Medium | 300 | 3000 | 8 | 32 GB |
| Large (*) | 500 | 5000 | 16 | 64 GB |
(*): Large deployments require that you [follow best practices](../../../reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md) for adequate performance.
Refer to the RKE documentation for more detailed information on [general requirements](https://rke.docs.rancher.com/os).
### Docker
The following table lists minimum CPU and memory requirements for a [single Docker node installation of Rancher](../other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md).
Please note that a Docker installation is only suitable for development or testing purposes and is not meant to be used in production environments.
| Managed Infrastructure Size | Maximum Number of Clusters | Maximum Number of Nodes | vCPUs | RAM |
|-----------------------------|----------------------------|-------------------------|-------|------|
| Small | 5 | 50 | 1 | 4 GB |
| Medium | 15 | 200 | 2 | 8 GB |
## Ingress
Each node in the Kubernetes cluster that Rancher is installed on should run an Ingress.
The Ingress should be deployed as DaemonSet to ensure your load balancer can successfully route traffic to all nodes.
For RKE, RKE2 and K3s installations, you don't have to install the Ingress manually because it is installed by default.
For RKE2 and K3s installations, you don't have to install the Ingress manually because it is installed by default.
For hosted Kubernetes clusters (EKS, GKE, AKS), you will need to set up the ingress.
@@ -224,8 +189,4 @@ If you use a load balancer, it should be HTTP/2 compatible.
To receive help from SUSE Support, Rancher Prime customers who use load balancers (or any other middleboxes such as firewalls) must use one that is HTTP/2 compatible.
When HTTP/2 is not available, Rancher falls back to HTTP/1.1. However, since HTTP/2 offers improved web application performance, using HTTP/1.1 can create performance issues.
## Dockershim Support
For more information on Dockershim support, refer to [this page](dockershim.md).
When HTTP/2 is not available, Rancher falls back to HTTP/1.1. However, since HTTP/2 offers improved web application performance, using HTTP/1.1 can create performance issues.

View File

@@ -19,7 +19,7 @@ The following table lists the ports that need to be open to and from nodes that
The port requirements differ based on the Rancher server architecture.
Rancher can be installed on any Kubernetes cluster. For Rancher installs on a K3s, RKE, or RKE2 Kubernetes cluster, refer to the tabs below. For other Kubernetes distributions, refer to the distribution's documentation for the port requirements for cluster nodes.
Rancher can be installed on any Kubernetes cluster. For Rancher installs on a K3s or RKE2 Kubernetes cluster, refer to the tabs below. For other Kubernetes distributions, refer to the distribution's documentation for the port requirements for cluster nodes.
:::note Notes:
@@ -70,52 +70,6 @@ The following tables break down the port requirements for inbound and outbound t
</details>
### Ports for Rancher Server Nodes on RKE
<details>
<summary>Click to expand</summary>
Typically Rancher is installed on three RKE nodes that all have the etcd, control plane and worker roles.
The following tables break down the port requirements for traffic between the Rancher nodes:
<figcaption>Rules for traffic between Rancher nodes</figcaption>
| Protocol | Port | Description |
|-----|-----|----------------|
| TCP | 443 | Rancher agents |
| TCP | 2379 | etcd client requests |
| TCP | 2380 | etcd peer communication |
| TCP | 6443 | Kubernetes apiserver |
| TCP | 8443 | Nginx Ingress's Validating Webhook |
| UDP | 8472 | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | Metrics server communication with all nodes |
| TCP | 10254 | Ingress controller livenessProbe/readinessProbe |
The following tables break down the port requirements for inbound and outbound traffic:
<figcaption>Inbound Rules for Rancher Nodes</figcaption>
| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 22 | RKE CLI | SSH provisioning of node by RKE |
| TCP | 80 | Load Balancer/Reverse Proxy | HTTP traffic to Rancher UI/API |
| TCP | 443 | <ul><li>Load Balancer/Reverse Proxy</li><li>IPs of all cluster nodes and other API/UI clients</li></ul> | HTTPS traffic to Rancher UI/API |
| TCP | 6443 | Kubernetes API clients | HTTPS traffic to Kubernetes API |
<figcaption>Outbound Rules for Rancher Nodes</figcaption>
| Protocol | Port | Destination | Description |
|-----|-----|----------------|---|
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 22 | Any node created using a node driver | SSH provisioning of node by node driver |
| TCP | 2376 | Any node created using a node driver | Docker daemon TLS port used by node driver |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |
| TCP | Provider dependent | Port of the Kubernetes API endpoint in hosted cluster | Kubernetes API |
</details>
### Ports for Rancher Server Nodes on RKE2
<details>

View File

@@ -8,7 +8,7 @@ title: Air-Gapped Helm CLI Install
This section is about using the Helm CLI to install the Rancher server in an air-gapped environment. An air-gapped environment is one where the Rancher server is installed offline, behind a firewall, or behind a proxy.
The installation steps differ depending on whether Rancher is installed on an RKE Kubernetes cluster, a K3s Kubernetes cluster, or a single Docker container.
The installation steps differ depending on whether Rancher is installed on a K3s Kubernetes cluster or a single Docker container.
For more information on each installation option, refer to [this page.](../../installation-and-upgrade.md)

View File

@@ -16,7 +16,7 @@ This section describes how to install a Kubernetes cluster according to our [bes
Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes providers.
The steps to set up an air-gapped Kubernetes cluster on RKE, RKE2, or K3s are shown below.
The steps to set up an air-gapped Kubernetes cluster on RKE2 or K3s are shown below.
<Tabs>
<TabItem value="K3s">
@@ -291,102 +291,9 @@ Upgrading an air-gap environment can be accomplished in the following manner:
2. Run the script again just as you had done in the past with the same environment variables.
3. Restart the RKE2 service.
</TabItem>
<TabItem value="RKE">
We will create a Kubernetes cluster using Rancher Kubernetes Engine (RKE). Before you can start your Kubernetes cluster, you'll need to install RKE and create an RKE config file.
## 1. Install RKE
Install RKE by following the instructions in the [RKE documentation.](https://rancher.com/docs/rke/latest/en/installation/)
:::note
Certified version(s) of RKE based on the Rancher version can be found in the [Rancher Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/).
:::
## 2. Create an RKE Config File
From a system that can access ports 22/TCP and 6443/TCP on the Linux host node(s) that you set up in a previous step, use the sample below to create a new file named `rancher-cluster.yml`.
This file is an RKE configuration file, which is a configuration for the cluster you're deploying Rancher to.
Replace values in the code sample below with the help of the _RKE Options_ table. Use the IP addresses or DNS names of the three nodes you created.
:::tip
For more details on the options available, see the RKE [Config Options](https://rancher.com/docs/rke/latest/en/config-options/).
:::
<figcaption>RKE Options</figcaption>
| Option | Required | Description |
| ------------------ | -------------------- | --------------------------------------------------------------------------------------- |
| `address` | ✓ | The DNS or IP address for the node within the air gapped network. |
| `user` | ✓ | A user that can run Docker commands. |
| `role` | ✓ | List of Kubernetes roles assigned to the node. |
| `internal_address` | optional<sup>1</sup> | The DNS or IP address used for internal cluster traffic. |
| `ssh_key_path` | | Path to the SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`). |
> <sup>1</sup> Some services like AWS EC2 require setting the `internal_address` if you want to use self-referencing security groups or firewalls.
```yaml
nodes:
- address: 10.10.3.187 # node air gap network IP
internal_address: 172.31.7.22 # node intra-cluster IP
user: rancher
role: ['controlplane', 'etcd', 'worker']
ssh_key_path: /home/user/.ssh/id_rsa
- address: 10.10.3.254 # node air gap network IP
internal_address: 172.31.13.132 # node intra-cluster IP
user: rancher
role: ['controlplane', 'etcd', 'worker']
ssh_key_path: /home/user/.ssh/id_rsa
- address: 10.10.3.89 # node air gap network IP
internal_address: 172.31.3.216 # node intra-cluster IP
user: rancher
role: ['controlplane', 'etcd', 'worker']
ssh_key_path: /home/user/.ssh/id_rsa
private_registries:
- url: <REGISTRY.YOURDOMAIN.COM:PORT> # private registry url
user: rancher
password: '*********'
is_default: true
```
## 3. Run RKE
After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:
```
rke up --config ./rancher-cluster.yml
```
## 4. Save Your Files
:::note Important:
The files mentioned below are needed to maintain, troubleshoot, and upgrade your cluster.
:::
Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state), this file contains the current state of the cluster including the RKE configuration and the certificates.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
</TabItem>
</Tabs>
:::note
The "rancher-cluster" part of the latter two file names depends on how you name the RKE cluster configuration file.
:::
## Issues or Errors?
See the [Troubleshooting](../../install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) page.

View File

@@ -8,7 +8,7 @@ title: '2. Install Kubernetes'
Once the infrastructure is ready, you can continue with setting up a Kubernetes cluster to install Rancher in.
The steps to set up RKE, RKE2, or K3s are shown below.
The steps to set up RKE2 or K3s are shown below.
For convenience, export the IP address and port of your proxy into an environment variable and set up the `HTTP_PROXY` variables for your current shell on every node:
@@ -104,152 +104,6 @@ kubectl cluster-info
kubectl get pods --all-namespaces
```
</TabItem>
<TabItem value="RKE">
First, you have to install Docker and set up the HTTP proxy on all three Linux nodes. To do this, perform the following steps on each node.
Next, configure apt to use this proxy when installing packages. If you are not using Ubuntu, you have to adapt this step accordingly:
```
cat <<'EOF' | sudo tee /etc/apt/apt.conf.d/proxy.conf > /dev/null
Acquire::http::Proxy "http://${proxy_host}/";
Acquire::https::Proxy "http://${proxy_host}/";
EOF
```
Now you can install Docker:
```
curl -sL https://releases.rancher.com/install-docker/19.03.sh | sh
```
Then ensure that your current user is able to access the Docker daemon without sudo:
```
sudo usermod -aG docker YOUR_USERNAME
```
And configure the Docker daemon to use the proxy to pull images:
```
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf > /dev/null
[Service]
Environment="HTTP_PROXY=http://${proxy_host}"
Environment="HTTPS_PROXY=http://${proxy_host}"
Environment="NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16"
EOF
```
To apply the configuration, restart the Docker daemon:
```
sudo systemctl daemon-reload
sudo systemctl restart docker
```
#### Air-gapped proxy
You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections.
In addition to setting the default rules for a proxy server, you must also add the rules shown below to provision node driver clusters from a proxied Rancher environment.
Add these rules to your proxy server's configuration file; the exact path depends on your setup (the `acl` rules below use Squid syntax):
```
acl SSL_ports port 22
acl SSL_ports port 2376
acl Safe_ports port 22 # ssh
acl Safe_ports port 2376 # docker port
```
### Creating the RKE Cluster
You need several command line tools on the host where you have SSH access to the Linux nodes to create and interact with the cluster:
* [RKE CLI binary](https://rancher.com/docs/rke/latest/en/installation/#download-the-rke-binary)
```
sudo curl -fsSL -o /usr/local/bin/rke https://github.com/rancher/rke/releases/download/v1.1.4/rke_linux-amd64
sudo chmod +x /usr/local/bin/rke
```
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
```
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```
Next, create a YAML file that describes the RKE cluster. Ensure that the IP addresses of the nodes and the SSH username are correct. For more information on the cluster YAML, have a look at the [RKE documentation](https://rancher.com/docs/rke/latest/en/example-yamls/).
```yml
nodes:
- address: 10.0.1.200
user: ubuntu
role: [controlplane,worker,etcd]
- address: 10.0.1.201
user: ubuntu
role: [controlplane,worker,etcd]
- address: 10.0.1.202
user: ubuntu
role: [controlplane,worker,etcd]
services:
etcd:
backup_config:
interval_hours: 12
retention: 6
```
After that, you can create the Kubernetes cluster by running:
```
rke up --config rancher-cluster.yaml
```
RKE creates a state file called `rancher-cluster.rkestate`. This file is needed if you want to perform updates, modify your cluster configuration, or restore it from a backup. RKE also creates a `kube_config_cluster.yaml` file that you can use to connect to the remote Kubernetes cluster locally with tools like kubectl or Helm. Make sure to save all of these files in a secure location, for example by putting them into a version control system.
To have a look at your cluster run:
```
export KUBECONFIG=kube_config_cluster.yaml
kubectl cluster-info
kubectl get pods --all-namespaces
```
You can also verify that your external load balancer works and the DNS entry is set up correctly. If you send a request to either, you should receive an HTTP 404 response from the ingress controller:
```
$ curl 10.0.1.100
default backend - 404
$ curl rancher.example.com
default backend - 404
```
### Save Your Files
:::note Important:
The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.
:::
Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state), this file contains the current state of the cluster including the RKE configuration and the certificates.
:::note
The "rancher-cluster" part of the latter two file names depends on how you name the RKE cluster configuration file.
:::
</TabItem>
</Tabs>

View File

@@ -6,7 +6,7 @@ title: 3. Install Rancher
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher"/>
</head>
Now that you have a running RKE cluster, you can install Rancher in it. For security reasons all traffic to Rancher must be encrypted with TLS. For this tutorial you are going to automatically issue a self-signed certificate through [cert-manager](https://cert-manager.io/). In a real-world use-case you will likely use Let's Encrypt or provide your own certificate.
Now that you have a running RKE2/K3s cluster, you can install Rancher in it. For security reasons all traffic to Rancher must be encrypted with TLS. For this tutorial you are going to automatically issue a self-signed certificate through [cert-manager](https://cert-manager.io/). In a real-world use-case you will likely use Let's Encrypt or provide your own certificate.
### Install the Helm CLI

View File

@@ -8,7 +8,7 @@ title: '1. Set up Infrastructure'
In this section, you will provision the underlying infrastructure for your Rancher management server with internet access through an HTTP proxy.
To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:
To install the Rancher management server on a high-availability RKE2/K3s cluster, we recommend setting up the following infrastructure:
- **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon's EC2, Google Compute Engine, or vSphere.
- **A load balancer** to direct front-end traffic to the three nodes.
@@ -18,7 +18,7 @@ These nodes must be in the same region/data center. You may place these servers
### Why three nodes?
In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes.
In an RKE2/K3s cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes.
The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes.
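The quorum arithmetic the paragraph describes can be sketched as follows: for `n` members, quorum is `floor(n/2) + 1`, and the cluster tolerates `n - quorum` member failures.

```bash
# Quorum for an etcd cluster of n members is floor(n/2) + 1;
# the cluster tolerates n - quorum member failures.
for n in 1 3 5; do
  quorum=$(( n / 2 + 1 ))
  echo "members=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
```

For three nodes this yields a quorum of 2, so one node can fail without losing the cluster.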
@@ -34,7 +34,7 @@ For an example of one way to set up Linux nodes, refer to this [tutorial](../../
You will also need to set up a load balancer to direct traffic to the Rancher replicas on all three nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
When Kubernetes gets set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
When Kubernetes gets set up in a later step, the RKE2/K3s tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster.

View File

@@ -1,198 +0,0 @@
---
title: Setting up a High-availability RKE Kubernetes Cluster
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke1-for-rancher"/>
</head>
<EOLRKE1Warning />
This section describes how to install a Kubernetes cluster. This cluster should be dedicated to running only the Rancher server.
:::note
Rancher can run on any Kubernetes cluster, including hosted Kubernetes solutions such as Amazon EKS. The instructions below represent only one possible way to install Kubernetes.
:::
For systems without direct internet access, refer to [Air Gap: Kubernetes install.](../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md)
:::tip Single-node Installation Tip:
In a single-node Kubernetes cluster, the Rancher server does not have high availability, which is important for running Rancher in production. However, installing Rancher on a single-node cluster can be useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path.
To set up a single-node RKE cluster, configure only one node in the `cluster.yml` . The single node should have all three roles: `etcd`, `controlplane`, and `worker`.
In this single-node setup, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster.
:::
## Installing Kubernetes
### Required CLI Tools
Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool.
Also install [RKE,](https://rancher.com/docs/rke/latest/en/installation/) the Rancher Kubernetes Engine, a Kubernetes distribution and command-line tool.
### 1. Create the cluster configuration file
In this section, you will create a Kubernetes cluster configuration file called `rancher-cluster.yml`. In a later step, when you set up the cluster with an RKE command, it will use this file to install Kubernetes on your nodes.
Using the sample below as a guide, create the `rancher-cluster.yml` file. Replace the IP addresses in the `nodes` list with the IP address or DNS names of the 3 nodes you created.
If your node has public and internal addresses, it is recommended to set the `internal_address:` so Kubernetes will use it for intra-cluster communication. Some services like AWS EC2 require setting the `internal_address:` if you want to use self-referencing security groups or firewalls.
RKE will need to connect to each node over SSH, and it will look for a private key in the default location of `~/.ssh/id_rsa`. If your private key for a certain node is in a different location than the default, you will also need to configure the `ssh_key_path` option for that node.
When choosing a Kubernetes version, be sure to first consult the [support matrix](https://rancher.com/support-matrix/) to find the highest version of Kubernetes that has been validated for your Rancher version.
```yaml
nodes:
- address: 165.227.114.63
internal_address: 172.16.22.12
user: ubuntu
role: [controlplane, worker, etcd]
- address: 165.227.116.167
internal_address: 172.16.32.37
user: ubuntu
role: [controlplane, worker, etcd]
- address: 165.227.127.226
internal_address: 172.16.42.73
user: ubuntu
role: [controlplane, worker, etcd]
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
# Required for external TLS termination with
# ingress-nginx v0.22+
ingress:
provider: nginx
options:
use-forwarded-headers: "true"
kubernetes_version: v1.25.6-rancher4-1
```
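With the snapshot settings above (`creation: 6h`, `retention: 24h`), RKE takes a recurring etcd snapshot every six hours and keeps each one for 24 hours, so roughly 24 / 6 = 4 recurring snapshots exist on disk at any time:

```bash
# Approximate number of recurring etcd snapshots on disk:
# retention window / snapshot interval (both in hours).
retention_h=24; interval_h=6
echo $(( retention_h / interval_h ))   # -> 4
```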
<figcaption>Common RKE Nodes Options</figcaption>
| Option | Required | Description |
| ------------------ | -------- | -------------------------------------------------------------------------------------- |
| `address` | yes | The public DNS or IP address |
| `user` | yes | A user that can run docker commands |
| `role` | yes | List of Kubernetes roles assigned to the node |
| `internal_address` | no | The private DNS or IP address for internal cluster traffic |
| `ssh_key_path` | no | Path to SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`) |
:::note Advanced Configurations:
RKE has many configuration options for customizing the install to suit your specific environment.
Please see the [RKE Documentation](https://rancher.com/docs/rke/latest/en/config-options/) for the full list of options and capabilities.
For tuning your etcd cluster for larger Rancher installations, see the [etcd settings guide](../../advanced-user-guides/tune-etcd-for-large-installs.md).
For more information regarding Dockershim support, refer to [this page](../../../getting-started/installation-and-upgrade/installation-requirements/dockershim.md).
:::
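For instance, if one node's private key lives somewhere other than `~/.ssh/id_rsa`, a per-node `ssh_key_path` override can be sketched like this (the key path below is hypothetical):

```yaml
nodes:
  - address: 165.227.114.63
    internal_address: 172.16.22.12
    user: ubuntu
    role: [controlplane, worker, etcd]
    # Only this node needs an explicit key path; the others fall back to ~/.ssh/id_rsa
    ssh_key_path: /home/ubuntu/.ssh/rancher_node_key
```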
### 2. Run RKE
```
rke up --config ./rancher-cluster.yml
```
When finished, it should end with the line: `Finished building Kubernetes cluster successfully`.
### 3. Test Your Cluster
This section describes how to set up your workspace so that you can interact with this cluster using the `kubectl` command-line tool.
Assuming you have installed `kubectl`, you need to place the `kubeconfig` file in a location where `kubectl` can reach it. The `kubeconfig` file contains the credentials necessary to access your cluster with `kubectl`.
When you ran `rke up`, RKE should have created a `kubeconfig` file named `kube_config_cluster.yml`. This file has the credentials for `kubectl` and `helm`.
:::note
If you have used a different file name from `rancher-cluster.yml`, then the kube config file will be named `kube_config_<FILE_NAME>.yml`.
:::
Move this file to `$HOME/.kube/config`, or if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environmental variable to the path of `kube_config_cluster.yml`:
```
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
```
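When juggling several kubeconfig files, a small guard (an illustrative sketch, not part of the RKE tooling) can confirm the file exists before exporting the variable:

```shell
# Verify the RKE-generated kubeconfig exists before pointing kubectl at it.
# The file name assumes the default rancher-cluster.yml naming.
KUBECONFIG="$(pwd)/kube_config_cluster.yml"
if [ -f "$KUBECONFIG" ]; then
  export KUBECONFIG
  echo "Using kubeconfig: $KUBECONFIG"
else
  echo "kubeconfig not found: $KUBECONFIG" >&2
fi
```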
Test your connectivity with `kubectl` and see if all your nodes are in `Ready` state:
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
165.227.114.63 Ready controlplane,etcd,worker 11m v1.13.5
165.227.116.167 Ready controlplane,etcd,worker 11m v1.13.5
165.227.127.226 Ready controlplane,etcd,worker 11m v1.13.5
```
### 4. Check the Health of Your Cluster Pods
Check that all the required pods and containers are healthy and ready before continuing.
- Pods are in `Running` or `Completed` state.
- The `READY` column shows all the containers are running (e.g. `3/3`) for pods with `STATUS` `Running`.
- Pods with `STATUS` `Completed` are run-once Jobs. For these pods `READY` should be `0/1`.
```
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-tnsn4 1/1 Running 0 30s
ingress-nginx nginx-ingress-controller-tw2ht 1/1 Running 0 30s
ingress-nginx nginx-ingress-controller-v874b 1/1 Running 0 30s
kube-system canal-jp4hz 3/3 Running 0 30s
kube-system canal-z2hg8 3/3 Running 0 30s
kube-system canal-z6kpw 3/3 Running 0 30s
kube-system kube-dns-7588d5b5f5-sf4vh 3/3 Running 0 30s
kube-system kube-dns-autoscaler-5db9bbb766-jz2k6 1/1 Running 0 30s
kube-system metrics-server-97bc649d5-4rl2q 1/1 Running 0 30s
kube-system rke-ingress-controller-deploy-job-bhzgm 0/1 Completed 0 30s
kube-system rke-kubedns-addon-deploy-job-gl7t4 0/1 Completed 0 30s
kube-system rke-metrics-addon-deploy-job-7ljkc 0/1 Completed 0 30s
kube-system rke-network-plugin-deploy-job-6pbgj 0/1 Completed 0 30s
```
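These checks can also be scripted. The filter below (an illustrative sketch, not from the Rancher docs) keeps only pods whose `STATUS` is neither `Running` nor `Completed`. It is shown against an inlined sample; in practice you would pipe `kubectl get pods --all-namespaces --no-headers` into the same `awk` expression:

```shell
# Sample output standing in for `kubectl get pods --all-namespaces --no-headers`
sample='kube-system canal-jp4hz 3/3 Running 0 30s
kube-system rke-metrics-addon-deploy-job-7ljkc 0/1 Completed 0 30s
kube-system stuck-pod-x7k2f 0/1 CrashLoopBackOff 4 30s'
# Column 4 is STATUS; print any pod that is not Running or Completed
printf '%s\n' "$sample" | awk '$4 != "Running" && $4 != "Completed"'
# prints: kube-system stuck-pod-x7k2f 0/1 CrashLoopBackOff 4 30s
```

An empty result means every pod passed the health criteria listed above.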
This confirms that you have successfully installed a Kubernetes cluster that the Rancher server will run on.
### 5. Save Your Files
:::note Important:
The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.
:::
Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state). This file contains the current state of the cluster, including the RKE configuration and certificates.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
:::note
The "rancher-cluster" parts of the latter two file names depend on how you name the RKE cluster configuration file.
:::
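One way to keep these files safe is a dated copy, sketched below (the backup directory is arbitrary, and the file names assume the default `rancher-cluster.yml` naming):

```shell
# Copy the three cluster artifacts into a dated backup directory
backup_dir="$HOME/rke-cluster-backup/$(date +%Y-%m-%d)"
mkdir -p "$backup_dir"
for f in rancher-cluster.yml kube_config_cluster.yml rancher-cluster.rkestate; do
  # .rkestate may be absent on RKE releases before v0.2.0, so skip missing files
  [ -f "$f" ] && cp "$f" "$backup_dir/"
done
echo "Backed up available files to $backup_dir"
```

A version control system or secrets store works equally well; since the kubeconfig and state file grant full cluster access, treat the backup location as sensitive.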
### Issues or errors?
See the [Troubleshooting](../../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) page.
### [Next: Install Rancher](../../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md)

View File

@@ -38,7 +38,7 @@ Choose the default security group or configure a security group.
Please refer to [Amazon EC2 security group when using Node Driver](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-security-group) to see what rules are created in the `rancher-nodes` Security Group.
If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#ports-for-rancher-server-nodes-on-rke). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups).
If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#ports-for-rancher-server-nodes-on-rke2). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups).
### Instance Options

View File

@@ -1,47 +0,0 @@
---
title: Dockershim
---
The Dockershim is the CRI compliant layer between the Kubelet and the Docker daemon. As part of the Kubernetes 1.20 release, the [deprecation of the in-tree Dockershim was announced](https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/). For more information on the deprecation and its timelines, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed).
RKE clusters now support the external Dockershim to continue leveraging Docker as the CRI runtime. We now implement the upstream open source community external Dockershim announced by [Mirantis and Docker](https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/) to ensure RKE clusters can continue to leverage Docker.
RKE2 and K3s clusters use an embedded containerd as the container runtime and are not affected.
To enable the external Dockershim in versions of RKE before 1.24, configure the following option:
```
enable_cri_dockerd: true
```
Starting with version 1.24, the above defaults to true.
For users looking to use another container runtime, Rancher has the edge-focused K3s and datacenter-focused RKE2 Kubernetes distributions that use containerd as the default runtime. Imported RKE2 and K3s Kubernetes clusters can then be upgraded and managed through Rancher going forward.
## FAQ
<br/>
Q: Do I have to upgrade Rancher to get Rancher's support of the upstream external Dockershim replacement?
A: The upstream support of the Dockershim replacement `cri_dockerd` begins for RKE in Kubernetes 1.21. You will need to be on a version of Rancher that supports RKE 1.21. See our support matrix for details.
<br/>
Q: My current RKE clusters use Kubernetes 1.23. What happens if upstream removes the Dockershim in 1.24?
A: The Dockershim that ships with Kubernetes in RKE will continue to work through 1.23. For more information on the timelines, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed). Starting with 1.24, RKE will enable `cri_dockerd` by default and keep it enabled in later releases.
<br/>
Q: What are my options if I no longer want to depend on the Dockershim or cri_dockerd?
A: You can use a runtime for Kubernetes that does not require Dockershim support, such as containerd. RKE2 and K3s are two such options.
<br/>
Q: How can I migrate if I currently use RKE1 but want to switch to RKE2?
A: You can build a new RKE2 cluster that uses containerd and migrate your workloads to it. Rancher is also exploring the possibility of an in-place upgrade path.
<br/>

View File

@@ -1,23 +0,0 @@
---
title: Install Docker
---
Before installing the Rancher server with Helm on RKE cluster nodes, or with Docker, you must install Docker on the nodes. RKE2 and K3s clusters do not require Docker.
There are several ways to install Docker. One option is to follow the [official Docker documentation](https://docs.docker.com/install/) about how to install Docker on Linux. The steps may vary based on the Linux distribution.
Another option is to use Rancher's Docker installation scripts, which are available for recent Docker versions. Rancher provides an installation script for each version of upstream Docker that Kubernetes supports.
For example, this command installs Docker on major Linux distributions such as SUSE Linux Enterprise or Ubuntu:
```bash
curl https://releases.rancher.com/install-docker/<version-number>.sh | sh
```
Refer to the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix) for the validated Docker versions matching your operating system and Rancher version. Although the support matrix lists validated Docker versions down to the patch release, only the major and minor version of a release is relevant to the installation scripts.
Note that the following sysctl setting must be applied:
```bash
net.bridge.bridge-nf-call-iptables=1
```
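To make this setting survive reboots, one common approach (a sketch; the drop-in file name is arbitrary) is a `sysctl.d` fragment:

```
# /etc/sysctl.d/90-rke-bridge.conf
# Apply immediately without rebooting: sudo sysctl --system
net.bridge.bridge-nf-call-iptables=1
```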

View File

@@ -1,6 +1,6 @@
---
title: Installation Requirements
description: Learn the node requirements for each node running Rancher Server when Rancher is configured to run in Docker or Kubernetes
description: Learn the node requirements for each node running Rancher server when you're configuring Rancher to run in a Kubernetes setup
---
This page describes the software, hardware, and networking requirements for the nodes where the Rancher server will be installed. The Rancher server can be installed on a single node or on a high-availability Kubernetes cluster.
@@ -27,7 +27,7 @@ Rancher must be installed on a supported Kubernetes version. Consult the [Rancher support
All supported operating systems use the 64-bit x86 architecture. Rancher is compatible with all current major Linux distributions.
The [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) lists which OS and Docker versions were tested for each Rancher version.
The [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) lists which OS versions were tested for each Rancher version.
Nodes running an RKE cluster require Docker to be installed. RKE2 and K3s clusters do not require it.
@@ -41,7 +41,7 @@ Rancher must be installed on a supported Kubernetes version. Consult the [Rancher support
### RKE2 Requirements
For the container runtime, RKE2 ships with its own containerd. Docker is not required for RKE2 installations.
For the container runtime, RKE2 ships with its own containerd.
See the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) for the OS versions that RKE2 was tested with.
@@ -150,41 +150,13 @@ Rancher's codebase keeps evolving, use cases keep changing, and the experience Rancher has accumulated
(*): Larger deployments require that you [follow the best practices](../../../reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md) for adequate performance.
### RKE
The following table lists the minimum CPU and memory requirements for each node in the [upstream cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md).
Note that a high-availability installation in production requires at least 3 nodes.
| Deployment Size | Maximum Clusters | Maximum Nodes | vCPUs | RAM |
|-----------------------------|----------------------------|-------------------------|-------|-------|
| Small | 150 | 1500 | 4 | 16 GB |
| Medium | 300 | 3000 | 8 | 32 GB |
| Large (*) | 500 | 5000 | 16 | 64 GB |
(*) Larger deployments require that you [follow the best practices](../../../reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md) for adequate performance.
For more details about general RKE requirements, see the [RKE documentation](https://rke.docs.rancher.com/os).
### Docker
The following table lists the minimum CPU and memory requirements for each node in the [upstream cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md).
Note that installing Rancher on Docker is intended only for development or testing purposes. It is not recommended for production use.
| Deployment Size | Maximum Clusters | Maximum Nodes | vCPUs | RAM |
|-----------------------------|----------------------------|-------------------------|-------|------|
| Small | 5 | 50 | 1 | 4 GB |
| Medium | 15 | 200 | 2 | 8 GB |
## Ingress
Each node in the Kubernetes cluster that Rancher is installed on should run an Ingress.
The Ingress should be deployed as a DaemonSet to ensure your load balancer can successfully route traffic to all nodes.
For RKE, RKE2, and K3s installations, you don't have to install the Ingress manually because it is installed by default.
For RKE2 and K3s installations, you don't have to install the Ingress manually because it is installed by default.
For hosted Kubernetes clusters (EKS, GKE, AKS), you will need to set up the Ingress.
@@ -213,7 +185,3 @@ The performance of etcd in the cluster determines Rancher's performance. Therefore, to get the best
### Port Requirements
To operate properly, Rancher requires a number of ports to be open on the Rancher nodes and on the downstream Kubernetes cluster nodes. For a full list of the necessary ports for Rancher and downstream clusters across different cluster types, see [Port Requirements](port-requirements.md).
## Dockershim Support
For more information about Dockershim support, refer to [this page](dockershim.md).

View File

@@ -15,7 +15,7 @@ import PortsImportedHosted from '@site/src/components/PortsImportedHosted'
Different Rancher server architectures have different port requirements.
Rancher can be installed on any Kubernetes cluster. If your Rancher install is on a K3s, RKE, or RKE2 Kubernetes cluster, refer to the tabs below. For other Kubernetes distributions, refer to that distribution's documentation for the port requirements for cluster nodes.
Rancher can be installed on any Kubernetes cluster. If your Rancher install is on a K3s or RKE2 Kubernetes cluster, refer to the tabs below. For other Kubernetes distributions, refer to that distribution's documentation for the port requirements for cluster nodes.
:::note Notes:
@@ -66,54 +66,6 @@ The K3s server needs port 6443 to be accessible by the nodes.
</details>
### Ports for Rancher Server Nodes on RKE
<details>
<summary>Click to expand</summary>
Typically, Rancher is installed on three RKE nodes that all have the etcd, controlplane, and worker roles.
The following table lists the port requirements for traffic between the Rancher nodes:
<figcaption>Rules for traffic between Rancher nodes</figcaption>
| Protocol | Port | Description |
|-----|-----|----------------|
| TCP | 443 | Rancher Agents |
| TCP | 2379 | etcd client requests |
| TCP | 2380 | etcd peer communication |
| TCP | 6443 | Kubernetes apiserver |
| TCP | 8443 | NGINX Ingress validating webhook |
| UDP | 8472 | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | Metrics server communication with all nodes |
| TCP | 10254 | Ingress controller livenessProbe/readinessProbe |
The following tables list the port requirements for inbound and outbound traffic:
<figcaption>Inbound rules for Rancher nodes</figcaption>
| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 22 | RKE CLI | SSH provisioning of nodes by RKE |
| TCP | 80 | Load balancer/proxy | HTTP traffic to Rancher UI/API |
| TCP | 443 | <ul><li>Load balancer/proxy</li><li>IPs of all cluster nodes and other API/UI clients</li></ul> | HTTPS traffic to Rancher UI/API |
| TCP | 6443 | Kubernetes API clients | HTTPS traffic to Kubernetes API |
<figcaption>Outbound rules for Rancher nodes</figcaption>
| Protocol | Port | Destination | Description |
|-----|-----|----------------|---|
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 22 | Any node created using a node driver | SSH provisioning of node by node driver |
| TCP | 2376 | Any node created using a node driver | Docker daemon TLS port used by node driver |
| TCP | 6443 | Hosted/imported Kubernetes API | Kubernetes apiserver |
| TCP | Provider dependent | Port of the Kubernetes API endpoint in hosted cluster | Kubernetes API |
</details>
### Ports for Rancher Server Nodes on RKE2
<details>

View File

@@ -4,7 +4,7 @@ title: Air-Gapped Helm CLI Install
This section is about using the Helm CLI to install the Rancher server in an air-gapped environment. An air-gapped environment could be where the Rancher server is installed offline, behind a firewall, or behind a proxy.
The installation steps differ depending on whether Rancher is installed on an RKE Kubernetes cluster, a K3s Kubernetes cluster, or a single Docker container.
The installation steps differ depending on whether Rancher is installed on a K3s Kubernetes cluster or a single Docker container.
For more information on each installation option, refer to [this page](../../installation-and-upgrade.md).

View File

@@ -12,7 +12,7 @@ title: '3. Install Kubernetes (Skip for Docker Installs)'
Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes.
The steps to install a Kubernetes cluster in an air-gapped environment on RKE, RKE2, or K3s are shown below:
The steps to install a Kubernetes cluster in an air-gapped environment on RKE2 or K3s are shown below:
<Tabs>
<TabItem value="K3s">
@@ -283,102 +283,9 @@ kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces
2. Run the script again with the same environment variables.
3. Restart the RKE2 service.
</TabItem>
<TabItem value="RKE">
We will use Rancher Kubernetes Engine (RKE) to create a Kubernetes cluster. Before launching the Kubernetes cluster, you need to install RKE and create an RKE configuration file.
### 1. Install RKE
Install RKE by following the instructions in the [RKE documentation](https://rancher.com/docs/rke/latest/en/installation/).
:::note
You can find the certified RKE versions for your Rancher version in the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/).
:::
### 2. Create an RKE Configuration File
From a system that can access ports 22/TCP and 6443/TCP on your Linux host nodes, use the sample below to create a new file named `rancher-cluster.yml`.
This file is an RKE configuration file. It is a configuration for the cluster you're deploying Rancher to.
Replace the values in the code sample below using the _RKE Options_ table. Use the IP addresses or DNS names of the three nodes you created.
:::tip
For details on the available options, refer to the RKE [configuration options](https://rancher.com/docs/rke/latest/en/config-options/).
:::
<figcaption>RKE Options</figcaption>
| Option | Required | Description |
| ------------------ | -------------------- | --------------------------------------------------------------------------------------- |
| `address` | ✓ | The DNS or IP address of the air-gapped node |
| `user` | ✓ | A user that can run Docker commands |
| `role` | ✓ | List of Kubernetes roles assigned to the node |
| `internal_address` | Optional<sup>1</sup> | The DNS or IP address used for internal cluster traffic |
| `ssh_key_path` | | Path to SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`) |
> <sup>1</sup> Some services, such as AWS EC2, require the `internal_address` to be set if you want to use self-referencing security groups or firewalls.
```yaml
nodes:
  - address: 10.10.3.187 # air-gapped node IP
    internal_address: 172.31.7.22 # node internal IP
    user: rancher
    role: ['controlplane', 'etcd', 'worker']
    ssh_key_path: /home/user/.ssh/id_rsa
  - address: 10.10.3.254 # air-gapped node IP
    internal_address: 172.31.13.132 # node internal IP
    user: rancher
    role: ['controlplane', 'etcd', 'worker']
    ssh_key_path: /home/user/.ssh/id_rsa
  - address: 10.10.3.89 # air-gapped node IP
    internal_address: 172.31.3.216 # node internal IP
    user: rancher
    role: ['controlplane', 'etcd', 'worker']
    ssh_key_path: /home/user/.ssh/id_rsa
private_registries:
  - url: <REGISTRY.YOURDOMAIN.COM:PORT> # private registry URL
    user: rancher
    password: '*********'
    is_default: true
```
### 3. Run RKE
After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:
```
rke up --config ./rancher-cluster.yml
```
### 4. Save Your Files
:::note Important:
The files below are needed to maintain, troubleshoot, and upgrade your cluster.
:::
Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state). This file contains the current state of the cluster, including the RKE configuration and certificates.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
</TabItem>
</Tabs>
:::note
The "rancher-cluster" parts of the latter two file names depend on how you name the RKE cluster configuration file.
:::
### Troubleshooting
See the [Troubleshooting](../../install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) page.

View File

@@ -4,7 +4,7 @@ title: '2. Install Kubernetes'
With the infrastructure set up, you can set up a Kubernetes cluster to install Rancher in.
The steps to set up RKE, RKE2, or K3s are shown below.
The steps to set up RKE2 or K3s are shown below.
For convenience, export the IP address and port of your proxy into an environment variable, and set the HTTP_PROXY variables for your current shell on each node:
@@ -92,152 +92,6 @@ kubectl cluster-info
kubectl get pods --all-namespaces
```
</TabItem>
<TabItem value="RKE">
First, you need to install Docker and set up the HTTP proxy on all three Linux nodes. Therefore, perform the following steps on all three nodes.
Next, configure apt to use this proxy when installing packages. If you are not using Ubuntu, adjust the steps accordingly:
```
cat <<'EOF' | sudo tee /etc/apt/apt.conf.d/proxy.conf > /dev/null
Acquire::http::Proxy "http://${proxy_host}/";
Acquire::https::Proxy "http://${proxy_host}/";
EOF
```
Install Docker:
```
curl -sL https://releases.rancher.com/install-docker/19.03.sh | sh
```
Then ensure that your current user is able to access the Docker daemon without sudo:
```
sudo usermod -aG docker YOUR_USERNAME
```
Configure the Docker daemon to use the proxy to pull images:
```
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf > /dev/null
[Service]
Environment="HTTP_PROXY=http://${proxy_host}"
Environment="HTTPS_PROXY=http://${proxy_host}"
Environment="NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16"
EOF
```
To apply the configuration, restart the Docker daemon:
```
sudo systemctl daemon-reload
sudo systemctl restart docker
```
#### Air-Gapped Proxy
You can now provision node-driver clusters from the air-gapped cluster that is configured to use the proxy for outbound connections.
In addition to the default rules for your proxy server, you will need to add the rules shown below to provision node-driver clusters from a proxied Rancher environment.
Configure the file path according to your setup, e.g. `/etc/apt/apt.conf.d/proxy.conf`:
```
acl SSL_ports port 22
acl SSL_ports port 2376
acl Safe_ports port 22 # ssh
acl Safe_ports port 2376 # docker port
```
### Creating the RKE Cluster
To create and interact with the cluster, you need a couple of command line tools on the host that has SSH access to the Linux nodes:
* [RKE CLI binary](https://rancher.com/docs/rke/latest/en/installation/#download-the-rke-binary)
```
sudo curl -fsSL -o /usr/local/bin/rke https://github.com/rancher/rke/releases/download/v1.1.4/rke_linux-amd64
sudo chmod +x /usr/local/bin/rke
```
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
```
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```
Next, create a YAML file that describes the RKE cluster. Ensure that the IP addresses and SSH usernames of the nodes are correct. For more information on the cluster YAML, see the [RKE documentation](https://rancher.com/docs/rke/latest/en/example-yamls/).
```yml
nodes:
- address: 10.0.1.200
user: ubuntu
role: [controlplane,worker,etcd]
- address: 10.0.1.201
user: ubuntu
role: [controlplane,worker,etcd]
- address: 10.0.1.202
user: ubuntu
role: [controlplane,worker,etcd]
services:
etcd:
backup_config:
interval_hours: 12
retention: 6
```
Then you can create the Kubernetes cluster by running:
```
rke up --config rancher-cluster.yaml
```
RKE creates a state file called `rancher-cluster.rkestate`. This file is needed if you want to update or modify the cluster configuration, or to restore the cluster from a backup. RKE also creates a `kube_config_cluster.yaml` file that you can use to connect to the remote Kubernetes cluster locally with tools such as kubectl or Helm. Keep these files in a safe place, for example in a version control system.
To have a look at your cluster, run:
```
export KUBECONFIG=kube_config_cluster.yaml
kubectl cluster-info
kubectl get pods --all-namespaces
```
You can also verify that your external load balancer works and the DNS entry is set up correctly. If you send a request to either, you should receive an HTTP 404 response from the ingress controller:
```
$ curl 10.0.1.100
default backend - 404
$ curl rancher.example.com
default backend - 404
```
### Save Your Files
:::note Important:
The files below are needed to maintain, troubleshoot, and upgrade your cluster.
:::
Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state). This file contains the current state of the cluster, including the RKE configuration and certificates.
:::note
The "rancher-cluster" parts of the latter two file names depend on how you name the RKE cluster configuration file.
:::
</TabItem>
</Tabs>

View File

@@ -2,7 +2,7 @@
title: 3. Install Rancher
---
In the previous steps you set up a running RKE cluster, and you can now install Rancher in it. For security reasons, all traffic to Rancher must be encrypted with TLS. In this tutorial, you will automatically issue a self-signed certificate through [cert-manager](https://cert-manager.io/). In a real-world use case, you will likely use Let's Encrypt or provide your own certificate.
In the previous steps you set up a running RKE2/K3s cluster, and you can now install Rancher in it. For security reasons, all traffic to Rancher must be encrypted with TLS. In this tutorial, you will automatically issue a self-signed certificate through [cert-manager](https://cert-manager.io/). In a real-world use case, you will likely use Let's Encrypt or provide your own certificate.
## Install the Helm CLI

View File

@@ -4,7 +4,7 @@ title: '1. Set up Infrastructure'
In this section, you will provision the underlying infrastructure for your Rancher management server and make it reachable through an HTTP proxy to the internet.
To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:
To install the Rancher management server on a high-availability RKE2/K3s cluster, we recommend setting up the following infrastructure:
- **Three Linux nodes**: These can be virtual machines in the cloud provider of your choice, such as Amazon EC2, GCE, or vSphere.
- **One load balancer**: Used to direct front-end traffic to the three nodes.
@@ -14,7 +14,7 @@ title: '1. Set up Infrastructure'
## Why three nodes?
In an RKE cluster, Rancher server data is stored in etcd. This etcd database runs on all three nodes.
In an RKE2/K3s cluster, Rancher server data is stored in etcd. This etcd database runs on all three nodes.
The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they are a majority of the total number of etcd nodes.
@@ -30,7 +30,7 @@ title: '1. Set up Infrastructure'
You will also need to set up a load balancer to direct traffic to the Rancher replicas on both nodes. That continues to allow communication with the Rancher management server in case one node becomes unavailable.
When Kubernetes is set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
When Kubernetes is set up in a later step, the RKE2/K3s tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller will forward traffic destined for the Rancher hostname to the running Rancher server pods in the cluster.

View File

@@ -1,195 +0,0 @@
---
title: Set up a High-availability RKE Kubernetes Cluster
---
<EOLRKE1Warning />
This section describes how to install a Kubernetes cluster. This cluster should be dedicated to run only the Rancher server.
:::note
Rancher can run on any Kubernetes cluster, including hosted Kubernetes such as Amazon EKS. The instructions below show only one way to install Kubernetes.
:::
For systems without direct internet access, refer to [Air-Gapped: Kubernetes Install](../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md).
:::tip Single-node Installs:
In a single-node Kubernetes cluster, the Rancher server does not have high availability, which is important for running Rancher in production. However, installing Rancher on a single-node cluster can be useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path.
To set up a single-node RKE cluster, configure only one node in `cluster.yml`. The single node should have all three roles: `etcd`, `controlplane`, and `worker`.
In both single-node setups, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster.
:::
## Installing Kubernetes
### Required CLI Tools
Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl), the Kubernetes command-line tool.
Install [RKE](https://rancher.com/docs/rke/latest/en/installation/), the Rancher Kubernetes Engine, a Kubernetes distribution and command-line tool.
### 1. Create the Cluster Configuration File
In this section, you will create a Kubernetes cluster configuration file called `rancher-cluster.yml`. In a later step, when you set up the cluster with an RKE command, it will use this file to install Kubernetes on your nodes.
Using the sample below as a guide, create the `rancher-cluster.yml` file. Replace the IP addresses in the `nodes` list with the IP addresses or DNS names of the 3 nodes you created.
If your node has public and internal addresses, it is recommended to set the `internal_address:` so Kubernetes will use it for intra-cluster communication. Some services, such as AWS EC2, require setting the `internal_address:` if you want to use self-referencing security groups or firewalls.
RKE will need to connect to each node over SSH, and it will look for a private key in the default location of `~/.ssh/id_rsa`. If your private key for a certain node is in a different location than the default, you will also need to configure the `ssh_key_path` option for that node.
When choosing a Kubernetes version, be sure to first consult the [support matrix](https://rancher.com/support-matrix/) to find the highest version of Kubernetes that has been validated for your Rancher version.
```yaml
nodes:
- address: 165.227.114.63
internal_address: 172.16.22.12
user: ubuntu
role: [controlplane, worker, etcd]
- address: 165.227.116.167
internal_address: 172.16.32.37
user: ubuntu
role: [controlplane, worker, etcd]
- address: 165.227.127.226
internal_address: 172.16.42.73
user: ubuntu
role: [controlplane, worker, etcd]
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
# Required for external TLS termination with
# ingress-nginx v0.22+
ingress:
provider: nginx
options:
use-forwarded-headers: "true"
kubernetes_version: v1.25.6-rancher4-1
```
<figcaption>Common RKE Nodes Options</figcaption>
| Option | Required | Description |
| ------------------ | -------- | -------------------------------------------------------------------------------------- |
| `address` | yes | The public DNS or IP address |
| `user` | yes | A user that can run docker commands |
| `role` | yes | List of Kubernetes roles assigned to the node |
| `internal_address` | no | The private DNS or IP address for internal cluster traffic |
| `ssh_key_path` | no | Path to SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`) |
:::note Advanced Configurations:
RKE has many configuration options for customizing the install to suit your specific environment.
Please see the [RKE Documentation](https://rancher.com/docs/rke/latest/en/config-options/) for the full list of options and capabilities.
For tuning your etcd cluster for larger Rancher installations, see the [etcd settings guide](../../advanced-user-guides/tune-etcd-for-large-installs.md).
For more information regarding Dockershim support, refer to [this page](../../../getting-started/installation-and-upgrade/installation-requirements/dockershim.md).
:::
### 2. Run RKE
```
rke up --config ./rancher-cluster.yml
```
When finished, it should end with the line: `Finished building Kubernetes cluster successfully`.
### 3. Test Your Cluster
This section describes how to set up your workspace so that you can interact with this cluster using the `kubectl` command-line tool.
Assuming you have installed `kubectl`, you need to place the `kubeconfig` file in a location where `kubectl` can reach it. The `kubeconfig` file contains the credentials necessary to access your cluster with `kubectl`.
When you ran `rke up`, RKE should have created a `kubeconfig` file named `kube_config_cluster.yml`. This file has the credentials for `kubectl` and `helm`.
:::note
If you have used a different file name from `rancher-cluster.yml`, then the kube config file will be named `kube_config_<FILE_NAME>.yml`.
:::
Move this file to `$HOME/.kube/config`, or if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environment variable to the path of `kube_config_cluster.yml`:
```
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
```
`kubectl` 测试你的连接性,并查看你的所有节点是否都处于 `Ready` 状态:
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
165.227.114.63 Ready controlplane,etcd,worker 11m v1.13.5
165.227.116.167 Ready controlplane,etcd,worker 11m v1.13.5
165.227.127.226 Ready controlplane,etcd,worker 11m v1.13.5
```
### 4. Check the Health of Your Cluster Pods
Check that all the required pods and containers are healthy and ready before continuing.
- Pods are in `Running` or `Completed` state.
- The `READY` column shows all the containers are running (e.g. `3/3`) for pods with `STATUS` `Running`.
- Pods with `STATUS` `Completed` are run-once Jobs. For these pods `READY` should be `0/1`.
```
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-tnsn4 1/1 Running 0 30s
ingress-nginx nginx-ingress-controller-tw2ht 1/1 Running 0 30s
ingress-nginx nginx-ingress-controller-v874b 1/1 Running 0 30s
kube-system canal-jp4hz 3/3 Running 0 30s
kube-system canal-z2hg8 3/3 Running 0 30s
kube-system canal-z6kpw 3/3 Running 0 30s
kube-system kube-dns-7588d5b5f5-sf4vh 3/3 Running 0 30s
kube-system kube-dns-autoscaler-5db9bbb766-jz2k6 1/1 Running 0 30s
kube-system metrics-server-97bc649d5-4rl2q 1/1 Running 0 30s
kube-system rke-ingress-controller-deploy-job-bhzgm 0/1 Completed 0 30s
kube-system rke-kubedns-addon-deploy-job-gl7t4 0/1 Completed 0 30s
kube-system rke-metrics-addon-deploy-job-7ljkc 0/1 Completed 0 30s
kube-system rke-network-plugin-deploy-job-6pbgj 0/1 Completed 0 30s
```
This confirms that you have successfully installed a Kubernetes cluster that the Rancher server will run on.
### 5. Save Your Files
:::note Important:
The files below are needed to maintain, troubleshoot, and upgrade your cluster.
:::
Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state). This file contains credentials for full access to the cluster.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
:::note
The "rancher-cluster" parts of the latter two file names depend on how you name the RKE cluster configuration file.
:::
### Troubleshooting
See the [Troubleshooting](../../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) page.
### Next Steps
[Install Rancher](../../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md)

View File

@@ -34,7 +34,7 @@ title: EC2 Node Template Configuration
Please refer to [Amazon EC2 security group when using Node Driver](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-安全组) to see what rules are created in the `rancher-nodes` security group.
If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group allows the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rke-上-rancher-server-节点的端口). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups).
If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group allows the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rke2-上-rancher-server-节点的端口). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups).
## Instance Options

View File

@@ -1,47 +0,0 @@
---
title: Dockershim
---
Dockershim 是 Kubelet 和 Docker Daemon 之间的 CRI 兼容层。Kubernetes 1.20 版本宣布了[移除树内 Dockershim](https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/)。有关此移除的更多信息以及时间线,请参见 [Kubernetes Dockershim 弃用相关的常见问题](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed)。
RKE 集群现在支持外部 Dockershim来让用户继续使用 Docker 作为 CRI 运行时。现在,我们通过使用 [Mirantis 和 Docker ](https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/) 来确保 RKE 集群可以继续使用 Docker从而实现上游开源社区的外部 Dockershim。
RKE2 和 K3s 集群使用嵌入的 containerd 作为容器运行时,因此不受影响。
要在 1.24 之前的 RKE 版本中启用外部 Dockershim请配置以下选项
```
enable_cri_dockerd: true
```
从 1.24 版本开始,以上默认为 true。
如果你想使用其他容器运行时Rancher 也提供使用 Containerd 作为默认运行时的,以边缘为中心的 K3s和以数据中心为中心的 RKE2 Kubernetes 发行版。然后,你就可以通过 Rancher 对导入的 RKE2 和 K3s Kubernetes 集群进行升级和管理。
## 常见问题
<br/>
Q是否必须升级 Rancher 才能获得 Rancher 对上游外部 Dockershim 替换的支持?
A对于 RKEDockershim `cri_dockerd` 替换的上游支持从 Kubernetes 1.21 开始。你需要使用支持 RKE 1.21 的 Rancher 版本。详情请参见我们的支持矩阵。
<br/>
Q我目前的 RKE 使用 Kubernetes 1.23。如果上游最终在 1.24 中删除 Dockershim会发生什么
ARKE 中带有 Kubernetes 的 Dockershim 版本将继续工作到 1.23。有关时间线的更多信息,请参见 [Kubernetes Dockershim 弃用相关的常见问题](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed)。从 1.24 开始RKE 将默认启用 `cri_dockerd` 并在之后的版本中继续启用。
<br/>
Q: 如果我不想再依赖 Dockershim 或 cri_dockerd我还有什么选择
A: 你可以为 Kubernetes 使用不需要 Dockershim 支持的运行时,如 Containerd。RKE2 和 K3s 就是其中的两个选项。
<br/>
Q: 如果我目前使用 RKE1但想切换到 RKE2我可以怎样进行迁移
A: 你可以构建一个新集群,然后将工作负载迁移到使用 Containerd 的新 RKE2 集群。Rancher 也在探索就地升级路径的可能性。
<br/>

View File

@@ -1,23 +0,0 @@
---
title: 安装 Docker
---
在使用 Helm 在 RKE 集群节点上或使用 Docker 安装 Rancher Server 前,你需要在节点中先安装 Docker。RKE2 和 K3s 集群不要求使用 Docker。
Docker 有几个安装方法。一种方法是参见 [Docker 官方文档](https://docs.docker.com/install/)以了解如何在 Linux 上安装 Docker。不同 Linux 发行版的安装步骤可能有所不同。
另一种方式是使用 Rancher 的 Docker 安装脚本,该脚本可用于较新的 Docker 版本。 Rancher 为每个 Kubernetes 支持的上游 Docker 版本提供了安装脚本。
例如,此命令可用于在 SUSE Linux Enterprise 或 Ubuntu 等主要 Linux 发行版上安装 Docker
```bash
curl https://releases.rancher.com/install-docker/<version-number>.sh | sh
```
请参阅 [Rancher 支持矩阵](https://www.suse.com/suse-rancher/support-matrix),使用匹配你的操作系统和 Rancher 版本并且经过验证的 Docker 版本。 尽管支持矩阵列出了经过验证的 Docker 版本直至补丁版本,但只有发行版的主要版本和次要版本与 Docker 安装脚本相关。
请注意,必须应用以下 sysctl 设置:
```bash
net.bridge.bridge-nf-call-iptables=1
```

View File

@@ -1,6 +1,6 @@
---
title: 安装要求
description: 如果 Rancher 配置在 Docker 或 Kubernetes 中运行时,了解运行 Rancher Server 的每个节点的节点要求
description: Learn the node requirements for each node running Rancher server when youre configuring Rancher to run either in a Kubernetes setup
---
本文描述了对需要安装 Rancher Server 的节点的软件、硬件和网络要求。Rancher Server 可以安装在单个节点或高可用的 Kubernetes 集群上。
@@ -27,7 +27,7 @@ Rancher 需要安装在支持的 Kubernetes 版本上。请查阅 [Rancher 支
所有支持的操作系统都使用 64-bit x86 架构。Rancher 兼容当前所有的主流 Linux 发行版。
[Rancher 支持矩阵](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions)列出了每个 Rancher 版本测试过的操作系统和 Docker 版本。
The [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) lists which OS versions were tested for each Rancher version.
运行 RKE 集群的节点需要安装 Docker。RKE2 或 K3s 集群不需要它。
@@ -41,7 +41,7 @@ Rancher 需要安装在支持的 Kubernetes 版本上。请查阅 [Rancher 支
### RKE2 要求
对于容器运行时RKE2 附带了自己的 containerd。RKE2 安装不需要 Docker。
对于容器运行时RKE2 附带了自己的 containerd.
如需了解 RKE2 通过了哪些操作系统版本的测试,请参见 [Rancher 支持矩阵](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions)。
@@ -150,41 +150,13 @@ Rancher 的代码库不断发展用例不断变化Rancher 积累的经验
(*): Large deployments require that you [follow best practices](../../../reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md) for adequate performance.
### RKE
The following table lists the minimum CPU and memory requirements for each node in the [upstream cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md).
Note that a highly available setup with at least three nodes is required for production.
| Deployment Size | Maximum Clusters | Maximum Nodes | vCPUs | RAM |
|-----------------------------|----------------------------|-------------------------|-------|-------|
| Small | 150 | 1500 | 4 | 16 GB |
| Medium | 300 | 3000 | 8 | 32 GB |
| Large (*) | 500 | 5000 | 16 | 64 GB |
(*): Large deployments require that you [follow best practices](../../../reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md) for adequate performance.
For more details on general RKE requirements, see the [RKE documentation](https://rke.docs.rancher.com/os).
### Docker
The following table lists the minimum CPU and memory requirements for each node in the [upstream cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md).
Note that installing Rancher on Docker is only suitable for development or testing purposes. It is not recommended for production environments.
| Deployment Size | Maximum Clusters | Maximum Nodes | vCPUs | RAM |
|-----------------------------|----------------------------|-------------------------|-------|------|
| Small | 5 | 50 | 1 | 4 GB |
| Medium | 15 | 200 | 2 | 8 GB |
## Ingress
Each node in the Kubernetes cluster where Rancher is installed should run an Ingress.
The Ingress should be deployed as a DaemonSet to ensure your load balancer can successfully route traffic to all nodes.
For RKE, RKE2, and K3s installations, you don't have to install the Ingress manually because it is installed by default.
For RKE2 and K3s installations, you don't have to install the Ingress manually because it is installed by default.
For hosted Kubernetes clusters (EKS, GKE, AKS), you will need to set up the Ingress.
@@ -213,7 +185,3 @@ etcd 在集群中的性能决定了 Rancher 的性能。因此,为了获得最
### Port Requirements
To operate properly, Rancher requires a number of ports to be open on Rancher nodes and on downstream Kubernetes cluster nodes. [Port Requirements](port-requirements.md) lists all the necessary ports for Rancher and downstream clusters for the different cluster types.
## Dockershim Support
For more information on Dockershim support, see [this page](dockershim.md).


@@ -15,7 +15,7 @@ import PortsImportedHosted from '@site/src/components/PortsImportedHosted'
The port requirements differ based on the Rancher server architecture.
Rancher can be installed on any Kubernetes cluster. For Rancher installs on a K3s, RKE, or RKE2 Kubernetes cluster, refer to the tabs below. For other Kubernetes distributions, refer to the distribution's documentation for the port requirements for cluster nodes.
Rancher can be installed on any Kubernetes cluster. For Rancher installs on a K3s or RKE2 Kubernetes cluster, refer to the tabs below. For other Kubernetes distributions, refer to the distribution's documentation for the port requirements for cluster nodes.
:::note Notes:
@@ -66,54 +66,6 @@ K3s server 需要开放端口 6443 才能供节点访问。
</details>
### Ports for Rancher Server Nodes on RKE
<details>
<summary>Click to expand</summary>
Typically, Rancher is installed on three RKE nodes that all have the etcd, controlplane, and worker roles.
The following table breaks down the port requirements for traffic between the Rancher nodes:
<figcaption>Rules for traffic between Rancher nodes</figcaption>
| Protocol | Port | Description |
|-----|-----|----------------|
| TCP | 443 | Rancher agents |
| TCP | 2379 | etcd client requests |
| TCP | 2380 | etcd peer communication |
| TCP | 6443 | Kubernetes apiserver |
| TCP | 8443 | NGINX Ingress's validating webhook |
| UDP | 8472 | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | Metrics Server communication with all nodes |
| TCP | 10254 | Ingress controller livenessProbe/readinessProbe |
The following tables break down the port requirements for inbound and outbound traffic:
<figcaption>Inbound Rules for Rancher Nodes</figcaption>
| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 22 | RKE CLI | SSH provisioning of node by RKE |
| TCP | 80 | Load balancer/reverse proxy | HTTP traffic to Rancher UI/API |
| TCP | 443 | <ul><li>Load balancer/reverse proxy</li><li>IPs of all cluster nodes and other API/UI clients</li></ul> | HTTPS traffic to Rancher UI/API |
| TCP | 6443 | Kubernetes API clients | HTTPS traffic to Kubernetes API |
<figcaption>Outbound Rules for Rancher Nodes</figcaption>
| Protocol | Port | Destination | Description |
|-----|-----|----------------|---|
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 22 | Any node created using a node driver | SSH provisioning of node by node driver |
| TCP | 2376 | Any node created using a node driver | Docker daemon TLS port used by node driver |
| TCP | 6443 | Hosted/imported Kubernetes API | Kubernetes API server |
| TCP | Provider dependent | Port of the Kubernetes API endpoint in hosted cluster | Kubernetes API |
</details>
### Ports for Rancher Server Nodes on RKE2
<details>


@@ -4,7 +4,7 @@ title: 离线 Helm CLI 安装
This section describes how to use the Helm CLI to install the Rancher server in an air-gapped environment. An air-gapped environment could be one where the Rancher server is installed offline, behind a firewall, or behind a proxy.
The installation steps differ depending on whether Rancher is installed on an RKE Kubernetes cluster, a K3s Kubernetes cluster, or a single Docker container.
The installation steps differ depending on whether Rancher is installed on a K3s Kubernetes cluster or a single Docker container.
For more information on each installation option, see [this page](../../installation-and-upgrade.md).


@@ -12,7 +12,7 @@ title: '3. 安装 KubernetesDocker 安装请跳过)'
Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes.
The steps to set up an air-gapped Kubernetes cluster on RKE, RKE2, or K3s are shown below.
The steps to set up an air-gapped Kubernetes cluster on RKE2 or K3s are shown below.
<Tabs>
<TabItem value="K3s">
@@ -283,102 +283,9 @@ kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces
2. Run the script again with the same environment variables, just as you did before.
3. Restart the RKE2 service.
</TabItem>
<TabItem value="RKE">
We will create a Kubernetes cluster using Rancher Kubernetes Engine (RKE). Before you can start your Kubernetes cluster, you need to install RKE and create an RKE configuration file.
### 1. Install RKE
Install RKE by following the instructions in the [RKE documentation](https://rancher.com/docs/rke/latest/en/installation/).
:::note
Certified versions of RKE for each Rancher version can be found in the [Rancher Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/).
:::
### 2. Create an RKE Config File
From a system that can access ports 22/TCP and 6443/TCP on the Linux host nodes, use the sample below to create a new file named `rancher-cluster.yml`.
This file is the RKE configuration file for the cluster you're deploying Rancher to.
Replace the values in the code sample below with the help of the _RKE Options_ table. Use the IP addresses or DNS names of the three nodes you created.
:::tip
For details on the available options, see the RKE [Config Options](https://rancher.com/docs/rke/latest/en/config-options/).
:::
<figcaption>RKE Options</figcaption>
| Option | Required | Description |
| ------------------ | -------------------- | --------------------------------------------------------------------------------------- |
| `address` | ✓ | DNS or IP address of the node in the air-gapped environment |
| `user` | ✓ | A user that can run Docker commands |
| `role` | ✓ | List of Kubernetes roles assigned to the node |
| `internal_address` | Optional<sup>1</sup> | DNS or IP address used for internal cluster traffic |
| `ssh_key_path` | | Path to the SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`) |
> <sup>1</sup> Some services, such as AWS EC2, require the `internal_address` to be set if you want to use self-referencing security groups or firewalls.
```yaml
nodes:
  - address: 10.10.3.187            # air-gapped node IP
    internal_address: 172.31.7.22   # node internal IP
    user: rancher
    role: ['controlplane', 'etcd', 'worker']
    ssh_key_path: /home/user/.ssh/id_rsa
  - address: 10.10.3.254            # air-gapped node IP
    internal_address: 172.31.13.132 # node internal IP
    user: rancher
    role: ['controlplane', 'etcd', 'worker']
    ssh_key_path: /home/user/.ssh/id_rsa
  - address: 10.10.3.89             # air-gapped node IP
    internal_address: 172.31.3.216  # node internal IP
    user: rancher
    role: ['controlplane', 'etcd', 'worker']
    ssh_key_path: /home/user/.ssh/id_rsa
private_registries:
  - url: <REGISTRY.YOURDOMAIN.COM:PORT> # private registry URL
    user: rancher
    password: '*********'
    is_default: true
```
### 3. Run RKE
After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:
```
rke up --config ./rancher-cluster.yml
```
### 4. Save Your Files
:::note Important:
The files below are needed to maintain, troubleshoot, and upgrade your cluster. Keep them in a safe place:
:::
Save a copy of each of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state). This file contains the current state of the cluster, including the RKE configuration and the certificates.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE 0.2.0 or higher._
</TabItem>
</Tabs>
:::note
The `rancher-cluster` parts of the last two file names are dependent on how you name the RKE cluster configuration file.
:::
### Troubleshooting
See the [Troubleshooting](../../install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) page.


@@ -4,7 +4,7 @@ title: '2. 安装 Kubernetes'
With the infrastructure in place, you can set up a Kubernetes cluster to install Rancher on.
The steps for setting up RKE, RKE2, or K3s are shown below.
The steps for setting up RKE2 or K3s are shown below.
For convenience, export the IP address and port of your proxy into an environment variable, and set the HTTP_PROXY variables for your current shell on each node:
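As a sketch, the exports might look like this (the proxy address below is a hypothetical placeholder; substitute your own host and port):

```shell
# Hypothetical proxy address and port -- replace with your own.
export proxy_host="10.0.0.5:8888"

# Route HTTP(S) traffic through the proxy. NO_PROXY keeps loopback and
# cluster-internal address ranges off the proxy.
export HTTP_PROXY="http://${proxy_host}"
export HTTPS_PROXY="http://${proxy_host}"
export NO_PROXY="127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,cattle-system.svc"
```

The later systemd and apt snippets on this page reference the same `${proxy_host}` variable.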
@@ -92,152 +92,6 @@ kubectl cluster-info
kubectl get pods --all-namespaces
```
</TabItem>
<TabItem value="RKE">
First, install Docker and set up the HTTP proxy on all three Linux nodes. Perform the following steps on all three nodes.
Next, configure apt to use this proxy when installing packages. If you are not using Ubuntu, adjust the steps accordingly.
```
cat <<'EOF' | sudo tee /etc/apt/apt.conf.d/proxy.conf > /dev/null
Acquire::http::Proxy "http://${proxy_host}/";
Acquire::https::Proxy "http://${proxy_host}/";
EOF
```
Install Docker:
```
curl -sL https://releases.rancher.com/install-docker/19.03.sh | sh
```
Then make sure that your current user is able to access the Docker daemon without sudo:
```
sudo usermod -aG docker YOUR_USERNAME
```
Configure the Docker daemon to use the proxy to pull images:
```
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf > /dev/null
[Service]
Environment="HTTP_PROXY=http://${proxy_host}"
Environment="HTTPS_PROXY=http://${proxy_host}"
Environment="NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16"
EOF
```
To apply the configuration, restart the Docker daemon:
```
sudo systemctl daemon-reload
sudo systemctl restart docker
```
#### Air-Gapped Proxy
You can now provision node-driver clusters from the air-gapped cluster you configured, using the proxy for outbound connections.
In addition to the default rules, you need to add the additional rules shown below to the proxy server configuration in order to provision node-driver clusters from the proxied Rancher environment.
Depending on your setup, the configuration file path may differ; the rules below use Squid-style ACL syntax, for example in `/etc/squid/squid.conf`:
```
acl SSL_ports port 22
acl SSL_ports port 2376
acl Safe_ports port 22 # ssh
acl Safe_ports port 2376 # docker port
```
### Create the RKE Cluster
You need a few command line tools on the host that has SSH access to the Linux nodes in order to create the cluster and interact with it:
* [RKE CLI binary](https://rancher.com/docs/rke/latest/en/installation/#download-the-rke-binary)
```
sudo curl -fsSL -o /usr/local/bin/rke https://github.com/rancher/rke/releases/download/v1.1.4/rke_linux-amd64
sudo chmod +x /usr/local/bin/rke
```
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
```
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```
Next, create a YAML file that describes the RKE cluster. Make sure the IP addresses and SSH usernames of the nodes are correct. For details about the cluster YAML, see the [RKE documentation](https://rancher.com/docs/rke/latest/en/example-yamls/).
```yml
nodes:
- address: 10.0.1.200
user: ubuntu
role: [controlplane,worker,etcd]
- address: 10.0.1.201
user: ubuntu
role: [controlplane,worker,etcd]
- address: 10.0.1.202
user: ubuntu
role: [controlplane,worker,etcd]
services:
etcd:
backup_config:
interval_hours: 12
retention: 6
```
Afterwards, you can create the Kubernetes cluster by running:
```
rke up --config rancher-cluster.yaml
```
RKE creates a state file called `rancher-cluster.rkestate`. This file is needed if you want to update or modify the cluster configuration, or restore the cluster from a backup. RKE also creates a `kube_config_cluster.yaml` file, which you can use to connect to the remote Kubernetes cluster locally with tools like kubectl or Helm. Keep these files in a safe location, for example in a version control system.
To have a look at your cluster, run:
```
export KUBECONFIG=kube_config_cluster.yaml
kubectl cluster-info
kubectl get pods --all-namespaces
```
You can also verify that your external load balancer works and that the DNS entry is set up correctly. If you send a request to either, you should receive an HTTP 404 response from the ingress controller:
```
$ curl 10.0.1.100
default backend - 404
$ curl rancher.example.com
default backend - 404
```
### Save Your Files
:::note Important:
The files below are needed to maintain, troubleshoot, and upgrade your cluster. Keep them in a safe place:
:::
Save a copy of each of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state). This file contains the current state of the cluster, including the RKE configuration and the certificates.
:::note
The `rancher-cluster` parts of the last two file names are dependent on how you name the RKE cluster configuration file.
:::
</TabItem>
</Tabs>


@@ -2,7 +2,7 @@
title: 3. Install Rancher
---
With a running RKE cluster from the previous steps in place, you can now install Rancher on it. For security reasons, all traffic to Rancher must be encrypted with TLS. In this tutorial, you will use [cert-manager](https://cert-manager.io/) to automatically issue a self-signed certificate. In a real-world use case, you would use Let's Encrypt or a certificate of your own.
With a running RKE2/K3s cluster from the previous steps in place, you can now install Rancher on it. For security reasons, all traffic to Rancher must be encrypted with TLS. In this tutorial, you will use [cert-manager](https://cert-manager.io/) to automatically issue a self-signed certificate. In a real-world use case, you would use Let's Encrypt or a certificate of your own.
## Install the Helm CLI


@@ -4,7 +4,7 @@ title: '1. 配置基础设施'
In this section, you will provision the underlying infrastructure for your Rancher management server and make it able to reach the internet through an HTTP proxy.
To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:
To install the Rancher management server on a high-availability RKE2/K3s cluster, we recommend setting up the following infrastructure:
- **Three Linux nodes**: These can be virtual machines in your cloud provider of choice, such as Amazon EC2, GCE, or vSphere.
- **One load balancer**: Used to direct front-end traffic to the three nodes.
@@ -14,7 +14,7 @@ title: '1. 配置基础设施'
## Why three nodes?
In an RKE cluster, Rancher server data is stored in etcd. This etcd database runs on all three nodes.
In an RKE2/K3s cluster, Rancher server data is stored in etcd. This etcd database runs on all three nodes.
The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader, because they are the majority of the total number of etcd nodes.
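The quorum arithmetic behind this recommendation can be sketched as follows (an illustrative snippet, not part of any Rancher tooling):

```python
def quorum(members: int) -> int:
    """Number of etcd members that must agree in order to elect a leader."""
    return members // 2 + 1

def fault_tolerance(members: int) -> int:
    """Number of members that can fail while the cluster keeps a quorum."""
    return members - quorum(members)

# A 3-node cluster tolerates 1 failure; adding a 4th node does not help,
# because the quorum grows to 3 while the tolerance stays at 1.
assert quorum(3) == 2 and fault_tolerance(3) == 1
assert quorum(4) == 3 and fault_tolerance(4) == 1
# Five nodes are needed to tolerate two simultaneous failures.
assert fault_tolerance(5) == 2
```

This is why odd cluster sizes are preferred: each even-sized cluster has the same fault tolerance as the odd-sized cluster one node smaller.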
@@ -30,7 +30,7 @@ title: '1. 配置基础设施'
You will also need to set up a load balancer that directs traffic to the Rancher replicas on the nodes. When a single node goes down, this keeps communication with the Rancher management server intact.
When Kubernetes is set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller listens on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
When Kubernetes is set up in a later step, the RKE2/K3s tooling will deploy an NGINX Ingress controller. This controller listens on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller forwards traffic destined for the Rancher hostname to the Rancher server pods running in the cluster.


@@ -1,195 +0,0 @@
---
title: Setting up a High-availability RKE Kubernetes Cluster
---
<EOLRKE1Warning />
This section describes how to install a Kubernetes cluster. This cluster should be dedicated to running only the Rancher server.
:::note
Rancher can run on any Kubernetes cluster, including hosted Kubernetes solutions such as Amazon EKS. The instructions below are only one way to install Kubernetes.
:::
If the system cannot access the internet directly, see [Air Gapped: Kubernetes install](../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md).
:::tip Single-node Installation Tip:
In a single-node Kubernetes cluster, the Rancher server does not have high availability, which is important for running Rancher in production. However, installing Rancher on a single-node cluster can be useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path.
To set up a single-node RKE cluster, configure only one node in `cluster.yml`. The single node should have all three roles: `etcd`, `controlplane`, and `worker`.
In both single-node setups, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster.
:::
## Installing Kubernetes
### Required CLI Tools
Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl), the Kubernetes command-line tool.
Also install [RKE](https://rancher.com/docs/rke/latest/en/installation/) (Rancher Kubernetes Engine), a Kubernetes distribution and command-line tool.
### 1. Create the Cluster Configuration File
In this section, you will create a Kubernetes cluster configuration file called `rancher-cluster.yml`. In a later step, when you set up the cluster with an RKE command, it will use this file to install Kubernetes on your nodes.
Using the sample below as a guide, create the `rancher-cluster.yml` file. Replace the IP addresses in the `nodes` list with the IP addresses or DNS names of the three nodes you created.
If your nodes have public and internal addresses, it is recommended to set the `internal_address:` so Kubernetes will use it for intra-cluster communication. Some services, such as AWS EC2, require setting the `internal_address:` if you want to use self-referencing security groups or firewalls.
RKE needs to connect to each node over SSH, and it looks for a private key in the default location of `~/.ssh/id_rsa`. If the private key for a certain node is not in the default location, you will also need to configure the `ssh_key_path` option for that node.
When choosing a Kubernetes version, always consult the [support matrix](https://rancher.com/support-matrix/) first to find the latest Kubernetes version that is validated for your Rancher version.
```yaml
nodes:
- address: 165.227.114.63
internal_address: 172.16.22.12
user: ubuntu
role: [controlplane, worker, etcd]
- address: 165.227.116.167
internal_address: 172.16.32.37
user: ubuntu
role: [controlplane, worker, etcd]
- address: 165.227.127.226
internal_address: 172.16.42.73
user: ubuntu
role: [controlplane, worker, etcd]
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
# Required for external TLS termination with
# ingress-nginx v0.22+
ingress:
provider: nginx
options:
use-forwarded-headers: "true"
kubernetes_version: v1.25.6-rancher4-1
```
<figcaption>Common RKE Node Options</figcaption>
| Option | Required | Description |
| ------------------ | -------- | -------------------------------------------------------------------------------------- |
| `address` | yes | The public DNS or IP address |
| `user` | yes | A user that can run docker commands |
| `role` | yes | List of Kubernetes roles assigned to the node |
| `internal_address` | no | The private DNS or IP address for internal cluster traffic |
| `ssh_key_path` | no | Path to the SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`) |
:::note Advanced Configurations:
RKE has many configuration options for customizing the install to suit your environment.
See the [RKE documentation](https://rancher.com/docs/rke/latest/en/config-options/) for the full list of options and capabilities.
For tuning your etcd cluster for larger Rancher installations, see the [etcd settings guide](../../advanced-user-guides/tune-etcd-for-large-installs.md).
For more information on Dockershim support, see [this page](../../../getting-started/installation-and-upgrade/installation-requirements/dockershim.md).
:::
### 2. Run RKE
```
rke up --config ./rancher-cluster.yml
```
When finished, it should end with the line: `Finished building Kubernetes cluster successfully`.
### 3. Test Your Cluster
This section describes how to set up your workspace so that you can interact with this cluster using the `kubectl` command-line tool.
Assuming you have installed `kubectl`, you need to place the `kubeconfig` file in a location where `kubectl` can reach it. The `kubeconfig` file contains the credentials necessary to access your cluster with `kubectl`.
When you ran `rke up`, RKE should have created a `kubeconfig` file named `kube_config_cluster.yml`. This file has the credentials for `kubectl` and `helm`.
:::note
If your file is not named `rancher-cluster.yml`, the kubeconfig file will be named `kube_config_<FILE_NAME>.yml`.
:::
Move this file to `$HOME/.kube/config`. If you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environment variable to the path of `kube_config_cluster.yml` instead:
```
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
```
`kubectl` 测试你的连接性,并查看你的所有节点是否都处于 `Ready` 状态:
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
165.227.114.63 Ready controlplane,etcd,worker 11m v1.13.5
165.227.116.167 Ready controlplane,etcd,worker 11m v1.13.5
165.227.127.226 Ready controlplane,etcd,worker 11m v1.13.5
```
### 4. Check the Health of Your Cluster Pods
Check that all the required pods and containers are healthy:
- Pods are in a `Running` or `Completed` state.
- `READY` should show all containers as running (for example, `3/3`) for pods with a `STATUS` of `Running`.
- Pods with a `STATUS` of `Completed` are run-once jobs. For these pods, `READY` should be `0/1`.
```
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-tnsn4 1/1 Running 0 30s
ingress-nginx nginx-ingress-controller-tw2ht 1/1 Running 0 30s
ingress-nginx nginx-ingress-controller-v874b 1/1 Running 0 30s
kube-system canal-jp4hz 3/3 Running 0 30s
kube-system canal-z2hg8 3/3 Running 0 30s
kube-system canal-z6kpw 3/3 Running 0 30s
kube-system kube-dns-7588d5b5f5-sf4vh 3/3 Running 0 30s
kube-system kube-dns-autoscaler-5db9bbb766-jz2k6 1/1 Running 0 30s
kube-system metrics-server-97bc649d5-4rl2q 1/1 Running 0 30s
kube-system rke-ingress-controller-deploy-job-bhzgm 0/1 Completed 0 30s
kube-system rke-kubedns-addon-deploy-job-gl7t4 0/1 Completed 0 30s
kube-system rke-metrics-addon-deploy-job-7ljkc 0/1 Completed 0 30s
kube-system rke-network-plugin-deploy-job-6pbgj 0/1 Completed 0 30s
```
This confirms that you have successfully installed a Kubernetes cluster that the Rancher server can run on.
### 5. Save Your Files
:::note Important:
The files below are needed to maintain, troubleshoot, and upgrade your cluster. Keep them in a safe place:
:::
Save a copy of each of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state). This file contains credentials for full access to the cluster.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE 0.2.0 or higher._
:::note
The `rancher-cluster` parts of the last two file names are dependent on how you name the RKE cluster configuration file.
:::
### Troubleshooting
See the [Troubleshooting](../../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) page.
### Next Steps
[Install Rancher](../../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md)


@@ -34,7 +34,7 @@ title: EC2 节点模板配置
Refer to [Amazon EC2 security groups when using node drivers](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-安全组) for the rules created in the `rancher-nodes` security group.
If you provide your own security group for an EC2 instance, Rancher will not modify it. Therefore, your security group must allow [the ports required for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rke-上-rancher-server-节点的端口). For more information about how to use security groups to control inbound and outbound traffic for EC2 instances, see [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups).
If you provide your own security group for an EC2 instance, Rancher will not modify it. Therefore, your security group must allow [the ports required for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rke2-上-rancher-server-节点的端口). For more information about how to use security groups to control inbound and outbound traffic for EC2 instances, see [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups).
## Instance Options


@@ -84,8 +84,6 @@ const sidebars = {
id: "getting-started/installation-and-upgrade/installation-requirements/installation-requirements",
},
items: [
"getting-started/installation-and-upgrade/installation-requirements/install-docker",
"getting-started/installation-and-upgrade/installation-requirements/dockershim",
"getting-started/installation-and-upgrade/installation-requirements/port-requirements",
],
},
@@ -398,7 +396,6 @@ const sidebars = {
items: [
"how-to-guides/new-user-guides/kubernetes-cluster-setup/high-availability-installs",
"how-to-guides/new-user-guides/kubernetes-cluster-setup/k3s-for-rancher",
"how-to-guides/new-user-guides/kubernetes-cluster-setup/rke1-for-rancher",
"how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher",
],
},


@@ -1,51 +0,0 @@
---
title: Dockershim
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-requirements/dockershim"/>
</head>
The Dockershim is the CRI compliant layer between the Kubelet and the Docker daemon. As part of the Kubernetes 1.20 release, the [deprecation of the in-tree Dockershim was announced](https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/). For more information on the deprecation and its timelines, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed).
RKE clusters now support the external Dockershim so that they can continue using Docker as the CRI runtime. Rancher implements the external Dockershim maintained by the upstream open source community and announced by [Mirantis and Docker](https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/), ensuring that RKE clusters can continue to leverage Docker.
RKE2 and K3s clusters use an embedded containerd as a container runtime and are not affected.
To enable the external Dockershim in versions of RKE before 1.24, configure the following option.
```
enable_cri_dockerd: true
```
Starting with Kubernetes 1.24, the option above defaults to true.
For users looking to use another container runtime, Rancher has the edge-focused K3s and datacenter-focused RKE2 Kubernetes distributions that use containerd as the default runtime. Imported RKE2 and K3s Kubernetes clusters can then be upgraded and managed through Rancher going forward.
## FAQ
<br/>
Q: Do I have to upgrade Rancher to get Rancher's support of the upstream external Dockershim replacement?
A: The upstream support of the Dockershim replacement `cri_dockerd` begins for RKE in Kubernetes 1.21. You will need to be on a version of Rancher that supports RKE with Kubernetes 1.21. See our support matrix for details.
<br/>
Q: I am currently on RKE with Kubernetes 1.23. What happens when upstream finally removes Dockershim in 1.24?
A: The in-tree Dockershim in RKE will continue to work through Kubernetes 1.23. For information on the timeline, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed). After that, starting with Kubernetes 1.24, RKE enables `cri_dockerd` by default and will continue to do so in later versions.
<br/>
Q: What are my other options if I don't want to depend on the Dockershim or cri_dockerd?
A: You can use a runtime like containerd with Kubernetes that does not require Dockershim support. RKE2 or K3s are two options for doing this.
<br/>
Q: If I am already using RKE1 and want to switch to RKE2, what are my migration options?
A: Today, you can stand up a new cluster and migrate workloads to a new RKE2 cluster that uses containerd. For details, see the [RKE to RKE2 Replatforming Guide](https://links.imagerelay.com/cdn/3404/ql/5606a3da2365422ab2250d348aa07112/rke_to_rke2_replatforming_guide.pdf).
<br/>


@@ -1,27 +0,0 @@
---
title: Installing Docker
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-requirements/install-docker"/>
</head>
Docker must be installed on nodes where the Rancher server will be installed with Helm on an RKE cluster, or as a single Docker container. Docker is not required for RKE2 or K3s clusters.
There are a couple of options for installing Docker. One option is to refer to the [official Docker documentation](https://docs.docker.com/install/) about how to install Docker on Linux. The steps will vary based on the Linux distribution.
Another option is to use one of Rancher's Docker installation scripts, which are available for most recent versions of Docker. Rancher has installation scripts for every version of upstream Docker that Kubernetes supports.
For example, this command could be used to install on one of the main Linux distributions, such as SUSE Linux Enterprise or Ubuntu:
```bash
curl https://releases.rancher.com/install-docker/<version-number>.sh | sh
```
Consult the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix) to match a validated Docker version with your operating system and version of Rancher. Although the support matrix lists validated Docker versions down to the patch version, only the major and minor version of the release are relevant for the Docker installation scripts.
Note that the following sysctl setting must be applied:
```bash
net.bridge.bridge-nf-call-iptables=1
```
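To keep the setting across reboots, it can also be persisted in a sysctl drop-in file and loaded with `sudo sysctl --system` (the file name below is a common convention, not a Rancher requirement):

```
# /etc/sysctl.d/90-rancher.conf (hypothetical file name)
net.bridge.bridge-nf-call-iptables=1
```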


@@ -1,6 +1,6 @@
---
title: Installation Requirements
description: Learn the node requirements for each node running Rancher server when you're configuring Rancher to run either in a Docker or Kubernetes setup
description: Learn the node requirements for each node running Rancher server when you're configuring Rancher to run in a Kubernetes setup
---
<head>
@@ -33,9 +33,7 @@ If you install Rancher on a hardened Kubernetes cluster, check the [Exempting Re
All supported operating systems are 64-bit x86. Rancher should work with any modern Linux distribution.
The [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) lists which OS and Docker versions were tested for each Rancher version.
Docker is required for nodes that will run RKE clusters. It is not required for RKE2 or K3s clusters.
The [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) lists which OS versions were tested for each Rancher version.
The `ntp` (Network Time Protocol) package should be installed. This prevents errors with certificate validation that can occur when the time is not synchronized between the client and server.
@@ -47,7 +45,7 @@ If you plan to run Rancher on ARM64, see [Running on ARM64 (Experimental).](../.
### RKE2 Specific Requirements
RKE2 bundles its own container runtime, containerd. Docker is not required for RKE2 installs.
RKE2 bundles its own container runtime, containerd.
For details on which OS versions were tested with RKE2, refer to the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions).
@@ -61,12 +59,6 @@ If you are installing Rancher on a K3s cluster with **Raspbian Buster**, follow
If you are installing Rancher on a K3s cluster with Alpine Linux, follow [these steps](https://rancher.com/docs/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup) for additional setup.
### RKE Specific Requirements
RKE requires a Docker container runtime. Supported Docker versions are specified in the [Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/) page.
For more information, see [Installing Docker](install-docker.md).
## Hardware Requirements
The following sections describe the CPU, memory, and I/O requirements for nodes where Rancher is installed. Requirements vary based on the size of the infrastructure.
@@ -155,40 +147,13 @@ These requirements apply to hosted Kubernetes clusters such as Amazon Elastic Ku
(*): Large deployments require that you [follow best practices](../../../reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md) for adequate performance.
### RKE
The following table lists minimum CPU and memory requirements for each node in the [upstream cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md).
Please note that a highly available setup with at least three nodes is required for production.
| Managed Infrastructure Size | Maximum Number of Clusters | Maximum Number of Nodes | vCPUs | RAM |
|-----------------------------|----------------------------|-------------------------|-------|-------|
| Small | 150 | 1500 | 4 | 16 GB |
| Medium | 300 | 3000 | 8 | 32 GB |
| Large (*) | 500 | 5000 | 16 | 64 GB |
(*): Large deployments require that you [follow best practices](../../../reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md) for adequate performance.
Refer to the RKE documentation for more detailed information on [general requirements](https://rke.docs.rancher.com/os).
### Docker
The following table lists minimum CPU and memory requirements for a [single Docker node installation of Rancher](../other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md).
Please note that a Docker installation is only suitable for development or testing purposes and is not meant to be used in production environments.
| Managed Infrastructure Size | Maximum Number of Clusters | Maximum Number of Nodes | vCPUs | RAM |
|-----------------------------|----------------------------|-------------------------|-------|------|
| Small | 5 | 50 | 1 | 4 GB |
| Medium | 15 | 200 | 2 | 8 GB |
## Ingress
Each node in the Kubernetes cluster that Rancher is installed on should run an Ingress.
The Ingress should be deployed as DaemonSet to ensure your load balancer can successfully route traffic to all nodes.
For RKE, RKE2 and K3s installations, you don't have to install the Ingress manually because it is installed by default.
For RKE2 and K3s installations, you don't have to install the Ingress manually because it is installed by default.
For hosted Kubernetes clusters (EKS, GKE, AKS), you will need to set up the ingress.
@@ -224,8 +189,4 @@ If you use a load balancer, it should be be HTTP/2 compatible.
To receive help from SUSE Support, Rancher Prime customers who use load balancers (or any other middleboxes such as firewalls), must use one that is HTTP/2 compatible.
When HTTP/2 is not available, Rancher falls back to HTTP/1.1. However, since HTTP/2 offers improved web application performance, using HTTP/1.1 can create performance issues.
## Dockershim Support
For more information on Dockershim support, refer to [this page](dockershim.md).


@@ -19,7 +19,7 @@ The following table lists the ports that need to be open to and from nodes that
The port requirements differ based on the Rancher server architecture.
Rancher can be installed on any Kubernetes cluster. For Rancher installs on a K3s, RKE, or RKE2 Kubernetes cluster, refer to the tabs below. For other Kubernetes distributions, refer to the distribution's documentation for the port requirements for cluster nodes.
Rancher can be installed on any Kubernetes cluster. For Rancher installs on a K3s or RKE2 Kubernetes cluster, refer to the tabs below. For other Kubernetes distributions, refer to the distribution's documentation for the port requirements for cluster nodes.
:::note Notes:
@@ -70,52 +70,6 @@ The following tables break down the port requirements for inbound and outbound t
</details>
### Ports for Rancher Server Nodes on RKE
<details>
<summary>Click to expand</summary>
Typically Rancher is installed on three RKE nodes that all have the etcd, control plane and worker roles.
The following tables break down the port requirements for traffic between the Rancher nodes:
<figcaption>Rules for traffic between Rancher nodes</figcaption>
| Protocol | Port | Description |
|-----|-----|----------------|
| TCP | 443 | Rancher agents |
| TCP | 2379 | etcd client requests |
| TCP | 2380 | etcd peer communication |
| TCP | 6443 | Kubernetes apiserver |
| TCP | 8443 | Nginx Ingress's Validating Webhook |
| UDP | 8472 | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | Metrics server communication with all nodes |
| TCP | 10254 | Ingress controller livenessProbe/readinessProbe |
The following tables break down the port requirements for inbound and outbound traffic:
<figcaption>Inbound Rules for Rancher Nodes</figcaption>
| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 22 | RKE CLI | SSH provisioning of node by RKE |
| TCP | 80 | Load Balancer/Reverse Proxy | HTTP traffic to Rancher UI/API |
| TCP | 443 | <ul><li>Load Balancer/Reverse Proxy</li><li>IPs of all cluster nodes and other API/UI clients</li></ul> | HTTPS traffic to Rancher UI/API |
| TCP | 6443 | Kubernetes API clients | HTTPS traffic to Kubernetes API |
<figcaption>Outbound Rules for Rancher Nodes</figcaption>
| Protocol | Port | Destination | Description |
|-----|-----|----------------|---|
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 22 | Any node created using a node driver | SSH provisioning of node by node driver |
| TCP | 2376 | Any node created using a node driver | Docker daemon TLS port used by node driver |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |
| TCP | Provider dependent | Port of the Kubernetes API endpoint in hosted cluster | Kubernetes API |
</details>
### Ports for Rancher Server Nodes on RKE2
<details>


@@ -8,7 +8,7 @@ title: Air-Gapped Helm CLI Install
This section is about using the Helm CLI to install the Rancher server in an air gapped environment. An air gapped environment could be where Rancher server will be installed offline, behind a firewall, or behind a proxy.
The installation steps differ depending on whether Rancher is installed on an RKE Kubernetes cluster, a K3s Kubernetes cluster, or a single Docker container.
The installation steps differ depending on whether Rancher is installed on a K3s Kubernetes cluster or a single Docker container.
For more information on each installation option, refer to [this page.](../../installation-and-upgrade.md)

@@ -16,7 +16,7 @@ This section describes how to install a Kubernetes cluster according to our [bes
Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes providers.
The steps to set up an air-gapped Kubernetes cluster on RKE, RKE2, or K3s are shown below.
The steps to set up an air-gapped Kubernetes cluster on RKE2 or K3s are shown below.
<Tabs>
<TabItem value="K3s">
@@ -291,102 +291,9 @@ Upgrading an air-gap environment can be accomplished in the following manner:
2. Run the script again just as you had done in the past with the same environment variables.
3. Restart the RKE2 service.
</TabItem>
<TabItem value="RKE">
We will create a Kubernetes cluster using Rancher Kubernetes Engine (RKE). Before you can start your Kubernetes cluster, you'll need to install RKE and create an RKE config file.
## 1. Install RKE
Install RKE by following the instructions in the [RKE documentation.](https://rancher.com/docs/rke/latest/en/installation/)
:::note
Certified version(s) of RKE based on the Rancher version can be found in the [Rancher Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/).
:::
## 2. Create an RKE Config File
From a system that can access ports 22/TCP and 6443/TCP on the Linux host node(s) that you set up in a previous step, use the sample below to create a new file named `rancher-cluster.yml`.
This file is an RKE configuration file, which is a configuration for the cluster you're deploying Rancher to.
Replace the values in the code sample below with the help of the _RKE Options_ table. Use the IP addresses or DNS names of the three nodes you created.
:::tip
For more details on the options available, see the RKE [Config Options](https://rancher.com/docs/rke/latest/en/config-options/).
:::
<figcaption>RKE Options</figcaption>
| Option | Required | Description |
| ------------------ | -------------------- | --------------------------------------------------------------------------------------- |
| `address` | ✓ | The DNS or IP address for the node within the air gapped network. |
| `user` | ✓ | A user that can run Docker commands. |
| `role` | ✓ | List of Kubernetes roles assigned to the node. |
| `internal_address` | optional<sup>1</sup> | The DNS or IP address used for internal cluster traffic. |
| `ssh_key_path` | | Path to the SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`). |
> <sup>1</sup> Some services like AWS EC2 require setting the `internal_address` if you want to use self-referencing security groups or firewalls.
```yaml
nodes:
- address: 10.10.3.187 # node air gap network IP
internal_address: 172.31.7.22 # node intra-cluster IP
user: rancher
role: ['controlplane', 'etcd', 'worker']
ssh_key_path: /home/user/.ssh/id_rsa
- address: 10.10.3.254 # node air gap network IP
internal_address: 172.31.13.132 # node intra-cluster IP
user: rancher
role: ['controlplane', 'etcd', 'worker']
ssh_key_path: /home/user/.ssh/id_rsa
- address: 10.10.3.89 # node air gap network IP
internal_address: 172.31.3.216 # node intra-cluster IP
user: rancher
role: ['controlplane', 'etcd', 'worker']
ssh_key_path: /home/user/.ssh/id_rsa
private_registries:
- url: <REGISTRY.YOURDOMAIN.COM:PORT> # private registry url
user: rancher
password: '*********'
is_default: true
```
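Before running `rke up`, it can help to sanity-check the cluster definition. The sketch below is a hypothetical validator operating on the already-parsed YAML as a Python dict; the required fields follow the _RKE Options_ table above, while the checks themselves are illustrative and not part of RKE:

```python
def validate_rke_config(config: dict) -> list:
    """Return a list of problems found in a parsed rancher-cluster.yml dict."""
    problems = []
    nodes = config.get("nodes", [])
    if not nodes:
        problems.append("no nodes defined")
    for i, node in enumerate(nodes):
        for key in ("address", "user", "role"):  # required per the RKE Options table
            if not node.get(key):
                problems.append(f"node {i}: missing required option '{key}'")
    etcd_nodes = sum(1 for n in nodes if "etcd" in n.get("role", []))
    if etcd_nodes % 2 == 0:  # etcd requires an odd member count for quorum
        problems.append(f"{etcd_nodes} etcd nodes: use an odd number for quorum")
    return problems
```

For the three-node sample above, this returns an empty list; a config with a missing `user` or an even etcd count reports each problem.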
## 3. Run RKE
After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:
```
rke up --config ./rancher-cluster.yml
```
## 4. Save Your Files
:::note Important:
The files mentioned below are needed to maintain, troubleshoot, and upgrade your cluster.
:::
Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state). This file contains the current state of the cluster, including the RKE configuration and the certificates.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
</TabItem>
</Tabs>
:::note
The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file.
:::
## Issues or Errors?
See the [Troubleshooting](../../install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) page.

@@ -8,7 +8,7 @@ title: '2. Install Kubernetes'
Once the infrastructure is ready, you can continue with setting up a Kubernetes cluster to install Rancher in.
The steps to set up RKE, RKE2, or K3s are shown below.
The steps to set up RKE2 or K3s are shown below.
For convenience, export the IP address and port of your proxy into an environment variable and set up the `HTTP_PROXY` variables for your current shell on every node:
@@ -104,152 +104,6 @@ kubectl cluster-info
kubectl get pods --all-namespaces
```
</TabItem>
<TabItem value="RKE">
First, you have to install Docker and set up the HTTP proxy on all three Linux nodes. Perform the following steps on each node.
Next, configure apt to use this proxy when installing packages. If you are not using Ubuntu, adapt this step accordingly:
```
cat <<'EOF' | sudo tee /etc/apt/apt.conf.d/proxy.conf > /dev/null
Acquire::http::Proxy "http://${proxy_host}/";
Acquire::https::Proxy "http://${proxy_host}/";
EOF
```
Now you can install Docker:
```
curl -sL https://releases.rancher.com/install-docker/19.03.sh | sh
```
Then ensure that your current user is able to access the Docker daemon without sudo:
```
sudo usermod -aG docker YOUR_USERNAME
```
And configure the Docker daemon to use the proxy to pull images:
```
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf > /dev/null
[Service]
Environment="HTTP_PROXY=http://${proxy_host}"
Environment="HTTPS_PROXY=http://${proxy_host}"
Environment="NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16"
EOF
```
To apply the configuration, restart the Docker daemon:
```
sudo systemctl daemon-reload
sudo systemctl restart docker
```
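If you are scripting this across several nodes, the drop-in file can be generated instead of hand-edited. A small illustrative helper (the defaults mirror the snippet above; the function itself is not part of any Rancher tooling):

```python
def docker_proxy_dropin(proxy_host, extra_no_proxy=None):
    """Render a systemd drop-in that points the Docker daemon at an HTTP proxy."""
    no_proxy = ["127.0.0.0/8", "10.0.0.0/8", "cattle-system.svc",
                "172.16.0.0/12", "192.168.0.0/16"] + (extra_no_proxy or [])
    return "\n".join([
        "[Service]",
        f'Environment="HTTP_PROXY=http://{proxy_host}"',
        f'Environment="HTTPS_PROXY=http://{proxy_host}"',
        f'Environment="NO_PROXY={",".join(no_proxy)}"',
    ]) + "\n"

# Write the result to /etc/systemd/system/docker.service.d/http-proxy.conf, then
# run `systemctl daemon-reload && systemctl restart docker` as in the steps above.
```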
#### Air-gapped proxy
You can now provision node driver clusters from an air-gapped cluster configured to use a proxy for outbound connections.
In addition to setting the default rules for a proxy server, you must also add the rules shown below to provision node driver clusters from a proxied Rancher environment.
You will configure your filepath according to your setup, e.g., `/etc/apt/apt.conf.d/proxy.conf`:
```
acl SSL_ports port 22
acl SSL_ports port 2376
acl Safe_ports port 22 # ssh
acl Safe_ports port 2376 # docker port
```
### Creating the RKE Cluster
You need several command line tools on the host where you have SSH access to the Linux nodes to create and interact with the cluster:
* [RKE CLI binary](https://rancher.com/docs/rke/latest/en/installation/#download-the-rke-binary)
```
sudo curl -fsSL -o /usr/local/bin/rke https://github.com/rancher/rke/releases/download/v1.1.4/rke_linux-amd64
sudo chmod +x /usr/local/bin/rke
```
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
```
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```
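Before creating the cluster YAML, you can confirm that the CLI tools are actually on the `PATH`. A tiny sketch (the helper is illustrative, not part of RKE):

```python
import shutil

def missing_tools(tools=("rke", "kubectl")):
    """Return the subset of required CLI tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# An empty list means everything needed is installed:
#   missing_tools()  ->  [] when rke and kubectl are both on PATH
```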
Next, create a YAML file that describes the RKE cluster. Ensure that the IP addresses of the nodes and the SSH username are correct. For more information on the cluster YAML, have a look at the [RKE documentation](https://rancher.com/docs/rke/latest/en/example-yamls/).
```yml
nodes:
- address: 10.0.1.200
user: ubuntu
role: [controlplane,worker,etcd]
- address: 10.0.1.201
user: ubuntu
role: [controlplane,worker,etcd]
- address: 10.0.1.202
user: ubuntu
role: [controlplane,worker,etcd]
services:
etcd:
backup_config:
interval_hours: 12
retention: 6
```
After that, you can create the Kubernetes cluster by running:
```
rke up --config rancher-cluster.yaml
```
RKE creates a state file called `rancher-cluster.rkestate`. This file is needed if you want to perform updates, modify your cluster configuration, or restore the cluster from a backup. It also creates a `kube_config_cluster.yaml` file that you can use to connect to the remote Kubernetes cluster locally with tools like kubectl or Helm. Make sure to save all of these files in a secure location, for example by putting them into a version control system.
To have a look at your cluster run:
```
export KUBECONFIG=kube_config_cluster.yaml
kubectl cluster-info
kubectl get pods --all-namespaces
```
You can also verify that your external load balancer works and that the DNS entry is set up correctly. If you send a request to either, you should receive an HTTP 404 response from the ingress controller:
```
$ curl 10.0.1.100
default backend - 404
$ curl rancher.example.com
default backend - 404
```
### Save Your Files
:::note Important:
The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.
:::
Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state). This file contains the current state of the cluster, including the RKE configuration and the certificates.
:::note
The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file.
:::
</TabItem>
</Tabs>

@@ -6,7 +6,7 @@ title: 3. Install Rancher
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher"/>
</head>
Now that you have a running RKE cluster, you can install Rancher in it. For security reasons all traffic to Rancher must be encrypted with TLS. For this tutorial you are going to automatically issue a self-signed certificate through [cert-manager](https://cert-manager.io/). In a real-world use-case you will likely use Let's Encrypt or provide your own certificate.
Now that you have a running RKE2/K3s cluster, you can install Rancher in it. For security reasons all traffic to Rancher must be encrypted with TLS. For this tutorial you are going to automatically issue a self-signed certificate through [cert-manager](https://cert-manager.io/). In a real-world use-case you will likely use Let's Encrypt or provide your own certificate.
### Install the Helm CLI

@@ -8,7 +8,7 @@ title: '1. Set up Infrastructure'
In this section, you will provision the underlying infrastructure for your Rancher management server with internet access through a HTTP proxy.
To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:
To install the Rancher management server on a high-availability RKE2/K3s cluster, we recommend setting up the following infrastructure:
- **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon's EC2, Google Compute Engine, or vSphere.
- **A load balancer** to direct front-end traffic to the three nodes.
@@ -18,7 +18,7 @@ These nodes must be in the same region/data center. You may place these servers
### Why three nodes?
In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes.
In an RKE2/K3s cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes.
The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes.
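The quorum arithmetic behind this recommendation can be made concrete. A short sketch of majority size and tolerable failures for an n-member etcd cluster:

```python
def etcd_fault_tolerance(n: int):
    """Return (quorum, tolerable_failures) for an n-member etcd cluster."""
    quorum = n // 2 + 1            # majority needed to elect a leader
    return quorum, n - quorum      # members that can fail while keeping quorum

for n in (1, 2, 3, 5):
    q, f = etcd_fault_tolerance(n)
    print(f"{n} members: quorum {q}, tolerates {f} failure(s)")
```

Note that a two-member cluster tolerates zero failures, which is why an even node count buys nothing over the next-smaller odd count.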
@@ -34,7 +34,7 @@ For an example of one way to set up Linux nodes, refer to this [tutorial](../../
You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
When Kubernetes gets set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
When Kubernetes gets set up in a later step, the RKE2/K3s tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster.

@@ -1,198 +0,0 @@
---
title: Setting up a High-availability RKE Kubernetes Cluster
---
<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/kubernetes-cluster-setup/rke1-for-rancher"/>
</head>
<EOLRKE1Warning />
This section describes how to install a Kubernetes cluster. This cluster should be dedicated to running only the Rancher server.
:::note
Rancher can run on any Kubernetes cluster, including hosted Kubernetes solutions such as Amazon EKS. The instructions below represent only one possible way to install Kubernetes.
:::
For systems without direct internet access, refer to [Air Gap: Kubernetes install.](../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md)
:::tip Single-node Installation Tip:
In a single-node Kubernetes cluster, the Rancher server does not have high availability, which is important for running Rancher in production. However, installing Rancher on a single-node cluster can be useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path.
To set up a single-node RKE cluster, configure only one node in the `cluster.yml`. The single node should have all three roles: `etcd`, `controlplane`, and `worker`.
In a single-node setup, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster.
:::
## Installing Kubernetes
### Required CLI Tools
Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool.
Also install [RKE,](https://rancher.com/docs/rke/latest/en/installation/) the Rancher Kubernetes Engine, a Kubernetes distribution and command-line tool.
### 1. Create the cluster configuration file
In this section, you will create a Kubernetes cluster configuration file called `rancher-cluster.yml`. In a later step, when you set up the cluster with an RKE command, it will use this file to install Kubernetes on your nodes.
Using the sample below as a guide, create the `rancher-cluster.yml` file. Replace the IP addresses in the `nodes` list with the IP address or DNS names of the 3 nodes you created.
If your node has public and internal addresses, it is recommended to set the `internal_address:` so Kubernetes will use it for intra-cluster communication. Some services like AWS EC2 require setting the `internal_address:` if you want to use self-referencing security groups or firewalls.
RKE will need to connect to each node over SSH, and it will look for a private key in the default location of `~/.ssh/id_rsa`. If your private key for a certain node is in a different location than the default, you will also need to configure the `ssh_key_path` option for that node.
When choosing a Kubernetes version, be sure to first consult the [support matrix](https://rancher.com/support-matrix/) to find the highest version of Kubernetes that has been validated for your Rancher version.
```yaml
nodes:
- address: 165.227.114.63
internal_address: 172.16.22.12
user: ubuntu
role: [controlplane, worker, etcd]
- address: 165.227.116.167
internal_address: 172.16.32.37
user: ubuntu
role: [controlplane, worker, etcd]
- address: 165.227.127.226
internal_address: 172.16.42.73
user: ubuntu
role: [controlplane, worker, etcd]
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
# Required for external TLS termination with
# ingress-nginx v0.22+
ingress:
provider: nginx
options:
use-forwarded-headers: "true"
kubernetes_version: v1.25.6-rancher4-1
```
<figcaption>Common RKE Nodes Options</figcaption>
| Option | Required | Description |
| ------------------ | -------- | -------------------------------------------------------------------------------------- |
| `address` | yes | The public DNS or IP address |
| `user` | yes | A user that can run docker commands |
| `role` | yes | List of Kubernetes roles assigned to the node |
| `internal_address` | no | The private DNS or IP address for internal cluster traffic |
| `ssh_key_path` | no | Path to SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`) |
:::note Advanced Configurations:
RKE has many configuration options for customizing the install to suit your specific environment.
Please see the [RKE Documentation](https://rancher.com/docs/rke/latest/en/config-options/) for the full list of options and capabilities.
For tuning your etcd cluster for larger Rancher installations, see the [etcd settings guide](../../advanced-user-guides/tune-etcd-for-large-installs.md).
For more information regarding Dockershim support, refer to [this page](../../../getting-started/installation-and-upgrade/installation-requirements/dockershim.md).
:::
### 2. Run RKE
```
rke up --config ./rancher-cluster.yml
```
When finished, it should end with the line: `Finished building Kubernetes cluster successfully`.
### 3. Test Your Cluster
This section describes how to set up your workspace so that you can interact with this cluster using the `kubectl` command-line tool.
Assuming you have installed `kubectl`, you need to place the `kubeconfig` file in a location where `kubectl` can reach it. The `kubeconfig` file contains the credentials necessary to access your cluster with `kubectl`.
When you ran `rke up`, RKE should have created a `kubeconfig` file named `kube_config_cluster.yml`. This file has the credentials for `kubectl` and `helm`.
:::note
If you have used a different file name from `rancher-cluster.yml`, then the kube config file will be named `kube_config_<FILE_NAME>.yml`.
:::
Move this file to `$HOME/.kube/config`, or if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environmental variable to the path of `kube_config_cluster.yml`:
```
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
```
Test your connectivity with `kubectl` and see if all your nodes are in `Ready` state:
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
165.227.114.63 Ready controlplane,etcd,worker 11m v1.13.5
165.227.116.167 Ready controlplane,etcd,worker 11m v1.13.5
165.227.127.226 Ready controlplane,etcd,worker 11m v1.13.5
```
### 4. Check the Health of Your Cluster Pods
Check that all the required pods and containers are healthy before you continue:
- Pods are in `Running` or `Completed` state.
- The `READY` column shows that all containers are running (e.g., `3/3`) for pods with `STATUS` `Running`.
- Pods with `STATUS` `Completed` are run-once Jobs. For these pods, `READY` should be `0/1`.
```
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-tnsn4 1/1 Running 0 30s
ingress-nginx nginx-ingress-controller-tw2ht 1/1 Running 0 30s
ingress-nginx nginx-ingress-controller-v874b 1/1 Running 0 30s
kube-system canal-jp4hz 3/3 Running 0 30s
kube-system canal-z2hg8 3/3 Running 0 30s
kube-system canal-z6kpw 3/3 Running 0 30s
kube-system kube-dns-7588d5b5f5-sf4vh 3/3 Running 0 30s
kube-system kube-dns-autoscaler-5db9bbb766-jz2k6 1/1 Running 0 30s
kube-system metrics-server-97bc649d5-4rl2q 1/1 Running 0 30s
kube-system rke-ingress-controller-deploy-job-bhzgm 0/1 Completed 0 30s
kube-system rke-kubedns-addon-deploy-job-gl7t4 0/1 Completed 0 30s
kube-system rke-metrics-addon-deploy-job-7ljkc 0/1 Completed 0 30s
kube-system rke-network-plugin-deploy-job-6pbgj 0/1 Completed 0 30s
```
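The health criteria above can be applied mechanically to `kubectl get pods` output. A rough sketch of that filter, parsing the plain-text columns shown in the example (illustrative only):

```python
def unhealthy_pods(kubectl_output: str) -> list:
    """Return namespace/name of pods failing the health criteria above.

    Healthy: STATUS Running with all containers ready (e.g. 3/3),
    or STATUS Completed (a run-once Job, READY 0/1).
    """
    bad = []
    for line in kubectl_output.strip().splitlines()[1:]:  # skip the header row
        namespace, name, ready, status = line.split()[:4]
        ready_now, ready_total = map(int, ready.split("/"))
        if status == "Completed":
            continue  # run-once Job; 0/1 is expected
        if status == "Running" and ready_now == ready_total:
            continue
        bad.append(f"{namespace}/{name}")
    return bad
```

Running this over the sample output above yields an empty list, which is exactly the "ready to continue" condition.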
This confirms that you have successfully installed a Kubernetes cluster that the Rancher server will run on.
### 5. Save Your Files
:::note Important:
The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.
:::
Save a copy of the following files in a secure location:
- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state). This file contains the current state of the cluster, including the RKE configuration and the certificates.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
:::note
The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file.
:::
### Issues or errors?
See the [Troubleshooting](../../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) page.
### [Next: Install Rancher](../../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md)

@@ -38,7 +38,7 @@ Choose the default security group or configure a security group.
Please refer to [Amazon EC2 security group when using Node Driver](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#rancher-aws-ec2-security-group) to see what rules are created in the `rancher-nodes` Security Group.
If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#ports-for-rancher-server-nodes-on-rke). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups).
If you provide your own security group for an EC2 instance, please note that Rancher will not modify it. As such, you will be responsible for ensuring that your security group is set to allow the [necessary ports for Rancher to provision the instance](../../../../getting-started/installation-and-upgrade/installation-requirements/port-requirements.md#ports-for-rancher-server-nodes-on-rke2). For more information on controlling inbound and outbound traffic to EC2 instances with security groups, refer [here](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#WorkingWithSecurityGroups).
### Instance Options

@@ -66,8 +66,6 @@
"id": "getting-started/installation-and-upgrade/installation-requirements/installation-requirements"
},
"items": [
"getting-started/installation-and-upgrade/installation-requirements/install-docker",
"getting-started/installation-and-upgrade/installation-requirements/dockershim",
"getting-started/installation-and-upgrade/installation-requirements/port-requirements"
]
},
@@ -373,7 +371,6 @@
"items": [
"how-to-guides/new-user-guides/kubernetes-cluster-setup/high-availability-installs",
"how-to-guides/new-user-guides/kubernetes-cluster-setup/k3s-for-rancher",
"how-to-guides/new-user-guides/kubernetes-cluster-setup/rke1-for-rancher",
"how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher"
]
},