mirror of
https://github.com/rancher/rancher-docs.git
synced 2026-05-01 02:33:15 +00:00
Remove unneeded intermediate folders
This commit is contained in:
@@ -0,0 +1,169 @@
|
||||
---
|
||||
title: Setting up a High-availability RKE Kubernetes Cluster
|
||||
shortTitle: Set up RKE Kubernetes
|
||||
weight: 3
|
||||
---
|
||||
|
||||
|
||||
This section describes how to install a Kubernetes cluster. This cluster should be dedicated to run only the Rancher server.
|
||||
|
||||
> Rancher can run on any Kubernetes cluster, included hosted Kubernetes solutions such as Amazon EKS. The below instructions represent only one possible way to install Kubernetes.
|
||||
|
||||
For systems without direct internet access, refer to [Air Gap: Kubernetes install.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/air-gap/)
|
||||
|
||||
> **Single-node Installation Tip:**
|
||||
> In a single-node Kubernetes cluster, the Rancher server does not have high availability, which is important for running Rancher in production. However, installing Rancher on a single-node cluster can be useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path.
|
||||
>
|
||||
> To set up a single-node RKE cluster, configure only one node in the `cluster.yml` . The single node should have all three roles: `etcd`, `controlplane`, and `worker`.
|
||||
>
|
||||
> In both single-node setups, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster.
|
||||
|
||||
# Installing Kubernetes
|
||||
|
||||
### Required CLI Tools
|
||||
|
||||
Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool.
|
||||
|
||||
Also install [RKE,]({{<baseurl>}}/rke/latest/en/installation/) the Rancher Kubernetes Engine, a Kubernetes distribution and command-line tool.
|
||||
|
||||
### 1. Create the cluster configuration file
|
||||
|
||||
In this section, you will create a Kubernetes cluster configuration file called `rancher-cluster.yml`. In a later step, when you set up the cluster with an RKE command, it will use this file to install Kubernetes on your nodes.
|
||||
|
||||
Using the sample below as a guide, create the `rancher-cluster.yml` file. Replace the IP addresses in the `nodes` list with the IP address or DNS names of the 3 nodes you created.
|
||||
|
||||
If your node has public and internal addresses, it is recommended to set the `internal_address:` so Kubernetes will use it for intra-cluster communication. Some services like AWS EC2 require setting the `internal_address:` if you want to use self-referencing security groups or firewalls.
|
||||
|
||||
RKE will need to connect to each node over SSH, and it will look for a private key in the default location of `~/.ssh/id_rsa`. If your private key for a certain node is in a different location than the default, you will also need to configure the `ssh_key_path` option for that node.
|
||||
|
||||
```yaml
|
||||
nodes:
|
||||
- address: 165.227.114.63
|
||||
internal_address: 172.16.22.12
|
||||
user: ubuntu
|
||||
role: [controlplane, worker, etcd]
|
||||
- address: 165.227.116.167
|
||||
internal_address: 172.16.32.37
|
||||
user: ubuntu
|
||||
role: [controlplane, worker, etcd]
|
||||
- address: 165.227.127.226
|
||||
internal_address: 172.16.42.73
|
||||
user: ubuntu
|
||||
role: [controlplane, worker, etcd]
|
||||
|
||||
services:
|
||||
etcd:
|
||||
snapshot: true
|
||||
creation: 6h
|
||||
retention: 24h
|
||||
|
||||
# Required for external TLS termination with
|
||||
# ingress-nginx v0.22+
|
||||
ingress:
|
||||
provider: nginx
|
||||
options:
|
||||
use-forwarded-headers: "true"
|
||||
```
|
||||
|
||||
<figcaption>Common RKE Nodes Options</figcaption>
|
||||
|
||||
| Option | Required | Description |
|
||||
| ------------------ | -------- | -------------------------------------------------------------------------------------- |
|
||||
| `address` | yes | The public DNS or IP address |
|
||||
| `user` | yes | A user that can run docker commands |
|
||||
| `role` | yes | List of Kubernetes roles assigned to the node |
|
||||
| `internal_address` | no | The private DNS or IP address for internal cluster traffic |
|
||||
| `ssh_key_path` | no | Path to SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`) |
|
||||
|
||||
> **Advanced Configurations:** RKE has many configuration options for customizing the install to suit your specific environment.
|
||||
>
|
||||
> Please see the [RKE Documentation]({{<baseurl>}}/rke/latest/en/config-options/) for the full list of options and capabilities.
|
||||
>
|
||||
> For tuning your etcd cluster for larger Rancher installations, see the [etcd settings guide]({{<baseurl>}}/rancher/v2.6/en/installation/resources/advanced/etcd/).
|
||||
>
|
||||
> For more information regarding Dockershim support, refer to [this page]({{<baseurl>}}/rancher/v2.6/en/installation/requirements/dockershim/)
|
||||
|
||||
### 2. Run RKE
|
||||
|
||||
```
|
||||
rke up --config ./rancher-cluster.yml
|
||||
```
|
||||
|
||||
When finished, it should end with the line: `Finished building Kubernetes cluster successfully`.
|
||||
|
||||
### 3. Test Your Cluster
|
||||
|
||||
This section describes how to set up your workspace so that you can interact with this cluster using the `kubectl` command-line tool.
|
||||
|
||||
Assuming you have installed `kubectl`, you need to place the `kubeconfig` file in a location where `kubectl` can reach it. The `kubeconfig` file contains the credentials necessary to access your cluster with `kubectl`.
|
||||
|
||||
When you ran `rke up`, RKE should have created a `kubeconfig` file named `kube_config_cluster.yml`. This file has the credentials for `kubectl` and `helm`.
|
||||
|
||||
> **Note:** If you have used a different file name from `rancher-cluster.yml`, then the kube config file will be named `kube_config_<FILE_NAME>.yml`.
|
||||
|
||||
Move this file to `$HOME/.kube/config`, or if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environmental variable to the path of `kube_config_cluster.yml`:
|
||||
|
||||
```
|
||||
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
|
||||
```
|
||||
|
||||
Test your connectivity with `kubectl` and see if all your nodes are in `Ready` state:
|
||||
|
||||
```
|
||||
kubectl get nodes
|
||||
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
165.227.114.63 Ready controlplane,etcd,worker 11m v1.13.5
|
||||
165.227.116.167 Ready controlplane,etcd,worker 11m v1.13.5
|
||||
165.227.127.226 Ready controlplane,etcd,worker 11m v1.13.5
|
||||
```
|
||||
|
||||
### 4. Check the Health of Your Cluster Pods
|
||||
|
||||
Check that all the required pods and containers are healthy are ready to continue.
|
||||
|
||||
- Pods are in `Running` or `Completed` state.
|
||||
- `READY` column shows all the containers are running (i.e. `3/3`) for pods with `STATUS` `Running`
|
||||
- Pods with `STATUS` `Completed` are run-once Jobs. For these pods `READY` should be `0/1`.
|
||||
|
||||
```
|
||||
kubectl get pods --all-namespaces
|
||||
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
ingress-nginx nginx-ingress-controller-tnsn4 1/1 Running 0 30s
|
||||
ingress-nginx nginx-ingress-controller-tw2ht 1/1 Running 0 30s
|
||||
ingress-nginx nginx-ingress-controller-v874b 1/1 Running 0 30s
|
||||
kube-system canal-jp4hz 3/3 Running 0 30s
|
||||
kube-system canal-z2hg8 3/3 Running 0 30s
|
||||
kube-system canal-z6kpw 3/3 Running 0 30s
|
||||
kube-system kube-dns-7588d5b5f5-sf4vh 3/3 Running 0 30s
|
||||
kube-system kube-dns-autoscaler-5db9bbb766-jz2k6 1/1 Running 0 30s
|
||||
kube-system metrics-server-97bc649d5-4rl2q 1/1 Running 0 30s
|
||||
kube-system rke-ingress-controller-deploy-job-bhzgm 0/1 Completed 0 30s
|
||||
kube-system rke-kubedns-addon-deploy-job-gl7t4 0/1 Completed 0 30s
|
||||
kube-system rke-metrics-addon-deploy-job-7ljkc 0/1 Completed 0 30s
|
||||
kube-system rke-network-plugin-deploy-job-6pbgj 0/1 Completed 0 30s
|
||||
```
|
||||
|
||||
This confirms that you have successfully installed a Kubernetes cluster that the Rancher server will run on.
|
||||
|
||||
### 5. Save Your Files
|
||||
|
||||
> **Important**
|
||||
> The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.
|
||||
|
||||
Save a copy of the following files in a secure location:
|
||||
|
||||
- `rancher-cluster.yml`: The RKE cluster configuration file.
|
||||
- `kube_config_cluster.yml`: The [Kubeconfig file]({{<baseurl>}}/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster.
|
||||
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{<baseurl>}}/rke/latest/en/installation/#kubernetes-cluster-state), this file contains credentials for full access to the cluster.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
|
||||
|
||||
> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file.
|
||||
|
||||
### Issues or errors?
|
||||
|
||||
See the [Troubleshooting]({{<baseurl>}}/rancher/v2.6/en/installation/resources/troubleshooting/) page.
|
||||
|
||||
|
||||
### [Next: Install Rancher]({{<baseurl>}}/rancher/v2.6/en/installation/install-rancher-on-k8s/)
|
||||
|
||||
@@ -0,0 +1,167 @@
|
||||
---
|
||||
title: Setting up a High-availability RKE2 Kubernetes Cluster for Rancher
|
||||
shortTitle: Set up RKE2 for Rancher
|
||||
weight: 2
|
||||
---
|
||||
_Tested on v2.5.6_
|
||||
|
||||
This section describes how to install a Kubernetes cluster according to the [best practices for the Rancher server environment.]({{<baseurl>}}/rancher/v2.6/en/overview/architecture-recommendations/#environment-for-kubernetes-installations)
|
||||
|
||||
# Prerequisites
|
||||
|
||||
These instructions assume you have set up three nodes, a load balancer, and a DNS record, as described in [this section.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-rke2-ha)
|
||||
|
||||
Note that in order for RKE2 to work correctly with the load balancer, you need to set up two listeners: one for the supervisor on port 9345, and one for the Kubernetes API on port 6443.
|
||||
|
||||
Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/) To specify the RKE2 version, use the INSTALL_RKE2_VERSION environment variable when running the RKE2 installation script.
|
||||
# Installing Kubernetes
|
||||
|
||||
### 1. Install Kubernetes and Set up the RKE2 Server
|
||||
|
||||
RKE2 server runs with embedded etcd so you will not need to set up an external datastore to run in HA mode.
|
||||
|
||||
On the first node, you should set up the configuration file with your own pre-shared secret as the token. The token argument can be set on startup.
|
||||
|
||||
If you do not specify a pre-shared secret, RKE2 will generate one and place it at /var/lib/rancher/rke2/server/node-token.
|
||||
|
||||
To avoid certificate errors with the fixed registration address, you should launch the server with the tls-san parameter set. This option adds an additional hostname or IP as a Subject Alternative Name in the server's TLS cert, and it can be specified as a list if you would like to access via both the IP and the hostname.
|
||||
|
||||
First, you must create the directory where the RKE2 config file is going to be placed:
|
||||
|
||||
```
|
||||
mkdir -p /etc/rancher/rke2/
|
||||
```
|
||||
|
||||
Next, create the RKE2 config file at `/etc/rancher/rke2/config.yaml` using the following example:
|
||||
|
||||
```
|
||||
token: my-shared-secret
|
||||
tls-san:
|
||||
- my-kubernetes-domain.com
|
||||
- another-kubernetes-domain.com
|
||||
```
|
||||
After that, you need to run the install command and enable and start rke2:
|
||||
|
||||
```
|
||||
curl -sfL https://get.rke2.io | INSTALL_RKE2_CHANNEL=v1.20 sh -
|
||||
systemctl enable rke2-server.service
|
||||
systemctl start rke2-server.service
|
||||
```
|
||||
1. To join the rest of the nodes, you need to configure each additional node with the same shared token or the one generated automatically. Here is an example of the configuration file:
|
||||
|
||||
token: my-shared-secret
|
||||
server: https://<DNS-DOMAIN>:9345
|
||||
tls-san:
|
||||
- my-kubernetes-domain.com
|
||||
- another-kubernetes-domain.com
|
||||
After that, you need to run the installer and enable, then start, rke2:
|
||||
|
||||
curl -sfL https://get.rke2.io | sh -
|
||||
systemctl enable rke2-server.service
|
||||
systemctl start rke2-server.service
|
||||
|
||||
|
||||
1. Repeat the same command on your third RKE2 server node.
|
||||
|
||||
### 2. Confirm that RKE2 is Running
|
||||
|
||||
Once you've launched the rke2 server process on all server nodes, ensure that the cluster has come up properly with
|
||||
|
||||
```
|
||||
/var/lib/rancher/rke2/bin/kubectl \
|
||||
--kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
|
||||
You should see your server nodes in the Ready state.
|
||||
```
|
||||
|
||||
Then test the health of the cluster pods:
|
||||
```
|
||||
/var/lib/rancher/rke2/bin/kubectl \
|
||||
--kubeconfig /etc/rancher/rke2/rke2.yaml get pods --all-namespaces
|
||||
```
|
||||
|
||||
**Result:** You have successfully set up a RKE2 Kubernetes cluster.
|
||||
|
||||
### 3. Save and Start Using the kubeconfig File
|
||||
|
||||
When you installed RKE2 on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/rke2/rke2.yaml`. This file contains credentials for full access to the cluster, and you should save this file in a secure location.
|
||||
|
||||
To use this `kubeconfig` file,
|
||||
|
||||
1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool.
|
||||
2. Copy the file at `/etc/rancher/rke2/rke2.yaml` and save it to the directory `~/.kube/config` on your local machine.
|
||||
3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your control-plane load balancer, on port 6443. (The RKE2 Kubernetes API Server uses port 6443, while the Rancher server will be served via the NGINX Ingress on ports 80 and 443.) Here is an example `rke2.yaml`:
|
||||
|
||||
```yml
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
certificate-authority-data: [CERTIFICATE-DATA]
|
||||
server: [LOAD-BALANCER-DNS]:6443 # Edit this line
|
||||
name: default
|
||||
contexts:
|
||||
- context:
|
||||
cluster: default
|
||||
user: default
|
||||
name: default
|
||||
current-context: default
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users:
|
||||
- name: default
|
||||
user:
|
||||
password: [PASSWORD]
|
||||
username: admin
|
||||
```
|
||||
|
||||
**Result:** You can now use `kubectl` to manage your RKE2 cluster. If you have more than one kubeconfig file, you can specify which one you want to use by passing in the path to the file when using `kubectl`:
|
||||
|
||||
```
|
||||
kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces
|
||||
```
|
||||
|
||||
For more information about the `kubeconfig` file, refer to the [RKE2 documentation](https://docs.rke2.io/cluster_access/) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files.
|
||||
|
||||
### 4. Check the Health of Your Cluster Pods
|
||||
|
||||
Now that you have set up the `kubeconfig` file, you can use `kubectl` to access the cluster from your local machine.
|
||||
|
||||
Check that all the required pods and containers are healthy are ready to continue:
|
||||
|
||||
```
|
||||
/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get pods -A
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
kube-system cloud-controller-manager-rke2-server-1 1/1 Running 0 2m28s
|
||||
kube-system cloud-controller-manager-rke2-server-2 1/1 Running 0 61s
|
||||
kube-system cloud-controller-manager-rke2-server-3 1/1 Running 0 49s
|
||||
kube-system etcd-rke2-server-1 1/1 Running 0 2m13s
|
||||
kube-system etcd-rke2-server-2 1/1 Running 0 87s
|
||||
kube-system etcd-rke2-server-3 1/1 Running 0 56s
|
||||
kube-system helm-install-rke2-canal-hs6sx 0/1 Completed 0 2m17s
|
||||
kube-system helm-install-rke2-coredns-xmzm8 0/1 Completed 0 2m17s
|
||||
kube-system helm-install-rke2-ingress-nginx-flwnl 0/1 Completed 0 2m17s
|
||||
kube-system helm-install-rke2-metrics-server-7sggn 0/1 Completed 0 2m17s
|
||||
kube-system kube-apiserver-rke2-server-1 1/1 Running 0 116s
|
||||
kube-system kube-apiserver-rke2-server-2 1/1 Running 0 66s
|
||||
kube-system kube-apiserver-rke2-server-3 1/1 Running 0 48s
|
||||
kube-system kube-controller-manager-rke2-server-1 1/1 Running 0 2m30s
|
||||
kube-system kube-controller-manager-rke2-server-2 1/1 Running 0 57s
|
||||
kube-system kube-controller-manager-rke2-server-3 1/1 Running 0 42s
|
||||
kube-system kube-proxy-rke2-server-1 1/1 Running 0 2m25s
|
||||
kube-system kube-proxy-rke2-server-2 1/1 Running 0 59s
|
||||
kube-system kube-proxy-rke2-server-3 1/1 Running 0 85s
|
||||
kube-system kube-scheduler-rke2-server-1 1/1 Running 0 2m30s
|
||||
kube-system kube-scheduler-rke2-server-2 1/1 Running 0 57s
|
||||
kube-system kube-scheduler-rke2-server-3 1/1 Running 0 42s
|
||||
kube-system rke2-canal-b9lvm 2/2 Running 0 91s
|
||||
kube-system rke2-canal-khwp2 2/2 Running 0 2m5s
|
||||
kube-system rke2-canal-swfmq 2/2 Running 0 105s
|
||||
kube-system rke2-coredns-rke2-coredns-547d5499cb-6tvwb 1/1 Running 0 92s
|
||||
kube-system rke2-coredns-rke2-coredns-547d5499cb-rdttj 1/1 Running 0 2m8s
|
||||
kube-system rke2-coredns-rke2-coredns-autoscaler-65c9bb465d-85sq5 1/1 Running 0 2m8s
|
||||
kube-system rke2-ingress-nginx-controller-69qxc 1/1 Running 0 52s
|
||||
kube-system rke2-ingress-nginx-controller-7hprp 1/1 Running 0 52s
|
||||
kube-system rke2-ingress-nginx-controller-x658h 1/1 Running 0 52s
|
||||
kube-system rke2-metrics-server-6564db4569-vdfkn 1/1 Running 0 66s
|
||||
```
|
||||
|
||||
**Result:** You have confirmed that you can access the cluster with `kubectl` and the RKE2 cluster is running successfully. Now the Rancher management server can be installed on the cluster.
|
||||
+120
@@ -0,0 +1,120 @@
|
||||
---
|
||||
title: Setting up a High-availability K3s Kubernetes Cluster for Rancher
|
||||
shortTitle: Set up K3s for Rancher
|
||||
weight: 2
|
||||
---
|
||||
|
||||
This section describes how to install a Kubernetes cluster according to the [best practices for the Rancher server environment.]({{<baseurl>}}/rancher/v2.6/en/overview/architecture-recommendations/#environment-for-kubernetes-installations)
|
||||
|
||||
For systems without direct internet access, refer to the air gap installation instructions.
|
||||
|
||||
> **Single-node Installation Tip:**
|
||||
> In a single-node Kubernetes cluster, the Rancher server does not have high availability, which is important for running Rancher in production. However, installing Rancher on a single-node cluster can be useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path.
|
||||
>
|
||||
> To set up a single-node K3s cluster, run the Rancher server installation command on just one node instead of two nodes.
|
||||
>
|
||||
> In both single-node setups, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster.
|
||||
|
||||
# Prerequisites
|
||||
|
||||
These instructions assume you have set up two nodes, a load balancer, a DNS record, and an external MySQL database as described in [this section.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-ha-with-external-db/)
|
||||
|
||||
Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/) To specify the K3s version, use the INSTALL_K3S_VERSION environment variable when running the K3s installation script.
|
||||
# Installing Kubernetes
|
||||
|
||||
### 1. Install Kubernetes and Set up the K3s Server
|
||||
|
||||
When running the command to start the K3s Kubernetes API server, you will pass in an option to use the external datastore that you set up earlier.
|
||||
|
||||
1. Connect to one of the Linux nodes that you have prepared to run the Rancher server.
|
||||
1. On the Linux node, run this command to start the K3s server and connect it to the external datastore:
|
||||
```
|
||||
curl -sfL https://get.k3s.io | sh -s - server \
|
||||
--datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name"
|
||||
```
|
||||
To specify the K3s version, use the INSTALL_K3S_VERSION environment variable:
|
||||
```sh
|
||||
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z sh -s - server \
|
||||
--datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name"
|
||||
```
|
||||
Note: The datastore endpoint can also be passed in using the environment variable `$K3S_DATASTORE_ENDPOINT`.
|
||||
|
||||
1. Repeat the same command on your second K3s server node.
|
||||
|
||||
### 2. Confirm that K3s is Running
|
||||
|
||||
To confirm that K3s has been set up successfully, run the following command on either of the K3s server nodes:
|
||||
```
|
||||
sudo k3s kubectl get nodes
|
||||
```
|
||||
|
||||
Then you should see two nodes with the master role:
|
||||
```
|
||||
ubuntu@ip-172-31-60-194:~$ sudo k3s kubectl get nodes
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
ip-172-31-60-194 Ready master 44m v1.17.2+k3s1
|
||||
ip-172-31-63-88 Ready master 6m8s v1.17.2+k3s1
|
||||
```
|
||||
|
||||
Then test the health of the cluster pods:
|
||||
```
|
||||
sudo k3s kubectl get pods --all-namespaces
|
||||
```
|
||||
|
||||
**Result:** You have successfully set up a K3s Kubernetes cluster.
|
||||
|
||||
### 3. Save and Start Using the kubeconfig File
|
||||
|
||||
When you installed K3s on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/k3s/k3s.yaml`. This file contains credentials for full access to the cluster, and you should save this file in a secure location.
|
||||
|
||||
To use this `kubeconfig` file,
|
||||
|
||||
1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool.
|
||||
2. Copy the file at `/etc/rancher/k3s/k3s.yaml` and save it to the directory `~/.kube/config` on your local machine.
|
||||
3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your load balancer, referring to port 6443. (The Kubernetes API server will be reached at port 6443, while the Rancher server will be reached at ports 80 and 443.) Here is an example `k3s.yaml`:
|
||||
|
||||
```yml
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
certificate-authority-data: [CERTIFICATE-DATA]
|
||||
server: [LOAD-BALANCER-DNS]:6443 # Edit this line
|
||||
name: default
|
||||
contexts:
|
||||
- context:
|
||||
cluster: default
|
||||
user: default
|
||||
name: default
|
||||
current-context: default
|
||||
kind: Config
|
||||
preferences: {}
|
||||
users:
|
||||
- name: default
|
||||
user:
|
||||
password: [PASSWORD]
|
||||
username: admin
|
||||
```
|
||||
|
||||
**Result:** You can now use `kubectl` to manage your K3s cluster. If you have more than one kubeconfig file, you can specify which one you want to use by passing in the path to the file when using `kubectl`:
|
||||
|
||||
```
|
||||
kubectl --kubeconfig ~/.kube/config/k3s.yaml get pods --all-namespaces
|
||||
```
|
||||
|
||||
For more information about the `kubeconfig` file, refer to the [K3s documentation]({{<baseurl>}}/k3s/latest/en/cluster-access/) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files.
|
||||
|
||||
### 4. Check the Health of Your Cluster Pods
|
||||
|
||||
Now that you have set up the `kubeconfig` file, you can use `kubectl` to access the cluster from your local machine.
|
||||
|
||||
Check that all the required pods and containers are healthy are ready to continue:
|
||||
|
||||
```
|
||||
ubuntu@ip-172-31-60-194:~$ sudo kubectl get pods --all-namespaces
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
kube-system metrics-server-6d684c7b5-bw59k 1/1 Running 0 8d
|
||||
kube-system local-path-provisioner-58fb86bdfd-fmkvd 1/1 Running 0 8d
|
||||
kube-system coredns-d798c9dd-ljjnf 1/1 Running 0 8d
|
||||
```
|
||||
|
||||
**Result:** You have confirmed that you can access the cluster with `kubectl` and the K3s cluster is running successfully. Now the Rancher management server can be installed on the cluster.
|
||||
+25
@@ -0,0 +1,25 @@
|
||||
---
|
||||
title: About High-availability Installations
|
||||
weight: 1
|
||||
---
|
||||
|
||||
We recommend using Helm, a Kubernetes package manager, to install Rancher on a dedicated Kubernetes cluster. This is called a high-availability Kubernetes installation because increased availability is achieved by running Rancher on multiple nodes.
|
||||
|
||||
In a standard installation, Kubernetes is first installed on three nodes that are hosted in an infrastructure provider such as Amazon's EC2 or Google Compute Engine.
|
||||
|
||||
Then Helm is used to install Rancher on top of the Kubernetes cluster. Helm uses Rancher's Helm chart to install a replica of Rancher on each of the three nodes in the Kubernetes cluster. We recommend using a load balancer to direct traffic to each replica of Rancher in the cluster, in order to increase Rancher's availability.
|
||||
|
||||
The Rancher server data is stored on etcd. This etcd database also runs on all three nodes, and requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can fail, requiring the cluster to be restored from backup.
|
||||
|
||||
For information on how Rancher works, regardless of the installation method, refer to the [architecture section.]({{<baseurl>}}/rancher/v2.6/en/overview/architecture)
|
||||
|
||||
### Recommended Architecture
|
||||
|
||||
- DNS for Rancher should resolve to a layer 4 load balancer
|
||||
- The Load Balancer should forward port TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster.
|
||||
- The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443.
|
||||
- The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment.
|
||||
|
||||
<figcaption>Kubernetes Rancher install with layer 4 load balancer, depicting SSL termination at ingress controllers</figcaption>
|
||||

|
||||
<sup>Kubernetes Rancher install with Layer 4 load balancer (TCP), depicting SSL termination at ingress controllers</sup>
|
||||
+67
@@ -0,0 +1,67 @@
|
||||
---
|
||||
title: Setting up Nodes in Amazon EC2
|
||||
weight: 3
|
||||
---
|
||||
|
||||
In this tutorial, you will learn one way to set up Linux nodes for the Rancher management server. These nodes will fulfill the node requirements for [OS, Docker, hardware, and networking.]({{<baseurl>}}/rancher/v2.6/en/installation/requirements/)
|
||||
|
||||
If the Rancher server will be installed on an RKE Kubernetes cluster, you should provision three instances.
|
||||
|
||||
If the Rancher server will be installed on a K3s Kubernetes cluster, you only need to provision two instances.
|
||||
|
||||
If the Rancher server is installed in a single Docker container, you only need one instance.
|
||||
|
||||
### 1. Optional Preparation
|
||||
|
||||
- **Create IAM role:** To allow Rancher to manipulate AWS resources, such as provisioning new storage or new nodes, you will need to configure Amazon as a cloud provider. There are several things you'll need to do to set up the cloud provider on EC2, but part of this process is setting up an IAM role for the Rancher server nodes. For the full details on setting up the cloud provider, refer to this [page.]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/cloud-providers/)
|
||||
- **Create security group:** We also recommend setting up a security group for the Rancher nodes that complies with the [port requirements for Rancher nodes.]({{<baseurl>}}/rancher/v2.6/en/installation/requirements/#port-requirements)
|
||||
|
||||
### 2. Provision Instances
|
||||
|
||||
1. Log into the [Amazon AWS EC2 Console](https://console.aws.amazon.com/ec2/) to get started. Make sure to take note of the **Region** where your EC2 instances (Linux nodes) are created, because all of the infrastructure for the Rancher management server should be in the same region.
|
||||
1. In the left panel, click **Instances**.
|
||||
1. Click **Launch Instance**.
|
||||
1. In the section called **Step 1: Choose an Amazon Machine Image (AMI),** we will use Ubuntu 18.04 as the Linux OS, using `ami-0d1cd67c26f5fca19 (64-bit x86)`. Go to the Ubuntu AMI and click **Select**.
|
||||
1. In the **Step 2: Choose an Instance Type** section, select the `t2.medium` type.
|
||||
1. Click **Next: Configure Instance Details**.
|
||||
1. In the **Number of instances** field, enter the number of instances. A high-availability K3s cluster requires only two instances, while a high-availability RKE cluster requires three instances.
|
||||
1. Optional: If you created an IAM role for Rancher to manipulate AWS resources, select the new IAM role in the **IAM role** field.
|
||||
1. Click **Next: Add Storage,** **Next: Add Tags,** and **Next: Configure Security Group**.
|
||||
1. In **Step 6: Configure Security Group,** select a security group that complies with the [port requirements]({{<baseurl>}}/rancher/v2.6/en/installation/requirements/#port-requirements) for Rancher nodes.
|
||||
1. Click **Review and Launch**.
|
||||
1. Click **Launch**.
|
||||
1. Choose a new or existing key pair that you will use to connect to your instance later. If you are using an existing key pair, make sure you already have access to the private key.
|
||||
1. Click **Launch Instances**.
|
||||
|
||||
|
||||
**Result:** You have created Rancher nodes that satisfy the requirements for OS, hardware, and networking.
|
||||
|
||||
**Note:** If the nodes are being used for an RKE Kubernetes cluster, install Docker on each node in the next step. For a K3s Kubernetes cluster, the nodes are now ready to install K3s.
|
||||
|
||||
### 3. Install Docker and Create User for RKE Kubernetes Cluster Nodes
|
||||
|
||||
1. From the [AWS EC2 console,](https://console.aws.amazon.com/ec2/) click **Instances** in the left panel.
|
||||
1. Go to the instance that you want to install Docker on. Select the instance and click **Actions > Connect**.
|
||||
1. Connect to the instance by following the instructions on the screen that appears. Copy the Public DNS of the instance. An example command to SSH into the instance is as follows:
|
||||
```
|
||||
sudo ssh -i [path-to-private-key] ubuntu@[public-DNS-of-instance]
|
||||
```
|
||||
1. Run the following command on the instance to install Docker with one of Rancher's installation scripts:
|
||||
```
|
||||
curl https://releases.rancher.com/install-docker/18.09.sh | sh
|
||||
```
|
||||
1. When you are connected to the instance, run the following command on the instance to create a user:
|
||||
```
|
||||
sudo usermod -aG docker ubuntu
|
||||
```
|
||||
1. Repeat these steps so that Docker is installed on each node that will eventually run the Rancher management server.
|
||||
|
||||
> To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher’s Docker installation scripts.
|
||||
|
||||
**Result:** You have set up Rancher server nodes that fulfill all the node requirements for OS, Docker, hardware and networking.
|
||||
|
||||
### Next Steps for RKE Kubernetes Cluster Nodes
|
||||
|
||||
If you are going to install an RKE cluster on the new nodes, take note of the **IPv4 Public IP** and **Private IP** of each node. This information can be found on the **Description** tab for each node after it is created. The public and private IP will be used to populate the `address` and `internal_address` of each node in the RKE cluster configuration file, `rancher-cluster.yml`.
|
||||
|
||||
RKE will also need access to the private key to connect to each node. Therefore, you might want to take note of the path to your private keys to connect to the nodes, which can also be included in the `rancher-cluster.yml` under the `ssh_key_path` directive for each node.
|
||||
+67
@@ -0,0 +1,67 @@
|
||||
---
|
||||
title: 'Set up Infrastructure for a High Availability K3s Kubernetes Cluster'
|
||||
weight: 1
|
||||
---
|
||||
|
||||
This tutorial is intended to help you provision the underlying infrastructure for a Rancher management server.
|
||||
|
||||
The recommended infrastructure for the Rancher-only Kubernetes cluster differs depending on whether Rancher will be installed on a K3s Kubernetes cluster, an RKE Kubernetes cluster, or a single Docker container.
|
||||
|
||||
For more information about each installation option, refer to [this page.]({{<baseurl>}}/rancher/v2.6/en/installation)
|
||||
|
||||
> **Note:** These nodes must be in the same region. You may place these servers in separate availability zones (datacenter).
|
||||
|
||||
To install the Rancher management server on a high-availability K3s cluster, we recommend setting up the following infrastructure:
|
||||
|
||||
- **Two Linux nodes,** typically virtual machines, in the infrastructure provider of your choice.
|
||||
- **An external database** to store the cluster data. We recommend MySQL.
|
||||
- **A load balancer** to direct traffic to the two nodes.
|
||||
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
|
||||
|
||||
### 1. Set up Linux Nodes
|
||||
|
||||
Make sure that your nodes fulfill the general installation requirements for [OS, container runtime, hardware, and networking.]({{<baseurl>}}/rancher/v2.6/en/installation/requirements/)
|
||||
|
||||
For an example of one way to set up Linux nodes, refer to this [tutorial]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node) for setting up nodes as instances in Amazon EC2.
|
||||
|
||||
### 2. Set up External Datastore
|
||||
|
||||
The ability to run Kubernetes using a datastore other than etcd sets K3s apart from other Kubernetes distributions. This feature provides flexibility to Kubernetes operators. The available options allow you to select a datastore that best fits your use case.
|
||||
|
||||
For a high-availability K3s installation, you will need to set a [MySQL](https://www.mysql.com/) external database. Rancher has been tested on K3s Kubernetes clusters using MySQL version 5.7 as the datastore.
|
||||
|
||||
When you install Kubernetes using the K3s installation script, you will pass in details for K3s to connect to the database.
|
||||
|
||||
For an example of one way to set up the MySQL database, refer to this [tutorial]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/rds/) for setting up MySQL on Amazon's RDS service.
|
||||
|
||||
For the complete list of options that are available for configuring a K3s cluster datastore, refer to the [K3s documentation.]({{<baseurl>}}/k3s/latest/en/installation/datastore/)
|
||||
|
||||
### 3. Set up the Load Balancer
|
||||
|
||||
You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
|
||||
|
||||
When Kubernetes gets set up in a later step, the K3s tool will deploy a Traefik Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
|
||||
|
||||
When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the Traefik Ingress controller to listen for traffic destined for the Rancher hostname. The Traefik Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster.
|
||||
|
||||
For your implementation, consider if you want or need to use a Layer-4 or Layer-7 load balancer:
|
||||
|
||||
- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic to port TCP/80 to the Ingress pod in the Rancher deployment.
|
||||
- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/chart-options/#external-tls-termination)
|
||||
|
||||
For an example showing how to set up an NGINX load balancer, refer to [this page.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nginx/)
|
||||
|
||||
For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nlb/)
|
||||
|
||||
> **Important:**
|
||||
> Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.
|
||||
|
||||
### 4. Set up the DNS Record
|
||||
|
||||
Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.
|
||||
|
||||
Depending on your environment, this may be an A record pointing to the load balancer IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on.
|
||||
|
||||
You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.
|
||||
|
||||
For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
|
||||
+58
@@ -0,0 +1,58 @@
|
||||
---
|
||||
title: 'Set up Infrastructure for a High Availability RKE Kubernetes Cluster'
|
||||
weight: 2
|
||||
---
|
||||
|
||||
This tutorial is intended to help you create a high-availability RKE cluster that can be used to install a Rancher server.
|
||||
|
||||
> **Note:** These nodes must be in the same region. You may place these servers in separate availability zones (datacenter).
|
||||
|
||||
To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:
|
||||
|
||||
- **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon's EC2, Google Compute Engine, or vSphere.
|
||||
- **A load balancer** to direct front-end traffic to the three nodes.
|
||||
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
|
||||
|
||||
These nodes must be in the same region/data center. You may place these servers in separate availability zones.
|
||||
|
||||
### Why three nodes?
|
||||
|
||||
In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes.
|
||||
|
||||
The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes.
|
||||
|
||||
### 1. Set up Linux Nodes
|
||||
|
||||
Make sure that your nodes fulfill the general installation requirements for [OS, container runtime, hardware, and networking.]({{<baseurl>}}/rancher/v2.6/en/installation/requirements/)
|
||||
|
||||
For an example of one way to set up Linux nodes, refer to this [tutorial]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/) for setting up nodes as instances in Amazon EC2.
|
||||
|
||||
### 2. Set up the Load Balancer
|
||||
|
||||
You will also need to set up a load balancer to direct traffic to the Rancher replica on any of the three nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
|
||||
|
||||
When Kubernetes gets set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
|
||||
|
||||
When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster.
|
||||
|
||||
For your implementation, consider if you want or need to use a Layer-4 or Layer-7 load balancer:
|
||||
|
||||
- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic to port TCP/80 to the Ingress pod in the Rancher deployment.
|
||||
- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/chart-options/#external-tls-termination)
|
||||
|
||||
For an example showing how to set up an NGINX load balancer, refer to [this page.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nginx/)
|
||||
|
||||
For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nlb/)
|
||||
|
||||
> **Important:**
|
||||
> Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.
|
||||
|
||||
### 3. Set up the DNS Record
|
||||
|
||||
Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.
|
||||
|
||||
Depending on your environment, this may be an A record pointing to the LB IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on.
|
||||
|
||||
You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.
|
||||
|
||||
For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
|
||||
+52
@@ -0,0 +1,52 @@
|
||||
---
|
||||
title: 'Set up Infrastructure for a High Availability RKE2 Kubernetes Cluster'
|
||||
weight: 1
|
||||
---
|
||||
|
||||
This tutorial is intended to help you provision the underlying infrastructure for a Rancher management server.
|
||||
|
||||
The recommended infrastructure for the Rancher-only Kubernetes cluster differs depending on whether Rancher will be installed on a RKE2 Kubernetes cluster, an RKE Kubernetes cluster, or a single Docker container.
|
||||
|
||||
> **Note:** These nodes must be in the same region. You may place these servers in separate availability zones (datacenter).
|
||||
|
||||
To install the Rancher management server on a high-availability RKE2 cluster, we recommend setting up the following infrastructure:
|
||||
|
||||
- **Three Linux nodes,** typically virtual machines, in the infrastructure provider of your choice.
|
||||
- **A load balancer** to direct traffic to the two nodes.
|
||||
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
|
||||
|
||||
### 1. Set up Linux Nodes
|
||||
|
||||
Make sure that your nodes fulfill the general installation requirements for [OS, container runtime, hardware, and networking.]({{<baseurl>}}/rancher/v2.6/en/installation/requirements/)
|
||||
|
||||
For an example of one way to set up Linux nodes, refer to this [tutorial]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node) for setting up nodes as instances in Amazon EC2.
|
||||
|
||||
### 2. Set up the Load Balancer
|
||||
|
||||
You will also need to set up a load balancer to direct traffic to the Rancher replica on all nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
|
||||
|
||||
When Kubernetes gets set up in a later step, the RKE2 tool will deploy an Nginx Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
|
||||
|
||||
When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the Nginx Ingress controller to listen for traffic destined for the Rancher hostname. The Nginx Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster.
|
||||
|
||||
For your implementation, consider if you want or need to use a Layer-4 or Layer-7 load balancer:
|
||||
|
||||
- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic to port TCP/80 to the Ingress pod in the Rancher deployment.
|
||||
- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/chart-options/#external-tls-termination)
|
||||
|
||||
For an example showing how to set up an NGINX load balancer, refer to [this page.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nginx/)
|
||||
|
||||
For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nlb/)
|
||||
|
||||
> **Important:**
|
||||
> Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.
|
||||
|
||||
### 4. Set up the DNS Record
|
||||
|
||||
Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.
|
||||
|
||||
Depending on your environment, this may be an A record pointing to the load balancer IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on.
|
||||
|
||||
You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.
|
||||
|
||||
For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
|
||||
+10
@@ -0,0 +1,10 @@
|
||||
---
|
||||
title: Don't have infrastructure for your Kubernetes cluster? Try one of these tutorials.
|
||||
shortTitle: Infrastructure Tutorials
|
||||
weight: 5
|
||||
---
|
||||
|
||||
To set up infrastructure for a high-availability K3s Kubernetes cluster with an external DB, refer to [this page.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-ha-with-external-db/)
|
||||
|
||||
|
||||
To set up infrastructure for a high-availability RKE Kubernetes cluster, refer to [this page.]({{<baseurl>}}/rancher/v2.6/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-ha/)
|
||||
+83
@@ -0,0 +1,83 @@
|
||||
---
|
||||
title: Setting up an NGINX Load Balancer
|
||||
weight: 4
|
||||
---
|
||||
|
||||
NGINX will be configured as Layer 4 load balancer (TCP) that forwards connections to one of your Rancher nodes.
|
||||
|
||||
In this configuration, the load balancer is positioned in front of your nodes. The load balancer can be any host capable of running NGINX.
|
||||
|
||||
One caveat: do not use one of your Rancher nodes as the load balancer.
|
||||
|
||||
> These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on an RKE Kubernetes cluster, three nodes are required. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required.
|
||||
|
||||
## Install NGINX
|
||||
|
||||
Start by installing NGINX on the node you want to use as a load balancer. NGINX has packages available for all known operating systems. The versions tested are `1.14` and `1.15`. For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/).
|
||||
|
||||
The `stream` module is required, which is present when using the official NGINX packages. Please refer to your OS documentation on how to install and enable the NGINX `stream` module on your operating system.
|
||||
|
||||
## Create NGINX Configuration
|
||||
|
||||
After installing NGINX, you need to update the NGINX configuration file, `nginx.conf`, with the IP addresses for your nodes.
|
||||
|
||||
1. Copy and paste the code sample below into your favorite text editor. Save it as `nginx.conf`.
|
||||
|
||||
2. From `nginx.conf`, replace both occurrences (port 80 and port 443) of `<IP_NODE_1>`, `<IP_NODE_2>`, and `<IP_NODE_3>` with the IPs of your nodes.
|
||||
|
||||
> **Note:** See [NGINX Documentation: TCP and UDP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/) for all configuration options.
|
||||
|
||||
<figcaption>Example NGINX config</figcaption>
|
||||
```
|
||||
worker_processes 4;
|
||||
worker_rlimit_nofile 40000;
|
||||
|
||||
events {
|
||||
worker_connections 8192;
|
||||
}
|
||||
|
||||
stream {
|
||||
upstream rancher_servers_http {
|
||||
least_conn;
|
||||
server <IP_NODE_1>:80 max_fails=3 fail_timeout=5s;
|
||||
server <IP_NODE_2>:80 max_fails=3 fail_timeout=5s;
|
||||
server <IP_NODE_3>:80 max_fails=3 fail_timeout=5s;
|
||||
}
|
||||
server {
|
||||
listen 80;
|
||||
proxy_pass rancher_servers_http;
|
||||
}
|
||||
|
||||
upstream rancher_servers_https {
|
||||
least_conn;
|
||||
server <IP_NODE_1>:443 max_fails=3 fail_timeout=5s;
|
||||
server <IP_NODE_2>:443 max_fails=3 fail_timeout=5s;
|
||||
server <IP_NODE_3>:443 max_fails=3 fail_timeout=5s;
|
||||
}
|
||||
server {
|
||||
listen 443;
|
||||
proxy_pass rancher_servers_https;
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
3. Save `nginx.conf` to your load balancer at the following path: `/etc/nginx/nginx.conf`.
|
||||
|
||||
4. Load the updates to your NGINX configuration by running the following command:
|
||||
|
||||
```
|
||||
# nginx -s reload
|
||||
```
|
||||
|
||||
## Option - Run NGINX as Docker container
|
||||
|
||||
Instead of installing NGINX as a package on the operating system, you can rather run it as a Docker container. Save the edited **Example NGINX config** as `/etc/nginx.conf` and run the following command to launch the NGINX container:
|
||||
|
||||
```
|
||||
docker run -d --restart=unless-stopped \
|
||||
-p 80:80 -p 443:443 \
|
||||
-v /etc/nginx.conf:/etc/nginx/nginx.conf \
|
||||
nginx:1.14
|
||||
```
|
||||
+179
@@ -0,0 +1,179 @@
|
||||
---
|
||||
title: Setting up Amazon ELB Network Load Balancer
|
||||
weight: 5
|
||||
---
|
||||
|
||||
This how-to guide describes how to set up a Network Load Balancer (NLB) in Amazon's EC2 service that will direct traffic to multiple instances on EC2.
These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on an RKE Kubernetes cluster, three nodes are required. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required.

This tutorial describes one possible way to set up your load balancer; it is not the only way. Other types of load balancers, such as a Classic Load Balancer or Application Load Balancer, could also direct traffic to the Rancher server nodes.

Rancher only supports the Amazon NLB when the port 443 listener runs in `tcp` (passthrough) mode rather than `tls` (termination) mode, because the NLB does not inject the correct headers into requests that it terminates. This means that if you want to use certificates managed by AWS Certificate Manager (ACM), you should use an ALB instead.

# Setting up the Load Balancer

Configuring an Amazon NLB is a multistage process:

1. [Create Target Groups](#1-create-target-groups)
2. [Register Targets](#2-register-targets)
3. [Create Your NLB](#3-create-your-nlb)
4. [Add listener to NLB for TCP port 80](#4-add-listener-to-nlb-for-tcp-port-80)

# Requirements

These instructions assume you have already created Linux instances in EC2. The load balancer will direct traffic to these nodes.

# 1. Create Target Groups

Begin by creating two target groups for the **TCP** protocol: one for TCP port 443, and one for TCP port 80 (which will redirect to TCP port 443). You'll add your Linux nodes to these groups.

Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener for port 80 because traffic to port 80 is automatically redirected to port 443.

Regardless of whether an NGINX Ingress or Traefik Ingress controller is used, the Ingress should redirect traffic from port 80 to port 443.

1. Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started. Make sure to select the **Region** where your EC2 instances (Linux nodes) are created.
1. Select **Services** and choose **EC2**. Find the **Load Balancing** section and open **Target Groups**.
1. Click **Create target group** to create the first target group, for TCP port 443.

> **Note:** Health checks are handled differently based on the Ingress. For details, refer to [this section.](#health-check-paths-for-nginx-ingress-and-traefik-ingresses)

### Target Group (TCP port 443)

Configure the first target group according to the table below.

| Option            | Setting           |
|-------------------|-------------------|
| Target Group Name | `rancher-tcp-443` |
| Target type       | `instance`        |
| Protocol          | `TCP`             |
| Port              | `443`             |
| VPC               | Choose your VPC   |

Health check settings:

| Option              | Setting          |
|---------------------|------------------|
| Protocol            | TCP              |
| Port                | `override`, `80` |
| Healthy threshold   | `3`              |
| Unhealthy threshold | `3`              |
| Timeout             | `6 seconds`      |
| Interval            | `10 seconds`     |

Click **Create target group** to create the second target group, for TCP port 80.
### Target Group (TCP port 80)

Configure the second target group according to the table below.

| Option            | Setting          |
|-------------------|------------------|
| Target Group Name | `rancher-tcp-80` |
| Target type       | `instance`       |
| Protocol          | `TCP`            |
| Port              | `80`             |
| VPC               | Choose your VPC  |

Health check settings:

| Option              | Setting        |
|---------------------|----------------|
| Protocol            | TCP            |
| Port                | `traffic port` |
| Healthy threshold   | `3`            |
| Unhealthy threshold | `3`            |
| Timeout             | `6 seconds`    |
| Interval            | `10 seconds`   |
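
If you prefer to script this step, the same two target groups can be sketched with the AWS CLI (`aws elbv2`). This is not part of the official instructions; the VPC ID is a placeholder, and supported health check flags can vary with CLI version:

```
# Target group for TCP port 443, health check overridden to port 80.
aws elbv2 create-target-group \
  --name rancher-tcp-443 \
  --protocol TCP --port 443 \
  --vpc-id <VPC_ID> \
  --target-type instance \
  --health-check-protocol TCP --health-check-port 80 \
  --healthy-threshold-count 3 --unhealthy-threshold-count 3 \
  --health-check-interval-seconds 10

# Target group for TCP port 80, health check on the traffic port.
aws elbv2 create-target-group \
  --name rancher-tcp-80 \
  --protocol TCP --port 80 \
  --vpc-id <VPC_ID> \
  --target-type instance \
  --health-check-protocol TCP \
  --healthy-threshold-count 3 --unhealthy-threshold-count 3 \
  --health-check-interval-seconds 10
```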
# 2. Register Targets

Next, add your Linux nodes to both target groups.

Select the target group named **rancher-tcp-443**, click the **Targets** tab and choose **Edit**.

{{< img "/img/rancher/ha/nlb/edit-targetgroup-443.png" "Edit target group 443">}}

Select the instances (Linux nodes) you want to add, and click **Add to registered**.

<hr>
**Screenshot Add targets to target group TCP port 443**<br/>

{{< img "/img/rancher/ha/nlb/add-targets-targetgroup-443.png" "Add targets to target group 443">}}

<hr>
**Screenshot Added targets to target group TCP port 443**<br/>

{{< img "/img/rancher/ha/nlb/added-targets-targetgroup-443.png" "Added targets to target group 443">}}

When the instances are added, click **Save** on the bottom right of the screen.

Repeat those steps, replacing **rancher-tcp-443** with **rancher-tcp-80**. The same instances need to be added as targets to this target group. A CLI alternative is sketched below.
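
For scripted setups, the same registration can be done with the AWS CLI; the target group ARNs and instance IDs below are placeholders:

```
# Register the three Linux nodes with each target group.
aws elbv2 register-targets \
  --target-group-arn <RANCHER_TCP_443_TG_ARN> \
  --targets Id=<INSTANCE_ID_1> Id=<INSTANCE_ID_2> Id=<INSTANCE_ID_3>

aws elbv2 register-targets \
  --target-group-arn <RANCHER_TCP_80_TG_ARN> \
  --targets Id=<INSTANCE_ID_1> Id=<INSTANCE_ID_2> Id=<INSTANCE_ID_3>
```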
# 3. Create Your NLB

Use Amazon's wizard to create a Network Load Balancer. As part of this process, you'll add the target groups you created in [1. Create Target Groups](#1-create-target-groups).

1. From your web browser, navigate to the [Amazon EC2 Console](https://console.aws.amazon.com/ec2/).

2. From the navigation pane, choose **LOAD BALANCING** > **Load Balancers**.

3. Click **Create Load Balancer**.

4. Choose **Network Load Balancer** and click **Create**. Then complete each form.

- [Step 1: Configure Load Balancer](#step-1-configure-load-balancer)
- [Step 2: Configure Routing](#step-2-configure-routing)
- [Step 3: Register Targets](#step-3-register-targets)
- [Step 4: Review](#step-4-review)

### Step 1: Configure Load Balancer

Set the following fields in the form:

- **Name:** `rancher`
- **Scheme:** `internal` or `internet-facing`. The scheme you choose for your NLB depends on the configuration of your instances and VPC. If your instances do not have public IPs associated with them, or you will only be accessing Rancher internally, set your NLB scheme to `internal` rather than `internet-facing`.
- **Listeners:** The Load Balancer Protocol should be `TCP` and the corresponding Load Balancer Port should be set to `443`.
- **Availability Zones:** Select your **VPC** and **Availability Zones**.

### Step 2: Configure Routing

1. From the **Target Group** drop-down, choose **Existing target group**.
1. From the **Name** drop-down, choose `rancher-tcp-443`.
1. Open **Advanced health check settings**, and configure **Interval** to `10 seconds`.

### Step 3: Register Targets

Since you registered your targets earlier, all you have to do is click **Next: Review**.

### Step 4: Review

Look over the load balancer details and click **Create** when you're satisfied.

After AWS creates the NLB, click **Close**.
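
For completeness, a CLI sketch of the same wizard steps; the subnet IDs and ARNs are placeholders, and the console flow above remains the documented path:

```
# Create the network load balancer across your availability zones.
aws elbv2 create-load-balancer \
  --name rancher \
  --type network \
  --scheme internet-facing \
  --subnets <SUBNET_ID_1> <SUBNET_ID_2> <SUBNET_ID_3>

# Forward TCP port 443 to the rancher-tcp-443 target group.
aws elbv2 create-listener \
  --load-balancer-arn <NLB_ARN> \
  --protocol TCP --port 443 \
  --default-actions Type=forward,TargetGroupArn=<RANCHER_TCP_443_TG_ARN>
```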
# 4. Add listener to NLB for TCP port 80

1. Select your newly created NLB and select the **Listeners** tab.

2. Click **Add listener**.

3. Use `TCP`:`80` as **Protocol** : **Port**.

4. Click **Add action** and choose **Forward to...**.

5. From the **Forward to** drop-down, choose `rancher-tcp-80`.

6. Click **Save** in the top right of the screen.
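
The CLI equivalent of this listener, again with placeholder ARNs, would be along these lines:

```
# Forward TCP port 80 to the rancher-tcp-80 target group.
aws elbv2 create-listener \
  --load-balancer-arn <NLB_ARN> \
  --protocol TCP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<RANCHER_TCP_80_TG_ARN>
```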
# Health Check Paths for NGINX Ingress and Traefik Ingresses

K3s and RKE Kubernetes clusters handle health checks differently because they use different Ingress controllers by default.

For RKE Kubernetes clusters, NGINX Ingress is used by default, whereas for K3s Kubernetes clusters, Traefik is the default Ingress.

- **Traefik:** The health check path is `/ping`. By default, `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served.
- **NGINX Ingress:** The default backend of the NGINX Ingress controller has a `/healthz` endpoint. By default, `/healthz` is always matched (regardless of Host), and a response from [`ingress-nginx` itself](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212) is always served.

To simulate an accurate health check, it is best practice to combine the Host header (the Rancher hostname) with `/ping` (for K3s clusters) or `/healthz` (for RKE clusters) wherever possible, so that the response comes from the Rancher Pods rather than from the Ingress itself.
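
As a sketch of such a check (the hostname `rancher.example.com` is a placeholder for your Rancher hostname, and `<NODE_IP>` is one of your nodes):

```
# K3s (Traefik) health check:
curl -s -H "Host: rancher.example.com" http://<NODE_IP>/ping

# RKE (NGINX Ingress) health check:
curl -s -H "Host: rancher.example.com" http://<NODE_IP>/healthz
```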
---
title: Setting up a MySQL Database in Amazon RDS
weight: 4
---

This tutorial describes how to set up a MySQL database in Amazon RDS.

This database can later be used as an external datastore for a high-availability K3s Kubernetes cluster.
1. Log into the [Amazon AWS RDS Console](https://console.aws.amazon.com/rds/) to get started. Make sure to select the **Region** where your EC2 instances (Linux nodes) are created.
1. In the left panel, click **Databases**.
1. Click **Create database**.
1. In the **Engine type** section, click **MySQL**.
1. In the **Version** section, choose **MySQL 5.7.22**.
1. In the **Settings** section, under **Credentials Settings**, enter a master password for the **admin** master username. Confirm the password.
1. Expand the **Additional configuration** section. In the **Initial database name** field, enter a name. The name can contain only letters, numbers, and underscores. This name will be used to connect to the database.
1. Click **Create database**.

You'll need to capture the following information about the new database so that the K3s Kubernetes cluster can connect to it.

To see this information in the Amazon RDS console, click **Databases** and click the name of the database that you created.

- **Username:** Use the admin username.
- **Password:** Use the admin password.
- **Hostname:** Use the **Endpoint** as the hostname. The endpoint is available in the **Connectivity & security** section.
- **Port:** The port should be `3306` by default. You can confirm it in the **Connectivity & security** section.
- **Database name:** Confirm the name by going to the **Configuration** tab. The name is listed under **DB name**.

This information will be used to connect to the database in the following format:

```
mysql://username:password@tcp(hostname:3306)/database-name
```

For more information on configuring the datastore for K3s, refer to the [K3s documentation.]({{<baseurl>}}/k3s/latest/en/installation/datastore/)
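
As a hedged sketch, the connection string is typically passed to K3s server nodes through the `--datastore-endpoint` flag; the endpoint, credentials, and database name below are placeholders:

```
# Install and start a K3s server backed by the external MySQL datastore.
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://admin:password@tcp(<RDS_ENDPOINT>:3306)/<DATABASE_NAME>"
```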
---
title: "Don't have a Kubernetes cluster? Try one of these tutorials."
weight: 4
---

This section contains information on how to install a Kubernetes cluster that the Rancher server can be installed on.

Rancher can run on any Kubernetes cluster.