Merge pull request #3238 from rancher/master

Merge master to staging
This commit is contained in:
Catherine Luse
2021-05-03 15:24:42 -07:00
committed by GitHub
13 changed files with 106 additions and 34 deletions
@@ -5,6 +5,6 @@ aliases:
- /os/v1.x/en/installation/running-rancheros/cloud/openstack
---
As of v0.5.0, RancherOS releases include an Openstack image that can be found on our [releases page](https://github.com/rancher/os/releases). The image format is [QCOW3](https://wiki.qemu.org/Features/Qcow3#Fully_QCOW2_backwards-compatible_feature_set) that is backward compatible with QCOW2.
As of v0.5.0, RancherOS releases include an OpenStack image that can be found on our [releases page](https://github.com/rancher/os/releases). The image format is [QCOW3](https://wiki.qemu.org/Features/Qcow3#Fully_QCOW2_backwards-compatible_feature_set), which is backward compatible with QCOW2.
When launching an instance using the image, you must enable **Advanced Options** -> **Configuration Drive** in order to use a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file.
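For reference, a minimal cloud-config sketch for RancherOS (the hostname and SSH key below are placeholders):

```yaml
#cloud-config
hostname: rancheros-node
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2E... user@example.com
```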
@@ -43,4 +43,4 @@ The `Custom` cloud provider is available if you want to configure any [Kubernete
For the custom cloud provider option, you can refer to the [RKE docs]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/) on how to edit the YAML file for your specific cloud provider. The following cloud providers have more detailed configuration:
* [vSphere]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/vsphere/)
* [Openstack]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/openstack/)
* [OpenStack]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/openstack/)
@@ -43,4 +43,4 @@ The `Custom` cloud provider is available if you want to configure any [Kubernete
For the custom cloud provider option, you can refer to the [RKE docs]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/) on how to edit the YAML file for your specific cloud provider. The following cloud providers have more detailed configuration:
* [vSphere]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/vsphere/)
* [Openstack]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/openstack/)
* [OpenStack]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/openstack/)
@@ -58,7 +58,7 @@ Provision the host according to the [installation requirements]({{<baseurl>}}/ra
>- The only Network Provider available for clusters with Windows support is Flannel.
6. <a id="step-6"></a>Click **Next**.
7. From **Node Role**, choose the roles that you want filled by a cluster node.
7. From **Node Role**, choose the roles that you want filled by a cluster node. You must provision at least one node for each role: `etcd`, `worker`, and `control plane`. All three roles are required for a custom cluster to finish provisioning. For more information on roles, see [this section.]({{<baseurl>}}/rancher/v2.5/en/overview/concepts/#roles-for-nodes-in-kubernetes-clusters)
>**Notes:**
>
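The selected roles correspond to flags on the node registration command that Rancher generates for the custom cluster. As an illustration, a node that fills all three roles would run a command of this shape (the server URL, registration token, and agent tag are placeholders copied from your own Rancher UI):

```
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<agent-version> \
  --server https://<rancher-server-url> --token <registration-token> \
  --etcd --controlplane --worker
```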
@@ -36,10 +36,10 @@ Please refer to [Amazon EC2 security group when using Node Driver]({{<baseurl>}}
### Instance Options
Configure the instances that will be created. Make sure you configure the correct **SSH User** for the configured AMI.
Configure the instances that will be created. Make sure you configure the correct **SSH User** for the configured AMI. Note that a selected region may not support the default instance type. In that case, you must select an instance type that exists in the region; otherwise an error will occur stating that the requested configuration is not supported.
If you need to pass an **IAM Instance Profile Name** (not ARN), for example, when you want to use a [Kubernetes Cloud Provider]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/options/cloud-providers), you will need an additional permission in your policy. See [Example IAM policy with PassRole](#example-iam-policy-with-passrole) for an example policy.
### Engine Options
In the **Engine Options** section of the node template, you can configure the Docker daemon. You may want to specify the Docker version or a Docker registry mirror.
@@ -16,7 +16,9 @@ Make sure the node(s) for the Rancher server fulfill the following requirements:
- [RKE and Hosted Kubernetes](#rke-and-hosted-kubernetes)
- [K3s Kubernetes](#k3s-kubernetes)
- [RancherD](#rancherd)
- [RKE2](#rke2-kubernetes)
- [CPU and Memory for Rancher before v2.4.0](#cpu-and-memory-for-rancher-before-v2-4-0)
- [Ingress](#ingress)
- [Disks](#disks)
- [Networking Requirements](#networking-requirements)
- [Node IP Addresses](#node-ip-addresses)
@@ -30,7 +32,7 @@ The Rancher UI works best in Firefox or Chrome.
Rancher should work with any modern Linux distribution.
Docker is required for nodes that will run RKE Kubernetes clusters. It is not required for RancherD installs.
Docker is required for nodes that will run RKE Kubernetes clusters. It is not required for RancherD or RKE2 Kubernetes installs.
Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
@@ -52,7 +54,7 @@ For the container runtime, RKE should work with any modern Docker version.
For the container runtime, K3s should work with any modern version of Docker or containerd.
Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/) To specify the K3s version, use the `INSTALL_K3S_VERSION` environment variable when running the K3s installation script.
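For example, the install script can be pinned to a specific K3s version (the version string below is a placeholder; choose one supported by your Rancher version):

```
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.19.5+k3s1" sh -
```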
If you are installing Rancher on a K3s cluster with **Raspbian Buster**, follow [these steps]({{<baseurl>}}/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster) to switch to legacy iptables.
@@ -66,7 +68,17 @@ At this time, only Linux OSes that leverage systemd are supported.
To install RancherD on SELinux Enforcing CentOS 8 or RHEL 8 nodes, some [additional steps](#rancherd-on-selinux-enforcing-centos-8-or-rhel-8-nodes) are required.
Docker is not required for RancherD installs.
### RKE2 Specific Requirements
_The RKE2 install is available as of v2.5.6._
For details on which OS versions were tested with RKE2, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
Docker is not required for RKE2 installs.
The Ingress should be deployed as a DaemonSet to ensure your load balancer can successfully route traffic to all nodes. Currently, RKE2 deploys nginx-ingress as a deployment by default, so you will need to deploy it as a DaemonSet by following [these steps.]({{<baseurl>}}/rancher/v2.5/en/installation/resources/k8s-tutorials/ha-rke2/#5-configure-nginx-to-be-a-daemonset)
### Installing Docker
@@ -87,6 +99,8 @@ These CPU and memory requirements apply to each host in the Kubernetes cluster w
These requirements apply to RKE Kubernetes clusters, as well as to hosted Kubernetes clusters such as EKS.
| Deployment Size | Clusters | Nodes | vCPUs | RAM |
| --------------- | ---------- | ------------ | -------| ------- |
| Small | Up to 150 | Up to 1500 | 2 | 8 GB |
@@ -122,15 +136,41 @@ These CPU and memory requirements apply to each instance with RancherD installed
| Small | Up to 5 | Up to 50 | 2 | 5 GB |
| Medium | Up to 15 | Up to 200 | 3 | 9 GB |
### RKE2 Kubernetes
These CPU and memory requirements apply to each instance with RKE2 installed. Minimum recommendations are outlined here.
| Deployment Size | Clusters | Nodes | vCPUs | RAM |
| --------------- | -------- | --------- | ----- | ---- |
| Small | Up to 5 | Up to 50 | 2 | 5 GB |
| Medium | Up to 15 | Up to 200 | 3 | 9 GB |
### Docker
These CPU and memory requirements apply to a host with a [single-node]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) installation of Rancher.
These CPU and memory requirements apply to a host with a [single-node]({{<baseurl>}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker) installation of Rancher.
| Deployment Size | Clusters | Nodes | vCPUs | RAM |
| --------------- | -------- | --------- | ----- | ---- |
| Small | Up to 5 | Up to 50 | 1 | 4 GB |
| Medium | Up to 15 | Up to 200 | 2 | 8 GB |
# Ingress
Each node in the Kubernetes cluster that Rancher is installed on should run an Ingress.
The Ingress should be deployed as a DaemonSet to ensure your load balancer can successfully route traffic to all nodes.
For RKE, K3s, and RancherD installations, you don't have to install the Ingress manually because it is installed by default.
For hosted Kubernetes clusters (EKS, GKE, AKS) and RKE2 Kubernetes installations, you will need to set up the Ingress yourself.
### Ingress for RKE2
Currently, RKE2 deploys nginx-ingress as a deployment by default, so you will need to deploy it as a DaemonSet by following [these steps.]({{<baseurl>}}/rancher/v2.5/en/installation/resources/k8s-tutorials/ha-rke2/#5-configure-nginx-to-be-a-daemonset)
### Ingress for EKS
For an example of how to deploy an nginx-ingress-controller with a LoadBalancer service, refer to [this section.]({{<baseurl>}}/rancher/v2.5/en/installation/install-rancher-on-k8s/amazon-eks/#5-install-an-ingress)
# Disks
Rancher performance depends on the performance of etcd in the cluster. To ensure optimal speed, we recommend always using SSD disks to back your Rancher management Kubernetes cluster. On cloud providers, you will also want to use the minimum size that allows the maximum IOPS. In larger clusters, consider using dedicated storage devices for etcd data and wal directories.
@@ -154,4 +194,4 @@ Before installing Rancher on SELinux Enforcing CentOS 8 nodes or RHEL 8 nodes, y
```
sudo yum install iptables
sudo yum install container-selinux
```
@@ -11,7 +11,7 @@ This section describes how to install a Kubernetes cluster according to the [bes
# Prerequisites
These instructions assume you have set up three nodes, a load balancer, and a DNS record as described [this section.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-rke2-ha)
These instructions assume you have set up three nodes, a load balancer, and a DNS record, as described in [this section.]({{<baseurl>}}/rancher/v2.5/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-rke2-ha)
Note that in order for RKE2 to work correctly with the load balancer, you need to set up two listeners: one for the supervisor on port 9345, and one for the Kubernetes API on port 6443.
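One way to configure those two listeners, assuming an nginx load balancer built with the `stream` module (the node IPs below are placeholders):

```
stream {
    upstream rke2_supervisor {
        server 10.0.0.1:9345;
        server 10.0.0.2:9345;
        server 10.0.0.3:9345;
    }
    upstream rke2_kubernetes_api {
        server 10.0.0.1:6443;
        server 10.0.0.2:6443;
        server 10.0.0.3:6443;
    }
    server {
        listen 9345;
        proxy_pass rke2_supervisor;
    }
    server {
        listen 6443;
        proxy_pass rke2_kubernetes_api;
    }
}
```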
@@ -163,7 +163,7 @@ Currently, RKE2 deploys nginx-ingress as a deployment, and that can impact the R
To rectify that, place the following file in `/var/lib/rancher/rke2/server/manifests` on any of the server nodes:
```
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
@@ -175,7 +175,4 @@ spec:
kind: DaemonSet
daemonset:
useHostPort: true
image:
repository: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller
tag: "v0.34.1"
```
@@ -43,4 +43,4 @@ The `Custom` cloud provider is available if you want to configure any [Kubernete
For the custom cloud provider option, you can refer to the [RKE docs]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/) on how to edit the YAML file for your specific cloud provider. The following cloud providers have more detailed configuration:
* [vSphere]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/vsphere/)
* [Openstack]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/openstack/)
* [OpenStack]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/openstack/)
@@ -11,12 +11,12 @@ See [kubectl Installation](https://kubernetes.io/docs/tasks/tools/install-kubect
### Configuration
When you create a Kubernetes cluster with RKE, RKE creates a `kube_config_rancher-cluster.yml` in the local directory that contains credentials to connect to your new cluster with tools like `kubectl` or `helm`.
When you create a Kubernetes cluster with RKE, RKE creates a `kube_config_cluster.yml` in the local directory that contains credentials to connect to your new cluster with tools like `kubectl` or `helm`.
You can copy this file to `$HOME/.kube/config` or if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environmental variable to the path of `kube_config_rancher-cluster.yml`.
You can copy this file to `$HOME/.kube/config` or, if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environment variable to the path of `kube_config_cluster.yml`.
```
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
```
Test your connectivity with `kubectl` and see if you can get the list of nodes back.
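For example (node names, ages, and versions will differ in your cluster):

```
kubectl get nodes
NAME     STATUS   ROLES                      AGE   VERSION
node-1   Ready    controlplane,etcd,worker   5m    v1.19.6
node-2   Ready    controlplane,etcd,worker   5m    v1.19.6
node-3   Ready    controlplane,etcd,worker   5m    v1.19.6
```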
@@ -16,7 +16,9 @@ Make sure the node(s) for the Rancher server fulfill the following requirements:
- [RKE and Hosted Kubernetes](#rke-and-hosted-kubernetes)
- [K3s Kubernetes](#k3s-kubernetes)
- [RancherD](#rancherd)
- [RKE2 Kubernetes](#rke2-kubernetes)
- [CPU and Memory for Rancher before v2.4.0](#cpu-and-memory-for-rancher-before-v2-4-0)
- [Ingress](#ingress)
- [Disks](#disks)
- [Networking Requirements](#networking-requirements)
- [Node IP Addresses](#node-ip-addresses)
@@ -30,7 +32,7 @@ The Rancher UI works best in Firefox or Chrome.
Rancher should work with any modern Linux distribution.
Docker is required for nodes that will run RKE Kubernetes clusters. It is not required for RancherD installs.
Docker is required for nodes that will run RKE Kubernetes clusters. It is not required for RancherD or RKE2 Kubernetes installs.
Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
@@ -66,7 +68,17 @@ At this time, only Linux OSes that leverage systemd are supported.
To install RancherD on SELinux Enforcing CentOS 8 or RHEL 8 nodes, some [additional steps](#rancherd-on-selinux-enforcing-centos-8-or-rhel-8-nodes) are required.
Docker is not required for RancherD installs.
### RKE2 Specific Requirements
_The RKE2 install is available as of v2.5.6._
For details on which OS versions were tested with RKE2, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
Docker is not required for RKE2 installs.
The Ingress should be deployed as a DaemonSet to ensure your load balancer can successfully route traffic to all nodes. Currently, RKE2 deploys nginx-ingress as a deployment by default, so you will need to deploy it as a DaemonSet by following [these steps.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-rke2/#5-configure-nginx-to-be-a-daemonset)
### Installing Docker
@@ -124,6 +136,15 @@ These CPU and memory requirements apply to each instance with RancherD installed
| Small | Up to 5 | Up to 50 | 2 | 5 GB |
| Medium | Up to 15 | Up to 200 | 3 | 9 GB |
### RKE2 Kubernetes
These CPU and memory requirements apply to each instance with RKE2 installed. Minimum recommendations are outlined here.
| Deployment Size | Clusters | Nodes | vCPUs | RAM |
| --------------- | -------- | --------- | ----- | ---- |
| Small | Up to 5 | Up to 50 | 2 | 5 GB |
| Medium | Up to 15 | Up to 200 | 3 | 9 GB |
### Docker
These CPU and memory requirements apply to a host with a [single-node]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) installation of Rancher.
@@ -147,6 +168,23 @@ These CPU and memory requirements apply to installing Rancher on an RKE Kubernet
| XX-Large | 100+ | 1000+ | [Contact Rancher](https://rancher.com/contact/) | [Contact Rancher](https://rancher.com/contact/) |
{{% /accordion %}}
# Ingress
Each node in the Kubernetes cluster that Rancher is installed on should run an Ingress.
The Ingress should be deployed as a DaemonSet to ensure your load balancer can successfully route traffic to all nodes.
For RKE, K3s, and RancherD installations, you don't have to install the Ingress manually because it is installed by default.
For hosted Kubernetes clusters (EKS, GKE, AKS) and RKE2 Kubernetes installations, you will need to set up the Ingress yourself.
### Ingress for RKE2
Currently, RKE2 deploys nginx-ingress as a deployment by default, so you will need to deploy it as a DaemonSet by following [these steps.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-rke2/#5-configure-nginx-to-be-a-daemonset)
### Ingress for EKS
For an example of how to deploy an nginx-ingress-controller with a LoadBalancer service, refer to [this section.]({{<baseurl>}}/rancher/v2.x/en/installation/install-rancher-on-k8s/amazon-eks/#5-install-an-ingress)
# Disks
Rancher performance depends on the performance of etcd in the cluster. To ensure optimal speed, we recommend always using SSD disks to back your Rancher management Kubernetes cluster. On cloud providers, you will also want to use the minimum size that allows the maximum IOPS. In larger clusters, consider using dedicated storage devices for etcd data and wal directories.
@@ -9,7 +9,7 @@ This section describes how to install a Kubernetes cluster according to the [bes
# Prerequisites
These instructions assume you have set up three nodes, a load balancer, a DNS record, [this section.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-rke2-ha)
These instructions assume you have set up three nodes, a load balancer, and a DNS record, as described in [this section.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-rke2-ha)
Note that in order for RKE2 to work correctly with the load balancer, you need to set up two listeners: one for the supervisor on port 9345, and one for the Kubernetes API on port 6443.
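One way to configure those two listeners, assuming an nginx load balancer built with the `stream` module (the node IPs below are placeholders):

```
stream {
    upstream rke2_supervisor {
        server 10.0.0.1:9345;
        server 10.0.0.2:9345;
        server 10.0.0.3:9345;
    }
    upstream rke2_kubernetes_api {
        server 10.0.0.1:6443;
        server 10.0.0.2:6443;
        server 10.0.0.3:6443;
    }
    server {
        listen 9345;
        proxy_pass rke2_supervisor;
    }
    server {
        listen 6443;
        proxy_pass rke2_kubernetes_api;
    }
}
```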
@@ -161,7 +161,7 @@ Currently, RKE2 deploys nginx-ingress as a deployment, and that can impact the R
To rectify that, place the following file in `/var/lib/rancher/rke2/server/manifests` on any of the server nodes:
```
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
@@ -173,7 +173,4 @@ spec:
kind: DaemonSet
daemonset:
useHostPort: true
image:
repository: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller
tag: "v0.34.1"
```
@@ -41,7 +41,7 @@ OPA Gatekeeper can be installed from the new **Cluster Explorer** view in Ranche
1. Go to the cluster view in the Rancher UI. Click **Cluster Explorer.**
1. Click **Apps** in the top navigation bar.
1. Click **rancher-gatekeeper.**
1. Click **OPA Gatekeeper.**
1. Click **Install.**
**Result:** OPA Gatekeeper is deployed in your Kubernetes cluster.
@@ -1,9 +1,9 @@
---
title: Openstack Cloud Provider
title: OpenStack Cloud Provider
weight: 253
---
To enable the Openstack cloud provider, besides setting the name as `openstack`, there are specific configuration options that must be set. The Openstack configuration options are grouped into different sections.
To enable the OpenStack cloud provider, besides setting the name to `openstack`, there are specific configuration options that must be set. The OpenStack configuration options are grouped into different sections.
```yaml
cloud_provider:
@@ -27,11 +27,11 @@ cloud_provider:
## Overriding the hostname
The OpenStack cloud provider uses the instance name (as determined from OpenStack metadata) as the name of the Kubernetes Node object, you must override the Kubernetes name on the node by setting the `hostname_override` for each node. If you do not set the `hostname_override`, the Kubernetes node name will be set as the `address`, which will cause the Openstack cloud provider to fail.
The OpenStack cloud provider uses the instance name (as determined from OpenStack metadata) as the name of the Kubernetes Node object. Because of this, you must override the Kubernetes node name by setting `hostname_override` for each node. If you do not set `hostname_override`, the Kubernetes node name will be set as the `address`, which will cause the OpenStack cloud provider to fail.
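As an illustration, a `cluster.yml` node entry with the override set (the address, user, and hostname below are placeholders):

```yaml
nodes:
  - address: 10.0.0.5
    user: ubuntu
    role: [controlplane, etcd, worker]
    hostname_override: openstack-node-1
```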
## Openstack Configuration Options
## OpenStack Configuration Options
The Openstack configuration options are divided into 5 groups.
The OpenStack configuration options are divided into 5 groups.
* Global
* Load Balancer
@@ -103,4 +103,4 @@ These are the options that are available under the `metadata` directive.
| search-order | string | |
| request-timeout | int | |
For more information of Openstack configurations options please refer to the official Kubernetes [documentation](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#openstack).
For more information on OpenStack configuration options, please refer to the official Kubernetes [documentation](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#openstack).
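As a sketch, a minimal `cloud_provider` section for OpenStack (all values are placeholders, and only a subset of the available options is shown):

```yaml
cloud_provider:
  name: openstack
  openstackCloudProvider:
    global:
      auth-url: https://keystone.example.com:5000/v3
      username: admin
      password: mypassword
      tenant-id: 2e3a0b1c...
      domain-id: 4f5b6c7d...
    load_balancer:
      subnet-id: 91e28f0a...
```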