Merge pull request #3216 from rancher/master

Merge master into staging
This commit is contained in:
Catherine Luse
2021-04-23 11:44:56 -07:00
committed by GitHub
31 changed files with 144 additions and 49 deletions
+8
@@ -1,6 +1,14 @@
Rancher Docs
------------
## Contributing
We have transitioned to versioned documentation for Rancher (files within `content/rancher`).
New contributions should be made to the applicable versioned directories (e.g. `content/rancher/v2.5` and `content/rancher/v2.0-v2.4`).
Contents under the `content/rancher/v2.x` directory are no longer maintained after v2.5.6.
## Running for development/editing
The `rancher/docs:dev` docker image runs a live-updating server. To run on your workstation, run:
+9 -1
@@ -21,6 +21,7 @@ This section contains advanced information describing the different ways you can
- [Enabling legacy iptables on Raspbian Buster](#enabling-legacy-iptables-on-raspbian-buster)
- [Enabling cgroups for Raspbian Buster](#enabling-cgroups-for-raspbian-buster)
- [SELinux Support](#selinux-support)
- [Additional preparation for (Red Hat/CentOS) Enterprise Linux](#additional-preparation-for-red-hat-centos-enterprise-linux)
# Certificate Rotation
@@ -228,7 +229,7 @@ $ k3s server
INFO[2019-01-22T15:16:19.908493986-07:00] Starting k3s dev
INFO[2019-01-22T15:16:19.908934479-07:00] Running kube-apiserver --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false
INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false
INFO[2019-01-22T15:16:20.196880841-07:00] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false
Flag --port has been deprecated, see --secure-port instead.
INFO[2019-01-22T15:16:20.273441984-07:00] Listening on :6443
@@ -366,3 +367,10 @@ Using a custom `--data-dir` under SELinux is not supported. To customize it, you
{{%/tab%}}
{{% /tabs %}}
# Additional preparation for (Red Hat/CentOS) Enterprise Linux
It is recommended to turn off firewalld:
```
systemctl disable firewalld --now
```
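If you want to double-check the result, a quick verification (optional; not part of the original instructions) is:
```
# Should report "inactive" (or "unknown" if the unit is masked or absent)
systemctl is-active firewalld
```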
@@ -13,6 +13,8 @@ This section contains instructions for installing K3s in various environments. P
[Air-Gap Installation]({{<baseurl>}}/k3s/latest/en/installation/airgap/) details how to set up K3s in environments that do not have direct access to the Internet.
[Disable Components Flags]({{<baseurl>}}/k3s/latest/en/installation/disable-flags/) details how to set up K3s with etcd-only nodes and control-plane-only nodes.
### Uninstalling
If you installed K3s with the help of the `install.sh` script, an uninstall script is generated during installation. The script is created on your node at `/usr/local/bin/k3s-uninstall.sh` (or as `k3s-agent-uninstall.sh`).
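For example, based on the paths above, removing K3s comes down to a single command per node:
```
# On a server node installed via install.sh:
/usr/local/bin/k3s-uninstall.sh

# On an agent node:
/usr/local/bin/k3s-agent-uninstall.sh
```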
@@ -0,0 +1,73 @@
---
title: "Disable Components Flags"
weight: 60
---
When starting the K3s server with `--cluster-init`, it runs all control plane components, including the API server, controller manager, scheduler, and etcd. However, you can run server nodes with only certain components and exclude others. The following sections explain how to do that.
# ETCD Only Nodes
This document assumes you run the K3s server with embedded etcd by passing the `--cluster-init` flag to the server process.
To run a K3s server with only the etcd component, pass the `--disable-api-server --disable-controller-manager --disable-scheduler` flags to K3s. This results in a server node running only etcd. For example, to run the K3s server with those flags:
```
curl -fL https://get.k3s.io | sh -s - server --cluster-init --disable-api-server --disable-controller-manager --disable-scheduler
```
You can join other nodes to the cluster normally after that.
# Disable ETCD
You can also disable etcd on a server node, which results in a K3s server running the control plane components other than etcd. This is accomplished by running the K3s server with the `--disable-etcd` flag. For example, to join a node with only control plane components to the etcd-only node created in the previous section:
```
curl -fL https://get.k3s.io | sh -s - server --token <token> --disable-etcd --server https://<etcd-only-node>:6443
```
The end result is two nodes: one etcd-only node and one control-plane-only node. If you check the node list, you should see something like the following:
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-13-32 Ready etcd 5h39m v1.20.4+k3s1
ip-172-31-14-69 Ready control-plane,master 5h39m v1.20.4+k3s1
```
Note that you can run `kubectl` commands only on the K3s server that runs the API server; you can't run `kubectl` commands on etcd-only nodes.
### Re-enabling control components
In both cases, you can re-enable any component you previously disabled simply by removing the corresponding disable flag. For example, to revert the etcd-only node back to a full K3s server with all components, remove the three flags `--disable-api-server --disable-controller-manager --disable-scheduler`. In our example, to revert node `ip-172-31-13-32` to a full K3s server, re-run the curl command without the disable flags:
```
curl -fL https://get.k3s.io | sh -s - server --cluster-init
```
You will notice that all components have started again and that you can run `kubectl` commands again:
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-13-32 Ready control-plane,etcd,master 5h45m v1.20.4+k3s1
ip-172-31-14-69 Ready control-plane,master 5h45m v1.20.4+k3s1
```
Notice that the role labels have been re-added to node `ip-172-31-13-32` with the correct values (control-plane,etcd,master).
# Add disable flags using the config file
In any of the previous situations, you can use the config file instead of running the curl commands with the associated flags. For example, to run an etcd-only node, add the following options to the `/etc/rancher/k3s/config.yaml` file:
```
---
disable-api-server: true
disable-controller-manager: true
disable-scheduler: true
cluster-init: true
```
Then start K3s using the curl command without any arguments:
```
curl -fL https://get.k3s.io | sh -
```
@@ -23,6 +23,7 @@ Some OSs have specific requirements:
- If you are using **Raspbian Buster**, follow [these steps]({{<baseurl>}}/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster) to switch to legacy iptables.
- If you are using **Alpine Linux**, follow [these steps]({{<baseurl>}}/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup) for additional setup.
- If you are using **(Red Hat/CentOS) Enterprise Linux**, follow [these steps]({{<baseurl>}}/k3s/latest/en/advanced/#additional-preparation-for-red-hat-centos-enterprise-linux) for additional setup.
For more information on which OSs were tested with Rancher managed K3s clusters, refer to the [Rancher support and maintenance terms.](https://rancher.com/support-maintenance-terms/)
@@ -53,7 +53,7 @@ sudo k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role
### Obtain the Bearer Token
```bash
sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token | grep ^token
sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token | grep '^token'
```
### Local Access to the Dashboard
@@ -18,7 +18,7 @@ aliases:
---
The following instructions will guide you through upgrading a Rancher server that was installed on a Kubernetes cluster with Helm. These steps also apply to air gap installs with Helm.
For the instructions to upgrade Rancher installed with Docker, refer to [ths page.]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/single-node-upgrades)
For the instructions to upgrade Rancher installed with Docker, refer to [this page.]({{<baseurl>}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/single-node-upgrades)
To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{<baseurl>}}/rke/latest/en/config-options/services/) or [add-ons]({{<baseurl>}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE]({{<baseurl>}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine.
@@ -62,7 +62,7 @@ The following tables break down the port requirements for inbound and outbound t
| Protocol | Port | Destination | Description |
| -------- | ---- | -------------------------------------------------------- | --------------------------------------------- |
| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | `35.160.43.145/32`, `35.167.242.46/32`, `52.33.59.17/32` | git.rancher.io (catalogs) |
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 2376 | Any node IP from a node created using Node driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |
@@ -130,7 +130,7 @@ The following tables break down the port requirements for Rancher nodes, for inb
| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | `35.160.43.145/32`,`35.167.242.46/32`,`52.33.59.17/32` | git.rancher.io (catalogs) |
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |
@@ -45,14 +45,14 @@ If the Rancher server is installed in a single Docker container, you only need o
```
sudo ssh -i [path-to-private-key] ubuntu@[public-DNS-of-instance]
```
1. When you are connected to the instance, run the following command on the instance to create a user:
```
sudo usermod -aG docker ubuntu
```
1. Run the following command on the instance to install Docker with one of Rancher's installation scripts:
```
curl https://releases.rancher.com/install-docker/18.09.sh | sh
```
1. When you are connected to the instance, run the following command on the instance to add the `ubuntu` user to the `docker` group:
```
sudo usermod -aG docker ubuntu
```
1. Repeat these steps so that Docker is installed on each node that will eventually run the Rancher management server.
> To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher's Docker installation scripts.
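As an optional sanity check (a sketch, not part of the official steps), you can confirm that Docker works for the `ubuntu` user. Log out and back in first so the group membership takes effect:
```
# Run as the ubuntu user after re-logging in:
docker version
docker run --rm hello-world
```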
@@ -14,7 +14,7 @@ Double check if all the [required ports]({{<baseurl>}}/rancher/v2.0-v2.4/en/clus
The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod.
To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/leodotcloud/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts.
To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/rancherlabs/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts.
1. Save the following file as `overlaytest.yml`
@@ -35,7 +35,7 @@ To test the overlay network, you can launch the following `DaemonSet` definition
tolerations:
- operator: Exists
containers:
- image: leodotcloud/swiss-army-knife
- image: rancherlabs/swiss-army-knife
imagePullPolicy: Always
name: overlaytest
command: ["sh", "-c", "tail -f /dev/null"]
@@ -51,5 +51,5 @@ After you complete [Configuring Microsoft AD FS for Rancher]({{<baseurl>}}/ranch
**Tip:** You can generate a certificate using an openssl command. For example:
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=https://myservice.example.com"
```
@@ -29,6 +29,7 @@ This section covers the following topics:
# Node Options Available for Each Cluster Creation Option
The following table lists which node options are available for each type of cluster in Rancher. Click the links in the **Option** column for more detailed information about each feature.
| Option | [Nodes Hosted by an Infrastructure Provider][1] | [Custom Node][2] | [Hosted Cluster][3] | [Registered EKS Nodes][4] | [All Other Registered Nodes][5] | Description |
| ------------------------------------------------ | ------------------------------------------------ | ---------------- | ------------------- | ------------------- | -------------------| ------------------------------------------------------------------ |
| [Cordon](#cordoning-a-node) | ✓ | ✓ | ✓ | ✓ | ✓ | Marks the node as unschedulable. |
@@ -11,12 +11,12 @@ headless: true
| [Managing Projects, Namespaces and Workloads]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/projects-and-namespaces/) | ✓ | ✓ | ✓ |
| [Using App Catalogs]({{<baseurl>}}/rancher/v2.5/en/catalog/) | ✓ | ✓ | ✓ |
| [Configuring Tools (Alerts, Notifiers, Logging, Monitoring, Istio)]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/tools/) | ✓ | ✓ | ✓ |
| [Running Security Scans]({{<baseurl>}}/rancher/v2.5/en/security/security-scan/) | ✓ | ✓ | ✓ |
| [Cloning Clusters]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cloning-clusters/)| ✓ | ✓ | |
| [Ability to rotate certificates]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/certificate-rotation/) | ✓ | | |
| [Ability to back up your Kubernetes Clusters]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/backing-up-etcd/) | ✓ | | |
| [Ability to recover and restore etcd]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/restoring-etcd/) | ✓ | | |
| [Cleaning Kubernetes components when clusters are no longer reachable from Rancher]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cleaning-cluster-nodes/) | ✓ | | |
| [Configuring Pod Security Policies]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/pod-security-policy/) | ✓ | | |
| [Running Security Scans]({{<baseurl>}}/rancher/v2.5/en/security/security-scan/) | ✓ | | |
\* Cluster configuration options can't be edited for imported clusters, except for [K3s clusters.]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/imported-clusters/)
\* Cluster configuration options can't be edited for imported clusters, except for [K3s and RKE2 clusters.]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/imported-clusters/)
@@ -96,7 +96,7 @@ If you have configured your cluster to use Amazon as **Cloud Provider**, tag you
>**Note:** You can use Amazon EC2 instances without configuring a cloud provider in Kubernetes. You only have to configure the cloud provider if you want to use specific Kubernetes cloud provider functionality. For more information, see [Kubernetes Cloud Providers](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/)
The following resources need to tagged with a `ClusterID`:
The following resources need to be tagged with a `ClusterID`:
- **Nodes**: All hosts added in Rancher.
- **Subnet**: The subnet used for your cluster
@@ -123,4 +123,4 @@ Key=kubernetes.io/cluster/CLUSTERID, Value=shared
After creating your cluster, you can access it through the Rancher UI. As a best practice, we recommend setting up these alternate ways of accessing your cluster:
- **Access your cluster with the kubectl CLI:** Follow [these steps]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#accessing-clusters-with-kubectl-on-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server's authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI.
- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can't connect to Rancher, you can still access the cluster.
- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{<baseurl>}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can't connect to Rancher, you can still access the cluster.
@@ -20,7 +20,7 @@ The following instructions will guide you through upgrading a Rancher server tha
For the instructions to upgrade Rancher installed on Kubernetes with RancherD, refer to [this page.]({{<baseurl>}}/rancher/v2.5/en/installation/install-rancher-on-linux/upgrades)
For the instructions to upgrade Rancher installed with Docker, refer to [ths page.]({{<baseurl>}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/single-node-upgrades)
For the instructions to upgrade Rancher installed with Docker, refer to [this page.]({{<baseurl>}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/single-node-upgrades)
To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{<baseurl>}}/rke/latest/en/config-options/services/) or [add-ons]({{<baseurl>}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE]({{<baseurl>}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine.
@@ -65,7 +65,7 @@ The following tables break down the port requirements for inbound and outbound t
| Protocol | Port | Destination | Description |
| -------- | ---- | -------------------------------------------------------- | --------------------------------------------- |
| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | `35.160.43.145/32`, `35.167.242.46/32`, `52.33.59.17/32` | git.rancher.io (catalogs) |
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 2376 | Any node IP from a node created using Node driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |
@@ -162,7 +162,7 @@ The following tables break down the port requirements for Rancher nodes, for inb
| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | `35.160.43.145/32`,`35.167.242.46/32`,`52.33.59.17/32` | git.rancher.io (catalogs) |
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |
@@ -48,14 +48,14 @@ If the Rancher server is installed in a single Docker container, you only need o
```
sudo ssh -i [path-to-private-key] ubuntu@[public-DNS-of-instance]
```
1. When you are connected to the instance, run the following command on the instance to create a user:
```
sudo usermod -aG docker ubuntu
```
1. Run the following command on the instance to install Docker with one of Rancher's installation scripts:
```
curl https://releases.rancher.com/install-docker/18.09.sh | sh
```
1. When you are connected to the instance, run the following command on the instance to add the `ubuntu` user to the `docker` group:
```
sudo usermod -aG docker ubuntu
```
1. Repeat these steps so that Docker is installed on each node that will eventually run the Rancher management server.
> To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher's Docker installation scripts.
+3 -2
@@ -32,6 +32,7 @@ These instructions assume you are using Rancher v2.5, but Longhorn can be instal
### Installing Longhorn with Rancher
1. Fulfill all [Installation Requirements.](https://longhorn.io/docs/1.1.0/deploy/install/#installation-requirements)
1. Go to the **Cluster Explorer** in the Rancher UI.
1. Click **Apps.**
1. Click `longhorn`.
@@ -43,7 +44,7 @@ These instructions assume you are using Rancher v2.5, but Longhorn can be instal
### Accessing Longhorn from the Rancher UI
1. From the **Cluster Explorer,** go to the top left dropdown menu and click **Cluster Explorer > Longhorn.**
1. On this page, you can edit Kubernetes resources managed by Longhorn. To view the Longhorn UI, click the **Longhorn** button in the **Overview**section.
1. On this page, you can edit Kubernetes resources managed by Longhorn. To view the Longhorn UI, click the **Longhorn** button in the **Overview** section.
**Result:** You will be taken to the Longhorn UI, where you can manage your Longhorn volumes and their replicas in the Kubernetes cluster, as well as secondary backups of your Longhorn storage that may exist in another Kubernetes cluster or in S3.
@@ -73,4 +74,4 @@ The storage controller and replicas are themselves orchestrated using Kubernetes
You can learn more about its architecture [here.](https://longhorn.io/docs/1.0.2/concepts/)
<figcaption>Longhorn Architecture</figcaption>
![Longhorn Architecture]({{<baseurl>}}/img/rancher/longhorn-architecture.svg)
![Longhorn Architecture]({{<baseurl>}}/img/rancher/longhorn-architecture.svg)
@@ -14,7 +14,7 @@ Double check if all the [required ports]({{<baseurl>}}/rancher/v2.5/en/cluster-p
The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod.
To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/leodotcloud/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts.
To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/rancherlabs/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts.
1. Save the following file as `overlaytest.yml`
@@ -35,7 +35,7 @@ To test the overlay network, you can launch the following `DaemonSet` definition
tolerations:
- operator: Exists
containers:
- image: leodotcloud/swiss-army-knife
- image: rancher/swiss-army-knife
imagePullPolicy: Always
name: overlaytest
command: ["sh", "-c", "tail -f /dev/null"]
@@ -113,4 +113,4 @@ To check if your cluster is affected, the following command will list nodes that
kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name'
```
If there is no output, the cluster is not affected.
If there is no output, the cluster is not affected.
@@ -52,5 +52,5 @@ After you complete [Configuring Microsoft AD FS for Rancher]({{<baseurl>}}/ranch
**Tip:** You can generate a certificate using an openssl command. For example:
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=https://myservice.example.com"
```
@@ -36,12 +36,13 @@ The Cloud Provider Interface (CPI) should be installed first before installing t
```
kubectl describe nodes | grep "ProviderID"
```
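As an alternative sketch, you can print the `providerID` field for each node directly with a JSONPath query:
```
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'
```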
### 3. Installing the CSI plugin
1. From the **Cluster Explorer** view, go to the top left dropdown menu and click **Apps & Marketplace.**
1. Select the **vSphere CSI** chart. Fill out the required vCenter details.
2. Set **Enable CSI Migration** to **false**.
3. This chart creates a StorageClass with the `csi.vsphere.vmware.com` as the provisioner. Fill out the details for the StorageClass and launch the chart.
1. From the **Cluster Explorer** view, go to the top left dropdown menu and click **Apps & Marketplace.**
2. Select the **vSphere CSI** chart. Fill out the required vCenter details.
3. Set **Enable CSI Migration** to **false**.
4. This chart creates a StorageClass with the `csi.vsphere.vmware.com` as the provisioner. Fill out the details for the StorageClass and launch the chart.
# Using the CSI driver for provisioning volumes
By default, the CSI chart creates a StorageClass.
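As an illustration only (the StorageClass name `vsphere-csi-sc` below is a placeholder for whatever name you chose when launching the chart), a PersistentVolumeClaim that provisions a volume through the CSI driver could look like this:
```
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsphere-csi-sc   # placeholder: use the StorageClass created by the chart
  resources:
    requests:
      storage: 1Gi
EOF
```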
@@ -27,7 +27,7 @@ Set up the Rancher server's local Kubernetes cluster.
The cluster requirements depend on the Rancher version:
- **As of Rancher v2.5,** Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS. Note: To deploy Rancher v2.5 on a hosted Kubernetes cluster such as EKS, GKE, or AKS, you should deploy a compatible Ingress controller first to configure [SSL termination on Rancher.]({{<baseurl>}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#4-choose-your-ssl-configuration)
- **As of Rancher v2.5,** Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS. Note: To deploy Rancher v2.5 on a hosted Kubernetes cluster such as EKS, GKE, or AKS, you should deploy a compatible Ingress controller first to configure [SSL termination on Rancher.]({{<baseurl>}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#3-choose-your-ssl-configuration)
- **In Rancher v2.4.x,** Rancher needs to be installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster.
- **In Rancher before v2.4,** Rancher needs to be installed on an RKE Kubernetes cluster.
@@ -20,7 +20,7 @@ The following instructions will guide you through upgrading a Rancher server tha
For the instructions to upgrade Rancher installed on Kubernetes with RancherD, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/installation/install-rancher-on-linux/upgrades)
For the instructions to upgrade Rancher installed with Docker, refer to [ths page.]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades)
For the instructions to upgrade Rancher installed with Docker, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades)
To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{<baseurl>}}/rke/latest/en/config-options/services/) or [add-ons]({{<baseurl>}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE]({{<baseurl>}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine.
@@ -67,7 +67,7 @@ The following tables break down the port requirements for inbound and outbound t
| Protocol | Port | Destination | Description |
| -------- | ---- | -------------------------------------------------------- | --------------------------------------------- |
| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | `35.160.43.145/32`, `35.167.242.46/32`, `52.33.59.17/32` | git.rancher.io (catalogs) |
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 2376 | Any node IP from a node created using Node driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |
@@ -164,7 +164,7 @@ The following tables break down the port requirements for Rancher nodes, for inb
| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | `35.160.43.145/32`,`35.167.242.46/32`,`52.33.59.17/32` | git.rancher.io (catalogs) |
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |
@@ -47,14 +47,14 @@ If the Rancher server is installed in a single Docker container, you only need o
```
sudo ssh -i [path-to-private-key] ubuntu@[public-DNS-of-instance]
```
1. When you are connected to the instance, run the following command on the instance to create a user:
```
sudo usermod -aG docker ubuntu
```
1. Run the following command on the instance to install Docker with one of Rancher's installation scripts:
```
curl https://releases.rancher.com/install-docker/18.09.sh | sh
```
1. When you are connected to the instance, run the following command on the instance to add user `ubuntu` to group `docker`:
```
sudo usermod -aG docker ubuntu
```
1. Repeat these steps so that Docker is installed on each node that will eventually run the Rancher management server.
> To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher's Docker installation scripts.
@@ -14,7 +14,7 @@ Double check if all the [required ports]({{<baseurl>}}/rancher/v2.x/en/cluster-p
The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod.
To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/leodotcloud/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts.
To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/rancherlabs/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts.
1. Save the following file as `overlaytest.yml`
@@ -35,7 +35,7 @@ To test the overlay network, you can launch the following `DaemonSet` definition
tolerations:
- operator: Exists
containers:
- image: leodotcloud/swiss-army-knife
- image: rancherlabs/swiss-army-knife
imagePullPolicy: Always
name: overlaytest
command: ["sh", "-c", "tail -f /dev/null"]
@@ -37,7 +37,7 @@ rancher_kubernetes_engine_config:
workspace:
server: vc.example.com
folder: myvmfolder
default-datastore: /eu-west-1/datastore/ds-1
default-datastore: ds-1
datacenter: /eu-west-1
resourcepool-path: /eu-west-1/host/hn1/resources/myresourcepool
+1 -1
@@ -176,7 +176,7 @@ Consult the project pages for openSUSE MicroOS and Kubic for installation
Designed to host container workloads with automated administration and patching. Installing openSUSE MicroOS gives you a quick, small environment for deploying containers, or any other workload that benefits from transactional updates. As a rolling release distribution, the software is always up-to-date.
https://microos.opensuse.org
#### openSUSE Kubic
Based on MicroOS, but not a rolling release distribution. Designed with the same things in mind but also a Certified Kubernetes Distribution.
Based on openSUSE MicroOS, designed with the same things in mind but is focused on being a Certified Kubernetes Distribution.
https://kubic.opensuse.org
Installation instructions:
https://kubic.opensuse.org/blog/2021-02-08-MicroOS-Kubic-Rancher-RKE/
+1 -1
@@ -18,7 +18,7 @@
<td></td>
<td></td>
<td></td>
<td style="background-color: #3497DA; color:#ffffff;">git.rancher.io <sup>(2)</sup>:<br>35.160.43.145:32<br>35.167.242.46:32<br>52.33.59.17:32</td>
<td style="background-color: #3497DA; color:#ffffff;">git.rancher.io</td>
</tr>
<tr>
<td rowspan="6">etcd Plane Nodes</td>
+1 -1
@@ -16,7 +16,7 @@
<td></td>
<td colspan="3" style="background-color: #3497DA; color:#ffffff;">22 TCP</td>
<td></td>
<td rowspan="2" style="background-color: #3497DA; color:#ffffff;">git.rancher.io <sup>(2)</sup>:<br>35.160.43.145:32<br>35.167.242.46:32<br>52.33.59.17:32</td>
<td rowspan="2" style="background-color: #3497DA; color:#ffffff;">git.rancher.io</td>
</tr>
<tr>
<td></td>
@@ -14,7 +14,7 @@
<td></td>
<td style="background-color: #3497DA; color:#ffffff;">Kubernetes API <br>Endpoint Port <sup>(2)</sup></td>
<td></td>
<td rowspan="3" style="background-color: #3497DA; color:#ffffff;">git.rancher.io <sup>(3)</sup>:<br>35.160.43.145:32<br>35.167.242.46:32<br>52.33.59.17:32</td>
<td rowspan="3" style="background-color: #3497DA; color:#ffffff;">git.rancher.io</td>
</tr>
<tr>
<td></td>