From 68f2407bb462c305f1efa1bb8377ff1b7ee9fde7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?H=C3=A9ctor=20Luaces?= Date: Thu, 25 Mar 2021 15:18:31 +0100 Subject: [PATCH 01/24] fix "Node Options" table formatting The "Node Options" table is not properly rendered on Rancher's online documentation: https://rancher.com/docs/rancher/v2.5/en/cluster-admin/nodes/ --- content/rancher/v2.5/en/cluster-admin/nodes/_index.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/rancher/v2.5/en/cluster-admin/nodes/_index.md b/content/rancher/v2.5/en/cluster-admin/nodes/_index.md index 2fcd5b620d9..ca88ce4ab49 100644 --- a/content/rancher/v2.5/en/cluster-admin/nodes/_index.md +++ b/content/rancher/v2.5/en/cluster-admin/nodes/_index.md @@ -29,6 +29,7 @@ This section covers the following topics: # Node Options Available for Each Cluster Creation Option The following table lists which node options are available for each type of cluster in Rancher. Click the links in the **Option** column for more detailed information about each feature. + | Option | [Nodes Hosted by an Infrastructure Provider][1] | [Custom Node][2] | [Hosted Cluster][3] | [Registered EKS Nodes][4] | [All Other Registered Nodes][5] | Description | | ------------------------------------------------ | ------------------------------------------------ | ---------------- | ------------------- | ------------------- | -------------------| ------------------------------------------------------------------ | | [Cordon](#cordoning-a-node) | ✓ | ✓ | ✓ | ✓ | ✓ | Marks the node as unschedulable. 
| From 3180f9b2847778a3c7021919321af1a04e303ae6 Mon Sep 17 00:00:00 2001 From: Klaas Demter Date: Sun, 28 Mar 2021 09:55:57 +0200 Subject: [PATCH 02/24] k3s: add note about firewalld for el Refs k3s-io/k3s#3122 follow up suggested in rancher/docs#2740 --- content/k3s/latest/en/advanced/_index.md | 8 ++++++++ .../en/installation/installation-requirements/_index.md | 1 + 2 files changed, 9 insertions(+) diff --git a/content/k3s/latest/en/advanced/_index.md b/content/k3s/latest/en/advanced/_index.md index f48955149c4..0fc3948d20f 100644 --- a/content/k3s/latest/en/advanced/_index.md +++ b/content/k3s/latest/en/advanced/_index.md @@ -21,6 +21,7 @@ This section contains advanced information describing the different ways you can - [Enabling legacy iptables on Raspbian Buster](#enabling-legacy-iptables-on-raspbian-buster) - [Enabling cgroups for Raspbian Buster](#enabling-cgroups-for-raspbian-buster) - [SELinux Support](#selinux-support) +- [Additional preparation for (Red Hat/CentOS) Enterprise Linux](#additional-preparation-for-el) # Certificate Rotation @@ -366,3 +367,10 @@ Using a custom `--data-dir` under SELinux is not supported. To customize it, you {{%/tab%}} {{% /tabs %}} + +# Additional preparation for (Red Hat/CentOS) Enterprise Linux + +It is recommended to turn off firewalld: +``` +systemctl disable firewalld --now +``` diff --git a/content/k3s/latest/en/installation/installation-requirements/_index.md b/content/k3s/latest/en/installation/installation-requirements/_index.md index 796451ac6a3..8f8db3d0160 100644 --- a/content/k3s/latest/en/installation/installation-requirements/_index.md +++ b/content/k3s/latest/en/installation/installation-requirements/_index.md @@ -23,6 +23,7 @@ Some OSs have specific requirements: - If you are using **Raspbian Buster**, follow [these steps]({{}}/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster) to switch to legacy iptables. 
- If you are using **Alpine Linux**, follow [these steps]({{}}/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup) for additional setup. +- If you are using **(Red Hat/CentOS) Enterprise Linux**, follow [these steps]({{}}/k3s/latest/en/advanced/#additional-preparation-for-el) for additional setup. For more information on which OSs were tested with Rancher managed K3s clusters, refer to the [Rancher support and maintenance terms.](https://rancher.com/support-maintenance-terms/) From 9cf6d1b1da99b87d6bfe9b1f6dd622a6b0108ca9 Mon Sep 17 00:00:00 2001 From: Philippe Parisot <22180342+PhilParisot@users.noreply.github.com> Date: Tue, 30 Mar 2021 13:43:25 -0400 Subject: [PATCH 03/24] Update _index.md I had a really hard time installing Longhorn on my cluster simply because I didn't know there were installation requirements. I later found out about them here: https://longhorn.io/docs/1.1.0/deploy/install/#installation-requirements But I wasted a whole day on this, so adding an extra step to installation, linking requirements, so others don't have to struggle the way I did. Regards. --- content/rancher/v2.5/en/longhorn/_index.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.5/en/longhorn/_index.md b/content/rancher/v2.5/en/longhorn/_index.md index ed5a42b1370..5ba7a59deca 100644 --- a/content/rancher/v2.5/en/longhorn/_index.md +++ b/content/rancher/v2.5/en/longhorn/_index.md @@ -32,6 +32,7 @@ These instructions assume you are using Rancher v2.5, but Longhorn can be instal ### Installing Longhorn with Rancher +1. Fulfill all [Installation Requirements.](https://longhorn.io/docs/1.1.0/deploy/install/#installation-requirements) 1. Go to the **Cluster Explorer** in the Rancher UI. 1. Click **Apps.** 1. Click `longhorn`. @@ -43,7 +44,7 @@ These instructions assume you are using Rancher v2.5, but Longhorn can be instal ### Accessing Longhorn from the Rancher UI 1. 
From the **Cluster Explorer,** go to the top left dropdown menu and click **Cluster Explorer > Longhorn.**
-1. On this page, you can edit Kubernetes resources managed by Longhorn. To view the Longhorn UI, click the **Longhorn** button in the **Overview**section.
+1. On this page, you can edit Kubernetes resources managed by Longhorn. To view the Longhorn UI, click the **Longhorn** button in the **Overview** section.

**Result:** You will be taken to the Longhorn UI, where you can manage your Longhorn volumes and their replicas in the Kubernetes cluster, as well as secondary backups of your Longhorn storage that may exist in another Kubernetes cluster or in S3.

@@ -73,4 +74,4 @@ The storage controller and replicas are themselves orchestrated using Kubernetes

You can learn more about its architecture [here.](https://longhorn.io/docs/1.0.2/concepts/)
Longhorn Architecture
-![Longhorn Architecture]({{}}/img/rancher/longhorn-architecture.svg) \ No newline at end of file +![Longhorn Architecture]({{}}/img/rancher/longhorn-architecture.svg) From c5d445a6c70535dd4e5388bfaa30b9a094341bc6 Mon Sep 17 00:00:00 2001 From: tabarrial <79221432+tabarrial@users.noreply.github.com> Date: Mon, 5 Apr 2021 08:48:56 +0200 Subject: [PATCH 04/24] Update _index.md Change Reference to correct section --- .../v2.x/en/installation/install-rancher-on-k8s/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md index 849a4aef754..840e672c0d1 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md @@ -27,7 +27,7 @@ Set up the Rancher server's local Kubernetes cluster. The cluster requirements depend on the Rancher version: -- **As of Rancher v2.5,** Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS. Note: To deploy Rancher v2.5 on a hosted Kubernetes cluster such as EKS, GKE, or AKS, you should deploy a compatible Ingress controller first to configure [SSL termination on Rancher.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#4-choose-your-ssl-configuration) +- **As of Rancher v2.5,** Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS. 
Note: To deploy Rancher v2.5 on a hosted Kubernetes cluster such as EKS, GKE, or AKS, you should deploy a compatible Ingress controller first to configure [SSL termination on Rancher.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#3-choose-your-ssl-configuration) - **In Rancher v2.4.x,** Rancher needs to be installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster. - **In Rancher before v2.4,** Rancher needs to be installed on an RKE Kubernetes cluster. From 8d8c628130490493a148220a0a7b7511543fbea8 Mon Sep 17 00:00:00 2001 From: fritzduchardt Date: Mon, 5 Apr 2021 16:14:34 +0200 Subject: [PATCH 05/24] Update _index.md The installation steps for docker and group configuration are in the wrong order. --- .../infrastructure-tutorials/ec2-node/_index.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md index 564ccdb49fb..2e01e815809 100644 --- a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md +++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md @@ -47,14 +47,14 @@ If the Rancher server is installed in a single Docker container, you only need o ``` sudo ssh -i [path-to-private-key] ubuntu@[public-DNS-of-instance] ``` -1. When you are connected to the instance, run the following command on the instance to create a user: -``` -sudo usermod -aG docker ubuntu -``` 1. Run the following command on the instance to install Docker with one of Rancher's installation scripts: ``` curl https://releases.rancher.com/install-docker/18.09.sh | sh ``` +1. When you are connected to the instance, run the following command on the instance to add user `ubuntu` to group `docker`: +``` +sudo usermod -aG docker ubuntu +``` 1. 
Repeat these steps so that Docker is installed on each node that will eventually run the Rancher management server. > To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher’s Docker installation scripts. From 13ad28c6faf3d095638c1fd26c8b0bf4986f9cb7 Mon Sep 17 00:00:00 2001 From: Lucas Ramage Date: Mon, 5 Apr 2021 14:55:49 -0400 Subject: [PATCH 06/24] Fix typo for Custom Nodes on AWS --- .../cluster-provisioning/rke-clusters/custom-nodes/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md index 8bb36b15c2f..772b79b1740 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md @@ -96,7 +96,7 @@ If you have configured your cluster to use Amazon as **Cloud Provider**, tag you >**Note:** You can use Amazon EC2 instances without configuring a cloud provider in Kubernetes. You only have to configure the cloud provider if you want to use specific Kubernetes cloud provider functionality. For more information, see [Kubernetes Cloud Providers](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/) -The following resources need to tagged with a `ClusterID`: +The following resources need to be tagged with a `ClusterID`: - **Nodes**: All hosts added in Rancher. - **Subnet**: The subnet used for your cluster @@ -123,4 +123,4 @@ Key=kubernetes.io/cluster/CLUSTERID, Value=shared After creating your cluster, you can access it through the Rancher UI. 
As a best practice, we recommend setting up these alternate ways of accessing your cluster: - **Access your cluster with the kubectl CLI:** Follow [these steps]({{}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#accessing-clusters-with-kubectl-on-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server’s authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI. -- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster. \ No newline at end of file +- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster. From 656985336985fc3bf56a788e852047fbc4cc1496 Mon Sep 17 00:00:00 2001 From: vcasado Date: Tue, 6 Apr 2021 14:06:06 +0200 Subject: [PATCH 07/24] Adding "https://" to the name of the server on line 55 Related to an issue with a customer. 
Ticket https://rancher.zendesk.com/agent/tickets/12724 --- .../microsoft-adfs/rancher-adfs-setup/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md index c6d45667d4c..6dc6fe240df 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md @@ -52,5 +52,5 @@ After you complete [Configuring Microsoft AD FS for Rancher]({{}}/ranch **Tip:** You can generate a certificate using an openssl command. For example: ``` -openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com" -``` \ No newline at end of file +openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=https://myservice.example.com" +``` From 0dc9bebed0c1f19f4524a3bdc8d5b7c3b56c37fe Mon Sep 17 00:00:00 2001 From: galal-hussein Date: Wed, 7 Apr 2021 01:00:25 +0200 Subject: [PATCH 08/24] Add disable flags documentation --- content/k3s/latest/en/installation/_index.md | 2 + .../en/installation/disable-flags/_index.md | 73 +++++++++++++++++++ 2 files changed, 75 insertions(+) create mode 100644 content/k3s/latest/en/installation/disable-flags/_index.md diff --git a/content/k3s/latest/en/installation/_index.md b/content/k3s/latest/en/installation/_index.md index 91997c7a2a9..6d595b68ff7 100644 --- a/content/k3s/latest/en/installation/_index.md +++ b/content/k3s/latest/en/installation/_index.md @@ -13,6 +13,8 @@ This section contains instructions for installing K3s in various environments. 
[Air-Gap Installation]({{}}/k3s/latest/en/installation/airgap/) details how to set up K3s in environments that do not have direct access to the Internet.

+[Disable Components Flags]({{}}/k3s/latest/en/installation/disable-flags/) details how to set up K3s with etcd-only nodes and control-plane-only nodes.
+
### Uninstalling

If you installed K3s with the help of the `install.sh` script, an uninstall script is generated during installation. The script is created on your node at `/usr/local/bin/k3s-uninstall.sh` (or as `k3s-agent-uninstall.sh`).

diff --git a/content/k3s/latest/en/installation/disable-flags/_index.md b/content/k3s/latest/en/installation/disable-flags/_index.md
new file mode 100644
index 00000000000..6652c85d704
--- /dev/null
+++ b/content/k3s/latest/en/installation/disable-flags/_index.md
@@ -0,0 +1,73 @@
+---
+title: "Disable Components Flags"
+weight: 60
+---
+
+When you start a K3s server with `--cluster-init`, it runs all control plane components, including the API server, controller manager, scheduler, and etcd. However, you can run server nodes with only certain components and exclude the others; the following sections explain how to do that.
+
+# ETCD Only Nodes
+
+This document assumes you run the K3s server with embedded etcd by passing the `--cluster-init` flag to the server process.
+
+To run a K3s server with only the etcd component, pass the `--disable-api-server --disable-controller-manager --disable-scheduler` flags to K3s. This results in a server node running only etcd. For example:
+
+```
+curl -fL https://get.k3s.io | sh -s - server --cluster-init --disable-api-server --disable-controller-manager --disable-scheduler
+```
+
+You can join other nodes to the cluster normally after that.
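As an illustrative sketch of the flag set above, the etcd-only install arguments can be assembled from a list of components to disable. The `build_server_cmd` helper is hypothetical, not part of K3s; it only shows how the three `--disable-*` flags combine with `--cluster-init`:

```shell
# Hypothetical helper: build the K3s server arguments for a node that
# should run only etcd, by disabling the other control plane components.
build_server_cmd() {
  args="server --cluster-init"
  for component in "$@"; do
    args="$args --disable-$component"
  done
  echo "$args"
}

# An etcd-only node disables the API server, controller manager, and scheduler.
ETCD_ONLY_ARGS=$(build_server_cmd api-server controller-manager scheduler)
echo "$ETCD_ONLY_ARGS"

# The full install line would then be:
#   curl -fL https://get.k3s.io | sh -s - $ETCD_ONLY_ARGS
```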
+
+# Disable ETCD
+
+You can also disable etcd on a server node, which results in a K3s server running all control plane components except etcd. This is done by starting the K3s server with the `--disable-etcd` flag. For example, to join another node running only control plane components to the etcd-only node created in the previous section:
+
+```
+curl -fL https://get.k3s.io | sh -s - server --token <token> --disable-etcd --server https://<etcd-only-node>:6443
+```
+
+The end result is two nodes: one etcd-only node and one control-plane-only node. If you check the node list, you should see something like the following:
+
+```
+kubectl get nodes
+NAME              STATUS   ROLES                  AGE     VERSION
+ip-172-31-13-32   Ready    etcd                   5h39m   v1.20.4+k3s1
+ip-172-31-14-69   Ready    control-plane,master   5h39m   v1.20.4+k3s1
+```
+
+Note that you can run `kubectl` commands only on the K3s server that has the API server running; you can't run `kubectl` commands on etcd-only nodes.
+
+
+### Re-enabling control components
+
+In both cases you can re-enable any component that you disabled simply by removing the corresponding disable flag. For example, to revert the etcd-only node back to a full K3s server with all components, remove the three flags `--disable-api-server --disable-controller-manager --disable-scheduler`. In our example, to revert node `ip-172-31-13-32` to a full K3s server, re-run the curl command without the disable flags:
+```
+curl -fL https://get.k3s.io | sh -s - server --cluster-init
+```
+
+You will notice that all components start again and you can run `kubectl` commands again:
+
+```
+kubectl get nodes
+NAME              STATUS   ROLES                       AGE     VERSION
+ip-172-31-13-32   Ready    control-plane,etcd,master   5h45m   v1.20.4+k3s1
+ip-172-31-14-69   Ready    control-plane,master        5h45m   v1.20.4+k3s1
+```
+
+Notice that the role labels have been re-added to node `ip-172-31-13-32` with the correct labels (control-plane,etcd,master).
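The node listing above can be checked mechanically. This sketch hard-codes sample output in the same shape as `kubectl get nodes` (an assumption, since no live cluster is available here) and pulls out the ROLES column for a given node:

```shell
# Sample output shaped like `kubectl get nodes`, hard-coded so the
# sketch runs without a cluster.
NODES='NAME              STATUS   ROLES                  AGE     VERSION
ip-172-31-13-32   Ready    etcd                   5h39m   v1.20.4+k3s1
ip-172-31-14-69   Ready    control-plane,master   5h39m   v1.20.4+k3s1'

# Extract the ROLES column (third field) for a given node name.
roles_of() {
  echo "$NODES" | awk -v node="$1" '$1 == node { print $3 }'
}

ETCD_ROLES=$(roles_of ip-172-31-13-32)
CP_ROLES=$(roles_of ip-172-31-14-69)
echo "$ETCD_ROLES"   # etcd
echo "$CP_ROLES"     # control-plane,master
```

Against a real cluster, the same check would pipe `kubectl get nodes` into `roles_of` instead of the hard-coded sample.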
+
+# Add disable flags using the config file
+
+In any of the previous situations, you can use the config file instead of passing the flags on the command line. For example, to run an etcd-only node, add the following options to the `/etc/rancher/k3s/config.yaml` file:
+
+```
+---
+disable-api-server: true
+disable-controller-manager: true
+disable-scheduler: true
+cluster-init: true
+```
+and then start K3s using the curl command without any arguments:
+
+```
+curl -fL https://get.k3s.io | sh -
+```
\ No newline at end of file

From b0e7f038b51d9e4acdc9029f204014183987293c Mon Sep 17 00:00:00 2001
From: Catherine Luse
Date: Wed, 7 Apr 2021 15:50:02 -0700
Subject: [PATCH 09/24] Fix typos #2987

---
 .../en/installation/install-rancher-on-k8s/upgrades/_index.md | 2 +-
 .../en/installation/install-rancher-on-k8s/upgrades/_index.md | 2 +-
 .../en/installation/install-rancher-on-k8s/upgrades/_index.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/_index.md b/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/_index.md
index 80067909c23..e49c62f7d23 100644
--- a/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/_index.md
+++ b/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/_index.md
@@ -18,7 +18,7 @@ aliases:
---
The following instructions will guide you through upgrading a Rancher server that was installed on a Kubernetes cluster with Helm. These steps also apply to air gap installs with Helm.
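The config-file variant above can be sketched as a small provisioning step. A temporary directory stands in for `/etc/rancher/k3s` (an assumption so the sketch runs unprivileged); with the real file in place, the install script needs no extra arguments:

```shell
# Sketch: write the etcd-only options shown above into a K3s config file.
# A temp directory substitutes for /etc/rancher/k3s so this runs unprivileged.
K3S_CONF_DIR=$(mktemp -d)
cat > "$K3S_CONF_DIR/config.yaml" <<'EOF'
---
disable-api-server: true
disable-controller-manager: true
disable-scheduler: true
cluster-init: true
EOF

# With the real file at /etc/rancher/k3s/config.yaml, installation is just:
#   curl -fL https://get.k3s.io | sh -
DISABLED_COUNT=$(grep -c '^disable-' "$K3S_CONF_DIR/config.yaml")
echo "$DISABLED_COUNT"  # 3
```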
-For the instructions to upgrade Rancher installed with Docker, refer to [ths page.]({{}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/single-node-upgrades) +For the instructions to upgrade Rancher installed with Docker, refer to [this page.]({{}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/single-node-upgrades) To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{}}/rke/latest/en/config-options/services/) or [add-ons]({{}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE]({{}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine. diff --git a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/_index.md b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/_index.md index b924a20ee13..1506c27e27b 100644 --- a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/_index.md +++ b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/_index.md @@ -20,7 +20,7 @@ The following instructions will guide you through upgrading a Rancher server tha For the instructions to upgrade Rancher installed on Kubernetes with RancherD, refer to [this page.]({{}}/rancher/v2.5/en/installation/install-rancher-on-linux/upgrades) -For the instructions to upgrade Rancher installed with Docker, refer to [ths page.]({{}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/single-node-upgrades) +For the instructions to upgrade Rancher installed with Docker, refer to [this page.]({{}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/single-node-upgrades) To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{}}/rke/latest/en/config-options/services/) or [add-ons]({{}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for 
RKE]({{}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine. diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md index f622a84befd..0929a70d6c4 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md @@ -20,7 +20,7 @@ The following instructions will guide you through upgrading a Rancher server tha For the instructions to upgrade Rancher installed on Kubernetes with RancherD, refer to [this page.]({{}}/rancher/v2.x/en/installation/install-rancher-on-linux/upgrades) -For the instructions to upgrade Rancher installed with Docker, refer to [ths page.]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades) +For the instructions to upgrade Rancher installed with Docker, refer to [this page.]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades) To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{}}/rke/latest/en/config-options/services/) or [add-ons]({{}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE]({{}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine. 
From 3312ddc57c046f26fe269d9a4be34e9144a089ef Mon Sep 17 00:00:00 2001 From: Joakim Roubert Date: Thu, 8 Apr 2021 13:02:37 +0200 Subject: [PATCH 10/24] Add missing quotes in k3s kube-dashboard install instruction Change-Id: I885c5ad0ba774d315bfe71c0deb467d9054e3e75 Signed-off-by: Joakim Roubert --- content/k3s/latest/en/installation/kube-dashboard/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/k3s/latest/en/installation/kube-dashboard/_index.md b/content/k3s/latest/en/installation/kube-dashboard/_index.md index 127770e858c..cb5c15bfc36 100644 --- a/content/k3s/latest/en/installation/kube-dashboard/_index.md +++ b/content/k3s/latest/en/installation/kube-dashboard/_index.md @@ -53,7 +53,7 @@ sudo k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role ### Obtain the Bearer Token ```bash -sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token | grep ^token +sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token | grep '^token' ``` ### Local Access to the Dashboard From 3881ea9bb5de937c9ed568912b6f78c5b49d0674 Mon Sep 17 00:00:00 2001 From: Bastian Hofmann Date: Thu, 8 Apr 2021 15:23:26 +0200 Subject: [PATCH 11/24] Update cluster capabilities table for Rancher 2.5 * Cluster Upgrades are possible for imported RKE2 clusters * CIS Scans are available for all types of clusters in 2.5 Signed-off-by: Bastian Hofmann --- .../cluster-provisioning/cluster-capabilities-table/index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.5/en/cluster-provisioning/cluster-capabilities-table/index.md b/content/rancher/v2.5/en/cluster-provisioning/cluster-capabilities-table/index.md index 404c8a0e057..81cbefffc6d 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/cluster-capabilities-table/index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/cluster-capabilities-table/index.md @@ -11,12 +11,12 @@ headless: true | [Managing Projects, 
Namespaces and Workloads]({{}}/rancher/v2.5/en/cluster-admin/projects-and-namespaces/) | ✓ | ✓ | ✓ | | [Using App Catalogs]({{}}/rancher/v2.5/en/catalog/) | ✓ | ✓ | ✓ | | [Configuring Tools (Alerts, Notifiers, Logging, Monitoring, Istio)]({{}}/rancher/v2.5/en/cluster-admin/tools/) | ✓ | ✓ | ✓ | +| [Running Security Scans]({{}}/rancher/v2.5/en/security/security-scan/) | ✓ | ✓ | ✓ | | [Cloning Clusters]({{}}/rancher/v2.5/en/cluster-admin/cloning-clusters/)| ✓ | ✓ | | | [Ability to rotate certificates]({{}}/rancher/v2.5/en/cluster-admin/certificate-rotation/) | ✓ | | | | [Ability to back up your Kubernetes Clusters]({{}}/rancher/v2.5/en/cluster-admin/backing-up-etcd/) | ✓ | | | | [Ability to recover and restore etcd]({{}}/rancher/v2.5/en/cluster-admin/restoring-etcd/) | ✓ | | | | [Cleaning Kubernetes components when clusters are no longer reachable from Rancher]({{}}/rancher/v2.5/en/cluster-admin/cleaning-cluster-nodes/) | ✓ | | | | [Configuring Pod Security Policies]({{}}/rancher/v2.5/en/cluster-admin/pod-security-policy/) | ✓ | | | -| [Running Security Scans]({{}}/rancher/v2.5/en/security/security-scan/) | ✓ | | | -\* Cluster configuration options can't be edited for imported clusters, except for [K3s clusters.]({{}}/rancher/v2.5/en/cluster-provisioning/imported-clusters/) +\* Cluster configuration options can't be edited for imported clusters, except for [K3s and RKE2 clusters.]({{}}/rancher/v2.5/en/cluster-provisioning/imported-clusters/) From bded51da29051a3288f65a9cb19a9e1af34d33ac Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Thu, 8 Apr 2021 16:19:29 -0700 Subject: [PATCH 12/24] Add change from PR #3163 to versioned docs --- .../authentication/microsoft-adfs/rancher-adfs-setup/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.5/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md b/content/rancher/v2.5/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md index 
23674b37a11..b4f8655e59e 100644 --- a/content/rancher/v2.5/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md +++ b/content/rancher/v2.5/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md @@ -51,5 +51,5 @@ After you complete [Configuring Microsoft AD FS for Rancher]({{}}/ranch **Tip:** You can generate a certificate using an openssl command. For example: ``` -openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com" +openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=https://myservice.example.com" ``` \ No newline at end of file From d3728a63403b0caac1f7a43aa3d42fe938b90864 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 9 Apr 2021 09:57:19 -0700 Subject: [PATCH 13/24] Change internal link --- content/k3s/latest/en/advanced/_index.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/content/k3s/latest/en/advanced/_index.md b/content/k3s/latest/en/advanced/_index.md index 0fc3948d20f..b6a654ab2e9 100644 --- a/content/k3s/latest/en/advanced/_index.md +++ b/content/k3s/latest/en/advanced/_index.md @@ -21,7 +21,7 @@ This section contains advanced information describing the different ways you can - [Enabling legacy iptables on Raspbian Buster](#enabling-legacy-iptables-on-raspbian-buster) - [Enabling cgroups for Raspbian Buster](#enabling-cgroups-for-raspbian-buster) - [SELinux Support](#selinux-support) -- [Additional preparation for (Red Hat/CentOS) Enterprise Linux](#additional-preparation-for-el) +- [Additional preparation for (Red Hat/CentOS) Enterprise Linux](#additional-preparation-for-red-hat-centos-enterprise-linux) # Certificate Rotation @@ -229,7 +229,8 @@ $ k3s server INFO[2019-01-22T15:16:19.908493986-07:00] Starting k3s dev INFO[2019-01-22T15:16:19.908934479-07:00] Running kube-apiserver --allow-privileged=true --authorization-mode Node,RBAC 
--service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key Flag --insecure-port has been deprecated, This flag will be removed in a future version. -INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false +INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader +ect=false INFO[2019-01-22T15:16:20.196880841-07:00] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false Flag --port has been deprecated, see --secure-port instead. 
INFO[2019-01-22T15:16:20.273441984-07:00] Listening on :6443 From 84c9cd93c0b4e9cac92152ba3698a5d179a8543a Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 9 Apr 2021 09:59:05 -0700 Subject: [PATCH 14/24] Revert typo --- content/k3s/latest/en/advanced/_index.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/content/k3s/latest/en/advanced/_index.md b/content/k3s/latest/en/advanced/_index.md index b6a654ab2e9..a557e491fc4 100644 --- a/content/k3s/latest/en/advanced/_index.md +++ b/content/k3s/latest/en/advanced/_index.md @@ -229,8 +229,7 @@ $ k3s server INFO[2019-01-22T15:16:19.908493986-07:00] Starting k3s dev INFO[2019-01-22T15:16:19.908934479-07:00] Running kube-apiserver --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key Flag --insecure-port has been deprecated, This flag will be removed in a future version. 
-INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader -ect=false +INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false INFO[2019-01-22T15:16:20.196880841-07:00] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false Flag --port has been deprecated, see --secure-port instead. INFO[2019-01-22T15:16:20.273441984-07:00] Listening on :6443 From ec9faa13ef9c63c8c94a9d091646878026f8847d Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 9 Apr 2021 09:59:47 -0700 Subject: [PATCH 15/24] Change internal link --- .../latest/en/installation/installation-requirements/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/k3s/latest/en/installation/installation-requirements/_index.md b/content/k3s/latest/en/installation/installation-requirements/_index.md index 8f8db3d0160..1b5d14825de 100644 --- a/content/k3s/latest/en/installation/installation-requirements/_index.md +++ b/content/k3s/latest/en/installation/installation-requirements/_index.md @@ -23,7 +23,7 @@ Some OSs have specific requirements: - If you are using **Raspbian Buster**, follow [these steps]({{}}/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster) to switch to legacy iptables. - If you are using **Alpine Linux**, follow [these steps]({{}}/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup) for additional setup. 
-- If you are using **(Red Hat/CentOS) Enterprise Linux**, follow [these steps]({{}}/k3s/latest/en/advanced/#additional-preparation-for-el) for additional setup. +- If you are using **(Red Hat/CentOS) Enterprise Linux**, follow [these steps]({{}}/k3s/latest/en/advanced/#additional-preparation-for-red-hat-centos-enterprise-linux) for additional setup. For more information on which OSs were tested with Rancher managed K3s clusters, refer to the [Rancher support and maintenance terms.](https://rancher.com/support-maintenance-terms/) From fb011f0a8fb401bea6fd1198dc40f401449fdd7a Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Sat, 10 Apr 2021 09:51:14 -0700 Subject: [PATCH 16/24] Make change from PR #3161 in versioned docs --- .../infrastructure-tutorials/ec2-node/_index.md | 8 ++++---- .../infrastructure-tutorials/ec2-node/_index.md | 8 ++++---- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/content/rancher/v2.0-v2.4/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md b/content/rancher/v2.0-v2.4/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md index 28585766c01..0b9927cb880 100644 --- a/content/rancher/v2.0-v2.4/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md +++ b/content/rancher/v2.0-v2.4/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md @@ -45,14 +45,14 @@ If the Rancher server is installed in a single Docker container, you only need o ``` sudo ssh -i [path-to-private-key] ubuntu@[public-DNS-of-instance] ``` -1. When you are connected to the instance, run the following command on the instance to create a user: -``` -sudo usermod -aG docker ubuntu -``` 1. Run the following command on the instance to install Docker with one of Rancher's installation scripts: ``` curl https://releases.rancher.com/install-docker/18.09.sh | sh ``` +1. 
While still connected to the instance, run the following command on the instance to add the `ubuntu` user to the `docker` group:
```
sudo usermod -aG docker ubuntu
``` 1. Repeat these steps so that Docker is installed on each node that will eventually run the Rancher management server. > To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher’s Docker installation scripts. diff --git a/content/rancher/v2.5/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md b/content/rancher/v2.5/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md index 61469605176..f0bb8732c52 100644 --- a/content/rancher/v2.5/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md +++ b/content/rancher/v2.5/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md @@ -48,14 +48,14 @@ If the Rancher server is installed in a single Docker container, you only need o ``` sudo ssh -i [path-to-private-key] ubuntu@[public-DNS-of-instance] ``` -1. When you are connected to the instance, run the following command on the instance to create a user: -``` -sudo usermod -aG docker ubuntu -``` 1. Run the following command on the instance to install Docker with one of Rancher's installation scripts: ``` curl https://releases.rancher.com/install-docker/18.09.sh | sh ``` +1. While still connected to the instance, run the following command on the instance to add the `ubuntu` user to the `docker` group: +``` +sudo usermod -aG docker ubuntu +``` 1. Repeat these steps so that Docker is installed on each node that will eventually run the Rancher management server. > To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher’s Docker installation scripts.
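The reordered steps in the two patches above amount to: install Docker first, then grant the `ubuntu` user access to it. For operators who provision these EC2 nodes automatically, the same sequence can be expressed as instance user data; a minimal cloud-init sketch (only the install-script URL and the `usermod` command come from the docs above, the rest is illustrative):

```
#cloud-config
runcmd:
  # Install Docker first, using one of Rancher's version-pinned scripts.
  - curl https://releases.rancher.com/install-docker/18.09.sh | sh
  # Then add the default Ubuntu user to the docker group.
  - usermod -aG docker ubuntu
```

Group membership only takes effect for new login sessions, so reconnect over SSH before running `docker` commands as the `ubuntu` user.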
From 6b01bcdfbff7d8e5ff4f14674f61f5f8de80d748 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Sun, 11 Apr 2021 09:12:44 -0700 Subject: [PATCH 17/24] Change datastore in RKE vSphere cloud provider config reference --- .../cloud-providers/vsphere/config-reference/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/_index.md b/content/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/_index.md index ff4c44aea35..b079ae7dd35 100644 --- a/content/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/_index.md +++ b/content/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/_index.md @@ -37,7 +37,7 @@ rancher_kubernetes_engine_config: workspace: server: vc.example.com folder: myvmfolder - default-datastore: /eu-west-1/datastore/ds-1 + default-datastore: ds-1 datacenter: /eu-west-1 resourcepool-path: /eu-west-1/host/hn1/resources/myresourcepool From e670bd51b664eb44339af228795c81b0bb9424e8 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Sun, 11 Apr 2021 10:00:31 -0700 Subject: [PATCH 18/24] Remove IP addresses for git.rancher.io --- .../v2.0-v2.4/en/installation/requirements/ports/_index.md | 4 ++-- .../rancher/v2.5/en/installation/requirements/ports/_index.md | 4 ++-- .../rancher/v2.x/en/installation/requirements/ports/_index.md | 4 ++-- layouts/shortcodes/ports-custom-nodes.html | 2 +- layouts/shortcodes/ports-iaas-nodes.html | 2 +- layouts/shortcodes/ports-imported-hosted.html | 2 +- 6 files changed, 9 insertions(+), 9 deletions(-) diff --git a/content/rancher/v2.0-v2.4/en/installation/requirements/ports/_index.md b/content/rancher/v2.0-v2.4/en/installation/requirements/ports/_index.md index 97eaaf4fd89..682497174d5 100644 --- a/content/rancher/v2.0-v2.4/en/installation/requirements/ports/_index.md +++ b/content/rancher/v2.0-v2.4/en/installation/requirements/ports/_index.md @@ -62,7 +62,7 @@ The following 
tables break down the port requirements for inbound and outbound t | Protocol | Port | Destination | Description | | -------- | ---- | -------------------------------------------------------- | --------------------------------------------- | | TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | -| TCP | 443 | `35.160.43.145/32`, `35.167.242.46/32`, `52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 443 | git.rancher.io | Rancher catalog | | TCP | 2376 | Any node IP from a node created using Node driver | Docker daemon TLS port used by Docker Machine | | TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | @@ -130,7 +130,7 @@ The following tables break down the port requirements for Rancher nodes, for inb | Protocol | Port | Source | Description | |-----|-----|----------------|---| | TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | -| TCP | 443 | `35.160.43.145/32`,`35.167.242.46/32`,`52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 443 | git.rancher.io | Rancher catalog | | TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine | | TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | diff --git a/content/rancher/v2.5/en/installation/requirements/ports/_index.md b/content/rancher/v2.5/en/installation/requirements/ports/_index.md index a78ed8d8046..7595afab4c1 100644 --- a/content/rancher/v2.5/en/installation/requirements/ports/_index.md +++ b/content/rancher/v2.5/en/installation/requirements/ports/_index.md @@ -65,7 +65,7 @@ The following tables break down the port requirements for inbound and outbound t | Protocol | Port | Destination | Description | | -------- | ---- | -------------------------------------------------------- | --------------------------------------------- | | TCP | 22 | Any node IP from a node created using Node Driver | SSH 
provisioning of nodes using Node Driver | -| TCP | 443 | `35.160.43.145/32`, `35.167.242.46/32`, `52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 443 | git.rancher.io | Rancher catalog | | TCP | 2376 | Any node IP from a node created using Node driver | Docker daemon TLS port used by Docker Machine | | TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | @@ -162,7 +162,7 @@ The following tables break down the port requirements for Rancher nodes, for inb | Protocol | Port | Source | Description | |-----|-----|----------------|---| | TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | -| TCP | 443 | `35.160.43.145/32`,`35.167.242.46/32`,`52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 443 | git.rancher.io | Rancher catalog | | TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine | | TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | diff --git a/content/rancher/v2.x/en/installation/requirements/ports/_index.md b/content/rancher/v2.x/en/installation/requirements/ports/_index.md index 2cb204d6ea8..d98190dc18e 100644 --- a/content/rancher/v2.x/en/installation/requirements/ports/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/ports/_index.md @@ -67,7 +67,7 @@ The following tables break down the port requirements for inbound and outbound t | Protocol | Port | Destination | Description | | -------- | ---- | -------------------------------------------------------- | --------------------------------------------- | | TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | -| TCP | 443 | `35.160.43.145/32`, `35.167.242.46/32`, `52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 443 | git.rancher.io | Rancher catalog | | TCP | 2376 | Any node IP from a node created using Node driver | Docker daemon TLS port used by Docker Machine | | TCP | 
6443 | Hosted/Imported Kubernetes API | Kubernetes API server | @@ -164,7 +164,7 @@ The following tables break down the port requirements for Rancher nodes, for inb | Protocol | Port | Source | Description | |-----|-----|----------------|---| | TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | -| TCP | 443 | `35.160.43.145/32`,`35.167.242.46/32`,`52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 443 | git.rancher.io | Rancher catalog | | TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine | | TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | diff --git a/layouts/shortcodes/ports-custom-nodes.html b/layouts/shortcodes/ports-custom-nodes.html index 45af1975f8f..b5dfa8f4a26 100644 --- a/layouts/shortcodes/ports-custom-nodes.html +++ b/layouts/shortcodes/ports-custom-nodes.html @@ -18,7 +18,7 @@ - git.rancher.io (2):
35.160.43.145:32
35.167.242.46:32
52.33.59.17:32 + git.rancher.io etcd Plane Nodes diff --git a/layouts/shortcodes/ports-iaas-nodes.html b/layouts/shortcodes/ports-iaas-nodes.html index 3079e9bdb21..45b401149f5 100644 --- a/layouts/shortcodes/ports-iaas-nodes.html +++ b/layouts/shortcodes/ports-iaas-nodes.html @@ -16,7 +16,7 @@ 22 TCP - git.rancher.io (2):
35.160.43.145:32
35.167.242.46:32
52.33.59.17:32 + git.rancher.io diff --git a/layouts/shortcodes/ports-imported-hosted.html b/layouts/shortcodes/ports-imported-hosted.html index ea9cf448bad..48e4201bae6 100644 --- a/layouts/shortcodes/ports-imported-hosted.html +++ b/layouts/shortcodes/ports-imported-hosted.html @@ -14,7 +14,7 @@ Kubernetes API
Endpoint Port (2) - git.rancher.io (3):
35.160.43.145:32
35.167.242.46:32
52.33.59.17:32 + git.rancher.io From 2f761c4f2e63e8966a7270938c7506f153fba916 Mon Sep 17 00:00:00 2001 From: ahbosch Date: Tue, 13 Apr 2021 18:40:33 -0400 Subject: [PATCH 19/24] Update _index.md Updated the Format --- .../cloud-providers/vsphere/out-of-tree/_index.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md index 7ba765989f7..b2e11e8967a 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md @@ -36,12 +36,13 @@ The Cloud Provider Interface (CPI) should be installed first before installing t ``` kubectl describe nodes | grep "ProviderID" ``` + ### 3. Installing the CSI plugin - 1. From the **Cluster Explorer** view, go to the top left dropdown menu and click **Apps & Marketplace.** -1. Select the **vSphere CSI** chart. Fill out the required vCenter details. -2. Set **Enable CSI Migration** to **false**. -3. This chart creates a StorageClass with the `csi.vsphere.vmware.com` as the provisioner. Fill out the details for the StorageClass and launch the chart. +1. From the **Cluster Explorer** view, go to the top left dropdown menu and click **Apps & Marketplace.** +2. Select the **vSphere CSI** chart. Fill out the required vCenter details. +3. Set **Enable CSI Migration** to **false**. +4. This chart creates a StorageClass with the `csi.vsphere.vmware.com` as the provisioner. Fill out the details for the StorageClass and launch the chart. # Using the CSI driver for provisioning volumes The CSI chart by default creates a storageClass. 
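Since the StorageClass generated by the vSphere CSI chart is only described in passing above, a rough sketch of its shape may help; the `csi.vsphere.vmware.com` provisioner is the one named in the docs, while the object name and the `datastoreurl` parameter are illustrative assumptions:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-csi-sc                 # illustrative name
provisioner: csi.vsphere.vmware.com    # provisioner set by the chart
parameters:
  # Hypothetical datastore URL; use the real URL from your vSphere environment.
  datastoreurl: "ds:///vmfs/volumes/example-datastore/"
```

A PersistentVolumeClaim that references this StorageClass by name will then be provisioned as a vSphere CSI volume.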
From 3d908eab1f9cfcc4f38b5a1f57e5daa0f3d4c143 Mon Sep 17 00:00:00 2001 From: Richard Brown Date: Fri, 16 Apr 2021 13:13:29 +0200 Subject: [PATCH 20/24] Correct openSUSE Kubic details --- content/rke/latest/en/os/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index da210b14422..90882ed3340 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -176,7 +176,7 @@ Consult the project pages for openSUSE MicroOS and Kubic for installation Designed to host container workloads with automated administration & patching. Installing openSUSE MicroOS you get a quick, small environment for deploying Containers, or any other workload that benefits from Transactional Updates. As rolling release distribution the software is always up-to-date. https://microos.opensuse.org #### openSUSE Kubic -Based on MicroOS, but not a rolling release distribution. Designed with the same things in mind but also a Certified Kubernetes Distribution. +Based on openSUSE MicroOS, designed with the same things in mind, but focused on being a Certified Kubernetes Distribution.
https://kubic.opensuse.org Installation instructions: https://kubic.opensuse.org/blog/2021-02-08-MicroOS-Kubic-Rancher-RKE/ From d9a2070dbd8a808c9175f49e9eb45fe389166a17 Mon Sep 17 00:00:00 2001 From: Tejeev Date: Fri, 16 Apr 2021 18:02:45 +0100 Subject: [PATCH 21/24] updating swiss army knife location --- .../rancher/v2.5/en/troubleshooting/networking/_index.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/rancher/v2.5/en/troubleshooting/networking/_index.md b/content/rancher/v2.5/en/troubleshooting/networking/_index.md index fe62cf44647..ac1f7a48ce1 100644 --- a/content/rancher/v2.5/en/troubleshooting/networking/_index.md +++ b/content/rancher/v2.5/en/troubleshooting/networking/_index.md @@ -14,7 +14,7 @@ Double check if all the [required ports]({{}}/rancher/v2.5/en/cluster-p The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod. -To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/leodotcloud/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts. +To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/rancherlabs/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts. 1. 
Save the following file as `overlaytest.yml` @@ -35,7 +35,7 @@ To test the overlay network, you can launch the following `DaemonSet` definition tolerations: - operator: Exists containers: - - image: leodotcloud/swiss-army-knife + - image: rancherlabs/swiss-army-knife imagePullPolicy: Always name: overlaytest command: ["sh", "-c", "tail -f /dev/null"] @@ -113,4 +113,4 @@ To check if your cluster is affected, the following command will list nodes that kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name' ``` -If there is no output, the cluster is not affected. \ No newline at end of file +If there is no output, the cluster is not affected. From 16d7770a6840496e9a21ad065486f9dade36dabe Mon Sep 17 00:00:00 2001 From: Tejeev Date: Fri, 16 Apr 2021 18:04:17 +0100 Subject: [PATCH 22/24] update swiss army knife location --- .../rancher/v2.0-v2.4/en/troubleshooting/networking/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.0-v2.4/en/troubleshooting/networking/_index.md b/content/rancher/v2.0-v2.4/en/troubleshooting/networking/_index.md index d4ab581cb9b..f1e30f8109a 100644 --- a/content/rancher/v2.0-v2.4/en/troubleshooting/networking/_index.md +++ b/content/rancher/v2.0-v2.4/en/troubleshooting/networking/_index.md @@ -14,7 +14,7 @@ Double check if all the [required ports]({{}}/rancher/v2.0-v2.4/en/clus The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network.
If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod. -To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/leodotcloud/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts. +To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/rancherlabs/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts. 1. Save the following file as `overlaytest.yml` @@ -35,7 +35,7 @@ To test the overlay network, you can launch the following `DaemonSet` definition tolerations: - operator: Exists containers: - - image: leodotcloud/swiss-army-knife + - image: rancherlabs/swiss-army-knife imagePullPolicy: Always name: overlaytest command: ["sh", "-c", "tail -f /dev/null"] From 8bf6ddc43faa547baf815365e0feda2e19d6dc0d Mon Sep 17 00:00:00 2001 From: Tejeev Date: Fri, 16 Apr 2021 18:05:47 +0100 Subject: [PATCH 23/24] fix swissarmyknife locations --- content/rancher/v2.x/en/troubleshooting/networking/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.x/en/troubleshooting/networking/_index.md b/content/rancher/v2.x/en/troubleshooting/networking/_index.md index c4d10f7552b..1b13f8dfe5b 100644 --- a/content/rancher/v2.x/en/troubleshooting/networking/_index.md +++ b/content/rancher/v2.x/en/troubleshooting/networking/_index.md @@ -14,7 +14,7 @@ Double check if all the [required ports]({{}}/rancher/v2.x/en/cluster-p The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress 
controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod. -To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/leodotcloud/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts. +To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/rancherlabs/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts. 1. Save the following file as `overlaytest.yml` @@ -35,7 +35,7 @@ To test the overlay network, you can launch the following `DaemonSet` definition tolerations: - operator: Exists containers: - - image: leodotcloud/swiss-army-knife + - image: rancherlabs/swiss-army-knife imagePullPolicy: Always name: overlaytest command: ["sh", "-c", "tail -f /dev/null"] From 5869daa7d6d43b4803ca75351f59fc8340d7f7c3 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Tue, 20 Apr 2021 11:30:29 -0700 Subject: [PATCH 24/24] Add note about versioned Rancher docs --- README.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/README.md b/README.md index 8b877993dbd..9e92e8c1f06 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,14 @@ Rancher Docs ------------ +## Contributing + +We have transitioned to versioned documentation for Rancher (files within `content/rancher`). + +New contributions should be made to the applicable versioned directories (e.g. `content/rancher/v2.5` and `content/rancher/v2.0-v2.4`). 
+ +Contents under the `content/rancher/v2.x` directory are no longer maintained after v2.5.6. + ## Running for development/editing The `rancher/docs:dev` docker image runs a live-updating server. To run on your workstation, run: