diff --git a/README.md b/README.md index 8b877993dbd..9e92e8c1f06 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,14 @@ Rancher Docs ------------ +## Contributing + +We have transitioned to versioned documentation for Rancher (files within `content/rancher`). + +New contributions should be made to the applicable versioned directories (e.g. `content/rancher/v2.5` and `content/rancher/v2.0-v2.4`). + +Contents under the `content/rancher/v2.x` directory are no longer maintained after v2.5.6. + ## Running for development/editing The `rancher/docs:dev` docker image runs a live-updating server. To run on your workstation, run: diff --git a/content/k3s/latest/en/advanced/_index.md b/content/k3s/latest/en/advanced/_index.md index f48955149c4..a557e491fc4 100644 --- a/content/k3s/latest/en/advanced/_index.md +++ b/content/k3s/latest/en/advanced/_index.md @@ -21,6 +21,7 @@ This section contains advanced information describing the different ways you can - [Enabling legacy iptables on Raspbian Buster](#enabling-legacy-iptables-on-raspbian-buster) - [Enabling cgroups for Raspbian Buster](#enabling-cgroups-for-raspbian-buster) - [SELinux Support](#selinux-support) +- [Additional preparation for (Red Hat/CentOS) Enterprise Linux](#additional-preparation-for-red-hat-centos-enterprise-linux) # Certificate Rotation @@ -228,7 +229,7 @@ $ k3s server INFO[2019-01-22T15:16:19.908493986-07:00] Starting k3s dev INFO[2019-01-22T15:16:19.908934479-07:00] Running kube-apiserver --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key Flag --insecure-port has been deprecated, This flag will be removed in a future version. -INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false +INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false INFO[2019-01-22T15:16:20.196880841-07:00] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false Flag --port has been deprecated, see --secure-port instead. INFO[2019-01-22T15:16:20.273441984-07:00] Listening on :6443 @@ -366,3 +367,10 @@ Using a custom `--data-dir` under SELinux is not supported. 
To customize it, you {{%/tab%}} {{% /tabs %}} + +# Additional preparation for (Red Hat/CentOS) Enterprise Linux + +It is recommended to turn off firewalld: +``` +systemctl disable firewalld --now +``` diff --git a/content/k3s/latest/en/installation/_index.md b/content/k3s/latest/en/installation/_index.md index 91997c7a2a9..6d595b68ff7 100644 --- a/content/k3s/latest/en/installation/_index.md +++ b/content/k3s/latest/en/installation/_index.md @@ -13,6 +13,8 @@ This section contains instructions for installing K3s in various environments. P [Air-Gap Installation]({{}}/k3s/latest/en/installation/airgap/) details how to set up K3s in environments that do not have direct access to the Internet. +[Disable Components Flags]({{}}/k3s/latest/en/installation/disable-flags/) details how to set up K3s with etcd-only nodes and control-plane-only nodes. + ### Uninstalling If you installed K3s with the help of the `install.sh` script, an uninstall script is generated during installation. The script is created on your node at `/usr/local/bin/k3s-uninstall.sh` (or as `k3s-agent-uninstall.sh`). diff --git a/content/k3s/latest/en/installation/disable-flags/_index.md b/content/k3s/latest/en/installation/disable-flags/_index.md new file mode 100644 index 00000000000..6652c85d704 --- /dev/null +++ b/content/k3s/latest/en/installation/disable-flags/_index.md @@ -0,0 +1,73 @@ +--- +title: "Disable Components Flags" +weight: 60 +--- + +When you start a K3s server with `--cluster-init`, it runs all of the control plane components (API server, controller manager, scheduler, and etcd). However, you can run server nodes with only certain components and exclude the others. The following sections explain how to do that. + +# ETCD Only Nodes + +This document assumes you run the K3s server with embedded etcd by passing the `--cluster-init` flag to the server process. + +To run a K3s server with only the etcd component, pass the `--disable-api-server --disable-controller-manager --disable-scheduler` flags to K3s. This results in a server node that runs only etcd. For example: + +``` +curl -fL https://get.k3s.io | sh -s - server --cluster-init --disable-api-server --disable-controller-manager --disable-scheduler +``` + +You can then join other nodes to the cluster normally. + +# Disable ETCD + +You can also disable etcd on a server node, which results in a K3s server that runs the control plane components but not etcd. This is done by running the K3s server with the `--disable-etcd` flag. For example, to join another node with only control plane components to the etcd node created in the previous section: + +``` +curl -fL https://get.k3s.io | sh -s - server --token --disable-etcd --server https://:6443 +``` + +The end result is two nodes: one is an etcd-only node and the other is a control-plane-only node. If you check the node list, you should see something like the following: + +``` +kubectl get nodes +NAME STATUS ROLES AGE VERSION +ip-172-31-13-32 Ready etcd 5h39m v1.20.4+k3s1 +ip-172-31-14-69 Ready control-plane,master 5h39m v1.20.4+k3s1 +``` + +Note that you can run `kubectl` commands only on the K3s server that runs the API server; you can't run `kubectl` commands on etcd-only nodes.
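As an illustrative check (assuming the default installation shown above, where K3s manages the kubeconfig itself), you can confirm the split by running `kubectl` through the K3s binary on the control-plane-only node; the same commands on the etcd-only node fail because no local kube-apiserver is listening there:

```
# On the control-plane-only node (the one started with --disable-etcd):
sudo k3s kubectl get nodes -o wide
sudo k3s kubectl -n kube-system get pods

# On the etcd-only node these commands fail, since the API server is disabled on it.
```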
+ + +### Re-enabling control components + +In both cases you can re-enable any component you disabled simply by removing the corresponding disable flag. For example, to revert the etcd-only node back to a full K3s server with all components, remove the three flags `--disable-api-server --disable-controller-manager --disable-scheduler`. In our example, to revert node `ip-172-31-13-32` back to a full K3s server, re-run the curl command without the disable flags: +``` +curl -fL https://get.k3s.io | sh -s - server --cluster-init +``` + +You will notice that all components start again and you can run `kubectl` commands again: + +``` +kubectl get nodes +NAME STATUS ROLES AGE VERSION +ip-172-31-13-32 Ready control-plane,etcd,master 5h45m v1.20.4+k3s1 +ip-172-31-14-69 Ready control-plane,master 5h45m v1.20.4+k3s1 +``` + +Notice that the role labels have been re-added to node `ip-172-31-13-32` with the correct values (control-plane,etcd,master). + +# Add disable flags using the config file + +In any of the previous situations, you can use the config file instead of running the curl commands with the associated flags. For example, to run an etcd-only node, add the following options to the `/etc/rancher/k3s/config.yaml` file: + +``` +--- +disable-api-server: true +disable-controller-manager: true +disable-scheduler: true +cluster-init: true +``` +and then start K3s using the curl command without any arguments: + +``` +curl -fL https://get.k3s.io | sh - +``` \ No newline at end of file diff --git a/content/k3s/latest/en/installation/installation-requirements/_index.md b/content/k3s/latest/en/installation/installation-requirements/_index.md index 796451ac6a3..1b5d14825de 100644 --- a/content/k3s/latest/en/installation/installation-requirements/_index.md +++ b/content/k3s/latest/en/installation/installation-requirements/_index.md @@ -23,6 +23,7 @@ Some OSs have specific requirements: - If you are using **Raspbian Buster**, follow [these steps]({{}}/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster) to switch to legacy iptables. - If you are using **Alpine Linux**, follow [these steps]({{}}/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup) for additional setup. +- If you are using **(Red Hat/CentOS) Enterprise Linux**, follow [these steps]({{}}/k3s/latest/en/advanced/#additional-preparation-for-red-hat-centos-enterprise-linux) for additional setup. 
For more information on which OSs were tested with Rancher managed K3s clusters, refer to the [Rancher support and maintenance terms.](https://rancher.com/support-maintenance-terms/) diff --git a/content/k3s/latest/en/installation/kube-dashboard/_index.md b/content/k3s/latest/en/installation/kube-dashboard/_index.md index 127770e858c..cb5c15bfc36 100644 --- a/content/k3s/latest/en/installation/kube-dashboard/_index.md +++ b/content/k3s/latest/en/installation/kube-dashboard/_index.md @@ -53,7 +53,7 @@ sudo k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role ### Obtain the Bearer Token ```bash -sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token | grep ^token +sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token | grep '^token' ``` ### Local Access to the Dashboard diff --git a/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/_index.md b/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/_index.md index 80067909c23..e49c62f7d23 100644 --- a/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/_index.md +++ b/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/_index.md @@ -18,7 +18,7 @@ aliases: --- The following instructions will guide you through upgrading a Rancher server that was installed on a Kubernetes cluster with Helm. These steps also apply to air gap installs with Helm. -For the instructions to upgrade Rancher installed with Docker, refer to [ths page.]({{}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/single-node-upgrades) +For the instructions to upgrade Rancher installed with Docker, refer to [this page.]({{}}/rancher/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/single-node-upgrades) To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{}}/rke/latest/en/config-options/services/) or [add-ons]({{}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE]({{}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine. 
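For orientation, the Helm-based upgrade that page documents essentially refreshes the chart repository and re-runs `helm upgrade` with the values already in use — a minimal sketch, assuming Rancher was installed from the `rancher-latest` chart repository into the `cattle-system` namespace and that `hostname` is the only custom value (adjust both to match your install):

```
helm repo update
helm get values rancher -n cattle-system   # review the values the release currently uses
helm upgrade rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com
```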
diff --git a/content/rancher/v2.0-v2.4/en/installation/requirements/ports/_index.md b/content/rancher/v2.0-v2.4/en/installation/requirements/ports/_index.md index 97eaaf4fd89..682497174d5 100644 --- a/content/rancher/v2.0-v2.4/en/installation/requirements/ports/_index.md +++ b/content/rancher/v2.0-v2.4/en/installation/requirements/ports/_index.md @@ -62,7 +62,7 @@ The following tables break down the port requirements for inbound and outbound t | Protocol | Port | Destination | Description | | -------- | ---- | -------------------------------------------------------- | --------------------------------------------- | | TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | -| TCP | 443 | `35.160.43.145/32`, `35.167.242.46/32`, `52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 443 | git.rancher.io | Rancher catalog | | TCP | 2376 | Any node IP from a node created using Node driver | Docker daemon TLS port used by Docker Machine | | TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | @@ -130,7 +130,7 @@ The following tables break down the port requirements for Rancher nodes, for inb | Protocol | Port | Source | Description | |-----|-----|----------------|---| | TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | -| TCP | 443 | `35.160.43.145/32`,`35.167.242.46/32`,`52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 443 | git.rancher.io | Rancher catalog | | TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine | | TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | diff --git a/content/rancher/v2.0-v2.4/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md b/content/rancher/v2.0-v2.4/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md index 28585766c01..0b9927cb880 100644 --- a/content/rancher/v2.0-v2.4/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md +++ b/content/rancher/v2.0-v2.4/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md @@ -45,14 +45,14 @@ If the Rancher server is installed in a single Docker container, you only need o ``` sudo ssh -i [path-to-private-key] ubuntu@[public-DNS-of-instance] ``` -1. When you are connected to the instance, run the following command on the instance to create a user: -``` -sudo usermod -aG docker ubuntu -``` 1. Run the following command on the instance to install Docker with one of Rancher's installation scripts: ``` curl https://releases.rancher.com/install-docker/18.09.sh | sh ``` +1. When you are connected to the instance, run the following command on the instance to add user `ubuntu` to group `docker`: +``` +sudo usermod -aG docker ubuntu +``` 1. Repeat these steps so that Docker is installed on each node that will eventually run the Rancher management server. > To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher’s Docker installation scripts. 
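As a quick sanity check (not part of the original steps), you can confirm that the `ubuntu` user's new `docker` group membership works after reconnecting to the instance, since group changes only apply to new login sessions:

```
# Reconnect (or run `newgrp docker`), then verify Docker works without sudo:
docker version
docker ps   # empty container list on a fresh install
```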
diff --git a/content/rancher/v2.0-v2.4/en/troubleshooting/networking/_index.md b/content/rancher/v2.0-v2.4/en/troubleshooting/networking/_index.md index d4ab581cb9b..f1e30f8109a 100644 --- a/content/rancher/v2.0-v2.4/en/troubleshooting/networking/_index.md +++ b/content/rancher/v2.0-v2.4/en/troubleshooting/networking/_index.md @@ -14,7 +14,7 @@ Double check if all the [required ports]({{}}/rancher/v2.0-v2.4/en/clus The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod. -To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/leodotcloud/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts. +To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/rancherlabs/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts. 1. Save the following file as `overlaytest.yml` @@ -35,7 +35,7 @@ To test the overlay network, you can launch the following `DaemonSet` definition tolerations: - operator: Exists containers: - - image: leodotcloud/swiss-army-knife + - image: rancherlabs/swiss-army-knife imagePullPolicy: Always name: overlaytest command: ["sh", "-c", "tail -f /dev/null"] diff --git a/content/rancher/v2.5/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md b/content/rancher/v2.5/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md index 23674b37a11..b4f8655e59e 100644 --- a/content/rancher/v2.5/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md +++ b/content/rancher/v2.5/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md @@ -51,5 +51,5 @@ After you complete [Configuring Microsoft AD FS for Rancher]({{}}/ranch **Tip:** You can generate a certificate using an openssl command. For example: ``` -openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com" +openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=https://myservice.example.com" ``` \ No newline at end of file diff --git a/content/rancher/v2.5/en/cluster-admin/nodes/_index.md b/content/rancher/v2.5/en/cluster-admin/nodes/_index.md index 2fcd5b620d9..ca88ce4ab49 100644 --- a/content/rancher/v2.5/en/cluster-admin/nodes/_index.md +++ b/content/rancher/v2.5/en/cluster-admin/nodes/_index.md @@ -29,6 +29,7 @@ This section covers the following topics: # Node Options Available for Each Cluster Creation Option The following table lists which node options are available for each type of cluster in Rancher. Click the links in the **Option** column for more detailed information about each feature. 
+ | Option | [Nodes Hosted by an Infrastructure Provider][1] | [Custom Node][2] | [Hosted Cluster][3] | [Registered EKS Nodes][4] | [All Other Registered Nodes][5] | Description | | ------------------------------------------------ | ------------------------------------------------ | ---------------- | ------------------- | ------------------- | -------------------| ------------------------------------------------------------------ | | [Cordon](#cordoning-a-node) | ✓ | ✓ | ✓ | ✓ | ✓ | Marks the node as unschedulable. | diff --git a/content/rancher/v2.5/en/cluster-provisioning/cluster-capabilities-table/index.md b/content/rancher/v2.5/en/cluster-provisioning/cluster-capabilities-table/index.md index 404c8a0e057..81cbefffc6d 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/cluster-capabilities-table/index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/cluster-capabilities-table/index.md @@ -11,12 +11,12 @@ headless: true | [Managing Projects, Namespaces and Workloads]({{}}/rancher/v2.5/en/cluster-admin/projects-and-namespaces/) | ✓ | ✓ | ✓ | | [Using App Catalogs]({{}}/rancher/v2.5/en/catalog/) | ✓ | ✓ | ✓ | | [Configuring Tools (Alerts, Notifiers, Logging, Monitoring, Istio)]({{}}/rancher/v2.5/en/cluster-admin/tools/) | ✓ | ✓ | ✓ | +| [Running Security Scans]({{}}/rancher/v2.5/en/security/security-scan/) | ✓ | ✓ | ✓ | | [Cloning Clusters]({{}}/rancher/v2.5/en/cluster-admin/cloning-clusters/)| ✓ | ✓ | | | [Ability to rotate certificates]({{}}/rancher/v2.5/en/cluster-admin/certificate-rotation/) | ✓ | | | | [Ability to back up your Kubernetes Clusters]({{}}/rancher/v2.5/en/cluster-admin/backing-up-etcd/) | ✓ | | | | [Ability to recover and restore etcd]({{}}/rancher/v2.5/en/cluster-admin/restoring-etcd/) | ✓ | | | | [Cleaning Kubernetes components when clusters are no longer reachable from Rancher]({{}}/rancher/v2.5/en/cluster-admin/cleaning-cluster-nodes/) | ✓ | | | | [Configuring Pod Security Policies]({{}}/rancher/v2.5/en/cluster-admin/pod-security-policy/) | ✓ | | | -| [Running Security Scans]({{}}/rancher/v2.5/en/security/security-scan/) | ✓ | | | -\* Cluster configuration options can't be edited for imported clusters, except for [K3s clusters.]({{}}/rancher/v2.5/en/cluster-provisioning/imported-clusters/) +\* Cluster configuration options can't be edited for imported clusters, except for [K3s and RKE2 clusters.]({{}}/rancher/v2.5/en/cluster-provisioning/imported-clusters/) diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md index 8bb36b15c2f..772b79b1740 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md @@ -96,7 +96,7 @@ If you have configured your cluster to use Amazon as **Cloud Provider**, tag you >**Note:** You can use Amazon EC2 instances without configuring a cloud provider in Kubernetes. You only have to configure the cloud provider if you want to use specific Kubernetes cloud provider functionality. For more information, see [Kubernetes Cloud Providers](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/) -The following resources need to tagged with a `ClusterID`: +The following resources need to be tagged with a `ClusterID`: - **Nodes**: All hosts added in Rancher. 
- **Subnet**: The subnet used for your cluster @@ -123,4 +123,4 @@ Key=kubernetes.io/cluster/CLUSTERID, Value=shared After creating your cluster, you can access it through the Rancher UI. As a best practice, we recommend setting up these alternate ways of accessing your cluster: - **Access your cluster with the kubectl CLI:** Follow [these steps]({{}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#accessing-clusters-with-kubectl-on-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server’s authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI. -- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster. \ No newline at end of file +- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster. diff --git a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/_index.md b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/_index.md index b924a20ee13..1506c27e27b 100644 --- a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/_index.md +++ b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/upgrades/_index.md @@ -20,7 +20,7 @@ The following instructions will guide you through upgrading a Rancher server tha For the instructions to upgrade Rancher installed on Kubernetes with RancherD, refer to [this page.]({{}}/rancher/v2.5/en/installation/install-rancher-on-linux/upgrades) -For the instructions to upgrade Rancher installed with Docker, refer to [ths page.]({{}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/single-node-upgrades) +For the instructions to upgrade Rancher installed with Docker, refer to [this page.]({{}}/rancher/v2.5/en/installation/other-installation-methods/single-node-docker/single-node-upgrades) To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{}}/rke/latest/en/config-options/services/) or [add-ons]({{}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE]({{}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine. 
diff --git a/content/rancher/v2.5/en/installation/requirements/ports/_index.md b/content/rancher/v2.5/en/installation/requirements/ports/_index.md index a78ed8d8046..7595afab4c1 100644 --- a/content/rancher/v2.5/en/installation/requirements/ports/_index.md +++ b/content/rancher/v2.5/en/installation/requirements/ports/_index.md @@ -65,7 +65,7 @@ The following tables break down the port requirements for inbound and outbound t | Protocol | Port | Destination | Description | | -------- | ---- | -------------------------------------------------------- | --------------------------------------------- | | TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | -| TCP | 443 | `35.160.43.145/32`, `35.167.242.46/32`, `52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 443 | git.rancher.io | Rancher catalog | | TCP | 2376 | Any node IP from a node created using Node driver | Docker daemon TLS port used by Docker Machine | | TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | @@ -162,7 +162,7 @@ The following tables break down the port requirements for Rancher nodes, for inb | Protocol | Port | Source | Description | |-----|-----|----------------|---| | TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | -| TCP | 443 | `35.160.43.145/32`,`35.167.242.46/32`,`52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 443 | git.rancher.io | Rancher catalog | | TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine | | TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | diff --git a/content/rancher/v2.5/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md b/content/rancher/v2.5/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md index 61469605176..f0bb8732c52 100644 --- a/content/rancher/v2.5/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md +++ b/content/rancher/v2.5/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md @@ -48,14 +48,14 @@ If the Rancher server is installed in a single Docker container, you only need o ``` sudo ssh -i [path-to-private-key] ubuntu@[public-DNS-of-instance] ``` -1. When you are connected to the instance, run the following command on the instance to create a user: -``` -sudo usermod -aG docker ubuntu -``` 1. Run the following command on the instance to install Docker with one of Rancher's installation scripts: ``` curl https://releases.rancher.com/install-docker/18.09.sh | sh ``` +1. When you are connected to the instance, run the following command on the instance to add user `ubuntu` to group `docker`: +``` +sudo usermod -aG docker ubuntu +``` 1. Repeat these steps so that Docker is installed on each node that will eventually run the Rancher management server. > To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher’s Docker installation scripts. diff --git a/content/rancher/v2.5/en/longhorn/_index.md b/content/rancher/v2.5/en/longhorn/_index.md index ed5a42b1370..5ba7a59deca 100644 --- a/content/rancher/v2.5/en/longhorn/_index.md +++ b/content/rancher/v2.5/en/longhorn/_index.md @@ -32,6 +32,7 @@ These instructions assume you are using Rancher v2.5, but Longhorn can be instal ### Installing Longhorn with Rancher +1. 
Fulfill all [Installation Requirements.](https://longhorn.io/docs/1.1.0/deploy/install/#installation-requirements) 1. Go to the **Cluster Explorer** in the Rancher UI. 1. Click **Apps.** 1. Click `longhorn`. @@ -43,7 +44,7 @@ These instructions assume you are using Rancher v2.5, but Longhorn can be instal ### Accessing Longhorn from the Rancher UI 1. From the **Cluster Explorer,** go to the top left dropdown menu and click **Cluster Explorer > Longhorn.** -1. On this page, you can edit Kubernetes resources managed by Longhorn. To view the Longhorn UI, click the **Longhorn** button in the **Overview**section. +1. On this page, you can edit Kubernetes resources managed by Longhorn. To view the Longhorn UI, click the **Longhorn** button in the **Overview** section. **Result:** You will be taken to the Longhorn UI, where you can manage your Longhorn volumes and their replicas in the Kubernetes cluster, as well as secondary backups of your Longhorn storage that may exist in another Kubernetes cluster or in S3. @@ -73,4 +74,4 @@ The storage controller and replicas are themselves orchestrated using Kubernetes You can learn more about its architecture [here.](https://longhorn.io/docs/1.0.2/concepts/)
Longhorn Architecture
-![Longhorn Architecture]({{}}/img/rancher/longhorn-architecture.svg) \ No newline at end of file +![Longhorn Architecture]({{}}/img/rancher/longhorn-architecture.svg) diff --git a/content/rancher/v2.5/en/troubleshooting/networking/_index.md b/content/rancher/v2.5/en/troubleshooting/networking/_index.md index fe62cf44647..ac1f7a48ce1 100644 --- a/content/rancher/v2.5/en/troubleshooting/networking/_index.md +++ b/content/rancher/v2.5/en/troubleshooting/networking/_index.md @@ -14,7 +14,7 @@ Double check if all the [required ports]({{}}/rancher/v2.5/en/cluster-p The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod. -To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/leodotcloud/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts. +To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/rancherlabs/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts. 1. Save the following file as `overlaytest.yml` @@ -35,7 +35,7 @@ To test the overlay network, you can launch the following `DaemonSet` definition tolerations: - operator: Exists containers: - - image: leodotcloud/swiss-army-knife + - image: rancherlabs/swiss-army-knife imagePullPolicy: Always name: overlaytest command: ["sh", "-c", "tail -f /dev/null"] @@ -113,4 +113,4 @@ To check if your cluster is affected, the following command will list nodes that kubectl get nodes -o json | jq '.items[].metadata | select(.annotations["flannel.alpha.coreos.com/public-ip"] == null or .annotations["flannel.alpha.coreos.com/kube-subnet-manager"] == null or .annotations["flannel.alpha.coreos.com/backend-type"] == null or .annotations["flannel.alpha.coreos.com/backend-data"] == null) | .name' ``` -If there is no output, the cluster is not affected. \ No newline at end of file +If there is no output, the cluster is not affected. diff --git a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md index c6d45667d4c..6dc6fe240df 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md @@ -52,5 +52,5 @@ After you complete [Configuring Microsoft AD FS for Rancher]({{}}/ranch **Tip:** You can generate a certificate using an openssl command. 
For example: ``` -openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com" -``` \ No newline at end of file +openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=https://myservice.example.com" +``` diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md index 7ba765989f7..b2e11e8967a 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/out-of-tree/_index.md @@ -36,12 +36,13 @@ The Cloud Provider Interface (CPI) should be installed first before installing t ``` kubectl describe nodes | grep "ProviderID" ``` + ### 3. Installing the CSI plugin - 1. From the **Cluster Explorer** view, go to the top left dropdown menu and click **Apps & Marketplace.** -1. Select the **vSphere CSI** chart. Fill out the required vCenter details. -2. Set **Enable CSI Migration** to **false**. -3. This chart creates a StorageClass with the `csi.vsphere.vmware.com` as the provisioner. Fill out the details for the StorageClass and launch the chart. +1. From the **Cluster Explorer** view, go to the top left dropdown menu and click **Apps & Marketplace.** +2. Select the **vSphere CSI** chart. Fill out the required vCenter details. +3. Set **Enable CSI Migration** to **false**. +4. This chart creates a StorageClass with the `csi.vsphere.vmware.com` as the provisioner. Fill out the details for the StorageClass and launch the chart. # Using the CSI driver for provisioning volumes The CSI chart by default creates a storageClass. diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md index 849a4aef754..840e672c0d1 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md @@ -27,7 +27,7 @@ Set up the Rancher server's local Kubernetes cluster. The cluster requirements depend on the Rancher version: -- **As of Rancher v2.5,** Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS. Note: To deploy Rancher v2.5 on a hosted Kubernetes cluster such as EKS, GKE, or AKS, you should deploy a compatible Ingress controller first to configure [SSL termination on Rancher.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#4-choose-your-ssl-configuration) +- **As of Rancher v2.5,** Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, or it can use one of Rancher's Kubernetes distributions, or it can be a managed Kubernetes cluster from a provider such as Amazon EKS. Note: To deploy Rancher v2.5 on a hosted Kubernetes cluster such as EKS, GKE, or AKS, you should deploy a compatible Ingress controller first to configure [SSL termination on Rancher.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#3-choose-your-ssl-configuration) - **In Rancher v2.4.x,** Rancher needs to be installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster. 
- **In Rancher before v2.4,** Rancher needs to be installed on an RKE Kubernetes cluster. diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md index f622a84befd..0929a70d6c4 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md @@ -20,7 +20,7 @@ The following instructions will guide you through upgrading a Rancher server tha For the instructions to upgrade Rancher installed on Kubernetes with RancherD, refer to [this page.]({{}}/rancher/v2.x/en/installation/install-rancher-on-linux/upgrades) -For the instructions to upgrade Rancher installed with Docker, refer to [ths page.]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades) +For the instructions to upgrade Rancher installed with Docker, refer to [this page.]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades) To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{}}/rke/latest/en/config-options/services/) or [add-ons]({{}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE]({{}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine. diff --git a/content/rancher/v2.x/en/installation/requirements/ports/_index.md b/content/rancher/v2.x/en/installation/requirements/ports/_index.md index 2cb204d6ea8..d98190dc18e 100644 --- a/content/rancher/v2.x/en/installation/requirements/ports/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/ports/_index.md @@ -67,7 +67,7 @@ The following tables break down the port requirements for inbound and outbound t | Protocol | Port | Destination | Description | | -------- | ---- | -------------------------------------------------------- | --------------------------------------------- | | TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | -| TCP | 443 | `35.160.43.145/32`, `35.167.242.46/32`, `52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 443 | git.rancher.io | Rancher catalog | | TCP | 2376 | Any node IP from a node created using Node driver | Docker daemon TLS port used by Docker Machine | | TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | @@ -164,7 +164,7 @@ The following tables break down the port requirements for Rancher nodes, for inb | Protocol | Port | Source | Description | |-----|-----|----------------|---| | TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | -| TCP | 443 | `35.160.43.145/32`,`35.167.242.46/32`,`52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 443 | git.rancher.io | Rancher catalog | | TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine | | TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | diff --git a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md index 564ccdb49fb..2e01e815809 100644 --- a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md +++ 
b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md @@ -47,14 +47,14 @@ If the Rancher server is installed in a single Docker container, you only need o ``` sudo ssh -i [path-to-private-key] ubuntu@[public-DNS-of-instance] ``` -1. When you are connected to the instance, run the following command on the instance to create a user: -``` -sudo usermod -aG docker ubuntu -``` 1. Run the following command on the instance to install Docker with one of Rancher's installation scripts: ``` curl https://releases.rancher.com/install-docker/18.09.sh | sh ``` +1. When you are connected to the instance, run the following command on the instance to add user `ubuntu` to group `docker`: +``` +sudo usermod -aG docker ubuntu +``` 1. Repeat these steps so that Docker is installed on each node that will eventually run the Rancher management server. > To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher’s Docker installation scripts. diff --git a/content/rancher/v2.x/en/troubleshooting/networking/_index.md b/content/rancher/v2.x/en/troubleshooting/networking/_index.md index c4d10f7552b..1b13f8dfe5b 100644 --- a/content/rancher/v2.x/en/troubleshooting/networking/_index.md +++ b/content/rancher/v2.x/en/troubleshooting/networking/_index.md @@ -14,7 +14,7 @@ Double check if all the [required ports]({{}}/rancher/v2.x/en/cluster-p The pod can be scheduled to any of the hosts you used for your cluster, but that means that the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures due to the NGINX ingress controller not being able to route to the pod. -To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/leodotcloud/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts. +To test the overlay network, you can launch the following `DaemonSet` definition. This will run a `swiss-army-knife` container on every host (image was developed by Rancher engineers and can be found here: https://github.com/rancherlabs/swiss-army-knife), which we will use to run a `ping` test between containers on all hosts. 1. 
Save the following file as `overlaytest.yml` @@ -35,7 +35,7 @@ To test the overlay network, you can launch the following `DaemonSet` definition tolerations: - operator: Exists containers: - - image: leodotcloud/swiss-army-knife + - image: rancherlabs/swiss-army-knife imagePullPolicy: Always name: overlaytest command: ["sh", "-c", "tail -f /dev/null"] diff --git a/content/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/_index.md b/content/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/_index.md index ff4c44aea35..b079ae7dd35 100644 --- a/content/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/_index.md +++ b/content/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/_index.md @@ -37,7 +37,7 @@ rancher_kubernetes_engine_config: workspace: server: vc.example.com folder: myvmfolder - default-datastore: /eu-west-1/datastore/ds-1 + default-datastore: ds-1 datacenter: /eu-west-1 resourcepool-path: /eu-west-1/host/hn1/resources/myresourcepool diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index da210b14422..90882ed3340 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -176,7 +176,7 @@ Consult the project pages for openSUSE MicroOS and Kubic for installation Designed to host container workloads with automated administration & patching. Installing openSUSE MicroOS you get a quick, small environment for deploying Containers, or any other workload that benefits from Transactional Updates. As rolling release distribution the software is always up-to-date. https://microos.opensuse.org #### openSUSE Kubic -Based on MicroOS, but not a rolling release distribution. Designed with the same things in mind but also a Certified Kubernetes Distribution. +Based on openSUSE MicroOS, designed with the same things in mind but is focused on being a Certified Kubernetes Distribution. https://kubic.opensuse.org Installation instructions: https://kubic.opensuse.org/blog/2021-02-08-MicroOS-Kubic-Rancher-RKE/ diff --git a/layouts/shortcodes/ports-custom-nodes.html b/layouts/shortcodes/ports-custom-nodes.html index 45af1975f8f..b5dfa8f4a26 100644 --- a/layouts/shortcodes/ports-custom-nodes.html +++ b/layouts/shortcodes/ports-custom-nodes.html @@ -18,7 +18,7 @@ - git.rancher.io (2):
35.160.43.145:32
35.167.242.46:32
52.33.59.17:32 + git.rancher.io etcd Plane Nodes diff --git a/layouts/shortcodes/ports-iaas-nodes.html b/layouts/shortcodes/ports-iaas-nodes.html index 3079e9bdb21..45b401149f5 100644 --- a/layouts/shortcodes/ports-iaas-nodes.html +++ b/layouts/shortcodes/ports-iaas-nodes.html @@ -16,7 +16,7 @@ 22 TCP - git.rancher.io (2):
35.160.43.145:32
35.167.242.46:32
52.33.59.17:32 + git.rancher.io diff --git a/layouts/shortcodes/ports-imported-hosted.html b/layouts/shortcodes/ports-imported-hosted.html index ea9cf448bad..48e4201bae6 100644 --- a/layouts/shortcodes/ports-imported-hosted.html +++ b/layouts/shortcodes/ports-imported-hosted.html @@ -14,7 +14,7 @@ Kubernetes API
Endpoint Port (2) - git.rancher.io (3):
35.160.43.145:32
35.167.242.46:32
52.33.59.17:32 + git.rancher.io