diff --git a/content/_index.html b/content/_index.html
index 583a56f29b9..ccad7160a18 100644
--- a/content/_index.html
+++ b/content/_index.html
@@ -69,7 +69,7 @@
-
+
@@ -110,7 +110,7 @@

Rancher manages all of your Kubernetes clusters everywhere, unifies them under centralized RBAC, monitors them, and lets you easily deploy and manage workloads through an intuitive user interface.

-
+
@@ -164,7 +164,7 @@

RancherOS is the lightest, easiest way to run Docker in production. Engineered from the ground up for security and speed, it runs all system services and user workloads within Docker containers.

-
+
@@ -191,7 +191,7 @@

Rancher Kubernetes Engine (RKE) is an extremely simple, lightning-fast Kubernetes installer that works everywhere.

-
+
@@ -215,10 +215,10 @@
-

Lightweight Kubernetes. Easy to install, half the memory, all in a binary less than 40mb.

+

Lightweight Kubernetes. Easy to install, half the memory, all in a binary less than 50mb.

- + diff --git a/content/k3s/latest/en/_index.md b/content/k3s/latest/en/_index.md index a1b0508c132..952027f0c93 100644 --- a/content/k3s/latest/en/_index.md +++ b/content/k3s/latest/en/_index.md @@ -4,7 +4,7 @@ shortTitle: K3s name: "menu" --- -Lightweight Kubernetes. Easy to install, half the memory, all in a binary less than 50mb. +Lightweight Kubernetes. Easy to install, half the memory, all in a binary of less than 50mb. Great for: @@ -12,7 +12,7 @@ Great for: * IoT * CI * ARM -* Situations where a PhD in k8s clusterology is infeasible +* Situations where a PhD in K8s clusterology is infeasible # What is K3s? diff --git a/content/k3s/latest/en/advanced/_index.md b/content/k3s/latest/en/advanced/_index.md index a7b3a4262e0..d7c4c271d24 100644 --- a/content/k3s/latest/en/advanced/_index.md +++ b/content/k3s/latest/en/advanced/_index.md @@ -10,11 +10,14 @@ This section contains advanced information describing the different ways you can - [Auto-deploying manifests](#auto-deploying-manifests) - [Using Docker as the container runtime](#using-docker-as-the-container-runtime) +- [Secrets Encryption Config (Experimental)](#secrets-encryption-config-experimental) - [Running K3s with RootlessKit (Experimental)](#running-k3s-with-rootlesskit-experimental) - [Node labels and taints](#node-labels-and-taints) - [Starting the server with the installation script](#starting-the-server-with-the-installation-script) - [Additional preparation for Alpine Linux setup](#additional-preparation-for-alpine-linux-setup) - [Running K3d (K3s in Docker) and docker-compose](#running-k3d-k3s-in-docker-and-docker-compose) +- [Enabling legacy iptables on Raspbian Buster](#enabling-legacy-iptables-on-raspbian-buster) +- [Experimental SELinux Support](#experimental-selinux-support) # Auto-Deploying Manifests @@ -30,6 +33,45 @@ K3s will generate config.toml for containerd in `/var/lib/rancher/k3s/agent/etc/ The `config.toml.tmpl` will be treated as a Golang template file, and the `config.Node` structure is being passed to the template, the following is an example on how to use the structure to customize the configuration file https://github.com/rancher/k3s/blob/master/pkg/agent/templates/templates.go#L16-L32 +# Secrets Encryption Config (Experimental) +As of v1.17.4+k3s1, K3s added the experimental feature of enabling secrets encryption at rest by passing the flag `--secrets-encryption` on a server, this flag will do the following automatically: + +- Generate an AES-CBC key +- Generate an encryption config file with the generated key + +``` +{ + "kind": "EncryptionConfiguration", + "apiVersion": "apiserver.config.k8s.io/v1", + "resources": [ + { + "resources": [ + "secrets" + ], + "providers": [ + { + "aescbc": { + "keys": [ + { + "name": "aescbckey", + "secret": "xxxxxxxxxxxxxxxxxxx" + } + ] + } + }, + { + "identity": {} + } + ] + } + ] +} +``` + +- Pass the config to the KubeAPI as encryption-provider-config + +Once enabled any created secret will be encrypted with this key. Note that if you disable encryption then any encrypted secrets will not be readable until you enable encryption again. + # Running K3s with RootlessKit (Experimental) > **Warning:** This feature is experimental. @@ -162,3 +204,27 @@ Alternatively the `docker run` command can also be used: -e K3S_TOKEN=${NODE_TOKEN} \ --privileged rancher/k3s:vX.Y.Z + +# Enabling legacy iptables on Raspbian Buster + +Raspbian Buster defaults to using `nftables` instead of `iptables`. **K3S** networking features require `iptables` and do not work with `nftables`. 
Follow the steps below to switch configure **Buster** to use `legacy iptables`: +``` +sudo iptables -F +sudo update-alternatives --set iptables /usr/sbin/iptables-legacy +sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy +sudo reboot +``` + +# Experimental SELinux Support + +As of release v1.17.4+k3s1, experimental support for SELinux has been added to K3s's embedded containerd. If you are installing K3s on a system where SELinux is enabled by default (such as CentOS), you must ensure the proper SELinux policies have been installed. The [install script]({{}}/k3s/latest/en/installation/install-options/#installation-script-options) will fail if they are not. The necessary policies can be installed with the following commands: +``` +yum install -y container-selinux selinux-policy-base +rpm -i https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm +``` + +To force the install script to log a warning rather than fail, you can set the following environment variable: `INSTALL_K3S_SELINUX_WARN=true`. + +You can turn off SELinux enforcement in the embedded containerd by launching K3s with the `--disable-selinux` flag. + +Note that support for SELinux in containerd is still under development. Progress can be tracked in [this pull request](https://github.com/containerd/cri/pull/1246). diff --git a/content/k3s/latest/en/architecture/_index.md b/content/k3s/latest/en/architecture/_index.md index 0b04ddbfd8c..6b116eb62e2 100644 --- a/content/k3s/latest/en/architecture/_index.md +++ b/content/k3s/latest/en/architecture/_index.md @@ -33,7 +33,7 @@ Single server clusters can meet a variety of use cases, but for environments whe * An **external datastore** (as opposed to the embedded SQLite datastore used in single-server setups)
K3s Architecture with a High-availability Server
-![Architecture]({{< baseurl >}}/img/rancher/k3s-architecture-ha-server.png) +![Architecture]({{}}/img/rancher/k3s-architecture-ha-server.png) ### Fixed Registration Address for Agent Nodes @@ -41,7 +41,7 @@ In the high-availability server configuration, each node must also register with After registration, the agent nodes establish a connection directly to one of the server nodes. -![k3s HA]({{< baseurl >}}/img/k3s/k3s-production-setup.svg) +![k3s HA]({{}}/img/k3s/k3s-production-setup.svg) # How Agent Node Registration Works diff --git a/content/k3s/latest/en/installation/_index.md b/content/k3s/latest/en/installation/_index.md index 3a6fb03fa7f..b141bcce42b 100644 --- a/content/k3s/latest/en/installation/_index.md +++ b/content/k3s/latest/en/installation/_index.md @@ -3,16 +3,15 @@ title: "Installation" weight: 20 --- -This section contains instructions for installing K3s in various environments. Please ensure you have met the [Node Requirements]({{< baseurl >}}/k3s/latest/en/installation/node-requirements/) before you begin installing K3s. +This section contains instructions for installing K3s in various environments. Please ensure you have met the [Installation Requirements]({{< baseurl >}}/k3s/latest/en/installation/installation-requirements/) before you begin installing K3s. -[Installation and Configuration Options]({{< baseurl >}}/k3s/latest/en/installation/install-options/) provides guidance on the options available to you when installing K3s. +[Installation and Configuration Options]({{}}/k3s/latest/en/installation/install-options/) provides guidance on the options available to you when installing K3s. +[High Availability with an External DB]({{}}/k3s/latest/en/installation/ha/) details how to set up an HA K3s cluster backed by an external datastore such as MySQL, PostgreSQL, or etcd. -[High Availability with an External DB]({{< baseurl >}}/k3s/latest/en/installation/ha/) details how to set up an HA K3s cluster backed by an external datastore such as MySQL, PostgreSQL, or etcd. +[High Availability with Embedded DB (Experimental)]({{}}/k3s/latest/en/installation/ha-embedded/) details how to set up an HA K3s cluster that leverages a built-in distributed database. -[High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded/) details how to set up an HA K3s cluster that leverages a built-in distributed database. - -[Air-Gap Installation]({{< baseurl >}}/k3s/latest/en/installation/airgap/) details how to set up K3s in environments that do not have direct access to the Internet. +[Air-Gap Installation]({{}}/k3s/latest/en/installation/airgap/) details how to set up K3s in environments that do not have direct access to the Internet. ### Uninstalling diff --git a/content/k3s/latest/en/installation/airgap/_index.md b/content/k3s/latest/en/installation/airgap/_index.md index 66564948a5d..7f9f4a8ec21 100644 --- a/content/k3s/latest/en/installation/airgap/_index.md +++ b/content/k3s/latest/en/installation/airgap/_index.md @@ -3,77 +3,115 @@ title: "Air-Gap Install" weight: 60 --- -In this guide, we are assuming you have created your nodes in your air-gap environment and have a secure Docker private registry on your bastion server. +You can install K3s in an air-gapped environment using two different methods. You can either deploy a private registry and mirror docker.io or you can manually deploy images such as for small clusters. -# Installation Outline +# Private Registry Method -1. [Prepare Images Directory](#prepare-images-directory) -2. 
[Create Registry YAML](#create-registry-YAML) -3. [Install K3s](#install-k3s) +This document assumes you have already created your nodes in your air-gap environment and have a secure Docker private registry on your bastion host. +If you have not yet set up a private Docker registry, refer to the official documentation [here](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry). -### Prepare Images Directory +### Create the Registry YAML + +Follow the [Private Registry Configuration]({{< baseurl >}}/k3s/latest/en/installation/private-registry) guide to create and configure the registry.yaml file. + +Once you have completed this, you may now go to the [Install K3s](#install-k3s) section below. + + +# Manually Deploy Images Method + +We are assuming you have created your nodes in your air-gap environment. +This method requires you to manually deploy the necessary images to each node and is appropriate for edge deployments where running a private registry is not practical. + +### Prepare the Images Directory and K3s Binary Obtain the images tar file for your architecture from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be running. -Place the tar file in the `images` directory before starting K3s on each node, for example: +Place the tar file in the `images` directory, for example: ```sh sudo mkdir -p /var/lib/rancher/k3s/agent/images/ sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/ ``` -### Create Registry YAML -Create the registries.yaml file at `/etc/rancher/k3s/registries.yaml`. This will tell K3s the necessary details to connect to your private registry. -The registries.yaml file should look like this before plugging in the necessary information: +Place the k3s binary at /usr/local/bin/k3s and ensure it is executable. -``` ---- -mirrors: - customreg: - endpoint: - - "https://ip-to-server:5000" -configs: - customreg: - auth: - username: xxxxxx # this is the registry username - password: xxxxxx # this is the registry password - tls: - cert_file: - key_file: - ca_file: -``` +Follow the steps in the next section to install K3s. -Note, at this time only secure registries are supported with K3s (SSL with custom CA) +# Install K3s -### Install K3s +Only after you have completed either the [Private Registry Method](#private-registry-method) or the [Manually Deploy Images Method](#manually-deploy-images-method) above should you install K3s. -Obtain the K3s binary from the [releases](https://github.com/rancher/k3s/releases) page, matching the same version used to get the airgap images tar. -Also obtain the K3s install script at https://get.k3s.io +Obtain the K3s binary from the [releases](https://github.com/rancher/k3s/releases) page, matching the same version used to get the airgap images. +Obtain the K3s install script at https://get.k3s.io -Place the binary in `/usr/local/bin` on each node. -Place the install script anywhere on each node, name it `install.sh`. +Place the binary in `/usr/local/bin` on each node and ensure it is executable. +Place the install script anywhere on each node, and name it `install.sh`. -Install K3s on each server: + +### Install Options +You can install K3s on one or more servers as described below. + +{{% tabs %}} +{{% tab "Single Server Configuration" %}} + +To install K3s on a single server simply do the following on the server node. 
``` INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh ``` -Install K3s on each agent: +Then, to optionally add additional agents do the following on each agent node. Take care to ensure you replace `myserver` with the IP or valid DNS of the server and replace `mynodetoken` with the node token from the server typically at `/var/lib/rancher/k3s/server/node-token` ``` INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken ./install.sh ``` -Note, take care to ensure you replace `myserver` with the IP or valid DNS of the server and replace `mynodetoken` with the node-token from the server. -The node-token is on the server at `/var/lib/rancher/k3s/server/node-token` +{{% /tab %}} +{{% tab "High Availability Configuration" %}} +Reference the [High Availability with an External DB]({{< baseurl >}}/k3s/latest/en/installation/ha) or [High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded) guides. You will be tweaking install commands so you specify `INSTALL_K3S_SKIP_DOWNLOAD=true` and run your install script locally instead of via curl. You will also utilize `INSTALL_K3S_EXEC='args'` to supply any arguments to k3s. + +For example, step two of the High Availability with an External DB guide mentions the following: + +``` +curl -sfL https://get.k3s.io | sh -s - server \ + --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name" +``` + +Instead, you would modify such examples like below: + +``` +INSTALL_K3S_SKIP_DOWNLOAD=true INSTALL_K3S_EXEC='server --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name"' ./install.sh +``` + +{{% /tab %}} +{{% /tabs %}} >**Note:** K3s additionally provides a `--resolv-conf` flag for kubelets, which may help with configuring DNS in air-gap networks. # Upgrading +### Install Script Method + Upgrading an air-gap environment can be accomplished in the following manner: -1. Download the new air-gap images (tar file) from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be upgrading to. Place the tar in the `/var/lib/rancher/k3s/agent/images/` directory on each node. Delete the old tar file. -2. Copy and replace the old K3s binary in `/usr/local/bin` on each node. Copy over the install script at https://get.k3s.io (as it is possible it has changed since the last release). Run the script again just as you had done in the past with the same environment variables. +1. Download the new air-gap images (tar file) from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be upgrading to. Place the tar in the `/var/lib/rancher/k3s/agent/images/` directory on each +node. Delete the old tar file. +2. Copy and replace the old K3s binary in `/usr/local/bin` on each node. Copy over the install script at https://get.k3s.io (as it is possible it has changed since the last release). Run the script again just as you had done in the past +with the same environment variables. 3. Restart the K3s service (if not restarted automatically by installer). + + +### Automated Upgrades Method + +As of v1.17.4+k3s1 K3s supports [automated upgrades]({{< baseurl >}}/k3s/latest/en/upgrades/automated/). To enable this in air-gapped environments, you must ensure the required images are available in your private registry. + +You will need the version of rancher/k3s-upgrade that corresponds to the version of K3s you intend to upgrade to. 
Note, the image tag replaces the `+` in the K3s release with a `-` because Docker images do not support `+`. + +You will also need the versions of system-upgrade-controller and kubectl that are specified in the system-upgrade-controller manifest YAML that you will deploy. Check for the latest release of the system-upgrade-controller [here](https://github.com/rancher/system-upgrade-controller/releases/latest) and download the system-upgrade-controller.yaml to determine the versions you need to push to your private registry. For example, in release v0.4.0 of the system-upgrade-controller, these images are specified in the manifest YAML: + +``` +rancher/system-upgrade-controller:v0.4.0 +rancher/kubectl:v0.17.0 +``` + +Once you have added the necessary rancher/k3s-upgrade, rancher/system-upgrade-controller, and rancher/kubectl images to your private registry, follow the [automated upgrades]({{< baseurl >}}/k3s/latest/en/upgrades/automated/) guide. diff --git a/content/k3s/latest/en/installation/datastore/_index.md b/content/k3s/latest/en/installation/datastore/_index.md index 63ef6baa32b..9d04be68d54 100644 --- a/content/k3s/latest/en/installation/datastore/_index.md +++ b/content/k3s/latest/en/installation/datastore/_index.md @@ -14,6 +14,7 @@ K3s supports the following datastore options: * Embedded [SQLite](https://www.sqlite.org/index.html) * [PostgreSQL](https://www.postgresql.org/) (certified against versions 10.7 and 11.5) * [MySQL](https://www.mysql.com/) (certified against version 5.7) +* [MariaDB](https://mariadb.org/) (certified against version 10.3.20) * [etcd](https://etcd.io/) (certified against version 3.3.15) * Embedded [DQLite](https://dqlite.io/) for High Availability (experimental) @@ -50,9 +51,9 @@ If you only supply `postgres://` as the endpoint, K3s will attempt to do the fo {{% /tab %}} -{{% tab "MySQL" %}} +{{% tab "MySQL / MariaDB" %}} -In its most common form, the `datastore-endpoint` parameter for MySQL has the following format: +In its most common form, the `datastore-endpoint` parameter for MySQL and MariaDB has the following format: `mysql://username:password@tcp(hostname:3306)/database-name` @@ -94,4 +95,4 @@ k3s server ``` ### Embedded DQLite for HA (Experimental) -K3s's use of DQLite is similar to its use of SQLite. It is simple to set up and manage. As such, there is no external configuration or additional steps to take in order to use this option. Please see [High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded/) for instructions on how to run with this option. +K3s's use of DQLite is similar to its use of SQLite. It is simple to set up and manage. As such, there is no external configuration or additional steps to take in order to use this option. Please see [High Availability with Embedded DB (Experimental)]({{}}/k3s/latest/en/installation/ha-embedded/) for instructions on how to run with this option. diff --git a/content/k3s/latest/en/installation/ha/_index.md b/content/k3s/latest/en/installation/ha/_index.md index adea8ad1938..7791527f709 100644 --- a/content/k3s/latest/en/installation/ha/_index.md +++ b/content/k3s/latest/en/installation/ha/_index.md @@ -3,7 +3,7 @@ title: High Availability with an External DB weight: 30 --- ->**Note:** Official support for installing Rancher on a Kubernetes cluster was introduced in our v1.0.0 release. +> **Note:** Official support for installing Rancher on a Kubernetes cluster was introduced in our v1.0.0 release. 
This section describes how to install a high-availability K3s cluster with an external database. @@ -28,10 +28,10 @@ Setting up an HA cluster requires the following steps: 4. [Join agent nodes](#4-optional-join-agent-nodes) ### 1. Create an External Datastore -You will first need to create an external datastore for the cluster. See the [Cluster Datastore Options]({{< baseurl >}}/k3s/latest/en/installation/datastore/) documentation for more details. +You will first need to create an external datastore for the cluster. See the [Cluster Datastore Options]({{}}/k3s/latest/en/installation/datastore/) documentation for more details. ### 2. Launch Server Nodes -K3s requires two or more server nodes for this HA configuration. See the [Node Requirements]({{< baseurl >}}/k3s/latest/en/installation/node-requirements/) guide for minimum machine requirements. +K3s requires two or more server nodes for this HA configuration. See the [Installation Requirements]({{}}/k3s/latest/en/installation/installation-requirements/) guide for minimum machine requirements. When running the `k3s server` command on these nodes, you must set the `datastore-endpoint` parameter so that K3s knows how to connect to the external datastore. @@ -50,22 +50,24 @@ To configure TLS certificates when launching server nodes, refer to the [datasto By default, server nodes will be schedulable and thus your workloads can get launched on them. If you wish to have a dedicated control plane where no user workloads will run, you can use taints. The `node-taint` parameter will allow you to configure nodes with taints, for example `--node-taint k3s-controlplane=true:NoExecute`. -Once you've launched the `k3s server` process on all server nodes, ensure that the cluster has come up properly with `k3s kubectl get nodes`. You should see your server nodes in the Ready state. +Once you've launched the `k3s server` process on all server nodes, ensure that the cluster has come up properly with `k3s kubectl get nodes`. You should see your server nodes in the Ready state. ### 3. Configure the Fixed Registration Address + Agent nodes need a URL to register against. This can be the IP or hostname of any of the server nodes, but in many cases those may change over time. For example, if you are running your cluster in a cloud that supports scaling groups, you may scale the server node group up and down over time, causing nodes to be created and destroyed and thus having different IPs from the initial set of server nodes. Therefore, you should have a stable endpoint in front of the server nodes that will not change over time. This endpoint can be set up using any number approaches, such as: * A layer-4 (TCP) load balancer * Round-robin DNS * Virtual or elastic IP addresses -This endpoint can also be used for accessing the Kubernetes API. So you can, for example, modify your [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file to point to it instead of a specific node. +This endpoint can also be used for accessing the Kubernetes API. So you can, for example, modify your [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file to point to it instead of a specific node. To avoid certificate errors in such a configuration, you should install the server with the `--tls-san YOUR_IP_OR_HOSTNAME_HERE` option. 
This option adds an additional hostname or IP as a Subject Alternative Name in the TLS cert, and it can be specified multiple times if you would like to access via both the IP and the hostname. ### 4. Optional: Join Agent Nodes Because K3s server nodes are schedulable by default, the minimum number of nodes for an HA K3s server cluster is two server nodes and zero agent nodes. To add nodes designated to run your apps and services, join agent nodes to your cluster. Joining agent nodes in an HA cluster is the same as joining agent nodes in a single server cluster. You just need to specify the URL the agent should register to and the token it should use. + ``` K3S_TOKEN=SECRET k3s agent --server https://fixed-registration-address:6443 ``` diff --git a/content/k3s/latest/en/installation/install-options/_index.md b/content/k3s/latest/en/installation/install-options/_index.md index 424cbd9ae73..df7ae330dca 100644 --- a/content/k3s/latest/en/installation/install-options/_index.md +++ b/content/k3s/latest/en/installation/install-options/_index.md @@ -5,16 +5,18 @@ weight: 20 This page focuses on the options that can be used when you set up K3s for the first time: -- [Installation script options](#installation-script-options) -- [Installing K3s from the binary](#installing-k3s-from-the-binary) +- [Options for installation with script](#options-for-installation-with-script) +- [Options for installation from binary](#options-for-installation-from-binary) - [Registration options for the K3s server](#registration-options-for-the-k3s-server) - [Registration options for the K3s agent](#registration-options-for-the-k3s-agent) For more advanced options, refer to [this page.]({{}}/k3s/latest/en/advanced) -# Installation Script Options +> Throughout the K3s documentation, you will see some options that can be passed in as both command flags and environment variables. For help with passing in options, refer to [How to Use Flags and Environment Variables.]({{}}/k3s/latest/en/installation/install-options/how-to-flags) -As mentioned in the [Quick-Start Guide]({{< baseurl >}}/k3s/latest/en/quick-start/), you can use the installation script available at https://get.k3s.io to install K3s as a service on systemd and openrc based systems. +### Options for Installation with Script + +As mentioned in the [Quick-Start Guide]({{}}/k3s/latest/en/quick-start/), you can use the installation script available at https://get.k3s.io to install K3s as a service on systemd and openrc based systems. The simplest form of this command is as follows: ```sh @@ -23,58 +25,25 @@ curl -sfL https://get.k3s.io | sh - When using this method to install K3s, the following environment variables can be used to configure the installation: -- `INSTALL_K3S_SKIP_DOWNLOAD` - - If set to true will not download K3s hash or binary. - -- `INSTALL_K3S_SYMLINK` - - If set to 'skip' will not create symlinks, 'force' will overwrite, default will symlink if command does not exist in path. - -- `INSTALL_K3S_SKIP_START` - - If set to true will not start K3s service. - -- `INSTALL_K3S_VERSION` - - Version of K3s to download from github. Will attempt to download the latest version if not specified. - -- `INSTALL_K3S_BIN_DIR` - - Directory to install K3s binary, links, and uninstall script to, or use `/usr/local/bin` as the default. - -- `INSTALL_K3S_BIN_DIR_READ_ONLY` - - If set to true will not write files to `INSTALL_K3S_BIN_DIR`, forces setting `INSTALL_K3S_SKIP_DOWNLOAD=true`. 
- -- `INSTALL_K3S_SYSTEMD_DIR` - - Directory to install systemd service and environment files to, or use `/etc/systemd/system` as the default. - -- `INSTALL_K3S_EXEC` - - Command with flags to use for launching K3s in the service. If the command is not specified, it will default to "agent" if `K3S_URL` is set or "server" if it is not set. - - The final systemd command resolves to a combination of this environment variable and script args. To illustrate this, the following commands result in the same behavior of registering a server without flannel: - ```sh - curl ... | INSTALL_K3S_EXEC="--no-flannel" sh -s - - curl ... | INSTALL_K3S_EXEC="server --no-flannel" sh -s - - curl ... | INSTALL_K3S_EXEC="server" sh -s - --no-flannel - curl ... | sh -s - server --no-flannel - curl ... | sh -s - --no-flannel - ``` - - - `INSTALL_K3S_NAME` - - Name of systemd service to create, will default from the K3s exec command if not specified. If specified the name will be prefixed with 'k3s-'. - - - `INSTALL_K3S_TYPE` - - Type of systemd service to create, will default from the K3s exec command if not specified. +| Environment Variable | Description | +|-----------------------------|---------------------------------------------| +| `INSTALL_K3S_SKIP_DOWNLOAD` | If set to true will not download K3s hash or binary. | +| `INSTALL_K3S_SYMLINK` | By default will create symlinks for the kubectl, crictl, and ctr binaries if the commands do not already exist in path. If set to 'skip' will not create symlinks and 'force' will overwrite. | +| `INSTALL_K3S_SKIP_START` | If set to true will not start K3s service. | +| `INSTALL_K3S_VERSION` | Version of K3s to download from Github. Will attempt to download the latest version if not specified. | +| `INSTALL_K3S_BIN_DIR` | Directory to install K3s binary, links, and uninstall script to, or use `/usr/local/bin` as the default. | +| `INSTALL_K3S_BIN_DIR_READ_ONLY` | If set to true will not write files to `INSTALL_K3S_BIN_DIR`, forces setting `INSTALL_K3S_SKIP_DOWNLOAD=true`. | +| `INSTALL_K3S_SYSTEMD_DIR` | Directory to install systemd service and environment files to, or use `/etc/systemd/system` as the default. | +| `INSTALL_K3S_EXEC` | Command with flags to use for launching K3s in the service. If the command is not specified, and the `K3S_URL` is set, it will default to "agent." If `K3S_URL` not set, it will default to "server." For help, refer to [this example.]({{}}/k3s/latest/en/installation/install-options/how-to-flags/#example-b-install-k3s-exec) | +| `INSTALL_K3S_NAME` | Name of systemd service to create, will default to 'k3s' if running k3s as a server and 'k3s-agent' if running k3s as an agent. If specified the name will be prefixed with 'k3s-'. | +| `INSTALL_K3S_TYPE` | Type of systemd service to create, will default from the K3s exec command if not specified. -Environment variables which begin with `K3S_` will be preserved for the systemd and openrc services to use. Setting `K3S_URL` without explicitly setting an exec command will default the command to "agent". When running the agent `K3S_TOKEN` must also be set. +Environment variables which begin with `K3S_` will be preserved for the systemd and openrc services to use. +Setting `K3S_URL` without explicitly setting an exec command will default the command to "agent". + +When running the agent `K3S_TOKEN` must also be set. # Installing K3s from the Binary @@ -89,120 +58,13 @@ Command | Description `k3s ctr` | Run an embedded [ctr](https://github.com/projectatomic/containerd/blob/master/docs/cli.md). 
This is a CLI for containerd, the container daemon used by K3s. Useful for debugging. `k3s help` | Shows a list of commands or help for one command -The `k3s server` and `k3s agent` commands have additional configuration options that can be viewed with `k3s server --help` or `k3s agent --help`. For convenience, that help text is presented here: +The `k3s server` and `k3s agent` commands have additional configuration options that can be viewed with `k3s server --help` or `k3s agent --help`. -# Registration Options for the K3s Server -``` -NAME: - k3s server - Run management server +### Registration Options for the K3s Server -USAGE: - k3s server [OPTIONS] +For details on configuring the K3s server, refer to the [server configuration reference.]({{}}/k3s/latest/en/installation/install-options/server-config) -OPTIONS: - -v value (logging) Number for the log level verbosity (default: 0) - --vmodule value (logging) Comma-separated list of pattern=N settings for file-filtered logging - --log value, -l value (logging) Log to file - --alsologtostderr (logging) Log to standard error as well as file (if set) - --bind-address value (listener) k3s bind address (default: 0.0.0.0) - --https-listen-port value (listener) HTTPS listen port (default: 6443) - --advertise-address value (listener) IP address that apiserver uses to advertise to members of the cluster (default: node-external-ip/node-ip) - --advertise-port value (listener) Port that apiserver uses to advertise to members of the cluster (default: listen-port) (default: 0) - --tls-san value (listener) Add additional hostname or IP as a Subject Alternative Name in the TLS cert - --data-dir value, -d value (data) Folder to hold state default /var/lib/rancher/k3s or ${HOME}/.rancher/k3s if not root - --cluster-cidr value (networking) Network CIDR to use for pod IPs (default: "10.42.0.0/16") - --service-cidr value (networking) Network CIDR to use for services IPs (default: "10.43.0.0/16") - --cluster-dns value (networking) Cluster IP for coredns service. 
Should be in your service-cidr range (default: 10.43.0.10) - --cluster-domain value (networking) Cluster Domain (default: "cluster.local") - --flannel-backend value (networking) One of 'none', 'vxlan', 'ipsec', or 'flannel' (default: "vxlan") - --token value, -t value (cluster) Shared secret used to join a server or agent to a cluster [$K3S_TOKEN] - --token-file value (cluster) File containing the cluster-secret/token [$K3S_TOKEN_FILE] - --write-kubeconfig value, -o value (client) Write kubeconfig for admin client to this file [$K3S_KUBECONFIG_OUTPUT] - --write-kubeconfig-mode value (client) Write kubeconfig with this mode [$K3S_KUBECONFIG_MODE] - --kube-apiserver-arg value (flags) Customized flag for kube-apiserver process - --kube-scheduler-arg value (flags) Customized flag for kube-scheduler process - --kube-controller-manager-arg value (flags) Customized flag for kube-controller-manager process - --kube-cloud-controller-manager-arg value (flags) Customized flag for kube-cloud-controller-manager process - --datastore-endpoint value (db) Specify etcd, Mysql, Postgres, or Sqlite (default) data source name [$K3S_DATASTORE_ENDPOINT] - --datastore-cafile value (db) TLS Certificate Authority file used to secure datastore backend communication [$K3S_DATASTORE_CAFILE] - --datastore-certfile value (db) TLS certification file used to secure datastore backend communication [$K3S_DATASTORE_CERTFILE] - --datastore-keyfile value (db) TLS key file used to secure datastore backend communication [$K3S_DATASTORE_KEYFILE] - --default-local-storage-path value (storage) Default local storage path for local provisioner storage class - --no-deploy value (components) Do not deploy packaged components (valid items: coredns, servicelb, traefik, local-storage, metrics-server) - --disable-scheduler (components) Disable Kubernetes default scheduler - --disable-cloud-controller (components) Disable k3s default cloud controller manager - --disable-network-policy (components) Disable k3s default network policy controller - --node-name value (agent/node) Node name [$K3S_NODE_NAME] - --with-node-id (agent/node) Append id to node name - --node-label value (agent/node) Registering kubelet with set of labels - --node-taint value (agent/node) Registering kubelet with set of taints - --docker (agent/runtime) Use docker instead of containerd - --container-runtime-endpoint value (agent/runtime) Disable embedded containerd and use alternative CRI implementation - --pause-image value (agent/runtime) Customized pause image for containerd sandbox - --private-registry value (agent/runtime) Private registry configuration file (default: "/etc/rancher/k3s/registries.yaml") - --node-ip value, -i value (agent/networking) IP address to advertise for node - --node-external-ip value (agent/networking) External IP address to advertise for node - --resolv-conf value (agent/networking) Kubelet resolv.conf file [$K3S_RESOLV_CONF] - --flannel-iface value (agent/networking) Override default flannel interface - --flannel-conf value (agent/networking) Override default flannel config file - --kubelet-arg value (agent/flags) Customized flag for kubelet process - --kube-proxy-arg value (agent/flags) Customized flag for kube-proxy process - --rootless (experimental) Run rootless - --agent-token value (experimental/cluster) Shared secret used to join agents to the cluster, but not servers [$K3S_AGENT_TOKEN] - --agent-token-file value (experimental/cluster) File containing the agent secret [$K3S_AGENT_TOKEN_FILE] - --server value, -s value 
(experimental/cluster) Server to connect to, used to join a cluster [$K3S_URL] - --cluster-init (experimental/cluster) Initialize new cluster master [$K3S_CLUSTER_INIT] - --cluster-reset (experimental/cluster) Forget all peers and become a single cluster new cluster master [$K3S_CLUSTER_RESET] - --no-flannel (deprecated) use --flannel-backend=none - --cluster-secret value (deprecated) use --token [$K3S_CLUSTER_SECRET] -``` -# Registration Options for the K3s Agent -``` -NAME: - k3s agent - Run node agent +### Registration Options for the K3s Agent -USAGE: - k3s agent [OPTIONS] - -OPTIONS: - -v value (logging) Number for the log level verbosity (default: 0) - --vmodule value (logging) Comma-separated list of pattern=N settings for file-filtered logging - --log value, -l value (logging) Log to file - --alsologtostderr (logging) Log to standard error as well as file (if set) - --token value, -t value (cluster) Token to use for authentication [$K3S_TOKEN] - --token-file value (cluster) Token file to use for authentication [$K3S_TOKEN_FILE] - --server value, -s value (cluster) Server to connect to [$K3S_URL] - --data-dir value, -d value (agent/data) Folder to hold state (default: "/var/lib/rancher/k3s") - --node-name value (agent/node) Node name [$K3S_NODE_NAME] - --with-node-id (agent/node) Append id to node name - --node-label value (agent/node) Registering kubelet with set of labels - --node-taint value (agent/node) Registering kubelet with set of taints - --docker (agent/runtime) Use docker instead of containerd - --container-runtime-endpoint value (agent/runtime) Disable embedded containerd and use alternative CRI implementation - --pause-image value (agent/runtime) Customized pause image for containerd sandbox - --private-registry value (agent/runtime) Private registry configuration file (default: "/etc/rancher/k3s/registries.yaml") - --node-ip value, -i value (agent/networking) IP address to advertise for node - --node-external-ip value (agent/networking) External IP address to advertise for node - --resolv-conf value (agent/networking) Kubelet resolv.conf file [$K3S_RESOLV_CONF] - --flannel-iface value (agent/networking) Override default flannel interface - --flannel-conf value (agent/networking) Override default flannel config file - --kubelet-arg value (agent/flags) Customized flag for kubelet process - --kube-proxy-arg value (agent/flags) Customized flag for kube-proxy process - --rootless (experimental) Run rootless - --no-flannel (deprecated) use --flannel-backend=none - --cluster-secret value (deprecated) use --token [$K3S_CLUSTER_SECRET] -``` - -### Node Labels and Taints for Agents - -K3s agents can be configured with the options `--node-label` and `--node-taint` which adds a label and taint to the kubelet. The two options only add labels and/or taints at registration time, so they can only be added once and not changed after that again by running K3s commands. - -Below is an example showing how to add labels and a taint: -``` - --node-label foo=bar \ - --node-label hello=world \ - --node-taint key1=value1:NoExecute -``` - -If you want to change node labels and taints after node registration you should use `kubectl`. 
Refer to the official Kubernetes documentation for details on how to add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) and [node labels.](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node) \ No newline at end of file +For details on configuring the K3s agent, refer to the [agent configuration reference.]({{}}/k3s/latest/en/installation/install-options/agent-config) \ No newline at end of file diff --git a/content/k3s/latest/en/installation/install-options/agent-config/_index.md b/content/k3s/latest/en/installation/install-options/agent-config/_index.md new file mode 100644 index 00000000000..216dd5112e0 --- /dev/null +++ b/content/k3s/latest/en/installation/install-options/agent-config/_index.md @@ -0,0 +1,136 @@ +--- +title: K3s Agent Configuration Reference +weight: 2 +--- +In this section, you'll learn how to configure the K3s agent. + +> Throughout the K3s documentation, you will see some options that can be passed in as both command flags and environment variables. For help with passing in options, refer to [How to Use Flags and Environment Variables.]({{}}/k3s/latest/en/installation/install-options/how-to-flags) + +- [Logging](#logging) +- [Cluster Options](#cluster-options) +- [Data](#data) +- [Node](#node) +- [Runtime](#runtime) +- [Networking](#networking) +- [Customized Flags](#customized-flags) +- [Experimental](#experimental) +- [Deprecated](#deprecated) +- [Node Labels and Taints for Agents](#node-labels-and-taints-for-agents) +- [K3s Agent CLI Help](#k3s-agent-cli-help) + +### Logging + +| Flag | Default | Description | +|------|---------|-------------| +| `-v` value | 0 | Number for the log level verbosity | +| `--vmodule` value | N/A | Comma-separated list of pattern=N settings for file-filtered logging | +| `--log value, -l` value | N/A | Log to file | +| `--alsologtostderr` | N/A | Log to standard error as well as file (if set) | + +### Cluster Options +| Flag | Environment Variable | Description | +|------|----------------------|-------------| +| `--token value, -t` value | `K3S_TOKEN` | Token to use for authentication | +| `--token-file` value | `K3S_TOKEN_FILE` | Token file to use for authentication | +| `--server value, -s` value | `K3S_URL` | Server to connect to | + + +### Data +| Flag | Default | Description | +|------|---------|-------------| +| `--data-dir value, -d` value | "/var/lib/rancher/k3s" | Folder to hold state | + +### Node +| Flag | Environment Variable | Description | +|------|----------------------|-------------| +| `--node-name` value | `K3S_NODE_NAME` | Node name | +| `--with-node-id` | N/A | Append id to node name | +| `--node-label` value | N/A | Registering and starting kubelet with set of labels | +| `--node-taint` value | N/A | Registering kubelet with set of taints | + +### Runtime +| Flag | Default | Description | +|------|---------|-------------| +| `--docker` | N/A | Use docker instead of containerd | +| `--container-runtime-endpoint` value | N/A | Disable embedded containerd and use alternative CRI implementation | +| `--pause-image` value | "docker.io/rancher/pause:3.1" | Customized pause image for containerd or docker sandbox | (agent/runtime) (default: ) +| `--private-registry` value | "/etc/rancher/k3s/registries.yaml" | Private registry configuration file | + +### Networking +| Flag | Environment Variable | Description | +|------|----------------------|-------------| +| `--node-ip value, -i` value | N/A | IP address to advertise for node | +| 
`--node-external-ip` value | N/A | External IP address to advertise for node | +| `--resolv-conf` value | `K3S_RESOLV_CONF` | Kubelet resolv.conf file | +| `--flannel-iface` value | N/A | Override default flannel interface | +| `--flannel-conf` value | N/A | Override default flannel config file | + +### Customized Flags +| Flag | Description | +|------|--------------| +| `--kubelet-arg` value | Customized flag for kubelet process | +| `--kube-proxy-arg` value | Customized flag for kube-proxy process | + +### Experimental +| Flag | Description | +|------|--------------| +| `--rootless` | Run rootless | + +### Deprecated +| Flag | Environment Variable | Description | +|------|----------------------|-------------| +| `--no-flannel` | N/A | Use `--flannel-backend=none` | +| `--cluster-secret` value | `K3S_CLUSTER_SECRET` | Use `--token` | + +### Node Labels and Taints for Agents + +K3s agents can be configured with the options `--node-label` and `--node-taint` which adds a label and taint to the kubelet. The two options only add labels and/or taints at registration time, so they can only be added once and not changed after that again by running K3s commands. + +Below is an example showing how to add labels and a taint: +```bash + --node-label foo=bar \ + --node-label hello=world \ + --node-taint key1=value1:NoExecute +``` + +If you want to change node labels and taints after node registration you should use `kubectl`. Refer to the official Kubernetes documentation for details on how to add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) and [node labels.](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node) + +### K3s Agent CLI Help + +> If an option appears in brackets below, for example `[$K3S_URL]`, it means that the option can be passed in as an environment variable of that name. 
+ +```bash +NAME: + k3s agent - Run node agent + +USAGE: + k3s agent [OPTIONS] + +OPTIONS: + -v value (logging) Number for the log level verbosity (default: 0) + --vmodule value (logging) Comma-separated list of pattern=N settings for file-filtered logging + --log value, -l value (logging) Log to file + --alsologtostderr (logging) Log to standard error as well as file (if set) + --token value, -t value (cluster) Token to use for authentication [$K3S_TOKEN] + --token-file value (cluster) Token file to use for authentication [$K3S_TOKEN_FILE] + --server value, -s value (cluster) Server to connect to [$K3S_URL] + --data-dir value, -d value (agent/data) Folder to hold state (default: "/var/lib/rancher/k3s") + --node-name value (agent/node) Node name [$K3S_NODE_NAME] + --with-node-id (agent/node) Append id to node name + --node-label value (agent/node) Registering and starting kubelet with set of labels + --node-taint value (agent/node) Registering kubelet with set of taints + --docker (agent/runtime) Use docker instead of containerd + --container-runtime-endpoint value (agent/runtime) Disable embedded containerd and use alternative CRI implementation + --pause-image value (agent/runtime) Customized pause image for containerd or docker sandbox (default: "docker.io/rancher/pause:3.1") + --private-registry value (agent/runtime) Private registry configuration file (default: "/etc/rancher/k3s/registries.yaml") + --node-ip value, -i value (agent/networking) IP address to advertise for node + --node-external-ip value (agent/networking) External IP address to advertise for node + --resolv-conf value (agent/networking) Kubelet resolv.conf file [$K3S_RESOLV_CONF] + --flannel-iface value (agent/networking) Override default flannel interface + --flannel-conf value (agent/networking) Override default flannel config file + --kubelet-arg value (agent/flags) Customized flag for kubelet process + --kube-proxy-arg value (agent/flags) Customized flag for kube-proxy process + --rootless (experimental) Run rootless + --no-flannel (deprecated) use --flannel-backend=none + --cluster-secret value (deprecated) use --token [$K3S_CLUSTER_SECRET] +``` diff --git a/content/k3s/latest/en/installation/install-options/how-to-flags/_index.md b/content/k3s/latest/en/installation/install-options/how-to-flags/_index.md new file mode 100644 index 00000000000..25aa9b43567 --- /dev/null +++ b/content/k3s/latest/en/installation/install-options/how-to-flags/_index.md @@ -0,0 +1,35 @@ +--- +title: How to Use Flags and Environment Variables +weight: 3 +--- + +Throughout the K3s documentation, you will see some options that can be passed in as both command flags and environment variables. The below examples show how these options can be passed in both ways. + +### Example A: K3S_KUBECONFIG_MODE + +The option to allow writing to the kubeconfig file is useful for allowing a K3s cluster to be imported into Rancher. Below are two ways to pass in the option. + +Using the flag `--write-kubeconfig-mode 644`: + +```bash +$ curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644 +``` +Using the environment variable `K3S_KUBECONFIG_MODE`: + +```bash +$ curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - +``` + +### Example B: INSTALL_K3S_EXEC + +If this command is not specified as a server or agent command, it will default to "agent" if `K3S_URL` is set, or "server" if it is not set. + +The final systemd command resolves to a combination of this environment variable and script args. 
To illustrate this, the following commands result in the same behavior of registering a server without flannel: + +```bash +curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--no-flannel" sh -s - +curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --no-flannel" sh -s - +curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - --no-flannel +curl -sfL https://get.k3s.io | sh -s - server --no-flannel +curl -sfL https://get.k3s.io | sh -s - --no-flannel +``` \ No newline at end of file diff --git a/content/k3s/latest/en/installation/install-options/server-config/_index.md b/content/k3s/latest/en/installation/install-options/server-config/_index.md new file mode 100644 index 00000000000..a60c075bbd5 --- /dev/null +++ b/content/k3s/latest/en/installation/install-options/server-config/_index.md @@ -0,0 +1,250 @@ +--- +title: K3s Server Configuration Reference +weight: 1 +--- + +In this section, you'll learn how to configure the K3s server. + +> Throughout the K3s documentation, you will see some options that can be passed in as both command flags and environment variables. For help with passing in options, refer to [How to Use Flags and Environment Variables.]({{}}/k3s/latest/en/installation/install-options/how-to-flags) + +- [Commonly Used Options](#commonly-used-options) + - [Database](#database) + - [Cluster Options](#cluster-options) + - [Client Options](#client-options) +- [Agent Options](#agent-options) + - [Agent Nodes](#agent-nodes) + - [Agent Runtime](#agent-runtime) + - [Agent Networking](#agent-networking) +- [Advanced Options](#advanced-options) + - [Logging](#logging) + - [Listeners](#listeners) + - [Data](#data) + - [Networking](#networking) + - [Customized Options](#customized-options) + - [Storage Class](#storage-class) + - [Kubernetes Components](#kubernetes-components) + - [Customized Flags for Kubernetes Processes](#customized-flags-for-kubernetes-processes) + - [Experimental Options](#experimental-options) + - [Deprecated Options](#deprecated-options) +- [K3s Server Cli Help](#k3s-server-cli-help) + + +# Commonly Used Options + +### Database + +| Flag | Environment Variable | Description | +|------|----------------------|-------------| +| `--datastore-endpoint` value | `K3S_DATASTORE_ENDPOINT` | Specify etcd, Mysql, Postgres, or Sqlite (default) data source name | +| `--datastore-cafile` value | `K3S_DATASTORE_CAFILE` | TLS Certificate Authority file used to secure datastore backend communication | +| `--datastore-certfile` value | `K3S_DATASTORE_CERTFILE` | TLS certification file used to secure datastore backend communication | +| `--datastore-keyfile` value | `K3S_DATASTORE_KEYFILE` | TLS key file used to secure datastore backend communication | + +### Cluster Options + +| Flag | Environment Variable | Description | +|------|----------------------|-------------| +| `--token value, -t` value | `K3S_TOKEN` | Shared secret used to join a server or agent to a cluster | +| `--token-file` value | `K3S_TOKEN_FILE` | File containing the cluster-secret/token | + +### Client Options + +| Flag | Environment Variable | Description | +|------|----------------------|-------------| +| `--write-kubeconfig value, -o` value | `K3S_KUBECONFIG_OUTPUT` | Write kubeconfig for admin client to this file | +| `--write-kubeconfig-mode` value | `K3S_KUBECONFIG_MODE` | Write kubeconfig with this [mode.](https://en.wikipedia.org/wiki/Chmod) The option to allow writing to the kubeconfig file is useful for allowing a K3s cluster to be imported into Rancher. An example value is 644. 
| + +# Agent Options + +K3s agent options are available as server options because the server has the agent process embedded within. + +### Agent Nodes + +| Flag | Environment Variable | Description | +|------|----------------------|-------------| +| `--node-name` value | `K3S_NODE_NAME` | Node name | +| `--with-node-id` | N/A | Append id to node name | (agent/node) +| `--node-label` value | N/A | Registering and starting kubelet with set of labels | +| `--node-taint` value | N/A | Registering kubelet with set of taints | + +### Agent Runtime + +| Flag | Default | Description | +|------|---------|-------------| +| `--docker` | N/A | Use docker instead of containerd | (agent/runtime) +| `--container-runtime-endpoint` value | N/A | Disable embedded containerd and use alternative CRI implementation | +| `--pause-image` value | "docker.io/rancher/pause:3.1" | Customized pause image for containerd or Docker sandbox | +| `--private-registry` value | "/etc/rancher/k3s/registries.yaml" | Private registry configuration file | + +### Agent Networking + +the agent options are there because the server has the agent process embedded within + +| Flag | Environment Variable | Description | +|------|----------------------|-------------| +| `--node-ip value, -i` value | N/A | IP address to advertise for node | +| `--node-external-ip` value | N/A | External IP address to advertise for node | +| `--resolv-conf` value | `K3S_RESOLV_CONF` | Kubelet resolv.conf file | +| `--flannel-iface` value | N/A | Override default flannel interface | +| `--flannel-conf` value | N/A | Override default flannel config file | + +# Advanced Options + +### Logging + +| Flag | Default | Description | +|------|---------|-------------| +| `-v` value | 0 | Number for the log level verbosity | +| `--vmodule` value | N/A | Comma-separated list of pattern=N settings for file-filtered logging | +| `--log value, -l` value | N/A | Log to file | +| `--alsologtostderr` | N/A | Log to standard error as well as file (if set) | + + +### Listeners + +| Flag | Default | Description | +|------|---------|-------------| +| `--bind-address` value | 0.0.0.0 | k3s bind address | +| `--https-listen-port` value | 6443 | HTTPS listen port | +| `--advertise-address` value | node-external-ip/node-ip | IP address that apiserver uses to advertise to members of the cluster | +| `--advertise-port` value | 0 | Port that apiserver uses to advertise to members of the cluster (default: listen-port) | +| `--tls-san` value | N/A | Add additional hostname or IP as a Subject Alternative Name in the TLS cert + +### Data + +| Flag | Default | Description | +|------|---------|-------------| +| `--data-dir value, -d` value | `/var/lib/rancher/k3s` or `${HOME}/.rancher/k3s` if not root | Folder to hold state | + +### Networking + +| Flag | Default | Description | +|------|---------|-------------| +| `--cluster-cidr` value | "10.42.0.0/16" | Network CIDR to use for pod IPs | +| `--service-cidr` value | "10.43.0.0/16" | Network CIDR to use for services IPs | +| `--cluster-dns` value | "10.43.0.10" | Cluster IP for coredns service. 
Should be in your service-cidr range | +| `--cluster-domain` value | "cluster.local" | Cluster Domain | +| `--flannel-backend` value | "vxlan" | One of 'none', 'vxlan', 'ipsec', 'host-gw', or 'wireguard' | + +### Customized Flags + +| Flag | Description | +|------|--------------| +| `--kube-apiserver-arg` value | Customized flag for kube-apiserver process | +| `--kube-scheduler-arg` value | Customized flag for kube-scheduler process | +| `--kube-controller-manager-arg` value | Customized flag for kube-controller-manager process | +| `--kube-cloud-controller-manager-arg` value | Customized flag for kube-cloud-controller-manager process | + +### Storage Class + +| Flag | Description | +|------|--------------| +| `--default-local-storage-path` value | Default local storage path for local provisioner storage class | + +### Kubernetes Components + +| Flag | Description | +|------|--------------| +| `--disable` value | Do not deploy packaged components and delete any deployed components (valid items: coredns, servicelb, traefik,local-storage, metrics-server) | +| `--disable-scheduler` | Disable Kubernetes default scheduler | +| `--disable-cloud-controller` | Disable k3s default cloud controller manager | +| `--disable-network-policy` | Disable k3s default network policy controller | + +### Customized Flags for Kubernetes Processes + +| Flag | Description | +|------|--------------| +| `--kubelet-arg` value | Customized flag for kubelet process | +| `--kube-proxy-arg` value | Customized flag for kube-proxy process | + +### Experimental Options + +| Flag | Environment Variable | Description | +|------|----------------------|-------------| +| `--rootless` | N/A | Run rootless | (experimental) +| `--agent-token` value | `K3S_AGENT_TOKEN` | Shared secret used to join agents to the cluster, but not servers | +| `--agent-token-file` value | `K3S_AGENT_TOKEN_FILE` | File containing the agent secret | +| `--server value, -s` value | `K3S_URL` | Server to connect to, used to join a cluster | +| `--cluster-init` | `K3S_CLUSTER_INIT` | Initialize new cluster master | +| `--cluster-reset` | `K3S_CLUSTER_RESET` | Forget all peers and become a single cluster new cluster master | +| `--secrets-encryption` | N/A | Enable Secret encryption at rest | + +### Deprecated Options + +| Flag | Environment Variable | Description | +|------|----------------------|-------------| +| `--no-flannel` | N/A | Use --flannel-backend=none | +| `--no-deploy` value | N/A | Do not deploy packaged components (valid items: coredns, servicelb, traefik, local-storage, metrics-server) | +| `--cluster-secret` value | `K3S_CLUSTER_SECRET` | Use --token | + + +# K3s Server CLI Help + +> If an option appears in brackets below, for example `[$K3S_TOKEN]`, it means that the option can be passed in as an environment variable of that name. 
+ +```bash +NAME: + k3s server - Run management server + +USAGE: + k3s server [OPTIONS] + +OPTIONS: + -v value (logging) Number for the log level verbosity (default: 0) + --vmodule value (logging) Comma-separated list of pattern=N settings for file-filtered logging + --log value, -l value (logging) Log to file + --alsologtostderr (logging) Log to standard error as well as file (if set) + --bind-address value (listener) k3s bind address (default: 0.0.0.0) + --https-listen-port value (listener) HTTPS listen port (default: 6443) + --advertise-address value (listener) IP address that apiserver uses to advertise to members of the cluster (default: node-external-ip/node-ip) + --advertise-port value (listener) Port that apiserver uses to advertise to members of the cluster (default: listen-port) (default: 0) + --tls-san value (listener) Add additional hostname or IP as a Subject Alternative Name in the TLS cert + --data-dir value, -d value (data) Folder to hold state default /var/lib/rancher/k3s or ${HOME}/.rancher/k3s if not root + --cluster-cidr value (networking) Network CIDR to use for pod IPs (default: "10.42.0.0/16") + --service-cidr value (networking) Network CIDR to use for services IPs (default: "10.43.0.0/16") + --cluster-dns value (networking) Cluster IP for coredns service. Should be in your service-cidr range (default: 10.43.0.10) + --cluster-domain value (networking) Cluster Domain (default: "cluster.local") + --flannel-backend value (networking) One of 'none', 'vxlan', 'ipsec', 'host-gw', or 'wireguard' (default: "vxlan") + --token value, -t value (cluster) Shared secret used to join a server or agent to a cluster [$K3S_TOKEN] + --token-file value (cluster) File containing the cluster-secret/token [$K3S_TOKEN_FILE] + --write-kubeconfig value, -o value (client) Write kubeconfig for admin client to this file [$K3S_KUBECONFIG_OUTPUT] + --write-kubeconfig-mode value (client) Write kubeconfig with this mode [$K3S_KUBECONFIG_MODE] + --kube-apiserver-arg value (flags) Customized flag for kube-apiserver process + --kube-scheduler-arg value (flags) Customized flag for kube-scheduler process + --kube-controller-manager-arg value (flags) Customized flag for kube-controller-manager process + --kube-cloud-controller-manager-arg value (flags) Customized flag for kube-cloud-controller-manager process + --datastore-endpoint value (db) Specify etcd, Mysql, Postgres, or Sqlite (default) data source name [$K3S_DATASTORE_ENDPOINT] + --datastore-cafile value (db) TLS Certificate Authority file used to secure datastore backend communication [$K3S_DATASTORE_CAFILE] + --datastore-certfile value (db) TLS certification file used to secure datastore backend communication [$K3S_DATASTORE_CERTFILE] + --datastore-keyfile value (db) TLS key file used to secure datastore backend communication [$K3S_DATASTORE_KEYFILE] + --default-local-storage-path value (storage) Default local storage path for local provisioner storage class + --disable value (components) Do not deploy packaged components and delete any deployed components (valid items: coredns, servicelb, traefik, local-storage, metrics-server) + --disable-scheduler (components) Disable Kubernetes default scheduler + --disable-cloud-controller (components) Disable k3s default cloud controller manager + --disable-network-policy (components) Disable k3s default network policy controller + --node-name value (agent/node) Node name [$K3S_NODE_NAME] + --with-node-id (agent/node) Append id to node name + --node-label value (agent/node) Registering and starting kubelet 
with set of labels + --node-taint value (agent/node) Registering kubelet with set of taints + --docker (agent/runtime) Use docker instead of containerd + --container-runtime-endpoint value (agent/runtime) Disable embedded containerd and use alternative CRI implementation + --pause-image value (agent/runtime) Customized pause image for containerd or docker sandbox (default: "docker.io/rancher/pause:3.1") + --private-registry value (agent/runtime) Private registry configuration file (default: "/etc/rancher/k3s/registries.yaml") + --node-ip value, -i value (agent/networking) IP address to advertise for node + --node-external-ip value (agent/networking) External IP address to advertise for node + --resolv-conf value (agent/networking) Kubelet resolv.conf file [$K3S_RESOLV_CONF] + --flannel-iface value (agent/networking) Override default flannel interface + --flannel-conf value (agent/networking) Override default flannel config file + --kubelet-arg value (agent/flags) Customized flag for kubelet process + --kube-proxy-arg value (agent/flags) Customized flag for kube-proxy process + --rootless (experimental) Run rootless + --agent-token value (experimental/cluster) Shared secret used to join agents to the cluster, but not servers [$K3S_AGENT_TOKEN] + --agent-token-file value (experimental/cluster) File containing the agent secret [$K3S_AGENT_TOKEN_FILE] + --server value, -s value (experimental/cluster) Server to connect to, used to join a cluster [$K3S_URL] + --cluster-init (experimental/cluster) Initialize new cluster master [$K3S_CLUSTER_INIT] + --cluster-reset (experimental/cluster) Forget all peers and become a single cluster new cluster master [$K3S_CLUSTER_RESET] + --secrets-encryption (experimental) Enable Secret encryption at rest + --no-flannel (deprecated) use --flannel-backend=none + --no-deploy value (deprecated) Do not deploy packaged components (valid items: coredns, servicelb, traefik, local-storage, metrics-server) + --cluster-secret value (deprecated) use --token [$K3S_CLUSTER_SECRET] +``` \ No newline at end of file diff --git a/content/k3s/latest/en/installation/installation-requirements/_index.md b/content/k3s/latest/en/installation/installation-requirements/_index.md index a36d5c1f6e1..ee053e09e0c 100644 --- a/content/k3s/latest/en/installation/installation-requirements/_index.md +++ b/content/k3s/latest/en/installation/installation-requirements/_index.md @@ -1,6 +1,8 @@ --- title: Installation Requirements weight: 1 +aliases: + - /k3s/latest/en/installation/node-requirements/ --- K3s is very lightweight, but has some minimum requirements as outlined below. @@ -9,7 +11,7 @@ Whether you're configuring a K3s cluster to run in a Docker or Kubernetes setup, ## Prerequisites -* Two nodes cannot have the same hostname. If all your nodes have the same hostname, pass `--node-name` or set `$K3S_NODE_NAME` with a unique name for each node you add to the cluster. +* Two nodes cannot have the same hostname. If all your nodes have the same hostname, use the `--with-node-id` option to append a random suffix for each node, or otherwise devise a unique name to pass with `--node-name` or `$K3S_NODE_NAME` for each node you add to the cluster. ## Operating Systems @@ -17,9 +19,10 @@ K3s should run on just about any flavor of Linux. However, K3s is tested on the * Ubuntu 16.04 (amd64) * Ubuntu 18.04 (amd64) -* Raspbian Buster (armhf) -> If you are using Alpine Linux, follow [these steps]({{}}/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup) for additional setup. 
+> * If you are using **Raspbian Buster**, follow [these steps]({{}}/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster) to switch to legacy iptables. +> * If you are using **Alpine Linux**, follow [these steps]({{}}/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup) for additional setup. + ## Hardware @@ -34,15 +37,28 @@ K3s performance depends on the performance of the database. To ensure optimal sp ## Networking -The K3s server needs port 6443 to be accessible by the nodes. The nodes need to be able to reach other nodes over UDP port 8472 (Flannel VXLAN). If you do not use Flannel and provide your own custom CNI, then port 8472 is not needed by K3s. The node should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel. +The K3s server needs port 6443 to be accessible by the nodes. -IMPORTANT: The VXLAN port on nodes should not be exposed to the world as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disabled access to port 8472. +The nodes need to be able to reach other nodes over UDP port 8472 when Flannel VXLAN is used. The node should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel. However, if you do not use Flannel and provide your own custom CNI, then port 8472 is not needed by K3s. If you wish to utilize the metrics server, you will need to open port 10250 on each node. +> **Important:** The VXLAN port on nodes should not be exposed to the world as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disables access to port 8472. + +
**Inbound Rules for K3s Server Nodes**
+ +| Protocol | Port | Source | Description +|-----|-----|----------------|---| +| TCP | 6443 | K3s server nodes | Kubernetes API +| UDP | 8472 | K3s server and agent nodes | Required only for Flannel VXLAN +| TCP | 10250 | K3s server and agent nodes | kubelet + +Typically all outbound traffic is allowed. + ## Large Clusters Hardware requirements are based on the size of your K3s cluster. For production and large clusters, we recommend using a high-availability setup with an external database. The following options are recommended for the external database in production: + - MySQL - PostgreSQL - etcd @@ -65,6 +81,17 @@ The cluster performance depends on database performance. To ensure optimal speed ### Network -You should consider increasing the subnet size for the cluster CIDR so that you don't run out of IPs for the pods. You can do that by passing the `--cluster-cidr` option to K3s server upon starting. +You should consider increasing the subnet size for the cluster CIDR so that you don't run out of IPs for the pods. You can do that by passing the `--cluster-cidr` option to K3s server upon starting. +### Database + +K3s supports different databases including MySQL, PostgreSQL, MariaDB, and etcd, the following is a sizing guide for the database resources you need to run large clusters: + +| Deployment Size | Nodes | VCPUS | RAM | +|:---------------:|:---------:|:-----:|:-----:| +| Small | Up to 10 | 1 | 2 GB | +| Medium | Up to 100 | 2 | 8 GB | +| Large | Up to 250 | 4 | 16 GB | +| X-Large | Up to 500 | 8 | 32 GB | +| XX-Large | 500+ | 16 | 64 GB | diff --git a/content/k3s/latest/en/installation/network-options/_index.md b/content/k3s/latest/en/installation/network-options/_index.md index c87b2783831..97873e4151b 100644 --- a/content/k3s/latest/en/installation/network-options/_index.md +++ b/content/k3s/latest/en/installation/network-options/_index.md @@ -3,7 +3,7 @@ title: "Network Options" weight: 25 --- -> **Note:** Please reference the [Networking]({{< baseurl >}}/k3s/latest/en/networking) page for information about CoreDNS, Traefik, and the Service LB. +> **Note:** Please reference the [Networking]({{}}/k3s/latest/en/networking) page for information about CoreDNS, Traefik, and the Service LB. By default, K3s will run with flannel as the CNI, using VXLAN as the default backend. To change the CNI, refer to the section on configuring a [custom CNI](#custom-cni). To change the flannel backend, refer to the flannel options section. 
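+
+For example, the backend can be selected with the `--flannel-backend` server flag documented in the server options; the invocations below are hypothetical and only for illustration:
+
+```sh
+# Use the WireGuard backend instead of the default VXLAN
+k3s server --flannel-backend=wireguard
+
+# Disable flannel entirely so that a custom CNI can be installed
+k3s server --flannel-backend=none
+```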
diff --git a/content/k3s/latest/en/installation/private-registry/_index.md b/content/k3s/latest/en/installation/private-registry/_index.md index 6160866e13a..ac068a6f423 100644 --- a/content/k3s/latest/en/installation/private-registry/_index.md +++ b/content/k3s/latest/en/installation/private-registry/_index.md @@ -25,7 +25,7 @@ Mirrors is a directive that defines the names and endpoints of the private regis ``` mirrors: - "mycustomreg.com:5000": + docker.io: endpoint: - "https://mycustomreg.com:5000" ``` @@ -59,7 +59,7 @@ Below are examples showing how you may configure `/etc/rancher/k3s/registries.ya ``` mirrors: - "mycustomreg.com:5000": + docker.io: endpoint: - "https://mycustomreg.com:5000" configs: @@ -78,7 +78,7 @@ configs: ``` mirrors: - "mycustomreg.com:5000": + docker.io: endpoint: - "https://mycustomreg.com:5000" configs: @@ -101,7 +101,7 @@ Below are examples showing how you may configure `/etc/rancher/k3s/registries.ya ``` mirrors: - "mycustomreg.com:5000": + docker.io: endpoint: - "http://mycustomreg.com:5000" configs: @@ -116,7 +116,7 @@ configs: ``` mirrors: - "mycustomreg.com:5000": + docker.io: endpoint: - "http://mycustomreg.com:5000" ``` @@ -127,3 +127,18 @@ mirrors: > In case of no TLS communication, you need to specify `http://` for the endpoints, otherwise it will default to https. In order for the registry changes to take effect, you need to restart K3s on each node. + +# Adding Images to the Private Registry + +First, obtain the k3s-images.txt file from GitHub for the release you are working with. +Pull the K3s images listed on the k3s-images.txt file from docker.io + +Example: `docker pull docker.io/rancher/coredns-coredns:1.6.3` + +Then, retag the images to the private registry. + +Example: `docker tag coredns-coredns:1.6.3 mycustomreg:5000/coredns-coredns` + +Last, push the images to the private registry. + +Example: `docker push mycustomreg:5000/coredns-coredns` diff --git a/content/k3s/latest/en/networking/_index.md b/content/k3s/latest/en/networking/_index.md index d4f780d8dc5..7bf35ce7c12 100644 --- a/content/k3s/latest/en/networking/_index.md +++ b/content/k3s/latest/en/networking/_index.md @@ -3,11 +3,11 @@ title: "Networking" weight: 35 --- ->**Note:** CNI options are covered in detail on the [Installation Network Options]({{< baseurl >}}/k3s/latest/en/installation/network-options/) page. Please reference that page for details on Flannel and the various flannel backend options or how to set up your own CNI. +>**Note:** CNI options are covered in detail on the [Installation Network Options]({{}}/k3s/latest/en/installation/network-options/) page. Please reference that page for details on Flannel and the various flannel backend options or how to set up your own CNI. Open Ports ---------- -Please reference the [Node Requirements]({{< baseurl >}}/k3s/latest/en/installation/node-requirements/#networking) page for port information. +Please reference the [Installation Requirements]({{}}/k3s/latest/en/installation/installation-requirements/#networking) page for port information. CoreDNS ------- @@ -21,7 +21,7 @@ Traefik Ingress Controller [Traefik](https://traefik.io/) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It simplifies networking complexity while designing, deploying, and running applications. -Traefik is deployed by default when starting the server. For more information see [Auto Deploying Manifests]({{< baseurl >}}/k3s/latest/en/advanced/#auto-deploying-manifests). 
The default config file is found in `/var/lib/rancher/k3s/server/manifests/traefik.yaml` and any changes made to this file will automatically be deployed to Kubernetes in a manner similar to `kubectl apply`. +Traefik is deployed by default when starting the server. For more information see [Auto Deploying Manifests]({{}}/k3s/latest/en/advanced/#auto-deploying-manifests). The default config file is found in `/var/lib/rancher/k3s/server/manifests/traefik.yaml` and any changes made to this file will automatically be deployed to Kubernetes in a manner similar to `kubectl apply`. The Traefik ingress controller will use ports 80, 443, and 8080 on the host (i.e. these will not be usable for HostPort or NodePort). @@ -34,4 +34,4 @@ Service Load Balancer K3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available, the load balancer will stay in Pending. -To disable the embedded load balancer, run the server with the `--no-deploy servicelb` option. This is necessary if you wish to run a different load balancer, such as MetalLB. \ No newline at end of file +To disable the embedded load balancer, run the server with the `--no-deploy servicelb` option. This is necessary if you wish to run a different load balancer, such as MetalLB. diff --git a/content/k3s/latest/en/upgrades/_index.md b/content/k3s/latest/en/upgrades/_index.md index 3ce3a0591a3..58c34361b0a 100644 --- a/content/k3s/latest/en/upgrades/_index.md +++ b/content/k3s/latest/en/upgrades/_index.md @@ -3,42 +3,8 @@ title: "Upgrades" weight: 25 --- -You can upgrade K3s by using the installation script, or by manually installing the binary of the desired version. +This section describes how to upgrade your K3s cluster. ->**Note:** When upgrading, upgrade server nodes first one at a time, then any worker nodes. +[Upgrade basics]({{< baseurl >}}/k3s/latest/en/upgrades/basic/) describes several techniques for upgrading your cluster manually. It can also be used as a basis for upgrading through third-party Infrastructure-as-Code tools like [Terraform](https://www.terraform.io/). -### Upgrade K3s Using the Installation Script - -To upgrade K3s from an older version you can re-run the installation script using the same flags, for example: - -```sh -curl -sfL https://get.k3s.io | sh - -``` - -If you want to upgrade to specific version you can run the following command: - -```sh -curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z-rc1 sh - -``` - -### Manually Upgrade K3s Using the Binary - -Or to manually upgrade K3s: - -1. Download the desired version of K3s from [releases](https://github.com/rancher/k3s/releases/latest) -2. Install to an appropriate location (normally `/usr/local/bin/k3s`) -3. Stop the old version -4. Start the new version - -### Restarting K3s - -Restarting K3s is supported by the installation script for systemd and openrc. -To restart manually for systemd use: -```sh -sudo systemctl restart k3s -``` - -To restart manually for openrc use: -```sh -sudo service k3s restart -``` \ No newline at end of file +[Automated upgrades]({{< baseurl >}}/k3s/latest/en/upgrades/automated/) describes how to perform Kubernetes-native automated upgrades using Rancher's [system-upgrade-controller](https://github.com/rancher/system-upgrade-controller). 
diff --git a/content/k3s/latest/en/upgrades/automated/_index.md b/content/k3s/latest/en/upgrades/automated/_index.md new file mode 100644 index 00000000000..3ac8143052e --- /dev/null +++ b/content/k3s/latest/en/upgrades/automated/_index.md @@ -0,0 +1,115 @@ +--- +title: "Automated Upgrades" +weight: 20 +--- + +>**Note:** This feature is available as of [v1.17.4+k3s1](https://github.com/rancher/k3s/releases/tag/v1.17.4%2Bk3s1) + +### Overview + +You can manage K3s cluster upgrades using Rancher's system-upgrade-controller. This is a Kubernetes-native approach to cluster upgrades. It leverages a [custom resource definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources), the `plan`, and a [controller](https://kubernetes.io/docs/concepts/architecture/controller/) that schedules upgrades based on the configured plans. + +A plan defines upgrade policies and requirements. This documentation will provide plans with defaults appropriate for upgrading a K3s cluster. For more advanced plan configuration options, please review the [CRD](https://github.com/rancher/system-upgrade-controller/blob/master/pkg/apis/upgrade.cattle.io/v1/types.go). + +The controller schedules upgrades by monitoring plans and selecting nodes to run upgrade [jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) on. A plan defines which nodes should be upgraded through a [label selector](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). When a job has run to completion successfully, the controller will label the node on which it ran accordingly. + +>**Note:** The upgrade job that is launched must be highly privileged. It is configured with the following: +> +- Host `IPC`, `NET`, and `PID` namespaces +- The `CAP_SYS_BOOT` capability +- Host root mounted at `/host` with read and write permissions + +For more details on the design and architecture of the system-upgrade-controller or its integration with K3s, see the following Git repositories: + +- [system-upgrade-controller](https://github.com/rancher/system-upgrade-controller) +- [k3s-upgrade](https://github.com/rancher/k3s-upgrade) + +To automate upgrades in this manner you must: + +1. Install the system-upgrade-controller into your cluster +1. Configure plans + + +### Install the system-upgrade-controller +The system-upgrade-controller can be installed as a deployment into your cluster. The deployment requires a service-account, clusterRoleBinding, and a configmap. To install these components, run the following command: +``` +kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/download/v0.4.0/system-upgrade-controller.yaml +``` +The controller can be configured and customized via the previously mentioned configmap, but the controller must be redeployed for the changes to be applied. + + +### Configure plans +It is recommended that you minimally create two plans: a plan for upgrading server (master) nodes and a plan for upgrading agent (worker) nodes. As needed, you can create additional plans to control the rollout of the upgrade across nodes. The following two example plans will upgrade your cluster to K3s v1.17.4+k3s1. Once the plans are created, the controller will pick them up and begin to upgrade your cluster. 
+``` +# Server plan +apiVersion: upgrade.cattle.io/v1 +kind: Plan +metadata: + name: server-plan + namespace: system-upgrade +spec: + concurrency: 1 + cordon: true + nodeSelector: + matchExpressions: + - key: node-role.kubernetes.io/master + operator: In + values: + - "true" + serviceAccountName: system-upgrade + upgrade: + image: rancher/k3s-upgrade + version: v1.17.4+k3s1 +--- +# Agent plan +apiVersion: upgrade.cattle.io/v1 +kind: Plan +metadata: + name: agent-plan + namespace: system-upgrade +spec: + concurrency: 1 + cordon: true + nodeSelector: + matchExpressions: + - key: node-role.kubernetes.io/master + operator: DoesNotExist + prepare: + args: + - prepare + - server-plan + image: rancher/k3s-upgrade:v1.17.4-k3s1 + serviceAccountName: system-upgrade + upgrade: + image: rancher/k3s-upgrade + version: v1.17.4+k3s1 +``` +There are a few important things to call out regarding these plans: + +First, the plans must be created in the same namespace where the controller was deployed. + +Second, the `concurrency` field indicates how many nodes can be upgraded at the same time. + +Third, the server-plan targets server nodes by specifying a label selector that selects nodes with the `node-role.kubernetes.io/master` label. The agent-plan targets agent nodes by specifying a label selector that select nodes without that label. + +Fourth, the `prepare` step in the agent-plan will cause upgrade jobs for that plan to wait for the server-plan to complete before they execute. + +Fifth, both plans have the `version` field set to v1.17.4+k3s1. Alternatively, you can omit the `version` field and set the `channel` field to a URL that resolves to a release of K3s. This will cause the controller to monitor that URL and upgrade the cluster any time it resolves to a new release. This is designed specifically to work with the [latest release functionality of GitHub](https://help.github.com/en/github/administering-a-repository/linking-to-releases). Thus, you can configure your plans with the following channel to ensure your cluster is always automatically upgraded to the latest release of K3s: +``` +apiVersion: upgrade.cattle.io/v1 +kind: Plan +... +spec: + ... + channel: https://github.com/rancher/k3s/releases/latest + +``` + +As stated, the upgrade will begin as soon as the controller detects that a plan was created. Updating a plan will cause the controller to re-evaluate the plan and determine if another upgrade is needed. + +You can monitor the progress of an upgrade by viewing the plan and jobs via kubectl: +``` +kubectl -n system-upgrade get plans -o yaml +kubectl -n system-upgrade get jobs -o yaml +``` + diff --git a/content/k3s/latest/en/upgrades/basic/_index.md b/content/k3s/latest/en/upgrades/basic/_index.md new file mode 100644 index 00000000000..d0f1b8654c8 --- /dev/null +++ b/content/k3s/latest/en/upgrades/basic/_index.md @@ -0,0 +1,59 @@ +--- +title: "Upgrade Basics" +weight: 10 +--- + +You can upgrade K3s by using the installation script, or by manually installing the binary of the desired version. + +>**Note:** When upgrading, upgrade server nodes first one at a time, then any worker nodes. 
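+
+One way to confirm that a node has finished upgrading before moving on to the next is to check the version each node reports; the output shown in the comments below is only illustrative:
+
+```sh
+kubectl get nodes
+# NAME       STATUS   ROLES    AGE   VERSION
+# server-1   Ready    master   10d   v1.17.4+k3s1
+# agent-1    Ready    <none>   10d   v1.17.3+k3s1
+```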
+
+### Upgrade K3s Using the Installation Script
+
+To upgrade K3s from an older version, you can re-run the installation script using the same flags, for example:
+
+```sh
+curl -sfL https://get.k3s.io | sh -
+```
+
+If you want to upgrade to a specific version, you can run the following command:
+
+```sh
+curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z-rc1 sh -
+```
+
+### Manually Upgrade K3s Using the Binary
+
+To upgrade K3s manually:
+
+1. Download the desired version of the K3s binary from [releases](https://github.com/rancher/k3s/releases)
+2. Copy the downloaded binary to `/usr/local/bin/k3s` (or your desired location)
+3. Stop the old k3s binary
+4. Launch the new k3s binary
+
+### Restarting K3s
+
+Restarting K3s is supported by the installation script for systemd and OpenRC.
+
+**systemd**
+
+To restart servers manually:
+```sh
+sudo systemctl restart k3s
+```
+
+To restart agents manually:
+```sh
+sudo systemctl restart k3s-agent
+```
+
+**OpenRC**
+
+To restart servers manually:
+```sh
+sudo service k3s restart
+```
+
+To restart agents manually:
+```sh
+sudo service k3s-agent restart
+```
diff --git a/content/os/v1.x/en/_index.md b/content/os/v1.x/en/_index.md
index 1fd27ba96da..a4d46db0150 100644
--- a/content/os/v1.x/en/_index.md
+++ b/content/os/v1.x/en/_index.md
@@ -25,11 +25,11 @@
VMWare | 1GB | 1280MB (rancheros.iso)
2048MB (ran GCE | 1GB | 1280MB AWS | 1GB | 1.7GB -You can adjust memory requirements by custom building RancherOS, please refer to [reduce-memory-requirements]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/#reduce-memory-requirements) +You can adjust memory requirements by custom building RancherOS, please refer to [reduce-memory-requirements]({{}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/#reduce-memory-requirements) ### How RancherOS Works -Everything in RancherOS is a Docker container. We accomplish this by launching two instances of Docker. One is what we call **System Docker** and is the first process on the system. All other system services, like `ntpd`, `syslog`, and `console`, are running in Docker containers. System Docker replaces traditional init systems like `systemd` and is used to launch [additional system services](installation/system-services/adding-system-services/). +Everything in RancherOS is a Docker container. We accomplish this by launching two instances of Docker. One is what we call **System Docker** and is the first process on the system. All other system services, like `ntpd`, `syslog`, and `console`, are running in Docker containers. System Docker replaces traditional init systems like `systemd` and is used to launch [additional system services](installation/system-services/). System Docker runs a special container called **Docker**, which is another Docker daemon responsible for managing all of the user’s containers. Any containers that you launch as a user from the console will run inside this Docker. This creates isolation from the System Docker containers and ensures that normal user commands don’t impact system services. diff --git a/content/os/v1.x/en/about/_index.md b/content/os/v1.x/en/about/_index.md index 05b095c5451..8b5bf2f8525 100644 --- a/content/os/v1.x/en/about/_index.md +++ b/content/os/v1.x/en/about/_index.md @@ -1,6 +1,6 @@ --- -title: About -weight: 4 +title: Additional Resources +weight: 200 --- ## Developing @@ -59,7 +59,7 @@ All of repositories are located within our main GitHub [page](https://github.com [RancherOS Repo](https://github.com/rancher/os): This repo contains the bulk of the RancherOS code. -[RancherOS Services Repo](https://github.com/rancher/os-services): This repo is where any [system-services]({{< baseurl >}}/os/v1.x/en//installation/system-services/adding-system-services/) can be contributed. +[RancherOS Services Repo](https://github.com/rancher/os-services): This repo is where any [system-services]({{< baseurl >}}/os/v1.x/en//system-services/) can be contributed. [RancherOS Images Repo](https://github.com/rancher/os-images): This repo is for the corresponding service images. diff --git a/content/os/v1.x/en/about/running-rancher-on-rancherOS/_index.md b/content/os/v1.x/en/about/running-rancher-on-rancherOS/_index.md index f0fb87544cd..3fb01def4ed 100644 --- a/content/os/v1.x/en/about/running-rancher-on-rancherOS/_index.md +++ b/content/os/v1.x/en/about/running-rancher-on-rancherOS/_index.md @@ -7,7 +7,7 @@ RancherOS can be used to launch [Rancher](/rancher/) and be used as the OS to ad ### Launching Agents using Cloud-Config -You can easily add hosts into Rancher by using [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) to launch the rancher/agent container. +You can easily add hosts into Rancher by using [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) to launch the rancher/agent container. 
After Rancher is launched and host registration has been saved, you will be able to use the custom option to add RancherOS nodes.
@@ -37,7 +37,7 @@ rancher:
```
-> **Note:** You can not name the service `rancher-agent` as this will not allow the rancher/agent container to be launched correctly. Please read more about why [you can't name your container as `rancher-agent`]({{< baseurl >}}/rancher/v1.6/en/faqs/agents/#adding-in-name-rancher-agent). +> **Note:** You can not name the service `rancher-agent` as this will not allow the rancher/agent container to be launched correctly. Please read more about why [you can't name your container as `rancher-agent`]({{}}/rancher/v1.6/en/faqs/agents/#adding-in-name-rancher-agent). ### Adding in Host Labels diff --git a/content/os/v1.x/en/installation/configuration/_index.md b/content/os/v1.x/en/configuration/_index.md similarity index 95% rename from content/os/v1.x/en/installation/configuration/_index.md rename to content/os/v1.x/en/configuration/_index.md index 628115f1816..209b96ea3b4 100644 --- a/content/os/v1.x/en/installation/configuration/_index.md +++ b/content/os/v1.x/en/configuration/_index.md @@ -1,6 +1,8 @@ --- title: Configuration weight: 120 +aliases: + - /os/v1.x/en/installation/configuration --- There are two ways that RancherOS can be configured. @@ -34,7 +36,7 @@ In our example above, we have our `#cloud-config` line to indicate it's a cloud- ### Manually Changing Configuration To update RancherOS configuration after booting, the `ros config set ` command can be used. -For more complicated settings, like the [sysctl settings]({{< baseurl >}}/os/v1.x/en/installation/configuration/sysctl/), you can also create a small YAML file and then run `sudo ros config merge -i `. +For more complicated settings, like the [sysctl settings]({{< baseurl >}}/os/v1.x/en/configuration/sysctl/), you can also create a small YAML file and then run `sudo ros config merge -i `. #### Getting Values diff --git a/content/os/v1.x/en/installation/configuration/adding-kernel-parameters/_index.md b/content/os/v1.x/en/configuration/adding-kernel-parameters/_index.md similarity index 95% rename from content/os/v1.x/en/installation/configuration/adding-kernel-parameters/_index.md rename to content/os/v1.x/en/configuration/adding-kernel-parameters/_index.md index cafa5232098..da82856f3c9 100644 --- a/content/os/v1.x/en/installation/configuration/adding-kernel-parameters/_index.md +++ b/content/os/v1.x/en/configuration/adding-kernel-parameters/_index.md @@ -1,6 +1,8 @@ --- title: Kernel boot parameters weight: 133 +aliases: + - /os/v1.x/en/installation/configuration/adding-kernel-parameters --- RancherOS parses the Linux kernel boot cmdline to add any keys it understands to its configuration. This allows you to modify what cloud-init sources it will use on boot, to enable `rancher.debug` logging, or to almost any other configuration setting. @@ -27,7 +29,7 @@ $ sudo system-docker run --rm -it -v /:/host alpine vi /host/boot/global.cfg ### During installation -If you want to set the extra kernel parameters when you are [Installing RancherOS to Disk]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/install-to-disk/) please use the `--append` parameter. +If you want to set the extra kernel parameters when you are [Installing RancherOS to Disk]({{< baseurl >}}/os/v1.x/en/installation/server/install-to-disk/) please use the `--append` parameter. 
```bash $ sudo ros install -d /dev/sda --append "rancheros.autologin=tty1" diff --git a/content/os/v1.x/en/installation/configuration/airgap-configuration/_index.md b/content/os/v1.x/en/configuration/airgap-configuration/_index.md similarity index 86% rename from content/os/v1.x/en/installation/configuration/airgap-configuration/_index.md rename to content/os/v1.x/en/configuration/airgap-configuration/_index.md index 81db7ae4132..c82fdbb2a9d 100644 --- a/content/os/v1.x/en/installation/configuration/airgap-configuration/_index.md +++ b/content/os/v1.x/en/configuration/airgap-configuration/_index.md @@ -1,6 +1,8 @@ --- title: Air Gap Configuration weight: 138 +aliases: + - /os/v1.x/en/installation/configuration/airgap-configuration --- In the air gap environment, the Docker registry, RancherOS repositories URL, and the RancherOS upgrade URL should be configured to ensure the OS can pull images, update OS services, and upgrade the OS. @@ -10,10 +12,10 @@ In the air gap environment, the Docker registry, RancherOS repositories URL, and You should use a private Docker registry so that `user-docker` and `system-docker` can pull images. -1. Add the private Docker registry domain to the [images prefix]({{< baseurl >}}/os/v1.x/en/installation/configuration/images-prefix/). -2. Set the private registry certificates for `user-docker`. For details, refer to [Certificates for Private Registries]({{< baseurl >}}/os/v1.x/en/installation/configuration/private-registries/#certificates-for-private-registries) +1. Add the private Docker registry domain to the [images prefix]({{< baseurl >}}/os/v1.x/en/configuration/images-prefix/). +2. Set the private registry certificates for `user-docker`. For details, refer to [Certificates for Private Registries]({{< baseurl >}}/os/v1.x/en/configuration/private-registries/#certificates-for-private-registries) 3. Set the private registry certificates for `system-docker`. There are two ways to set the certificates: - - To set the private registry certificates before RancherOS starts, you can run a script included with RancherOS. For details, refer to [Set Custom Certs in ISO]({{< baseurl >}}/os/v1.x/en/installation/configuration/airgap-configuration/#set-custom-certs-in-iso). + - To set the private registry certificates before RancherOS starts, you can run a script included with RancherOS. For details, refer to [Set Custom Certs in ISO]({{< baseurl >}}/os/v1.x/en/configuration/airgap-configuration/#set-custom-certs-in-iso). - To set the private registry certificates after RancherOS starts, append your private registry certs to the `/etc/ssl/certs/ca-certificates.crt.rancher` file. Then reboot to make the certs fully take effect. 4. The images used by RancherOS should be pushed to your private registry. @@ -84,7 +86,11 @@ $ sudo ros config set rancher.upgrade.url https://foo.bar.com/os/releases.yml Here is a total cloud-config example for using RancherOS in an air gap environment. -For `system-docker`, see [Configuring Private Docker Registry]({{< baseurl >}}/os/v1.x/en/installation/configuration/airgap-configuration/#configuring-private-docker-registry). +<<<<<<< HEAD:content/os/v1.x/en/installation/configuration/airgap-configuration/_index.md +For `system-docker`, see [Configuring Private Docker Registry]({{}}/os/v1.x/en/installation/configuration/airgap-configuration/#configuring-private-docker-registry). 
+======= +For `system-docker`, see [Configuring Private Docker Registry]({{< baseurl >}}/os/v1.x/en/configuration/airgap-configuration/#configuring-private-docker-registry). +>>>>>>> Reorganize RancherOS docs:content/os/v1.x/en/configuration/airgap-configuration/_index.md ```yaml #cloud-config diff --git a/content/os/v1.x/en/installation/configuration/date-and-timezone/_index.md b/content/os/v1.x/en/configuration/date-and-timezone/_index.md similarity index 85% rename from content/os/v1.x/en/installation/configuration/date-and-timezone/_index.md rename to content/os/v1.x/en/configuration/date-and-timezone/_index.md index 13ec156209f..4f21ba4b3d7 100644 --- a/content/os/v1.x/en/installation/configuration/date-and-timezone/_index.md +++ b/content/os/v1.x/en/configuration/date-and-timezone/_index.md @@ -1,11 +1,13 @@ --- title: Date and time zone weight: 121 +aliases: + - /os/v1.x/en/installation/configuration/date-and-timezone --- The default console keeps time in the Coordinated Universal Time (UTC) zone and synchronizes clocks with the Network Time Protocol (NTP). The Network Time Protocol daemon (ntpd) is an operating system program that maintains the system time in synchronization with time servers using the NTP. -RancherOS can run ntpd in the System Docker container. You can update its configurations by updating `/etc/ntp.conf`. For an example of how to update a file such as `/etc/ntp.conf` within a container, refer to [this page.]({{< baseurl >}}/os/v1.x/en/installation/configuration/write-files/#writing-files-in-specific-system-services) +RancherOS can run ntpd in the System Docker container. You can update its configurations by updating `/etc/ntp.conf`. For an example of how to update a file such as `/etc/ntp.conf` within a container, refer to [this page.]({{< baseurl >}}/os/v1.x/en/configuration/write-files/#writing-files-in-specific-system-services) The default console cannot support changing the time zone because including `tzdata` (time zone data) will increase the ISO size. 
However, you can change the time zone in the container by passing a flag to specify the time zone when you run the container: diff --git a/content/os/v1.x/en/installation/configuration/disable-access-to-system/_index.md b/content/os/v1.x/en/configuration/disable-access-to-system/_index.md similarity index 91% rename from content/os/v1.x/en/installation/configuration/disable-access-to-system/_index.md rename to content/os/v1.x/en/configuration/disable-access-to-system/_index.md index 8f9e26529d3..bcbe845c4ac 100644 --- a/content/os/v1.x/en/installation/configuration/disable-access-to-system/_index.md +++ b/content/os/v1.x/en/configuration/disable-access-to-system/_index.md @@ -1,6 +1,8 @@ --- title: Disabling Access to RancherOS weight: 136 +aliases: + - /os/v1.x/en/installation/configuration/disable-access-to-system --- _Available as of v1.5_ diff --git a/content/os/v1.x/en/installation/configuration/docker/_index.md b/content/os/v1.x/en/configuration/docker/_index.md similarity index 95% rename from content/os/v1.x/en/installation/configuration/docker/_index.md rename to content/os/v1.x/en/configuration/docker/_index.md index 0620f6ecd6d..f1c9bc03344 100644 --- a/content/os/v1.x/en/installation/configuration/docker/_index.md +++ b/content/os/v1.x/en/configuration/docker/_index.md @@ -1,9 +1,11 @@ --- title: Configuring Docker or System Docker weight: 126 +aliases: + - /os/v1.x/en/installation/configuration/docker --- -In RancherOS, you can configure System Docker and Docker daemons by using [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). +In RancherOS, you can configure System Docker and Docker daemons by using [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). ### Configuring Docker @@ -61,7 +63,7 @@ Key | Value | Default | Description ---|---|---| --- `extra_args` | List of Strings | `[]` | Arbitrary daemon arguments, appended to the generated command `environment` | List of Strings | `[]` | -`tls` | Boolean | `false` | When [setting up TLS]({{< baseurl >}}/os/v1.x/en/installation/configuration/setting-up-docker-tls/), this key needs to be set to true. +`tls` | Boolean | `false` | When [setting up TLS]({{< baseurl >}}/os/v1.x/en/configuration/setting-up-docker-tls/), this key needs to be set to true. `tls_args` | List of Strings (used only if `tls: true`) | `[]` | `server_key` | String (used only if `tls: true`)| `""` | PEM encoded server TLS key. `server_cert` | String (used only if `tls: true`) | `""` | PEM encoded server TLS certificate. @@ -120,7 +122,7 @@ $ ros config set rancher.system_docker.bip 172.19.0.0/16 _Available as of v1.4.x_ The default path of system-docker logs is `/var/log/system-docker.log`. If you want to write the system-docker logs to a separate partition, -e.g. [RANCHER_OEM partition]({{< baseurl >}}/os/v1.x/en/about/custom-partition-layout/#use-rancher-oem-partition), you can try `rancher.defaults.system_docker_logs`: +e.g. 
[RANCHER_OEM partition]({{}}/os/v1.x/en/about/custom-partition-layout/#use-rancher-oem-partition), you can try `rancher.defaults.system_docker_logs`: ``` #cloud-config diff --git a/content/os/v1.x/en/installation/configuration/hostname/_index.md b/content/os/v1.x/en/configuration/hostname/_index.md similarity index 50% rename from content/os/v1.x/en/installation/configuration/hostname/_index.md rename to content/os/v1.x/en/configuration/hostname/_index.md index 0b05fa53e45..d7c6f3636b5 100644 --- a/content/os/v1.x/en/installation/configuration/hostname/_index.md +++ b/content/os/v1.x/en/configuration/hostname/_index.md @@ -1,9 +1,11 @@ --- title: Setting the Hostname weight: 124 +aliases: + - /os/v1.x/en/installation/configuration/hostname --- -You can set the hostname of the host using [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). The example below shows how to configure it. +You can set the hostname of the host using [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). The example below shows how to configure it. ```yaml #cloud-config diff --git a/content/os/v1.x/en/installation/configuration/images-prefix/_index.md b/content/os/v1.x/en/configuration/images-prefix/_index.md similarity index 94% rename from content/os/v1.x/en/installation/configuration/images-prefix/_index.md rename to content/os/v1.x/en/configuration/images-prefix/_index.md index f8d902c4f66..207595a1312 100644 --- a/content/os/v1.x/en/installation/configuration/images-prefix/_index.md +++ b/content/os/v1.x/en/configuration/images-prefix/_index.md @@ -1,6 +1,8 @@ --- title: Images prefix weight: 121 +aliases: + - /os/v1.x/en/installation/configuration/images-prefix --- _Available as of v1.3_ diff --git a/content/os/v1.x/en/installation/configuration/kernel-modules-kernel-headers/_index.md b/content/os/v1.x/en/configuration/kernel-modules-kernel-headers/_index.md similarity index 95% rename from content/os/v1.x/en/installation/configuration/kernel-modules-kernel-headers/_index.md rename to content/os/v1.x/en/configuration/kernel-modules-kernel-headers/_index.md index 630594495ce..a350c41eff0 100644 --- a/content/os/v1.x/en/installation/configuration/kernel-modules-kernel-headers/_index.md +++ b/content/os/v1.x/en/configuration/kernel-modules-kernel-headers/_index.md @@ -1,6 +1,8 @@ --- title: Installing Kernel Modules that require Kernel Headers weight: 135 +aliases: + - /os/v1.x/en/installation/configuration/kernel-modules-kernel-headers --- To compile any kernel modules, you will need to download the kernel headers. The kernel headers are available in the form of a system service. Since the kernel headers are a system service, they need to be enabled using the `ros service` command. diff --git a/content/os/v1.x/en/installation/configuration/loading-kernel-modules/_index.md b/content/os/v1.x/en/configuration/loading-kernel-modules/_index.md similarity index 97% rename from content/os/v1.x/en/installation/configuration/loading-kernel-modules/_index.md rename to content/os/v1.x/en/configuration/loading-kernel-modules/_index.md index 11d4a5ec41f..d7f2b47673b 100644 --- a/content/os/v1.x/en/installation/configuration/loading-kernel-modules/_index.md +++ b/content/os/v1.x/en/configuration/loading-kernel-modules/_index.md @@ -1,6 +1,8 @@ --- title: Loading Kernel Modules weight: 134 +aliases: + - /os/v1.x/en/installation/configuration/loading-kernel-modules --- Since RancherOS v0.8, we build our own kernels using an unmodified kernel.org LTS kernel. 
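+
+As a minimal sketch, a module could be loaded at boot through cloud-config; the `rancher.modules` key and the module name are assumed here for illustration:
+
+```yaml
+#cloud-config
+rancher:
+  # assumed key; module name is only illustrative
+  modules: [btrfs]
+```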
diff --git a/content/os/v1.x/en/installation/configuration/private-registries/_index.md b/content/os/v1.x/en/configuration/private-registries/_index.md similarity index 87% rename from content/os/v1.x/en/installation/configuration/private-registries/_index.md rename to content/os/v1.x/en/configuration/private-registries/_index.md index 5abe0adbbaf..b231ec4fb6c 100644 --- a/content/os/v1.x/en/installation/configuration/private-registries/_index.md +++ b/content/os/v1.x/en/configuration/private-registries/_index.md @@ -1,9 +1,11 @@ --- title: Private Registries weight: 128 +aliases: + - /os/v1.x/en/installation/configuration/private-registries --- -When launching services through a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config), it is sometimes necessary to pull a private image from DockerHub or from a private registry. Authentication for these can be embedded in your cloud-config. +When launching services through a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config), it is sometimes necessary to pull a private image from DockerHub or from a private registry. Authentication for these can be embedded in your cloud-config. For example, to add authentication for DockerHub: @@ -61,7 +63,7 @@ write_files: ### Certificates for Private Registries -Certificates can be stored in the standard locations (i.e. `/etc/docker/certs.d`) following the [Docker documentation](https://docs.docker.com/registry/insecure). By using the `write_files` directive of the [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config), the certificates can be written directly into `/etc/docker/certs.d`. +Certificates can be stored in the standard locations (i.e. `/etc/docker/certs.d`) following the [Docker documentation](https://docs.docker.com/registry/insecure). By using the `write_files` directive of the [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config), the certificates can be written directly into `/etc/docker/certs.d`. ```yaml #cloud-config diff --git a/content/os/v1.x/en/installation/configuration/resizing-device-partition/_index.md b/content/os/v1.x/en/configuration/resizing-device-partition/_index.md similarity index 87% rename from content/os/v1.x/en/installation/configuration/resizing-device-partition/_index.md rename to content/os/v1.x/en/configuration/resizing-device-partition/_index.md index c7aa605f430..dc21dc1d6a4 100644 --- a/content/os/v1.x/en/installation/configuration/resizing-device-partition/_index.md +++ b/content/os/v1.x/en/configuration/resizing-device-partition/_index.md @@ -1,6 +1,8 @@ --- title: Resizing a Device Partition weight: 131 +aliases: + - /os/v1.x/en/installation/configuration/resizing-device-partition --- The `resize_device` cloud config option can be used to automatically extend the first partition (assuming its `ext4`) to fill the size of it's device. 
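+
+A minimal sketch of what that configuration might look like (the device name is only illustrative):
+
+```yaml
+#cloud-config
+rancher:
+  # extend the first partition of this device on boot
+  resize_device: /dev/sda
+```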
diff --git a/content/os/v1.x/en/installation/configuration/running-commands/_index.md b/content/os/v1.x/en/configuration/running-commands/_index.md similarity index 91% rename from content/os/v1.x/en/installation/configuration/running-commands/_index.md rename to content/os/v1.x/en/configuration/running-commands/_index.md index 11b8d44d8be..b13fee7e041 100644 --- a/content/os/v1.x/en/installation/configuration/running-commands/_index.md +++ b/content/os/v1.x/en/configuration/running-commands/_index.md @@ -1,6 +1,8 @@ --- title: Running Commands weight: 123 +aliases: + - /os/v1.x/en/installation/configuration/running-commands --- You can automate running commands on boot using the `runcmd` cloud-config directive. Commands can be specified as either a list or a string. In the latter case, the command is executed with `sh`. @@ -31,4 +33,4 @@ write_files: docker run -d nginx ``` -Running Docker commands in this manner is useful when pieces of the `docker run` command are dynamically generated. For services whose configuration is static, [adding a system service]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/) is recommended. +Running Docker commands in this manner is useful when pieces of the `docker run` command are dynamically generated. For services whose configuration is static, [adding a system service]({{< baseurl >}}/os/v1.x/en/system-services/) is recommended. diff --git a/content/os/v1.x/en/installation/configuration/setting-up-docker-tls/_index.md b/content/os/v1.x/en/configuration/setting-up-docker-tls/_index.md similarity index 96% rename from content/os/v1.x/en/installation/configuration/setting-up-docker-tls/_index.md rename to content/os/v1.x/en/configuration/setting-up-docker-tls/_index.md index cf98801bbc8..0fb44180b0b 100644 --- a/content/os/v1.x/en/installation/configuration/setting-up-docker-tls/_index.md +++ b/content/os/v1.x/en/configuration/setting-up-docker-tls/_index.md @@ -1,6 +1,8 @@ --- title: Setting up Docker TLS weight: 127 +aliases: + - /os/v1.x/en/installation/configuration/setting-up-docker-tls --- `ros tls generate` is used to generate both the client and server TLS certificates for Docker. diff --git a/content/os/v1.x/en/installation/configuration/ssh-keys/_index.md b/content/os/v1.x/en/configuration/ssh-keys/_index.md similarity index 83% rename from content/os/v1.x/en/installation/configuration/ssh-keys/_index.md rename to content/os/v1.x/en/configuration/ssh-keys/_index.md index 2204c5b637a..25dbfe72cf7 100644 --- a/content/os/v1.x/en/installation/configuration/ssh-keys/_index.md +++ b/content/os/v1.x/en/configuration/ssh-keys/_index.md @@ -1,9 +1,11 @@ --- title: SSH Settings weight: 121 +aliases: + - /os/v1.x/en/installation/configuration/ssh-keys --- -RancherOS supports adding SSH keys through the [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file. Within the cloud-config file, you simply add the ssh keys within the `ssh_authorized_keys` key. +RancherOS supports adding SSH keys through the [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file. Within the cloud-config file, you simply add the ssh keys within the `ssh_authorized_keys` key. 
```yaml #cloud-config diff --git a/content/os/v1.x/en/installation/configuration/switching-consoles/_index.md b/content/os/v1.x/en/configuration/switching-consoles/_index.md similarity index 76% rename from content/os/v1.x/en/installation/configuration/switching-consoles/_index.md rename to content/os/v1.x/en/configuration/switching-consoles/_index.md index e351cac5b65..e410a194a4e 100644 --- a/content/os/v1.x/en/installation/configuration/switching-consoles/_index.md +++ b/content/os/v1.x/en/configuration/switching-consoles/_index.md @@ -1,15 +1,27 @@ --- title: Switching Consoles weight: 125 +aliases: + - /os/v1.x/en/installation/configuration/switching-consoles --- -When [booting from the ISO]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/), RancherOS starts with the default console, which is based on busybox. +<<<<<<< HEAD:content/os/v1.x/en/installation/configuration/switching-consoles/_index.md +When [booting from the ISO]({{}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/), RancherOS starts with the default console, which is based on busybox. -You can select which console you want RancherOS to start with using the [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). +You can select which console you want RancherOS to start with using the [cloud-config]({{}}/os/v1.x/en/installation/configuration/#cloud-config). ### Enabling Consoles using Cloud-Config -When launching RancherOS with a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file, you can select which console you want to use. +When launching RancherOS with a [cloud-config]({{}}/os/v1.x/en/installation/configuration/#cloud-config) file, you can select which console you want to use. +======= +When [booting from the ISO]({{< baseurl >}}/os/v1.x/en/installation/workstation//boot-from-iso/), RancherOS starts with the default console, which is based on busybox. + +You can select which console you want RancherOS to start with using the [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). + +### Enabling Consoles using Cloud-Config + +When launching RancherOS with a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file, you can select which console you want to use. +>>>>>>> Reorganize RancherOS docs:content/os/v1.x/en/configuration/switching-consoles/_index.md Currently, the list of available consoles are: @@ -102,7 +114,7 @@ All consoles except the default (busybox) console are persistent. Persistent con
-> **Note:** When using a persistent console and in the current version's console, [rolling back]({{< baseurl >}}/os/v1.x/en/upgrading/#rolling-back-an-upgrade) is not supported. For example, rolling back to v0.4.5 when using a v0.5.0 persistent console is not supported. +> **Note:** When using a persistent console and in the current version's console, [rolling back]({{< baseurl >}}/os/v1.x/en/upgrading/#rolling-back-an-upgrade) is not supported. For example, rolling back to v0.4.5 when using a v0.5.0 persistent console is not supported. ### Enabling Consoles diff --git a/content/os/v1.x/en/installation/configuration/switching-docker-versions/_index.md b/content/os/v1.x/en/configuration/switching-docker-versions/_index.md similarity index 69% rename from content/os/v1.x/en/installation/configuration/switching-docker-versions/_index.md rename to content/os/v1.x/en/configuration/switching-docker-versions/_index.md index e51d1d46405..3b667af2430 100644 --- a/content/os/v1.x/en/installation/configuration/switching-docker-versions/_index.md +++ b/content/os/v1.x/en/configuration/switching-docker-versions/_index.md @@ -1,9 +1,15 @@ --- title: Switching Docker Versions weight: 129 +aliases: + - /os/v1.x/en/installation/configuration/switching-docker-versions --- -The version of User Docker used in RancherOS can be configured using a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file or by using the `ros engine` command. +The version of User Docker used in RancherOS can be configured using a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file or by using the `ros engine` command. > **Note:** There are known issues in Docker when switching between versions. For production systems, we recommend setting the Docker engine only once [using a cloud-config](#setting-the-docker-engine-using-cloud-config). @@ -83,7 +89,11 @@ FROM scratch COPY engine /engine ``` -Once the image is built a [system service]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/) configuration file must be created. An [example file](https://github.com/rancher/os-services/blob/master/d/docker-18.06.3-ce.yml) can be found in the rancher/os-services repo. Change the `image` field to point to the Docker engine image you've built. +Once the image is built, a [system service]({{< baseurl >}}/os/v1.x/en/system-services/) configuration file must be created. An [example file](https://github.com/rancher/os-services/blob/master/d/docker-18.06.3-ce.yml) can be found in the rancher/os-services repo. Change the `image` field to point to the Docker engine image you've built. All of the previously mentioned methods of switching Docker engines are now available. For example, if your service file is located at `https://myservicefile` then the following cloud-config file could be used to use your custom Docker engine. diff --git a/content/os/v1.x/en/installation/configuration/sysctl/_index.md b/content/os/v1.x/en/configuration/sysctl/_index.md similarity index 88% rename from content/os/v1.x/en/installation/configuration/sysctl/_index.md rename to content/os/v1.x/en/configuration/sysctl/_index.md index 6eac6f0eecd..1a8d6722d63 100644 --- a/content/os/v1.x/en/installation/configuration/sysctl/_index.md +++ b/content/os/v1.x/en/configuration/sysctl/_index.md @@ -1,6 +1,8 @@ --- title: Sysctl Settings weight: 132 +aliases: + - /os/v1.x/en/installation/configuration/sysctl --- The `rancher.sysctl` cloud-config key can be used to control sysctl parameters. This works in a manner similar to `/etc/sysctl.conf` for other Linux distros. diff --git a/content/os/v1.x/en/installation/configuration/users/_index.md b/content/os/v1.x/en/configuration/users/_index.md similarity index 62% rename from content/os/v1.x/en/installation/configuration/users/_index.md rename to content/os/v1.x/en/configuration/users/_index.md index 529281eef07..4612c1cce2a 100644 --- a/content/os/v1.x/en/installation/configuration/users/_index.md +++ b/content/os/v1.x/en/configuration/users/_index.md @@ -1,11 +1,13 @@ --- title: Users weight: 130 +aliases: + - /os/v1.x/en/installation/configuration/users --- Currently, we don't support adding other users besides `rancher`. -You _can_ add users in the console container, but these users will only exist as long as the console container exists. It only makes sense to add users in a [persistent consoles]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-console/#console-persistence). +You _can_ add users in the console container, but these users will only exist as long as the console container exists. It only makes sense to add users in a [persistent console]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-console/#console-persistence). If you want the console user to be able to ssh into RancherOS, you need to add them to the `docker` group. diff --git a/content/os/v1.x/en/installation/configuration/write-files/_index.md b/content/os/v1.x/en/configuration/write-files/_index.md similarity index 95% rename from content/os/v1.x/en/installation/configuration/write-files/_index.md rename to content/os/v1.x/en/configuration/write-files/_index.md index c222448370c..7071d5d8923 100644 --- a/content/os/v1.x/en/installation/configuration/write-files/_index.md +++ b/content/os/v1.x/en/configuration/write-files/_index.md @@ -1,6 +1,8 @@ --- title: Writing Files weight: 122 +aliases: + - /os/v1.x/en/installation/configuration/write-files --- You can automate writing files to disk using the `write_files` cloud-config directive. diff --git a/content/os/v1.x/en/installation/_index.md b/content/os/v1.x/en/installation/_index.md index 99f8d6369a6..be3cae1d222 100644 --- a/content/os/v1.x/en/installation/_index.md +++ b/content/os/v1.x/en/installation/_index.md @@ -1,4 +1,34 @@ --- -title: Installation -weight: 2 +title: Installing and Running RancherOS +weight: 100 +aliases: + - /os/v1.x/en/installation/running-rancheros --- + +RancherOS runs on virtualization platforms, cloud providers and bare metal servers. We also support running a local VM on your laptop.
+ +To start running RancherOS as quickly as possible, follow our [Quick Start Guide]({{< baseurl >}}/os/v1.x/en/quick-start-guide/). + +# Platforms +Refer to the resources below for more information on installing RancherOS on your platform. + +### Workstation + +- [Docker Machine]({{< baseurl >}}/os/v1.x/en/installation/workstation//docker-machine) +- [Boot from ISO]({{< baseurl >}}/os/v1.x/en/installation/workstation//boot-from-iso) + +### Cloud + +- [Amazon EC2]({{< baseurl >}}/os/v1.x/en/installation/cloud/aws) +- [Google Compute Engine]({{< baseurl >}}/os/v1.x/en/installation/cloud/gce) +- [DigitalOcean]({{< baseurl >}}/os/v1.x/en/installation/cloud/do) +- [Azure]({{< baseurl >}}/os/v1.x/en/installation/cloud/azure) +- [OpenStack]({{< baseurl >}}/os/v1.x/en/installation/cloud/openstack) +- [VMware ESXi]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi) +- [Aliyun]({{< baseurl >}}/os/v1.x/en/installation/cloud/aliyun) + +### Bare Metal & Virtual Servers + +- [PXE]({{< baseurl >}}/os/v1.x/en/installation/server/pxe) +- [Install to Hard Disk]({{< baseurl >}}/os/v1.x/en/installation/server/install-to-disk) +- [Raspberry Pi]({{< baseurl >}}/os/v1.x/en/installation/server/raspberry-pi) diff --git a/content/os/v1.x/en/installation/amazon-ecs/_index.md b/content/os/v1.x/en/installation/amazon-ecs/_index.md index a76c7675044..10dae1ffddb 100644 --- a/content/os/v1.x/en/installation/amazon-ecs/_index.md +++ b/content/os/v1.x/en/installation/amazon-ecs/_index.md @@ -11,13 +11,13 @@ Prior to launching RancherOS EC2 instances, the [ECS Container Instance IAM Role ### Launching an instance with ECS -RancherOS makes it easy to join your ECS cluster. The ECS agent is a [system service]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/) that is enabled in the ECS enabled AMI. There may be other RancherOS AMIs that don't have the ECS agent enabled by default, but it can easily be added in the user data on any RancherOS AMI. +RancherOS makes it easy to join your ECS cluster. The ECS agent is a [system service]({{< baseurl >}}/os/v1.x/en/system-services/) that is enabled in the ECS enabled AMI. There may be other RancherOS AMIs that don't have the ECS agent enabled by default, but it can easily be added in the user data on any RancherOS AMI. When launching the RancherOS AMI, you'll need to specify the **IAM Role** and **Advanced Details** -> **User Data** in the **Configure Instance Details** step. For the **IAM Role**, you'll need to be sure to select the ECS Container Instance IAM role. -For the **User Data**, you'll need to pass in the [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file. +For the **User Data**, you'll need to pass in the [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file. ```yaml #cloud-config @@ -37,7 +37,7 @@ rancher: By default, the ECS agent will be using the `latest` tag for the `amazon-ecs-agent` image. In v0.5.0, we introduced the ability to select which version of the `amazon-ecs-agent`. -To select the version, you can update your [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file. +To select the version, you can update your [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file.
```yaml #cloud-config diff --git a/content/os/v1.x/en/installation/boot-process/built-in-system-services/_index.md b/content/os/v1.x/en/installation/boot-process/built-in-system-services/_index.md index 32e0f7ce61f..d49a8ac4b5a 100644 --- a/content/os/v1.x/en/installation/boot-process/built-in-system-services/_index.md +++ b/content/os/v1.x/en/installation/boot-process/built-in-system-services/_index.md @@ -3,17 +3,17 @@ title: Built-in System Services weight: 150 --- -To launch RancherOS, we have built-in system services. They are defined in the [Docker Compose](https://docs.docker.com/compose/compose-file/) format, and can be found in the default system config file, `/usr/share/ros/os-config.yml`. You can [add your own system services]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/) or override services in the cloud-config. +To launch RancherOS, we have built-in system services. They are defined in the [Docker Compose](https://docs.docker.com/compose/compose-file/) format, and can be found in the default system config file, `/usr/share/ros/os-config.yml`. You can [add your own system services]({{< baseurl >}}/os/v1.x/en/system-services/) or override services in the cloud-config. ### preload-user-images -Read more about [image preloading]({{< baseurl >}}/os/v1.x/en/installation/boot-process/image-preloading/). +Read more about [image preloading]({{< baseurl >}}/os/v1.x/en/installation/boot-process/image-preloading/). ### network During this service, networking is set up, e.g. hostname, interfaces, and DNS. -It is configured by `hostname` and `rancher.network`settings in [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). +It is configured by `hostname` and `rancher.network` settings in [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). ### ntp @@ -24,13 +24,13 @@ Runs `ntpd` in a System Docker container. This service provides the RancherOS user interface by running `sshd` and `getty`. It completes the RancherOS configuration on start up: 1. If the `rancher.password=<password>` kernel parameter exists, it sets `<password>` as the password for the `rancher` user. -2. If there are no host SSH keys, it generates host SSH keys and saves them under `rancher.ssh.keys` in [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). +2. If there are no host SSH keys, it generates host SSH keys and saves them under `rancher.ssh.keys` in [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). 3. Runs `cloud-init -execute`, which does the following: - * Updates `.ssh/authorized_keys` in `/home/rancher` and `/home/docker` from [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/ssh-keys/) and metadata.
+ * Writes files specified by the `write_files` [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/write-files/) setting. + * Resizes the device specified by the `rancher.resize_device` [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/resizing-device-partition/) setting. + * Mount devices specified in the `mounts` [cloud-config]({{< baseurl >}}/os/v1.x/en/storage/additional-mounts/) setting. + * Set sysctl parameters specified in the`rancher.sysctl` [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/sysctl/) setting. 4. If user-data contained a file that started with `#!`, then a file would be saved at `/var/lib/rancher/conf/cloud-config-script` during cloud-init and then executed. Any errors are ignored. 5. Runs `/opt/rancher/bin/start.sh` if it exists and is executable. Any errors are ignored. 6. Runs `/etc/rc.local` if it exists and is executable. Any errors are ignored. diff --git a/content/os/v1.x/en/installation/boot-process/cloud-init/_index.md b/content/os/v1.x/en/installation/boot-process/cloud-init/_index.md index 85ab3695cea..78a9c583273 100644 --- a/content/os/v1.x/en/installation/boot-process/cloud-init/_index.md +++ b/content/os/v1.x/en/installation/boot-process/cloud-init/_index.md @@ -7,7 +7,7 @@ Userdata and metadata can be fetched from a cloud provider, VM runtime, or manag ### Userdata -Userdata is a file given by users when launching RancherOS hosts. It is stored in different locations depending on its format. If the userdata is a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file, indicated by beginning with `#cloud-config` and being in YAML format, it is stored in `/var/lib/rancher/conf/cloud-config.d/boot.yml`. If the userdata is a script, indicated by beginning with `#!`, it is stored in `/var/lib/rancher/conf/cloud-config-script`. +Userdata is a file given by users when launching RancherOS hosts. It is stored in different locations depending on its format. If the userdata is a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file, indicated by beginning with `#cloud-config` and being in YAML format, it is stored in `/var/lib/rancher/conf/cloud-config.d/boot.yml`. If the userdata is a script, indicated by beginning with `#!`, it is stored in `/var/lib/rancher/conf/cloud-config-script`. ### Metadata @@ -15,7 +15,7 @@ Although the specifics vary based on provider, a metadata file will typically co ## Configuration Load Order -[Cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config/) is read by system services when they need to get configuration. Each additional file overwrites and extends the previous configuration file. +[Cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config/) is read by system services when they need to get configuration. Each additional file overwrites and extends the previous configuration file. 1. `/usr/share/ros/os-config.yml` - This is the system default configuration, which should **not** be modified by users. 2. `/usr/share/ros/oem/oem-config.yml` - This will typically exist by OEM, which should **not** be modified by users. 
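As a quick illustration of the two userdata formats described in the cloud-init page above, a cloud-config style userdata file begins with the `#cloud-config` marker and contains YAML; the hostname and SSH key below are placeholder values and are not taken from the docs being changed:

```yaml
#cloud-config
# minimal sketch with placeholder values
hostname: rancheros-node
ssh_authorized_keys:
  - ssh-rsa AAAA...placeholder user@example
```

A script style userdata file would instead begin with an interpreter line such as `#!/bin/sh`, and, as noted above, RancherOS stores it at `/var/lib/rancher/conf/cloud-config-script`.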
diff --git a/content/os/v1.x/en/installation/running-rancheros/cloud/aliyun/_index.md b/content/os/v1.x/en/installation/cloud/aliyun/_index.md similarity index 86% rename from content/os/v1.x/en/installation/running-rancheros/cloud/aliyun/_index.md rename to content/os/v1.x/en/installation/cloud/aliyun/_index.md index ce08ce913fb..bffd35fc0d6 100644 --- a/content/os/v1.x/en/installation/running-rancheros/cloud/aliyun/_index.md +++ b/content/os/v1.x/en/installation/cloud/aliyun/_index.md @@ -1,6 +1,8 @@ --- title: Aliyun weight: 111 +aliases: + - /os/v1.x/en/installation/running-rancheros/cloud/aliyun --- # Adding the RancherOS Image into Aliyun @@ -13,7 +15,7 @@ RancherOS is available as an image in Aliyun, and can be easily run in Elastic C Example: -![RancherOS on Aliyun 1]({{< baseurl >}}/img/os/RancherOS_aliyun1.jpg) +![RancherOS on Aliyun 1]({{< baseurl >}}/img/os/RancherOS_aliyun1.jpg) ## Options @@ -29,6 +31,6 @@ After the image is uploaded, we can use the `Aliyun Console` to start a new inst Since the image is private, we need to use the `Custom Images`. -![RancherOS on Aliyun 2]({{< baseurl >}}/img/os/RancherOS_aliyun2.jpg) +![RancherOS on Aliyun 2]({{< baseurl >}}/img/os/RancherOS_aliyun2.jpg) After the instance is successfully started, we can login with the `rancher` user via SSH. diff --git a/content/os/v1.x/en/installation/running-rancheros/cloud/aws/_index.md b/content/os/v1.x/en/installation/cloud/aws/_index.md similarity index 77% rename from content/os/v1.x/en/installation/running-rancheros/cloud/aws/_index.md rename to content/os/v1.x/en/installation/cloud/aws/_index.md index e8886b5f617..57a937465a8 100644 --- a/content/os/v1.x/en/installation/running-rancheros/cloud/aws/_index.md +++ b/content/os/v1.x/en/installation/cloud/aws/_index.md @@ -1,6 +1,8 @@ --- title: Amazon EC2 weight: 105 +aliases: + - /os/v1.x/en/installation/running-rancheros/cloud/aws --- RancherOS is available as an Amazon Web Services AMI, and can be easily run on EC2. You can launch RancherOS either using the AWS Command Line Interface (CLI) or using the AWS console. @@ -28,7 +30,11 @@ Let’s walk through how to import and create a RancherOS on EC2 machine using t {{< img "/img/os/Rancher_aws1.png" "RancherOS on AWS 1">}} 2. Select the **Community AMIs** on the sidebar and search for **RancherOS**. Pick the latest version and click **Select**. {{< img "/img/os/Rancher_aws2.png" "RancherOS on AWS 2">}} -3. Go through the steps of creating the instance type through the AWS console. If you want to pass in a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file during boot of RancherOS, you'd pass in the file as **User data** by expanding the **Advanced Details** in **Step 3: Configure Instance Details**. You can pass in the data as text or as a file. +3. Go through the steps of creating the instance type through the AWS console. If you want to pass in a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file during boot of RancherOS, you'd pass in the file as **User data** by expanding the **Advanced Details** in **Step 3: Configure Instance Details**. You can pass in the data as text or as a file. {{< img "/img/os/Rancher_aws6.png" "RancherOS on AWS 6">}} After going through all the steps, you finally click on **Launch**, and either create a new key pair or choose an existing key pair to be used with the EC2 instance. If you have created a new key pair, download the key pair. If you have chosen an existing key pair, make sure you have the key pair accessible. Click on **Launch Instances**. {{< img "/img/os/Rancher_aws3.png" "RancherOS on AWS 3">}} diff --git a/content/os/v1.x/en/installation/running-rancheros/cloud/azure/_index.md b/content/os/v1.x/en/installation/cloud/azure/_index.md similarity index 97% rename from content/os/v1.x/en/installation/running-rancheros/cloud/azure/_index.md rename to content/os/v1.x/en/installation/cloud/azure/_index.md index c144d792d4a..19553b92b02 100644 --- a/content/os/v1.x/en/installation/running-rancheros/cloud/azure/_index.md +++ b/content/os/v1.x/en/installation/cloud/azure/_index.md @@ -1,6 +1,8 @@ --- title: Azure weight: 110 +aliases: + - /os/v1.x/en/installation/running-rancheros/cloud/azure --- RancherOS has been published in Azure Marketplace, you can get it from [here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/rancher.rancheros). diff --git a/content/os/v1.x/en/installation/running-rancheros/cloud/do/_index.md b/content/os/v1.x/en/installation/cloud/do/_index.md similarity index 92% rename from content/os/v1.x/en/installation/running-rancheros/cloud/do/_index.md rename to content/os/v1.x/en/installation/cloud/do/_index.md index d644822ded6..1d043601180 100644 --- a/content/os/v1.x/en/installation/running-rancheros/cloud/do/_index.md +++ b/content/os/v1.x/en/installation/cloud/do/_index.md @@ -1,6 +1,8 @@ --- title: Digital Ocean weight: 107 +aliases: + - /os/v1.x/en/installation/running-rancheros/cloud/do --- RancherOS is available in the Digital Ocean portal. RancherOS is a member of container distributions and you can find it easily. @@ -15,7 +17,7 @@ To start a RancherOS Droplet on Digital Ocean: 1. Click **Create Droplet.** 1. Click the **Container distributions** tab. 1. Click **RancherOS.** -1. Choose a plan. Make sure your Droplet has the [minimum hardware requirements for RancherOS]({{< baseurl >}}os/v1.x/en/overview/#hardware-requirements). +1. Choose a plan. Make sure your Droplet has the [minimum hardware requirements for RancherOS]({{< baseurl >}}/os/v1.x/en/overview/#hardware-requirements). 1. Choose any options for backups, block storage, and datacenter region. 1. Optional: In the **Select additional options** section, you can check the **User data** box and enter a `cloud-config` file in the text box that appears. The `cloud-config` file is used to provide a script to be run on the first boot. An example is below. 1. Choose an SSH key that you have access to, or generate a new SSH key.
diff --git a/content/os/v1.x/en/installation/running-rancheros/cloud/gce/_index.md b/content/os/v1.x/en/installation/cloud/gce/_index.md similarity index 85% rename from content/os/v1.x/en/installation/running-rancheros/cloud/gce/_index.md rename to content/os/v1.x/en/installation/cloud/gce/_index.md index 6545a2a3477..34159b09d19 100644 --- a/content/os/v1.x/en/installation/running-rancheros/cloud/gce/_index.md +++ b/content/os/v1.x/en/installation/cloud/gce/_index.md @@ -1,9 +1,11 @@ --- title: Google Compute Engine (GCE) weight: 106 +aliases: + - /os/v1.x/en/installation/running-rancheros/cloud/gce --- -> **Note:** Due to the maximum transmission unit (MTU) of [1460 bytes on GCE](https://cloud.google.com/compute/docs/troubleshooting#packetfragmentation), you will need to configure your [network interfaces]({{< baseurl >}}/os/v1.x/en/installation/networking/interfaces/) and both the [Docker and System Docker]({{< baseurl >}}/os/v1.x/en/installation/configuration/docker/) to use a MTU of 1460 bytes or you will encounter weird networking related errors. +> **Note:** Due to the maximum transmission unit (MTU) of [1460 bytes on GCE](https://cloud.google.com/compute/docs/troubleshooting#packetfragmentation), you will need to configure your [network interfaces]({{< baseurl >}}/os/v1.x/en/networking/interfaces/) and both the [Docker and System Docker]({{< baseurl >}}/os/v1.x/en/configuration/docker/) to use an MTU of 1460 bytes or you will encounter strange networking-related errors. ### Adding the RancherOS Image into GCE @@ -26,7 +28,7 @@ $ gcloud compute instances create --project --zone --image -If you want to pass in your own cloud config file that will be processed by [cloud init]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config), you can pass it as metadata upon creation of the instance during the `gcloud compute` command. The file will need to be stored locally before running the command. The key of the metadata will be `user-data` and the value is the location of the file. If any SSH keys are added in the cloud config file, it will also be added to the **rancher** user. +If you want to pass in your own cloud config file that will be processed by [cloud init]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config), you can pass it as metadata upon creation of the instance during the `gcloud compute` command. The file will need to be stored locally before running the command. The key of the metadata will be `user-data` and the value is the location of the file. If any SSH keys are added in the cloud config file, it will also be added to the **rancher** user. ``` $ gcloud compute instances create --project --zone --image --metadata-from-file user-data=/Directory/of/Cloud_Config.yml @@ -74,11 +76,11 @@ Updated [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE_OF After the image is uploaded, it's easy to use the console to create new instances. You will **not** be able to upload your own cloud config file when creating instances through the console. You can add it after the instance is created using `gcloud compute` commands and resetting the instance. 1. Make sure you are in the project that the image was created in. - ![RancherOS on GCE 4]({{< baseurl >}}/img/os/Rancher_gce4.png) + ![RancherOS on GCE 4]({{< baseurl >}}/img/os/Rancher_gce4.png) 2. In the navigation bar, click on the **VM instances**, which is located at Compute -> Compute Engine -> Metadata. Click on **Create instance**. - ![RancherOS on GCE 5]({{< baseurl >}}/img/os/Rancher_gce5.png) + ![RancherOS on GCE 5]({{< baseurl >}}/img/os/Rancher_gce5.png) 2. Fill out the information for your instance. In the **Image** dropdown, your private image will be listed among the public images provided by Google. Select the private image for RancherOS. Click **Create**. - ![RancherOS on GCE 6]({{< baseurl >}}/img/os/Rancher_gce6.png) + ![RancherOS on GCE 6]({{< baseurl >}}/img/os/Rancher_gce6.png) 3. Your instance is being created and will be up and running shortly! #### Adding SSH keys @@ -89,7 +91,7 @@ In order to SSH into the GCE instance, you will need to have SSH keys set up in In your project, click on **Metadata**, which is located within Compute -> Compute Engine -> Metadata. Click on **SSH Keys**. -![RancherOS on GCE 7]({{< baseurl >}}/img/os/Rancher_gce7.png) +![RancherOS on GCE 7]({{< baseurl >}}/img/os/Rancher_gce7.png) Add the SSH keys that you want to have access to any instances within your project. @@ -99,11 +101,11 @@ Note: If you do this after any RancherOS instance is created, you will need to r After your instance is created, click on the instance name. Scroll down to the **SSH Keys** section and click on **Add SSH key**. This key will only be applicable to the instance. -![RancherOS on GCE 8]({{< baseurl >}}/img/os/Rancher_gce8.png) +![RancherOS on GCE 8]({{< baseurl >}}/img/os/Rancher_gce8.png) After the SSH keys have been added, you'll need to reset the machine, by clicking **Reset**. -![RancherOS on GCE 9]({{< baseurl >}}/img/os/Rancher_gce9.png) +![RancherOS on GCE 9]({{< baseurl >}}/img/os/Rancher_gce9.png) After a little bit, you will be able to SSH into the box using the **rancher** user. diff --git a/content/os/v1.x/en/installation/running-rancheros/cloud/openstack/_index.md b/content/os/v1.x/en/installation/cloud/openstack/_index.md similarity index 75% rename from content/os/v1.x/en/installation/running-rancheros/cloud/openstack/_index.md rename to content/os/v1.x/en/installation/cloud/openstack/_index.md index 7649d6e7e1a..679c48e998e 100644 --- a/content/os/v1.x/en/installation/running-rancheros/cloud/openstack/_index.md +++ b/content/os/v1.x/en/installation/cloud/openstack/_index.md @@ -1,8 +1,10 @@ --- title: OpenStack weight: 109 +aliases: + - /os/v1.x/en/installation/running-rancheros/cloud/openstack --- As of v0.5.0, RancherOS releases include an Openstack image that can be found on our [releases page](https://github.com/rancher/os/releases). The image format is [QCOW3](https://wiki.qemu.org/Features/Qcow3#Fully_QCOW2_backwards-compatible_feature_set) that is backward compatible with QCOW2. -When launching an instance using the image, you must enable **Advanced Options** -> **Configuration Drive** and in order to use a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file. +When launching an instance using the image, you must enable **Advanced Options** -> **Configuration Drive** in order to use a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file.
diff --git a/content/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi/_index.md b/content/os/v1.x/en/installation/cloud/vmware-esxi/_index.md similarity index 96% rename from content/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi/_index.md rename to content/os/v1.x/en/installation/cloud/vmware-esxi/_index.md index b4ccdb6fa25..07913f18ae4 100644 --- a/content/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi/_index.md +++ b/content/os/v1.x/en/installation/cloud/vmware-esxi/_index.md @@ -1,6 +1,8 @@ --- title: VMware ESXi weight: 108 +aliases: + - /os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi --- As of v1.1.0, RancherOS automatically detects that it is running on VMware ESXi, and automatically adds the `open-vm-tools` service to be downloaded and started, and uses `guestinfo` keys to set the cloud-init data. diff --git a/content/os/v1.x/en/installation/custom-builds/custom-console/_index.md b/content/os/v1.x/en/installation/custom-builds/custom-console/_index.md index c24ca816aeb..5a4e2c225f5 100644 --- a/content/os/v1.x/en/installation/custom-builds/custom-console/_index.md +++ b/content/os/v1.x/en/installation/custom-builds/custom-console/_index.md @@ -3,13 +3,23 @@ title: Custom Console weight: 180 --- -When [booting from the ISO]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/), RancherOS starts with the default console, which is based on busybox. +When [booting from the ISO]({{< baseurl >}}/os/v1.x/en/installation/workstation//boot-from-iso/), RancherOS starts with the default console, which is based on busybox. -You can select which console you want RancherOS to start with using the [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). +You can select which console you want RancherOS to start with using the [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). ### Enabling Consoles using Cloud-Config -When launching RancherOS with a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file, you can select which console you want to use. +When launching RancherOS with a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file, you can select which console you want to use. Currently, the list of available consoles are: @@ -102,7 +112,7 @@ All consoles except the default (busybox) console are persistent. Persistent con
-> **Note:** When using a persistent console and in the current version's console, [rolling back]({{< baseurl >}}/os/v1.x/en/upgrading/#rolling-back-an-upgrade) is not supported. For example, rolling back to v0.4.5 when using a v0.5.0 persistent console is not supported. +> **Note:** When using a persistent console and in the current version's console, [rolling back]({{< baseurl >}}/os/v1.x/en/upgrading/#rolling-back-an-upgrade) is not supported. For example, rolling back to v0.4.5 when using a v0.5.0 persistent console is not supported. ### Enabling Consoles diff --git a/content/os/v1.x/en/installation/custom-builds/custom-kernels/_index.md b/content/os/v1.x/en/installation/custom-builds/custom-kernels/_index.md index 8a7ff668a11..b3d6d35baae 100644 --- a/content/os/v1.x/en/installation/custom-builds/custom-kernels/_index.md +++ b/content/os/v1.x/en/installation/custom-builds/custom-kernels/_index.md @@ -59,7 +59,7 @@ Your kernel should be packaged and published as a set of files of the following ### Building a RancherOS release using the Packaged kernel files. -By default, RancherOS ships with the kernel provided by the [os-kernel repository](https://github.com/rancher/os-kernel). Swapping out the default kernel can by done by [building your own custom RancherOS ISO]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/). +By default, RancherOS ships with the kernel provided by the [os-kernel repository](https://github.com/rancher/os-kernel). Swapping out the default kernel can be done by [building your own custom RancherOS ISO]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/). Create a clone of the main [RancherOS repository](https://github.com/rancher/os) to your local machine with a `git clone`. @@ -75,6 +75,6 @@ ARG KERNEL_VERSION_amd64=4.14.63-rancher ARG KERNEL_URL_amd64=https://link/xxxx ``` -After you've replaced the URL with your custom kernel, you can follow the steps in [building your own custom RancherOS ISO]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/). +After you've replaced the URL with your custom kernel, you can follow the steps in [building your own custom RancherOS ISO]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/). > **Note:** `KERNEL_URL` settings should point to a Linux kernel, compiled and packaged in a specific way. You can fork [os-kernel repository](https://github.com/rancher/os-kernel) to package your own kernel. diff --git a/content/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/_index.md b/content/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/_index.md index 697189f8d9d..18f3ddafcbe 100644 --- a/content/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/_index.md +++ b/content/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/_index.md @@ -11,7 +11,7 @@ Create a clone of the main [RancherOS repository](https://github.com/rancher/os) $ git clone https://github.com/rancher/os.git ``` -In the root of the repository, the "General Configuration" section of `Dockerfile.dapper` can be updated to use [custom kernels]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-kernels). +In the root of the repository, the "General Configuration" section of `Dockerfile.dapper` can be updated to use [custom kernels]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-kernels). After you've saved your edits, run `make` in the root directory. After the build has completed, a `./dist/artifacts` directory will be created with the custom built RancherOS release files. Build Requirements: `bash`, `make`, `docker` (Docker version >= 1.10.3) @@ -29,7 +29,7 @@ If you need a compressed ISO, you can run this command: $ make release ``` -The `rancheros.iso` is ready to be used to [boot RancherOS from ISO]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/) or [launch RancherOS using Docker Machine]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/docker-machine). +The `rancheros.iso` is ready to be used to [boot RancherOS from ISO]({{< baseurl >}}/os/v1.x/en/installation/workstation//boot-from-iso/) or [launch RancherOS using Docker Machine]({{< baseurl >}}/os/v1.x/en/installation/workstation//docker-machine). ## Creating a GCE Image Archive @@ -50,7 +50,7 @@ RANCHEROS_VERSION=v1.4.0 make build-gce #### Reduce Memory Requirements -With changes to the kernel and built Docker, RancherOS booting requires more memory. For details, please refer to the [memory requirements]({{< baseurl >}}/os/v1.x/en/#hardware-requirements). +With changes to the kernel and built Docker, RancherOS booting requires more memory. For details, please refer to the [memory requirements]({{< baseurl >}}/os/v1.x/en/#hardware-requirements). By customizing the ISO, you can reduce the memory usage on boot. The easiest way is to downgrade the built-in Docker version, because Docker takes up a lot of space. This can effectively reduce the memory required to decompress the `initrd` on boot. Using docker 17.03 is a good choice: diff --git a/content/os/v1.x/en/installation/running-rancheros/_index.md b/content/os/v1.x/en/installation/running-rancheros/_index.md index c677f71c35e..17f070f3636 100644 --- a/content/os/v1.x/en/installation/running-rancheros/_index.md +++ b/content/os/v1.x/en/installation/running-rancheros/_index.md @@ -3,37 +3,37 @@ title: Running RancherOS weight: 100 --- -RancherOS runs on virtualization platforms, cloud providers and bare metal servers. We also support running a local VM on your laptop. To start running RancherOS as quickly as possible, follow our [Quick Start Guide]({{< baseurl >}}/os/v1.x/en/quick-start-guide/). +RancherOS runs on virtualization platforms, cloud providers and bare metal servers. We also support running a local VM on your laptop. To start running RancherOS as quickly as possible, follow our [Quick Start Guide]({{< baseurl >}}/os/v1.x/en/quick-start-guide/).
### Platforms #### Workstation -[Docker Machine]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/docker-machine) +[Docker Machine]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/docker-machine) -[Boot from ISO]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso) +[Boot from ISO]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso) #### Cloud -[Amazon EC2]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/aws) +[Amazon EC2]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/aws) -[Google Compute Engine]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/gce) +[Google Compute Engine]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/gce) -[DigitalOcean]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/do) +[DigitalOcean]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/do) -[Azure]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/azure) +[Azure]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/azure) -[OpenStack]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/openstack) +[OpenStack]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/openstack) -[VMware ESXi]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi) +[VMware ESXi]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi) -[Aliyun]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/aliyun) +[Aliyun]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/aliyun) #### Bare Metal & Virtual Servers -[PXE]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/pxe) +[PXE]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/pxe) -[Install to Hard Disk]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/install-to-disk) +[Install to Hard Disk]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/install-to-disk) -[Raspberry Pi]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/raspberry-pi) +[Raspberry Pi]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/raspberry-pi) diff --git a/content/os/v1.x/en/installation/running-rancheros/server/install-to-disk/_index.md b/content/os/v1.x/en/installation/server/install-to-disk/_index.md similarity index 93% rename from content/os/v1.x/en/installation/running-rancheros/server/install-to-disk/_index.md rename to content/os/v1.x/en/installation/server/install-to-disk/_index.md index e0deb1b54a4..35f1010a6a6 100644 --- a/content/os/v1.x/en/installation/running-rancheros/server/install-to-disk/_index.md +++ b/content/os/v1.x/en/installation/server/install-to-disk/_index.md @@ -1,9 +1,11 @@ --- title: Installing to Disk weight: 111 +aliases: + - /os/v1.x/en/installation/running-rancheros/server/install-to-disk --- -RancherOS comes with a simple installer that will install RancherOS on a given target disk. To install RancherOS on a new disk, you can use the `ros install` command. Before installing, you'll need to have already [booted RancherOS from ISO]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso). Please be sure to pick the `rancheros.iso` from our release [page](https://github.com/rancher/os/releases). +RancherOS comes with a simple installer that will install RancherOS on a given target disk. To install RancherOS on a new disk, you can use the `ros install` command.
Before installing, you'll need to have already [booted RancherOS from ISO]({{< baseurl >}}/os/v1.x/en/installation/workstation//boot-from-iso). Please be sure to pick the `rancheros.iso` from our release [page](https://github.com/rancher/os/releases). ### Using `ros install` to Install RancherOS @@ -11,7 +13,7 @@ The `ros install` command orchestrates the installation from the `rancher/os` co #### Cloud-Config -The easiest way to log in is to pass a `cloud-config.yml` file containing your public SSH keys. To learn more about what's supported in our cloud-config, please read our [documentation]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). +The easiest way to log in is to pass a `cloud-config.yml` file containing your public SSH keys. To learn more about what's supported in our cloud-config, please read our [documentation]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). The `ros install` command will process your `cloud-config.yml` file specified with the `-c` flag. This file will also be placed onto the disk and installed to `/var/lib/rancher/conf/`. It will be evaluated on every boot. @@ -61,7 +63,7 @@ Status: Downloaded newer image for rancher/os:v0.5.0 Continue with reboot [y/N]: ``` -After installing RancherOS to disk, you will no longer be automatically logged in as the `rancher` user. You'll need to have added in SSH keys within your [cloud-config file]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). +After installing RancherOS to disk, you will no longer be automatically logged in as the `rancher` user. You'll need to have added in SSH keys within your [cloud-config file]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). #### Installing a Different Version diff --git a/content/os/v1.x/en/installation/running-rancheros/server/pxe/_index.md b/content/os/v1.x/en/installation/server/pxe/_index.md similarity index 92% rename from content/os/v1.x/en/installation/running-rancheros/server/pxe/_index.md rename to content/os/v1.x/en/installation/server/pxe/_index.md index 4041c3cf2cf..c866a92c4e5 100644 --- a/content/os/v1.x/en/installation/running-rancheros/server/pxe/_index.md +++ b/content/os/v1.x/en/installation/server/pxe/_index.md @@ -1,6 +1,8 @@ --- title: iPXE weight: 112 +aliases: + - /os/v1.x/en/installation/running-rancheros/server/pxe --- ``` @@ -63,11 +65,11 @@ Valid cloud-init datasources for RancherOS. | cmdline | Kernel command line: `cloud-config-url=http://link/user_data` | | configdrive | /media/config-2 | | url | URL address | -| vmware| Set `guestinfo` cloud-init or interface data as per [VMware ESXi]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi) | +| vmware| Set `guestinfo` cloud-init or interface data as per [VMware ESXi]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi) | | * | This will add ["configdrive", "vmware", "ec2", "digitalocean", "packet", "gce"] into the list of datasources to try | The vmware datasource was added as of v1.1. ### Cloud-Config -When booting via iPXE, RancherOS can be configured using a [cloud-config file]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). +When booting via iPXE, RancherOS can be configured using a [cloud-config file]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). 
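To make the `cmdline` datasource from the iPXE table above concrete, a minimal iPXE script might look like the sketch below. The kernel, initrd, and user_data URLs are placeholder assumptions; the only detail taken from the docs is the `cloud-config-url=` kernel parameter.

```
#!ipxe
# placeholder URLs; only the cloud-config-url parameter comes from the datasource table above
kernel http://example.com/rancheros/vmlinuz cloud-config-url=http://example.com/user_data
initrd http://example.com/rancheros/initrd
boot
```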
diff --git a/content/os/v1.x/en/installation/running-rancheros/server/raspberry-pi/_index.md b/content/os/v1.x/en/installation/server/raspberry-pi/_index.md similarity index 92% rename from content/os/v1.x/en/installation/running-rancheros/server/raspberry-pi/_index.md rename to content/os/v1.x/en/installation/server/raspberry-pi/_index.md index 7ac84cf84bc..a540afe8f89 100644 --- a/content/os/v1.x/en/installation/running-rancheros/server/raspberry-pi/_index.md +++ b/content/os/v1.x/en/installation/server/raspberry-pi/_index.md @@ -1,11 +1,13 @@ --- title: Raspberry Pi weight: 113 +aliases: + - /os/v1.x/en/installation/running-rancheros/server/raspberry-pi --- As of v0.5.0, RancherOS releases include a Raspberry Pi image that can be found on our [releases page](https://github.com/rancher/os/releases). The official Raspberry Pi documentation contains instructions on how to [install operating system images](https://www.raspberrypi.org/documentation/installation/installing-images/). -When installing, there is no ability to pass in a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). You will need to boot up, change the configuration and then reboot to apply those changes. +When installing, there is no ability to pass in a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). You will need to boot up, change the configuration and then reboot to apply those changes. Currently, only Raspberry Pi 3 is tested and known to work. diff --git a/content/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/_index.md b/content/os/v1.x/en/installation/workstation/boot-from-iso/_index.md similarity index 65% rename from content/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/_index.md rename to content/os/v1.x/en/installation/workstation/boot-from-iso/_index.md index 6a1b52a6f03..28f3a8a7fc2 100644 --- a/content/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/_index.md +++ b/content/os/v1.x/en/installation/workstation/boot-from-iso/_index.md @@ -1,6 +1,8 @@ --- title: Booting from ISO weight: 102 +aliases: + - /os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso --- The RancherOS ISO file can be used to create a fresh RancherOS install on KVM, VMware, VirtualBox, Hyper-V, Proxmox VE, or bare metal servers. You can download the `rancheros.iso` file from our [releases page](https://github.com/rancher/os/releases/). @@ -13,8 +15,8 @@ VMware | [rancheros-vmware.iso](https://releases.rancher.com/os/latest/vmwar Hyper-V | [rancheros-hyperv.iso](https://releases.rancher.com/os/latest/hyperv/rancheros.iso) Proxmox VE | [rancheros-proxmoxve.iso](https://releases.rancher.com/os/latest/proxmoxve/rancheros.iso) -You must boot with enough memory which you can refer to [here]({{< baseurl >}}/os/v1.x/en/overview/#hardware-requirements). If you boot with the ISO, you will automatically be logged in as the `rancher` user. Only the ISO is set to use autologin by default. If you run from a cloud or install to disk, SSH keys or a password of your choice is expected to be used. +You must boot with enough memory which you can refer to [here]({{< baseurl >}}/os/v1.x/en/overview/#hardware-requirements). If you boot with the ISO, you will automatically be logged in as the `rancher` user. Only the ISO is set to use autologin by default. If you run from a cloud or install to disk, SSH keys or a password of your choice is expected to be used.
### Install to Disk -After you boot RancherOS from ISO, you can follow the instructions [here]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/install-to-disk/) to install RancherOS to a hard disk. +After you boot RancherOS from ISO, you can follow the instructions [here]({{< baseurl >}}/os/v1.x/en/installation/server/install-to-disk/) to install RancherOS to a hard disk. diff --git a/content/os/v1.x/en/installation/running-rancheros/workstation/docker-machine/_index.md b/content/os/v1.x/en/installation/workstation/docker-machine/_index.md similarity index 95% rename from content/os/v1.x/en/installation/running-rancheros/workstation/docker-machine/_index.md rename to content/os/v1.x/en/installation/workstation/docker-machine/_index.md index 0a21a3f7549..1595b668383 100644 --- a/content/os/v1.x/en/installation/running-rancheros/workstation/docker-machine/_index.md +++ b/content/os/v1.x/en/installation/workstation/docker-machine/_index.md @@ -1,10 +1,12 @@ --- title: Using Docker Machine weight: 101 +aliases: + - /os/v1.x/en/installation/running-rancheros/workstation/docker-machine --- Before we get started, you'll need to make sure that you have docker machine installed. Download it directly from the docker machine [releases](https://github.com/docker/machine/releases). -You also need to know the [memory requirements]({{< baseurl >}}/os/v1.x/en/#hardware-requirements). +You also need to know the [memory requirements]({{< baseurl >}}/os/v1.x/en/#hardware-requirements). > **Note:** If you create a RancherOS instance using Docker Machine, you will not be able to upgrade your version of RancherOS. @@ -116,7 +118,7 @@ Logging into RancherOS follows the standard Docker Machine commands. To login in $ docker-machine ssh ``` -You'll be logged into RancherOS and can start exploring the OS, This will log you into the RancherOS VM. You'll then be able to explore the OS by [adding system services]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/), [customizing the configuration]({{< baseurl >}}/os/v1.x/en/installation/configuration/), and launching containers. +You'll be logged into the RancherOS VM and can start exploring the OS. You'll then be able to explore the OS by [adding system services]({{< baseurl >}}/os/v1.x/en/system-services/), [customizing the configuration]({{< baseurl >}}/os/v1.x/en/configuration/), and launching containers. If you want to exit out of RancherOS, you can exit by pressing `Ctrl+D`. diff --git a/content/os/v1.x/en/installation/networking/dns/_index.md b/content/os/v1.x/en/networking/dns/_index.md similarity index 92% rename from content/os/v1.x/en/installation/networking/dns/_index.md rename to content/os/v1.x/en/networking/dns/_index.md index efbf740fa29..725a4f109fc 100644 --- a/content/os/v1.x/en/installation/networking/dns/_index.md +++ b/content/os/v1.x/en/networking/dns/_index.md @@ -1,6 +1,8 @@ --- title: Configuring DNS weight: 171 +aliases: + - /os/v1.x/en/installation/networking/dns --- If you wanted to configure the DNS through the cloud config file, you'll need to place DNS configurations within the `rancher` key.
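As an illustration of that `rancher` key, and building on the `rancher.network.dns.nameservers` setting that the quick start queries with `ros config get` later in this diff, a minimal DNS cloud-config might look like the following sketch (the nameserver addresses are placeholders):

```yaml
#cloud-config
rancher:
  network:
    dns:
      nameservers:
        - 8.8.8.8
        - 8.8.4.4
```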
diff --git a/content/os/v1.x/en/installation/networking/interfaces/_index.md b/content/os/v1.x/en/networking/interfaces/_index.md similarity index 99% rename from content/os/v1.x/en/installation/networking/interfaces/_index.md rename to content/os/v1.x/en/networking/interfaces/_index.md index f93384e4e53..cdbc82eaa70 100644 --- a/content/os/v1.x/en/installation/networking/interfaces/_index.md +++ b/content/os/v1.x/en/networking/interfaces/_index.md @@ -1,6 +1,8 @@ --- title: Configuring Network Interfaces weight: 170 +aliases: + - /os/v1.x/en/installation/networking/interfaces --- Using `ros config`, you can configure specific interfaces. Wildcard globbing is supported so `eth*` will match `eth1` and `eth2`. The available options you can configure are `address`, `gateway`, `mtu`, and `dhcp`. diff --git a/content/os/v1.x/en/installation/networking/proxy-settings/_index.md b/content/os/v1.x/en/networking/proxy-settings/_index.md similarity index 92% rename from content/os/v1.x/en/installation/networking/proxy-settings/_index.md rename to content/os/v1.x/en/networking/proxy-settings/_index.md index fccd1c14d01..09698194c9c 100644 --- a/content/os/v1.x/en/installation/networking/proxy-settings/_index.md +++ b/content/os/v1.x/en/networking/proxy-settings/_index.md @@ -1,6 +1,8 @@ --- title: Configuring Proxy Settings weight: 172 +aliases: + - /os/v1.x/en/installation/networking/proxy-settings --- HTTP proxy settings can be set directly under the `network` key. This will automatically configure proxy settings for both Docker and System Docker. diff --git a/content/os/v1.x/en/overview/_index.md b/content/os/v1.x/en/overview/_index.md index 264f130ef15..a2936d617c0 100644 --- a/content/os/v1.x/en/overview/_index.md +++ b/content/os/v1.x/en/overview/_index.md @@ -25,11 +25,11 @@ VMWare | 1GB | 1280MB (rancheros.iso)
2048MB (ran GCE | 1GB | 1280MB AWS | 1GB | 1.7GB -You can adjust memory requirements by custom building RancherOS, please refer to [reduce-memory-requirements]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/#reduce-memory-requirements) +You can adjust memory requirements by custom building RancherOS; please refer to [reduce-memory-requirements]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/#reduce-memory-requirements) ### How RancherOS Works -Everything in RancherOS is a Docker container. We accomplish this by launching two instances of Docker. One is what we call **System Docker** and is the first process on the system. All other system services, like `ntpd`, `syslog`, and `console`, are running in Docker containers. System Docker replaces traditional init systems like `systemd` and is used to launch [additional system services]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/). +Everything in RancherOS is a Docker container. We accomplish this by launching two instances of Docker. One is what we call **System Docker** and is the first process on the system. All other system services, like `ntpd`, `syslog`, and `console`, are running in Docker containers. System Docker replaces traditional init systems like `systemd` and is used to launch [additional system services]({{< baseurl >}}/os/v1.x/en/system-services/). System Docker runs a special container called **Docker**, which is another Docker daemon responsible for managing all of the user’s containers. Any containers that you launch as a user from the console will run inside this Docker. This creates isolation from the System Docker containers and ensures that normal user commands don’t impact system services. @@ -39,7 +39,7 @@ System Docker runs a special container called **Docker**, which is another Docke ### Running RancherOS -To get started with RancherOS, head over to our [Quick Start Guide]({{< baseurl >}}/os/v1.x/en/quick-start-guide/). +To get started with RancherOS, head over to our [Quick Start Guide]({{< baseurl >}}/os/v1.x/en/quick-start-guide/). ### Latest Release diff --git a/content/os/v1.x/en/quick-start-guide/_index.md b/content/os/v1.x/en/quick-start-guide/_index.md index 7e01e0fc0a3..5c1ee3a13d5 100644 --- a/content/os/v1.x/en/quick-start-guide/_index.md +++ b/content/os/v1.x/en/quick-start-guide/_index.md @@ -3,7 +3,7 @@ title: Quick Start weight: 1 --- -If you have a specific RanchersOS machine requirements, please check out our [guides on running RancherOS]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/). With the rest of this guide, we'll start up a RancherOS using [Docker machine]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/docker-machine/) and show you some of what RancherOS can do. +If you have specific RancherOS machine requirements, please check out our [guides on running RancherOS]({{< baseurl >}}/os/v1.x/en/installation/platform/). With the rest of this guide, we'll start up RancherOS using [Docker Machine]({{< baseurl >}}/os/v1.x/en/installation/workstation//docker-machine/) and show you some of what RancherOS can do. ### Launching RancherOS using Docker Machine @@ -120,7 +120,7 @@ $ sudo ros config get rancher.network.dns.nameservers ``` -When using the native Busybox console, any changes to the console will be lost after reboots, only changes to `/home` or `/opt` will be persistent.
You can use the `ros console switch` command to switch to a [persistent console]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-console/#console-persistence) and replace the native Busybox console. For example, to switch to the Ubuntu console: +When using the native Busybox console, any changes to the console will be lost after reboots, only changes to `/home` or `/opt` will be persistent. You can use the `ros console switch` command to switch to a [persistent console]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-console/#console-persistence) and replace the native Busybox console. For example, to switch to the Ubuntu console: ``` $ sudo ros console switch ubuntu diff --git a/content/os/v1.x/en/installation/storage/additional-mounts/_index.md b/content/os/v1.x/en/storage/additional-mounts/_index.md similarity index 71% rename from content/os/v1.x/en/installation/storage/additional-mounts/_index.md rename to content/os/v1.x/en/storage/additional-mounts/_index.md index e568596e3d1..a9b39af7f33 100644 --- a/content/os/v1.x/en/installation/storage/additional-mounts/_index.md +++ b/content/os/v1.x/en/storage/additional-mounts/_index.md @@ -1,9 +1,15 @@ --- title: Additional Mounts weight: 161 +aliases: + - /os/v1.x/en/installation/storage/additional-mounts --- -Additional mounts can be specified as part of your [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). These mounts are applied within the console container. Here's a simple example that mounts `/dev/vdb` to `/mnt/s`. +Additional mounts can be specified as part of your [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). These mounts are applied within the console container. Here's a simple example that mounts `/dev/vdb` to `/mnt/s`. ```yaml #cloud-config diff --git a/content/os/v1.x/en/installation/storage/state-partition/_index.md b/content/os/v1.x/en/storage/state-partition/_index.md similarity index 86% rename from content/os/v1.x/en/installation/storage/state-partition/_index.md rename to content/os/v1.x/en/storage/state-partition/_index.md index c16152c2771..f5ae065cd12 100644 --- a/content/os/v1.x/en/installation/storage/state-partition/_index.md +++ b/content/os/v1.x/en/storage/state-partition/_index.md @@ -1,6 +1,8 @@ --- title: Persistent State Partition weight: 160 +aliases: + - /os/v1.x/en/installation/storage/state-partition --- RancherOS will store its state in a single partition specified by the `dev` field. The field can be a device such as `/dev/sda1` or a logical name such `LABEL=state` or `UUID=123124`. The default value is `LABEL=RANCHER_STATE`. The file system type of that partition can be set to `auto` or a specific file system type such as `ext4`. @@ -13,7 +15,7 @@ rancher: dev: LABEL=RANCHER_STATE ``` -For other labels such as `RANCHER_BOOT` and `RANCHER_OEM` and `RANCHER_SWAP`, please refer to [Custom partition layout]({{< baseurl >}}/os/v1.x/en/about/custom-partition-layout/). +For other labels such as `RANCHER_BOOT` and `RANCHER_OEM` and `RANCHER_SWAP`, please refer to [Custom partition layout]({{< baseurl >}}/os/v1.x/en/about/custom-partition-layout/). ### Autoformat diff --git a/content/os/v1.x/en/installation/storage/using-zfs/_index.md b/content/os/v1.x/en/storage/using-zfs/_index.md similarity index 96% rename from content/os/v1.x/en/installation/storage/using-zfs/_index.md rename to content/os/v1.x/en/storage/using-zfs/_index.md index 494bf53f017..1247accff85 100644 --- a/content/os/v1.x/en/installation/storage/using-zfs/_index.md +++ b/content/os/v1.x/en/storage/using-zfs/_index.md @@ -1,6 +1,8 @@ --- title: Using ZFS weight: 162 +aliases: + - /os/v1.x/en/installation/storage/using-zfs --- #### Installing the ZFS service @@ -19,7 +21,7 @@ $ sudo ros service logs --follow zfs $ lsmod | grep zfs ``` -> *Note:* if you switch consoles, you may need to re-run `ros up zfs`. +> *Note:* if you switch consoles, you may need to re-run `sudo ros service up zfs`. #### Creating ZFS pools diff --git a/content/os/v1.x/en/installation/system-services/adding-system-services/_index.md b/content/os/v1.x/en/system-services/_index.md similarity index 95% rename from content/os/v1.x/en/installation/system-services/adding-system-services/_index.md rename to content/os/v1.x/en/system-services/_index.md index bbfc6c4470e..b3d0ebd6051 100644 --- a/content/os/v1.x/en/installation/system-services/adding-system-services/_index.md +++ b/content/os/v1.x/en/system-services/_index.md @@ -1,6 +1,8 @@ --- title: System Services weight: 140 +aliases: + - /os/v1.x/en/installation/system-services/adding-system-services --- A system service is a container that can be run in either System Docker or Docker. Rancher provides services that are already available in RancherOS by adding them to the [os-services repo](https://github.com/rancher/os-services). Anything in the `index.yml` file from the repository for the tagged release will be an available system service when using the `ros service list` command. diff --git a/content/os/v1.x/en/installation/system-services/custom-system-services/_index.md b/content/os/v1.x/en/system-services/custom-system-services/_index.md similarity index 96% rename from content/os/v1.x/en/installation/system-services/custom-system-services/_index.md rename to content/os/v1.x/en/system-services/custom-system-services/_index.md index ba63929e047..0fe56654018 100644 --- a/content/os/v1.x/en/installation/system-services/custom-system-services/_index.md +++ b/content/os/v1.x/en/system-services/custom-system-services/_index.md @@ -1,9 +1,11 @@ --- title: Custom System Services weight: 141 +aliases: + - /os/v1.x/en/installation/system-services/custom-system-services --- -You can also create your own system service in [Docker Compose](https://docs.docker.com/compose/) format. After creating your own custom service, you can launch it in RancherOS in a couple of methods. The service could be directly added to the [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config), or a `docker-compose.yml` file could be saved at a http(s) url location or in a specific directory of RancherOS. +You can also create your own system service in [Docker Compose](https://docs.docker.com/compose/) format. After creating your own custom service, you can launch it in RancherOS in a couple of methods.
The service could be directly added to the [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config), or a `docker-compose.yml` file could be saved at a http(s) url location or in a specific directory of RancherOS. ### Launching Services through Cloud-Config diff --git a/content/os/v1.x/en/installation/system-services/environment/_index.md b/content/os/v1.x/en/system-services/environment/_index.md similarity index 94% rename from content/os/v1.x/en/installation/system-services/environment/_index.md rename to content/os/v1.x/en/system-services/environment/_index.md index c3990e318a9..f2a5d07fccd 100644 --- a/content/os/v1.x/en/installation/system-services/environment/_index.md +++ b/content/os/v1.x/en/system-services/environment/_index.md @@ -1,6 +1,8 @@ --- title: Environment weight: 143 +aliases: + - /os/v1.x/en/installation/system-services/environment --- The [environment key](https://docs.docker.com/compose/compose-file/#environment) can be used to customize system services. When a value is not assigned, RancherOS looks up the value from the `rancher.environment` key. diff --git a/content/os/v1.x/en/installation/system-services/system-docker-volumes/_index.md b/content/os/v1.x/en/system-services/system-docker-volumes/_index.md similarity index 95% rename from content/os/v1.x/en/installation/system-services/system-docker-volumes/_index.md rename to content/os/v1.x/en/system-services/system-docker-volumes/_index.md index 8430640c436..1ec9fb1baab 100644 --- a/content/os/v1.x/en/installation/system-services/system-docker-volumes/_index.md +++ b/content/os/v1.x/en/system-services/system-docker-volumes/_index.md @@ -1,6 +1,8 @@ --- title: System Docker Volumes weight: 142 +aliases: + - /os/v1.x/en/installation/system-services/system-docker-volumes --- A few services are containers in `created` state. Their purpose is to provide volumes for other services. diff --git a/content/os/v1.x/en/upgrading/_index.md b/content/os/v1.x/en/upgrading/_index.md index beedfcdd821..a1de8d39291 100644 --- a/content/os/v1.x/en/upgrading/_index.md +++ b/content/os/v1.x/en/upgrading/_index.md @@ -9,7 +9,7 @@ Since RancherOS is a kernel and initrd, the upgrade process is downloading a new Before upgrading to any version, please review the release notes on our [releases page](https://github.com/rancher/os/releases) in GitHub to review any updates in the release. -> **Note:** If you are using [`docker-machine`]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/docker-machine/) then you will not be able to upgrade your RancherOS version. You need to delete and re-create the machine. +> **Note:** If you are using [`docker-machine`]({{< baseurl >}}/os/v1.x/en/installation/workstation//docker-machine/) then you will not be able to upgrade your RancherOS version. You need to delete and re-create the machine. ### Version Control @@ -64,7 +64,7 @@ $ sudo ros -v ros version v0.5.0 ``` -> **Note:** If you are booting from ISO and have not installed to disk, your upgrade will not be saved. You can view our guide to [installing to disk]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/install-to-disk/). +> **Note:** If you are booting from ISO and have not installed to disk, your upgrade will not be saved. You can view our guide to [installing to disk]({{< baseurl >}}/os/v1.x/en/installation/server/install-to-disk/). #### Upgrading to a Specific Version @@ -114,7 +114,7 @@ ros version 0.4.4
-> **Note:** If you are using a [persistent console]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-console/#console-persistence) and in the current version's console, rolling back is not supported. For example, rolling back to v0.4.5 when using a v0.5.0 persistent console is not supported. +> **Note:** If you are using a [persistent console]({{}}/os/v1.x/en/installation/custom-builds/custom-console/#console-persistence) and in the current version's console, rolling back is not supported. For example, rolling back to v0.4.5 when using a v0.5.0 persistent console is not supported. ### Staging an Upgrade diff --git a/content/rancher/v2.x/en/_index.md b/content/rancher/v2.x/en/_index.md index 1cdb421ebd0..9a7712bc041 100644 --- a/content/rancher/v2.x/en/_index.md +++ b/content/rancher/v2.x/en/_index.md @@ -8,13 +8,14 @@ insertOneSix: true weight: 1 ctaBanner: intro-k8s-rancher-online-training --- +Rancher was originally built to work with multiple orchestrators, and it included its own orchestrator called Cattle. With the rise of Kubernetes in the marketplace, Rancher 2.x exclusively deploys and manages Kubernetes clusters running anywhere, on any provider. -# What's New? +Rancher can provision Kubernetes from a hosted provider, provision compute nodes and then install Kubernetes onto them, or import existing Kubernetes clusters running anywhere. -Rancher was originally built to work with multiple orchestrators, and it included its own orchestrator called Cattle. With the rise of Kubernetes in the marketplace, Rancher now exclusively deploys and manages multiple Kubernetes clusters running anywhere, on any provider. It can provision Kubernetes from a hosted provider, provision compute nodes and then install Kubernetes onto them, or inherit existing Kubernetes clusters running anywhere. +One Rancher server installation can manage thousands of Kubernetes clusters and thousands of nodes from the same user interface. -One Rancher server installation can manage hundreds of Kubernetes clusters from the same interface. +Rancher adds significant value on top of Kubernetes, first by centralizing authentication and role-based access control (RBAC) for all of the clusters, giving global admins the ability to control cluster access from one location. -Rancher adds significant value on top of Kubernetes, first by centralizing role-based access control (RBAC) for all of the clusters and giving global admins the ability to control cluster access from one location. It then enables detailed monitoring and alerting for clusters and their resources, ships logs to external providers, and integrates directly with Helm via the Application Catalog. If you have an external CI/CD system, you can plug it into Rancher, but if you don't, Rancher even includes a pipeline engine to help you automatically deploy and upgrade workloads. +It then enables detailed monitoring and alerting for clusters and their resources, ships logs to external providers, and integrates directly with Helm via the Application Catalog. If you have an external CI/CD system, you can plug it into Rancher, but if you don't, Rancher even includes a pipeline engine to help you automatically deploy and upgrade workloads. -Rancher is a _complete_ container management platform for Kubernetes, giving you the tools to successfully run Kubernetes anywhere. +Rancher is a _complete_ container management platform for Kubernetes, giving you the tools to successfully run Kubernetes anywhere. 
\ No newline at end of file diff --git a/content/rancher/v2.x/en/admin-settings/_index.md b/content/rancher/v2.x/en/admin-settings/_index.md index e1dc6d52f2c..2242b4d3328 100644 --- a/content/rancher/v2.x/en/admin-settings/_index.md +++ b/content/rancher/v2.x/en/admin-settings/_index.md @@ -9,7 +9,7 @@ aliases: - /rancher/v2.x/en/admin-settings/log-in/ --- -After installation, the [system administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) should configure Rancher to configure authentication, authorization, security, default settings, security policies, drivers and global DNS entries. +After installation, the [system administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) should configure Rancher to configure authentication, authorization, security, default settings, security policies, drivers and global DNS entries. ## First Log In @@ -21,7 +21,7 @@ After you log into Rancher for the first time, Rancher will prompt you for a **R One of the key features that Rancher adds to Kubernetes is centralized user authentication. This feature allows to set up local users and/or connect to an external authentication provider. By connecting to an external authentication provider, you can leverage that provider's user and groups. -For more information how authentication works and how to configure each provider, see [Authentication]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/). +For more information how authentication works and how to configure each provider, see [Authentication]({{}}/rancher/v2.x/en/admin-settings/authentication/). ## Authorization @@ -33,13 +33,13 @@ For more information how authorization works and how to customize roles, see [Ro _Pod Security Policies_ (or PSPs) are objects that control security-sensitive aspects of pod specification, e.g. root privileges. If a pod does not meet the conditions specified in the PSP, Kubernetes will not allow it to start, and Rancher will display an error message. -For more information how to create and use PSPs, see [Pod Security Policies]({{< baseurl >}}/rancher/v2.x/en/admin-settings/pod-security-policies/). +For more information how to create and use PSPs, see [Pod Security Policies]({{}}/rancher/v2.x/en/admin-settings/pod-security-policies/). ## Provisioning Drivers -Drivers in Rancher allow you to manage which providers can be used to provision [hosted Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) or [nodes in an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) to allow Rancher to deploy and manage Kubernetes. +Drivers in Rancher allow you to manage which providers can be used to provision [hosted Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) or [nodes in an infrastructure provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) to allow Rancher to deploy and manage Kubernetes. -For more information, see [Provisioning Drivers]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/). +For more information, see [Provisioning Drivers]({{}}/rancher/v2.x/en/admin-settings/drivers/). ## Adding Kubernetes Versions into Rancher @@ -47,9 +47,9 @@ _Available as of v2.3.0_ With this feature, you can upgrade to the latest version of Kubernetes as soon as it is released, without upgrading Rancher. This feature allows you to easily upgrade Kubernetes patch versions (i.e. 
`v1.15.X`), but not intended to upgrade Kubernetes minor versions (i.e. `v1.X.0`) as Kubernetes tends to deprecate or add APIs between minor versions. -The information that Rancher uses to provision [RKE clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) is now located in the Rancher Kubernetes Metadata. For details on metadata configuration and how to change the Kubernetes version used for provisioning RKE clusters, see [Rancher Kubernetes Metadata.]({{}}/rancher/v2.x/en/admin-settings/k8s-metadata/) +The information that Rancher uses to provision [RKE clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) is now located in the Rancher Kubernetes Metadata. For details on metadata configuration and how to change the Kubernetes version used for provisioning RKE clusters, see [Rancher Kubernetes Metadata.]({{}}/rancher/v2.x/en/admin-settings/k8s-metadata/) -Rancher Kubernetes Metadata contains Kubernetes version information which Rancher uses to provision [RKE clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). +Rancher Kubernetes Metadata contains Kubernetes version information which Rancher uses to provision [RKE clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). For more information on how metadata works and how to configure metadata config, see [Rancher Kubernetes Metadata]({{}}/rancher/v2.x/en/admin-settings/k8s-metadata/). diff --git a/content/rancher/v2.x/en/admin-settings/authentication/ad/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/ad/_index.md index f74e1e8b0ce..6b72f6752c4 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/ad/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/ad/_index.md @@ -7,11 +7,11 @@ aliases: If your organization uses Microsoft Active Directory as central user repository, you can configure Rancher to communicate with an Active Directory server to authenticate users. This allows Rancher admins to control access to clusters and projects based on users and groups managed externally in the Active Directory, while allowing end-users to authenticate with their AD credentials when logging in to the Rancher UI. -Rancher uses LDAP to communicate with the Active Directory server. The authentication flow for Active Directory is therefore the same as for the [OpenLDAP authentication]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/openldap) integration. +Rancher uses LDAP to communicate with the Active Directory server. The authentication flow for Active Directory is therefore the same as for the [OpenLDAP authentication]({{}}/rancher/v2.x/en/admin-settings/authentication/openldap) integration. > **Note:** > -> Before you start, please familiarise yourself with the concepts of [External Authentication Configuration and Principal Users]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users). +> Before you start, please familiarise yourself with the concepts of [External Authentication Configuration and Principal Users]({{}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users). 
## Prerequisites @@ -196,4 +196,4 @@ In the same way, we can observe that the value in the **memberOf** attribute in ## Annex: Troubleshooting -If you are experiencing issues while testing the connection to the Active Directory server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpointing the problem cause. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{< baseurl >}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation. +If you are experiencing issues while testing the connection to the Active Directory server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpointing the problem cause. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation. diff --git a/content/rancher/v2.x/en/admin-settings/authentication/azure-ad/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/azure-ad/_index.md index b4879220c29..1400dfb6ce1 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/azure-ad/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/azure-ad/_index.md @@ -28,9 +28,8 @@ Configuring Rancher to allow your users to authenticate with their Azure AD acco - [1. Register Rancher with Azure](#1-register-rancher-with-azure) - [2. Create an Azure API Key](#2-create-an-azure-api-key) - [3. Set Required Permissions for Rancher](#3-set-required-permissions-for-rancher) -- [4. Add a Reply URL](#4-add-a-reply-url) -- [5. Copy Azure Application Data](#5-copy-azure-application-data) -- [6. Configure Azure AD in Rancher](#6-configure-azure-ad-in-rancher) +- [4. Copy Azure Application Data](#4-copy-azure-application-data) +- [5. Configure Azure AD in Rancher](#5-configure-azure-ad-in-rancher) @@ -42,41 +41,43 @@ Before enabling Azure AD within Rancher, you must register Rancher with Azure. 1. Use search to open the **App registrations** service. - ![Open App Registrations]({{< baseurl >}}/img/rancher/search-app-registrations.png) + ![Open App Registrations]({{}}/img/rancher/search-app-registrations.png) -1. Click **New application registration** and complete the **Create** form. +1. Click **New registrations** and complete the **Create** form. - ![New App Registration]({{< baseurl >}}/img/rancher/new-app-registration.png) + ![New App Registration]({{}}/img/rancher/new-app-registration.png) 1. Enter a **Name** (something like `Rancher`). - 1. From **Application type**, make sure that **Web app / API** is selected. + 1. From **Supported account types**, select "Accounts in this organizational directory only (AzureADTest only - Single tenant)" This corresponds to the legacy app registration options. - 1. In the **Sign-on URL** field, enter the URL of your Rancher Server. + 1. In the **Redirect URI** section, make sure **Web** is selected from the dropdown and enter the URL of your Rancher Server in the text box next to the dropdown. This Rancher server URL should be appended with the verification path: `/verify-auth-azure`. - 1. Click **Create**. + >**Tip:** You can find your personalized Azure reply URL in Rancher on the Azure AD Authentication page (Global View > Security Authentication > Azure AD). 
-### 2. Create an Azure API Key + 1. Click **Register**. -From the Azure portal, create an API key. Rancher will use this key to authenticate with Azure AD. +>**Note:** It can take up to five minutes for this change to take affect, so don't be alarmed if you can't authenticate immediately after Azure AD configuration. + +### 2. Create a new client secret + +From the Azure portal, create a client secret. Rancher will use this key to authenticate with Azure AD. 1. Use search to open **App registrations** services. Then open the entry for Rancher that you created in the last procedure. - ![Open Rancher Registration]({{< baseurl >}}/img/rancher/open-rancher-app.png) + ![Open Rancher Registration]({{}}/img/rancher/open-rancher-app.png) - **Step Result:** A new blade opens for Rancher. +1. From the navigation pane on left, click **Certificates and Secrets**. -1. Click **Settings**. +1. Click **New client secret**. -1. From the **Settings** blade, select **Keys**. + ![Create new client secret]({{< baseurl >}}/img/rancher/select-client-secret.png) -1. From **Passwords**, create an API key. + 1. Enter a **Description** (something like `Rancher`). - 1. Enter a **Key description** (something like `Rancher`). + 1. Select duration for the key from the options under **Expires**. This drop-down sets the expiration date for the key. Shorter durations are more secure, but require you to create a new key after expiration. - 1. Select a **Duration** for the key. This drop-down sets the expiration date for the key. Shorter durations are more secure, but require you to create a new key after expiration. - - 1. Click **Save** (you don't need to enter a value—it will automatically populate after you save). + 1. Click **Add** (you don't need to enter a value—it will automatically populate after you save).
1. Copy the key value and save it to an [empty text file](#tip). @@ -89,13 +90,16 @@ From the Azure portal, create an API key. Rancher will use this key to authentic Next, set API permissions for Rancher within Azure. -1. From the **Settings** blade, select **Required permissions**. +1. From the navigation pane on left, select **API permissions**. - ![Open Required Permissions]({{< baseurl >}}/img/rancher/select-required-permissions.png) + ![Open Required Permissions]({{}}/img/rancher/select-required-permissions.png) -1. Click **Windows Azure Active Directory**. +1. Click **Add a permission**. + +1. From the **Azure Active Directory Graph**, select the following **Delegated Permissions**: + + ![Select API Permissions]({{< baseurl >}}/img/rancher/select-required-permissions-2.png) -1. From the **Enable Access** blade, select the following **Delegated Permissions**:

- **Access the directory as the signed-in user** @@ -105,9 +109,9 @@ Next, set API permissions for Rancher within Azure. - **Read all users' basic profiles** - **Sign in and read user profile** -1. Click **Save**. +1. Click **Add permissions**. -1. From **Required permissions**, click **Grant permissions**. Then click **Yes**. +1. From **API permissions**, click **Grant admin consent**. Then click **Yes**. >**Note:** You must be signed in as an Azure administrator to successfully save your permission settings. @@ -119,7 +123,7 @@ To use Azure AD with Rancher you must whitelist Rancher with Azure. You can comp 1. From the **Setting** blade, select **Reply URLs**. - ![Azure: Enter Reply URL]({{< baseurl >}}/img/rancher/enter-azure-reply-url.png) + ![Azure: Enter Reply URL]({{}}/img/rancher/enter-azure-reply-url.png) 1. From the **Reply URLs** blade, enter the URL of your Rancher Server, appended with the verification path: `/verify-auth-azure`. @@ -139,9 +143,9 @@ As your final step in Azure, copy the data that you'll use to configure Rancher 1. Use search to open the **Azure Active Directory** service. - ![Open Azure Active Directory]({{< baseurl >}}/img/rancher/search-azure-ad.png) + ![Open Azure Active Directory]({{}}/img/rancher/search-azure-ad.png) - 1. From the **Azure Active Directory** menu, open **Properties**. + 1. From the left navigation pane, open **Overview**. 2. Copy the **Directory ID** and paste it into your [text file](#tip). @@ -151,7 +155,7 @@ As your final step in Azure, copy the data that you'll use to configure Rancher 1. Use search to open **App registrations**. - ![Open App Registrations]({{< baseurl >}}/img/rancher/search-app-registrations.png) + ![Open App Registrations]({{}}/img/rancher/search-app-registrations.png) 1. Find the entry you created for Rancher. @@ -161,7 +165,7 @@ As your final step in Azure, copy the data that you'll use to configure Rancher 1. From **App registrations**, click **Endpoints**. - ![Click Endpoints]({{< baseurl >}}/img/rancher/click-endpoints.png) + ![Click Endpoints]({{}}/img/rancher/click-endpoints.png) 2. Copy the following endpoints to your clipboard and paste them into your [text file](#tip) (these values will be your Rancher endpoint values). @@ -171,7 +175,7 @@ As your final step in Azure, copy the data that you'll use to configure Rancher >**Note:** Copy the v1 version of the endpoints -### 6. Configure Azure AD in Rancher +### 5. Configure Azure AD in Rancher From the Rancher UI, enter information about your AD instance hosted in Azure to complete configuration. diff --git a/content/rancher/v2.x/en/admin-settings/authentication/freeipa/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/freeipa/_index.md index 7158f26a6a8..37d8ba2e22b 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/freeipa/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/freeipa/_index.md @@ -13,7 +13,7 @@ If your organization uses FreeIPA for user authentication, you can configure Ran > >- You must have a [FreeIPA Server](https://www.freeipa.org/) configured. >- Create a service account in FreeIPA with `read-only` access. Rancher uses this account to verify group membership when a user makes a request using an API key. ->- Read [External Authentication Configuration and Principal Users]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users). 
+>- Read [External Authentication Configuration and Principal Users]({{}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users). 1. Sign into Rancher using a local user assigned the `administrator` role (i.e., the _local principal_). diff --git a/content/rancher/v2.x/en/admin-settings/authentication/github/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/github/_index.md index 55e505e26f3..9e2c4266c56 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/github/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/github/_index.md @@ -7,7 +7,7 @@ aliases: In environments using GitHub, you can configure Rancher to allow sign on using GitHub credentials. ->**Prerequisites:** Read [External Authentication Configuration and Principal Users]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users). +>**Prerequisites:** Read [External Authentication Configuration and Principal Users]({{}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users). 1. Sign into Rancher using a local user assigned the `administrator` role (i.e., the _local principal_). diff --git a/content/rancher/v2.x/en/admin-settings/authentication/keycloak/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/keycloak/_index.md index e7350e6c96d..197e796fb62 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/keycloak/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/keycloak/_index.md @@ -17,12 +17,13 @@ If your organization uses Keycloak Identity Provider (IdP) for user authenticati `Sign Documents` | `ON` 1 `Sign Assertions` | `ON` 1 All other `ON/OFF` Settings | `OFF` - `Client ID` | `https://yourRancherHostURL/v1-saml/keycloak/saml/metadata` + `Client ID` | `https://yourRancherHostURL/v1-saml/keycloak/saml/metadata`2 `Client Name` | (e.g. `rancher`) `Client Protocol` | `SAML` `Valid Redirect URI` | `https://yourRancherHostURL/v1-saml/keycloak/saml/acs` >1: Optionally, you can enable either one or both of these settings. + >2: Rancher SAML metadata won't be generated until a SAML provider is configured and saved. - Export a `metadata.xml` file from your Keycloak client: From the `Installation` tab, choose the `SAML Metadata IDPSSODescriptor` format option and download your file. @@ -64,7 +65,7 @@ If your organization uses Keycloak Identity Provider (IdP) for user authenticati ## Annex: Troubleshooting -If you are experiencing issues while testing the connection to the Keycloak server, first double-check the configuration option of your SAML client. You may also inspect the Rancher logs to help pinpointing the problem cause. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{< baseurl >}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation. +If you are experiencing issues while testing the connection to the Keycloak server, first double-check the configuration option of your SAML client. You may also inspect the Rancher logs to help pinpointing the problem cause. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation. 
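+As a quick first check, you can also request Rancher's Keycloak SAML metadata endpoint directly, for example with `curl`. This is only a sketch; `rancher.example.com` below is a placeholder for your own Rancher server hostname:
+
+```
+# Should return the SAML metadata XML once a SAML provider has been configured and saved;
+# an HTTP 502 here usually means no SAML provider has been saved yet.
+$ curl -k https://rancher.example.com/v1-saml/keycloak/saml/metadata
+```
+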
### You are not redirected to Keycloak @@ -81,6 +82,11 @@ You are correctly redirected to your IdP login page and you are able to enter yo * Check the Rancher debug log. * If the log displays `ERROR: either the Response or Assertion must be signed`, make sure either `Sign Documents` or `Sign assertions` is set to `ON` in your Keycloak client. +### HTTP 502 when trying to access /v1-saml/keycloak/saml/metadata + +This is usually due to the metadata not being created until a SAML provider is configured. +Try configuring and saving keycloak as your SAML provider and then accessing the metadata. + ### Keycloak Error: "We're sorry, failed to process response" * Check your Keycloak log. diff --git a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/_index.md index c79cf3e4087..6062bdb0288 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/_index.md @@ -27,10 +27,10 @@ If your organization uses Microsoft Active Directory Federation Services (AD FS) Setting up Microsoft AD FS with Rancher Server requires configuring AD FS on your Active Directory server, and configuring Rancher to utilize your AD FS server. The following pages serve as guides for setting up Microsoft AD FS authentication on your Rancher installation. -- [1 — Configuring Microsoft AD FS for Rancher]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup) -- [2 — Configuring Rancher for Microsoft AD FS]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup) +- [1 — Configuring Microsoft AD FS for Rancher]({{}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup) +- [2 — Configuring Rancher for Microsoft AD FS]({{}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup) {{< saml_caveats >}} -### [Next: Configuring Microsoft AD FS for Rancher]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup) +### [Next: Configuring Microsoft AD FS for Rancher]({{}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup) diff --git a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/_index.md index 822a991e3e9..152834ec60c 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/_index.md @@ -79,4 +79,4 @@ https:///federationmetadata/2007-06/federationmetadata.xml **Result:** You've added Rancher as a relying trust party. Now you can configure Rancher to leverage AD. 
-### [Next: Configuring Rancher for Microsoft AD FS]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/) +### [Next: Configuring Rancher for Microsoft AD FS]({{}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/) diff --git a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md index f5ba2a38b0e..d87510c66dd 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md @@ -4,7 +4,7 @@ weight: 1205 --- _Available as of v2.0.7_ -After you complete [Configuring Microsoft AD FS for Rancher]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/), enter your AD FS information into Rancher to allow AD FS users to authenticate with Rancher. +After you complete [Configuring Microsoft AD FS for Rancher]({{}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/), enter your AD FS information into Rancher to allow AD FS users to authenticate with Rancher. >**Important Notes For Configuring Your AD FS Server:** > diff --git a/content/rancher/v2.x/en/admin-settings/authentication/openldap/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/openldap/_index.md index bce05911aac..401d0259229 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/openldap/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/openldap/_index.md @@ -8,17 +8,6 @@ aliases: _Available as of v2.0.5_ If your organization uses LDAP for user authentication, you can configure Rancher to communicate with an OpenLDAP server to authenticate users. This allows Rancher admins to control access to clusters and projects based on users and groups managed externally in the organisation's central user repository, while allowing end-users to authenticate with their LDAP credentials when logging in to the Rancher UI. - -## OpenLDAP Authentication Flow - -1. When a user attempts to login with his LDAP credentials, Rancher creates an initial bind to the LDAP server using a service account with permissions to search the directory and read user/group attributes. -2. Rancher then searches the directory for the user by using a search filter based on the provided username and configured attribute mappings. -3. Once the user has been found, he is authenticated with another LDAP bind request using the user's DN and provided password. -4. Once authentication succeeded, Rancher then resolves the group memberships both from the membership attribute in the user's object and by performing a group search based on the configured user mapping attribute. - -> **Note:** -> -> Before you proceed with the configuration, please familiarise yourself with the concepts of [External Authentication Configuration and Principal Users]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users). ## Prerequisites @@ -28,81 +17,16 @@ Rancher must be configured with a LDAP bind account (aka service account) to sea > > If the certificate used by the OpenLDAP server is self-signed or not from a recognised certificate authority, make sure have at hand the CA certificate (concatenated with any intermediate certificates) in PEM format. 
You will have to paste in this certificate during the configuration so that Rancher is able to validate the certificate chain. -## Configuration Steps -### Open OpenLDAP Configuration +## Configure OpenLDAP in Rancher + +Configure the settings for the OpenLDAP server, groups and users. For help filling out each field, refer to the [configuration reference.](../openldap-config) + +> Before you proceed with the configuration, please familiarise yourself with the concepts of [External Authentication Configuration and Principal Users]({{}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users). 1. Log into the Rancher UI using the initial local `admin` account. 2. From the **Global** view, navigate to **Security** > **Authentication** 3. Select **OpenLDAP**. The **Configure an OpenLDAP server** form will be displayed. -### Configure OpenLDAP Server Settings - -In the section titled `1. Configure an OpenLDAP server`, complete the fields with the information specific to your server. Please refer to the following table for detailed information on the required values for each parameter. - -> **Note:** -> -> If you are in doubt about the correct values to enter in the user/group Search Base configuration fields, consult your LDAP administrator or refer to the section [Identify Search Base and Schema using ldapsearch]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/ad/#annex-identify-search-base-and-schema-using-ldapsearch) in the Active Directory authentication documentation. - -**Table 1: OpenLDAP server parameters** - -| Parameter | Description | -|:--|:--| -| Hostname | Specify the hostname or IP address of the OpenLDAP server | -| Port | Specify the port at which the OpenLDAP server is listening for connections. Unencrypted LDAP normally uses the standard port of 389, while LDAPS uses port 636.| -| TLS | Check this box to enable LDAP over SSL/TLS (commonly known as LDAPS). You will also need to paste in the CA certificate if the server uses a self-signed/enterprise-signed certificate. | -| Server Connection Timeout | The duration in number of seconds that Rancher waits before considering the server unreachable. | -| Service Account Distinguished Name | Enter the Distinguished Name (DN) of the user that should be used to bind, search and retrieve LDAP entries. (see [Prerequisites](#prerequisites)). | -| Service Account Password | The password for the service account. | -| User Search Base | Enter the Distinguished Name of the node in your directory tree from which to start searching for user objects. All users must be descendents of this base DN. For example: "ou=people,dc=acme,dc=com".| -| Group Search Base | If your groups live under a different node than the one configured under `User Search Base` you will need to provide the Distinguished Name here. Otherwise leave this field empty. For example: "ou=groups,dc=acme,dc=com".| - ---- - -### Configure User/Group Schema - -If your OpenLDAP directory deviates from the standard OpenLDAP schema, you must complete the **Customize Schema** section to match it. -Note that the attribute mappings configured in this section are used by Rancher to construct search filters and resolve group membership. It is therefore always recommended to verify that the configuration here matches the schema used in your OpenLDAP. 
- -> **Note:** -> -> If you are unfamiliar with the user/group schema used in the OpenLDAP server, consult your LDAP administrator or refer to the section [Identify Search Base and Schema using ldapsearch]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/ad/#annex-identify-search-base-and-schema-using-ldapsearch) in the Active Directory authentication documentation. - -#### User Schema - -The table below details the parameters for the user schema configuration. - -**Table 2: User schema configuration parameters** - -| Parameter | Description | -|:--|:--| -| Object Class | The name of the object class used for user objects in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) | -| Username Attribute | The user attribute whose value is suitable as a display name. | -| Login Attribute | The attribute whose value matches the username part of credentials entered by your users when logging in to Rancher. This is typically `uid`. | -| User Member Attribute | The user attribute containing the Distinguished Name of groups a user is member of. Usually this is one of `memberOf` or `isMemberOf`. | -| Search Attribute | When a user enters text to add users or groups in the UI, Rancher queries the LDAP server and attempts to match users by the attributes provided in this setting. Multiple attributes can be specified by separating them with the pipe ("\|") symbol. | -| User Enabled Attribute | If the schema of your OpenLDAP server supports a user attribute whose value can be evaluated to determine if the account is disabled or locked, enter the name of that attribute. The default OpenLDAP schema does not support this and the field should usually be left empty. | -| Disabled Status Bitmask | This is the value for a disabled/locked user account. The parameter is ignored if `User Enabled Attribute` is empty. | - ---- - -#### Group Schema - -The table below details the parameters for the group schema configuration. - -**Table 3: Group schema configuration parameters** - -| Parameter | Description | -|:--|:--| -| Object Class | The name of the object class used for group entries in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) | -| Name Attribute | The group attribute whose value is suitable for a display name. | -| Group Member User Attribute | The name of the **user attribute** whose format matches the group members in the `Group Member Mapping Attribute`. | -| Group Member Mapping Attribute | The name of the group attribute containing the members of a group. | -| Search Attribute | Attribute used to construct search filters when adding groups to clusters or projects in the UI. See description of user schema `Search Attribute`. | -| Group DN Attribute | The name of the group attribute whose format matches the values in the user's group membership attribute. See `User Member Attribute`. | -| Nested Group Membership | This settings defines whether Rancher should resolve nested group memberships. Use only if your organisation makes use of these nested memberships (ie. you have groups that contain other groups as members). | - ---- - ### Test Authentication Once you have completed the configuration, proceed by testing the connection to the OpenLDAP server. Authentication with OpenLDAP will be enabled implicitly if the test is successful. 
@@ -125,4 +49,4 @@ Once you have completed the configuration, proceed by testing the connection to ## Annex: Troubleshooting -If you are experiencing issues while testing the connection to the OpenLDAP server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpointing the problem cause. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{< baseurl >}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation. +If you are experiencing issues while testing the connection to the OpenLDAP server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpointing the problem cause. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation. diff --git a/content/rancher/v2.x/en/admin-settings/authentication/openldap/openldap-config/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/openldap/openldap-config/_index.md new file mode 100644 index 00000000000..addd6773a60 --- /dev/null +++ b/content/rancher/v2.x/en/admin-settings/authentication/openldap/openldap-config/_index.md @@ -0,0 +1,86 @@ +--- +title: OpenLDAP Configuration Reference +weight: 2 +--- + +This section is intended to be used as a reference when setting up an OpenLDAP authentication provider in Rancher. + +For further details on configuring OpenLDAP, refer to the [official documentation.](https://www.openldap.org/doc/) + +> Before you proceed with the configuration, please familiarize yourself with the concepts of [External Authentication Configuration and Principal Users]({{}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users). + +- [Background: OpenLDAP Authentication Flow](#background-openldap-authentication-flow) +- [OpenLDAP server configuration](#openldap-server-configuration) +- [User/group schema configuration](#user-group-schema-configuration) + - [User schema configuration](#user-schema-configuration) + - [Group schema configuration](#group-schema-configuration) + +## Background: OpenLDAP Authentication Flow + +1. When a user attempts to login with his LDAP credentials, Rancher creates an initial bind to the LDAP server using a service account with permissions to search the directory and read user/group attributes. +2. Rancher then searches the directory for the user by using a search filter based on the provided username and configured attribute mappings. +3. Once the user has been found, he is authenticated with another LDAP bind request using the user's DN and provided password. +4. Once authentication succeeded, Rancher then resolves the group memberships both from the membership attribute in the user's object and by performing a group search based on the configured user mapping attribute. + +# OpenLDAP Server Configuration + +You will need to enter the address, port, and protocol to connect to your OpenLDAP server. `389` is the standard port for insecure traffic, `636` for TLS traffic. 
+
+> **Using TLS?**
+>
+> If the certificate used by the OpenLDAP server is self-signed or not from a recognized certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) in PEM format at hand. You will have to paste in this certificate during the configuration so that Rancher is able to validate the certificate chain.
+
+If you are in doubt about the correct values to enter in the user/group Search Base configuration fields, consult your LDAP administrator or refer to the section [Identify Search Base and Schema using ldapsearch]({{}}/rancher/v2.x/en/admin-settings/authentication/ad/#annex-identify-search-base-and-schema-using-ldapsearch) in the Active Directory authentication documentation.
+
+
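+If you want to sanity-check the service account credentials and the User Search Base before completing the form, you can run a query with the standard OpenLDAP command-line tools. This is only a sketch; the host, bind DN, and search base below are made-up example values that you would replace with your own:
+
+```
+# Bind as the service account (-D; -W prompts for its password) and list user entries
+# under the User Search Base (-b). Adjust the filter to match your user object class.
+$ ldapsearch -x -H ldap://ldap.example.com:389 \
+    -D "cn=rancher-bind,dc=example,dc=com" -W \
+    -b "ou=people,dc=example,dc=com" "(objectClass=inetOrgPerson)" uid cn
+```
+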
**OpenLDAP Server Parameters**
+ +| Parameter | Description | +|:--|:--| +| Hostname | Specify the hostname or IP address of the OpenLDAP server | +| Port | Specify the port at which the OpenLDAP server is listening for connections. Unencrypted LDAP normally uses the standard port of 389, while LDAPS uses port 636.| +| TLS | Check this box to enable LDAP over SSL/TLS (commonly known as LDAPS). You will also need to paste in the CA certificate if the server uses a self-signed/enterprise-signed certificate. | +| Server Connection Timeout | The duration in number of seconds that Rancher waits before considering the server unreachable. | +| Service Account Distinguished Name | Enter the Distinguished Name (DN) of the user that should be used to bind, search and retrieve LDAP entries. (see [Prerequisites](#prerequisites)). | +| Service Account Password | The password for the service account. | +| User Search Base | Enter the Distinguished Name of the node in your directory tree from which to start searching for user objects. All users must be descendents of this base DN. For example: "ou=people,dc=acme,dc=com".| +| Group Search Base | If your groups live under a different node than the one configured under `User Search Base` you will need to provide the Distinguished Name here. Otherwise leave this field empty. For example: "ou=groups,dc=acme,dc=com".| + +# User/Group Schema Configuration + +If your OpenLDAP directory deviates from the standard OpenLDAP schema, you must complete the **Customize Schema** section to match it. + +Note that the attribute mappings configured in this section are used by Rancher to construct search filters and resolve group membership. It is therefore always recommended to verify that the configuration here matches the schema used in your OpenLDAP. + +If you are unfamiliar with the user/group schema used in the OpenLDAP server, consult your LDAP administrator or refer to the section [Identify Search Base and Schema using ldapsearch]({{}}/rancher/v2.x/en/admin-settings/authentication/ad/#annex-identify-search-base-and-schema-using-ldapsearch) in the Active Directory authentication documentation. + +### User Schema Configuration + +The table below details the parameters for the user schema configuration. + +
**User Schema Configuration Parameters**
+ +| Parameter | Description | +|:--|:--| +| Object Class | The name of the object class used for user objects in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) | +| Username Attribute | The user attribute whose value is suitable as a display name. | +| Login Attribute | The attribute whose value matches the username part of credentials entered by your users when logging in to Rancher. This is typically `uid`. | +| User Member Attribute | The user attribute containing the Distinguished Name of groups a user is member of. Usually this is one of `memberOf` or `isMemberOf`. | +| Search Attribute | When a user enters text to add users or groups in the UI, Rancher queries the LDAP server and attempts to match users by the attributes provided in this setting. Multiple attributes can be specified by separating them with the pipe ("\|") symbol. | +| User Enabled Attribute | If the schema of your OpenLDAP server supports a user attribute whose value can be evaluated to determine if the account is disabled or locked, enter the name of that attribute. The default OpenLDAP schema does not support this and the field should usually be left empty. | +| Disabled Status Bitmask | This is the value for a disabled/locked user account. The parameter is ignored if `User Enabled Attribute` is empty. | + +### Group Schema Configuration + +The table below details the parameters for the group schema configuration. + +
**Group Schema Configuration Parameters**
+ +| Parameter | Description | +|:--|:--| +| Object Class | The name of the object class used for group entries in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) | +| Name Attribute | The group attribute whose value is suitable for a display name. | +| Group Member User Attribute | The name of the **user attribute** whose format matches the group members in the `Group Member Mapping Attribute`. | +| Group Member Mapping Attribute | The name of the group attribute containing the members of a group. | +| Search Attribute | Attribute used to construct search filters when adding groups to clusters or projects in the UI. See description of user schema `Search Attribute`. | +| Group DN Attribute | The name of the group attribute whose format matches the values in the user's group membership attribute. See `User Member Attribute`. | +| Nested Group Membership | This settings defines whether Rancher should resolve nested group memberships. Use only if your organization makes use of these nested memberships (ie. you have groups that contain other groups as members). This option is disabled if you are using Shibboleth. | \ No newline at end of file diff --git a/content/rancher/v2.x/en/admin-settings/authentication/shibboleth/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/shibboleth/_index.md new file mode 100644 index 00000000000..4e2c2001dbf --- /dev/null +++ b/content/rancher/v2.x/en/admin-settings/authentication/shibboleth/_index.md @@ -0,0 +1,109 @@ +--- +title: Configuring Shibboleth (SAML) +weight: 1210 +--- + +_Available as of v2.4.0_ + +If your organization uses Shibboleth Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in to Rancher using their Shibboleth credentials. + +In this configuration, when Rancher users log in, they will be redirected to the Shibboleth IdP to enter their credentials. After authentication, they will be redirected back to the Rancher UI. + +If you also configure OpenLDAP as the back end to Shibboleth, it will return a SAML assertion to Rancher with user attributes that include groups. Then the authenticated user will be able to access resources in Rancher that their groups have permissions for. + +> The instructions in this section assume that you understand how Rancher, Shibboleth, and OpenLDAP work together. For a more detailed explanation of how it works, refer to [this page.](./about) + +This section covers the following topics: + +- [Setting up Shibboleth in Rancher](#setting-up-shibboleth-in-rancher) + - [Shibboleth Prerequisites](#shibboleth-prerequisites) + - [Configure Shibboleth in Rancher](#configure-shibboleth-in-rancher) + - [SAML Provider Caveats](#saml-provider-caveats) +- [Setting up OpenLDAP in Rancher](#setting-up-openldap-in-rancher) + - [OpenLDAP Prerequisites](#openldap-prerequisites) + - [Configure OpenLDAP in Rancher](#configure-openldap-in-rancher) + - [Troubleshooting](#troubleshooting) + +# Setting up Shibboleth in Rancher + +### Shibboleth Prerequisites +> +>- You must have a Shibboleth IdP Server configured. +>- Following are the Rancher Service Provider URLs needed for configuration: +Metadata URL: `https:///v1-saml/shibboleth/saml/metadata` +Assertion Consumer Service (ACS) URL: `https:///v1-saml/shibboleth/saml/acs` +>- Export a `metadata.xml` file from your IdP Server. 
For more information, see the [Shibboleth documentation.](https://wiki.shibboleth.net/confluence/display/SP3/Home) + +### Configure Shibboleth in Rancher +If your organization uses Shibboleth for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials. + +1. From the **Global** view, select **Security > Authentication** from the main menu. + +1. Select **Shibboleth**. + +1. Complete the **Configure Shibboleth Account** form. Shibboleth IdP lets you specify what data store you want to use. You can either add a database or use an existing ldap server. For example, if you select your Active Directory (AD) server, the examples below describe how you can map AD attributes to fields within Rancher. + + 1. **Display Name Field**: Enter the AD attribute that contains the display name of users (example: `displayName`). + + 1. **User Name Field**: Enter the AD attribute that contains the user name/given name (example: `givenName`). + + 1. **UID Field**: Enter an AD attribute that is unique to every user (example: `sAMAccountName`, `distinguishedName`). + + 1. **Groups Field**: Make entries for managing group memberships (example: `memberOf`). + + 1. **Rancher API Host**: Enter the URL for your Rancher Server. + + 1. **Private Key** and **Certificate**: This is a key-certificate pair to create a secure shell between Rancher and your IdP. + + You can generate one using an openssl command. For example: + + ``` + openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com" + ``` + 1. **IDP-metadata**: The `metadata.xml` file that you exported from your IdP server. + + +1. After you complete the **Configure Shibboleth Account** form, click **Authenticate with Shibboleth**, which is at the bottom of the page. + + Rancher redirects you to the IdP login page. Enter credentials that authenticate with Shibboleth IdP to validate your Rancher Shibboleth configuration. + + >**Note:** You may have to disable your popup blocker to see the IdP login page. + +**Result:** Rancher is configured to work with Shibboleth. Your users can now sign into Rancher using their Shibboleth logins. + +### SAML Provider Caveats + +If you configure Shibboleth without OpenLDAP, the following caveats apply due to the fact that SAML Protocol does not support search or lookup for users or groups. + +- There is no validation on users or groups when assigning permissions to them in Rancher. +- When adding users, the exact user IDs (i.e. UID Field) must be entered correctly. As you type the user ID, there will be no search for other user IDs that may match. +- When adding groups, you must select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user. +- The group drop-down shows only the groups that you are a member of. You will not be able to add groups that you are not a member of. + +To enable searching for groups when assigning permissions in Rancher, you will need to configure a back end for the SAML provider that supports groups, such as OpenLDAP. + +# Setting up OpenLDAP in Rancher + +If you also configure OpenLDAP as the back end to Shibboleth, it will return a SAML assertion to Rancher with user attributes that include groups. Then authenticated users will be able to access resources in Rancher that their groups have permissions for. 
+
+### OpenLDAP Prerequisites
+
+Rancher must be configured with an LDAP bind account (also known as a service account) to search and retrieve LDAP entries pertaining to users and groups that should have access. It is recommended not to use an administrator or personal account for this purpose; instead, create a dedicated account in OpenLDAP with read-only access to users and groups under the configured search base (see below).
+
+> **Using TLS?**
+>
+> If the certificate used by the OpenLDAP server is self-signed or not from a recognized certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) in PEM format at hand. You will have to paste in this certificate during the configuration so that Rancher is able to validate the certificate chain.
+
+### Configure OpenLDAP in Rancher
+
+Configure the settings for the OpenLDAP server, groups, and users. For help filling out each field, refer to the [configuration reference.]({{}}/rancher/v2.x/en/admin-settings/authentication/openldap/openldap-config) Note that nested group membership is not available for Shibboleth.
+
+> Before you proceed with the configuration, please familiarize yourself with the concepts of [External Authentication Configuration and Principal Users]({{}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
+
+1. Log into the Rancher UI using the initial local `admin` account.
+2. From the **Global** view, navigate to **Security** > **Authentication**.
+3. Select **OpenLDAP**. The **Configure an OpenLDAP server** form will be displayed.
+
+### Troubleshooting
+
+If you are experiencing issues while testing the connection to the OpenLDAP server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation.
diff --git a/content/rancher/v2.x/en/admin-settings/authentication/shibboleth/about/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/shibboleth/about/_index.md
new file mode 100644
index 00000000000..6a057b2104a
--- /dev/null
+++ b/content/rancher/v2.x/en/admin-settings/authentication/shibboleth/about/_index.md
@@ -0,0 +1,34 @@
+---
+title: Group Permissions with Shibboleth and OpenLDAP
+weight: 1
+---
+
+_Available as of Rancher v2.4_
+
+This page provides background information and context for Rancher users who intend to set up the Shibboleth authentication provider in Rancher.
+
+Because Shibboleth is a SAML provider, it does not support searching for groups. While a Shibboleth integration can validate user credentials, it can't be used to assign permissions to groups in Rancher without additional configuration.
+
+One solution to this problem is to configure an OpenLDAP identity provider. With an OpenLDAP back end for Shibboleth, you will be able to search for groups in Rancher and assign them to resources such as clusters, projects, or namespaces from the Rancher UI.
+
+### Terminology
+
+- **Shibboleth** is a single sign-on log-in system for computer networks and the Internet. It allows people to sign in using just one identity to various systems. It validates user credentials, but does not, on its own, handle group memberships.
+- **SAML:** Security Assertion Markup Language, an open standard for exchanging authentication and authorization data between an identity provider and a service provider.
+- **OpenLDAP:** a free, open-source implementation of the Lightweight Directory Access Protocol (LDAP). It is used to manage an organization’s computers and users. OpenLDAP is useful for Rancher users because it supports groups. In Rancher, it is possible to assign permissions to groups so that they can access resources such as clusters, projects, or namespaces, as long as the groups already exist in the identity provider.
+- **IdP or IDP:** An identity provider. OpenLDAP is an example of an identity provider.
+
+### Adding OpenLDAP Group Permissions to Rancher Resources
+
+The diagram below illustrates how members of an OpenLDAP group can access resources in Rancher that the group has permissions for.
+
+For example, a cluster owner could add an OpenLDAP group to a cluster so that they have permission to view most cluster-level resources and create new projects. Then the OpenLDAP group members will have access to the cluster as soon as they log in to Rancher.
+
+In this scenario, OpenLDAP allows the cluster owner to search for groups when assigning permissions. Without OpenLDAP, the functionality to search for groups would not be supported.
+
+When a member of the OpenLDAP group logs in to Rancher, she is redirected to Shibboleth and enters her username and password.
+
+Shibboleth validates her credentials, and retrieves user attributes from OpenLDAP, including groups. Then Shibboleth sends a SAML assertion to Rancher including the user attributes. Rancher uses the group data so that she can access all of the resources that her groups have permissions for.
+
+![Adding OpenLDAP Group Permissions to Rancher Resources]({{}}/img/rancher/shibboleth-with-openldap-groups.svg)
+
\ No newline at end of file
diff --git a/content/rancher/v2.x/en/admin-settings/authentication/user-groups/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/user-groups/_index.md
index 722452e5f63..d88eb423f82 100644
--- a/content/rancher/v2.x/en/admin-settings/authentication/user-groups/_index.md
+++ b/content/rancher/v2.x/en/admin-settings/authentication/user-groups/_index.md
@@ -5,11 +5,11 @@ weight: 1
Rancher relies on users and groups to determine who is allowed to log in to Rancher and which resources they can access. When you configure an external authentication provider, users from that provider will be able to log in to your Rancher server. When a user logs in, the authentication provider will supply your Rancher server with a list of groups to which the user belongs.
-Access to clusters, projects, multi-cluster apps, and global DNS providers and entries can be controlled by adding either individual users or groups to these resources. When you add a group to a resource, all users who are members of that group in the authentication provider, will be able to access the resource with the permissions that you've specified for the group. For more information on roles and permissions, see [Role Based Access Control]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/).
+Access to clusters, projects, multi-cluster apps, and global DNS providers and entries can be controlled by adding either individual users or groups to these resources.
When you add a group to a resource, all users who are members of that group in the authentication provider, will be able to access the resource with the permissions that you've specified for the group. For more information on roles and permissions, see [Role Based Access Control]({{}}/rancher/v2.x/en/admin-settings/rbac/). ## Managing Members -When adding a user or group to a resource, you can search for users or groups by beginning to type their name. The Rancher server will query the authentication provider to find users and groups that match what you've entered. Searching is limited to the authentication provider that you are currently logged in with. For example, if you've enabled GitHub authentication but are logged in using a [local]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/local/) user account, you will not be able to search for GitHub users or groups. +When adding a user or group to a resource, you can search for users or groups by beginning to type their name. The Rancher server will query the authentication provider to find users and groups that match what you've entered. Searching is limited to the authentication provider that you are currently logged in with. For example, if you've enabled GitHub authentication but are logged in using a [local]({{}}/rancher/v2.x/en/admin-settings/authentication/local/) user account, you will not be able to search for GitHub users or groups. All users, whether they are local users or from an authentication provider, can be viewed and managed. From the **Global** view, click on **Users**. diff --git a/content/rancher/v2.x/en/admin-settings/drivers/_index.md b/content/rancher/v2.x/en/admin-settings/drivers/_index.md index 63d202b1fad..11cc9d71582 100644 --- a/content/rancher/v2.x/en/admin-settings/drivers/_index.md +++ b/content/rancher/v2.x/en/admin-settings/drivers/_index.md @@ -3,7 +3,7 @@ title: Provisioning Drivers weight: 1140 --- -Drivers in Rancher allow you to manage which providers can be used to deploy [hosted Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) or [nodes in an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) to allow Rancher to deploy and manage Kubernetes. +Drivers in Rancher allow you to manage which providers can be used to deploy [hosted Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) or [nodes in an infrastructure provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) to allow Rancher to deploy and manage Kubernetes. ### Rancher Drivers @@ -18,19 +18,19 @@ There are two types of drivers within Rancher: _Available as of v2.2.0_ -Cluster drivers are used to provision [hosted Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/), such as GKE, EKS, AKS, etc.. The availability of which cluster driver to display when creating a cluster is defined based on the cluster driver's status. Only `active` cluster drivers will be displayed as an option for creating clusters for hosted Kubernetes clusters. By default, Rancher is packaged with several existing cluster drivers, but you can also create custom cluster drivers to add to Rancher. +Cluster drivers are used to provision [hosted Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/), such as GKE, EKS, AKS, etc.. 
The availability of which cluster driver to display when creating a cluster is defined based on the cluster driver's status. Only `active` cluster drivers will be displayed as an option for creating clusters for hosted Kubernetes clusters. By default, Rancher is packaged with several existing cluster drivers, but you can also create custom cluster drivers to add to Rancher. By default, Rancher has activated several hosted Kubernetes cloud providers including: -* [Amazon EKS]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/) -* [Google GKE]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/) -* [Azure AKS]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/aks/) +* [Amazon EKS]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/) +* [Google GKE]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/) +* [Azure AKS]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/aks/) There are several other hosted Kubernetes cloud providers that are disabled by default, but are packaged in Rancher: -* [Alibaba ACK]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack/) -* [Huawei CCE]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce/) -* [Tencent]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke/) +* [Alibaba ACK]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack/) +* [Huawei CCE]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce/) +* [Tencent]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke/) ## Node Drivers @@ -40,7 +40,7 @@ If there are specific node drivers that you don't want to show to your users, yo Rancher supports several major cloud providers, but by default, these node drivers are active and available for deployment: -* [Amazon EC2]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/) -* [Azure]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/) -* [Digital Ocean]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/) -* [vSphere]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/) +* [Amazon EC2]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/) +* [Azure]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/) +* [Digital Ocean]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/) +* [vSphere]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/) diff --git a/content/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/_index.md b/content/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/_index.md index f578774e99f..ef92a737bd6 100644 --- a/content/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/_index.md +++ b/content/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/_index.md @@ -5,7 +5,7 @@ weight: 1 _Available as of v2.2.0_ -Cluster drivers are used to create clusters in a [hosted Kubernetes provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/), such as Google GKE. The availability of which cluster driver to display when creating clusters is defined by the cluster driver's status. Only `active` cluster drivers will be displayed as an option for creating clusters. 
By default, Rancher is packaged with several existing cloud provider cluster drivers, but you can also add custom cluster drivers to Rancher. +Cluster drivers are used to create clusters in a [hosted Kubernetes provider]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/), such as Google GKE. The availability of which cluster driver to display when creating clusters is defined by the cluster driver's status. Only `active` cluster drivers will be displayed as an option for creating clusters. By default, Rancher is packaged with several existing cloud provider cluster drivers, but you can also add custom cluster drivers to Rancher. If there are specific cluster drivers that you do not want to show your users, you may deactivate those cluster drivers within Rancher and they will not appear as an option for cluster creation. @@ -13,8 +13,8 @@ If there are specific cluster drivers that you do not want to show your users, y >**Prerequisites:** To create, edit, or delete cluster drivers, you need _one_ of the following permissions: > ->- [Administrator Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) ->- [Custom Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Cluster Drivers]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned. +>- [Administrator Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) +>- [Custom Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Cluster Drivers]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned. ## Activating/Deactivating Cluster Drivers diff --git a/content/rancher/v2.x/en/admin-settings/drivers/node-drivers/_index.md b/content/rancher/v2.x/en/admin-settings/drivers/node-drivers/_index.md index ba310504acc..5cf47fec86e 100644 --- a/content/rancher/v2.x/en/admin-settings/drivers/node-drivers/_index.md +++ b/content/rancher/v2.x/en/admin-settings/drivers/node-drivers/_index.md @@ -14,8 +14,8 @@ If there are specific node drivers that you don't want to show to your users, yo >**Prerequisites:** To create, edit, or delete drivers, you need _one_ of the following permissions: > ->- [Administrator Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) ->- [Custom Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Node Drivers]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned. +>- [Administrator Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) +>- [Custom Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Node Drivers]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned. 
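The activation steps that follow use the Rancher UI. The same information can also be read through the Rancher API described later in this document; a hedged sketch with `curl` (the server URL, API key, and the `nodeDrivers` endpoint and field names are assumptions to confirm in your own server's API browser):

```bash
# List node drivers and show their name and current state.
# The endpoint and field names below are assumptions - verify them in the
# API UI (click your avatar > API & Keys) before relying on this.
curl -s -u "<ACCESS_KEY>:<SECRET_KEY>" \
  "https://rancher.example.com/v3/nodeDrivers" \
  | jq -r '.data[] | "\(.name)\t\(.state)"'
```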
## Activating/Deactivating Node Drivers diff --git a/content/rancher/v2.x/en/admin-settings/k8s-metadata/_index.md b/content/rancher/v2.x/en/admin-settings/k8s-metadata/_index.md index 4f16e1aa458..58e56178eee 100644 --- a/content/rancher/v2.x/en/admin-settings/k8s-metadata/_index.md +++ b/content/rancher/v2.x/en/admin-settings/k8s-metadata/_index.md @@ -39,10 +39,28 @@ To force Rancher to refresh the Kubernetes metadata, a manual refresh action is The RKE metadata config controls how often Rancher syncs metadata and where it downloads data from. You can configure the metadata from the settings in the Rancher UI, or through the Rancher API at the endpoint `v3/settings/rke-metadata-config`. +The way that the metadata is configured depends on the Rancher version. + +{{% tabs %}} +{{% tab "Rancher v2.4+" %}} To edit the metadata config in Rancher, 1. Go to the **Global** view and click the **Settings** tab. -1. Go to the **rke-metadata-config** section. Click the **Ellipsis (...)** and click **Edit.** +1. Go to the **rke-metadata-config** section. Click the **⋮** and click **Edit.** +1. You can optionally fill in the following parameters: + + - `refresh-interval-minutes`: This is the amount of time that Rancher waits to sync the metadata. To disable the periodic refresh, set `refresh-interval-minutes` to 0. + - `url`: This is the HTTP path that Rancher fetches data from. The path must be a direct path to a JSON file. For example, the default URL for Rancher v2.4 is `https://releases.rancher.com/kontainer-driver-metadata/release-v2.4/data.json`. + +If you don't have an air gap setup, you don't need to specify the URL where Rancher gets the metadata, because the default setting is to pull from [Rancher's metadata Git repository.](https://github.com/rancher/kontainer-driver-metadata/blob/dev-v2.5/data/data.json) + +However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL to point to the new location of the JSON file. +{{% /tab %}} +{{% tab "Rancher v2.3" %}} +To edit the metadata config in Rancher, + +1. Go to the **Global** view and click the **Settings** tab. +1. Go to the **rke-metadata-config** section. Click the **⋮** and click **Edit.** 1. You can optionally fill in the following parameters: - `refresh-interval-minutes`: This is the amount of time that Rancher waits to sync the metadata. To disable the periodic refresh, set `refresh-interval-minutes` to 0. @@ -52,6 +70,8 @@ To edit the metadata config in Rancher, If you don't have an air gap setup, you don't need to specify the URL or Git branch where Rancher gets the metadata, because the default setting is to pull from [Rancher's metadata Git repository.](https://github.com/rancher/kontainer-driver-metadata.git) However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL and Git branch in the `rke-metadata-config` settings to point to the new location of the repository. +{{% /tab %}} +{{% /tabs %}} ### Air Gap Setups @@ -59,7 +79,7 @@ Rancher relies on a periodic refresh of the `rke-metadata-config` to download ne If you have an air gap setup, you might not be able to get the automatic periodic refresh of the Kubernetes metadata from Rancher's Git repository. In that case, you should disable the periodic refresh to prevent your logs from showing errors. 
Optionally, you can configure your metadata settings so that Rancher can sync with a local copy of the RKE metadata.
-To sync Rancher with a local mirror of the RKE metadata, an administrator would configure the `rke-metadata-config` settings by updating the `url` and `branch` to point to the mirror.
+To sync Rancher with a local mirror of the RKE metadata, an administrator would configure the `rke-metadata-config` settings to point to the mirror.
For details, refer to [Configuring the Metadata Synchronization.](#configuring-the-metadata-synchronization) After new Kubernetes versions are loaded into the Rancher setup, additional steps would be required in order to use them for launching clusters. Rancher needs access to updated system images. While the metadata settings can only be changed by administrators, any user can download the Rancher system images and prepare a private Docker registry for them.
diff --git a/content/rancher/v2.x/en/admin-settings/pod-security-policies/_index.md b/content/rancher/v2.x/en/admin-settings/pod-security-policies/_index.md
index 2ff9bd75b55..12616772261 100644
--- a/content/rancher/v2.x/en/admin-settings/pod-security-policies/_index.md
+++ b/content/rancher/v2.x/en/admin-settings/pod-security-policies/_index.md
@@ -9,6 +9,8 @@ aliases:
_Pod Security Policies_ (or PSPs) are objects that control security-sensitive aspects of pod specification (like root privileges). If a pod does not meet the conditions specified in the PSP, Kubernetes will not allow it to start, and Rancher will display an error message of `Pod is forbidden: unable to validate...`.
+> **Note:** Assigning Pod Security Policies is only available for clusters that are [launched using RKE.]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/)
+
- You can assign PSPs at the cluster or project level.
- PSPs work through inheritance.
@@ -71,10 +73,10 @@ Rancher ships with two default Pod Security Policies (PSPs): the `restricted` an
You can add a Pod Security Policy (PSPs hereafter) in the following contexts:
-- [When creating a cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/pod-security-policies/)
-- [When editing an existing cluster]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/editing-clusters/)
-- [When creating a project]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#creating-a-project/)
-- [When editing an existing project]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/editing-projects/)
+- [When creating a cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/pod-security-policies/)
+- [When editing an existing cluster]({{}}/rancher/v2.x/en/k8s-in-rancher/editing-clusters/)
+- [When creating a project]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#creating-a-project/)
+- [When editing an existing project]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/editing-projects/)
> **Note:** We recommend adding PSPs during cluster and project creation instead of adding them to an existing one.
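After a PSP has been assigned at the cluster or project level, it can be useful to confirm from `kubectl` which policies exist and whether a particular service account is allowed to use one. A small sketch, assuming placeholder policy and namespace names and a Kubernetes version that still serves the PSP API:

```bash
# List the Pod Security Policies known to the cluster.
kubectl get podsecuritypolicies

# Check whether the default service account in the "demo" namespace may use
# a policy named "restricted-psp" (both names are placeholders).
kubectl auth can-i use podsecuritypolicy/restricted-psp \
  --as=system:serviceaccount:demo:default
```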
diff --git a/content/rancher/v2.x/en/admin-settings/rbac/_index.md b/content/rancher/v2.x/en/admin-settings/rbac/_index.md index ee8ef07a3e7..01b6eaacaa7 100644 --- a/content/rancher/v2.x/en/admin-settings/rbac/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rbac/_index.md @@ -5,7 +5,7 @@ aliases: - /rancher/v2.x/en/concepts/global-configuration/users-permissions-roles/ --- -Within Rancher, each person authenticates as a _user_, which is a login that grants you access to Rancher. As mentioned in [Authentication]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/), users can either be local or external. +Within Rancher, each person authenticates as a _user_, which is a login that grants you access to Rancher. As mentioned in [Authentication]({{}}/rancher/v2.x/en/admin-settings/authentication/), users can either be local or external. After you configure external authentication, the users that display on the **Users** page changes. @@ -17,11 +17,11 @@ After you configure external authentication, the users that display on the **Use Once the user logs in to Rancher, their _authorization_, or their access rights within the system, is determined by _global permissions_, and _cluster and project roles_. -- [Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/): +- [Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/): Define user authorization outside the scope of any particular cluster. -- [Cluster and Project Roles]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/): +- [Cluster and Project Roles]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/): Define user authorization inside the specific cluster or project where they are assigned the role. diff --git a/content/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/_index.md b/content/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/_index.md index 591d1e2365d..6d04183c0b5 100644 --- a/content/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/_index.md @@ -67,7 +67,7 @@ To assign the role to a new cluster member, To assign any custom role to an existing cluster member, -1. Go to the member you want to give the role to. Click the **Ellipsis (...) > View in API.** +1. Go to the member you want to give the role to. Click the **⋮ > View in API.** 1. In the **roleTemplateId** field, go to the drop-down menu and choose the role you want to assign to the member. Click **Show Request** and **Send Request.** **Result:** The member has the assigned role. @@ -140,7 +140,7 @@ By default, when a standard user creates a new cluster or project, they are auto There are two methods for changing default cluster/project roles: -- **Assign Custom Roles**: Create a [custom role]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles) for either your [cluster](#custom-cluster-roles) or [project](#custom-project-roles), and then set the custom role as default. +- **Assign Custom Roles**: Create a [custom role]({{}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles) for either your [cluster](#custom-cluster-roles) or [project](#custom-project-roles), and then set the custom role as default. - **Assign Individual Roles**: Configure multiple [cluster](#cluster-role-reference)/[project](#project-role-reference) roles as default for assignment to the creating user. 
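The "View in API" flow for cluster members shown earlier in this file ultimately edits a role binding object. If you prefer to script it, a hedged `curl` sketch of the same idea (the endpoint name, binding ID, and request body are assumptions; copy the exact URL and payload from the **Show Request** view in your own setup):

```bash
# Update the roleTemplateId on an existing cluster member's binding.
# Everything here is a placeholder/assumption - use the URL and body shown by
# "Show Request" in the API UI rather than this verbatim.
curl -s -u "<ACCESS_KEY>:<SECRET_KEY>" \
  -X PUT \
  -H 'Content-Type: application/json' \
  -d '{"roleTemplateId": "cluster-member"}' \
  "https://rancher.example.com/v3/clusterRoleTemplateBindings/<BINDING_ID>"
```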
@@ -148,7 +148,7 @@ There are two methods for changing default cluster/project roles: >**Note:** > ->- Although you can [lock]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/) a default role, the system still assigns the role to users who create a cluster/project. +>- Although you can [lock]({{}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/) a default role, the system still assigns the role to users who create a cluster/project. >- Only users that create clusters/projects inherit their roles. Users added to the cluster/project membership afterward must be explicitly assigned their roles. ### Configuring Default Roles for Cluster and Project Creators @@ -157,7 +157,7 @@ You can change the cluster or project role(s) that are automatically assigned to 1. From the **Global** view, select **Security > Roles** from the main menu. Select either the **Cluster** or **Project** tab. -1. Find the custom or individual role that you want to use as default. Then edit the role by selecting **Ellipsis > Edit**. +1. Find the custom or individual role that you want to use as default. Then edit the role by selecting **⋮ > Edit**. 1. Enable the role as default. {{% accordion id="cluster" label="For Clusters" %}} diff --git a/content/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/_index.md b/content/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/_index.md index 895330c46ea..1d262e1db31 100644 --- a/content/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/_index.md @@ -13,8 +13,7 @@ This section covers the following topics: - [Prerequisites](#prerequisites) - [Creating a custom role for a cluster or project](#creating-a-custom-role-for-a-cluster-or-project) -- [Creating a custom global role that copies rules from an existing role](#creating-a-custom-global-role-that-copies-rules-from-an-existing-role) -- [Creating a custom global role that does not copy rules from another role](#creating-a-custom-global-role-that-does-not-copy-rules-from-another-role) +- [Creating a custom global role](#creating-a-custom-global-role) - [Deleting a custom global role](#deleting-a-custom-global-role) - [Assigning a custom global role to a group](#assigning-a-custom-global-role-to-a-group) @@ -22,8 +21,8 @@ This section covers the following topics: To complete the tasks on this page, one of the following permissions are required: - - [Administrator Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/). - - [Custom Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Roles]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned. + - [Administrator Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/). + - [Custom Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Roles]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned. ## Creating A Custom Role for a Cluster or Project @@ -68,7 +67,7 @@ The steps to add custom roles differ depending on the version of Rancher. 1. **Name** the role. -1. Choose whether to set the role to a status of [locked]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/). +1. 
Choose whether to set the role to a status of [locked]({{}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/). > **Note:** Locked roles cannot be assigned to users. @@ -93,9 +92,11 @@ The steps to add custom roles differ depending on the version of Rancher. {{% /tab %}} {{% /tabs %}} -## Creating a Custom Global Role that Copies Rules from an Existing Role +## Creating a Custom Global Role -_Available as of v2.4.0-alpha1_ +_Available as of v2.4.0_ + +### Creating a Custom Global Role that Copies Rules from an Existing Role If you have a group of individuals that need the same level of access in Rancher, it can save time to create a custom global role in which all of the rules from another role, such as the administrator role, are copied into a new role. This allows you to only configure the variations between the existing role and the new role. @@ -104,15 +105,13 @@ The custom global role can then be assigned to a user or group so that the custo To create a custom global role based on an existing role, 1. Go to the **Global** view and click **Security > Roles.** -1. On the **Global** tab, go to the role that the custom global role will be based on. Click **Ellipsis (…) > Clone.** +1. On the **Global** tab, go to the role that the custom global role will be based on. Click **⋮ (…) > Clone.** 1. Enter a name for the role. 1. Optional: To assign the custom role default for new users, go to the **New User Default** section and click **Yes: Default role for new users.** 1. In the **Grant Resources** section, select the Kubernetes resource operations that will be enabled for users with the custom role. 1. Click **Save.** -## Creating a Custom Global Role that Does Not Copy Rules from Another Role - -_Available as of v2.4.0-alpha1_ +### Creating a Custom Global Role that Does Not Copy Rules from Another Role Custom global roles don't have to be based on existing roles. To create a custom global role by choosing the specific Kubernetes resource operations that should be allowed for the role, follow these steps: @@ -125,7 +124,7 @@ Custom global roles don't have to be based on existing roles. To create a custom ## Deleting a Custom Global Role -_Available as of v2.4.0-alpha1_ +_Available as of v2.4.0_ When deleting a custom global role, all global role bindings with this custom role are deleted. @@ -136,12 +135,12 @@ Custom global roles can be deleted, but built-in roles cannot be deleted. To delete a custom global role, 1. Go to the **Global** view and click **Security > Roles.** -2. On the **Global** tab, go to the custom global role that should be deleted and click **Ellipsis (…) > Delete.** +2. On the **Global** tab, go to the custom global role that should be deleted and click **⋮ (…) > Delete.** 3. Click **Delete.** ## Assigning a Custom Global Role to a Group -_Available as of v2.4.0-alpha1_ +_Available as of v2.4.0_ If you have a group of individuals that need the same level of access in Rancher, it can save time to create a custom global role. When the role is assigned to a group, the users in the group have the appropriate level of access the first time they sign into Rancher. @@ -164,4 +163,4 @@ To assign a custom global role to a group, follow these steps: 1. Optional: In the **Global Permissions** or **Built-in** sections, select any additional permissions that the group should have. 1. Click **Create.** -**Result:** The custom global role will take effect when the users in the group log into Rancher. 
\ No newline at end of file +**Result:** The custom global role will take effect when the users in the group log into Rancher. diff --git a/content/rancher/v2.x/en/admin-settings/rbac/global-permissions/_index.md b/content/rancher/v2.x/en/admin-settings/rbac/global-permissions/_index.md index 4f754a97d37..123a11ea5e1 100644 --- a/content/rancher/v2.x/en/admin-settings/rbac/global-permissions/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rbac/global-permissions/_index.md @@ -43,7 +43,7 @@ To see the default permissions for new users, go to the **Global** view and clic Permissions can be assigned to an individual user with [these steps.](#configuring-global-permissions-for-existing-individual-users) -As of Rancher v2.4.0-alpha1, you can [assign a role to everyone in the group at the same time](#configuring-global-permissions-for-groups) if the external authentication provider supports groups. +As of Rancher v2.4.0, you can [assign a role to everyone in the group at the same time](#configuring-global-permissions-for-groups) if the external authentication provider supports groups. # Custom Global Permissions @@ -102,7 +102,7 @@ To change the default global permissions that are assigned to external users upo 1. From the **Global** view, select **Security > Roles** from the main menu. Make sure the **Global** tab is selected. -1. Find the permissions set that you want to add or remove as a default. Then edit the permission by selecting **Ellipsis > Edit**. +1. Find the permissions set that you want to add or remove as a default. Then edit the permission by selecting **⋮ > Edit**. 1. If you want to add the permission as a default, Select **Yes: Default role for new users** and then click **Save**. @@ -116,7 +116,7 @@ To configure permission for a user, 1. Go to the **Users** tab. -1. On this page, go to the user whose access level you want to change and click **Ellipsis (...) > Edit.** +1. On this page, go to the user whose access level you want to change and click **⋮ > Edit.** 1. In the **Global Permissions** section, click **Custom.** @@ -128,7 +128,7 @@ To configure permission for a user, ### Configuring Global Permissions for Groups -_Available as of v2.4.0-alpha1_ +_Available as of v2.4.0_ If you have a group of individuals that need the same level of access in Rancher, it can save time to assign permissions to the entire group at once, so that the users in the group have the appropriate level of access the first time they sign into Rancher. diff --git a/content/rancher/v2.x/en/admin-settings/rbac/locked-roles/_index.md b/content/rancher/v2.x/en/admin-settings/rbac/locked-roles/_index.md index 91ea1123625..3bbfd52bd07 100644 --- a/content/rancher/v2.x/en/admin-settings/rbac/locked-roles/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rbac/locked-roles/_index.md @@ -27,11 +27,11 @@ If you want to prevent a role from being assigned to users, you can set it to a You can lock roles in two contexts: -- When you're [adding a custom role]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/). +- When you're [adding a custom role]({{}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/). - When you editing an existing role (see below). 1. From the **Global** view, select **Security** > **Roles**. -2. From the role that you want to lock (or unlock), select **Vertical Ellipsis (...)** > **Edit**. +2. From the role that you want to lock (or unlock), select **⋮** > **Edit**. 3. From the **Locked** option, choose the **Yes** or **No** radio button. 
Then click **Save**. diff --git a/content/rancher/v2.x/en/admin-settings/rke-templates/applying-templates/_index.md b/content/rancher/v2.x/en/admin-settings/rke-templates/applying-templates/_index.md index 1a3010aefe0..06a62b8e02a 100644 --- a/content/rancher/v2.x/en/admin-settings/rke-templates/applying-templates/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rke-templates/applying-templates/_index.md @@ -53,7 +53,7 @@ RKE templates cannot be applied to existing clusters, except if you save an exis To convert an existing cluster to use an RKE template, 1. From the **Global** view in Rancher, click the **Clusters** tab. -1. Go to the cluster that will be converted to use an RKE template. Click **Ellipsis (...)** > **Save as RKE Template.** +1. Go to the cluster that will be converted to use an RKE template. Click **⋮** > **Save as RKE Template.** 1. Enter a name for the template in the form that appears, and click **Create.** **Results:** diff --git a/content/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/_index.md b/content/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/_index.md index 1c0c4711596..10935277fa9 100644 --- a/content/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/_index.md @@ -40,7 +40,7 @@ You can revise, share, and delete a template if you are an owner of the template 1. Optional: Share the template with other users or groups by [adding them as members.]({{}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/#sharing-templates-with-specific-users) You can also make the template public to share with everyone in the Rancher setup. 1. Then follow the form on screen to save the cluster configuration parameters as part of the template's revision. The revision can be marked as default for this template. -**Result:** An RKE template with one revision is configured. You can use this RKE template revision later when you [provision a Rancher-launched cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters). +**Result:** An RKE template with one revision is configured. You can use this RKE template revision later when you [provision a Rancher-launched cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters). After a cluster is managed by an RKE template, it cannot be disconnected and the option to uncheck **Use an existing RKE Template and Revision** will be unavailable. ### Updating a Template @@ -51,7 +51,7 @@ You can't edit individual revisions. Since you can't edit individual revisions o When new template revisions are created, clusters using an older revision of the template are unaffected. 1. From the **Global** view, click **Tools > RKE Templates.** -1. Go to the template that you want to edit and click the **Vertical Ellipsis (...) > Edit.** +1. Go to the template that you want to edit and click the **⋮ > Edit.** 1. Edit the required information and click **Save.** 1. Optional: You can change the default revision of this template and also change who it is shared with. @@ -62,7 +62,7 @@ When new template revisions are created, clusters using an older revision of the When you no longer use an RKE template for any of your clusters, you can delete it. 1. From the **Global** view, click **Tools > RKE Templates.** -1. Go to the RKE template that you want to delete and click the **Vertical Ellipsis (...) > Delete.** +1. 
Go to the RKE template that you want to delete and click the **⋮ > Delete.** 1. Confirm the deletion when prompted. **Result:** The template is deleted. @@ -72,7 +72,7 @@ When you no longer use an RKE template for any of your clusters, you can delete You can clone the default template revision and quickly update its settings rather than creating a new revision from scratch. Cloning templates saves you the hassle of re-entering the access keys and other parameters needed for cluster creation. 1. From the **Global** view, click **Tools > RKE Templates.** -1. Go to the RKE template that you want to clone and click the **Vertical Ellipsis (...) > New Revision From Default.** +1. Go to the RKE template that you want to clone and click the **⋮ > New Revision From Default.** 1. Complete the rest of the form to create a new revision. **Result:** The RKE template revision is cloned and configured. @@ -82,7 +82,7 @@ You can clone the default template revision and quickly update its settings rath When creating new RKE template revisions from your user settings, you can clone an existing revision and quickly update its settings rather than creating a new one from scratch. Cloning template revisions saves you the hassle of re-entering the cluster parameters. 1. From the **Global** view, click **Tools > RKE Templates.** -1. Go to the template revision you want to clone. Then select **Ellipsis > Clone Revision.** +1. Go to the template revision you want to clone. Then select **⋮ > Clone Revision.** 1. Complete the rest of the form. **Result:** The RKE template revision is cloned and configured. You can use the RKE template revision later when you provision a cluster. Any existing cluster using this RKE template can be upgraded to this new revision. @@ -94,7 +94,7 @@ When you no longer want an RKE template revision to be used for creating new clu You can disable the revision if it is not being used by any cluster. 1. From the **Global** view, click **Tools > RKE Templates.** -1. Go to the template revision you want to disable. Then select **Ellipsis > Disable.** +1. Go to the template revision you want to disable. Then select **⋮ > Disable.** **Result:** The RKE template revision cannot be used to create a new cluster. @@ -103,7 +103,7 @@ You can disable the revision if it is not being used by any cluster. If you decide that a disabled RKE template revision should be used to create new clusters, you can re-enable it. 1. From the **Global** view, click **Tools > RKE Templates.** -1. Go to the template revision you want to re-enable. Then select **Ellipsis > Enable.** +1. Go to the template revision you want to re-enable. Then select **⋮ > Enable.** **Result:** The RKE template revision can be used to create a new cluster. @@ -114,7 +114,7 @@ When end users create a cluster using an RKE template, they can choose which rev To set an RKE template revision as default, 1. From the **Global** view, click **Tools > RKE Templates.** -1. Go to the RKE template revision that should be default and click the **Ellipsis (...) > Set as Default.** +1. Go to the RKE template revision that should be default and click the **⋮ > Set as Default.** **Result:** The RKE template revision will be used as the default option when clusters are created with the template. @@ -125,7 +125,7 @@ You can delete all revisions of a template except for the default revision. To permanently delete a revision, 1. From the **Global** view, click **Tools > RKE Templates.** -1. 
Go to the RKE template revision that should be deleted and click the **Ellipsis (...) > Delete.** +1. Go to the RKE template revision that should be deleted and click the **⋮ > Delete.** **Result:** The RKE template revision is deleted. @@ -137,7 +137,7 @@ To permanently delete a revision, To upgrade a cluster to use a new template revision, 1. From the **Global** view in Rancher, click the **Clusters** tab. -1. Go to the cluster that you want to upgrade and click **Ellipsis (...) > Edit.** +1. Go to the cluster that you want to upgrade and click **⋮ > Edit.** 1. In the **Cluster Options** section, click the dropdown menu for the template revision, then select the new template revision. 1. Click **Save.** @@ -152,7 +152,7 @@ This exports the cluster's settings as a new RKE template, and also binds the cl To convert an existing cluster to use an RKE template, 1. From the **Global** view in Rancher, click the **Clusters** tab. -1. Go to the cluster that will be converted to use an RKE template. Click **Ellipsis (...)** > **Save as RKE Template.** +1. Go to the cluster that will be converted to use an RKE template. Click **⋮** > **Save as RKE Template.** 1. Enter a name for the template in the form that appears, and click **Create.** **Results:** diff --git a/content/rancher/v2.x/en/admin-settings/rke-templates/creator-permissions/_index.md b/content/rancher/v2.x/en/admin-settings/rke-templates/creator-permissions/_index.md index 30b58bebd98..0773da504e3 100644 --- a/content/rancher/v2.x/en/admin-settings/rke-templates/creator-permissions/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rke-templates/creator-permissions/_index.md @@ -24,7 +24,7 @@ Administrators can give users permission to create RKE templates in two ways: An administrator can individually grant the role **Create RKE Templates** to any existing user by following these steps: -1. From the global view, click the **Users** tab. Choose the user you want to edit and click the **Vertical Ellipsis (...) > Edit.** +1. From the global view, click the **Users** tab. Choose the user you want to edit and click the **⋮ > Edit.** 1. In the **Global Permissions** section, choose **Custom** and select the **Create RKE Templates** role along with any other roles the user should have. Click **Save.** **Result:** The user has permission to create RKE templates. @@ -34,7 +34,7 @@ An administrator can individually grant the role **Create RKE Templates** to any Alternatively, the administrator can give all new users the default permission to create RKE templates by following the following steps. This will not affect the permissions of existing users. 1. From the **Global** view, click **Security > Roles.** -1. Under the **Global** roles tab, go to the role **Create RKE Templates** and click the **Vertical Ellipsis (...) > Edit**. +1. Under the **Global** roles tab, go to the role **Create RKE Templates** and click the **⋮ > Edit**. 1. Select the option **Yes: Default role for new users** and click **Save.** **Result:** Any new user created in this Rancher installation will be able to create RKE templates. Existing users will not get this permission. @@ -43,7 +43,7 @@ Alternatively, the administrator can give all new users the default permission t Administrators can remove a user's permission to create templates with the following steps: -1. From the global view, click the **Users** tab. Choose the user you want to edit and click the **Vertical Ellipsis (...) > Edit.** +1. From the global view, click the **Users** tab. 
Choose the user you want to edit and click the **⋮ > Edit.** 1. In the **Global Permissions** section, un-check the box for **Create RKE Templates**. In this section, you can change the user back to a standard user, or give the user a different set of custom permissions. 1. Click **Save.** diff --git a/content/rancher/v2.x/en/admin-settings/rke-templates/enforcement/_index.md b/content/rancher/v2.x/en/admin-settings/rke-templates/enforcement/_index.md index 4f686c0222a..a1fa1e79ddb 100644 --- a/content/rancher/v2.x/en/admin-settings/rke-templates/enforcement/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rke-templates/enforcement/_index.md @@ -22,7 +22,7 @@ You might want to require new clusters to use a template to ensure that any clus To require new clusters to use an RKE template, administrators can turn on RKE template enforcement with the following steps: 1. From the **Global** view, click the **Settings** tab. -1. Go to the `cluster-template-enforcement` setting. Click the vertical **Ellipsis (...)** and click **Edit.** +1. Go to the `cluster-template-enforcement` setting. Click the vertical **⋮** and click **Edit.** 1. Set the value to **True** and click **Save.** **Result:** All clusters provisioned by Rancher must use a template, unless the creator is an administrator. @@ -32,7 +32,7 @@ To require new clusters to use an RKE template, administrators can turn on RKE t To allow new clusters to be created without an RKE template, administrators can turn off RKE template enforcement with the following steps: 1. From the **Global** view, click the **Settings** tab. -1. Go to the `cluster-template-enforcement` setting. Click the vertical **Ellipsis (...)** and click **Edit.** +1. Go to the `cluster-template-enforcement` setting. Click the vertical **⋮** and click **Edit.** 1. Set the value to **False** and click **Save.** **Result:** When clusters are provisioned by Rancher, they don't need to use a template. diff --git a/content/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/_index.md b/content/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/_index.md index a86d8219a85..863faa1bc8b 100644 --- a/content/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/_index.md +++ b/content/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/_index.md @@ -28,7 +28,7 @@ There are several ways to share templates: To allow users or groups to create clusters using your template, you can give them the basic **User** access level for the template. 1. From the **Global** view, click **Tools > RKE Templates.** -1. Go to the template that you want to share and click the **Vertical Ellipsis (...) > Edit.** +1. Go to the template that you want to share and click the **⋮ > Edit.** 1. In the **Share Template** section, click on **Add Member**. 1. Search in the **Name** field for the user or group you want to share the template with. 1. Choose the **User** access type. @@ -39,7 +39,7 @@ To allow users or groups to create clusters using your template, you can give th ### Sharing Templates with All Users 1. From the **Global** view, click **Tools > RKE Templates.** -1. Go to the template that you want to share and click the **Vertical Ellipsis (...) > Edit.** +1. Go to the template that you want to share and click the **⋮ > Edit.** 1. Under **Share Template,** click **Make Public (read-only).** Then click **Save.** **Result:** All users in the Rancher setup can create clusters using the template. 
@@ -53,7 +53,7 @@ In that case, you can give users the Owner access type, which allows another use To give Owner access to a user or group, 1. From the **Global** view, click **Tools > RKE Templates.** -1. Go to the RKE template that you want to share and click the **Vertical Ellipsis (...) > Edit.** +1. Go to the RKE template that you want to share and click the **⋮ > Edit.** 1. Under **Share Template**, click on **Add Member** and search in the **Name** field for the user or group you want to share the template with. 1. In the **Access Type** field, click **Owner.** 1. Click **Save.** diff --git a/content/rancher/v2.x/en/api/_index.md b/content/rancher/v2.x/en/api/_index.md index 97a0c5a6489..b2f9e84816d 100644 --- a/content/rancher/v2.x/en/api/_index.md +++ b/content/rancher/v2.x/en/api/_index.md @@ -5,11 +5,11 @@ weight: 7500 ## How to use the API -The API has its own user interface accessible from a web browser. This is an easy way to see resources, perform actions, and see the equivalent cURL or HTTP request & response. To access it, click on your user avatar in the upper right corner. Under **API & Keys**, you can find the URL endpoint as well as create [API keys]({{< baseurl >}}/rancher/v2.x/en/user-settings/api-keys/). +The API has its own user interface accessible from a web browser. This is an easy way to see resources, perform actions, and see the equivalent cURL or HTTP request & response. To access it, click on your user avatar in the upper right corner. Under **API & Keys**, you can find the URL endpoint as well as create [API keys]({{}}/rancher/v2.x/en/user-settings/api-keys/). ## Authentication -API requests must include authentication information. Authentication is done with HTTP basic authentication using [API Keys]({{< baseurl >}}/rancher/v2.x/en/user-settings/api-keys/). API keys can create new clusters and have access to multiple clusters via `/v3/clusters/`. [Cluster and project roles]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/) apply to these keys and restrict what clusters and projects the account can see and what actions they can take. +API requests must include authentication information. Authentication is done with HTTP basic authentication using [API Keys]({{}}/rancher/v2.x/en/user-settings/api-keys/). API keys can create new clusters and have access to multiple clusters via `/v3/clusters/`. [Cluster and project roles]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/) apply to these keys and restrict what clusters and projects the account can see and what actions they can take. By default, some cluster-level API tokens are generated with infinite time-to-live (`ttl=0`). In other words, API tokens with `ttl=0` never expire unless you invalidate them. For details on how to invalidate them, refer to the [API tokens page]({{}}/rancher/v2.x/en/api/api-tokens). diff --git a/content/rancher/v2.x/en/backups/_index.md b/content/rancher/v2.x/en/backups/_index.md index 0f2c8b5a106..d9b66a43114 100644 --- a/content/rancher/v2.x/en/backups/_index.md +++ b/content/rancher/v2.x/en/backups/_index.md @@ -5,14 +5,15 @@ weight: 1000 This section is devoted to protecting your data in a disaster scenario. - To protect yourself from a disaster scenario, you should create backups on a regular basis. 
- - [Rancher Server Backups]({{< baseurl >}}/rancher/v2.x/en/backups/backups) - - [Backing up Rancher Launched Kubernetes Clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/) - + - Rancher server backups: + - [Rancher installed on a K3s Kubernetes cluster](./backups/k3s-backups) + - [Rancher installed on an RKE Kubernetes cluster](./backups/ha-backups) + - [Rancher installed with Docker](./backups/single-node-backups/) + - [Backing up Rancher Launched Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/) In a disaster scenario, you can restore your `etcd` database by restoring a backup. - - [Rancher Server Restorations]({{< baseurl >}}/rancher/v2.x/en/backups/restorations) - - [Restoring Rancher Launched Kubernetes Clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/restoring-etcd/) + - [Rancher Server Restorations]({{}}/rancher/v2.x/en/backups/restorations) + - [Restoring Rancher Launched Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/) diff --git a/content/rancher/v2.x/en/backups/backups/_index.md b/content/rancher/v2.x/en/backups/backups/_index.md index 9ef3beb47d8..072c1913cac 100644 --- a/content/rancher/v2.x/en/backups/backups/_index.md +++ b/content/rancher/v2.x/en/backups/backups/_index.md @@ -7,7 +7,8 @@ aliases: --- This section contains information about how to create backups of your Rancher data and how to restore them in a disaster scenario. -- [Docker Install Backups](./single-node-backups/) -- [Kubernetes Install Backups](./ha-backups/) +- [Backing up Rancher installed on a K3s Kubernetes cluster](./k3s-backups) +- [Backing up Rancher installed on an RKE Kubernetes cluster](./ha-backups/) +- [Backing up Rancher installed with Docker](./single-node-backups/) -If you are looking to back up your [Rancher launched Kubernetes cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), please refer [here]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/). +If you are looking to back up your [Rancher launched Kubernetes cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), please refer [here]({{}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/). diff --git a/content/rancher/v2.x/en/backups/backups/ha-backups/_index.md b/content/rancher/v2.x/en/backups/backups/ha-backups/_index.md index 08fce60c0b2..362b867745f 100644 --- a/content/rancher/v2.x/en/backups/backups/ha-backups/_index.md +++ b/content/rancher/v2.x/en/backups/backups/ha-backups/_index.md @@ -1,6 +1,6 @@ --- -title: Creating Backups for Rancher Installed on Kubernetes -weight: 50 +title: Backing up Rancher Installed on an RKE Kubernetes Cluster +weight: 2 aliases: - /rancher/v2.x/en/installation/after-installation/k8s-install-backup-and-restoration/ - /rancher/v2.x/en/installation/backups-and-restoration/ha-backup-and-restoration/ @@ -9,6 +9,13 @@ This section describes how to create backups of your high-availability Rancher i >**Prerequisites:** {{< requirements_rollback >}} +## RKE Kubernetes Cluster Data + +In an RKE installation, the cluster data is replicated on each of three etcd nodes in the cluster, providing redundancy and data duplication in case one of the nodes fails. + +
Architecture of an RKE Kubernetes Cluster Running the Rancher Management Server
+![Architecture of an RKE Kubernetes cluster running the Rancher management server]({{}}/img/rancher/rke-server-storage.svg) + ## Backup Outline Backing up your high-availability Rancher cluster is process that involves completing multiple tasks. diff --git a/content/rancher/v2.x/en/backups/backups/k3s-backups/_index.md b/content/rancher/v2.x/en/backups/backups/k3s-backups/_index.md new file mode 100644 index 00000000000..01408849bb0 --- /dev/null +++ b/content/rancher/v2.x/en/backups/backups/k3s-backups/_index.md @@ -0,0 +1,25 @@ +--- +title: Backing up Rancher Installed on a K3s Kubernetes Cluster +weight: 1 +--- + +When Rancher is installed on a high-availability Kubernetes cluster, we recommend using an external database to store the cluster data. + +The database administrator will need to back up the external database, or restore it from a snapshot or dump. + +We recommend configuring the database to take recurring snapshots. + +### K3s Kubernetes Cluster Data + +One main advantage of this K3s architecture is that it allows an external datastore to hold the cluster data, allowing the K3s server nodes to be treated as ephemeral. + +
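Since this K3s setup keeps the cluster data in an external datastore rather than on the server nodes, the backup effectively happens at the database layer. As a hedged illustration only, a one-off logical dump of a MySQL datastore might look like the following; the host, user, and database name are placeholders, not values from this documentation:

```bash
# Illustrative one-off dump of an external MySQL datastore backing K3s.
# Host, credentials, and database name ("kubernetes") are placeholders; adjust for your setup.
mysqldump --single-transaction -h db.example.com -u k3s_user -p kubernetes > k3s-datastore-$(date +%F).sql
```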
Architecture of a K3s Kubernetes Cluster Running the Rancher Management Server
+![Architecture of a K3s Kubernetes Cluster Running the Rancher Management Server]({{}}/img/rancher/k3s-server-storage.svg) + +### Creating Snapshots and Restoring Databases from Snapshots + +For details on taking database snapshots and restoring your database from them, refer to the official database documentation: + +- [Official MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-snapshot-method.html) +- [Official PostgreSQL documentation](https://www.postgresql.org/docs/8.3/backup-dump.html) +- [Official etcd documentation](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md) \ No newline at end of file diff --git a/content/rancher/v2.x/en/backups/backups/single-node-backups/_index.md b/content/rancher/v2.x/en/backups/backups/single-node-backups/_index.md index e86291f230f..ae0ee7b1ae7 100644 --- a/content/rancher/v2.x/en/backups/backups/single-node-backups/_index.md +++ b/content/rancher/v2.x/en/backups/backups/single-node-backups/_index.md @@ -1,6 +1,6 @@ --- -title: Creating Backups for Rancher Installed with Docker -weight: 25 +title: Backing up Rancher Installed with Docker +weight: 3 aliases: - /rancher/v2.x/en/installation/after-installation/single-node-backup-and-restoration/ --- @@ -20,7 +20,7 @@ In this command, `` is a placeholder for the date that the data container Cross reference the image and reference table below to learn how to obtain this placeholder data. Write down or copy this information before starting the [procedure below](#creating-a-backup). Terminal `docker ps` Command, Displaying Where to Find `` and `` -![Placeholder Reference]({{< baseurl >}}/img/rancher/placeholder-ref.png) +![Placeholder Reference]({{}}/img/rancher/placeholder-ref.png) | Placeholder | Example | Description | | -------------------------- | -------------------------- | --------------------------------------------------------- | @@ -68,4 +68,4 @@ This procedure creates a backup that you can restore if Rancher encounters a dis docker start ``` -**Result:** A backup tarball of your Rancher Server data is created. See [Restoring Backups: Docker Installs]({{< baseurl >}}/rancher/v2.x/en/backups/restorations/single-node-restoration) if you need to restore backup data. +**Result:** A backup tarball of your Rancher Server data is created. See [Restoring Backups: Docker Installs]({{}}/rancher/v2.x/en/backups/restorations/single-node-restoration) if you need to restore backup data. diff --git a/content/rancher/v2.x/en/backups/restorations/_index.md b/content/rancher/v2.x/en/backups/restorations/_index.md index 52fd8cab149..2f32ad1d9e2 100644 --- a/content/rancher/v2.x/en/backups/restorations/_index.md +++ b/content/rancher/v2.x/en/backups/restorations/_index.md @@ -4,7 +4,7 @@ weight: 1010 --- If you lose the data on your Rancher Server, you can restore it if you have backups stored in a safe location.
-- [Restoring Backups—Docker Installs]({{< baseurl >}}/rancher/v2.x/en/backups/restorations/single-node-restoration/) -- [Restoring Backups—Kubernetes installs]({{< baseurl >}}/rancher/v2.x/en/backups/restorations/ha-restoration/) +- [Restoring Backups—Docker Installs]({{}}/rancher/v2.x/en/backups/restorations/single-node-restoration/) +- [Restoring Backups—Kubernetes installs]({{}}/rancher/v2.x/en/backups/restorations/ha-restoration/) -If you are looking to restore your [Rancher launched Kubernetes cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), please refer [here]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/restoring-etcd/). +If you are looking to restore your [Rancher launched Kubernetes cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), please refer [here]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/). diff --git a/content/rancher/v2.x/en/backups/restorations/ha-restoration/_index.md b/content/rancher/v2.x/en/backups/restorations/ha-restoration/_index.md index ac30f5113c5..5b8cfd3e0b6 100644 --- a/content/rancher/v2.x/en/backups/restorations/ha-restoration/_index.md +++ b/content/rancher/v2.x/en/backups/restorations/ha-restoration/_index.md @@ -8,7 +8,7 @@ aliases: This procedure describes how to use RKE to restore a snapshot of the Rancher Kubernetes cluster. The cluster snapshot will include Kubernetes configuration and the Rancher database and state. -Additionally, the `pki.bundle.tar.gz` file usage is no longer required as v0.2.0 has changed how the [Kubernetes cluster state is stored]({{< baseurl >}}/rke/latest/en/installation/#kubernetes-cluster-state). +Additionally, the `pki.bundle.tar.gz` file usage is no longer required as v0.2.0 has changed how the [Kubernetes cluster state is stored]({{}}/rke/latest/en/installation/#kubernetes-cluster-state). ## Restore Outline @@ -24,11 +24,11 @@ Additionally, the `pki.bundle.tar.gz` file usage is no longer required as v0.2.0 ### 1. Preparation -You will need [RKE]({{< baseurl >}}/rke/latest/en/installation/) and [kubectl]({{< baseurl >}}/rancher/v2.x/en/faq/kubectl/) CLI utilities installed. +You will need [RKE]({{}}/rke/latest/en/installation/) and [kubectl]({{}}/rancher/v2.x/en/faq/kubectl/) CLI utilities installed. -Prepare by creating 3 new nodes to be the target for the restored Rancher instance. See [Kubernetes Install]({{< baseurl >}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/) for node requirements. +Prepare by creating 3 new nodes to be the target for the restored Rancher instance. See [Kubernetes Install]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/) for node requirements. -We recommend that you start with fresh nodes and a clean state. Alternatively you can clear Kubernetes and Rancher configurations from the existing nodes. This will destroy the data on these nodes. See [Node Cleanup]({{< baseurl >}}/rancher/v2.x/en/faq/cleaning-cluster-nodes/) for the procedure. +We recommend that you start with fresh nodes and a clean state. Alternatively you can clear Kubernetes and Rancher configurations from the existing nodes. This will destroy the data on these nodes. See [Node Cleanup]({{}}/rancher/v2.x/en/faq/cleaning-cluster-nodes/) for the procedure. > **IMPORTANT:** Before starting the restore make sure all the Kubernetes services on the old cluster nodes are stopped. We recommend powering off the nodes to be sure. @@ -135,8 +135,8 @@ S3 specific options are only available for RKE v0.2.0+. 
| `--bucket-name` value | Specify s3 bucket name | *| | `--folder` value | Specify s3 folder in the bucket name _Available as of v2.3.0_ | *| | `--region` value | Specify the s3 bucket location (optional) | *| -| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{< baseurl >}}/rke/latest/en/config-options/#ssh-agent) | | -| `--ignore-docker-version` | [Disable Docker version check]({{< baseurl >}}/rke/latest/en/config-options/#supported-docker-versions) | +| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{}}/rke/latest/en/config-options/#ssh-agent) | | +| `--ignore-docker-version` | [Disable Docker version check]({{}}/rke/latest/en/config-options/#supported-docker-versions) | ### 5. Bring Up the Cluster @@ -150,7 +150,7 @@ rke up --config ./rancher-cluster-restore.yml #### Testing the Cluster -Once RKE completes it will have created a credentials file in the local directory. Configure `kubectl` to use the `kube_config_rancher-cluster-restore.yml` credentials file and check on the state of the cluster. See [Installing and Configuring kubectl]({{< baseurl >}}/rancher/v2.x/en/faq/kubectl/#configuration) for details. +Once RKE completes it will have created a credentials file in the local directory. Configure `kubectl` to use the `kube_config_rancher-cluster-restore.yml` credentials file and check on the state of the cluster. See [Installing and Configuring kubectl]({{}}/rancher/v2.x/en/faq/kubectl/#configuration) for details. Your new cluster will take a few minutes to stabilize. Once you see the new "target node" transition to `Ready` and three old nodes in `NotReady` you are ready to continue. @@ -232,6 +232,6 @@ rke up --config ./rancher-cluster-restore.yml #### Finishing Up -Rancher should now be running and available to manage your Kubernetes clusters. Review the [recommended architecture]({{< baseurl >}}/rancher/v2.x/en/installation/k8s-install/#recommended-architecture) for Kubernetes installations and update the endpoints for Rancher DNS or the Load Balancer that you built during Step 1 of the Kubernetes install ([1. Create Nodes and Load Balancer]({{< baseurl >}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/#load-balancer)) to target the new cluster. Once the endpoints are updated, the agents on your managed clusters should automatically reconnect. This may take 10-15 minutes due to reconnect back off timeouts. +Rancher should now be running and available to manage your Kubernetes clusters. Review the [recommended architecture]({{}}/rancher/v2.x/en/installation/k8s-install/#recommended-architecture) for Kubernetes installations and update the endpoints for Rancher DNS or the Load Balancer that you built during Step 1 of the Kubernetes install ([1. Create Nodes and Load Balancer]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/#load-balancer)) to target the new cluster. Once the endpoints are updated, the agents on your managed clusters should automatically reconnect. This may take 10-15 minutes due to reconnect back off timeouts. > **IMPORTANT:** Remember to save your new RKE config (`rancher-cluster-restore.yml`) and `kubectl` credentials (`kube_config_rancher-cluster-restore.yml`) files in a safe place for future maintenance. 
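As a concrete companion to the "Testing the Cluster" step above, this is a minimal sketch of pointing `kubectl` at the credentials file that RKE generates during the restore and checking node state; the file name follows the example configuration used in this procedure:

```bash
# Use the kubeconfig generated by `rke up` during the restore.
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster-restore.yml

# The new target node should transition to Ready; the three old nodes will show NotReady.
kubectl get nodes

# Confirm that Rancher and system workloads are coming back up.
kubectl get pods --all-namespaces
```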
diff --git a/content/rancher/v2.x/en/backups/restorations/k3s-restoration/_index.md b/content/rancher/v2.x/en/backups/restorations/k3s-restoration/_index.md new file mode 100644 index 00000000000..16b242a6024 --- /dev/null +++ b/content/rancher/v2.x/en/backups/restorations/k3s-restoration/_index.md @@ -0,0 +1,18 @@ +--- +title: Restoring Rancher Installed on a K3s Kubernetes Cluster +weight: 1 +--- + +When Rancher is installed on a high-availability Kubernetes cluster, we recommend using an external database to store the cluster data. + +The database administrator will need to back up the external database, or restore it from a snapshot or dump. + +We recommend configuring the database to take recurring snapshots. + +### Creating Snapshots and Restoring Databases from Snapshots + +For details on taking database snapshots and restoring your database from them, refer to the official database documentation: + +- [Official MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-snapshot-method.html) +- [Official PostgreSQL documentation](https://www.postgresql.org/docs/8.3/backup-dump.html) +- [Official etcd documentation](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md) \ No newline at end of file diff --git a/content/rancher/v2.x/en/backups/restorations/single-node-restoration/_index.md b/content/rancher/v2.x/en/backups/restorations/single-node-restoration/_index.md index 9034877c2e4..aefa51a9da5 100644 --- a/content/rancher/v2.x/en/backups/restorations/single-node-restoration/_index.md +++ b/content/rancher/v2.x/en/backups/restorations/single-node-restoration/_index.md @@ -23,7 +23,7 @@ In this command, `` and `-` are e Cross reference the image and reference table below to learn how to obtain this placeholder data. Write down or copy this information before starting the [procedure below](#creating-a-backup). Terminal `docker ps` Command, Displaying Where to Find `` and `` -![Placeholder Reference]({{< baseurl >}}/img/rancher/placeholder-ref.png) +![Placeholder Reference]({{}}/img/rancher/placeholder-ref.png) | Placeholder | Example | Description | | -------------------------- | -------------------------- | --------------------------------------------------------- | @@ -37,7 +37,7 @@ You can obtain `` and `` by loggi ## Restoring Backups -Using a [backup]({{< baseurl >}}/rancher/v2.x/en/backups/backups/single-node-backups/) that you created earlier, restore Rancher to its last known healthy state. +Using a [backup]({{}}/rancher/v2.x/en/backups/backups/single-node-backups/) that you created earlier, restore Rancher to its last known healthy state. 1. Using a remote Terminal connection, log into the node running your Rancher Server. @@ -46,9 +46,9 @@ Using a [backup]({{< baseurl >}}/rancher/v2.x/en/backups/backups/single-node-bac ``` docker stop ``` -1. Move the backup tarball that you created during completion of [Creating Backups—Docker Installs]({{< baseurl >}}/rancher/v2.x/en/backups/backups/single-node-backups/) onto your Rancher Server. Change to the directory that you moved it to. Enter `dir` to confirm that it's there. +1. Move the backup tarball that you created during completion of [Creating Backups—Docker Installs]({{}}/rancher/v2.x/en/backups/backups/single-node-backups/) onto your Rancher Server. Change to the directory that you moved it to. Enter `dir` to confirm that it's there. 
- If you followed the naming convention we suggested in [Creating Backups—Docker Installs]({{< baseurl >}}/rancher/v2.x/en/backups/backups/single-node-backups/), it will have a name similar to `rancher-data-backup--.tar.gz`. + If you followed the naming convention we suggested in [Creating Backups—Docker Installs]({{}}/rancher/v2.x/en/backups/backups/single-node-backups/), it will have a name similar to `rancher-data-backup--.tar.gz`. 1. Enter the following command to delete your current state data and replace it with your backup data, replacing the [placeholders](#before-you-start). Don't forget to close the quotes. diff --git a/content/rancher/v2.x/en/best-practices/_index.md b/content/rancher/v2.x/en/best-practices/_index.md index c5aad4106e3..41bbb4cc9c4 100644 --- a/content/rancher/v2.x/en/best-practices/_index.md +++ b/content/rancher/v2.x/en/best-practices/_index.md @@ -11,10 +11,10 @@ Use the navigation bar on the left to find the current best practices for managi For more guidance on best practices, you can consult these resources: -- [Rancher Docs]({{< baseurl >}}) - - [Monitoring]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) - - [Backups and Disaster Recovery]({{< baseurl >}}/rancher/v2.x/en/backups/) - - [Security]({{< baseurl >}}/rancher/v2.x/en/security/) +- [Rancher Docs]({{}}) + - [Monitoring]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) + - [Backups and Disaster Recovery]({{}}/rancher/v2.x/en/backups/) + - [Security]({{}}/rancher/v2.x/en/security/) - [Rancher Blog](https://rancher.com/blog/) - [Articles about best practices on the Rancher blog](https://rancher.com/tags/best-practices/) - [101 More Security Best Practices for Kubernetes](https://rancher.com/blog/2019/2019-01-17-101-more-kubernetes-security-best-practices/) diff --git a/content/rancher/v2.x/en/best-practices/deployment-types/_index.md b/content/rancher/v2.x/en/best-practices/deployment-types/_index.md index 82d177cbcaf..ff493e7fbf2 100644 --- a/content/rancher/v2.x/en/best-practices/deployment-types/_index.md +++ b/content/rancher/v2.x/en/best-practices/deployment-types/_index.md @@ -28,11 +28,11 @@ For best performance, run all three of your nodes in the same geographic datacen It's strongly recommended to have a "staging" or "pre-production" environment of the Kubernetes cluster that Rancher runs on. This environment should mirror your production environment as closely as possible in terms of software and hardware configuration. ### Monitor Your Clusters to Plan Capacity -The Rancher server's Kubernetes cluster should run within the [system and hardware requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/) as closely as possible. The more you deviate from the system and hardware requirements, the more risk you take. +The Rancher server's Kubernetes cluster should run within the [system and hardware requirements]({{}}/rancher/v2.x/en/installation/requirements/) as closely as possible. The more you deviate from the system and hardware requirements, the more risk you take. However, metrics-driven capacity planning analysis should be the ultimate guidance for scaling Rancher, because the published requirements take into account a variety of workload types. Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with Prometheus, a leading open-source monitoring solution, and Grafana, which lets you visualize the metrics from Prometheus. 
-After you [enable monitoring]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) in the cluster, you can set up [a notification channel]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) and [cluster alerts]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/alerts/) to let you know if your cluster is approaching its capacity. You can also use the Prometheus and Grafana monitoring framework to establish a baseline for key metrics as you scale. +After you [enable monitoring]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) in the cluster, you can set up [a notification channel]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) and [cluster alerts]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/) to let you know if your cluster is approaching its capacity. You can also use the Prometheus and Grafana monitoring framework to establish a baseline for key metrics as you scale. diff --git a/content/rancher/v2.x/en/best-practices/management/_index.md b/content/rancher/v2.x/en/best-practices/management/_index.md index fe7f5f75bf4..4fd202dc1ec 100644 --- a/content/rancher/v2.x/en/best-practices/management/_index.md +++ b/content/rancher/v2.x/en/best-practices/management/_index.md @@ -10,7 +10,7 @@ Rancher allows you to set up numerous combinations of configurations. Some confi These tips can help you solve problems before they happen. ### Run Rancher on a Supported OS and Supported Docker Version -Rancher is container-based and can potentially run on any Linux-based operating system. However, only operating systems listed in the [requirements documentation]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/) should be used for running Rancher, along with a supported version of Docker. These versions have been most thoroughly tested and can be properly supported by the Rancher Support team. +Rancher is container-based and can potentially run on any Linux-based operating system. However, only operating systems listed in the [requirements documentation]({{}}/rancher/v2.x/en/installation/requirements/) should be used for running Rancher, along with a supported version of Docker. These versions have been most thoroughly tested and can be properly supported by the Rancher Support team. ### Upgrade Your Kubernetes Version Keep your Kubernetes cluster up to date with a recent and supported version. Typically the Kubernetes community will support the current version and previous three minor releases (for example, 1.14.x, 1.13.x, 1.12.x, and 1.11.x). After a new version is released, the third-oldest supported version reaches EOL (End of Life) status. Running on an EOL release can be a risk if a security issues are found and patches are not available. The community typically makes minor releases every quarter (every three months). @@ -29,11 +29,11 @@ Rancher [maintains a Terraform provider](https://rancher.com/blog/2019/rancher-2 All upgrades, both patch and feature upgrades, should be first tested on a staging environment before production is upgraded. The more closely the staging environment mirrors production, the higher chance your production upgrade will be successful. ### Renew Certificates Before they Expire -Multiple people in your organization should set up calendar reminders for certificate renewal. Consider renewing the certificate two weeks to one month in advance. If you have multiple certificates to track, consider using [monitoring and alerting mechanisms]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/) to track certificate expiration. 
+Multiple people in your organization should set up calendar reminders for certificate renewal. Consider renewing the certificate two weeks to one month in advance. If you have multiple certificates to track, consider using [monitoring and alerting mechanisms]({{}}/rancher/v2.x/en/cluster-admin/tools/) to track certificate expiration. Rancher-provisioned Kubernetes clusters will use certificates that expire in one year. Clusters provisioned by other means may have a longer or shorter expiration. -Certificates can be renewed for Rancher-provisioned clusters [through the Rancher user interface]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/certificate-rotation/). +Certificates can be renewed for Rancher-provisioned clusters [through the Rancher user interface]({{}}/rancher/v2.x/en/cluster-admin/certificate-rotation/). ### Enable Recurring Snapshots for Backing up and Restoring the Cluster Make sure etcd recurring snapshots are enabled. Extend the snapshot retention to a period of time that meets your business needs. In the event of a catastrophic failure or deletion of data, this may be your only recourse for recovery. For details about configuring snapshots, refer to the [RKE documentation]({{}}/rke/latest/en/etcd-snapshots/) or the [Rancher documentation on backups]({{}}/rancher/v2.x/en/backups/). @@ -78,13 +78,13 @@ Provision 3 or 5 etcd nodes. Etcd requires a quorum to determine a leader by the Provision two or more control plane nodes. Some control plane components, such as the `kube-apiserver`, run in [active-active](https://www.jscape.com/blog/active-active-vs-active-passive-high-availability-cluster) mode and will give you more scalability. Other components such as kube-scheduler and kube-controller run in active-passive mode (leader elect) and give you more fault tolerance. ### Monitor Your Cluster -Closely monitor and scale your nodes as needed. You should [enable cluster monitoring]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and use the Prometheus metrics and Grafana visualization options as a starting point. +Closely monitor and scale your nodes as needed. You should [enable cluster monitoring]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and use the Prometheus metrics and Grafana visualization options as a starting point. # Tips for Security Below are some basic tips for increasing security in Rancher. For more detailed information about securing your cluster, you can refer to these resources: -- Rancher's [security documentation and Kubernetes cluster hardening guide]({{< baseurl >}}/rancher/v2.x/en/security/) +- Rancher's [security documentation and Kubernetes cluster hardening guide]({{}}/rancher/v2.x/en/security/) - [101 More Security Best Practices for Kubernetes](https://rancher.com/blog/2019/2019-01-17-101-more-kubernetes-security-best-practices/) ### Update Rancher with Security Patches diff --git a/content/rancher/v2.x/en/catalog/_index.md b/content/rancher/v2.x/en/catalog/_index.md index 447d3a2f4be..48ecd6612fd 100644 --- a/content/rancher/v2.x/en/catalog/_index.md +++ b/content/rancher/v2.x/en/catalog/_index.md @@ -17,31 +17,17 @@ Rancher improves on Helm catalogs and charts. 
All native Helm charts can work wi This section covers the following topics: -- [Prerequisites](#prerequisites) - [Catalog scopes](#catalog-scopes) -- [Enabling built-in global catalogs](#enabling-built-in-global-catalogs) -- [Adding custom global catalogs](#adding-custom-global-catalogs) - - [Add custom Git repositories](#add-custom-git-repositories) - - [Add custom Helm chart repositories](#add-custom-helm-chart-repositories) - - [Add private Git/Helm chart repositories](#add-private-git-helm-chart-repositories) -- [Launching catalog applications](#launching-catalog-applications) -- [Working with catalogs](#working-with-catalogs) - - [Apps](#apps) - - [Global DNS](#global-dns) - - [Chart compatibility with Rancher](#chart-compatibility-with-rancher) - -# Prerequisites - -When Rancher deploys a catalog app, it launches an ephemeral instance of a Helm service account that has the permissions of the user deploying the catalog app. Therefore, a user cannot gain more access to the cluster through Helm or a catalog application than they otherwise would have. - -To launch a catalog app or a multi-cluster app, you should have at least one of the following permissions: - -- A [project-member role]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) in the target cluster, which gives you the ability to create, read, update, and delete the workloads -- A [cluster owner role]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) for the cluster that include the target project +- [Catalog Helm Deployment Versions](#catalog-helm-deployment-versions) +- [Built-in global catalogs](#built-in-global-catalogs) +- [Custom catalogs](#custom-catalogs) +- [Creating and launching applications](#creating-and-launching-applications) +- [Chart compatibility with Rancher](#chart-compatibility-with-rancher) +- [Global DNS](#global-dns) # Catalog Scopes -Within Rancher, you can manage catalogs at three different scopes. Global catalogs are shared across all clusters and project. There are some use cases where you might not want to share catalogs across between different clusters or even projects in the same cluster. By leveraging cluster and project scoped catalogs, you will be able to provide applications for specific teams without needing to share them with all clusters and/or projects. +Within Rancher, you can manage catalogs at three different scopes. Global catalogs are shared across all clusters and project. There are some use cases where you might not want to share catalogs between different clusters or even projects in the same cluster. By leveraging cluster and project scoped catalogs, you will be able to provide applications for specific teams without needing to share them with all clusters and/or projects. Scope | Description | Available As of | --- | --- | --- | @@ -49,119 +35,48 @@ Global | All clusters and all projects can access the Helm charts in this catalo Cluster | All projects in the specific cluster can access the Helm charts in this catalog | v2.2.0 | Project | This specific cluster can access the Helm charts in this catalog | v2.2.0 | -# Enabling Built-in Global Catalogs +# Catalog Helm Deployment Versions -Within Rancher, there are default catalogs packaged as part of Rancher. These can be enabled or disabled by an administrator. +_Applicable as of v2.4.0_ -1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar. 
+In November 2019, Helm 3 was released, and some features were deprecated or refactored. It is not fully backwards compatible with Helm 2. Therefore, catalogs in Rancher need to be separated, with each catalog only using one Helm version. -2. Toggle the default catalogs that you want use to a setting of **Enabled**. +When you create a custom catalog, you will have to configure the catalog to use either Helm 2 or Helm 3. This version cannot be changed later. If the catalog is added with the wrong Helm version, it will need to be deleted and re-added. - - **Library** +When you launch a new app from a catalog, the app will be managed by the catalog's Helm version. A Helm 2 catalog will use Helm 2 to manage all of the apps, and a Helm 3 catalog will use Helm 3 to manage all apps. - The Library Catalog includes charts curated by Rancher. Rancher stores charts in a Git repository to expedite the fetch and update of charts. In Rancher 2.x, only global catalogs are supported. Support for cluster-level and project-level charts will be added in the future. +By default, catalogs are assumed to be deployed using Helm 2. If you run an app in Rancher prior to v2.4.0, then upgrade to Rancher v2.4.0+, the app will still be managed by Helm 2. If the app was already using a Helm 3 Chart (API version 2) it will no longer work in v2.4.0+. You must either downgrade the chart's API version or recreate the catalog to use Helm 3. - This catalog features Rancher Charts, which include some [notable advantages]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/#chart-types) over native Helm charts. +Charts that are specific to Helm 2 should only be added to a Helm 2 catalog, and Helm 3 specific charts should only be added to a Helm 3 catalog. - - **Helm Stable** +# Built-in Global Catalogs - This catalog, , which is maintained by the Kubernetes community, includes native [Helm charts](https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/getting_started.md). This catalog features the largest pool of apps. +Within Rancher, there are default catalogs packaged as part of Rancher. These can be enabled or disabled by an administrator. For details, refer to the section on managing [built-in global catalogs.]({{}}/rancher/v2.x/en/catalog/built-in) - - **Helm Incubator** +# Custom Catalogs - Similar in user experience to Helm Stable, but this catalog is filled with applications in **beta**. +There are two types of catalogs in Rancher: [Built-in global catalogs]({{}}/rancher/v2.x/en/catalog/built-in/) and [custom catalogs.]({{}}/rancher/v2.x/en/catalog/adding-catalogs/) - **Result**: The chosen catalogs are enabled. Wait a few minutes for Rancher to replicate the catalog charts. When replication completes, you'll be able to see them in any of your projects by selecting **Apps** from the main navigation bar. In versions prior to v2.2.0, you can select **Catalog Apps** from the main navigation bar. +Any user can create custom catalogs to add into Rancher. Custom catalogs can be added into Rancher at the global level, cluster level, or project level. For details, refer to the [section on adding custom catalogs]({{}}/rancher/v2.x/en/catalog/adding-catalogs) and the [catalog configuration reference.]({{}}/rancher/v2.x/en/catalog/catalog-config) -# Adding Custom Global Catalogs +# Creating and Launching Applications -Adding a catalog is as simple as adding a catalog name, a URL and a branch name. +In Rancher, applications are deployed from the templates in a catalog. 
This section covers the following topics: -### Add Custom Git Repositories -The Git URL needs to be one that `git clone` [can handle](https://git-scm.com/docs/git-clone#_git_urls_a_id_urls_a) and must end in `.git`. The branch name must be a branch that is in your catalog URL. If no branch name is provided, it will use the `master` branch by default. Whenever you add a catalog to Rancher, it will be available immediately. +* [Multi-cluster applications]({{}}/rancher/v2.x/en/catalog/multi-cluster-apps/) +* [Creating catalog apps]({{}}/rancher/v2.x/en/catalog/creating-apps) +* [Launching catalog apps within a project]({{}}/rancher/v2.x/en/catalog/launching-apps) +* [Managing catalog apps]({{}}/rancher/v2.x/en/catalog/managing-apps) +* [Tutorial: Example custom chart creation]({{}}/rancher/v2.x/en/catalog/tutorial) -### Add Custom Helm Chart Repositories +# Chart Compatibility with Rancher -A Helm chart repository is an HTTP server that houses one or more packaged charts. Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server. +Charts now support the fields `rancher_min_version` and `rancher_max_version` in the [`questions.yml` file](https://github.com/rancher/integration-test-charts/blob/master/charts/chartmuseum/v1.6.0/questions.yml) to specify the versions of Rancher that the chart is compatible with. When using the UI, only app versions that are valid for the version of Rancher running will be shown. API validation is done to ensure apps that don't meet the Rancher requirements cannot be launched. An app that is already running will not be affected on a Rancher upgrade if the newer Rancher version does not meet the app's requirements. -Helm comes with built-in package server for developer testing (helm serve). The Helm team has tested other servers, including Google Cloud Storage with website mode enabled, S3 with website mode enabled or hosting custom chart repository server using open-source projects like [ChartMuseum](https://github.com/helm/chartmuseum). - -In Rancher, you can add the custom Helm chart repository with only a catalog name and the URL address of the chart repository. - -### Add Private Git/Helm Chart Repositories -_Available as of v2.2.0_ - -In Rancher v2.2.0, you can add private catalog repositories using credentials like Username and Password. You may also want to use the -OAuth token if your Git or Helm repository server support that. - -[Read More About Adding Private Git/Helm Catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/#private-repositories) - - - - 1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar. - 2. Click **Add Catalog**. - 3. Complete the form and click **Create**. - - **Result**: Your catalog is added to Rancher. - -# Launching Catalog Applications - -After you've either enabled the built-in catalogs or added your own custom catalog, you can start launching any catalog application.> - -1. From the **Global** view, open the project that you want to deploy to. - -2. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**. - -3. Find the app that you want to launch, and then click **View Now**. - -4. Under **Configuration Options** enter a **Name**. By default, this name is also used to create a Kubernetes namespace for the application. 
- - * If you would like to change the **Namespace**, click **Customize** and enter a new name. - * If you want to use a different namespace that already exists, click **Customize**, and then click **Use an existing namespace**. Choose a namespace from the list. - -5. Select a **Template Version**. - -6. Complete the rest of the **Configuration Options**. - - * For native Helm charts (i.e., charts from the **Helm Stable** or **Helm Incubator** catalogs), answers are provided as key value pairs in the **Answers** section. - * Keys and values are available within **Detailed Descriptions**. - * When entering answers, you must format them using the syntax rules found in [Using Helm: The format and limitations of --set](https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of-set), as Rancher passes them as `--set` flags to Helm. - - For example, when entering an answer that includes two values separated by a comma (i.e., `abc, bcd`), wrap the values with double quotes (i.e., `"abc, bcd"`). - -7. Review the files in **Preview**. When you're satisfied, click **Launch**. - -**Result**: Your application is deployed to your chosen namespace. You can view the application status from the project's: - -By creating a customized repository with added files, Rancher improves on Helm repositories and charts. All native Helm charts can work within Rancher, but Rancher adds several enhancements to improve their user experience. - -# Working with Catalogs - -There are two types of catalogs in Rancher. Learn more about each type: - -* [Built-in Global Catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/built-in/) -* [Custom Catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/) - -### Apps - -In Rancher, applications are deployed from the templates in a catalog. Rancher supports two types of applications: - -* [Multi-cluster applications]({{< baseurl >}}/rancher/v2.x/en/catalog/multi-cluster-apps/) -* [Applications deployed in a specific Project]({{< baseurl >}}/rancher/v2.x/en/catalog/apps) - -### Global DNS +# Global DNS _Available as v2.2.0_ When creating applications that span multiple Kubernetes clusters, a Global DNS entry can be created to route traffic to the endpoints in all of the different clusters. An external DNS server will need be programmed to assign a fully qualified domain name (a.k.a FQDN) to your application. Rancher will use the FQDN you provide and the IP addresses where your application is running to program the DNS. Rancher will gather endpoints from all the Kubernetes clusters running your application and program the DNS. -For more information on how to use this feature, see [Global DNS]({{< baseurl >}}/rancher/v2.x/en/catalog/globaldns/). - -### Chart Compatibility with Rancher - -Charts now support the fields `rancher_min_version` and `rancher_max_version` in the [`questions.yml` file](https://github.com/rancher/integration-test-charts/blob/master/charts/chartmuseum/v1.6.0/questions.yml) to specify the versions of Rancher that the chart is compatible with. When using the UI, only app versions that are valid for the version of Rancher running will be shown. API validation is done to ensure apps that don't meet the Rancher requirements cannot be launched. An app that is already running will not be affected on a Rancher upgrade if the newer Rancher version does not meet the app's requirements. +For more information on how to use this feature, see [Global DNS]({{}}/rancher/v2.x/en/catalog/globaldns/). 
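To sanity-check a Global DNS entry after the external DNS server has been programmed, you can look up the FQDN and compare the returned addresses with the endpoints of the clusters running the application. A sketch, with `myapp.example.com` standing in for your FQDN:

```bash
# Confirm the FQDN assigned to the application resolves to the expected cluster endpoints.
dig +short myapp.example.com
```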
diff --git a/content/rancher/v2.x/en/catalog/adding-catalogs/_index.md b/content/rancher/v2.x/en/catalog/adding-catalogs/_index.md new file mode 100644 index 00000000000..d8540b3cf42 --- /dev/null +++ b/content/rancher/v2.x/en/catalog/adding-catalogs/_index.md @@ -0,0 +1,106 @@ +--- +title: Creating Custom Catalogs +weight: 200 +aliases: + - /rancher/v2.x/en/tasks/global-configuration/catalog/adding-custom-catalogs/ + - /rancher/v2.x/en/catalog/custom/adding +--- + +Custom catalogs can be added into Rancher at a global scope, cluster scope, or project scope. + +- [Adding catalog repositories](#adding-catalog-repositories) + - [Add custom Git repositories](#add-custom-git-repositories) + - [Add custom Helm chart repositories](#add-custom-helm-chart-repositories) + - [Add private Git/Helm chart repositories](#add-private-git-helm-chart-repositories) +- [Adding global catalogs](#adding-global-catalogs) +- [Adding cluster level catalogs](#adding-cluster-level-catalogs) +- [Adding project level catalogs](#adding-project-level-catalogs) +- [Custom catalog configuration reference](#custom-catalog-configuration-reference) + +# Adding Catalog Repositories + +Adding a catalog is as simple as adding a catalog name, a URL and a branch name. + +**Prerequisite:** An [admin]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) of Rancher has the ability to add or remove catalogs globally in Rancher. + +### Add Custom Git Repositories +The Git URL needs to be one that `git clone` [can handle](https://git-scm.com/docs/git-clone#_git_urls_a_id_urls_a) and must end in `.git`. The branch name must be a branch that is in your catalog URL. If no branch name is provided, it will use the `master` branch by default. Whenever you add a catalog to Rancher, it will be available immediately. + +### Add Custom Helm Chart Repositories + +A Helm chart repository is an HTTP server that houses one or more packaged charts. Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server. + +Helm comes with built-in package server for developer testing (helm serve). The Helm team has tested other servers, including Google Cloud Storage with website mode enabled, S3 with website mode enabled or hosting custom chart repository server using open-source projects like [ChartMuseum](https://github.com/helm/chartmuseum). + +In Rancher, you can add the custom Helm chart repository with only a catalog name and the URL address of the chart repository. + +### Add Private Git/Helm Chart Repositories +_Available as of v2.2.0_ + +Private catalog repositories can be added using credentials like Username and Password. You may also want to use the OAuth token if your Git or Helm repository server supports that. + +For more information on private Git/Helm catalogs, refer to the [custom catalog configuration reference.]({{}}/rancher/v2.x/en/catalog/catalog-config) + + 1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar. + 2. Click **Add Catalog**. + 3. Complete the form and click **Create**. + + **Result:** Your catalog is added to Rancher. 
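For the custom Helm chart repository case described above, the repository only needs to serve an `index.yaml` plus the packaged chart archives over HTTP. A minimal sketch of preparing those files, with the chart directory and URL as placeholders:

```bash
# Package a chart and generate the repository index that any HTTP server can serve.
helm package ./mychart                               # produces mychart-<version>.tgz
helm repo index . --url https://charts.example.com   # writes index.yaml for the repository
```

Any server that answers GET requests for `index.yaml` and the `.tgz` files, such as a ChartMuseum instance or an object storage bucket in website mode, can then be added as the catalog URL.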
+ +# Adding Global Catalogs + +>**Prerequisites:** In order to manage the [built-in catalogs]({{}}/rancher/v2.x/en/catalog/built-in/) or manage global catalogs, you need _one_ of the following permissions: +> +>- [Administrator Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) +>- [Custom Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Catalogs]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned. + + 1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar. + 2. Click **Add Catalog**. + 3. Complete the form. Select the Helm version that will be used to launch all of the apps in the catalog. For more information about the Helm version, refer to [this section.]( +{{}}/rancher/v2.x/en/catalog/#catalog-helm-deployment-versions) +4. Click **Create**. + + **Result**: Your custom global catalog is added to Rancher. Once it is in `Active` state, it has completed synchronization and you will be able to start deploying [multi-cluster apps]({{}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or [applications in any project]({{}}/rancher/v2.x/en/catalog/launching-apps/) from this catalog. + +# Adding Cluster Level Catalogs + +_Available as of v2.2.0_ + +>**Prerequisites:** In order to manage cluster scoped catalogs, you need _one_ of the following permissions: +> +>- [Administrator Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) +>- [Cluster Owner Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) +>- [Custom Cluster Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) with the [Manage Cluster Catalogs]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-role-reference) role assigned. + +1. From the **Global** view, navigate to your cluster that you want to start adding custom catalogs. +2. Choose the **Tools > Catalogs** in the navigation bar. +2. Click **Add Catalog**. +3. Complete the form. By default, the form will provide the ability to select `Scope` of the catalog. When you have added a catalog from the **Cluster** scope, it is defaulted to `Cluster`. Select the Helm version that will be used to launch all of the apps in the catalog. For more information about the Helm version, refer to [this section.]( +{{}}/rancher/v2.x/en/catalog/#catalog-helm-deployment-versions) +5. Click **Create**. + +**Result**: Your custom cluster catalog is added to Rancher. Once it is in `Active` state, it has completed synchronization and you will be able to start deploying [applications in any project in that cluster]({{}}/rancher/v2.x/en/catalog/apps/) from this catalog. 
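Because a catalog's Helm version cannot be changed after it is created, it can be worth checking which Helm version a chart targets before filling in the form. A sketch, assuming a local checkout of the chart; per the Helm deployment versions section above, charts with API version 2 are Helm 3 charts, while `apiVersion: v1` charts are Helm 2 charts:

```bash
# Check the chart API version before choosing the catalog's Helm version.
# apiVersion: v1 -> add to a Helm 2 catalog; apiVersion: v2 -> add to a Helm 3 catalog.
grep '^apiVersion' ./my-chart/Chart.yaml
```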
+ +# Adding Project Level Catalogs + +_Available as of v2.2.0_ + +>**Prerequisites:** In order to manage project scoped catalogs, you need _one_ of the following permissions: +> +>- [Administrator Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) +>- [Cluster Owner Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) +>- [Project Owner Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) +>- [Custom Project Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) with the [Manage Project Catalogs]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-role-reference) role assigned. + +1. From the **Global** view, navigate to your project that you want to start adding custom catalogs. +2. Choose the **Tools > Catalogs** in the navigation bar. +2. Click **Add Catalog**. +3. Complete the form. By default, the form will provide the ability to select `Scope` of the catalog. When you have added a catalog from the **Project** scope, it is defaulted to `Cluster`. Select the Helm version that will be used to launch all of the apps in the catalog. For more information about the Helm version, refer to [this section.]( +{{}}/rancher/v2.x/en/catalog/#catalog-helm-deployment-versions) +5. Click **Create**. + +**Result**: Your custom project catalog is added to Rancher. Once it is in `Active` state, it has completed synchronization and you will be able to start deploying [applications in that project]({{}}/rancher/v2.x/en/catalog/apps/) from this catalog. + +# Custom Catalog Configuration Reference + +Refer to [this page]({{}}/rancher/v2.x/en/catalog/catalog-config) more information on configuring custom catalogs. \ No newline at end of file diff --git a/content/rancher/v2.x/en/catalog/apps/_index.md b/content/rancher/v2.x/en/catalog/apps/_index.md deleted file mode 100644 index 04d509449fc..00000000000 --- a/content/rancher/v2.x/en/catalog/apps/_index.md +++ /dev/null @@ -1,170 +0,0 @@ ---- -title: Apps in a Project -weight: 5005 ---- - -Within a project, when you want to deploy applications from catalogs, the applications available in your project will be based on the [scope of the catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/#catalog-scope). - -If your application is using ingresses, you can program the ingress hostname to an external DNS by setting up a [Global DNS entry]({{< baseurl >}}/rancher/v2.x/en/catalog/globaldns/). - -## Prerequisites - -To create a multi-cluster app in Rancher, you must have at least one of the following permissions: - -- A [project-member role]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) in the target cluster, which gives you the ability to create, read, update, and delete the workloads -- A [cluster owner role]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) for the cluster that include the target project - -## Launching Catalog Applications - -After you've either enabled the [built-in global catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/built-in/) or [added your own custom catalog]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/adding), you can start launching catalog applications. - -1. From the **Global** view, navigate to your project that you want to start deploying applications. - -2. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**. - -3. 
Find the application that you want to launch, and then click **View Details**. - -4. (Optional) Review the detailed descriptions, which comes from the Helm chart's `README`. - -5. Under **Configuration Options** enter a **Name**. By default, this name is also used to create a Kubernetes namespace for the application. - - * If you would like to change the **Namespace**, click **Customize** and change the name of the namespace. - * If you want to use a different namespace that already exists, click **Customize**, and then click **Use an existing namespace**. Choose a namespace from the list. - -6. Select a **Template Version**. - -7. Complete the rest of the **Configuration Options**. Rancher handles how to [customize your configuration options](#configuration-options) depending on whether or not the custom catalog includes the `questions.yml` file. - -8. Review the files in the **Preview** section. When you're satisfied, click **Launch**. - -**Result**: Your application is deployed to your chosen namespace. You can view the application status from the project's: - -- **Workloads** view -- **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view. - -### Configuration Options - -For each Helm chart, there are a list of desired answers that must be entered in order to successfully deploy the chart. When entering answers, you must format them using the syntax rules found in [Using Helm: The format and limitations of –set](https://github.com/helm/helm/blob/master/docs/using_helm.md#the-format-and-limitations-of---set), as Rancher passes them as `--set` flags to Helm. - -> For example, when entering an answer that includes two values separated by a comma (i.e. `abc, bcd`), it is required to wrap the values with double quotes (i.e., ``"abc, bcd"``). - -{{% tabs %}} -{{% tab "UI" %}} - -#### Using a `questions.yml` file - -If the Helm chart that you are deploying contains a `questions.yml` file, Rancher's UI will translate this file to display an easy to use UI to collect the answers for the questions. - -#### Key Value Pairs for Native Helm Charts - -For native Helm charts (i.e., charts from the **Helm Stable** or **Helm Incubator** catalogs or a [custom Helm chart repository]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/#custom-helm-chart-repository)), answers are provided as key value pairs in the **Answers** section. These answers are used to override the default values. - -{{% /tab %}} -{{% tab "Editing YAML Files" %}} - -_Available as of v2.1.0_ - -If you do not want to input answers using the UI, you can choose the **Edit as YAML** option. - -With this example YAML: - -```YAML -outer: - inner: value -servers: -- port: 80 - host: example -``` - -#### Kev Value Pairs - -You can have a YAML file that translates these fields to match how to [format custom values so that it can be used with `--set`](https://github.com/helm/helm/blob/master/docs/using_helm.md#the-format-and-limitations-of---set). - -These values would be translated to: - -``` -outer.inner=value -servers[0].port=80 -servers[0].host=example -``` - -#### YAML files - -_Available as of v2.2.0_ - -You can directly paste that YAML formatted structure into the YAML editor. By allowing custom values to be set using a YAML formatted structure, Rancher has the ability to easily customize for more complicated input values (e.g. multi-lines, array and JSON objects). 
-{{% /tab %}} -{{% /tabs %}} - -## Application Management - -After deploying an application, one of the benefits of using an application versus individual workloads/resources is the ease of being able to manage many workloads/resources applications. Apps can be cloned, upgraded or rolled back. - -### Cloning Catalog Applications - -After an application is deployed, you can easily clone it to use create another application with almost the same configuration. It saves you the work of manually filling in duplicate information. - -### Upgrading Catalog Applications - -After an application is deployed, you can easily upgrade to a different template version. - -1. From the **Global** view, navigate to the project that contains the catalog application that you want to upgrade. - -1. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**. - -3. Find the application that you want to upgrade, and then click the Ellipsis to find **Upgrade**. - -4. Select the **Template Version** that you want to deploy. - -5. (Optional) Update your **Configuration Options**. - -6. (Optional) Select whether or not you want to force the catalog application to be upgraded by checking the box for **Delete and recreate resources if needed during the upgrade**. - - > In Kubernetes, some fields are designed to be immutable or cannot be updated directly. As of v2.2.0, you can now force your catalog application to be updated regardless of these fields. This will cause the catalog apps to be deleted and resources to be re-created if needed during the upgrade. - -7. Review the files in the **Preview** section. When you're satisfied, click **Launch**. - -**Result**: Your application is updated. You can view the application status from the project's: - -- **Workloads** view -- **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view. - - -### Rolling Back Catalog Applications - -After an application has been upgraded, you can easily rollback to a different template version. - -1. From the **Global** view, navigate to the project that contains the catalog application that you want to upgrade. - -1. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**. - -3. Find the application that you want to rollback, and then click the Ellipsis to find **Rollback**. - -4. Select the **Revision** that you want to roll back to. By default, Rancher saves up to the last 10 revisions. - -5. (Optional) Select whether or not you want to force the catalog application to be upgraded by checking the box for **Delete and recreate resources if needed during the upgrade**. - - > In Kubernetes, some fields are designed to be immutable or cannot be updated directly. As of v2.2.0, you can now force your catalog application to be updated regardless of these fields. This will cause the catalog apps to be deleted and resources to be re-created if needed during the rollback. - -7. Click **Rollback**. - -**Result**: Your application is updated. You can view the application status from the project's: - -- **Workloads** view -- **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view. - -### Deleting Catalog Application Deployments - -As a safeguard to prevent you from unintentionally deleting other catalog applications that share a namespace, deleting catalog applications themselves does not delete the namespace they're assigned to. 
- -Therefore, if you want to delete both an app and the namespace that contains the app, you should remove the app and the namespace separately: - -1. Uninstall the app using the app's `uninstall` function. - -1. From the **Global** view, navigate to the project that contains the catalog application that you want to delete. - -1. From the main menu, choose **Namespaces**. - -1. Find the namespace running your catalog app. Select it and click **Delete**. - -**Result:** The catalog application deployment and its namespace are deleted. diff --git a/content/rancher/v2.x/en/catalog/built-in/_index.md b/content/rancher/v2.x/en/catalog/built-in/_index.md index 54a1268c88f..5b86667717b 100644 --- a/content/rancher/v2.x/en/catalog/built-in/_index.md +++ b/content/rancher/v2.x/en/catalog/built-in/_index.md @@ -1,35 +1,25 @@ --- -title: Built-in Global Catalogs -weight: 4000 +title: Enabling and Disabling Built-in Global Catalogs +weight: 100 aliases: - /rancher/v2.x/en/tasks/global-configuration/catalog/enabling-default-catalogs/ --- -There are default [global catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/#global-catalogs) packaged as part of Rancher. +There are default global catalogs packaged as part of Rancher. -## Managing Built-in Global Catalogs +Within Rancher, there are default catalogs packaged as part of Rancher. These can be enabled or disabled by an administrator. ->**Prerequisites:** In order to manage the built-in catalogs or [manage global catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/adding/#adding-global-catalogs), you need _one_ of the following permissions: +>**Prerequisites:** In order to manage the built-in catalogs or manage global catalogs, you need _one_ of the following permissions: > ->- [Administrator Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) ->- [Custom Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Catalogs]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned. +>- [Administrator Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) +>- [Custom Global Permissions]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Catalogs]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions-reference) role assigned. 1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar. -2. Toggle the default catalogs that you want use to a setting of **Enabled**. +2. Toggle the default catalogs that you want to be enabled or disabled: - - **Library** - - The Library Catalog includes charts curated by Rancher. Rancher stores charts in a Git repository to expedite the fetch and update of charts. In Rancher 2.x, only global catalogs are supported. Support for cluster-level and project-level charts will be added in the future. - - This catalog features Rancher Charts, which include some [notable advantages]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/#chart-types) over native Helm charts. - - - **Helm Stable** - - This catalog, , which is maintained by the Kubernetes community, includes native [Helm charts](https://github.com/kubernetes/helm/blob/master/docs/chart_template_guide/getting_started.md). This catalog features the largest pool of apps. 
- - - **Helm Incubator** - - Similar in user experience to Helm Stable, but this catalog is filled with applications in **beta**. + - **Library:** The Library Catalog includes charts curated by Rancher. Rancher stores charts in a Git repository to expedite the fetch and update of charts. This catalog features Rancher Charts, which include some [notable advantages]({{}}/rancher/v2.x/en/catalog/creating-apps/#rancher-charts) over native Helm charts. + - **Helm Stable:** This catalog, which is maintained by the Kubernetes community, includes native [Helm charts](https://helm.sh/docs/chart_template_guide/). This catalog features the largest pool of apps. + - **Helm Incubator:** Similar in user experience to Helm Stable, but this catalog is filled with applications in **beta**. **Result**: The chosen catalogs are enabled. Wait a few minutes for Rancher to replicate the catalog charts. When replication completes, you'll be able to see them in any of your projects by selecting **Apps** from the main navigation bar. In versions prior to v2.2.0, within a project, you can select **Catalog Apps** from the main navigation bar. diff --git a/content/rancher/v2.x/en/catalog/custom/_index.md b/content/rancher/v2.x/en/catalog/catalog-config/_index.md similarity index 60% rename from content/rancher/v2.x/en/catalog/custom/_index.md rename to content/rancher/v2.x/en/catalog/catalog-config/_index.md index 771097c6ec6..229f65e1c97 100644 --- a/content/rancher/v2.x/en/catalog/custom/_index.md +++ b/content/rancher/v2.x/en/catalog/catalog-config/_index.md @@ -1,24 +1,32 @@ --- -title: Custom Catalogs -weight: 4020 +title: Custom Catalog Configuration Reference +weight: 300 aliases: - + - /rancher/v2.x/en/catalog/catalog-config --- -Any user can [create custom catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/creating/) to add into Rancher. Besides the content of the catalog, users must ensure their catalogs are able to be added into Rancher. +Any user can create custom catalogs to add into Rancher. Besides the content of the catalog, users must ensure their catalogs are able to be added into Rancher. -## Types of Repositories +- [Types of Repositories](#types-of-repositories) +- [Custom Git Repository](#custom-git-repository) +- [Custom Helm Chart Repository](#custom-helm-chart-repository) +- [Catalog Fields](#catalog-fields) +- [Private Repositories](#private-repositories) + - [Using Username and Password](#using-username-and-password) + - [Using an OAuth token](#using-an-oauth-token) + +# Types of Repositories Rancher supports adding in different types of repositories as a catalog: * Custom Git Repository * Custom Helm Chart Repository -### Custom Git Repository +# Custom Git Repository The Git URL needs to be one that `git clone` [can handle](https://git-scm.com/docs/git-clone#_git_urls_a_id_urls_a) and must end in `.git`. The branch name must be a branch that is in your catalog URL. If no branch name is provided, it will default to use the `master` branch. Whenever you add a catalog to Rancher, it will be available almost immediately. -### Custom Helm Chart Repository +# Custom Helm Chart Repository A Helm chart repository is an HTTP server that contains one or more packaged charts. Any HTTP server that can serve YAML files and tar files and can answer GET requests can be used as a repository server. @@ -26,9 +34,9 @@ Helm comes with a built-in package server for developer testing (`helm serve`). 
In Rancher, you can add the custom Helm chart repository with only a catalog name and the URL address of the chart repository. -## Catalog Fields +# Catalog Fields -When [adding your catalog]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/adding/) to Rancher, you'll provide the following information: +When [adding your catalog]({{}}/rancher/v2.x/en/catalog/custom/adding/) to Rancher, you'll provide the following information: | Variable | Description | @@ -36,11 +44,12 @@ When [adding your catalog]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/adding | Name | Name for your custom catalog to distinguish the repositories in Rancher | | Catalog URL | URL of your custom chart repository| | Use Private Catalog | Selected if you are using a private repository that requires authentication | -| Username (Optional) | [Username](#using-username-and-password) or [OAuth Token](#using-an-oauth-token) | -| Password (Optional) | If you are authenticating using [username](#using-username-and-password), the associated password. If you are using an [OAuth Token](#using-an-oauth-token), use `x-oauth-basic`. | +| Username (Optional) | Username or OAuth Token | +| Password (Optional) | If you are authenticating using a username, enter the associated password. If you are using an OAuth token, use `x-oauth-basic`. | | Branch | For a Git repository, the branch name. Default: `master`. For a Helm Chart repository, this field is ignored. | +| Helm version | The Helm version that will be used to deploy all of the charts in the catalog. This field cannot be changed later. For more information, refer to the [section on Helm versions.]({{}}/rancher/v2.x/en/catalog/#catalog-helm-deployment-versions) | -## Private Repositories +# Private Repositories _Available as of v2.2.0_ @@ -48,7 +57,7 @@ Private Git or Helm chart repositories can be added into Rancher using either cr ### Using Username and Password -1. When [adding the catalog]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/adding/), select the **Use private catalog** checkbox. +1. When [adding the catalog]({{}}/rancher/v2.x/en/catalog/custom/adding/), select the **Use private catalog** checkbox. 2. Provide the `Username` and `Password` for your Git or Helm repository. @@ -59,6 +68,6 @@ Read [using Git over HTTPS and OAuth](https://github.blog/2012-09-21-easier-buil 1. Create an [OAuth token](https://github.com/settings/tokens) with `repo` permission selected, and click **Generate token**. -2. When [adding the catalog]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/adding/), select the **Use private catalog** checkbox. +2. When [adding the catalog]({{}}/rancher/v2.x/en/catalog/custom/adding/), select the **Use private catalog** checkbox. 3. For `Username`, provide the Git generated OAuth token. For `Password`, enter `x-oauth-basic`. 
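As an illustration of the custom Helm chart repository described above, the sketch below shows the kind of `index.yaml` such a repository serves next to its packaged `.tgz` archives; an index like this is usually generated with `helm repo index`. The chart name `wordpress`, the `charts.example.com` URL, the timestamps, and the digest are placeholder values, not part of the Rancher documentation.

```yaml
# Minimal sketch of a chart repository index.yaml (placeholder values).
apiVersion: v1
entries:
  wordpress:                       # hypothetical chart name
    - apiVersion: v1
      name: wordpress
      version: 1.0.0
      description: Example packaged chart served by the repository
      urls:
        - https://charts.example.com/wordpress-1.0.0.tgz
      created: "2020-01-01T00:00:00Z"
      digest: 0000000000000000000000000000000000000000000000000000000000000000   # placeholder
generated: "2020-01-01T00:00:00Z"
```

Rancher only needs the catalog name and the URL that serves this index, as noted in the Catalog Fields table above.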
diff --git a/content/rancher/v2.x/en/catalog/custom/creating/_index.md b/content/rancher/v2.x/en/catalog/creating-apps/_index.md similarity index 51% rename from content/rancher/v2.x/en/catalog/custom/creating/_index.md rename to content/rancher/v2.x/en/catalog/creating-apps/_index.md index bc1ed5e919d..d59893cd9de 100644 --- a/content/rancher/v2.x/en/catalog/custom/creating/_index.md +++ b/content/rancher/v2.x/en/catalog/creating-apps/_index.md @@ -1,41 +1,46 @@ --- -title: Creating Custom Catalogs Apps -weight: 4000 +title: Creating Catalog Apps +weight: 400 aliases: - /rancher/v2.x/en/tasks/global-configuration/catalog/customizing-charts/ + - /rancher/v2.x/en/catalog/custom/creating --- Rancher's catalog service requires any custom catalogs to be structured in a specific format for the catalog service to be able to leverage it in Rancher. -## Chart Types +> For a complete walkthrough of developing charts, see the [Chart Template Developer's Guide](https://helm.sh/docs/chart_template_guide/) in the official Helm documentation. -Rancher supports two different types of charts: +- [Chart types](#chart-types) + - [Helm charts](#helm-charts) + - [Rancher charts](#rancher-charts) +- [Chart directory structure](#chart-directory-structure) +- [Additional Files for Rancher Charts](#additional-files-for-rancher-charts) + - [questions.yml](#questions-yml) + - [Min/Max Rancher versions](#min-max-rancher-versions) + - [Question variable reference](#question-variable-reference) +- [Tutorial: Example Custom Chart Creation](#tutorial-example-custom-chart-creation) -- **Helm Charts** +# Chart Types - Native Helm charts include an application along with other software required to run it. When deploying native Helm charts, you'll learn the chart's parameters and then configure them using **Answers**, which are sets of key value pairs. +Rancher supports two different types of charts: Helm charts and Rancher charts. - The Helm Stable and Helm Incubators are populated with native Helm charts. However, you can also use native Helm charts in Custom catalogs (although we recommend Rancher Charts). +### Helm Charts -- **Rancher Charts** +Native Helm charts include an application along with other software required to run it. When deploying native Helm charts, you'll learn the chart's parameters and then configure them using **Answers**, which are sets of key value pairs. - Rancher charts mirror native helm charts, although they add two files that enhance user experience: `app-readme.md` and `questions.yaml`. Read more about them in [Rancher Chart Additional Files](#rancher-chart-additional-files). +The Helm Stable and Helm Incubators are populated with native Helm charts. However, you can also use native Helm charts in Custom catalogs (although we recommend Rancher Charts). - Advantages of Rancher charts include: +### Rancher Charts - - **Enhanced Revision Tracking** +Rancher charts mirror native helm charts, although they add two files that enhance user experience: `app-readme.md` and `questions.yaml`. Read more about them in [Additional Files for Rancher Charts.](#additional-files-for-rancher-charts) - While Helm supports versioned deployments, Rancher adds tracking and revision history to display changes between different versions of the chart. +Advantages of Rancher charts include: - - **Streamlined Application Launch** +- **Enhanced revision tracking:** While Helm supports versioned deployments, Rancher adds tracking and revision history to display changes between different versions of the chart. 
+- **Streamlined application launch:** Rancher charts add simplified chart descriptions and configuration forms to make catalog application deployment easy. Rancher users need not read through the entire list of Helm variables to understand how to launch an application. +- **Application resource management:** Rancher tracks all the resources created by a specific application. Users can easily navigate to and troubleshoot on a page listing all the workload objects used to power an application. - Rancher charts add simplified chart descriptions and configuration forms to make catalog application deployment easy. Rancher users need not read through the entire list of Helm variables to understand how to launch an application. - - - **Application Resource Management** - - Rancher tracks all the resources created by a specific application. Users can easily navigate to and troubleshoot on a page listing all the workload objects used to power an application. - -## Chart Directory Structure +# Chart Directory Structure The following table demonstrates the directory structure for a chart, which can be found in a chart directory: `charts///`. This information is helpful when customizing charts for a custom catalog. Files denoted with **Rancher Specific** are specific to Rancher charts, but are optional for chart customization. @@ -51,7 +56,7 @@ charts/// |--values.yml # Default configuration values for the chart. ``` -## Rancher Chart Additional Files +# Additional Files for Rancher Charts Before you create your own custom catalog, you should have a basic understanding about how a Rancher chart differs from a native Helm chart. Rancher charts differ slightly from Helm charts in their directory structures. Rancher charts include two files that Helm charts do not. @@ -61,7 +66,7 @@ Before you create your own custom catalog, you should have a basic understanding
Rancher Chart with app-readme.md (left) vs. Helm Chart without (right)
- ![app-readme.md]({{< baseurl >}}/img/rancher/app-readme.png) + ![app-readme.md]({{}}/img/rancher/app-readme.png) - `questions.yml` @@ -70,14 +75,14 @@ Before you create your own custom catalog, you should have a basic understanding
Rancher Chart with questions.yml (left) vs. Helm Chart without (right)
- ![questions.yml]({{< baseurl >}}/img/rancher/questions.png) + ![questions.yml]({{}}/img/rancher/questions.png) -### Questions.yml +### questions.yml Inside the `questions.yml`, most of the content will be around the questions to ask the end user, but there are some additional fields that can be set in this file. -#### Min/Max Rancher versions +### Min/Max Rancher versions _Available as of v2.3.0_ @@ -90,7 +95,7 @@ rancher_min_version: 2.3.0 rancher_max_version: 2.3.99 ``` -#### Question Variable Reference +### Question Variable Reference This reference contains variables that you can use in `questions.yml` nested under `questions:`. @@ -116,71 +121,6 @@ This reference contains variables that you can use in `questions.yml` nested und >**Note:** `subquestions[]` cannot contain `subquestions` or `show_subquestions_if` keys, but all other keys in the above table are supported. +# Tutorial: Example Custom Chart Creation -## Example Custom Chart Creation - - You can fill your custom catalogs with either Helm Charts or Rancher Charts, although we recommend Rancher Charts due to their enhanced user experience. - ->**Note:** For a complete walkthrough of developing charts, see the upstream Helm chart [developer reference](https://helm.sh/docs/chart_template_guide/). - -1. Within the GitHub repo that you're using as your custom catalog, create a directory structure that mirrors the structure listed in [Chart Directory Structure](#chart-directory-structure). - - Rancher requires this directory structure, although `app-readme.md` and `questions.yml` are optional. - - >**Tip:** - > - >- To begin customizing a chart, copy one from either the [Rancher Library](https://github.com/rancher/charts) or the [Helm Stable](https://github.com/kubernetes/charts/tree/master/stable). - >- For a complete walk through of developing charts, see the upstream Helm chart [developer reference](https://docs.helm.sh/developing_charts/). - -2. **Recommended:** Create an `app-readme.md` file. - - Use this file to create custom text for your chart's header in the Rancher UI. You can use this text to notify users that the chart is customized for your environment or provide special instruction on how to use it. -
-
- **Example**: - - ``` - $ cat ./app-readme.md - - # Wordpress ROCKS! - ``` - -3. **Recommended:** Create a `questions.yml` file. - - This file creates a form for users to specify deployment parameters when they deploy the custom chart. Without this file, users **must** specify the parameters manually using key value pairs, which isn't user-friendly. -
-
- The example below creates a form that prompts users for persistent volume size and a storage class. -
-
- For a list of variables you can use when creating a `questions.yml` file, see [Question Variable Reference](#question-variable-reference). - -
-        categories:
-        - Blog
-        - CMS
-        questions:
-        - variable: persistence.enabled
-        default: "false"
-        description: "Enable persistent volume for WordPress"
-        type: boolean
-        required: true
-        label: WordPress Persistent Volume Enabled
-        show_subquestion_if: true
-        group: "WordPress Settings"
-        subquestions:
-        - variable: persistence.size
-            default: "10Gi"
-            description: "WordPress Persistent Volume Size"
-            type: string
-            label: WordPress Volume Size
-        - variable: persistence.storageClass
-            default: ""
-            description: "If undefined or null, uses the default StorageClass. Default to null"
-            type: storageclass
-            label: Default StorageClass for WordPress
-    
- -4. Check the customized chart into your GitHub repo. - -**Result:** Your custom chart is added to the repo. Your Rancher Server will replicate the chart within a few minutes. +For a tutorial on adding a custom Helm chart to a custom catalog, refer to [this page.]({{}}/rancher/v2.x/en/catalog/tutorial) diff --git a/content/rancher/v2.x/en/catalog/custom/adding/_index.md b/content/rancher/v2.x/en/catalog/custom/adding/_index.md deleted file mode 100644 index f3813c01404..00000000000 --- a/content/rancher/v2.x/en/catalog/custom/adding/_index.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -title: Adding Custom Catalogs -weight: 4005 -aliases: - - /rancher/v2.x/en/tasks/global-configuration/catalog/adding-custom-catalogs/ ---- - -[Custom catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/) can be added into Rancher at any [scope of Rancher]({{< baseurl >}}/rancher/v2.x/en/catalog/#catalog-scope). - -## Adding Global Catalogs - ->**Prerequisites:** In order to manage the [built-in catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/built-in/) or manage global catalogs, you need _one_ of the following permissions: -> ->- [Administrator Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) ->- [Custom Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Catalogs]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned. - - 1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar. - 2. Click **Add Catalog**. - 3. Complete the form and click **Create**. - - **Result**: Your custom global catalog is added to Rancher. Once it is in `Active` state, it has completed synchronization and you will be able to start deploying [multi-cluster apps]({{< baseurl >}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or [applications in any project]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) from this catalog. - -## Adding Cluster Catalogs - -_Available as of v2.2.0_ - ->**Prerequisites:** In order to manage cluster scoped catalogs, you need _one_ of the following permissions: -> ->- [Administrator Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) ->- [Cluster Owner Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) ->- [Custom Cluster Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) with the [Manage Cluster Catalogs]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-role-reference) role assigned. - -1. From the **Global** view, navigate to your cluster that you want to start adding custom catalogs. -2. Choose the **Tools > Catalogs** in the navigation bar. -2. Click **Add Catalog**. -3. Complete the form. By default, the form will provide the ability to select `Scope` of the catalog. When you have added a catalog from the **Cluster** scope, it is defaulted to `Cluster`. -5. Click **Create**. - -**Result**: Your custom cluster catalog is added to Rancher. Once it is in `Active` state, it has completed synchronization and you will be able to start deploying [applications in any project in that cluster]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) from this catalog. 
- -## Adding Project Level Catalogs - -_Available as of v2.2.0_ - ->**Prerequisites:** In order to manage project scoped catalogs, you need _one_ of the following permissions: -> ->- [Administrator Global Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) ->- [Cluster Owner Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) ->- [Project Owner Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) ->- [Custom Project Permissions]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) with the [Manage Project Catalogs]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-role-reference) role assigned. - -1. From the **Global** view, navigate to your project that you want to start adding custom catalogs. -2. Choose the **Tools > Catalogs** in the navigation bar. -2. Click **Add Catalog**. -3. Complete the form. By default, the form will provide the ability to select `Scope` of the catalog. When you have added a catalog from the **Project** scope, it is defaulted to `Cluster`. -5. Click **Create**. - -**Result**: Your custom project catalog is added to Rancher. Once it is in `Active` state, it has completed synchronization and you will be able to start deploying [applications in that project]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) from this catalog. diff --git a/content/rancher/v2.x/en/catalog/globaldns/_index.md b/content/rancher/v2.x/en/catalog/globaldns/_index.md index ffa841ae509..7be91731f1c 100644 --- a/content/rancher/v2.x/en/catalog/globaldns/_index.md +++ b/content/rancher/v2.x/en/catalog/globaldns/_index.md @@ -23,11 +23,11 @@ The following table lists the first version of Rancher each provider debuted. ## Global DNS Entries -For each application that you want to route traffic to, you will need to create a Global DNS Entry. This entry will use a fully qualified domain name (a.k.a FQDN) from a global DNS provider to target applications. The applications can either resolve to a single [multi-cluster application]({{< baseurl >}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or to specific projects. You must [add specific annotation labels](#adding-annotations-to-ingresses-to-program-the-external-dns) to the ingresses in order for traffic to be routed correctly to the applications. Without this annotation, the programming for the DNS entry will not work. +For each application that you want to route traffic to, you will need to create a Global DNS Entry. This entry will use a fully qualified domain name (a.k.a FQDN) from a global DNS provider to target applications. The applications can either resolve to a single [multi-cluster application]({{}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or to specific projects. You must [add specific annotation labels](#adding-annotations-to-ingresses-to-program-the-external-dns) to the ingresses in order for traffic to be routed correctly to the applications. Without this annotation, the programming for the DNS entry will not work. ## Permissions for Global DNS Providers/Entries -By default, only [global administrators]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) and the creator of the Global DNS provider or Global DNS entry have access to use, edit and delete them. When creating the provider or entry, the creator can add additional users in order for those users to access and manage them. 
By default, these members will get `Owner` role to manage them. +By default, only [global administrators]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) and the creator of the Global DNS provider or Global DNS entry have access to use, edit and delete them. When creating the provider or entry, the creator can add additional users in order for those users to access and manage them. By default, these members will get `Owner` role to manage them. ## Setting up Global DNS for Applications @@ -63,7 +63,7 @@ By default, only [global administrators]({{< baseurl >}}/rancher/v2.x/en/admin-s >**Notes:** > ->- Alibaba Cloud SDK uses TZ data. It needs to be present on `/usr/share/zoneinfo` path of the nodes running [`local` cluster]({{< baseurl >}}/rancher/v2.x/en/installation/options/chart-options/#import-local-cluster), and it is mounted to the external DNS pods. If it is not available on the nodes, please follow the [instruction](https://www.ietf.org/timezones/tzdb-2018f/tz-link.html) to prepare it. +>- Alibaba Cloud SDK uses TZ data. It needs to be present on `/usr/share/zoneinfo` path of the nodes running [`local` cluster]({{}}/rancher/v2.x/en/installation/options/chart-options/#import-local-cluster), and it is mounted to the external DNS pods. If it is not available on the nodes, please follow the [instruction](https://www.ietf.org/timezones/tzdb-2018f/tz-link.html) to prepare it. >- Different versions of AliDNS have different allowable TTL range, where the default TTL for a global DNS entry may not be valid. Please see the [reference](https://www.alibabacloud.com/help/doc-detail/34338.htm) before adding an AliDNS entry. {{% /accordion %}} @@ -73,7 +73,7 @@ By default, only [global administrators]({{< baseurl >}}/rancher/v2.x/en/admin-s 1. Click on **Add DNS Entry**. 1. Enter the **FQDN** you wish to program on the external DNS. 1. Select a Global DNS **Provider** from the list. -1. Select if this DNS entry will be for a [multi-cluster application]({{< baseurl >}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or for workloads in different [projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). You will need to ensure that [annotations are added to any ingresses](#adding-annotations-to-ingresses-to-program-the-external-dns) for the applications that you want to target. +1. Select if this DNS entry will be for a [multi-cluster application]({{}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or for workloads in different [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). You will need to ensure that [annotations are added to any ingresses](#adding-annotations-to-ingresses-to-program-the-external-dns) for the applications that you want to target. 1. Configure the **DNS TTL** value in seconds. By default, it will be 300 seconds. 1. Under **Member Access**, search for any users that you want to have the ability to manage this Global DNS entry. @@ -85,11 +85,11 @@ In order for Global DNS entries to be programmed, you will need to add a specifi 1. In order for the DNS to be programmed, the following requirements must be met: * The ingress routing rule must be set to use a `hostname` that matches the FQDN of the Global DNS entry. * The ingress must have an annotation (`rancher.io/globalDNS.hostname`) and the value of this annotation should match the FQDN of the Global DNS entry. -1. 
Once the ingress in your [multi-cluster application]({{< baseurl >}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or in your target projects are in `active` state, the FQDN will be programmed on the external DNS against the Ingress IP addresses. +1. Once the ingress in your [multi-cluster application]({{}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or in your target projects are in `active` state, the FQDN will be programmed on the external DNS against the Ingress IP addresses. ## Editing a Global DNS Provider -The [global administrators]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), creator of the Global DNS provider and any users added as `members` to a Global DNS provider, have _owner_ access to that provider. Any members can edit the following fields: +The [global administrators]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), creator of the Global DNS provider and any users added as `members` to a Global DNS provider, have _owner_ access to that provider. Any members can edit the following fields: - Root Domain - Access Key & Secret Key @@ -97,11 +97,11 @@ The [global administrators]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/ 1. From the **Global View**, select **Tools > Global DNS Providers**. -1. For the Global DNS provider that you want to edit, click the **Vertical Ellipsis (...) > Edit**. +1. For the Global DNS provider that you want to edit, click the **⋮ > Edit**. ## Editing a Global DNS Entry -The [global administrators]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), creator of the Global DNS entry and any users added as `members` to a Global DNS entry, have _owner_ access to that DNS entry. Any members can edit the following fields: +The [global administrators]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), creator of the Global DNS entry and any users added as `members` to a Global DNS entry, have _owner_ access to that DNS entry. Any members can edit the following fields: - FQDN - Global DNS Provider @@ -115,4 +115,4 @@ Permission checks are relaxed for removing target projects in order to support s 1. From the **Global View**, select **Tools > Global DNS Entries**. -1. For the Global DNS entry that you want to edit, click the **Vertical Ellipsis (...) > Edit**. +1. For the Global DNS entry that you want to edit, click the **⋮ > Edit**. diff --git a/content/rancher/v2.x/en/catalog/launching-apps/_index.md b/content/rancher/v2.x/en/catalog/launching-apps/_index.md new file mode 100644 index 00000000000..74c0fd358e2 --- /dev/null +++ b/content/rancher/v2.x/en/catalog/launching-apps/_index.md @@ -0,0 +1,104 @@ +--- +title: Launching Catalog Apps +weight: 700 +aliases: + - /rancher/v2.x/en/catalog/launching-apps +--- + +Within a project, when you want to deploy applications from catalogs, the applications available in your project will be based on the [scope of the catalogs]({{}}/rancher/v2.x/en/catalog/#catalog-scope). + +If your application is using ingresses, you can program the ingress hostname to an external DNS by setting up a [Global DNS entry]({{}}/rancher/v2.x/en/catalog/globaldns/). + +- [Prerequisites](#prerequisites) +- [Launching a catalog app](#launching-a-catalog-app) +- [Configuration options](#configuration-options) + +# Prerequisites + +When Rancher deploys a catalog app, it launches an ephemeral instance of a Helm service account that has the permissions of the user deploying the catalog app. 
Therefore, a user cannot gain more access to the cluster through Helm or a catalog application than they otherwise would have. + +To launch an app from a catalog in Rancher, you must have at least one of the following permissions: + +- A [project-member role]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) in the target cluster, which gives you the ability to create, read, update, and delete the workloads +- A [cluster owner role]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) for the cluster that include the target project + +Before launching an app, you'll need to either [enable a built-in global catalog]({{}}/rancher/v2.x/en/catalog/built-in) or [add your own custom catalog.]({{}}/rancher/v2.x/en/catalog/adding-catalogs) + +# Launching a Catalog App + +1. From the **Global** view, open the project that you want to deploy an app to. + +2. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**. + +3. Find the app that you want to launch, and then click **View Now**. + +4. Under **Configuration Options** enter a **Name**. By default, this name is also used to create a Kubernetes namespace for the application. + + * If you would like to change the **Namespace**, click **Customize** and enter a new name. + * If you want to use a different namespace that already exists, click **Customize**, and then click **Use an existing namespace**. Choose a namespace from the list. + +5. Select a **Template Version**. + +6. Complete the rest of the **Configuration Options**. + + * For native Helm charts (i.e., charts from the **Helm Stable** or **Helm Incubator** catalogs), answers are provided as key value pairs in the **Answers** section. + * Keys and values are available within **Detailed Descriptions**. + * When entering answers, you must format them using the syntax rules found in [Using Helm: The format and limitations of --set](https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set), as Rancher passes them as `--set` flags to Helm. For example, when entering an answer that includes two values separated by a comma (i.e., `abc, bcd`), wrap the values with double quotes (i.e., `"abc, bcd"`). + +7. Review the files in **Preview**. When you're satisfied, click **Launch**. + +**Result**: Your application is deployed to your chosen namespace. You can view the application status from the project's **Workloads** view or **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view. + +# Configuration Options + +For each Helm chart, there are a list of desired answers that must be entered in order to successfully deploy the chart. When entering answers, you must format them using the syntax rules found in [Using Helm: The format and limitations of –set](https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set), as Rancher passes them as `--set` flags to Helm. + +> For example, when entering an answer that includes two values separated by a comma (i.e. `abc, bcd`), it is required to wrap the values with double quotes (i.e., ``"abc, bcd"``). + +{{% tabs %}} +{{% tab "UI" %}} + +### Using a questions.yml file + +If the Helm chart that you are deploying contains a `questions.yml` file, Rancher's UI will translate this file to display an easy to use UI to collect the answers for the questions. 
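As a rough sketch of how such a file drives the form, the fragment below shows a single question as it might appear in a chart's `questions.yml`; the variable name, labels, and group are hypothetical and only meant to illustrate how a question entry maps to a UI field.

```yaml
# Hypothetical excerpt from a chart's questions.yml. Each entry under
# `questions` is rendered by the Rancher UI as a form field, and the answer
# is written to the Helm value named by `variable`.
questions:
- variable: wordpressUsername        # hypothetical Helm value
  default: "admin"
  description: "Admin username for the application"
  type: string
  required: true
  label: Admin Username
  group: "Application Settings"
```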
+ +### Key Value Pairs for Native Helm Charts + +For native Helm charts (i.e., charts from the **Helm Stable** or **Helm Incubator** catalogs or a [custom Helm chart repository]({{}}/rancher/v2.x/en/catalog/custom/#custom-helm-chart-repository)), answers are provided as key value pairs in the **Answers** section. These answers are used to override the default values. + +{{% /tab %}} +{{% tab "Editing YAML Files" %}} + +_Available as of v2.1.0_ + +If you do not want to input answers using the UI, you can choose the **Edit as YAML** option. + +With this example YAML: + +```YAML +outer: + inner: value +servers: +- port: 80 + host: example +``` + +### Key Value Pairs + +You can have a YAML file that translates these fields to match how to [format custom values so that it can be used with `--set`](https://github.com/helm/helm/blob/master/docs/using_helm.md#the-format-and-limitations-of---set). + +These values would be translated to: + +``` +outer.inner=value +servers[0].port=80 +servers[0].host=example +``` + +### YAML files + +_Available as of v2.2.0_ + +You can directly paste that YAML formatted structure into the YAML editor. By allowing custom values to be set using a YAML formatted structure, Rancher has the ability to easily customize for more complicated input values (e.g. multi-lines, array and JSON objects). +{{% /tab %}} +{{% /tabs %}} \ No newline at end of file diff --git a/content/rancher/v2.x/en/catalog/managing-apps/_index.md b/content/rancher/v2.x/en/catalog/managing-apps/_index.md new file mode 100644 index 00000000000..1351c90b3bc --- /dev/null +++ b/content/rancher/v2.x/en/catalog/managing-apps/_index.md @@ -0,0 +1,80 @@ +--- +title: Managing Catalog Apps +weight: 500 +--- + +After deploying an application, one of the benefits of using an application versus individual workloads/resources is the ease of being able to manage many workloads/resources applications. Apps can be cloned, upgraded or rolled back. + +- [Cloning catalog applications](#cloning-catalog-applications) +- [Upgrading catalog applications](#upgrading-catalog-applications) +- [Rolling back catalog applications](#rolling-back-catalog-applications) +- [Deleting catalog application deployments](#deleting-catalog-application-deployments) + +### Cloning Catalog Applications + +After an application is deployed, you can easily clone it to use create another application with almost the same configuration. It saves you the work of manually filling in duplicate information. + +### Upgrading Catalog Applications + +After an application is deployed, you can easily upgrade to a different template version. + +1. From the **Global** view, navigate to the project that contains the catalog application that you want to upgrade. + +1. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**. + +3. Find the application that you want to upgrade, and then click the ⋮ to find **Upgrade**. + +4. Select the **Template Version** that you want to deploy. + +5. (Optional) Update your **Configuration Options**. + +6. (Optional) Select whether or not you want to force the catalog application to be upgraded by checking the box for **Delete and recreate resources if needed during the upgrade**. + + > In Kubernetes, some fields are designed to be immutable or cannot be updated directly. As of v2.2.0, you can now force your catalog application to be updated regardless of these fields. 
This will cause the catalog apps to be deleted and resources to be re-created if needed during the upgrade. + +7. Review the files in the **Preview** section. When you're satisfied, click **Launch**. + +**Result**: Your application is updated. You can view the application status from the project's: + +- **Workloads** view +- **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view. + + +### Rolling Back Catalog Applications + +After an application has been upgraded, you can easily rollback to a different template version. + +1. From the **Global** view, navigate to the project that contains the catalog application that you want to upgrade. + +1. From the main navigation bar, choose **Apps**. In versions prior to v2.2.0, choose **Catalog Apps** on the main navigation bar. Click **Launch**. + +3. Find the application that you want to rollback, and then click the ⋮ to find **Rollback**. + +4. Select the **Revision** that you want to roll back to. By default, Rancher saves up to the last 10 revisions. + +5. (Optional) Select whether or not you want to force the catalog application to be upgraded by checking the box for **Delete and recreate resources if needed during the upgrade**. + + > In Kubernetes, some fields are designed to be immutable or cannot be updated directly. As of v2.2.0, you can now force your catalog application to be updated regardless of these fields. This will cause the catalog apps to be deleted and resources to be re-created if needed during the rollback. + +7. Click **Rollback**. + +**Result**: Your application is updated. You can view the application status from the project's: + +- **Workloads** view +- **Apps** view. In versions prior to v2.2.0, this is the **Catalog Apps** view. + +### Deleting Catalog Application Deployments + +As a safeguard to prevent you from unintentionally deleting other catalog applications that share a namespace, deleting catalog applications themselves does not delete the namespace they're assigned to. + +Therefore, if you want to delete both an app and the namespace that contains the app, you should remove the app and the namespace separately: + +1. Uninstall the app using the app's `uninstall` function. + +1. From the **Global** view, navigate to the project that contains the catalog application that you want to delete. + +1. From the main menu, choose **Namespaces**. + +1. Find the namespace running your catalog app. Select it and click **Delete**. + +**Result:** The catalog application deployment and its namespace are deleted. diff --git a/content/rancher/v2.x/en/catalog/multi-cluster-apps/_index.md b/content/rancher/v2.x/en/catalog/multi-cluster-apps/_index.md index e1ec64524d8..37fe0c6304b 100644 --- a/content/rancher/v2.x/en/catalog/multi-cluster-apps/_index.md +++ b/content/rancher/v2.x/en/catalog/multi-cluster-apps/_index.md @@ -1,14 +1,29 @@ --- title: Multi-Cluster Apps -weight: 5000 +weight: 600 --- _Available as of v2.2.0_ Typically, most applications are deployed on a single Kubernetes cluster, but there will be times you might want to deploy multiple copies of the same application across different clusters and/or projects. In Rancher, a _multi-cluster application_, is an application deployed using a Helm chart across multiple clusters. With the ability to deploy the same application across multiple clusters, it avoids the repetition of the same action on each cluster, which could introduce user error during application configuration. 
With multi-cluster applications, you can customize to have the same configuration across all projects/clusters as well as have the ability to change the configuration based on your target project. Since multi-cluster application is considered a single application, it's easy to manage and maintain this application. -Any Helm charts from a [global catalog]({{< baseurl >}}/rancher/v2.x/en/catalog/#catalog-scope) can be used to deploy and manage multi-cluster applications. +Any Helm charts from a global catalog can be used to deploy and manage multi-cluster applications. -After creating a multi-cluster application, you can program a [Global DNS entry]({{< baseurl >}}/rancher/v2.x/en/catalog/globaldns/) to make it easier to access the application. +After creating a multi-cluster application, you can program a [Global DNS entry]({{}}/rancher/v2.x/en/catalog/globaldns/) to make it easier to access the application. + +- [Prerequisites](#prerequisites) +- [Launching a multi-cluster app](#launching-a-multi-cluster-app) +- [Multi-cluster app configuration options](#multi-cluster-app-configuration-options) + - [Targets](#targets) + - [Upgrades](#upgrades) + - [Roles](#roles) +- [Application configuration options](#application-configuration-options) + - [Using a questions.yml file](#using-a-questions-yml-file) + - [Key value pairs for native Helm charts](#key-value-pairs-for-native-helm-charts) + - [Members](#members) + - [Overriding application configuration options for specific projects](#overriding-application-configuration-options-for-specific-projects) +- [Upgrading multi-cluster app roles and projects](#upgrading-multi-cluster-app-roles-and-projects) +- [Multi-cluster application management](#multi-cluster-application-management) +- [Deleting a multi-cluster application](#deleting-a-multi-cluster-application) # Prerequisites @@ -17,7 +32,7 @@ To create a multi-cluster app in Rancher, you must have at least one of the foll - A [project-member role]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) in the target cluster(s), which gives you the ability to create, read, update, and delete the workloads - A [cluster owner role]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) for the clusters(s) that include the target project(s) -## Launching a Multi-Cluster App +# Launching a Multi-Cluster App 1. From the **Global** view, choose **Apps** in the navigation bar. Click **Launch**. @@ -29,7 +44,7 @@ To create a multi-cluster app in Rancher, you must have at least one of the foll 5. Select a **Template Version**. -6. Complete the [multi-cluster applications specific configuration options](#configuration-options-to-make-a-multi-cluster-app) as well as the [application configuration options](#application-configuration-options). +6. Complete the [multi-cluster applications specific configuration options](#multi-cluster-app-configuration-options) as well as the [application configuration options](#application-configuration-options). 7. Select the **Members** who can [interact with the multi-cluster application](#members). @@ -39,15 +54,15 @@ To create a multi-cluster app in Rancher, you must have at least one of the foll **Result**: Your application is deployed to your chosen namespace. You can view the application status from the project's: -### Configuration Options to Make a Multi-Cluster App +# Multi-cluster App Configuration Options
-#### Targets +### Targets -In the **Targets** section, select the [projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects) that you want the application to be deployed in. The list of projects is based on what projects you have access to. For each project that you select, it will be added to the list, which shows the cluster name and project name that were selected. To remove a target project, click on **-**. +In the **Targets** section, select the projects that you want the application to be deployed in. The list of projects is based on what projects you have access to. For each project that you select, it will be added to the list, which shows the cluster name and project name that were selected. To remove a target project, click on **-**. -#### Upgrades +### Upgrades In the **Upgrades** section, select the upgrade strategy to use, when you decide to upgrade your application. @@ -55,35 +70,35 @@ In the **Upgrades** section, select the upgrade strategy to use, when you decide * **Upgrade all apps simultaneously:** When selecting this upgrade strategy, all applications across all projects will be upgraded at the same time. -#### Roles +### Roles -In the **Roles** section, you define the role of the multi-cluster application. Typically, when a user [launches catalog applications]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/#launching-catalog-applications), that specific user's permissions are used for creation of all workloads/resources that is required by the app. +In the **Roles** section, you define the role of the multi-cluster application. Typically, when a user [launches catalog applications]({{}}/rancher/v2.x/en/catalog/launching-apps), that specific user's permissions are used for creation of all workloads/resources that is required by the app. For multi-cluster applications, the application is deployed by a _system user_ and is assigned as the creator of all underlying resources. A _system user_ is used instead of the actual user due to the fact that the actual user could be removed from one of the target projects. If the actual user was removed from one of the projects, then that user would no longer be able to manage the application for the other projects. Rancher will let you select from two options for Roles, **Project** and **Cluster**. Rancher will allow creation using any of these roles based on the user's permissions. -- **Project** - This is the equivalent of a [project member]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles). If you select this role, Rancher will check that in all the target projects, the user has minimally the [project member]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) role. While the user might not be explicitly granted the _project member_ role, if the user is an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), a [cluster owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or a [project owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles), then the user is considered to have the appropriate level of permissions. +- **Project** - This is the equivalent of a [project member]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles). 
If you select this role, Rancher will check that in all the target projects, the user has minimally the [project member]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) role. While the user might not be explicitly granted the _project member_ role, if the user is an [administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), a [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or a [project owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles), then the user is considered to have the appropriate level of permissions. -- **Cluster** - This is the equivalent of a [cluster owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles). If you select this role, Rancher will check that in all the target projects, the user has minimally the [cluster owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) role. While the user might not be explicitly granted the _cluster owner_ role, if the user is an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), then the user is considered to have the appropriate level of permissions. +- **Cluster** - This is the equivalent of a [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles). If you select this role, Rancher will check that in all the target projects, the user has minimally the [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) role. While the user might not be explicitly granted the _cluster owner_ role, if the user is an [administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), then the user is considered to have the appropriate level of permissions. When launching the application, Rancher will confirm if you have these permissions in the target projects before launching the application. > **Note:** There are some applications like _Grafana_ or _Datadog_ that require access to specific cluster-scoped resources. These applications will require the _Cluster_ role. If you find out later that the application requires cluster roles, the multi-cluster application can be upgraded to update the roles. -### Application Configuration Options +# Application Configuration Options -For each Helm chart, there are a list of desired answers that must be entered in order to successfully deploy the chart. When entering answers, you must format them using the syntax rules found in [Using Helm: The format and limitations of –set](https://github.com/helm/helm/blob/master/docs/using_helm.md#the-format-and-limitations-of---set), as Rancher passes them as `--set` flags to Helm. +For each Helm chart, there are a list of desired answers that must be entered in order to successfully deploy the chart. When entering answers, you must format them using the syntax rules found in [Using Helm: The format and limitations of –set](https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set), as Rancher passes them as `--set` flags to Helm. > For example, when entering an answer that includes two values separated by a comma (i.e. `abc, bcd`), it is required to wrap the values with double quotes (i.e., ``"abc, bcd"``). 
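A small hedged example of what this looks like in practice: the answer keys below are hypothetical chart values, shown together with the `--set` flags Rancher would derive from them.

```yaml
# Hypothetical Answers for a native Helm chart. Rancher passes each key/value
# pair to Helm as a --set flag, e.g.:
#   --set image.tag=5.3.2 --set service.type=ClusterIP
image.tag: "5.3.2"
service.type: ClusterIP
# A value that itself contains a comma must be wrapped in double quotes so it
# is treated as one answer rather than two:
ingress.hosts: "blog.example.com, shop.example.com"
```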
-#### Using a `questions.yml` file +### Using a questions.yml file If the Helm chart that you are deploying contains a `questions.yml` file, Rancher's UI will translate this file to display an easy to use UI to collect the answers for the questions. -#### Key Value Pairs for Native Helm Charts +### Key Value Pairs for Native Helm Charts -For native Helm charts (i.e., charts from the **Helm Stable** or **Helm Incubator** catalogs or a [custom Helm chart repository]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/#custom-helm-chart-repository)), answers are provided as key value pairs in the **Answers** section. These answers are used to override the default values. +For native Helm charts (i.e., charts from the **Helm Stable** or **Helm Incubator** catalogs or a [custom Helm chart repository]({{}}/rancher/v2.x/en/catalog/custom/#custom-helm-chart-repository)), answers are provided as key value pairs in the **Answers** section. These answers are used to override the default values. ### Members @@ -93,7 +108,7 @@ By default, multi-cluster applications can only be managed by the user who creat 2. Select the **Access Type** for that member. There are three access types for a multi-cluster project, but due to how the permissions of a multi-cluster application are launched, please read carefully to understand what these access types mean. - - **Owner**: This access type can manage any configuration part of the multi-cluster application including the template version, the [multi-cluster applications specific configuration options](#configuration-options-to-make-a-multi-cluster-app), the [application specific configuration options](#application-configuration-options), the [members who can interact with the multi-cluster application](#members) and the [custom application configuration answers](#overriding-application-configuration-options-for-specific-projects). Since a multi-cluster application is created with a different set of permissions from the user, any _owner_ of the multi-cluster application can manage/remove applications in [target projects](#targets) without explicitly having access to these project(s). Only trusted users should be provided with this access type. + - **Owner**: This access type can manage any configuration part of the multi-cluster application including the template version, the [multi-cluster applications specific configuration options](#Multi-cluster App Configuration Options), the [application specific configuration options](#application-configuration-options), the members who can interact with the multi-cluster application and the [custom application configuration answers](#overriding-application-configuration-options-for-specific-projects). Since a multi-cluster application is created with a different set of permissions from the user, any _owner_ of the multi-cluster application can manage/remove applications in [target projects](#targets) without explicitly having access to these project(s). Only trusted users should be provided with this access type. - **Member**: This access type can only modify the template version, the [application specific configuration options](#application-configuration-options) and the [custom application configuration answers](#overriding-application-configuration-options-for-specific-projects). Since a multi-cluster application is created with a different set of permissions from the user, any _member_ of the multi-cluster application can modify the application without explicitly having access to these project(s). 
Only trusted users should be provided with this access type. @@ -115,7 +130,7 @@ The ability to use the same configuration to deploy the same application across - **Answer**: Enter the answer that you want to be used instead. -## Upgrading Multi-Cluster App Roles and Projects +# Upgrading Multi-Cluster App Roles and Projects - **Changing Roles on an existing Multi-Cluster app** The creator and any users added with the access-type "owner" to a multi-cluster app, can upgrade its Roles. When adding a new Role, we check if the user has that exact role in all current target projects. These checks allow the same relaxations for global admins, cluster owners and project-owners as described in the installation section for the field `Roles`. @@ -125,22 +140,22 @@ The creator and any users added with the access-type "owner" to a multi-cluster 2. We do not do these membership checks when removing target projects. This is because the caller's permissions could have with respect to the target project, or the project could have been deleted and hence the caller wants to remove it from targets list. -## Multi-Cluster Application Management +# Multi-Cluster Application Management One of the benefits of using a multi-cluster application as opposed to multiple individual applications of the same type, is the ease of management. Multi-cluster applications can be cloned, upgraded or rolled back. 1. From the **Global** view, choose **Apps** in the navigation bar. -2. Choose the multi-cluster application you want to take one of these actions on and click the **Vertical Ellipsis (...)**. Select one of the following options: +2. Choose the multi-cluster application you want to take one of these actions on and click the **⋮**. Select one of the following options: * **Clone**: Creates another multi-cluster application with the same configuration. By using this option, you can easily duplicate a multi-cluster application. - * **Upgrade**: Upgrade your multi-cluster application to change some part of the configuration. When performing an upgrade for multi-cluster application, the [upgrade strategy](#upgrade-strategy) can be modified if you have the correct [access type](#members). + * **Upgrade**: Upgrade your multi-cluster application to change some part of the configuration. When performing an upgrade for multi-cluster application, the [upgrade strategy](#upgrades) can be modified if you have the correct [access type](#members). * **Rollback**: Rollback your application to a specific version. If after an upgrade, there are issues for your multi-cluster application for one or more of your [targets](#targets), Rancher has stored up to 10 versions of the multi-cluster application. Rolling back a multi-cluster application reverts the application for **all** target clusters and projects, not just the targets(s) affected by the upgrade issue. -## Deleting a Multi-Cluster Application +# Deleting a Multi-Cluster Application 1. From the **Global** view, choose **Apps** in the navigation bar. -2. Choose the multi-cluster application you want to delete and click the **Vertical Ellipsis (...) > Delete**. When deleting the multi-cluster application, all applications and namespaces are deleted in all of the target projects. +2. Choose the multi-cluster application you want to delete and click the **⋮ > Delete**. When deleting the multi-cluster application, all applications and namespaces are deleted in all of the target projects. 
> **Note:** The applications in the target projects, that are created for a multi-cluster application, cannot be deleted individually. The applications can only be deleted when the multi-cluster application is deleted. diff --git a/content/rancher/v2.x/en/catalog/tutorial/_index.md b/content/rancher/v2.x/en/catalog/tutorial/_index.md new file mode 100644 index 00000000000..b8e6295742c --- /dev/null +++ b/content/rancher/v2.x/en/catalog/tutorial/_index.md @@ -0,0 +1,72 @@ +--- +title: "Tutorial: Example Custom Chart Creation" +weight: 800 +--- + +In this tutorial, you'll learn how to create a Helm chart and deploy it to a repository. The repository can then be used as a source for a custom catalog in Rancher. + +You can fill your custom catalogs with either Helm Charts or Rancher Charts, although we recommend Rancher Charts due to their enhanced user experience. + +> For a complete walkthrough of developing charts, see the upstream Helm chart [developer reference](https://helm.sh/docs/chart_template_guide/). + +1. Within the GitHub repo that you're using as your custom catalog, create a directory structure that mirrors the structure listed in [Chart Directory Structure](#chart-directory-structure). + + Rancher requires this directory structure, although `app-readme.md` and `questions.yml` are optional. + + >**Tip:** + > + >- To begin customizing a chart, copy one from either the [Rancher Library](https://github.com/rancher/charts) or the [Helm Stable](https://github.com/kubernetes/charts/tree/master/stable). + >- For a complete walk through of developing charts, see the upstream Helm chart [developer reference](https://docs.helm.sh/developing_charts/). + +2. **Recommended:** Create an `app-readme.md` file. + + Use this file to create custom text for your chart's header in the Rancher UI. You can use this text to notify users that the chart is customized for your environment or provide special instruction on how to use it. +
+
+ **Example**: + + ``` + $ cat ./app-readme.md + + # Wordpress ROCKS! + ``` + +3. **Recommended:** Create a `questions.yml` file. + + This file creates a form for users to specify deployment parameters when they deploy the custom chart. Without this file, users **must** specify the parameters manually using key value pairs, which isn't user-friendly. +
+
+ The example below creates a form that prompts users for persistent volume size and a storage class. +
+
+ For a list of variables you can use when creating a `questions.yml` file, see [Question Variable Reference]({{}}/rancher/v2.x/en/catalog/creating-apps/#question-variable-reference). + + ```yaml + categories: + - Blog + - CMS + questions: + - variable: persistence.enabled + default: "false" + description: "Enable persistent volume for WordPress" + type: boolean + required: true + label: WordPress Persistent Volume Enabled + show_subquestion_if: true + group: "WordPress Settings" + subquestions: + - variable: persistence.size + default: "10Gi" + description: "WordPress Persistent Volume Size" + type: string + label: WordPress Volume Size + - variable: persistence.storageClass + default: "" + description: "If undefined or null, uses the default StorageClass. Default to null" + type: storageclass + label: Default StorageClass for WordPress + ``` + +4. Check the customized chart into your GitHub repo. + +**Result:** Your custom chart is added to the repo. Your Rancher Server will replicate the chart within a few minutes. diff --git a/content/rancher/v2.x/en/cli/_index.md b/content/rancher/v2.x/en/cli/_index.md index 0baa8f9da86..dd4d656fd19 100644 --- a/content/rancher/v2.x/en/cli/_index.md +++ b/content/rancher/v2.x/en/cli/_index.md @@ -16,8 +16,8 @@ The binary can be downloaded directly from the UI. The link can be found in the After you download the Rancher CLI, you need to make a few configurations. Rancher CLI requires: -- Your [Rancher Server URL]({{< baseurl >}}/rancher/v2.x/en/admin-settings/server-url), which is used to connect to Rancher Server. -- An API Bearer Token, which is used to authenticate with Rancher. For more information about obtaining a Bearer Token, see [Creating an API Key]({{< baseurl >}}/rancher/v2.x/en/user-settings/api-keys/). +- Your [Rancher Server URL]({{}}/rancher/v2.x/en/admin-settings/server-url), which is used to connect to Rancher Server. +- An API Bearer Token, which is used to authenticate with Rancher. For more information about obtaining a Bearer Token, see [Creating an API Key]({{}}/rancher/v2.x/en/user-settings/api-keys/). ### CLI Authentication @@ -31,7 +31,7 @@ If Rancher Server uses a self-signed certificate, Rancher CLI prompts you to con ### Project Selection -Before you can perform any commands, you must select a Rancher project to perform those commands against. To select a [project]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/) to work on, use the command `./rancher context switch`. When you enter this command, a list of available projects displays. Enter a number to choose your project. +Before you can perform any commands, you must select a Rancher project to perform those commands against. To select a [project]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/) to work on, use the command `./rancher context switch`. When you enter this command, a list of available projects displays. Enter a number to choose your project. **Example: `./rancher context switch` Output** ``` @@ -57,17 +57,17 @@ The following commands are available for use in Rancher CLI. | Command | Result | |---|---| -| `apps, [app]` | Performs operations on catalog applications (i.e. individual [Helm charts](https://docs.helm.sh/developing_charts/) or [Rancher charts]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/#chart-directory-structure)). | -| `catalog` | Performs operations on [catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/). 
| -| `clusters, [cluster]` | Performs operations on your [clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/). | -| `context` | Switches between Rancher [projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). For an example, see [Project Selection](#project-selection). | -| `inspect [OPTIONS] [RESOURCEID RESOURCENAME]` | Displays details about [Kubernetes resources](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#resource-types) or Rancher resources (i.e.: [projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/) and [workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/)). Specify resources by name or ID. | +| `apps, [app]` | Performs operations on catalog applications (i.e. individual [Helm charts](https://docs.helm.sh/developing_charts/) or [Rancher charts]({{}}/rancher/v2.x/en/catalog/custom/#chart-directory-structure)). | +| `catalog` | Performs operations on [catalogs]({{}}/rancher/v2.x/en/catalog/). | +| `clusters, [cluster]` | Performs operations on your [clusters]({{}}/rancher/v2.x/en/cluster-provisioning/). | +| `context` | Switches between Rancher [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). For an example, see [Project Selection](#project-selection). | +| `inspect [OPTIONS] [RESOURCEID RESOURCENAME]` | Displays details about [Kubernetes resources](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#resource-types) or Rancher resources (i.e.: [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/) and [workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/)). Specify resources by name or ID. | | `kubectl` |Runs [kubectl commands](https://kubernetes.io/docs/reference/kubectl/overview/#operations). | | `login, [l]` | Logs into a Rancher Server. For an example, see [CLI Authentication](#cli-authentication). | -| `namespaces, [namespace]` |Performs operations on [namespaces]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces). | -| `nodes, [node]` |Performs operations on [nodes]({{< baseurl >}}/rancher/v2.x/en/overview/architecture/#kubernetes). | -| `projects, [project]` | Performs operations on [projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). | -| `ps` | Displays [workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads) in a project. | +| `namespaces, [namespace]` |Performs operations on [namespaces]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces). | +| `nodes, [node]` |Performs operations on [nodes]({{}}/rancher/v2.x/en/overview/architecture/#kubernetes). | +| `projects, [project]` | Performs operations on [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). | +| `ps` | Displays [workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads) in a project. | | `settings, [setting]` | Shows the current settings for your Rancher Server. | | `ssh` | Connects to one of your cluster nodes using the SSH protocol. | | `help, [h]` | Shows a list of commands or help for one command. 
| diff --git a/content/rancher/v2.x/en/cluster-admin/_index.md b/content/rancher/v2.x/en/cluster-admin/_index.md index 09397d9c2c7..ec93dd077f7 100644 --- a/content/rancher/v2.x/en/cluster-admin/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/_index.md @@ -21,22 +21,22 @@ Alternatively, you can switch between projects and clusters directly in the navi ## Managing Clusters in Rancher -After clusters have been [provisioned into Rancher]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/), [cluster owners]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) will need to manage these clusters. There are many different options of how to manage your cluster. +After clusters have been [provisioned into Rancher]({{}}/rancher/v2.x/en/cluster-provisioning/), [cluster owners]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) will need to manage these clusters. There are many different options of how to manage your cluster. -| Action | [Rancher launched Kubernetes Clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) | [Hosted Kubernetes Clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) | [Imported Clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters) | +| Action | [Rancher launched Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) | [Hosted Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) | [Imported Clusters]({{}}/rancher/v2.x/en/cluster-provisioning/imported-clusters) | | --- | --- | ---| ---| -| [Using kubectl and a kubeconfig file to Access a Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/) | * | * | * | -| [Adding Cluster Members]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/cluster-access/cluster-members/) | * | * | * | -| [Editing Clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/editing-clusters/) | * | * | * | -| [Managing Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/nodes) | * | * | * | -| [Managing Persistent Volumes and Storage Classes]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/) | * | * | * | -| [Managing Projects and Namespaces]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/projects-and-namespaces/) | * | * | * | +| [Using kubectl and a kubeconfig file to Access a Cluster]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/) | * | * | * | +| [Adding Cluster Members]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/cluster-members/) | * | * | * | +| [Editing Clusters]({{}}/rancher/v2.x/en/cluster-admin/editing-clusters/) | * | * | * | +| [Managing Nodes]({{}}/rancher/v2.x/en/cluster-admin/nodes) | * | * | * | +| [Managing Persistent Volumes and Storage Classes]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/) | * | * | * | +| [Managing Projects and Namespaces]({{}}/rancher/v2.x/en/cluster-admin/projects-and-namespaces/) | * | * | * | | [Configuring Tools](#configuring-tools) | * | * | * | -| [Cloning Clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/cloning-clusters/)| | * | * | -| [Ability to rotate certificates]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/certificate-rotation/) | * | | | -| [Ability to back up your Kubernetes Clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/) | * | | | -| [Ability to recover and restore etcd]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/restoring-etcd/) | * | | | -| [Cleaning Kubernetes components 
when clusters are no longer reachable from Rancher]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/) | * | | | +| [Cloning Clusters]({{}}/rancher/v2.x/en/cluster-admin/cloning-clusters/)| | * | * | +| [Ability to rotate certificates]({{}}/rancher/v2.x/en/cluster-admin/certificate-rotation/) | * | | | +| [Ability to back up your Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/) | * | | | +| [Ability to recover and restore etcd]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/) | * | | | +| [Cleaning Kubernetes components when clusters are no longer reachable from Rancher]({{}}/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/) | * | | | ## Configuring Tools @@ -47,4 +47,4 @@ Rancher contains a variety of tools that aren't included in Kubernetes to assist - Logging - Monitoring -For more information, see [Tools]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/) +For more information, see [Tools]({{}}/rancher/v2.x/en/cluster-admin/tools/) diff --git a/content/rancher/v2.x/en/cluster-admin/backing-up-etcd/_index.md b/content/rancher/v2.x/en/cluster-admin/backing-up-etcd/_index.md index e4aa716ccbb..369f5a34d61 100644 --- a/content/rancher/v2.x/en/cluster-admin/backing-up-etcd/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/backing-up-etcd/_index.md @@ -1,23 +1,64 @@ --- -title: Backing up etcd +title: Backing up a Cluster weight: 2045 --- _Available as of v2.2.0_ -In the Rancher UI, etcd backup and recovery for [Rancher launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) can be easily performed. Snapshots of the etcd database are taken and saved either [locally onto the etcd nodes](#local-backup-target) or to a [S3 compatible target](#s3-backup-target). The advantages of configuring S3 is that if all etcd nodes are lost, your snapshot is saved remotely and can be used to restore the cluster. +In the Rancher UI, etcd backup and recovery for [Rancher launched Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) can be easily performed. Rancher recommends configuring recurrent `etcd` snapshots for all production clusters. Additionally, one-time snapshots can easily be taken as well. ->**Note:** If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/restoring-etcd/). +Snapshots of the etcd database are taken and saved either [locally onto the etcd nodes](#local-backup-target) or to a [S3 compatible target](#s3-backup-target). The advantages of configuring S3 is that if all etcd nodes are lost, your snapshot is saved remotely and can be used to restore the cluster. 
-# Snapshot Creation Period and Retention Count +This section covers the following topics: + +- [How snapshots work](#how-snapshots-work) +- [Configuring recurring snapshots](#configuring-recurring-snapshots) +- [One-time snapshots](#one-time-snapshots) +- [Snapshot backup targets](#snapshot-backup-targets) + - [Local backup target](#local-backup-target) + - [S3 backup target](#s3-backup-target) + - [Using a custom CA certificate for S3](#using-a-custom-ca-certificate-for-s3) + - [IAM Support for storing snapshots in S3](#iam-support-for-storing-snapshots-in-s3) +- [Viewing available snapshots](#viewing-available-snapshots) +- [Safe timestamps](#safe-timestamps) +- [Enabling snapshot features for clusters created before Rancher v2.2.0](#enabling-snapshot-features-for-clusters-created-before-rancher-v2-2-0) + +# How Snapshots Work + +{{% tabs %}} +{{% tab "Rancher v2.4.0+" %}} +When Rancher creates a snapshot, it includes three components: + +- The cluster data in etcd +- The Kubernetes version +- The cluster configuration in the form of the `cluster.yml` + +Because the Kubernetes version is now included in the snapshot, it is possible to restore a cluster to a prior Kubernetes version. + +The multiple components of the snapshot allow you to select from the following options if you need to a cluster from a snapshot: + +- **Restore just the etcd contents:** This restoration is similar to restoring to snapshots in Rancher prior to v2.4.0. +- **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes. +- **Restore etcd, Kubernetes versions and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading. + +It's always recommended to take a new snapshot before any upgrades. +{{% /tab %}} +{{% tab "Rancher prior to v2.4.0" %}} +When Rancher creates a snapshot, only the etcd data is included in the snapshot. + +Because the Kubernetes version is not included in the snapshot, there is no option to restore a cluster to a different Kubernetes version. + +It's always recommended to take a new snapshot before any upgrades. +{{% /tab %}} +{{% /tabs %}} + +# Configuring Recurring Snapshots Select how often you want recurring snapshots to be taken as well as how many snapshots to keep. The amount of time is measured in hours. With timestamped snapshots, the user has the ability to do a point-in-time recovery. -### Configuring Recurring Snapshots for the Cluster - -By default, [Rancher launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) are configured to take recurring snapshots (saved to local disk). To protect against local disk failure, using the [S3 Target](#s3-backup-target) or replicating the path on disk is advised. +By default, [Rancher launched Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) are configured to take recurring snapshots (saved to local disk). To protect against local disk failure, using the [S3 Target](#s3-backup-target) or replicating the path on disk is advised. During cluster provisioning or editing the cluster, the configuration for snapshots can be found in the advanced section for **Cluster Options**. Click on **Show advanced options**. 
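If you prefer to manage these settings in the cluster's YAML view instead of the form, the same interval and retention values can be set on the etcd service. The snippet below is a minimal sketch assuming the RKE `backup_config` schema (in Rancher v2.3.0+ it is nested under `rancher_kubernetes_engine_config`); the field names are illustrative, so confirm them against your own cluster's **Edit as YAML** view.

```yaml
# Sketch only: recurring etcd snapshot settings as they might appear in the cluster YAML.
services:
  etcd:
    backup_config:
      enabled: true        # turn recurring snapshots on
      interval_hours: 12   # hours between snapshots (default)
      retention: 6         # number of snapshots to keep (default)
```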
@@ -30,13 +71,13 @@ In the **Advanced Cluster Options** section, there are several options available |[Recurring etcd Snapshot Creation Period](#snapshot-creation-period-and-retention-count) | Time in hours between recurring snapshots| 12 hours | |[Recurring etcd Snapshot Retention Count](#snapshot-creation-period-and-retention-count)| Number of snapshots to retain| 6 | -### One-Time Snapshots +# One-Time Snapshots In addition to recurring snapshots, you may want to take a "one-time" snapshot. For example, before upgrading the Kubernetes version of a cluster it's best to backup the state of the cluster to protect against upgrade failure. 1. In the **Global** view, navigate to the cluster that you want to take a one-time snapshot. -2. Click the **Vertical Ellipsis (...) > Snapshot Now**. +2. Click the **⋮ > Snapshot Now**. **Result:** Based on your [snapshot backup target](#snapshot-backup-targets), a one-time snapshot will be taken and saved in the selected backup target. @@ -49,15 +90,7 @@ Rancher supports two different backup targets: ### Local Backup Target -By default, the `local` backup target is selected. The benefits of this option is that there is no external configuration. Snapshots are automatically saved locally to the etcd nodes in the [Rancher launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) in `/opt/rke/etcd-snapshots`. All recurring snapshots are taken at configured intervals. The downside of using the `local` backup target is that if there is a total disaster and _all_ etcd nodes are lost, there is no ability to restore the cluster. - -#### Safe Timestamps - -_Available as of v2.3.0_ - -As of v2.2.6, snapshot files are timestamped to simplify processing the files using external tools and scripts, but in some S3 compatible backends, these timestamps were unusable. As of Rancher v2.3.0, the option `safe_timestamp` is added to support compatible file names. When this flag is set to `true`, all special characters in the snapshot filename timestamp are replaced. - ->>**Note:** This option is not available directly in the UI, and is only available through the `Edit as Yaml` interface. +By default, the `local` backup target is selected. The benefits of this option is that there is no external configuration. Snapshots are automatically saved locally to the etcd nodes in the [Rancher launched Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) in `/opt/rke/etcd-snapshots`. All recurring snapshots are taken at configured intervals. The downside of using the `local` backup target is that if there is a total disaster and _all_ etcd nodes are lost, there is no ability to restore the cluster. ### S3 Backup Target @@ -72,13 +105,14 @@ The `S3` backup target allows users to configure a S3 compatible backend to stor |S3 Secret Key|S3 secret key with permission to access the backup bucket|*| | Custom CA Certificate | A custom certificate used to access private S3 backends _Available as of v2.2.5_ || -#### Using a custom CA certificate for S3 +### Using a custom CA certificate for S3 _Available as of v2.2.5_ The backup snapshot can be stored on a custom `S3` backup like [minio](https://min.io/). If the S3 back end uses a self-signed or custom certificate, provide a custom certificate using the `Custom CA Certificate` option to connect to the S3 backend. 
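As a rough sketch of how the S3 target and the custom CA certificate fit together in the cluster YAML (again assuming the RKE `s3backupconfig` schema; the endpoint and bucket names below are hypothetical), the configuration might look like this:

```yaml
# Sketch only: S3-compatible snapshot target with a custom CA certificate.
services:
  etcd:
    backup_config:
      s3backupconfig:
        endpoint: "minio.example.com:9000"   # hypothetical S3-compatible endpoint
        bucket_name: "etcd-snapshots"        # hypothetical bucket name
        region: ""
        access_key: "<S3 access key>"
        secret_key: "<S3 secret key>"
        custom_ca: |-                        # only needed for self-signed or private CAs
          -----BEGIN CERTIFICATE-----
          ...
          -----END CERTIFICATE-----
```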
-# IAM Support for Storing Snapshots in S3 +### IAM Support for Storing Snapshots in S3 + The `S3` backup target supports using IAM authentication to AWS API in addition to using API credentials. An IAM role gives temporary permissions that an application can use when making API calls to S3 storage. To use IAM authentication, the following requirements must be met: - The cluster etcd nodes must have an instance role that has read/write access to the designated backup bucket. @@ -90,8 +124,20 @@ The `S3` backup target supports using IAM authentication to AWS API in addition # Viewing Available Snapshots -The list of all available snapshots for the cluster is available. +The list of all available snapshots for the cluster is available in the Rancher UI. 1. In the **Global** view, navigate to the cluster that you want to view snapshots. 2. Click **Tools > Snapshots** from the navigation bar to view the list of saved snapshots. These snapshots include a timestamp of when they were created. + +# Safe Timestamps + +_Available as of v2.3.0_ + +As of v2.2.6, snapshot files are timestamped to simplify processing the files using external tools and scripts, but in some S3 compatible backends, these timestamps were unusable. As of Rancher v2.3.0, the option `safe_timestamp` is added to support compatible file names. When this flag is set to `true`, all special characters in the snapshot filename timestamp are replaced. + +This option is not available directly in the UI, and is only available through the `Edit as Yaml` interface. + +# Enabling Snapshot Features for Clusters Created Before Rancher v2.2.0 + +If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/). diff --git a/content/rancher/v2.x/en/cluster-admin/certificate-rotation/_index.md b/content/rancher/v2.x/en/cluster-admin/certificate-rotation/_index.md index 2323917c395..357ab776e07 100644 --- a/content/rancher/v2.x/en/cluster-admin/certificate-rotation/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/certificate-rotation/_index.md @@ -25,7 +25,7 @@ Rancher launched Kubernetes clusters have the ability to rotate the auto-generat 1. In the **Global** view, navigate to the cluster that you want to rotate certificates. -2. Select the **Ellipsis (...) > Rotate Certificates**. +2. Select the **⋮ > Rotate Certificates**. 3. Select which certificates that you want to rotate. @@ -47,7 +47,7 @@ Rancher launched Kubernetes clusters have the ability to rotate the auto-generat 1. In the **Global** view, navigate to the cluster that you want to rotate certificates. -2. Select the **Ellipsis (...) > View in API**. +2. Select the **⋮ > View in API**. 3. Click on **RotateCertificates**. 
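If you want to confirm on a node that the rotation took effect, one option is to inspect a certificate's expiry date directly. This is an optional, hedged check that assumes SSH access to a control plane node and the default RKE certificate location:

```
# Optional check (assumes an RKE node with certificates under /etc/kubernetes/ssl):
openssl x509 -in /etc/kubernetes/ssl/kube-apiserver.pem -noout -enddate
```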
diff --git a/content/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/_index.md b/content/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/_index.md index c5929f81b0c..8ce29334dda 100644 --- a/content/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/cleaning-cluster-nodes/_index.md @@ -6,7 +6,7 @@ weight: 2055 This section describes how to disconnect a node from a Rancher-launched Kubernetes cluster and remove all of the Kubernetes components from the node. This process allows you to use the node for other purposes. -When you use Rancher to [launch nodes for a cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher), resources (containers/virtual network interfaces) and configuration items (certificates/configuration files) are created. +When you use Rancher to [launch nodes for a cluster]({{}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher), resources (containers/virtual network interfaces) and configuration items (certificates/configuration files) are created. When removing nodes from your Rancher launched Kubernetes cluster (provided that they are in `Active` state), those resources are automatically cleaned, and the only action needed is to restart the node. When a node has become unreachable and the automatic cleanup process cannot be used, we describe the steps that need to be executed before the node can be added to a cluster again. @@ -24,10 +24,10 @@ When cleaning nodes provisioned using Rancher, the following components are dele | All resources create under the `management.cattle.io` API Group | ✓ | ✓ | ✓ | | | All CRDs created by Rancher v2.x | ✓ | ✓ | ✓ | | -[1]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ -[2]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/ -[3]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ -[4]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/ +[1]: {{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ +[2]: {{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/ +[3]: {{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ +[4]: {{}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/ ## Removing a Node from a Cluster by Rancher UI @@ -59,7 +59,7 @@ After the imported cluster is detached from Rancher, the cluster's workloads wil {{% tab "By UI / API" %}} >**Warning:** This process will remove data from your cluster. Make sure you have created a backup of files you want to keep before executing the command, as data will be lost. -After you initiate the removal of an [imported cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#import-existing-cluster) using the Rancher UI (or API), the following events occur. +After you initiate the removal of an [imported cluster]({{}}/rancher/v2.x/en/cluster-provisioning/#import-existing-cluster) using the Rancher UI (or API), the following events occur. 1. Rancher creates a `serviceAccount` that it uses to remove the Rancher components from the cluster. This account is assigned the [clusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) and [clusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) permissions, which are required to remove the Rancher components. 
diff --git a/content/rancher/v2.x/en/cluster-admin/cloning-clusters/_index.md b/content/rancher/v2.x/en/cluster-admin/cloning-clusters/_index.md index b097e8b7b12..9e9335b1dd7 100644 --- a/content/rancher/v2.x/en/cluster-admin/cloning-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/cloning-clusters/_index.md @@ -13,16 +13,16 @@ Duplication of imported clusters, clusters in hosted Kubernetes providers, and c | Cluster Type | Cloneable? | |----------------------------------|---------------| -| [Nodes Hosted by Infrastructure Provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) | ✓ | -| [Hosted Kubernetes Providers]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) | | -| [Custom Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/) | | -| [Imported Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/) | | +| [Nodes Hosted by Infrastructure Provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) | ✓ | +| [Hosted Kubernetes Providers]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) | | +| [Custom Cluster]({{}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/) | | +| [Imported Cluster]({{}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/) | | > **Warning:** During the process of duplicating a cluster, you will edit a config file full of cluster settings. However, we recommend editing only values explicitly listed in this document, as cluster duplication is designed for simple cluster copying, _not_ wide scale configuration changes. Editing other values may invalidate the config file, which will lead to cluster deployment failure. ## Prerequisites -Download and install [Rancher CLI]({{< baseurl >}}/rancher/v2.x/en/cli). Remember to [create an API bearer token]({{< baseurl >}}/rancher/v2.x/en/user-settings/api-keys) if necessary. +Download and install [Rancher CLI]({{}}/rancher/v2.x/en/cli). Remember to [create an API bearer token]({{}}/rancher/v2.x/en/user-settings/api-keys) if necessary. ## 1. Export Cluster Config diff --git a/content/rancher/v2.x/en/cluster-admin/cluster-access/_index.md b/content/rancher/v2.x/en/cluster-admin/cluster-access/_index.md index 973ba43dcce..1e530ae86cf 100644 --- a/content/rancher/v2.x/en/cluster-admin/cluster-access/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/cluster-access/_index.md @@ -17,18 +17,18 @@ There are many ways you can interact with Kubernetes clusters that are managed b Interact with your clusters by launching a kubectl shell available in the Rancher UI. This option requires no configuration actions on your part. - For more information, see [Accessing Clusters with kubectl Shell]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-shell). + For more information, see [Accessing Clusters with kubectl Shell]({{}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-shell). - **Terminal remote connection** You can also interact with your clusters by installing [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your local desktop and then copying the cluster's kubeconfig file to your local `~/.kube/config` directory. - For more information, see [Accessing Clusters with kubectl and a kubeconfig File]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-and-a-kubeconfig-file). 
+ For more information, see [Accessing Clusters with kubectl and a kubeconfig File]({{}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-and-a-kubeconfig-file). - **Rancher CLI** - You can control your clusters by downloading Rancher's own command-line interface, [Rancher CLI]({{< baseurl >}}/rancher/v2.x/en/cli/). This CLI tool can interact directly with different clusters and projects or pass them `kubectl` commands. + You can control your clusters by downloading Rancher's own command-line interface, [Rancher CLI]({{}}/rancher/v2.x/en/cli/). This CLI tool can interact directly with different clusters and projects or pass them `kubectl` commands. - **Rancher API** - Finally, you can interact with your clusters over the Rancher API. Before you use the API, you must obtain an [API key]({{< baseurl >}}/rancher/v2.x/en/user-settings/api-keys/). To view the different resource fields and actions for an API object, open the API UI, which can be accessed by clicking on **View in API** for any Rancher UI object. \ No newline at end of file + Finally, you can interact with your clusters over the Rancher API. Before you use the API, you must obtain an [API key]({{}}/rancher/v2.x/en/user-settings/api-keys/). To view the different resource fields and actions for an API object, open the API UI, which can be accessed by clicking on **View in API** for any Rancher UI object. \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-admin/cluster-access/cluster-members/_index.md b/content/rancher/v2.x/en/cluster-admin/cluster-access/cluster-members/_index.md index 154fea58a24..0edd67b0730 100644 --- a/content/rancher/v2.x/en/cluster-admin/cluster-access/cluster-members/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/cluster-access/cluster-members/_index.md @@ -9,7 +9,7 @@ aliases: If you want to provide a user with access and permissions to _all_ projects, nodes, and resources within a cluster, assign the user a cluster membership. ->**Tip:** Want to provide a user with access to a _specific_ project within a cluster? See [Adding Project Members]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/project-members/) instead. +>**Tip:** Want to provide a user with access to a _specific_ project within a cluster? See [Adding Project Members]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/project-members/) instead. There are two contexts where you can add cluster members: @@ -33,23 +33,23 @@ Cluster administrators can edit the membership for a cluster, controlling which If external authentication is configured: - - Rancher returns users from your [external authentication]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/) source as you type. + - Rancher returns users from your [external authentication]({{}}/rancher/v2.x/en/admin-settings/authentication/) source as you type. >**Using AD but can't find your users?** - >There may be an issue with your search attribute configuration. See [Configuring Active Directory Authentication: Step 5]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/ad/). + >There may be an issue with your search attribute configuration. See [Configuring Active Directory Authentication: Step 5]({{}}/rancher/v2.x/en/admin-settings/authentication/ad/). - A drop-down allows you to add groups instead of individual users. The drop-down only lists groups that you, the logged in user, are part of. 
- >**Note:** If you are logged in as a local user, external users do not display in your search results. For more information, see [External Authentication Configuration and Principal Users]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users). + >**Note:** If you are logged in as a local user, external users do not display in your search results. For more information, see [External Authentication Configuration and Principal Users]({{}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users). 4. Assign the user or group **Cluster** roles. - [What are Cluster Roles?]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/) + [What are Cluster Roles?]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/) >**Tip:** For Custom Roles, you can modify the list of individual roles available for assignment. > - > - To add roles to the list, [Add a Custom Role]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/). - > - To remove roles from the list, [Lock/Unlock Roles]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/locked-roles). + > - To add roles to the list, [Add a Custom Role]({{}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/). + > - To remove roles from the list, [Lock/Unlock Roles]({{}}/rancher/v2.x/en/admin-settings/rbac/locked-roles). **Result:** The chosen users are added to the cluster. diff --git a/content/rancher/v2.x/en/cluster-admin/editing-clusters/_index.md b/content/rancher/v2.x/en/cluster-admin/editing-clusters/_index.md index 5c2cf122f0c..e237deba153 100644 --- a/content/rancher/v2.x/en/cluster-admin/editing-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/editing-clusters/_index.md @@ -3,12 +3,12 @@ title: Cluster Configuration weight: 2025 --- -After you provision a Kubernetes cluster using Rancher, you can still edit options and settings for the cluster. To edit your cluster, open the **Global** view, make sure the **Clusters** tab is selected, and then select **Ellipsis (...) > Edit** for the cluster that you want to edit. +After you provision a Kubernetes cluster using Rancher, you can still edit options and settings for the cluster. To edit your cluster, open the **Global** view, make sure the **Clusters** tab is selected, and then select **⋮ > Edit** for the cluster that you want to edit. To Edit an Existing Cluster ![Edit Cluster]({{}}/img/rancher/edit-cluster.png) -The options and settings available for an existing cluster change based on the method that you used to provision it. For example, only clusters [provisioned by RKE]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) have **Cluster Options** available for editing. +The options and settings available for an existing cluster change based on the method that you used to provision it. For example, only clusters [provisioned by RKE]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) have **Cluster Options** available for editing. The following table summarizes the options and settings available for each cluster type: @@ -24,7 +24,7 @@ Cluster administrators can [edit the membership for a cluster,]({{}}/ra ## Cluster Options -When editing clusters, clusters that are [launched using RKE]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) feature more options than clusters that are imported or hosted by a Kubernetes provider. 
The headings that follow document options available only for RKE clusters. +When editing clusters, clusters that are [launched using RKE]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) feature more options than clusters that are imported or hosted by a Kubernetes provider. The headings that follow document options available only for RKE clusters. ### Updating ingress-nginx @@ -34,26 +34,26 @@ If the `updateStrategy` of `ingress-nginx` is `OnDelete`, you will need to delet # Editing Other Cluster Options -In [clusters launched by RKE]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), you can edit any of the remaining options that follow. +In [clusters launched by RKE]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), you can edit any of the remaining options that follow. >**Note:** These options are not available for imported clusters or hosted Kubernetes clusters. Options for RKE Clusters -![Cluster Options]({{< baseurl >}}/img/rancher/cluster-options.png) +![Cluster Options]({{}}/img/rancher/cluster-options.png) Option | Description | ---------|----------| Kubernetes Version | The version of Kubernetes installed on each cluster node. For more detail, see [Upgrading Kubernetes]({{}}/rancher/v2.x/en/cluster-admin/upgrading-kubernetes). | - Network Provider | The [container networking interface]({{< baseurl >}}/rancher/v2.x/en/faq/networking/#cni-providers) that powers networking for your cluster.

**Note:** You can only choose this option while provisioning your cluster. It cannot be edited later. | + Network Provider | The [container networking interface]({{}}/rancher/v2.x/en/faq/networking/#cni-providers) that powers networking for your cluster.

**Note:** You can only choose this option while provisioning your cluster. It cannot be edited later. | Project Network Isolation | As of Rancher v2.0.7, if you're using the Canal network provider, you can choose whether to enable or disable inter-project communication. | Nginx Ingress | If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud-provider that doesn't have a native load-balancing feature, enable this option to use Nginx ingress within the cluster. | Metrics Server Monitoring | Each cloud provider capable of launching a cluster using RKE can collect metrics and monitor for your cluster nodes. Enable this option to view your node metrics from your cloud provider's portal. | - Pod Security Policy Support | Enables [pod security policies]({{< baseurl >}}/rancher/v2.x/en/admin-settings/pod-security-policies/) for the cluster. After enabling this option, choose a policy using the **Default Pod Security Policy** drop-down. | - Docker version on nodes | Configures whether nodes are allowed to run versions of Docker that Rancher doesn't officially support. If you choose to require a [supported Docker version]({{< baseurl >}}/rancher/v2.x/en/installation/options/rke-add-on/layer-7-lb/), Rancher will stop pods from running on nodes that don't have a supported Docker version installed. | + Pod Security Policy Support | Enables [pod security policies]({{}}/rancher/v2.x/en/admin-settings/pod-security-policies/) for the cluster. After enabling this option, choose a policy using the **Default Pod Security Policy** drop-down. | + Docker version on nodes | Configures whether nodes are allowed to run versions of Docker that Rancher doesn't officially support. If you choose to require a [supported Docker version]({{}}/rancher/v2.x/en/installation/options/rke-add-on/layer-7-lb/), Rancher will stop pods from running on nodes that don't have a supported Docker version installed. | Docker Root Directory | The directory on your cluster nodes where you've installed Docker. If you install Docker on your nodes to a non-default directory, update this path. | Default Pod Security Policy | If you enable **Pod Security Policy Support**, use this drop-down to choose the pod security policy that's applied to the cluster. | - Cloud Provider | If you're using a cloud provider to host cluster nodes launched by RKE, enable [this option]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) so that you can use the cloud provider's native features. If you want to store persistent data for your cloud-hosted cluster, this option is required. | + Cloud Provider | If you're using a cloud provider to host cluster nodes launched by RKE, enable [this option]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) so that you can use the cloud provider's native features. If you want to store persistent data for your cloud-hosted cluster, this option is required. |
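Several of the options in the table above also appear in the cluster's YAML view described in the next section. The snippet below is a hedged sketch of how a few of them might look there (field names follow the Rancher v2.3.0+ layout with RKE options nested under `rancher_kubernetes_engine_config`; verify them against your own cluster's YAML):

```yaml
# Sketch only: a few of the cluster options above expressed in the YAML view.
docker_root_dir: /var/lib/docker       # Docker Root Directory
enable_network_policy: true            # Project Network Isolation (Canal)
rancher_kubernetes_engine_config:
  network:
    plugin: canal                      # Network Provider
  ingress:
    provider: nginx                    # Nginx Ingress
```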
# Editing Cluster as YAML @@ -67,6 +67,6 @@ Instead of using the Rancher UI to choose Kubernetes options for the cluster, ad In Rancher v2.0.0-v2.2.x, the config file is identical to the [cluster config file for the Rancher Kubernetes Engine]({{}}/rke/latest/en/config-options/), which is the tool Rancher uses to provision clusters. In Rancher v2.3.0, the RKE information is still included in the config file, but it is separated from other options, so that the RKE cluster config options are nested under the `rancher_kubernetes_engine_config` directive. For more information, see the [cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) -![image]({{< baseurl >}}/img/rancher/cluster-options-yaml.png) +![image]({{}}/img/rancher/cluster-options-yaml.png) -For an example of RKE config file syntax, see the [RKE documentation]({{< baseurl >}}/rke/latest/en/example-yamls/). +For an example of RKE config file syntax, see the [RKE documentation]({{}}/rke/latest/en/example-yamls/). diff --git a/content/rancher/v2.x/en/cluster-admin/nodes/_index.md b/content/rancher/v2.x/en/cluster-admin/nodes/_index.md index 31c3e595bae..b5406cb379b 100644 --- a/content/rancher/v2.x/en/cluster-admin/nodes/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/nodes/_index.md @@ -1,65 +1,122 @@ --- title: Nodes and Node Pools weight: 2030 -aliases: - - /rancher/v2.x/en/k8s-in-rancher/nodes/ --- -After you launch a Kubernetes cluster in Rancher, you can manage individual nodes from the cluster's **Node** tab. Depending on the [option used]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) to provision the cluster, there are different node options available. +After you launch a Kubernetes cluster in Rancher, you can manage individual nodes from the cluster's **Node** tab. Depending on the [option used]({{}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) to provision the cluster, there are different node options available. -This page covers the following topics: +> If you want to manage the _cluster_ and not individual nodes, see [Editing Clusters]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/editing-clusters). 
-- [Node options for each type of cluster](#node-options-for-each-type-of-cluster) -- [Cordoning and draining nodes](#cordoning-and-draining-nodes) -- [Editing a node](#editing-a-node) -- [Viewing a node API](#viewing-a-node-api) +This section covers the following topics: + +- [Node options available for each cluster creation option](#node-options-available-for-each-cluster-creation-option) + - [Nodes hosted by an infrastructure provider](#nodes-hosted-by-an-infrastructure-provider) + - [Nodes provisioned by hosted Kubernetes providers](#nodes-provisioned-by-hosted-kubernetes-providers) + - [Imported nodes](#imported-nodes) +- [Managing and editing individual nodes](#managing-and-editing-individual-nodes) +- [Viewing a node in the Rancher API](#viewing-a-node-in-the-rancher-api) - [Deleting a node](#deleting-a-node) - [Scaling nodes](#scaling-nodes) - [SSH into a node hosted by an infrastructure provider](#ssh-into-a-node-hosted-by-an-infrastructure-provider) -- [Managing node pools](#managing-node-pools) +- [Cordoning a node](#cordoning-a-node) +- [Draining a node](#draining-a-node) + - [Aggressive and safe draining options](#aggressive-and-safe-draining-options) + - [Grace period](#grace-period) + - [Timeout](#timeout) + - [Drained and cordoned state](#drained-and-cordoned-state) +- [Labeling a node to be ignored by Rancher](#labeling-a-node-to-be-ignored-by-rancher) -To manage individual nodes, browse to the cluster that you want to manage and then select **Nodes** from the main menu. You can open the options menu for a node by clicking its **Ellipsis** icon (**...**). +# Node Options Available for Each Cluster Creation Option ->**Note:** If you want to manage the _cluster_ and not individual nodes, see [Editing Clusters]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/editing-clusters). - -# Node Options for Each Type of Cluster - -The following table lists which node options are available for each [type of cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-options) in Rancher. Click the links in the **Option** column for more detailed information about each feature. +The following table lists which node options are available for each [type of cluster]({{}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-options) in Rancher. Click the links in the **Option** column for more detailed information about each feature. | Option | [Nodes Hosted by an Infrastructure Provider][1] | [Custom Node][2] | [Hosted Cluster][3] | [Imported Nodes][4] | Description | | ------------------------------------------------ | ------------------------------------------------ | ---------------- | ------------------- | ------------------- | ------------------------------------------------------------------ | | [Cordon](#cordoning-a-node) | ✓ | ✓ | ✓ | | Marks the node as unschedulable. | | [Drain](#draining-a-node) | ✓ | ✓ | ✓ | | Marks the node as unschedulable _and_ evicts all pods. | -| [Edit](#editing-a-node) | ✓ | ✓ | ✓ | | Enter a custom name, description, label, or taints for a node. | -| [View API](#viewing-a-node-api) | ✓ | ✓ | ✓ | | View API data. | +| [Edit](#managing-and-editing-individual-nodes) | ✓ | ✓ | ✓ | | Enter a custom name, description, label, or taints for a node. | +| [View API](#viewing-a-node-in-the-rancher-api) | ✓ | ✓ | ✓ | | View API data. | | [Delete](#deleting-a-node) | ✓ | ✓ | | | Deletes defective nodes from the cluster. 
| | [Download Keys](#ssh-into-a-node-hosted-by-an-infrastructure-provider) | ✓ | | | | Download SSH key for in order to SSH into the node. | | [Node Scaling](#scaling-nodes) | ✓ | | | | Scale the number of nodes in the node pool up or down. | -[1]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ -[2]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/ -[3]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ -[4]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/ +[1]: {{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ +[2]: {{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/ +[3]: {{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ +[4]: {{}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/ -### Notes for Node Pool Nodes +### Nodes Hosted by an Infrastructure Provider -Clusters provisioned using [one of the node pool options]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-pools) automatically maintain the node scale that's set during the initial cluster provisioning. This scale determines the number of active nodes that Rancher maintains for the cluster. +Node pools are available when you provision Rancher-launched Kubernetes clusters on nodes that are [hosted in an infrastructure provider.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) -### Notes for Nodes Provisioned by Hosted Kubernetes Providers +Clusters provisioned using [one of the node pool options]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-pools) can be scaled up or down if the node pool is edited. -Options for managing nodes [hosted by a Kubernetes provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) are somewhat limited in Rancher. Rather than using the Rancher UI to make edits such as scaling the number of nodes up or down, edit the cluster directly. +A node pool can also automatically maintain the node scale that's set during the initial cluster provisioning if [node auto-replace is enabled.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-auto-replace) This scale determines the number of active nodes that Rancher maintains for the cluster. -### Notes for Imported Nodes +Rancher uses [node templates]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) to replace nodes in the node pool. Each node template uses cloud provider credentials to allow Rancher to set up the node in the infrastructure provider. + +### Nodes Provisioned by Hosted Kubernetes Providers + +Options for managing nodes [hosted by a Kubernetes provider]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) are somewhat limited in Rancher. Rather than using the Rancher UI to make edits such as scaling the number of nodes up or down, edit the cluster directly. + +### Imported Nodes Although you can deploy workloads to an [imported cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/) using Rancher, you cannot manage individual cluster nodes. All management of imported cluster nodes must take place outside of Rancher. 
-# Cordoning and Draining Nodes +# Managing and Editing Individual Nodes + +Editing a node lets you: + +* Change its name +* Change its description +* Add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) +* Add/Remove [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) + +To manage individual nodes, browse to the cluster that you want to manage and then select **Nodes** from the main menu. You can open the options menu for a node by clicking its **⋮** icon (**...**). + +# Viewing a Node in the Rancher API + +Select this option to view the node's [API endpoints]({{< baseurl >}}/rancher/v2.x/en/api/). + +# Deleting a Node + +Use **Delete** to remove defective nodes from the cloud provider. + +When you the delete a defective node, Rancher can automatically replace it with an identically provisioned node if the node is in a node pool and [node auto-replace is enabled.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-auto-replace) + +>**Tip:** If your cluster is hosted by an infrastructure provider, and you want to scale your cluster down instead of deleting a defective node, [scale down](#scaling-nodes) rather than delete. + +# Scaling Nodes + +For nodes hosted by an infrastructure provider, you can scale the number of nodes in each [node pool]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-pools) by using the scale controls. This option isn't available for other cluster types. + +# SSH into a Node Hosted by an Infrastructure Provider + +For [nodes hosted by an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/), you have the option of downloading its SSH key so that you can connect to it remotely from your desktop. + +1. From the cluster hosted by an infrastructure provider, select **Nodes** from the main menu. + +1. Find the node that you want to remote into. Select **⋮ > Download Keys**. + + **Step Result:** A ZIP file containing files used for SSH is downloaded. + +1. Extract the ZIP file to any location. + +1. Open Terminal. Change your location to the extracted ZIP file. + +1. Enter the following command: + + ``` + ssh -i id_rsa root@ + ``` + +# Cordoning a Node _Cordoning_ a node marks it as unschedulable. This feature is useful for performing short tasks on the node during small maintenance windows, like reboots, upgrades, or decommissions. When you're done, power back on and make the node schedulable again by uncordoning it. -_Draining_ is the process of first cordoning the node, and then evicting all its pods. This feature is useful for performing node maintenance (like kernel upgrades or hardware maintenance). It prevents new pods from deploying to the node while redistributing existing pods so that users don't experience service interruption. +# Draining a Node -When nodes are drained, pods are handled with the following rules: +_Draining_ is the process of first cordoning the node, and then evicting all its pods. This feature is useful for performing node maintenance (like kernel upgrades or hardware maintenance). It prevents new pods from deploying to the node while redistributing existing pods so that users don't experience service interruption. - For pods with a replica set, the pod is replaced by a new pod that will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod. 
@@ -69,10 +126,12 @@ You can drain nodes that are in either a `cordoned` or `active` state. When you However, you can override the conditions draining when you initiate the drain. You're also given an opportunity to set a grace period and timeout value. +### Aggressive and Safe Draining Options + The node draining options are different based on your version of Rancher. -### Aggressive and Safe Draining Options for Rancher v2.2.x+ - +{{% tabs %}} +{{% tab "Rancher v2.2.x+" %}} There are two drain modes: aggressive and safe. - **Aggressive Mode** @@ -84,8 +143,8 @@ There are two drain modes: aggressive and safe. - **Safe Mode** If a node has standalone pods or ephemeral data it will be cordoned but not drained. - -### Aggressive and Safe Draining Options for Rancher Prior to v2.2.x +{{% /tab %}} +{{% tab "Rancher prior to v2.2.x" %}} The following list describes each drain option: @@ -100,7 +159,8 @@ The following list describes each drain option: - **Even if there are pods using emptyDir** If a pod uses emptyDir to store local data, you might not be able to safely delete it, since the data in the emptyDir will be deleted once the pod is removed from the node. Similar to the first option, Kubernetes expects the implementation to decide what to do with these pods. Choosing this option will delete these pods. - +{{% /tab %}} +{{% /tabs %}} ### Grace Period @@ -110,7 +170,7 @@ The timeout given to each pod for cleaning things up, so they will have chance t The amount of time drain should continue to wait before giving up. ->**Kubernetes Known Issue:** Currently, the [timeout setting](https://github.com/kubernetes/kubernetes/pull/64378) is not enforced while draining a node. This issue will be corrected as of Kubernetes 1.12. +>**Kubernetes Known Issue:** The [timeout setting](https://github.com/kubernetes/kubernetes/pull/64378) was not enforced while draining a node prior to Kubernetes 1.12. ### Drained and Cordoned State @@ -122,66 +182,45 @@ Once drain successfully completes, the node will be in a state of `drained`. You >**Want to know more about cordon and drain?** See the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node). +# Labeling a Node to be Ignored by Rancher -# Editing a Node +_Available as of 2.3.3_ -Editing a node lets you: +Some solutions, such as F5's BIG-IP integration, may require creating a node that is never registered to a cluster. -* Change its name -* Change its description -* Add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) -* Add/Remove [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) +Since the node will never finish registering, it will always be shown as unhealthy in the Rancher UI. +In that case, you may want to label the node to be ignored by Rancher so that Rancher only shows nodes as unhealthy when they are actually failing. -# Viewing a Node API +You can label nodes to be ignored by using a setting in the Rancher UI, or by using `kubectl`. -Select this option to view the node's [API endpoints]({{< baseurl >}}/rancher/v2.x/en/api/). +> **Note:** There is an [open issue](https://github.com/rancher/rancher/issues/24172) in which nodes labeled to be ignored can get stuck in an updating state. +### Labeling Nodes to be Ignored with the Rancher UI -# Deleting a Node +To add a node that is ignored by Rancher, -Use **Delete** to remove defective nodes from the cloud provider. 
When you the delete a defective node, Rancher automatically replaces it with an identically provisioned node. +1. From the **Global** view, click the **Settings** tab. +1. Go to the `ignore-node-name` setting and click **⋮ > Edit.** +1. Enter a name that Rancher will use to ignore nodes. All nodes with this name will be ignored. +1. Click **Save.** ->**Tip:** If your cluster is hosted by an infrastructure provider, and you want to scale your cluster down instead of deleting a defective node, [scale down](#scaling-nodes) rather than delete. +**Result:** Rancher will not wait to register nodes with this name. In the UI, the node will be displayed with a grayed-out status. The node is still part of the cluster and can be listed with `kubectl`. +If the setting is changed afterward, the ignored nodes will continue to be hidden. -# Scaling Nodes +### Labeling Nodes to be Ignored with kubectl -For nodes hosted by an infrastructure provider, you can scale the number of nodes in each node pool by using the scale controls. This option isn't available for other cluster types. +To add a node that will be ignored by Rancher, use `kubectl` to create a node that has the following label: -# SSH into a Node Hosted by an Infrastructure Provider +``` +cattle.rancher.io/node-status: ignore +``` -For [nodes hosted by an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/), you have the option of downloading its SSH key so that you can connect to it remotely from your desktop. +**Result:** If you add the node to a cluster, Rancher will not attempt to sync with this node. The node can still be part of the cluster and can be listed with `kubectl`. +If the label is added before the node is added to the cluster, the node will not be shown in the Rancher UI. -1. From the cluster hosted by an infrastructure provider, select **Nodes** from the main menu. +If the label is added after the node is added to a Rancher cluster, the node will not be removed from the UI. -1. Find the node that you want to remote into. Select **Ellipsis (...) > Download Keys**. - - **Step Result:** A ZIP file containing files used for SSH is downloaded. - -1. Extract the ZIP file to any location. - -1. Open Terminal. Change your location to the extracted ZIP file. - -1. Enter the following command: - - ``` - ssh -i id_rsa root@ - ``` - -# Managing Node Pools - -> **Prerequisite:** The options below are available only for clusters that are [launched using RKE.]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) The node pool features are not available for imported clusters or clusters hosted by a Kubernetes provider. - -In clusters [launched by RKE]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), you can: - -- Add new [pools of nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) to your cluster. The nodes added to the pool are provisioned according to the [node template]({{< baseurl >}}/rancher/v2.x/en/user-settings/node-templates/) that you use. - - - Click **+** and follow the directions on screen to create a new template. - - - You can also reuse existing templates by selecting one from the **Template** drop-down.
- -- Redistribute Kubernetes roles amongst your node pools by making different checkbox selections - -- Scale the number of nodes in a pool up or down (although, if you simply want to maintain your node scale, we recommend using the cluster's [Nodes tab]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/nodes/#nodes-provisioned-by-node-pool) instead.) +If you delete the node from the Rancher server using the Rancher UI or API, the node will not be removed from the cluster if the `nodeName` is listed in the Rancher settings under `ignore-node-name`. diff --git a/content/rancher/v2.x/en/cluster-admin/pod-security-policy/_index.md b/content/rancher/v2.x/en/cluster-admin/pod-security-policy/_index.md index 11e415f5b3a..261e1e11782 100644 --- a/content/rancher/v2.x/en/cluster-admin/pod-security-policy/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/pod-security-policy/_index.md @@ -3,23 +3,23 @@ title: Adding a Pod Security Policy weight: 80 --- -> **Prerequisite:** The options below are available only for clusters that are [launched using RKE.]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) +> **Prerequisite:** The options below are available only for clusters that are [launched using RKE.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) -When your cluster is running pods with security-sensitive configurations, assign it a [pod security policy]({{< baseurl >}}/rancher/v2.x/en/admin-settings/pod-security-policies/), which is a set of rules that monitors the conditions and settings in your pods. If a pod doesn't meet the rules specified in your policy, the policy stops it from running. +When your cluster is running pods with security-sensitive configurations, assign it a [pod security policy]({{}}/rancher/v2.x/en/admin-settings/pod-security-policies/), which is a set of rules that monitors the conditions and settings in your pods. If a pod doesn't meet the rules specified in your policy, the policy stops it from running. You can assign a pod security policy when you provision a cluster. However, if you need to relax or restrict security for your pods later, you can update the policy while editing your cluster. -1. From the **Global** view, find the cluster to which you want to apply a pod security policy. Select **Vertical Ellipsis (...) > Edit**. +1. From the **Global** view, find the cluster to which you want to apply a pod security policy. Select **⋮ > Edit**. 2. Expand **Cluster Options**. 3. From **Pod Security Policy Support**, select **Enabled**. - >**Note:** This option is only available for clusters [provisioned by RKE]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). + >**Note:** This option is only available for clusters [provisioned by RKE]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). 4. From the **Default Pod Security Policy** drop-down, select the policy you want to apply to the cluster. - Rancher ships with [policies]({{< baseurl >}}/rancher/v2.x/en/admin-settings/pod-security-policies/#default-pod-security-policies) of `restricted` and `unrestricted`, although you can [create custom policies]({{< baseurl >}}/rancher/v2.x/en/admin-settings/pod-security-policies/#default-pod-security-policies) as well. + Rancher ships with [policies]({{}}/rancher/v2.x/en/admin-settings/pod-security-policies/#default-pod-security-policies) of `restricted` and `unrestricted`, although you can [create custom policies]({{}}/rancher/v2.x/en/admin-settings/pod-security-policies/#default-pod-security-policies) as well. 5. Click **Save**. 
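If you want to double-check the result from the command line, the pod security policies present in the cluster can be listed with `kubectl`; this is just a quick sketch, and the policy names shown (such as `restricted`) depend on what is actually installed in your cluster:

```
# List the pod security policies known to the cluster
kubectl get podsecuritypolicies

# Inspect a specific policy, e.g. the "restricted" policy if it exists
kubectl describe podsecuritypolicy restricted
```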
diff --git a/content/rancher/v2.x/en/cluster-admin/projects-and-namespaces/_index.md b/content/rancher/v2.x/en/cluster-admin/projects-and-namespaces/_index.md index bf9c640651b..4dbae5210ec 100644 --- a/content/rancher/v2.x/en/cluster-admin/projects-and-namespaces/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/projects-and-namespaces/_index.md @@ -50,10 +50,18 @@ You can assign the following resources directly to namespaces: To manage permissions in a vanilla Kubernetes cluster, cluster admins configure role-based access policies for each namespace. With Rancher, user permissions are assigned on the project level instead, and permissions are automatically inherited by any namespace owned by the particular project. -> **Note:** If you create a namespace with `kubectl`, it may be unusable because `kubectl` doesn't require your new namespace to be scoped within a project that you have access to. If your permissions are restricted to the project level, it is better to [create a namespace through Rancher]({{}}/rancher/v2.x/en/project-admin/namespaces/#creating-namespaces) to ensure that you will have permission to access the namespace. - For more information on creating and moving namespaces, see [Namespaces]({{}}/rancher/v2.x/en/project-admin/namespaces/). +### Role-based access control issues with namespaces and kubectl + +Because projects are a concept introduced by Rancher, kubectl does not have the capability to restrict the creation of namespaces to a project the creator has access to. + +This means that when standard users with project-scoped permissions create a namespace with `kubectl`, it may be unusable because `kubectl` doesn't require the new namespace to be scoped within a certain project. + +If your permissions are restricted to the project level, it is better to [create a namespace through Rancher]({{}}/rancher/v2.x/en/project-admin/namespaces/#creating-namespaces) to ensure that you will have permission to access the namespace. + +If a standard user is a project owner, the user will be able to create namespaces within that project. The Rancher UI will prevent that user from creating namespaces outside the scope of the projects they have access to. + # About Projects In terms of hierarchy: diff --git a/content/rancher/v2.x/en/cluster-admin/restoring-etcd/_index.md b/content/rancher/v2.x/en/cluster-admin/restoring-etcd/_index.md index 0c70a823b0d..44f802fffc3 100644 --- a/content/rancher/v2.x/en/cluster-admin/restoring-etcd/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/restoring-etcd/_index.md @@ -1,15 +1,22 @@ --- -title: Restoring etcd +title: Restoring a Cluster from Backup weight: 2050 --- _Available as of v2.2.0_ -etcd backup and recovery for [Rancher launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) can be easily performed. Snapshots of the etcd database are taken and saved either locally onto the etcd nodes or to a S3 compatible target. The advantages of configuring S3 is that if all etcd nodes are lost, your snapshot is saved remotely and can be used to restore the cluster. +etcd backup and recovery for [Rancher launched Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) can be easily performed. Snapshots of the etcd database are taken and saved either locally onto the etcd nodes or to an S3 compatible target. The advantage of configuring S3 is that if all etcd nodes are lost, your snapshot is saved remotely and can be used to restore the cluster.
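As a rough way to confirm that snapshots are actually being written, you can look for them directly on an etcd node. The path below is the usual default location for local snapshots on Rancher-launched clusters, but treat it as an assumption and verify it against your own configuration:

```
# On an etcd node: list locally stored etcd snapshots (default location; may differ in your setup)
ls -lh /opt/rke/etcd-snapshots
```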
-Rancher recommends enabling the [ability to set up recurring snapshots of etcd]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/#configuring-recurring-snapshots-for-the-cluster), but [one-time snapshots]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/#one-time-snapshots) can easily be taken as well. Rancher allows restore from [saved snapshots](#restoring-your-cluster-from-a-snapshot) or if you don't have any snapshots, you can still [restore etcd](#recovering-etcd-without-a-snapshot). +Rancher recommends enabling the [ability to set up recurring snapshots of etcd]({{}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/#configuring-recurring-snapshots), but [one-time snapshots]({{}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/#one-time-snapshots) can easily be taken as well. Rancher allows restore from [saved snapshots](#restoring-a-cluster-from-a-snapshot) or if you don't have any snapshots, you can still [restore etcd](#recovering-etcd-without-a-snapshot). ->**Note:** If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the [updated snapshot features]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/). Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to back up and restore etcd through the UI. +As of Rancher v2.4.0, clusters can also be restored to a prior Kubernetes version and cluster configuration. + +This section covers the following topics: + +- [Viewing Available Snapshots](#viewing-available-snapshots) +- [Restoring a Cluster from a Snapshot](#restoring-a-cluster-from-a-snapshot) +- [Recovering etcd without a Snapshot](#recovering-etcd-without-a-snapshot) +- [Enabling snapshot features for clusters created before Rancher v2.2.0](#enabling-snapshot-features-for-clusters-created-before-rancher-v2-2-0) ## Viewing Available Snapshots @@ -19,25 +26,61 @@ The list of all available snapshots for the cluster is available. 2. Click **Tools > Snapshots** from the navigation bar to view the list of saved snapshots. These snapshots include a timestamp of when they were created. -## Restoring your Cluster from a Snapshot +## Restoring a Cluster from a Snapshot If your Kubernetes cluster is broken, you can restore the cluster from a snapshot. -1. In the **Global** view, navigate to the cluster that you want to view snapshots. +Restorations changed in Rancher v2.4.0. -2. Click the **Vertical Ellipsis (...) > Restore Snapshot**. +{{% tabs %}} +{{% tab "Rancher v2.4.0+" %}} -3. Select the snapshot that you want to use for restoring your cluster from the dropdown of available snapshots. Click **Save**. +Snapshots are composed of the cluster data in etcd, the Kubernetes version, and the cluster configuration in the `cluster.yml`. These components allow you to select from the following options when restoring a cluster from a snapshot: - > **Note:** Snapshots from S3 can only be restored from if the cluster is configured to take recurring snapshots on S3. +- **Restore just the etcd contents:** This restoration is similar to restoring from snapshots in Rancher prior to v2.4.0. +- **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes.
+- **Restore etcd, Kubernetes version, and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading. + +When rolling back to a prior Kubernetes version, the [upgrade strategy options]({{}}/rancher/v2.x/en/cluster-admin/upgrading-kubernetes/#configuring-the-upgrade-strategy) are ignored. Worker nodes are not cordoned or drained before being reverted to the older Kubernetes version, so that an unhealthy cluster can be more quickly restored to a healthy state. + +> **Prerequisite:** To restore snapshots from S3, the cluster needs to be configured to [take recurring snapshots on S3.]({{}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/#configuring-recurring-snapshots) + +1. In the **Global** view, navigate to the cluster that you want to restore from a snapshot. + +2. Click the **⋮ > Restore Snapshot**. + +3. Select the snapshot that you want to use for restoring your cluster from the dropdown of available snapshots. + +4. In the **Restoration Type** field, choose one of the restoration options described above. + +5. Click **Save**. **Result:** The cluster will go into `updating` state and the process of restoring the `etcd` nodes from the snapshot will start. The cluster is restored when it returns to an `active` state. -> **Note:** If you are restoring a cluster with unavailable etcd nodes, it's recommended that all etcd nodes are removed from Rancher before attempting to restore. For clusters that were provisioned using [nodes hosted in an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/), new etcd nodes will automatically be created. For [custom clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/), please ensure that you add new etcd nodes to the cluster. +{{% /tab %}} +{{% tab "Rancher prior to v2.4.0" %}} + +> **Prerequisites:** +> +> - Make sure your etcd nodes are healthy. If you are restoring a cluster with unavailable etcd nodes, it's recommended that all etcd nodes are removed from Rancher before attempting to restore. For clusters in which Rancher used node pools to provision [nodes in an infrastructure provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/), new etcd nodes will automatically be created. For [custom clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/), please ensure that you add new etcd nodes to the cluster. +> - To restore snapshots from S3, the cluster needs to be configured to [take recurring snapshots on S3.]({{}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/#configuring-recurring-snapshots) + +1. In the **Global** view, navigate to the cluster that you want to restore from a snapshot. + +2. Click the **⋮ > Restore Snapshot**. + +3. Select the snapshot that you want to use for restoring your cluster from the dropdown of available snapshots. + +4. Click **Save**. + +**Result:** The cluster will go into `updating` state and the process of restoring the `etcd` nodes from the snapshot will start. The cluster is restored when it returns to an `active` state. + +{{% /tab %}} +{{% /tabs %}} ## Recovering etcd without a Snapshot -If the group of etcd nodes loses quorum, the Kubernetes cluster will report a failure because no operations, e.g. deploying workloads, can be executed in the Kubernetes cluster.
Please review the best practices for the what the [number of etcd nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/production/#count-of-etcd-nodes) should be in a Kubernetes cluster. If you want to recover your set of etcd nodes, follow these instructions: +If the group of etcd nodes loses quorum, the Kubernetes cluster will report a failure because no operations, e.g. deploying workloads, can be executed in the Kubernetes cluster. Please review the best practices for what the [number of etcd nodes]({{}}/rancher/v2.x/en/cluster-provisioning/production/#count-of-etcd-nodes) should be in a Kubernetes cluster. If you want to recover your set of etcd nodes, follow these instructions: 1. Keep only one etcd node in the cluster by removing all other etcd nodes. @@ -63,4 +106,8 @@ If the group of etcd nodes loses quorum, the Kubernetes cluster will report a fa 5. Run the revised command. -6. After the single nodes is up and running, Rancher recommends adding additional etcd nodes to your cluster. If you have a [custom cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/) and you want to reuse an old node, you are required to [clean up the nodes]({{< baseurl >}}/rancher/v2.x/en/faq/cleaning-cluster-nodes/) before attempting to add them back into a cluster. +6. After the single node is up and running, Rancher recommends adding additional etcd nodes to your cluster. If you have a [custom cluster]({{}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/) and you want to reuse an old node, you are required to [clean up the nodes]({{}}/rancher/v2.x/en/faq/cleaning-cluster-nodes/) before attempting to add them back into a cluster. + +# Enabling Snapshot Features for Clusters Created Before Rancher v2.2.0 + +If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/). \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-admin/tools/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/_index.md index 8a1a01be91f..ed8fd982157 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/_index.md @@ -13,6 +13,7 @@ Rancher contains a variety of tools that aren't included in Kubernetes to assist - [Logging](#logging) - [Monitoring](#monitoring) - [Istio](#istio) +- [OPA Gatekeeper](#opa-gatekeeper) @@ -47,3 +48,7 @@ Using Rancher, you can monitor the state and processes of your cluster nodes, Ku ## Istio [Istio](https://istio.io/) is an open-source tool that makes it easier for DevOps teams to observe, control, troubleshoot, and secure the traffic within a complex network of microservices. For details on how to enable Istio in Rancher, refer to the [Istio section.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio) + +## OPA Gatekeeper + + [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper) is an open-source project that provides integration between OPA and Kubernetes to provide policy control via admission controller webhooks.
For details on how to enable Gatekeeper in Rancher, refer to the [OPA Gatekeeper section.]({{}}/rancher/v2.x/en/cluster-admin/tools/opa-gatekeeper) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/alerts/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/alerts/_index.md index d8d19368108..1b0b9685295 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/alerts/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/alerts/_index.md @@ -11,11 +11,12 @@ Before you can receive alerts, you must configure one or more notifier in Ranche When you create a cluster, some alert rules are predefined. You can receive these alerts if you configure a [notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers) for them. -For details about what triggers the predefined alerts, refer to the [documentation on default alerts.]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts) +For details about what triggers the predefined alerts, refer to the [documentation on default alerts.]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts) This section covers the following topics: - [Alert event examples](#alert-event-examples) + - [Prometheus queries](#prometheus-queries) - [Urgency levels](#urgency-levels) - [Scope of alerts](#scope-of-alerts) - [Adding cluster alerts](#adding-cluster-alerts) @@ -25,18 +26,24 @@ This section covers the following topics: Some examples of alert events are: -- A Kubernetes [master component]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) entering an unhealthy state. -- A node or [workload]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/) error occurring. +- A Kubernetes [master component]({{}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) entering an unhealthy state. +- A node or [workload]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/) error occurring. - A scheduled deployment taking place as planned. - A node's hardware resources becoming overstressed. +### Prometheus Queries + +> **Prerequisite:** Monitoring must be [enabled]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) before you can trigger alerts with custom Prometheus queries or expressions. + +When you edit an alert rule, you will have the opportunity to configure the alert to be triggered based on a Prometheus expression. For examples of expressions, refer to [this page.]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression) + # Urgency Levels You can set an urgency level for each alert. This urgency appears in the notification you receive, helping you to prioritize your response actions. For example, if you have an alert configured to inform you of a routine deployment, no action is required. These alerts can be assigned a low priority level. However, if a deployment fails, it can critically impact your organization, and you need to react quickly. Assign these alerts a high priority level. # Scope of Alerts -The scope for alerts can be set at either the cluster level or [project level]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/alerts/). +The scope for alerts can be set at either the cluster level or [project level]({{}}/rancher/v2.x/en/project-admin/tools/alerts/). 
At the cluster level, Rancher monitors components in your Kubernetes cluster, and sends you alerts related to: @@ -47,9 +54,9 @@ At the cluster level, Rancher monitors components in your Kubernetes cluster, an # Adding Cluster Alerts -As a [cluster owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to send you alerts for cluster events. +As a [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to send you alerts for cluster events. ->**Prerequisite:** Before you can receive cluster alerts, you must [add a notifier]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/notifiers/#adding-notifiers). +>**Prerequisite:** Before you can receive cluster alerts, you must [add a notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/#adding-notifiers). 1. From the **Global** view, navigate to the cluster that you want to configure cluster alerts for. Select **Tools > Alerts**. Then click **Add Alert Group**. @@ -180,7 +187,7 @@ This alert type monitors for the overload from Prometheus expression querying, i - [**ETCD**](https://etcd.io/docs/v3.4.0/op-guide/monitoring/) - [**Kubernetes Components**](https://github.com/kubernetes/metrics) - [**Kubernetes Resources**](https://github.com/kubernetes/kube-state-metrics) - - [**Fluentd**](https://docs.fluentd.org/v1.0/articles/monitoring-prometheus) (supported by [Logging]({{< baseurl >}}/rancher/v2.x/en/tools/logging)) + - [**Fluentd**](https://docs.fluentd.org/v1.0/articles/monitoring-prometheus) (supported by [Logging]({{}}/rancher/v2.x//en/cluster-admin/tools/logging)) - [**Cluster Level Grafana**](http://docs.grafana.org/administration/metrics/) - **Cluster Level Prometheus** @@ -218,7 +225,7 @@ This alert type monitors for the overload from Prometheus expression querying, i 1. Continue adding more **Alert Rule** to the group. -1. Finally, choose the [notifiers]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) to send the alerts to. +1. Finally, choose the [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) to send the alerts to. - You can set up multiple notifiers. - You can change notifier recipients on the fly. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts/_index.md index 13277b3fbc4..ea7f91ff0e0 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts/_index.md @@ -5,7 +5,7 @@ weight: 1 When you create a cluster, some alert rules are predefined. These alerts notify you about signs that the cluster could be unhealthy. You can receive these alerts if you configure a [notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers) for them. -Several of the alerts use Prometheus expressions as the metric that triggers the alert. For more information on how expressions work, you can refer to the Rancher [documentation about Prometheus expressions]({{< baseurl >}} +Several of the alerts use Prometheus expressions as the metric that triggers the alert. For more information on how expressions work, you can refer to the Rancher [documentation about Prometheus expressions]({{}} /rancher/v2.x/en/cluster-admin/tools/monitoring/expression/) or the Prometheus [documentation about querying metrics](https://prometheus.io/docs/prometheus/latest/querying/basics/). 
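To give a concrete sense of what such an expression looks like, the etcd-related checks are built around metrics like `etcd_server_has_leader`, which is also listed in the expressions reference. As an illustration only, you can evaluate it by hand against the cluster's Prometheus HTTP API; the host and port below are placeholders:

```
# Returns 1 for an etcd member that currently sees a leader, 0 if leadership/quorum is lost
curl -s 'http://<prometheus-host>:9090/api/v1/query?query=etcd_server_has_leader'
```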
# Alerts for etcd diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/disabling-istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/disabling-istio/_index.md index 3cba3d5a86c..d2035689626 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/disabling-istio/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/disabling-istio/_index.md @@ -18,7 +18,7 @@ To disable Istio, # Disable Istio in a Namespace 1. In the Rancher UI, go to the project that has the namespace where you want to disable Istio. -1. On the **Workloads** tab, you will see a list of namespaces and the workloads deployed in them. Go to the namespace where you want to disable and click the **Ellipsis (...) > Disable Istio Auto Injection.** +1. On the **Workloads** tab, you will see a list of namespaces and the workloads deployed in them. Go to the namespace where you want to disable Istio and click the **⋮ > Disable Istio Auto Injection.** **Result:** When workloads are deployed in this namespace, they will not have the Istio sidecar. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/deploy-workloads/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/deploy-workloads/_index.md index 38bb20f588a..8e52d678bbf 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/deploy-workloads/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/deploy-workloads/_index.md @@ -7,7 +7,7 @@ weight: 4 Enabling Istio in a namespace only enables automatic sidecar injection for new workloads. To enable the Envoy sidecar for existing workloads, you need to enable it manually for each workload. -To inject the Istio sidecar on an existing workload in the namespace, go to the workload, click the **Ellipsis (...),** and click **Redeploy.** When the workload is redeployed, it will have the Envoy sidecar automatically injected. +To inject the Istio sidecar on an existing workload in the namespace, go to the workload, click the **⋮,** and click **Redeploy.** When the workload is redeployed, it will have the Envoy sidecar automatically injected. Wait a few minutes for the workload to upgrade to have the istio sidecar. Click it and go to the Containers section. You should be able to see istio-init and istio-proxy alongside your original workload. This means the Istio sidecar is enabled for the workload. Istio is doing all the wiring for the sidecar envoy. Now Istio can do all the features automatically if you enable them in the yaml. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/_index.md index 9df03283a12..9ea611c7c45 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/_index.md @@ -7,6 +7,8 @@ This cluster uses the default Nginx controller to allow traffic into the cluster A Rancher [administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) can configure Rancher to deploy Istio in a Kubernetes cluster. +> If the cluster has a Pod Security Policy enabled, there are [prerequisite steps]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/enable-istio-with-psp/) to complete first. + +1.
From the **Global** view, navigate to the **cluster** where you want to enable Istio. 1. Click **Tools > Istio.** 1. Optional: Configure member access and [resource limits]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/resources/) for the Istio components. Ensure you have enough resources on your worker nodes to enable Istio. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/enable-istio-with-psp/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/enable-istio-with-psp/_index.md new file mode 100644 index 00000000000..f31369cfc61 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/enable-istio-with-psp/_index.md @@ -0,0 +1,48 @@ +--- +title: Enable Istio with Pod Security Policies +--- + + >**Note:** The following guide is only for RKE provisioned clusters. + +If you have restrictive Pod Security Policies enabled, then Istio may not be able to function correctly, because it needs certain permissions in order to install itself and manage pod infrastructure. In this section, we will configure a cluster with PSPs enabled for an Istio install, and also set up the Istio CNI plugin. + +The Istio CNI plugin removes the need for each application pod to have a privileged `NET_ADMIN` container. For further information, see the [Istio CNI Plugin docs](https://istio.io/docs/setup/additional-setup/cni). Please note that the [Istio CNI Plugin is in alpha](https://istio.io/about/feature-stages/). + +- 1. [Configure the System Project Policy to allow Istio install.](#1-configure-the-system-project-policy-to-allow-istio-install) +- 2. [Install the CNI plugin in the System project.](#2-install-the-cni-plugin-in-the-system-project) +- 3. [Install Istio.](#3-install-istio) + +### 1. Configure the System Project Policy to allow Istio install + +1. From the main menu of the **Dashboard**, select **Projects/Namespaces**. +1. Find the **Project: System** project and select the **⋮ > Edit**. +1. Change the Pod Security Policy option to be unrestricted, then click Save. + + +### 2. Install the CNI Plugin in the System Project + +1. From the main menu of the **Dashboard**, select **Projects/Namespaces**. +1. Select the **Project: System** project. +1. Choose **Tools > Catalogs** in the navigation bar. +1. Add a catalog with the following: + 1. Name: istio-cni + 1. Catalog URL: https://github.com/istio/cni + 1. Branch: The branch that matches your current release, for example: `release-1.4`. +1. From the main menu select **Apps** +1. Click Launch and select istio-cni +1. Update the namespace to be "kube-system" +1. In the answers section, click "Edit as YAML" and paste in the following, then click launch: + +``` +--- + logLevel: "info" + excludeNamespaces: + - "istio-system" + - "kube-system" +``` + +### 3. Install Istio + +Follow the [primary instructions]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/), adding a custom answer: `istio_cni.enabled: true`. + +After Istio has finished installing, the Apps page in System Projects should show both `istio` and `istio-cni` applications deployed successfully. Sidecar injection will now be functional.
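As a rough sanity check after the install, you can confirm that the CNI plugin's pods are running; this assumes the chart was launched into the `kube-system` namespace as described above:

```
# The istio-cni chart runs a pod on each node in kube-system; all of them should be Running
kubectl -n kube-system get pods | grep istio-cni
```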
diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-namespace/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-namespace/_index.md index 948d15c7c05..9065424e534 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-namespace/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-namespace/_index.md @@ -10,7 +10,7 @@ This namespace setting will only affect new workloads in the namespace. Any pree > **Prerequisite:** To enable Istio in a namespace, the cluster must have Istio enabled. 1. In the Rancher UI, go to the cluster view. Click the **Projects/Namespaces** tab. -1. Go to the namespace where you want to enable the Istio sidecar auto injection and click the **Ellipsis (...).** +1. Go to the namespace where you want to enable the Istio sidecar auto injection and click the **⋮.** 1. Click **Edit.** 1. In the **Istio sidecar auto injection** section, click **Enable.** 1. Click **Save.** @@ -33,7 +33,7 @@ To add the annotation to a workload, 1. From the **Global** view, open the project that has the workload that should not have the sidecar. 1. Click **Resources > Workloads.** -1. Go to the workload that should not have the sidecar and click **Ellipsis (...) > Edit.** +1. Go to the workload that should not have the sidecar and click **⋮ > Edit.** 1. Click **Show Advanced Options.** Then expand the **Labels & Annotations** section. 1. Click **Add Annotation.** 1. In the **Key** field, enter `sidecar.istio.io/inject`. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/node-selectors/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/node-selectors/_index.md index aa7e807b095..994656361e3 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/node-selectors/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/setup/node-selectors/_index.md @@ -14,7 +14,7 @@ In larger deployments, it is strongly advised that Istio's infrastructure be pla First, add a label to the node where Istio components should be deployed. This label can have any key-value pair. For this example, we will use the key `istio` and the value `enabled`. 1. From the cluster view, go to the **Nodes** tab. -1. Go to a worker node that will host the Istio components and click **Ellipsis (...) > Edit.** +1. Go to a worker node that will host the Istio components and click **⋮ > Edit.** 1. Expand the **Labels & Annotations** section. 1. Click **Add Label.** 1. In the fields that appear, enter `istio` for the key and `enabled` for the value. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/logging/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/logging/_index.md index b1431bf3750..07c80a651cf 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/logging/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/logging/_index.md @@ -55,8 +55,8 @@ Logging Driver: json-file You can configure logging at either cluster level or project level. -- Cluster logging writes logs for every pod in the cluster, i.e. in all the projects. For [RKE clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters), it also writes logs for all the Kubernetes system components. -- [Project logging]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/logging/) writes logs for every pod in that particular project. +- Cluster logging writes logs for every pod in the cluster, i.e. in all the projects. 
For [RKE clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters), it also writes logs for all the Kubernetes system components. +- [Project logging]({{}}/rancher/v2.x/en/project-admin/tools/logging/) writes logs for every pod in that particular project. Logs that are sent to your logging service are from the following locations: @@ -65,7 +65,7 @@ Logs that are sent to your logging service are from the following locations: # Enabling Cluster Logging -As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to send Kubernetes logs to a logging service. +As an [administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to send Kubernetes logs to a logging service. 1. From the **Global** view, navigate to the cluster that you want to configure cluster logging. @@ -73,11 +73,11 @@ As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global 1. Select a logging service and enter the configuration. Refer to the specific service for detailed configuration. Rancher supports integration with the following services: - - [Elasticsearch]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/elasticsearch/) - - [Splunk]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/splunk/) - - [Kafka]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/kafka/) - - [Syslog]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/syslog/) - - [Fluentd]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/fluentd/) + - [Elasticsearch]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/elasticsearch/) + - [Splunk]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/splunk/) + - [Kafka]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/kafka/) + - [Syslog]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/syslog/) + - [Fluentd]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/fluentd/) 1. (Optional) Instead of using the UI to configure the logging services, you can enter custom advanced configurations by clicking on **Edit as File**, which is located above the logging targets. This link is only visible after you select a logging service. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/logging/splunk/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/logging/splunk/_index.md index 00002ac3c71..0d4edcf49ba 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/logging/splunk/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/logging/splunk/_index.md @@ -55,10 +55,10 @@ If your instance of Splunk uses SSL, your **Endpoint** will need to begin with ` 1. Click on **Search & Reporting**. The number of **Indexed Events** listed should be increasing. 1. Click on Data Summary and select the Sources tab. - ![View Logs]({{< baseurl >}}/img/rancher/splunk/splunk4.jpg) + ![View Logs]({{}}/img/rancher/splunk/splunk4.jpg) 1. To view the actual logs, click on the source that you declared earlier. 
- ![View Logs]({{< baseurl >}}/img/rancher/splunk/splunk5.jpg) + ![View Logs]({{}}/img/rancher/splunk/splunk5.jpg) ## Troubleshooting diff --git a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/monitoring/_index.md index ede960e2578..9e9703a2d32 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/monitoring/_index.md @@ -33,29 +33,29 @@ Multi-tenancy support in terms of cluster-only and project-only Prometheus insta # Monitoring Scope -Using Prometheus, you can monitor Rancher at both the cluster level and [project level]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/monitoring/). For each cluster and project that is enabled for monitoring, Rancher deploys a Prometheus server. +Using Prometheus, you can monitor Rancher at both the cluster level and [project level]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/). For each cluster and project that is enabled for monitoring, Rancher deploys a Prometheus server. - Cluster monitoring allows you to view the health of your Kubernetes cluster. Prometheus collects metrics from the cluster components below, which you can view in graphs and charts. - - [Kubernetes control plane]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#kubernetes-components-metrics) - - [etcd database]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#etcd-metrics) - - [All nodes (including workers)]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#cluster-metrics) + - [Kubernetes control plane]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#kubernetes-components-metrics) + - [etcd database]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#etcd-metrics) + - [All nodes (including workers)]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#cluster-metrics) -- [Project monitoring]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/monitoring/) allows you to view the state of pods running in a given project. Prometheus collects metrics from the project's deployed HTTP and TCP/UDP workloads. +- [Project monitoring]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/) allows you to view the state of pods running in a given project. Prometheus collects metrics from the project's deployed HTTP and TCP/UDP workloads. # Enabling Cluster Monitoring -As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Prometheus to monitor your Kubernetes cluster. +As an [administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Prometheus to monitor your Kubernetes cluster. 1. From the **Global** view, navigate to the cluster that you want to configure cluster monitoring. 1. Select **Tools > Monitoring** in the navigation bar. -1. Select **Enable** to show the [Prometheus configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/). Review the [resource consumption recommendations](#resource-consumption) to ensure you have enough resources for Prometheus and on your worker nodes to enable monitoring. 
Enter in your desired configuration options. +1. Select **Enable** to show the [Prometheus configuration options]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/). Review the [resource consumption recommendations](#resource-consumption) to ensure you have enough resources for Prometheus and on your worker nodes to enable monitoring. Enter in your desired configuration options. 1. Click **Save**. -**Result:** The Prometheus server will be deployed as well as two monitoring applications. The two monitoring applications, `cluster-monitoring` and `monitoring-operator`, are added as an [application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project. After the applications are `active`, you can start viewing [cluster metrics]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/) through the [Rancher dashboard]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/viewing-metrics/#rancher-dashboard) or directly from [Grafana]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#grafana). +**Result:** The Prometheus server will be deployed as well as two monitoring applications. The two monitoring applications, `cluster-monitoring` and `monitoring-operator`, are added as an [application]({{}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project. After the applications are `active`, you can start viewing [cluster metrics]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/) through the [Rancher dashboard]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/viewing-metrics/#rancher-dashboard) or directly from [Grafana]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#grafana). # Resource Consumption diff --git a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/_index.md index 14c797848cf..61c20f040c0 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/_index.md @@ -35,11 +35,11 @@ Some of the biggest metrics to look out for: 1. Click on **Node Metrics**. -[_Get expressions for Cluster Metrics_]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#cluster-metrics) +[_Get expressions for Cluster Metrics_]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#cluster-metrics) ### Etcd Metrics ->**Note:** Only supported for [Rancher launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). +>**Note:** Only supported for [Rancher launched Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). Etcd metrics display the operations of the etcd database on each of your cluster nodes. After establishing a baseline of normal etcd operational metrics, observe them for abnormal deltas between metric refreshes, which indicate potential issues with etcd. Always address etcd issues immediately! @@ -55,13 +55,13 @@ Some of the biggest metrics to look out for: If this statistic suddenly grows, it usually indicates network communication issues that constantly force the cluster to elect a new leader. 
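If the etcd charts look unhealthy, it can help to cross-check etcd itself from one of the etcd nodes. The commands below are only a sketch for RKE-provisioned clusters, where the etcd container is typically named `etcd` and usually has `etcdctl` preconfigured; verify this against your own setup:

```
# Check whether this etcd member responds and is healthy
docker exec etcd etcdctl endpoint health

# Show member status, including whether this member is currently the leader
docker exec etcd etcdctl endpoint status --write-out table
```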
-[_Get expressions for Etcd Metrics_]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#etcd-metrics) +[_Get expressions for Etcd Metrics_]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#etcd-metrics) ### Kubernetes Components Metrics Kubernetes components metrics display data about the cluster's individual Kubernetes components. Primarily, it displays information about connections and latency for each component: the API server, controller manager, scheduler, and ingress controller. ->**Note:** The metrics for the controller manager, scheduler and ingress controller are only supported for [Rancher launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). +>**Note:** The metrics for the controller manager, scheduler and ingress controller are only supported for [Rancher launched Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). When analyzing Kubernetes component metrics, don't be concerned about any single standalone metric in the charts and graphs that display. Rather, you should establish a baseline for metrics considered normal following a period of observation, e.g. the range of values that your components usually operate within and are considered normal. After you establish this baseline, be on the lookout for large deltas in the charts and graphs, as these big changes usually indicate a problem that you need to investigate. @@ -87,13 +87,13 @@ Some of the more important component metrics to monitor are: How fast ingress is routing connections to your cluster services. -[_Get expressions for Kubernetes Component Metrics_]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#kubernetes-components-metrics) +[_Get expressions for Kubernetes Component Metrics_]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#kubernetes-components-metrics) ## Rancher Logging Metrics -Although the Dashboard for a cluster primarily displays data sourced from Prometheus, it also displays information for cluster logging, provided that you have [configured Rancher to use a logging service]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/). +Although the Dashboard for a cluster primarily displays data sourced from Prometheus, it also displays information for cluster logging, provided that you have [configured Rancher to use a logging service]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/). -[_Get expressions for Rancher Logging Metrics_]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#rancher-logging-metrics) +[_Get expressions for Rancher Logging Metrics_]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#rancher-logging-metrics) ## Finding Workload Metrics @@ -110,4 +110,4 @@ Workload metrics display the hardware utilization for a Kubernetes workload. You - **View the Pod Metrics:** Click on **Pod Metrics**. - **View the Container Metrics:** In the **Containers** section, select a specific container and click on its name. Click on **Container Metrics**. 
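For a quick command-line approximation of the same usage numbers, `kubectl top` can serve as a cross-check; note that it relies on the metrics API (metrics-server) rather than the Prometheus monitoring stack, and the namespace below is a placeholder:

```
# Current CPU and memory usage per node and per pod
kubectl top nodes
kubectl top pods -n <namespace>
```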
-[_Get expressions for Workload Metrics_]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#workload-metrics) +[_Get expressions for Workload Metrics_]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#workload-metrics) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/_index.md index a667264c69c..9f5170c9779 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/_index.md @@ -1,375 +1,430 @@ --- -title: Expression +title: Prometheus Expressions weight: 4 --- -## In This Document +The PromQL expressions in this doc can be used to configure [alerts.]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/) + +> Before expression can be used in alerts, monitoring must be enabled. For more information, refer to the documentation on enabling monitoring [at the cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [at the project level.]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring) + +For more information about querying Prometheus, refer to the official [Prometheus documentation.](https://prometheus.io/docs/prometheus/latest/querying/basics/) - [Cluster Metrics](#cluster-metrics) - + [Node Metrics](#node-metrics) + - [Cluster CPU Utilization](#cluster-cpu-utilization) + - [Cluster Load Average](#cluster-load-average) + - [Cluster Memory Utilization](#cluster-memory-utilization) + - [Cluster Disk Utilization](#cluster-disk-utilization) + - [Cluster Disk I/O](#cluster-disk-i-o) + - [Cluster Network Packets](#cluster-network-packets) + - [Cluster Network I/O](#cluster-network-i-o) +- [Node Metrics](#node-metrics) + - [Node CPU Utilization](#node-cpu-utilization) + - [Node Load Average](#node-load-average) + - [Node Memory Utilization](#node-memory-utilization) + - [Node Disk Utilization](#node-disk-utilization) + - [Node Disk I/O](#node-disk-i-o) + - [Node Network Packets](#node-network-packets) + - [Node Network I/O](#node-network-i-o) - [Etcd Metrics](#etcd-metrics) + - [Etcd Has a Leader](#etcd-has-a-leader) + - [Number of Times the Leader Changes](#number-of-times-the-leader-changes) + - [Number of Failed Proposals](#number-of-failed-proposals) + - [GRPC Client Traffic](#grpc-client-traffic) + - [Peer Traffic](#peer-traffic) + - [DB Size](#db-size) + - [Active Streams](#active-streams) + - [Raft Proposals](#raft-proposals) + - [RPC Rate](#rpc-rate) + - [Disk Operations](#disk-operations) + - [Disk Sync Duration](#disk-sync-duration) - [Kubernetes Components Metrics](#kubernetes-components-metrics) + - [API Server Request Latency](#api-server-request-latency) + - [API Server Request Rate](#api-server-request-rate) + - [Scheduling Failed Pods](#scheduling-failed-pods) + - [Controller Manager Queue Depth](#controller-manager-queue-depth) + - [Scheduler E2E Scheduling Latency](#scheduler-e2e-scheduling-latency) + - [Scheduler Preemption Attempts](#scheduler-preemption-attempts) + - [Ingress Controller Connections](#ingress-controller-connections) + - [Ingress Controller Request Process Time](#ingress-controller-request-process-time) - [Rancher Logging Metrics](#rancher-logging-metrics) + - [Fluentd Buffer Queue Rate](#fluentd-buffer-queue-rate) + - [Fluentd Input Rate](#fluentd-input-rate) + - [Fluentd Output Errors Rate](#fluentd-output-errors-rate) + - [Fluentd Output 
Rate](#fluentd-output-rate) - [Workload Metrics](#workload-metrics) - + [Pod Metrics](#pod-metrics) - + [Container Metrics](#container-metrics) + - [Workload CPU Utilization](#workload-cpu-utilization) + - [Workload Memory Utilization](#workload-memory-utilization) + - [Workload Network Packets](#workload-network-packets) + - [Workload Network I/O](#workload-network-i-o) + - [Workload Disk I/O](#workload-disk-i-o) +- [Pod Metrics](#pod-metrics) + - [Pod CPU Utilization](#pod-cpu-utilization) + - [Pod Memory Utilization](#pod-memory-utilization) + - [Pod Network Packets](#pod-network-packets) + - [Pod Network I/O](#pod-network-i-o) + - [Pod Disk I/O](#pod-disk-i-o) +- [Container Metrics](#container-metrics) + - [Container CPU Utilization](#container-cpu-utilization) + - [Container Memory Utilization](#container-memory-utilization) + - [Container Disk I/O](#container-disk-i-o) -## Cluster Metrics +# Cluster Metrics -- **CPU Utilization** +### Cluster CPU Utilization - | Catalog | Expression | - | --- | --- | - | Detail | `1 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance))` | - | Summary | `1 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])))` | +| Catalog | Expression | +| --- | --- | +| Detail | `1 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance))` | +| Summary | `1 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])))` | -- **Load Average** +### Cluster Load Average - | Catalog | Expression | - | --- | --- | - | Detail |
load1`sum(node_load1) by (instance) / count(node_cpu_seconds_total{mode="system"}) by (instance)`
load5`sum(node_load5) by (instance) / count(node_cpu_seconds_total{mode="system"}) by (instance)`
load15`sum(node_load15) by (instance) / count(node_cpu_seconds_total{mode="system"}) by (instance)`
| - | Summary |
load1`sum(node_load1) by (instance) / count(node_cpu_seconds_total{mode="system"})`
load5`sum(node_load5) by (instance) / count(node_cpu_seconds_total{mode="system"})`
load15`sum(node_load15) by (instance) / count(node_cpu_seconds_total{mode="system"})`
| +| Catalog | Expression | +| --- | --- | +| Detail |
load1`sum(node_load1) by (instance) / count(node_cpu_seconds_total{mode="system"}) by (instance)`
load5`sum(node_load5) by (instance) / count(node_cpu_seconds_total{mode="system"}) by (instance)`
load15`sum(node_load15) by (instance) / count(node_cpu_seconds_total{mode="system"}) by (instance)`
| +| Summary |
load1`sum(node_load1) by (instance) / count(node_cpu_seconds_total{mode="system"})`
load5`sum(node_load5) by (instance) / count(node_cpu_seconds_total{mode="system"})`
load15`sum(node_load15) by (instance) / count(node_cpu_seconds_total{mode="system"})`
| -- **Memory Utilization** +### Cluster Memory Utilization - | Catalog | Expression | - | --- | --- | - | Detail | `1 - sum(node_memory_MemAvailable_bytes) by (instance) / sum(node_memory_MemTotal_bytes) by (instance)` | - | Summary | `1 - sum(node_memory_MemAvailable_bytes) / sum(node_memory_MemTotal_bytes)` | +| Catalog | Expression | +| --- | --- | +| Detail | `1 - sum(node_memory_MemAvailable_bytes) by (instance) / sum(node_memory_MemTotal_bytes) by (instance)` | +| Summary | `1 - sum(node_memory_MemAvailable_bytes) / sum(node_memory_MemTotal_bytes)` | -- **Disk Utilization** +### Cluster Disk Utilization - | Catalog | Expression | - | --- | --- | - | Detail | `(sum(node_filesystem_size_bytes{device!="rootfs"}) by (instance) - sum(node_filesystem_free_bytes{device!="rootfs"}) by (instance)) / sum(node_filesystem_size_bytes{device!="rootfs"}) by (instance)` | - | Summary | `(sum(node_filesystem_size_bytes{device!="rootfs"}) - sum(node_filesystem_free_bytes{device!="rootfs"})) / sum(node_filesystem_size_bytes{device!="rootfs"})` | +| Catalog | Expression | +| --- | --- | +| Detail | `(sum(node_filesystem_size_bytes{device!="rootfs"}) by (instance) - sum(node_filesystem_free_bytes{device!="rootfs"}) by (instance)) / sum(node_filesystem_size_bytes{device!="rootfs"}) by (instance)` | +| Summary | `(sum(node_filesystem_size_bytes{device!="rootfs"}) - sum(node_filesystem_free_bytes{device!="rootfs"})) / sum(node_filesystem_size_bytes{device!="rootfs"})` | -- **Disk I/O** +### Cluster Disk I/O - | Catalog | Expression | - | --- | --- | - | Detail |
read`sum(rate(node_disk_read_bytes_total[5m])) by (instance)`
written`sum(rate(node_disk_written_bytes_total[5m])) by (instance)`
| - | Summary |
read`sum(rate(node_disk_read_bytes_total[5m]))`
written`sum(rate(node_disk_written_bytes_total[5m]))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
read`sum(rate(node_disk_read_bytes_total[5m])) by (instance)`
written`sum(rate(node_disk_written_bytes_total[5m])) by (instance)`
| +| Summary |
read`sum(rate(node_disk_read_bytes_total[5m]))`
written`sum(rate(node_disk_written_bytes_total[5m]))`
| -- **Network Packets** +### Cluster Network Packets - | Catalog | Expression | - | --- | --- | - | Detail |
receive-dropped`sum(rate(node_network_receive_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
receive-errs`sum(rate(node_network_receive_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
receive-packets`sum(rate(node_network_receive_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
transmit-dropped`sum(rate(node_network_transmit_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
transmit-errs`sum(rate(node_network_transmit_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
transmit-packets`sum(rate(node_network_transmit_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
| - | Summary |
receive-dropped`sum(rate(node_network_receive_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
receive-errs`sum(rate(node_network_receive_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
receive-packets`sum(rate(node_network_receive_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
transmit-dropped`sum(rate(node_network_transmit_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
transmit-errs`sum(rate(node_network_transmit_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
transmit-packets`sum(rate(node_network_transmit_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
receive-dropped`sum(rate(node_network_receive_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
receive-errs`sum(rate(node_network_receive_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
receive-packets`sum(rate(node_network_receive_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
transmit-dropped`sum(rate(node_network_transmit_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
transmit-errs`sum(rate(node_network_transmit_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
transmit-packets`sum(rate(node_network_transmit_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
| +| Summary |
receive-dropped`sum(rate(node_network_receive_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
receive-errs`sum(rate(node_network_receive_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
receive-packets`sum(rate(node_network_receive_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
transmit-dropped`sum(rate(node_network_transmit_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
transmit-errs`sum(rate(node_network_transmit_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
transmit-packets`sum(rate(node_network_transmit_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
| -- **Network I/O** +### Cluster Network I/O - | Catalog | Expression | - | --- | --- | - | Detail |
receive`sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
transmit`sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
| - | Summary |
receive`sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
transmit`sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
receive`sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
transmit`sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
| +| Summary |
receive`sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
transmit`sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
| -### Node Metrics +# Node Metrics -- **CPU Utilization** +### Node CPU Utilization - | Catalog | Expression | - | --- | --- | - | Detail | `avg(irate(node_cpu_seconds_total{mode!="idle", instance=~"$instance"}[5m])) by (mode)` | - | Summary | `1 - (avg(irate(node_cpu_seconds_total{mode="idle", instance=~"$instance"}[5m])))` | +| Catalog | Expression | +| --- | --- | +| Detail | `avg(irate(node_cpu_seconds_total{mode!="idle", instance=~"$instance"}[5m])) by (mode)` | +| Summary | `1 - (avg(irate(node_cpu_seconds_total{mode="idle", instance=~"$instance"}[5m])))` | -- **Load Average** +### Node Load Average - | Catalog | Expression | - | --- | --- | - | Detail |
load1`sum(node_load1{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
load5`sum(node_load5{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
load15`sum(node_load15{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
| - | Summary |
load1`sum(node_load1{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
load5`sum(node_load5{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
load15`sum(node_load15{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
| +| Catalog | Expression | +| --- | --- | +| Detail |
load1`sum(node_load1{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
load5`sum(node_load5{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
load15`sum(node_load15{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
| +| Summary |
load1`sum(node_load1{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
load5`sum(node_load5{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
load15`sum(node_load15{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
| -- **Memory Utilization** +### Node Memory Utilization - | Catalog | Expression | - | --- | --- | - | Detail | `1 - sum(node_memory_MemAvailable_bytes{instance=~"$instance"}) / sum(node_memory_MemTotal_bytes{instance=~"$instance"})` | - | Summary | `1 - sum(node_memory_MemAvailable_bytes{instance=~"$instance"}) / sum(node_memory_MemTotal_bytes{instance=~"$instance"}) ` | +| Catalog | Expression | +| --- | --- | +| Detail | `1 - sum(node_memory_MemAvailable_bytes{instance=~"$instance"}) / sum(node_memory_MemTotal_bytes{instance=~"$instance"})` | +| Summary | `1 - sum(node_memory_MemAvailable_bytes{instance=~"$instance"}) / sum(node_memory_MemTotal_bytes{instance=~"$instance"}) ` | -- **Disk Utilization** +### Node Disk Utilization - | Catalog | Expression | - | --- | --- | - | Detail | `(sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"}) by (device) - sum(node_filesystem_free_bytes{device!="rootfs",instance=~"$instance"}) by (device)) / sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"}) by (device)` | - | Summary | `(sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"}) - sum(node_filesystem_free_bytes{device!="rootfs",instance=~"$instance"})) / sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"})` | +| Catalog | Expression | +| --- | --- | +| Detail | `(sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"}) by (device) - sum(node_filesystem_free_bytes{device!="rootfs",instance=~"$instance"}) by (device)) / sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"}) by (device)` | +| Summary | `(sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"}) - sum(node_filesystem_free_bytes{device!="rootfs",instance=~"$instance"})) / sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"})` | -- **Disk I/O** +### Node Disk I/O - | Catalog | Expression | - | --- | --- | - | Detail |
read`sum(rate(node_disk_read_bytes_total{instance=~"$instance"}[5m]))`
written`sum(rate(node_disk_written_bytes_total{instance=~"$instance"}[5m]))`
| - | Summary |
read`sum(rate(node_disk_read_bytes_total{instance=~"$instance"}[5m]))`
written`sum(rate(node_disk_written_bytes_total{instance=~"$instance"}[5m]))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
read`sum(rate(node_disk_read_bytes_total{instance=~"$instance"}[5m]))`
written`sum(rate(node_disk_written_bytes_total{instance=~"$instance"}[5m]))`
| +| Summary |
read`sum(rate(node_disk_read_bytes_total{instance=~"$instance"}[5m]))`
written`sum(rate(node_disk_written_bytes_total{instance=~"$instance"}[5m]))`
| -- **Network Packets** +### Node Network Packets - | Catalog | Expression | - | --- | --- | - | Detail |
receive-dropped`sum(rate(node_network_receive_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
receive-errs`sum(rate(node_network_receive_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
receive-packets`sum(rate(node_network_receive_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
transmit-dropped`sum(rate(node_network_transmit_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
transmit-errs`sum(rate(node_network_transmit_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
transmit-packets`sum(rate(node_network_transmit_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
| - | Summary |
receive-dropped`sum(rate(node_network_receive_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
receive-errs`sum(rate(node_network_receive_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
receive-packets`sum(rate(node_network_receive_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
transmit-dropped`sum(rate(node_network_transmit_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
transmit-errs`sum(rate(node_network_transmit_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
transmit-packets`sum(rate(node_network_transmit_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
receive-dropped`sum(rate(node_network_receive_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
receive-errs`sum(rate(node_network_receive_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
receive-packets`sum(rate(node_network_receive_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
transmit-dropped`sum(rate(node_network_transmit_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
transmit-errs`sum(rate(node_network_transmit_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
transmit-packets`sum(rate(node_network_transmit_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
| +| Summary |
receive-dropped`sum(rate(node_network_receive_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
receive-errs`sum(rate(node_network_receive_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
receive-packets`sum(rate(node_network_receive_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
transmit-dropped`sum(rate(node_network_transmit_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
transmit-errs`sum(rate(node_network_transmit_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
transmit-packets`sum(rate(node_network_transmit_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
| -- **Network I/O** +### Node Network I/O - | Catalog | Expression | - | --- | --- | - | Detail |
receive`sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
transmit`sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
| - | Summary |
receive`sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
transmit`sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
receive`sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
transmit`sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
| +| Summary |
receive`sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
transmit`sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
| -## Etcd Metrics +# Etcd Metrics -- **Etcd has a leader** +### Etcd Has a Leader - `max(etcd_server_has_leader)` +`max(etcd_server_has_leader)` -- **Number of leader changes** +### Number of Times the Leader Changes - `max(etcd_server_leader_changes_seen_total)` +`max(etcd_server_leader_changes_seen_total)` -- **Number of failed proposals** +### Number of Failed Proposals - `sum(etcd_server_proposals_failed_total)` +`sum(etcd_server_proposals_failed_total)` -- **GRPC Client Traffic** +### GRPC Client Traffic - | Catalog | Expression | - | --- | --- | - | Detail |
in`sum(rate(etcd_network_client_grpc_received_bytes_total[5m])) by (instance)`
out`sum(rate(etcd_network_client_grpc_sent_bytes_total[5m])) by (instance)`
| - | Summary |
in`sum(rate(etcd_network_client_grpc_received_bytes_total[5m]))`
out`sum(rate(etcd_network_client_grpc_sent_bytes_total[5m]))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
in`sum(rate(etcd_network_client_grpc_received_bytes_total[5m])) by (instance)`
out`sum(rate(etcd_network_client_grpc_sent_bytes_total[5m])) by (instance)`
| +| Summary |
in`sum(rate(etcd_network_client_grpc_received_bytes_total[5m]))`
out`sum(rate(etcd_network_client_grpc_sent_bytes_total[5m]))`
| -- **Peer Traffic** +### Peer Traffic - | Catalog | Expression | - | --- | --- | - | Detail |
in`sum(rate(etcd_network_peer_received_bytes_total[5m])) by (instance)`
out`sum(rate(etcd_network_peer_sent_bytes_total[5m])) by (instance)`
| - | Summary |
in`sum(rate(etcd_network_peer_received_bytes_total[5m]))`
out`sum(rate(etcd_network_peer_sent_bytes_total[5m]))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
in`sum(rate(etcd_network_peer_received_bytes_total[5m])) by (instance)`
out`sum(rate(etcd_network_peer_sent_bytes_total[5m])) by (instance)`
| +| Summary |
in`sum(rate(etcd_network_peer_received_bytes_total[5m]))`
out`sum(rate(etcd_network_peer_sent_bytes_total[5m]))`
| -- **DB Size** +### DB Size - | Catalog | Expression | - | --- | --- | - | Detail | `sum(etcd_debugging_mvcc_db_total_size_in_bytes) by (instance)` | - | Summary | `sum(etcd_debugging_mvcc_db_total_size_in_bytes)` | +| Catalog | Expression | +| --- | --- | +| Detail | `sum(etcd_debugging_mvcc_db_total_size_in_bytes) by (instance)` | +| Summary | `sum(etcd_debugging_mvcc_db_total_size_in_bytes)` | -- **Active Streams** +### Active Streams - | Catalog | Expression | - | --- | --- | - | Detail |
lease-watch`sum(grpc_server_started_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) by (instance) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) by (instance)`
watch`sum(grpc_server_started_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) by (instance) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) by (instance)`
| - | Summary |
lease-watch`sum(grpc_server_started_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"})`
watch`sum(grpc_server_started_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"})`
| +| Catalog | Expression | +| --- | --- | +| Detail |
lease-watch`sum(grpc_server_started_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) by (instance) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) by (instance)`
watch`sum(grpc_server_started_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) by (instance) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) by (instance)`
| +| Summary |
lease-watch`sum(grpc_server_started_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"})`
watch`sum(grpc_server_started_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"})`
| -- **Raft Proposals** +### Raft Proposals - | Catalog | Expression | - | --- | --- | - | Detail |
applied`sum(increase(etcd_server_proposals_applied_total[5m])) by (instance)`
committed`sum(increase(etcd_server_proposals_committed_total[5m])) by (instance)`
pending`sum(increase(etcd_server_proposals_pending[5m])) by (instance)`
failed`sum(increase(etcd_server_proposals_failed_total[5m])) by (instance)`
| - | Summary |
applied`sum(increase(etcd_server_proposals_applied_total[5m]))`
committed`sum(increase(etcd_server_proposals_committed_total[5m]))`
pending`sum(increase(etcd_server_proposals_pending[5m]))`
failed`sum(increase(etcd_server_proposals_failed_total[5m]))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
applied`sum(increase(etcd_server_proposals_applied_total[5m])) by (instance)`
committed`sum(increase(etcd_server_proposals_committed_total[5m])) by (instance)`
pending`sum(increase(etcd_server_proposals_pending[5m])) by (instance)`
failed`sum(increase(etcd_server_proposals_failed_total[5m])) by (instance)`
| +| Summary |
applied`sum(increase(etcd_server_proposals_applied_total[5m]))`
committed`sum(increase(etcd_server_proposals_committed_total[5m]))`
pending`sum(increase(etcd_server_proposals_pending[5m]))`
failed`sum(increase(etcd_server_proposals_failed_total[5m]))`
| -- **RPC Rate** +### RPC Rate - | Catalog | Expression | - | --- | --- | - | Detail |
total`sum(rate(grpc_server_started_total{grpc_type="unary"}[5m])) by (instance)`
fail`sum(rate(grpc_server_handled_total{grpc_type="unary",grpc_code!="OK"}[5m])) by (instance)`
| - | Summary |
total`sum(rate(grpc_server_started_total{grpc_type="unary"}[5m]))`
fail`sum(rate(grpc_server_handled_total{grpc_type="unary",grpc_code!="OK"}[5m]))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
total`sum(rate(grpc_server_started_total{grpc_type="unary"}[5m])) by (instance)`
fail`sum(rate(grpc_server_handled_total{grpc_type="unary",grpc_code!="OK"}[5m])) by (instance)`
| +| Summary |
total`sum(rate(grpc_server_started_total{grpc_type="unary"}[5m]))`
fail`sum(rate(grpc_server_handled_total{grpc_type="unary",grpc_code!="OK"}[5m]))`
| -- **Disk Operations** +### Disk Operations - | Catalog | Expression | - | --- | --- | - | Detail |
commit-called-by-backend`sum(rate(etcd_disk_backend_commit_duration_seconds_sum[1m])) by (instance)`
fsync-called-by-wal`sum(rate(etcd_disk_wal_fsync_duration_seconds_sum[1m])) by (instance)`
| - | Summary |
commit-called-by-backend`sum(rate(etcd_disk_backend_commit_duration_seconds_sum[1m]))`
fsync-called-by-wal`sum(rate(etcd_disk_wal_fsync_duration_seconds_sum[1m]))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
commit-called-by-backend`sum(rate(etcd_disk_backend_commit_duration_seconds_sum[1m])) by (instance)`
fsync-called-by-wal`sum(rate(etcd_disk_wal_fsync_duration_seconds_sum[1m])) by (instance)`
| +| Summary |
commit-called-by-backend`sum(rate(etcd_disk_backend_commit_duration_seconds_sum[1m]))`
fsync-called-by-wal`sum(rate(etcd_disk_wal_fsync_duration_seconds_sum[1m]))`
| -- **Disk Sync Duration** +### Disk Sync Duration - | Catalog | Expression | - | --- | --- | - | Detail |
wal`histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (instance, le))`
db`histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])) by (instance, le))`
| - | Summary |
wal`sum(histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (instance, le)))`
db`sum(histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])) by (instance, le)))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
wal`histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (instance, le))`
db`histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])) by (instance, le))`
| +| Summary |
wal`sum(histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (instance, le)))`
db`sum(histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])) by (instance, le)))`
| -## Kubernetes Components Metrics +# Kubernetes Components Metrics -- **API Server Request Latency** +### API Server Request Latency - | Catalog | Expression | - | --- | --- | - | Detail | `avg(apiserver_request_latencies_sum / apiserver_request_latencies_count) by (instance, verb) /1e+06` | - | Summary | `avg(apiserver_request_latencies_sum / apiserver_request_latencies_count) by (instance) /1e+06` | +| Catalog | Expression | +| --- | --- | +| Detail | `avg(apiserver_request_latencies_sum / apiserver_request_latencies_count) by (instance, verb) /1e+06` | +| Summary | `avg(apiserver_request_latencies_sum / apiserver_request_latencies_count) by (instance) /1e+06` | -- **API Server Request Rate** +### API Server Request Rate - | Catalog | Expression | - | --- | --- | - | Detail | `sum(rate(apiserver_request_count[5m])) by (instance, code)` | - | Summary | `sum(rate(apiserver_request_count[5m])) by (instance)` | +| Catalog | Expression | +| --- | --- | +| Detail | `sum(rate(apiserver_request_count[5m])) by (instance, code)` | +| Summary | `sum(rate(apiserver_request_count[5m])) by (instance)` | -- **Scheduling Failed Pods** +### Scheduling Failed Pods - | Catalog | Expression | - | --- | --- | - | Detail | `sum(kube_pod_status_scheduled{condition="false"})` | - | Summary | `sum(kube_pod_status_scheduled{condition="false"})` | +| Catalog | Expression | +| --- | --- | +| Detail | `sum(kube_pod_status_scheduled{condition="false"})` | +| Summary | `sum(kube_pod_status_scheduled{condition="false"})` | -- **Controller Manager Queue Depth** +### Controller Manager Queue Depth - | Catalog | Expression | - | --- | --- | - | Detail |
volumes`sum(volumes_depth) by (instance)`
deployment`sum(deployment_depth) by (instance)`
replicaset`sum(replicaset_depth) by (instance)`
service`sum(service_depth) by (instance)`
serviceaccount`sum(serviceaccount_depth) by (instance)`
endpoint`sum(endpoint_depth) by (instance)`
daemonset`sum(daemonset_depth) by (instance)`
statefulset`sum(statefulset_depth) by (instance)`
replicationmanager`sum(replicationmanager_depth) by (instance)`
| - | Summary |
volumes`sum(volumes_depth)`
deployment`sum(deployment_depth)`
replicaset`sum(replicaset_depth)`
service`sum(service_depth)`
serviceaccount`sum(serviceaccount_depth)`
endpoint`sum(endpoint_depth)`
daemonset`sum(daemonset_depth)`
statefulset`sum(statefulset_depth)`
replicationmanager`sum(replicationmanager_depth)`
| +| Catalog | Expression | +| --- | --- | +| Detail |
volumes`sum(volumes_depth) by (instance)`
deployment`sum(deployment_depth) by (instance)`
replicaset`sum(replicaset_depth) by (instance)`
service`sum(service_depth) by (instance)`
serviceaccount`sum(serviceaccount_depth) by (instance)`
endpoint`sum(endpoint_depth) by (instance)`
daemonset`sum(daemonset_depth) by (instance)`
statefulset`sum(statefulset_depth) by (instance)`
replicationmanager`sum(replicationmanager_depth) by (instance)`
| +| Summary |
volumes`sum(volumes_depth)`
deployment`sum(deployment_depth)`
replicaset`sum(replicaset_depth)`
service`sum(service_depth)`
serviceaccount`sum(serviceaccount_depth)`
endpoint`sum(endpoint_depth)`
daemonset`sum(daemonset_depth)`
statefulset`sum(statefulset_depth)`
replicationmanager`sum(replicationmanager_depth)`
| -- **Scheduler E2E Scheduling Latency** +### Scheduler E2E Scheduling Latency - | Catalog | Expression | - | --- | --- | - | Detail | `histogram_quantile(0.99, sum(scheduler_e2e_scheduling_latency_microseconds_bucket) by (le, instance)) / 1e+06` | - | Summary | `sum(histogram_quantile(0.99, sum(scheduler_e2e_scheduling_latency_microseconds_bucket) by (le, instance)) / 1e+06)` | +| Catalog | Expression | +| --- | --- | +| Detail | `histogram_quantile(0.99, sum(scheduler_e2e_scheduling_latency_microseconds_bucket) by (le, instance)) / 1e+06` | +| Summary | `sum(histogram_quantile(0.99, sum(scheduler_e2e_scheduling_latency_microseconds_bucket) by (le, instance)) / 1e+06)` | -- **Scheduler Preemption Attempts** +### Scheduler Preemption Attempts - | Catalog | Expression | - | --- | --- | - | Detail | `sum(rate(scheduler_total_preemption_attempts[5m])) by (instance)` | - | Summary | `sum(rate(scheduler_total_preemption_attempts[5m]))` | +| Catalog | Expression | +| --- | --- | +| Detail | `sum(rate(scheduler_total_preemption_attempts[5m])) by (instance)` | +| Summary | `sum(rate(scheduler_total_preemption_attempts[5m]))` | -- **Ingress Controller Connections** +### Ingress Controller Connections - | Catalog | Expression | - | --- | --- | - | Detail |
reading`sum(nginx_ingress_controller_nginx_process_connections{state="reading"}) by (instance)`
waiting`sum(nginx_ingress_controller_nginx_process_connections{state="waiting"}) by (instance)`
writing`sum(nginx_ingress_controller_nginx_process_connections{state="writing"}) by (instance)`
accepted`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="accepted"}[5m]))) by (instance)`
active`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="active"}[5m]))) by (instance)`
handled`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="handled"}[5m]))) by (instance)`
| - | Summary |
reading`sum(nginx_ingress_controller_nginx_process_connections{state="reading"})`
waiting`sum(nginx_ingress_controller_nginx_process_connections{state="waiting"})`
writing`sum(nginx_ingress_controller_nginx_process_connections{state="writing"})`
accepted`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="accepted"}[5m])))`
active`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="active"}[5m])))`
handled`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="handled"}[5m])))`
| +| Catalog | Expression | +| --- | --- | +| Detail |
reading`sum(nginx_ingress_controller_nginx_process_connections{state="reading"}) by (instance)`
waiting`sum(nginx_ingress_controller_nginx_process_connections{state="waiting"}) by (instance)`
writing`sum(nginx_ingress_controller_nginx_process_connections{state="writing"}) by (instance)`
accepted`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="accepted"}[5m]))) by (instance)`
active`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="active"}[5m]))) by (instance)`
handled`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="handled"}[5m]))) by (instance)`
| +| Summary |
reading`sum(nginx_ingress_controller_nginx_process_connections{state="reading"})`
waiting`sum(nginx_ingress_controller_nginx_process_connections{state="waiting"})`
writing`sum(nginx_ingress_controller_nginx_process_connections{state="writing"})`
accepted`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="accepted"}[5m])))`
active`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="active"}[5m])))`
handled`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="handled"}[5m])))`
| -- **Ingress Controller Request Process Time** +### Ingress Controller Request Process Time - | Catalog | Expression | - | --- | --- | - | Detail | `topk(10, histogram_quantile(0.95,sum by (le, host, path)(rate(nginx_ingress_controller_request_duration_seconds_bucket{host!="_"}[5m]))))` | - | Summary | `topk(10, histogram_quantile(0.95,sum by (le, host)(rate(nginx_ingress_controller_request_duration_seconds_bucket{host!="_"}[5m]))))` | +| Catalog | Expression | +| --- | --- | +| Detail | `topk(10, histogram_quantile(0.95,sum by (le, host, path)(rate(nginx_ingress_controller_request_duration_seconds_bucket{host!="_"}[5m]))))` | +| Summary | `topk(10, histogram_quantile(0.95,sum by (le, host)(rate(nginx_ingress_controller_request_duration_seconds_bucket{host!="_"}[5m]))))` | -## Rancher Logging Metrics +# Rancher Logging Metrics -- **Fluentd Buffer Queue Rate** - | Catalog | Expression | - | --- | --- | - | Detail | `sum(rate(fluentd_output_status_buffer_queue_length[5m])) by (instance)` | - | Summary | `sum(rate(fluentd_output_status_buffer_queue_length[5m]))` | +### Fluentd Buffer Queue Rate -- **Fluentd Input Rate** +| Catalog | Expression | +| --- | --- | +| Detail | `sum(rate(fluentd_output_status_buffer_queue_length[5m])) by (instance)` | +| Summary | `sum(rate(fluentd_output_status_buffer_queue_length[5m]))` | - | Catalog | Expression | - | --- | --- | - | Detail | `sum(rate(fluentd_input_status_num_records_total[5m])) by (instance)` | - | Summary | `sum(rate(fluentd_input_status_num_records_total[5m]))` | +### Fluentd Input Rate -- **Fluentd Output Errors Rate** +| Catalog | Expression | +| --- | --- | +| Detail | `sum(rate(fluentd_input_status_num_records_total[5m])) by (instance)` | +| Summary | `sum(rate(fluentd_input_status_num_records_total[5m]))` | - | Catalog | Expression | - | --- | --- | - | Detail | `sum(rate(fluentd_output_status_num_errors[5m])) by (type)` | - | Summary | `sum(rate(fluentd_output_status_num_errors[5m]))` | +### Fluentd Output Errors Rate -- **Fluentd Output Rate** +| Catalog | Expression | +| --- | --- | +| Detail | `sum(rate(fluentd_output_status_num_errors[5m])) by (type)` | +| Summary | `sum(rate(fluentd_output_status_num_errors[5m]))` | - | Catalog | Expression | - | --- | --- | - | Detail | `sum(rate(fluentd_output_status_num_records_total[5m])) by (instance)` | - | Summary | `sum(rate(fluentd_output_status_num_records_total[5m]))` | +### Fluentd Output Rate -## Workload Metrics +| Catalog | Expression | +| --- | --- | +| Detail | `sum(rate(fluentd_output_status_num_records_total[5m])) by (instance)` | +| Summary | `sum(rate(fluentd_output_status_num_records_total[5m]))` | -- **CPU Utilization** +# Workload Metrics - | Catalog | Expression | - | --- | --- | - | Detail |
cfs throttled seconds`sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
user seconds`sum(rate(container_cpu_user_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
system seconds`sum(rate(container_cpu_system_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
usage seconds`sum(rate(container_cpu_usage_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
| - | Summary |
cfs throttled seconds`sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
user seconds`sum(rate(container_cpu_user_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
system seconds`sum(rate(container_cpu_system_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
usage seconds`sum(rate(container_cpu_usage_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
| +### Workload CPU Utilization -- **Memory Utilization** +| Catalog | Expression | +| --- | --- | +| Detail |
cfs throttled seconds`sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
user seconds`sum(rate(container_cpu_user_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
system seconds`sum(rate(container_cpu_system_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
usage seconds`sum(rate(container_cpu_usage_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
| +| Summary |
cfs throttled seconds`sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
user seconds`sum(rate(container_cpu_user_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
system seconds`sum(rate(container_cpu_system_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
usage seconds`sum(rate(container_cpu_usage_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
| - | Catalog | Expression | - | --- | --- | - | Detail | `sum(container_memory_working_set_bytes{namespace="$namespace",pod_name=~"$podName", container_name!=""}) by (pod_name)` | - | Summary | `sum(container_memory_working_set_bytes{namespace="$namespace",pod_name=~"$podName", container_name!=""})` | +### Workload Memory Utilization -- **Network Packets** +| Catalog | Expression | +| --- | --- | +| Detail | `sum(container_memory_working_set_bytes{namespace="$namespace",pod_name=~"$podName", container_name!=""}) by (pod_name)` | +| Summary | `sum(container_memory_working_set_bytes{namespace="$namespace",pod_name=~"$podName", container_name!=""})` | - | Catalog | Expression | - | --- | --- | - | Detail |
receive-packets`sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
receive-dropped`sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
receive-errors`sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
transmit-packets`sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
transmit-dropped`sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
transmit-errors`sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
| - | Summary |
receive-packets`sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
receive-dropped`sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
receive-errors`sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
transmit-packets`sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
transmit-dropped`sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
transmit-errors`sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
| +### Workload Network Packets -- **Network I/O** +| Catalog | Expression | +| --- | --- | +| Detail |
receive-packets`sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
receive-dropped`sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
receive-errors`sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
transmit-packets`sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
transmit-dropped`sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
transmit-errors`sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
| +| Summary |
receive-packets`sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
receive-dropped`sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
receive-errors`sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
transmit-packets`sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
transmit-dropped`sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
transmit-errors`sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
| - | Catalog | Expression | - | --- | --- | - | Detail |
receive`sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
transmit`sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
| - | Summary |
receive`sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
transmit`sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
| +### Workload Network I/O -- **Disk I/O** +| Catalog | Expression | +| --- | --- | +| Detail |
receive`sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
transmit`sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
| +| Summary |
receive`sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
transmit`sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
| - | Catalog | Expression | - | --- | --- | - | Detail |
read`sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
write`sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
| - | Summary |
read`sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
write`sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
| +### Workload Disk I/O -### Pod Metrics +| Catalog | Expression | +| --- | --- | +| Detail |
read`sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
write`sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
| +| Summary |
read`sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
write`sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
| -- **CPU Utilization** +# Pod Metrics - | Catalog | Expression | - | --- | --- | - | Detail |
cfs throttled seconds`sum(rate(container_cpu_cfs_throttled_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`
usage seconds`sum(rate(container_cpu_usage_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`
system seconds`sum(rate(container_cpu_system_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`
user seconds`sum(rate(container_cpu_user_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`
| - | Summary |
cfs throttled seconds`sum(rate(container_cpu_cfs_throttled_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`
usage seconds`sum(rate(container_cpu_usage_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`
system seconds`sum(rate(container_cpu_system_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`
user seconds`sum(rate(container_cpu_user_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`
| +### Pod CPU Utilization -- **Memory Utilization** +| Catalog | Expression | +| --- | --- | +| Detail |
cfs throttled seconds`sum(rate(container_cpu_cfs_throttled_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`
usage seconds`sum(rate(container_cpu_usage_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`
system seconds`sum(rate(container_cpu_system_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`
user seconds`sum(rate(container_cpu_user_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`
| +| Summary |
cfs throttled seconds`sum(rate(container_cpu_cfs_throttled_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`
usage seconds`sum(rate(container_cpu_usage_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`
system seconds`sum(rate(container_cpu_system_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`
user seconds`sum(rate(container_cpu_user_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`
| - | Catalog | Expression | - | --- | --- | - | Detail | `sum(container_memory_working_set_bytes{container_name!="POD",namespace="$namespace",pod_name="$podName",container_name!=""}) by (container_name)` | - | Summary | `sum(container_memory_working_set_bytes{container_name!="POD",namespace="$namespace",pod_name="$podName",container_name!=""})` | +### Pod Memory Utilization -- **Network Packets** +| Catalog | Expression | +| --- | --- | +| Detail | `sum(container_memory_working_set_bytes{container_name!="POD",namespace="$namespace",pod_name="$podName",container_name!=""}) by (container_name)` | +| Summary | `sum(container_memory_working_set_bytes{container_name!="POD",namespace="$namespace",pod_name="$podName",container_name!=""})` | - | Catalog | Expression | - | --- | --- | - | Detail |
receive-packets`sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
receive-dropped`sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
receive-errors`sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-packets`sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-dropped`sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-errors`sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| - | Summary |
receive-packets`sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
receive-dropped`sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
receive-errors`sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-packets`sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-dropped`sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-errors`sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| +### Pod Network Packets -- **Network I/O** +| Catalog | Expression | +| --- | --- | +| Detail |
receive-packets`sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
receive-dropped`sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
receive-errors`sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-packets`sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-dropped`sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-errors`sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| +| Summary |
receive-packets`sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
receive-dropped`sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
receive-errors`sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-packets`sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-dropped`sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-errors`sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| - | Catalog | Expression | - | --- | --- | - | Detail |
receive`sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit`sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| - | Summary |
receive`sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit`sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| +### Pod Network I/O -- **Disk I/O** +| Catalog | Expression | +| --- | --- | +| Detail |
receive`sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit`sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| +| Summary |
receive`sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit`sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| - | Catalog | Expression | - | --- | --- | - | Detail |
read`sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m])) by (container_name)`
write`sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m])) by (container_name)`
| - | Summary |
read`sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
write`sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| +### Pod Disk I/O -### Container Metrics +| Catalog | Expression | +| --- | --- | +| Detail |
read`sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m])) by (container_name)`
write`sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m])) by (container_name)`
| +| Summary |
read`sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
write`sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| -- **CPU Utilization** +# Container Metrics - | Catalog | Expression | - | --- | --- | - | cfs throttled seconds | `sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | - | usage seconds | `sum(rate(container_cpu_usage_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | - | system seconds | `sum(rate(container_cpu_system_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | - | user seconds | `sum(rate(container_cpu_user_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | +### Container CPU Utilization -- **Memory Utilization** +| Catalog | Expression | +| --- | --- | +| cfs throttled seconds | `sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | +| usage seconds | `sum(rate(container_cpu_usage_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | +| system seconds | `sum(rate(container_cpu_system_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | +| user seconds | `sum(rate(container_cpu_user_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | - `sum(container_memory_working_set_bytes{namespace="$namespace",pod_name="$podName",container_name="$containerName"})` +### Container Memory Utilization -- **Disk IO** +`sum(container_memory_working_set_bytes{namespace="$namespace",pod_name="$podName",container_name="$containerName"})` - | Catalog | Expression | - | --- | --- | - | read | `sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | - | write | `sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | +### Container Disk I/O + +| Catalog | Expression | +| --- | --- | +| read | `sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | +| write | `sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | diff --git a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/_index.md index 0f667bcd1a6..c5cadbc83aa 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/_index.md @@ -6,7 +6,7 @@ weight: 1 _Available as of v2.2.0_ -While configuring monitoring at either the [cluster level]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [project level]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), there are multiple options that can be configured. +While configuring monitoring at either the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [project level]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), there are multiple options that can be configured. 
Option | Description -------|------------- @@ -20,7 +20,7 @@ Prometheus [CPU Reservation](https://kubernetes.io/docs/concepts/configuration/m Prometheus [Memory Limit](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-memory) | Memory resource limit for the Prometheus pod. Prometheus [Memory Reservation](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-memory) | Memory resource requests for the Prometheus pod. Selector | Ability to select the nodes in which Prometheus and Grafana pods are deployed to. To use this option, the nodes must have labels. -Advanced Options | Since monitoring is an [application](https://github.com/rancher/system-charts/tree/dev/charts/rancher-monitoring) from the [Rancher catalog]({{< baseurl >}}/rancher/v2.x/en/catalog/), it can be [configured like other catalog application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/#configuration-options). _Warning: Any modification to the application without understanding the entire application can lead to catastrophic errors._ +Advanced Options | Since monitoring is an [application](https://github.com/rancher/system-charts/tree/dev/charts/rancher-monitoring) from the [Rancher catalog]({{}}/rancher/v2.x/en/catalog/), it can be [configured like other catalog application]({{}}/rancher/v2.x/en/catalog/apps/#configuration-options). _Warning: Any modification to the application without understanding the entire application can lead to catastrophic errors._ ## Node Exporter @@ -32,8 +32,8 @@ When configuring Prometheus and enabling the node exporter, enter a host port in ## Persistent Storage ->**Prerequisite:** Configure one or more [storage class]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/#adding-storage-classes) to use as [persistent storage]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/) for your Prometheus or Grafana pod. +>**Prerequisite:** Configure one or more [storage class]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/#adding-storage-classes) to use as [persistent storage]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/) for your Prometheus or Grafana pod. By default, when you enable Prometheus for either a cluster or project, all monitoring data that Prometheus collects is stored on its own pod. With local storage, if the Prometheus or Grafana pods fail, all the data is lost. Rancher recommends configuring an external persistent storage to the cluster. With the external persistent storage, if the Prometheus or Grafana pods fail, the new pods can recover using data from the persistent storage. -When enabling persistent storage for Prometheus or Grafana, specify the size of the persistent volume and select the [storage class]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/#storage-classes). +When enabling persistent storage for Prometheus or Grafana, specify the size of the persistent volume and select the [storage class]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/#storage-classes). 
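For reference, the storage class selected here is an ordinary Kubernetes `StorageClass` object. The following sketch is only an illustration — the name and provisioner are placeholders, and you should substitute a provisioner that matches your infrastructure:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: monitoring-storage         # placeholder name; select this name when enabling persistent storage
provisioner: kubernetes.io/aws-ebs # placeholder; use the provisioner for your cloud or storage backend
parameters:
  type: gp2
reclaimPolicy: Retain              # the underlying volume is kept even if the claim is deleted
allowVolumeExpansion: true
```

Once a storage class like this exists in the cluster, it can be selected when enabling persistent storage for the Prometheus or Grafana pods, along with the size of the persistent volume.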
diff --git a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/viewing-metrics/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/monitoring/viewing-metrics/_index.md index 28ccf295c9b..a1dd3946219 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/viewing-metrics/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/monitoring/viewing-metrics/_index.md @@ -5,11 +5,11 @@ weight: 2 _Available as of v2.2.0_ -After you've enabled monitoring at either the [cluster level]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [project level]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), you will want to be start viewing the data being collected. There are multiple ways to view this data. +After you've enabled monitoring at either the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [project level]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), you will want to start viewing the data being collected. There are multiple ways to view this data. ## Rancher Dashboard ->**Note:** This is only available if you've enabled monitoring at the [cluster level]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring). Project specific analytics must be viewed using the project's Grafana instance. +>**Note:** This is only available if you've enabled monitoring at the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring). Project-specific analytics must be viewed using the project's Grafana instance. Rancher's dashboards are available at multiple locations: @@ -33,13 +33,13 @@ When analyzing these metrics, don't be concerned about any single standalone met ## Grafana -If you've enabled monitoring at either the [cluster level]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [project level]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), Rancher automatically creates a link to Grafana instance. Use this link to view monitoring data. +If you've enabled monitoring at either the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [project level]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), Rancher automatically creates a link to a Grafana instance. Use this link to view monitoring data. Grafana allows you to query, visualize, alert, and ultimately, understand your cluster and workload data. For more information on Grafana and its capabilities, visit the [Grafana website](https://grafana.com/grafana). ### Authentication -Rancher determines which users can access the new Grafana instance, as well as the objects they can view within it, by validating them against the user's [cluster or project roles]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/). In other words, a user's access in Grafana mirrors their access in Rancher. +Rancher determines which users can access the new Grafana instance, as well as the objects they can view within it, by validating them against the user's [cluster or project roles]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/). In other words, a user's access in Grafana mirrors their access in Rancher.
When you go to the Grafana instance, you will be logged in with the username `admin` and the password `admin`. If you log out and log in again, you will be prompted to change your password. You will only have access to the URL of the Grafana instance if you have access to view the corresponding metrics in Rancher. So for example, if your Rancher permissions are scoped to the project level, you won't be able to see the Grafana instance for cluster-level metrics. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/notifiers/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/notifiers/_index.md index 59a82734bd9..c5860f0c33e 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/notifiers/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/notifiers/_index.md @@ -5,8 +5,6 @@ weight: 1 Notifiers are services that inform you of alert events. You can configure notifiers to send alert notifications to staff best suited to take corrective action. -Notifiers are configured at the cluster level. This model ensures that only cluster owners need to configure notifiers, leaving project owners to simply configure alerts in the scope of their projects. You don't need to dispense privileges like SMTP server access or cloud account access. - Rancher integrates with a variety of popular IT services, including: - **Slack**: Send alert notifications to your Slack channels. @@ -15,7 +13,18 @@ Rancher integrates with a variety of popular IT services, including: - **WebHooks**: Update a webpage with alert notifications. - **WeChat**: Send alert notifications to your Enterprise WeChat contacts. -## Adding Notifiers +This section covers the following topics: + +- [Roles-based access control for notifiers](#roles-based-access-control-for-notifiers) +- [Adding notifiers](#adding-notifiers) +- [Managing notifiers](#managing-notifiers) +- [Example payload for a webhook alert notifier](#example-payload-for-a-webhook-alert-notifier) + +### Roles-based Access Control for Notifiers + +Notifiers are configured at the cluster level. This model ensures that only cluster owners need to configure notifiers, leaving project owners to simply configure alerts in the scope of their projects. You don't need to dispense privileges like SMTP server access or cloud account access. + +### Adding Notifiers Set up a notifier so that you can begin configuring and sending alerts. @@ -44,10 +53,10 @@ Set up a notifier so that you can begin configuring and sending alerts. {{% /accordion %}} {{% accordion id="pagerduty" label="PagerDuty" %}} 1. Enter a **Name** for the notifier. -1. From PagerDuty, create a webhook. For instructions, see the [PagerDuty Documentation](https://support.pagerduty.com/docs/webhooks). -1. From PagerDuty, copy the webhook's **Integration Key**. +1. From PagerDuty, create a Prometheus integration. For instructions, see the [PagerDuty Documentation](https://www.pagerduty.com/docs/guides/prometheus-integration-guide/). +1. From PagerDuty, copy the integration's **Integration Key**. 1. From Rancher, enter the key in the **Service Key** field. -1. Click **Test**. If the test is successful, your PagerDuty endpoint outputs `PageDuty setting validated`. +1. Click **Test**. If the test is successful, your PagerDuty endpoint outputs `PagerDuty setting validated`. {{% /accordion %}} {{% accordion id="webhook" label="WebHook" %}} 1. Enter a **Name** for the notifier. @@ -70,17 +79,54 @@ _Available as of v2.2.0_ **Result:** Your notifier is added to Rancher. -## What's Next? 
-After creating a notifier, set up alerts to receive notifications of Rancher system events. - -- [Cluster owners]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) can set up alerts at the [cluster level]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/alerts/). -- [Project owners]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can set up alerts at the [project level]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/alerts/). - -## Managing Notifiers +### Managing Notifiers After you set up notifiers, you can manage them. From the **Global** view, open the cluster that you want to manage your notifiers. Select **Tools > Notifiers**. You can: - **Edit** their settings that you configured during their initial setup. - **Clone** them, to quickly setup slightly different notifiers. - **Delete** them when they're no longer necessary. + +### Example Payload for a Webhook Alert Notifier + +```json +{ + "receiver": "c-2a3bc:kube-components-alert", + "status": "firing", + "alerts": [ + { + "status": "firing", + "labels": { + "alert_name": "Scheduler is unavailable", + "alert_type": "systemService", + "cluster_name": "mycluster (ID: c-2a3bc)", + "component_name": "scheduler", + "group_id": "c-2a3bc:kube-components-alert", + "logs": "Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused", + "rule_id": "c-2a3bc:kube-components-alert_scheduler-system-service", + "severity": "critical" + }, + "annotations": {}, + "startsAt": "2020-01-30T19:18:13.321684733Z", + "endsAt": "0001-01-01T00:00:00Z", + "generatorURL": "" + } + ], + "groupLabels": { + "component_name": "scheduler", + "rule_id": "c-2a3bc:kube-components-alert_scheduler-system-service" + }, + "commonLabels": { + "alert_name": "Scheduler is unavailable", + "alert_type": "systemService", + "cluster_name": "mycluster (ID: c-2a3bc)" + } +} +``` +### What's Next? + +After creating a notifier, set up alerts to receive notifications of Rancher system events. + +- [Cluster owners]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) can set up alerts at the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/). +- [Project owners]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can set up alerts at the [project level]({{}}/rancher/v2.x/en/project-admin/tools/alerts/). diff --git a/content/rancher/v2.x/en/cluster-admin/tools/opa-gatekeper/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/opa-gatekeper/_index.md new file mode 100644 index 00000000000..dceb610f935 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-admin/tools/opa-gatekeper/_index.md @@ -0,0 +1,99 @@ +--- +title: OPA Gatekeeper +weight: 1 +aliases: + - /rancher/v2.x/en/cluster-admin/tools/opa-gatekeeper +--- +_Available as of v2.4.0_ + +> This is an experimental feature for the Rancher v2.4 release. + +To ensure consistency and compliance, every organization needs the ability to define and enforce policies in its environment in an automated way. OPA [https://www.openpolicyagent.org/] (Open Policy Agent) is a policy engine that facilitates policy-based control for cloud native environments. Rancher provides the ability to enable OPA Gatekeeper in Kubernetes clusters, and also installs a couple of built-in policy definitions, which are also called constraint templates. 
+ +OPA provides a high-level declarative language that lets you specify policy as code and simple APIs to offload policy decision-making. + +[OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper) is a project that provides integration between OPA and Kubernetes. OPA Gatekeeper provides: + +- An extensible, parameterized policy library. +- Native Kubernetes CRDs for instantiating the policy library, also called "constraints." +- Native Kubernetes CRDs for extending the policy library, also called "constraint templates." +- Audit functionality. + +To read more about OPA, please refer to the [official documentation.](https://www.openpolicyagent.org/docs/latest/) + +# How the OPA Gatekeeper Integration Works + +Kubernetes provides the ability to extend API server functionality via admission controller webhooks, which are invoked whenever a resource is created, updated or deleted. Gatekeeper is installed as a validating webhook and enforces policies defined by Kubernetes custom resource definitions. In addition to the admission control usage, Gatekeeper provides the capability to audit existing resources in Kubernetes clusters and mark current violations of enabled policies. + +OPA Gatekeeper is made available via Rancher's Helm system chart, and it is installed in a namespace named `gatekeeper-system`. + +# Enabling OPA Gatekeeper in a Cluster + +> **Prerequisites:** +> +> - Only administrators and cluster owners can enable OPA Gatekeeper. +> - The dashboard needs to be enabled using the `dashboard` feature flag. For more information, refer to the [section on enabling experimental features.]({{}}/rancher/v2.x/en/installation/options/feature-flags/) + +1. Navigate to the cluster's **Dashboard** view. +1. On the left side menu, expand the cluster menu and click on **OPA Gatekeeper.** +1. To install Gatekeeper with the default configuration, click on **Enable Gatekeeper (v0.1.0) with defaults.** +1. To change any default configuration, click on **Customize Gatekeeper yaml configuration.** + +# Constraint Templates + +[Constraint templates](https://github.com/open-policy-agent/gatekeeper#constraint-templates) are Kubernetes custom resources that define the schema and Rego logic of the OPA policy to be applied by Gatekeeper. For more information on the Rego policy language, refer to the [official documentation.](https://www.openpolicyagent.org/docs/latest/policy-language/) + +When OPA Gatekeeper is enabled, Rancher installs some templates by default. + +To list the constraint templates installed in the cluster, go to the left side menu under OPA Gatekeeper and click on **Templates.** + +Rancher also provides the ability to create your own constraint templates by importing YAML definitions. + +# Creating and Configuring Constraints + +[Constraints](https://github.com/open-policy-agent/gatekeeper#constraints) are Kubernetes custom resources that define the scope of objects to which a specific constraint template applies. The complete policy is defined by constraint templates and constraints together. + +> **Prerequisites:** OPA Gatekeeper must be enabled in the cluster. + +To list the constraints installed, go to the left side menu under OPA Gatekeeper, and click on **Constraints.** + +New constraints can be created from a constraint template. + +Rancher provides the ability to create a constraint by using a convenient form that lets you input the various constraint fields.
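+For example, a constraint built from the `K8sRequiredLabels` template in the upstream Gatekeeper policy library might look like the following sketch. The template, object kinds, and label name are illustrative and assume that template is installed in the cluster:
+
+```yaml
+# Sketch of a constraint; it requires every namespace to carry an "owner" label.
+apiVersion: constraints.gatekeeper.sh/v1beta1
+kind: K8sRequiredLabels
+metadata:
+  name: ns-must-have-owner
+spec:
+  enforcementAction: deny      # or dryrun, to only record violations
+  match:
+    kinds:
+      - apiGroups: [""]
+        kinds: ["Namespace"]
+  parameters:
+    labels: ["owner"]
+```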
+ +The **Edit as yaml** option is also available to configure the constraint's yaml definition. + +### Exempting Rancher's System Namespaces from Constraints + +When a constraint is created, ensure that it does not apply to any Rancher or Kubernetes system namespaces. If the system namespaces are not excluded, then it is possible to see many resources under them marked as violations of the constraint. + +To limit the scope of the constraint only to user namespaces, always specify these namespaces under the **Match** field of the constraint. + +Also, the constraint may interfere with other Rancher functionality and prevent system workloads from being deployed. To avoid this, exclude all Rancher-specific namespaces from your constraints. + +# Enforcing Constraints in your Cluster + +When the **Enforcement Action** is **Deny,** the constraint is immediately enabled and will deny any requests that violate the policy defined. By default, the enforcement value is **Deny.** + +When the **Enforcement Action** is **Dryrun,** then any resources that violate the policy are only recorded under the constraint's status field. + +To enforce constraints, create a constraint using the form. In the **Enforcement Action** field, choose **Deny.** + +# Audit and Violations in your Cluster + +OPA Gatekeeper runs a periodic audit to check if any existing resource violates any enforced constraint. The audit-interval (default 300s) can be configured while installing Gatekeeper. + +On the Gatekeeper page, any violations of the defined constraints are listed. + +Also under **Constraints,** the number of violations of the constraint can be found. + +The detail view of each constraint lists information about the resource that violated the constraint. + +# Disabling Gatekeeper + +1. Navigate to the cluster's **Dashboard** view. +1. On the left side menu, expand the cluster menu and click on **OPA Gatekeeper.** +1. Click **⋮ > Disable**. + +**Result:** Upon disabling OPA Gatekeeper, all constraint templates and constraints will also be deleted. + diff --git a/content/rancher/v2.x/en/cluster-admin/upgrading-kubernetes/_index.md b/content/rancher/v2.x/en/cluster-admin/upgrading-kubernetes/_index.md index 2d7f62ea392..d015b8b5b03 100644 --- a/content/rancher/v2.x/en/cluster-admin/upgrading-kubernetes/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/upgrading-kubernetes/_index.md @@ -1,19 +1,79 @@ --- -title: Upgrading Kubernetes +title: Upgrading and Rolling Back Kubernetes weight: 70 --- -> **Prerequisite:** The options below are available only for clusters that are [launched using RKE.]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) +Following an upgrade to the latest version of Rancher, downstream Kubernetes clusters can be upgraded to use the latest supported version of Kubernetes. -Following an upgrade to the latest version of Rancher, you can update your existing clusters to use the latest supported version of Kubernetes. +Rancher calls RKE (Rancher Kubernetes Engine) as a library when provisioning and editing RKE clusters. For more information on configuring the upgrade strategy for RKE clusters, refer to the [RKE documentation]({{}}/rke/latest/en/). -Before a new version of Rancher is released, it's tested with the latest minor versions of Kubernetes to ensure compatibility. For example, Rancher v2.3.0 is was tested with Kubernetes v1.15.4, v1.14.7, and v1.13.11.
For details on which versions of Kubernetes were tested on each Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.3.0/) +This section covers the following topics: + +- [New Features](#new-features) +- [Tested Kubernetes Versions](#tested-kubernetes-versions) +- [How Upgrades Work](#how-upgrades-work) +- [Recommended Best Practice for Upgrades](#recommended-best-practice-for-upgrades) +- [Upgrading the Kubernetes Version](#upgrading-the-kubernetes-version) +- [Rolling Back](#rolling-back) +- [Configuring the Upgrade Strategy](#configuring-the-upgrade-strategy) + - [Configuring the Maximum Unavailable Worker Nodes in the Rancher UI](#configuring-the-maximum-unavailable-worker-nodes-in-the-rancher-ui) + - [Enabling Draining Nodes During Upgrades from the Rancher UI](#enabling-draining-nodes-during-upgrades-from-the-rancher-ui) + - [Maintaining Availability for Applications During Upgrades](#maintaining-availability-for-applications-during-upgrades) + - [Configuring the Upgrade Strategy in the cluster.yml](#configuring-the-upgrade-strategy-in-the-cluster-yml) +- [Troubleshooting](#troubleshooting) + +# New Features As of Rancher v2.3.0, the Kubernetes metadata feature was added, which allows Rancher to ship Kubernetes patch versions without upgrading Rancher. For details, refer to the [section on Kubernetes metadata.]({{}}/rancher/v2.x/en/admin-settings/k8s-metadata) ->**Recommended:** Before upgrading Kubernetes, [backup your cluster]({{< baseurl >}}/rancher/v2.x/en/backups). +As of Rancher v2.4.0, -1. From the **Global** view, find the cluster for which you want to upgrade Kubernetes. Select **Vertical Ellipsis (...) > Edit**. +- The ability to import K3s Kubernetes clusters into Rancher was added, along with the ability to upgrade Kubernetes when editing those clusters. For details, refer to the [section on imported clusters.]({{}}/rancher/v2.x/en/cluster-provisioning/imported-clusters) +- New advanced options are exposed in the Rancher UI for configuring the upgrade strategy of an RKE cluster: **Maximum Worker Nodes Unavailable** and **Drain nodes.** These options leverage the new cluster upgrade process of RKE v1.1.0, in which worker nodes are upgraded in batches, so that applications can remain available during cluster upgrades, under [certain conditions.](#maintaining-availability-for-applications-during-upgrades) + +# Tested Kubernetes Versions + +Before a new version of Rancher is released, it's tested with the latest minor versions of Kubernetes to ensure compatibility. For example, Rancher v2.3.0 was tested with Kubernetes v1.15.4, v1.14.7, and v1.13.11. For details on which versions of Kubernetes were tested on each Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.3.0/) + +# How Upgrades Work + +RKE v1.1.0 changed the way that clusters are upgraded. + +In this section of the [RKE documentation,]({{}}/rke/latest/en/upgrades/how-upgrades-work) you'll learn what happens when you edit or upgrade your RKE Kubernetes cluster. + + +# Recommended Best Practice for Upgrades + +{{% tabs %}} +{{% tab "Rancher v2.4+" %}} +When upgrading the Kubernetes version of a cluster, we recommend that you: + +1. Take a snapshot. +1. Initiate a Kubernetes upgrade. +1. If the upgrade fails, revert the cluster to the pre-upgrade Kubernetes version.
Before restoring the cluster from the snapshot in the etcd datastore, the cluster should be running the pre-upgrade Kubernetes version. +1. Restore the cluster from the etcd snapshot. + +The restore operation will work on a cluster that is not in a healthy or active state. +{{% /tab %}} +{{% tab "Rancher prior to v2.4" %}} +When upgrading the Kubernetes version of a cluster, we recommend that you: + +1. Take a snapshot. +1. Initiate a Kubernetes upgrade. +1. If the upgrade fails, restore the cluster from the etcd snapshot. + +The cluster cannot be downgraded to a previous Kubernetes version. +{{% /tab %}} +{{% /tabs %}} + +# Upgrading the Kubernetes Version + +> **Prerequisites:** +> +> - The options below are available only for [Rancher-launched RKE Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) and [imported K3s Kubernetes clusters.]({{}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/#additional-features-for-imported-k3s-clusters) +> - Before upgrading Kubernetes, [back up your cluster.]({{}}/rancher/v2.x/en/backups) + +1. From the **Global** view, find the cluster for which you want to upgrade Kubernetes. Select **⋮ > Edit**. 1. Expand **Cluster Options**. @@ -21,4 +81,81 @@ As of Rancher v2.3.0, the Kubernetes metadata feature was added, which allows Ra 1. Click **Save**. -**Result:** Kubernetes begins upgrading for the cluster. During the upgrade, your cluster is unavailable. \ No newline at end of file +**Result:** Kubernetes begins upgrading for the cluster. + +# Rolling Back + +_Available as of v2.4_ + +A cluster can be restored to a backup in which the previous Kubernetes version was used. For more information, refer to the following sections: + +- [Backing up a cluster]({{}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/#how-snapshots-work) +- [Restoring a cluster from backup]({{}}/rancher/v2.x/en/cluster-admin/restoring-etcd/#restoring-a-cluster-from-a-snapshot) + +# Configuring the Upgrade Strategy + +As of RKE v1.1.0, additional upgrade options became available to give you more granular control over the upgrade process. These options can be used to maintain availability of your applications during a cluster upgrade if certain [conditions and requirements]({{}}/rke/latest/en/upgrades/maintaining-availability) are met. + +The upgrade strategy can be configured in the Rancher UI, or by editing the `cluster.yml`. More advanced options are available by editing the `cluster.yml`. + +### Configuring the Maximum Unavailable Worker Nodes in the Rancher UI + +From the Rancher UI, the maximum number of unavailable worker nodes can be configured. During a cluster upgrade, worker nodes will be upgraded in batches of this size. + +By default, the maximum number of unavailable worker nodes is defined as 10 percent of all worker nodes. This number can be configured as a percentage or as an integer. When defined as a percentage, the batch size is rounded down to the nearest node, with a minimum of one node. For example, in a cluster with 25 worker nodes, the default of 10 percent allows two worker nodes to be upgraded at a time. + +To change the default number or percentage of worker nodes, + +1. Go to the cluster view in the Rancher UI. +1. Click **⋮ > Edit.** +1. In the **Advanced Options** section, go to the **Maximum Worker Nodes Unavailable** field. Enter the percentage of worker nodes that can be upgraded in a batch. Optionally, select **Count** from the drop-down menu and enter the maximum unavailable worker nodes as an integer. +1. Click **Save.** + +**Result:** The cluster is updated to use the new upgrade strategy.
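+The same strategy, along with the other RKE v1.1.0 options described in the sections below, can also be expressed directly in the `cluster.yml`. The following is only a sketch of the relevant block; field names assume RKE v1.1.0 and the values are illustrative:
+
+```yaml
+# Partial cluster.yml; only the upgrade strategy block is shown.
+upgrade_strategy:
+  max_unavailable_worker: 10%      # batch size for worker nodes, as a percentage or a count
+  max_unavailable_controlplane: 1
+  drain: true                      # cordon and drain each node before upgrading it
+  node_drain_input:
+    ignore_daemonsets: true
+    delete_local_data: false
+    force: false
+    grace_period: -1               # a negative value uses each pod's own grace period
+    timeout: 120                   # seconds to wait for the drain before giving up
+```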
+ +### Enabling Draining Nodes During Upgrades from the Rancher UI + +By default, RKE [cordons](https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration) each node before upgrading it. [Draining](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) is disabled during upgrades by default. If draining is enabled in the cluster configuration, RKE will both cordon and drain the node before it is upgraded. + +To enable draining each node during a cluster upgrade, + +1. Go to the cluster view in the Rancher UI. +1. Click **⋮ > Edit.** +1. In the **Advanced Options** section, go to the **Drain nodes** field and click **Yes.** +1. Choose a safe or aggressive drain option. For more information about each option, refer to [this section.]({{}}/rancher/v2.x/en/cluster-admin/nodes/#aggressive-and-safe-draining-options) +1. Optionally, configure a grace period. The grace period is the timeout given to each pod for cleaning things up, so they will have a chance to exit gracefully. Pods might need to finish any outstanding requests, roll back transactions or save state to some external storage. If this value is negative, the default value specified in the pod will be used. +1. Optionally, configure a timeout, which is the amount of time the drain should continue to wait before giving up. +1. Click **Save.** + +**Result:** The cluster is updated to use the new upgrade strategy. + +> **Note:** As of Rancher v2.4.0, there is a [known issue](https://github.com/rancher/rancher/issues/25478) in which the Rancher UI doesn't show the state of etcd and controlplane nodes as drained, even though they are being drained. + +### Maintaining Availability for Applications During Upgrades + +_Available as of RKE v1.1.0_ + +In [this section of the RKE documentation,]({{}}/rke/latest/en/upgrades/maintaining-availability/) you'll learn the requirements to prevent downtime for your applications when upgrading the cluster. + +### Configuring the Upgrade Strategy in the cluster.yml + +More advanced upgrade strategy configuration options are available by editing the `cluster.yml`. + +For details, refer to [Configuring the Upgrade Strategy]({{}}/rke/latest/en/upgrades/configuring-strategy) in the RKE documentation. The section also includes an example `cluster.yml` for configuring the upgrade strategy. + +# Troubleshooting + +If a node doesn't come up after an upgrade, the `rke up` command errors out. + +No upgrade will proceed if the number of unavailable nodes exceeds the configured maximum. + +If an upgrade stops, you may need to fix an unavailable node or remove it from the cluster before the upgrade can continue. + +A failed node could be in many different states: + +- Powered off +- Unavailable +- The user drained the node while the upgrade was in process, so there are no kubelets on the node +- The upgrade itself failed + +If the max unavailable number of nodes is reached during an upgrade, Rancher user clusters will be stuck in an updating state and will not move forward with upgrading any other control plane nodes. It will continue to evaluate the set of unavailable nodes in case one of the nodes becomes available. If the node cannot be fixed, you must remove the node in order to continue the upgrade.
\ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/_index.md index bd1debc8674..d85a4e9ad3d 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/_index.md @@ -16,7 +16,7 @@ To set up storage, follow these steps: ### Prerequisites -- To create a persistent volume as a Kubernetes resource, you must have the `Manage Volumes` [role.]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-role-reference) +- To create a persistent volume as a Kubernetes resource, you must have the `Manage Volumes` [role.]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-role-reference) - If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider. ### 1. Set up persistent storage in an infrastructure provider @@ -93,7 +93,7 @@ The following steps describe how to assign existing storage to a new workload th The following steps describe how to assign persistent storage to an existing workload: 1. From the **Project** view, go to the **Workloads** tab. -1. Go to the workload that you want to add the persistent storage to. The workload type should be a stateful set. Click **Ellipsis (...) > Edit.** +1. Go to the workload that you want to add the persistent storage to. The workload type should be a stateful set. Click **⋮ > Edit.** 1. Expand the **Volumes** section and click **Add Volume > Use an existing persistent volume (claim).**. 1. In the **Persistent Volume Claim** field, select the PVC that you created. 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/_index.md index 895e45a11ef..2fc9d2799df 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/_index.md @@ -10,5 +10,5 @@ Rancher supports persistent storage with a variety of volume plugins. However, b For your convenience, Rancher offers documentation on how to configure some of the popular storage methods: -- [NFS]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/examples/nfs/) -- [vSphere]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/examples/vsphere/) +- [NFS]({{}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/examples/nfs/) +- [vSphere]({{}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/examples/vsphere/) diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/ebs/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/ebs/_index.md index 5eaa2de4859..b854daf0ef4 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/ebs/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/ebs/_index.md @@ -13,4 +13,4 @@ This section describes how to set up Amazon's Elastic Block Store in EC2. **Result:** Persistent storage has been created. 
-For details on how to set up the newly created storage in Rancher, refer to the section on [setting up existing storage.](../attaching-existing-storage) \ No newline at end of file +For details on how to set up the newly created storage in Rancher, refer to the section on [setting up existing storage.]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/attaching-existing-storage/) \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/nfs/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/nfs/_index.md index c91713c4bb0..a9be8884a31 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/nfs/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/nfs/_index.md @@ -65,4 +65,4 @@ Before you can use the NFS storage volume plug-in with Rancher deployments, you ## What's Next? -Within Rancher, add the NFS server as a [storage volume]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#adding-a-persistent-volume) and/or [storage class]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#adding-storage-classes). After adding the server, you can use it for storage for your deployments. +Within Rancher, add the NFS server as a [storage volume]({{}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#adding-a-persistent-volume) and/or [storage class]({{}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#adding-storage-classes). After adding the server, you can use it for storage for your deployments. diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/_index.md index 8fcc55db032..0750143fe22 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/_index.md @@ -5,11 +5,11 @@ aliases: - /rancher/v2.x/en/tasks/clusters/adding-storage/provisioning-storage/vsphere/ --- -To provide stateful workloads with vSphere storage, we recommend creating a vSphereVolume [storage class]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#storage-classes). This practice dynamically provisions vSphere storage when workloads request volumes through a [persistent volume claim]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/persistent-volume-claims/). +To provide stateful workloads with vSphere storage, we recommend creating a vSphereVolume [storage class]({{}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#storage-classes). This practice dynamically provisions vSphere storage when workloads request volumes through a [persistent volume claim]({{}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/persistent-volume-claims/). ### Prerequisites -In order to provision vSphere volumes in a cluster created with the [Rancher Kubernetes Engine (RKE)]({{< baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), the [vSphere cloud provider]({{< baseurl >}}/rke/latest/en/config-options/cloud-providers/vsphere) must be explicitly enabled in the [cluster options]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/). 
+In order to provision vSphere volumes in a cluster created with the [Rancher Kubernetes Engine (RKE)]({{< baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), the [vSphere cloud provider]({{}}/rke/latest/en/config-options/cloud-providers/vsphere) must be explicitly enabled in the [cluster options]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/). ### Creating A Storage Class @@ -29,7 +29,7 @@ In order to provision vSphere volumes in a cluster created with the [Rancher Kub ### Creating a Workload with a vSphere Volume -1. From the cluster where you configured vSphere storage, begin creating a workload as you would in [Deploying Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). +1. From the cluster where you configured vSphere storage, begin creating a workload as you would in [Deploying Workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). 2. For **Workload Type**, select **Stateful set of 1 pod**. 3. Expand the **Volumes** section and click **Add Volume**. 4. Choose **Add a new persistent volume (claim)**. This option will implicitly create the claim once you deploy the workload. @@ -54,7 +54,7 @@ In order to provision vSphere volumes in a cluster created with the [Rancher Kub 9. Once the replacement pod is running, click **Execute Shell**. 10. Inspect the contents of the directory where the volume is mounted by entering `ls -l /`. Note that the file you created earlier is still present. - ![workload-persistent-data]({{< baseurl >}}/img/rancher/workload-persistent-data.png) + ![workload-persistent-data]({{}}/img/rancher/workload-persistent-data.png) ## Why to Use StatefulSets Instead of Deployments diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/how-storage-works/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/how-storage-works/_index.md index a67c767cadd..a2565bd2b5b 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/how-storage-works/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/how-storage-works/_index.md @@ -16,7 +16,7 @@ To use an existing PV, your application will need to use a PVC that is bound to For dynamic storage provisioning, your application will need to use a PVC that is bound to a storage class. The storage class contains the authorization to provision new persistent volumes. -![Setting Up New and Existing Persistent Storage]({{< baseurl >}}/img/rancher/rancher-storage.svg) +![Setting Up New and Existing Persistent Storage]({{}}/img/rancher/rancher-storage.svg) For more information, refer to the [official Kubernetes documentation on storage](https://kubernetes.io/docs/concepts/storage/volumes/) diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/iscsi-volumes/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/iscsi-volumes/_index.md index 0672bbbf6ee..049a654217d 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/iscsi-volumes/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/iscsi-volumes/_index.md @@ -3,7 +3,7 @@ title: iSCSI Volumes weight: 6000 --- -In [Rancher Launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) that store data on iSCSI volumes, you may experience an issue where kubelets fail to automatically connect with iSCSI volumes. This failure is likely due to an incompatibility issue involving the iSCSI initiator tool. 
You can resolve this issue by installing the iSCSI initiator tool on each of your cluster nodes. +In [Rancher Launched Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) that store data on iSCSI volumes, you may experience an issue where kubelets fail to automatically connect with iSCSI volumes. This failure is likely due to an incompatibility issue involving the iSCSI initiator tool. You can resolve this issue by installing the iSCSI initiator tool on each of your cluster nodes. Rancher Launched Kubernetes clusters storing data on iSCSI volumes leverage the [iSCSI initiator tool](http://www.open-iscsi.com/), which is embedded in the kubelet's `rancher/hyperkube` Docker image. From each kubelet (i.e., the _initiator_), the tool discovers and launches sessions with an iSCSI volume (i.e., the _target_). However, in some instances, the versions of the iSCSI initiator tool installed on the initiator and the target may not match, resulting in a connection failure. diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md index 05ecaf4f436..50f33cce160 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md @@ -66,7 +66,7 @@ These steps describe how to set up a PVC in the namespace where your stateful wo 1. Enter a **Name** for the volume claim. -1. Select the [Namespace]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) of the volume claim. +1. Select the [Namespace]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) of the volume claim. 1. In the **Source** field, click **Use a Storage Class to provision a new persistent volume.** @@ -100,7 +100,7 @@ To attach the PVC to a new workload, To attach the PVC to an existing workload, 1. Go to the project that has the workload that will have the PVC attached. -1. Go to the workload that will have persistent storage and click **Ellipsis (...) > Edit.** +1. Go to the workload that will have persistent storage and click **⋮ > Edit.** 1. Expand the **Volumes** section and click **Add Volume > Add a New Persistent Volume (Claim).** 1. In the **Persistent Volume Claim** section, select the newly created persistent volume claim that is attached to the storage class. 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 
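+For reference, the persistent volume claim that the steps above create through the UI corresponds roughly to a manifest like the following sketch; the name, namespace, size, and storage class are placeholders:
+
+```yaml
+# Illustrative PVC bound to a storage class that provisions a new volume.
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-volume-claim
+  namespace: my-namespace
+spec:
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: my-storage-class
+  resources:
+    requests:
+      storage: 10Gi
+```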
diff --git a/content/rancher/v2.x/en/cluster-provisioning/_index.md b/content/rancher/v2.x/en/cluster-provisioning/_index.md index ad6df2689bd..31c97b0aa06 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/_index.md @@ -17,20 +17,22 @@ For a conceptual overview of how the Rancher server provisions clusters and what This section covers the following topics: + - [Setting up clusters in a hosted Kubernetes provider](#setting-up-clusters-in-a-hosted-kubernetes-provider) - [Launching Kubernetes with Rancher](#launching-kubernetes-with-rancher) - - [Launching Kubernetes and Provisioning Nodes in an Infrastructure Provider](#launching-kubernetes-and-provisioning-nodes-in-an-infrastructure-provider) - - [Launching Kubernetes on Existing Custom Nodes](#launching-kubernetes-on-existing-custom-nodes) -- [Importing Existing Cluster](#importing-existing-cluster) - + - [Launching Kubernetes and Provisioning Nodes in an Infrastructure Provider](#launching-kubernetes-and-provisioning-nodes-in-an-infrastructure-provider) + - [Launching Kubernetes on Existing Custom Nodes](#launching-kubernetes-on-existing-custom-nodes) +- [Importing Existing Clusters](#importing-existing-clusters) + - [Importing and Editing K3s Clusters](#importing-and-editing-k3s-clusters) + The following table summarizes the options and settings available for each cluster type: - Rancher Capability | RKE Launched | Hosted Kubernetes Cluster | Imported Cluster - ---------|----------|---------|---------| - Manage member roles | ✓ | ✓ | ✓ - Edit cluster options | ✓ | | - Manage node pools | ✓ | | +| Rancher Capability | RKE Launched | Hosted Kubernetes Cluster | Imported Cluster | +| -------------------- | ------------ | ------------------------- | ---------------- | +| Manage member roles | ✓ | ✓ | ✓ | +| Edit cluster options | ✓ | | | +| Manage node pools | ✓ | | | # Setting up Clusters in a Hosted Kubernetes Provider @@ -76,6 +78,23 @@ These nodes include on-premise bare metal servers, cloud-hosted virtual machines In this type of cluster, Rancher connects to a Kubernetes cluster that has already been set up. Therefore, Rancher does not provision Kubernetes, but only sets up the Rancher agents to communicate with the cluster. -Note that Rancher does not automate the provisioning, scaling, or upgrade of imported clusters. All other Rancher features, including management of cluster, policy, and workloads, are available for imported clusters. +Note that Rancher does not automate the provisioning, scaling, or upgrade of imported clusters. Other Rancher features, including management of cluster, role-based access control, policy, and workloads, are available for imported clusters. + +For all imported Kubernetes clusters except for K3s clusters, the configuration of an imported cluster still has to be edited outside of Rancher. Some examples of editing the cluster include adding and removing nodes, upgrading the Kubernetes version, and changing Kubernetes component parameters. + +In Rancher v2.4, it became possible to import a K3s cluster and upgrade Kubernetes by editing the cluster in the Rancher UI. For more information, refer to the section on [importing existing clusters.]({{}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/) + +### Importing and Editing K3s Clusters + +_Available as of Rancher v2.4.0_ + +[K3s]({{}}/k3s/latest/en/) is a lightweight, fully compliant Kubernetes distribution. K3s Kubernetes clusters can now be imported into Rancher.
+ +When a K3s cluster is imported, Rancher will recognize it as K3s, and the Rancher UI will expose the following features in addition to the functionality for other imported clusters: + +- The ability to upgrade the K3s version +- The ability to see a read-only version of the K3s cluster's configuration arguments and environment variables used to launch each node in the cluster. + +For more information, refer to the section on [imported K3s clusters.]({{}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/#additional-features-for-imported-k3s-clusters) diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md index 8a5fc2495de..e2da323ee7a 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md @@ -24,9 +24,9 @@ Kubernetes Providers | Available as of | When using Rancher to create a cluster hosted by a provider, you are prompted for authentication information. This information is required to access the provider's API. For more information on how to obtain this information, see the following procedures: -- [Creating a GKE Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke) -- [Creating an EKS Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks) -- [Creating an AKS Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/aks) -- [Creating an ACK Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack) -- [Creating a TKE Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke) -- [Creating a CCE Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce) +- [Creating a GKE Cluster]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke) +- [Creating an EKS Cluster]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks) +- [Creating an AKS Cluster]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/aks) +- [Creating an ACK Cluster]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack) +- [Creating a TKE Cluster]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke) +- [Creating a CCE Cluster]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce) diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack/_index.md index cb3951e4e68..32d75c76a00 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack/_index.md @@ -6,7 +6,7 @@ weight: 2120 _Available as of v2.2.0_ -You can use Rancher to create a cluster hosted in Alibaba Cloud Kubernetes (ACK). Rancher has already implemented and packaged the [cluster driver]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/) for ACK, but by default, this cluster driver is `inactive`. In order to launch ACK clusters, you will need to [enable the ACK cluster driver]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/#activating-deactivating-cluster-drivers).
After enabling the cluster driver, you can start provisioning ACK clusters. +You can use Rancher to create a cluster hosted in Alibaba Cloud Kubernetes (ACK). Rancher has already implemented and packaged the [cluster driver]({{}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/) for ACK, but by default, this cluster driver is `inactive`. In order to launch ACK clusters, you will need to [enable the ACK cluster driver]({{}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/#activating-deactivating-cluster-drivers). After enabling the cluster driver, you can start provisioning ACK clusters. ## Prerequisites diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce/_index.md index 39bb5c1c44b..f01af1c27b3 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce/_index.md @@ -6,7 +6,7 @@ weight: 2130 _Available as of v2.2.0_ -You can use Rancher to create a cluster hosted in Huawei Cloud Container Engine (CCE). Rancher has already implemented and packaged the [cluster driver]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/) for CCE, but by default, this cluster driver is `inactive`. In order to launch CCE clusters, you will need to [enable the CCE cluster driver]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/#activating-deactivating-cluster-drivers). After enabling the cluster driver, you can start provisioning CCE clusters. +You can use Rancher to create a cluster hosted in Huawei Cloud Container Engine (CCE). Rancher has already implemented and packaged the [cluster driver]({{}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/) for CCE, but by default, this cluster driver is `inactive`. In order to launch CCE clusters, you will need to [enable the CCE cluster driver]({{}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/#activating-deactivating-cluster-drivers). After enabling the cluster driver, you can start provisioning CCE clusters. ## Prerequisites in Huawei diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md index d3a3af145b5..e93fef1472a 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md @@ -38,7 +38,7 @@ For more detailed information on IAM policies for EKS, refer to the official [do The figure below illustrates the high-level architecture of Rancher 2.x. The figure depicts a Rancher Server installation that manages two Kubernetes clusters: one created by RKE and another created by EKS. 
-![Rancher architecture with EKS hosted cluster]({{< baseurl >}}/img/rancher/rancher-architecture.svg) +![Rancher architecture with EKS hosted cluster]({{}}/img/rancher/rancher-architecture.svg) ## Create the EKS Cluster diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md index 7664d720dbf..f08a196861b 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md @@ -1,17 +1,17 @@ --- title: Creating a GKE Cluster -shortTitle: Google Container Engine +shortTitle: Google Kubernetes Engine weight: 2105 aliases: - /rancher/v2.x/en/tasks/clusters/creating-a-cluster/create-cluster-gke/ --- -## Prerequisites in Google Cloud Platform +## Prerequisites in Google Kubernetes Engine >**Note** >Deploying to GKE will incur charges. -Create a service account using [Google Cloud Platform](https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts). GKE uses this account to operate your cluster. Creating this account also generates a private key used for authentication. +Create a service account using [Google Kubernetes Engine](https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts). GKE uses this account to operate your cluster. Creating this account also generates a private key used for authentication. The service account requires the following roles: @@ -28,7 +28,7 @@ Use {{< product >}} to set up and configure your Kubernetes cluster. 1. From the **Clusters** page, click **Add Cluster**. -2. Choose **Google Container Engine**. +2. Choose **Google Kubernetes Engine**. 3. Enter a **Cluster Name**. diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke/_index.md index c3f8087e741..dc6c66b9efb 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke/_index.md @@ -6,7 +6,7 @@ weight: 2125 _Available as of v2.2.0_ -You can use Rancher to create a cluster hosted in Tencent Kubernetes Engine (TKE). Rancher has already implemented and packaged the [cluster driver]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/) for TKE, but by default, this cluster driver is `inactive`. In order to launch TKE clusters, you will need to [enable the TKE cluster driver]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/#activating-deactivating-cluster-drivers). After enabling the cluster driver, you can start provisioning TKE clusters. +You can use Rancher to create a cluster hosted in Tencent Kubernetes Engine (TKE). Rancher has already implemented and packaged the [cluster driver]({{}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/) for TKE, but by default, this cluster driver is `inactive`. In order to launch TKE clusters, you will need to [enable the TKE cluster driver]({{}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/#activating-deactivating-cluster-drivers). After enabling the cluster driver, you can start provisioning TKE clusters. 
## Prerequisites in Tencent diff --git a/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md index e1cf1478588..8f59cbb3348 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md @@ -1,8 +1,8 @@ --- title: Importing Existing Clusters into Rancher description: Learn how you can create a cluster in Rancher by importing an existing Kubernetes cluster. Then, you can manage it using Rancher -metaTitle: "Kubernetes Cluster Management" -metaDescription: "Learn how you can import an existing Kubernetes cluster and then manage it using Rancher" +metaTitle: 'Kubernetes Cluster Management' +metaDescription: 'Learn how you can import an existing Kubernetes cluster and then manage it using Rancher' weight: 2300 aliases: - /rancher/v2.x/en/tasks/clusters/import-cluster/ @@ -10,9 +10,35 @@ aliases: When managing an imported cluster, Rancher connects to a Kubernetes cluster that has already been set up. Therefore, Rancher does not provision Kubernetes, but only sets up the Rancher agents to communicate with the cluster. -Keep in mind that editing your Kubernetes cluster still has to be done outside of Rancher. Some examples of editing the cluster include adding and removing nodes, upgrading the Kubernetes version, and changing Kubernetes component parameters. +Rancher features, including management of cluster, role-based access control, policy, and workloads, are available for imported clusters. Note that Rancher does not automate the provisioning or scaling of imported clusters. -### Prerequisites +For all imported Kubernetes clusters except for K3s clusters, the configuration of an imported cluster still has to be edited outside of Rancher. Some examples of editing the cluster include adding and removing nodes, upgrading the Kubernetes version, and changing Kubernetes component parameters. + +Rancher v2.4 added the capability to import a K3s cluster into Rancher, as well as the ability to upgrade Kubernetes by editing the cluster in the Rancher UI. 
+ +- [Features](#features) +- [Prerequisites](#prerequisites) +- [Importing a cluster](#importing-a-cluster) +- [Imported K3s clusters](#imported-k3s-clusters) + - [Additional features for imported K3s clusters](#additional-features-for-imported-k3s-clusters) + - [Configuring a K3s Cluster to Enable Importation to Rancher](#configuring-a-k3s-cluster-to-enable-importation-to-rancher) + - [Debug Logging and Troubleshooting for Imported K3s clusters](#debug-logging-and-troubleshooting-for-imported-k3s-clusters) +- [Annotating imported clusters](#annotating-imported-clusters) + +# Features + +After importing a cluster, the cluster owner can: + +- [Manage cluster access]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/) through role-based access control +- Enable [monitoring]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and [logging]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/) +- Enable [Istio]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/) +- Use [pipelines]({{}}/rancher/v2.x/en/project-admin/pipelines/) +- Configure [alerts]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/) and [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) +- Manage [projects]({{}}/rancher/v2.x/en/project-admin/) and [workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/) + +After importing a K3s cluster, the cluster owner can also [upgrade Kubernetes from the Rancher UI.]({{}}/rancher/v2.x/en/cluster-admin/upgrading-kubernetes/) + +# Prerequisites If your existing Kubernetes cluster already has a `cluster-admin` role defined, you must have this `cluster-admin` privilege to import the cluster into Rancher. @@ -23,11 +49,14 @@ kubectl create clusterrolebinding cluster-admin-binding \ --clusterrole cluster-admin \ --user [USER_ACCOUNT] ``` + before running the `kubectl` command to import the cluster. By default, GKE users are not given this privilege, so you will need to run the command before importing GKE clusters. To learn more about role-based access control for GKE, please click [here](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control). -### Importing a Cluster +> If you are importing a K3s cluster, make sure the `cluster.yml` is readable. It is protected by default. For details, refer to [Configuring a K3s cluster to enable importation to Rancher.](#configuring-a-k3s-cluster-to-enable-importation-to-rancher) + +# Importing a Cluster 1. From the **Clusters** page, click **Add Cluster**. 2. Choose **Import**. @@ -38,7 +67,115 @@ By default, GKE users are not given this privilege, so you will need to run the 7. Copy the `kubectl` command to your clipboard and run it on a node where kubeconfig is configured to point to the cluster you want to import. If you are unsure it is configured correctly, run `kubectl get nodes` to verify before running the command shown in {{< product >}}. 8. If you are using self signed certificates, you will receive the message `certificate signed by unknown authority`. To work around this validation, copy the command starting with `curl` displayed in {{< product >}} to your clipboard. Then run the command on a node where kubeconfig is configured to point to the cluster you want to import. 9. When you finish running the command(s) on your node, click **Done**. -{{< result_import-cluster >}} + {{< result_import-cluster >}} > **Note:** > You can not re-import a cluster that is currently active in a Rancher setup. + +# Imported K3s Clusters + +You can now import a K3s Kubernetes cluster into Rancher. 
[K3s]({{}}/k3s/latest/en/) is lightweight, fully compliant Kubernetes distribution. You can also upgrade Kubernetes by editing the K3s cluster in the Rancher UI. + +### Additional Features for Imported K3s Clusters + +_Available as of v2.4.0_ + +When a K3s cluster is imported, Rancher will recognize it as K3s, and the Rancher UI will expose the following features in addition to the functionality for other imported clusters: + +- The ability to upgrade the K3s version +- The ability to configure the maximum number of nodes that will be upgraded concurrently +- The ability to see a read-only version of the K3s cluster's configuration arguments and environment variables used to launch each node in the cluster. + +### Configuring K3s Cluster Upgrades + +> It is a Kubernetes best practice to back up the cluster before upgrading. When upgrading a high-availability K3s cluster with an external database, back up the database in whichever way is recommended by the relational database provider. + +The **concurrency** is the maximum number of nodes that are permitted to be unavailable during an upgrade. If number of unavailable nodes is larger than the **concurrency,** the upgrade will fail. If an upgrade fails, you may need to repair or remove failed nodes before the upgrade can succeed. + +- **Controlplane concurrency:** The maximum number of server nodes to upgrade at a single time; also the maximum unavailable server nodes +- **Worker concurrency:** The maximum number worker nodes to upgrade at the same time; also the maximum unavailable worker nodes + +In the K3s documentation, controlplane nodes are called server nodes. These nodes run the Kubernetes master, which maintains the desired state of the cluster. In K3s, these controlplane nodes have the capability to have workloads scheduled to them by default. + +Also in the K3s documentation, nodes with the worker role are called agent nodes. Any workloads or pods that are deployed in the cluster can be scheduled to these nodes by default. + +### Configuring a K3s Cluster to Enable Importation to Rancher + +The K3s server needs to be configured to allow writing to the kubeconfig file. + +This can be accomplished by passing `--write-kubeconfig-mode 644` as a flag during installation: + +``` +$ curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644 +``` + +The option can also be specified using the environment variable `K3S_KUBECONFIG_MODE`: + +``` +$ curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - +``` + +### Debug Logging and Troubleshooting for Imported K3s Clusters + +Nodes are upgraded by the system upgrade controller running in the downstream cluster. Based on the cluster configuration, Rancher deploys two [plans](https://github.com/rancher/system-upgrade-controller#example-upgrade-plan) to upgrade K3s nodes: one for controlplane nodes and one for workers. The system upgrade controller follows the plans and upgrades the nodes. + +To enable debug logging on the system upgrade controller deployment, edit the [configmap](https://github.com/rancher/system-upgrade-controller/blob/50a4c8975543d75f1d76a8290001d87dc298bdb4/manifests/system-upgrade-controller.yaml#L32) to set the debug environment variable to true. Then restart the `system-upgrade-controller` pod. 
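As a rough sketch of that workflow (the configmap name and the environment variable are taken from the upstream system-upgrade-controller manifest and may differ in your cluster, so verify them before editing; the `cattle-system` namespace matches the log command below):

```
kubectl -n cattle-system get configmaps                                        # locate the controller's env configmap
kubectl -n cattle-system edit configmap default-controller-env                 # set SYSTEM_UPGRADE_CONTROLLER_DEBUG to "true"
kubectl -n cattle-system rollout restart deployment system-upgrade-controller  # restart the controller pod
```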
+ +Logs created by the `system-upgrade-controller` can be viewed by running this command: + +``` +kubectl logs -n cattle-system system-upgrade-controller +``` + +The current status of the plans can be viewed with this command: + +``` +kubectl get plans -A -o yaml +``` + +If the cluster becomes stuck in upgrading, restart the `system-upgrade-controller`. + +To prevent issues when upgrading, the [Kubernetes upgrade best practices](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) should be followed. + +### Annotating Imported Clusters + +For all types of imported Kubernetes clusters except for K3s Kubernetes clusters, Rancher doesn't have any information about how the cluster is provisioned or configured. + +Therefore, when Rancher imports a cluster, it assumes that several capabilities are disabled by default. Rancher assumes this in order to avoid exposing UI options to the user even when the capabilities are not enabled in the imported cluster. + +However, if the cluster has a certain capability, such as the ability to use a pod security policy, a user of that cluster might still want to select pod security policies for the cluster in the Rancher UI. In order to do that, the user will need to manually indicate to Rancher that pod security policies are enabled for the cluster. + +By annotating an imported cluster, it is possible to indicate to Rancher that a cluster was given a pod security policy, or another capability, outside of Rancher. + +This example annotation indicates that a pod security policy is enabled: + +```json +"capabilities.cattle.io/pspEnabled": "true" +``` + +The following annotation indicates Ingress capabilities. Note that that the values of non-primitive objects need to be JSON encoded, with quotations escaped. + +```json +"capabilities.cattle.io/ingressCapabilities": "[{"customDefaultBackend":true,"ingressProvider":"asdf"}]" +``` + +These capabilities can be annotated for the cluster: + +- `ingressCapabilities` +- `loadBalancerCapabilities` +- `nodePoolScalingSupported` +- `nodePortRange` +- `pspEnabled` +- `taintSupport` + +All the capabilities and their type definitions can be viewed in the Rancher API view, at `[Rancher Server URL]/v3/schemas/capabilities`. + +To annotate an imported cluster, + +1. Go to the cluster view in Rancher and select **⋮ > Edit.** +1. Expand the **Labels & Annotations** section. +1. Click **Add Annotation.** +1. Add an annotation to the cluster with the format `capabilities/: ` where `value` is the cluster capability that will be overridden by the annotation. In this scenario, Rancher is not aware of any capabilities of the cluster until you add the annotation. +1. Click **Save.** + +**Result:** The annotation does not give the capabilities to the cluster, but it does indicate to Rancher that the cluster has those capabilities. \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/production/nodes-and-roles/_index.md b/content/rancher/v2.x/en/cluster-provisioning/production/nodes-and-roles/_index.md index da12ee46111..3722a97e451 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/production/nodes-and-roles/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/production/nodes-and-roles/_index.md @@ -7,7 +7,7 @@ This section describes the roles for etcd nodes, controlplane nodes, and worker This diagram is applicable to Kubernetes clusters [launched with Rancher using RKE.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). 
-![Cluster diagram]({{< baseurl >}}/img/rancher/clusterdiagram.svg)
+![Cluster diagram]({{}}/img/rancher/clusterdiagram.svg)
Lines show the traffic flow between components. Colors are used purely for visual aid # etcd diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md new file mode 100644 index 00000000000..70dc4464f42 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md @@ -0,0 +1,40 @@ +--- +title: Setting up Cloud Providers +weight: 2300 +aliases: + - /rancher/v2.x/en/concepts/clusters/cloud-providers/ + - /rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers +--- +A _cloud provider_ is a module in Kubernetes that provides an interface for managing nodes, load balancers, and networking routes. For more information, refer to the [official Kubernetes documentation on cloud providers.](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/) + +When a cloud provider is set up in Rancher, the Rancher server can automatically provision new nodes, load balancers or persistent storage devices when launching Kubernetes definitions, if the cloud provider you're using supports such automation. + +Your cluster will not provision correctly if you configure a cloud provider cluster of nodes that do not meet the prerequisites. + +By default, the **Cloud Provider** option is set to `None`. + +Supported cloud providers are: + +* Amazon +* Azure + +### Setting up the Amazon Cloud Provider + +For details on enabling the Amazon cloud provider, refer to [this page.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/amazon) + +### Setting up the Azure Cloud Provider + +For details on enabling the Azure cloud provider, refer to [this page.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/azure) + +### Setting up the GCE Cloud Provider + +For details on enabling the Google Compute Engine cloud provider, refer to [this page.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/gce) + +### Setting up a Custom Cloud Provider + +The `Custom` cloud provider is available if you want to configure any [Kubernetes cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/). + +For the custom cloud provider option, you can refer to the [RKE docs]({{}}/rke/latest/en/config-options/cloud-providers/) on how to edit the yaml file for your specific cloud provider. 
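As a minimal sketch of what that YAML can look like when you edit the cluster configuration in Rancher (the provider name and the keys under `[Global]` are placeholders; use the values your provider's documentation requires):

```
rancher_kubernetes_engine_config:
  cloud_provider:
    name: <provider-name>
    customCloudProvider: |-
      [Global]
      # provider-specific options go here
```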
There are specific cloud providers that have more detailed configuration : + +* [vSphere]({{}}/rke/latest/en/config-options/cloud-providers/vsphere/) +* [Openstack]({{}}/rke/latest/en/config-options/cloud-providers/openstack/) \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/amazon/_index.md similarity index 55% rename from content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/_index.md rename to content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/amazon/_index.md index 6b8de366b6a..169fbc92a7a 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/amazon/_index.md @@ -1,34 +1,7 @@ --- -title: Setting up Cloud Providers -weight: 2255 -aliases: - - /rancher/v2.x/en/concepts/clusters/cloud-providers/ +title: Setting up the Amazon Cloud Provider +weight: 1 --- -A _cloud provider_ is a module in Kubernetes that provides an interface for managing nodes, load balancers, and networking routes. For more information, refer to the [official Kubernetes documentation on cloud providers.](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/) - -When a cloud provider is set up in Rancher, the Rancher server can automatically provision new nodes, load balancers or persistent storage devices when launching Kubernetes definitions, if the cloud provider you're using supports such automation. - -- [Cloud provider options](#cloud-provider-options) -- [Setting up the Amazon cloud provider](#setting-up-the-amazon-cloud-provider) -- [Setting up the Azure cloud provider](#setting-up-the-azure-cloud-provider) - -## Cloud Provider Options - -By default, the **Cloud Provider** option is set to `None`. Supported cloud providers are: - -* [Amazon](#setting-up-the-amazon-cloud-provider) -* [Azure](#setting-up-the-azure-cloud-provider) - -The `Custom` cloud provider is available if you want to configure any [Kubernetes cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/). - -For the custom cloud provider option, you can refer to the [RKE docs]({{}}/rke/latest/en/config-options/cloud-providers/) on how to edit the yaml file for your specific cloud provider. There are specific cloud providers that have more detailed configuration : - -* [vSphere]({{}}/rke/latest/en/config-options/cloud-providers/vsphere/) -* [Openstack]({{}}/rke/latest/en/config-options/cloud-providers/openstack/) - -> **Warning:** Your cluster will not provision correctly if you configure a cloud provider cluster of nodes that do not meet the prerequisites. Prerequisites for supported cloud providers are listed below. - -## Setting up the Amazon Cloud Provider When using the `Amazon` cloud provider, you can leverage the following capabilities: @@ -174,72 +147,4 @@ Setting the value of the tag to `owned` tells the cluster that all resources wit ### Using Amazon Elastic Container Registry (ECR) -The kubelet component has the ability to automatically obtain ECR credentials, when the IAM profile mentioned in [Create an IAM Role and attach to the instances](#1-create-an-iam-role-and-attach-to-the-instances) is attached to the instance(s). When using a Kubernetes version older than v1.15.0, the Amazon cloud provider needs be configured in the cluster. 
Starting with Kubernetes version v1.15.0, the kubelet can obtain ECR credentials without having the Amazon cloud provider configured in the cluster. - -## Setting up the Azure Cloud Provider - -When using the `Azure` cloud provider, you can leverage the following capabilities: - -- **Load Balancers:** Launches an Azure Load Balancer within a specific Network Security Group. - -- **Persistent Volumes:** Supports using Azure Blob disks and Azure Managed Disks with standard and premium storage accounts. - -- **Network Storage:** Support Azure Files via CIFS mounts. - -The following account types are not supported for Azure Subscriptions: - -- Single tenant accounts (i.e. accounts with no subscriptions). -- Multi-subscription accounts. - -To set up the Azure cloud provider following credentials need to be configured: - -1. [Set up the Azure Tenant ID](#1-set-up-the-azure-tenant-id) -2. [Set up the Azure Client ID and Azure Client Secret](#2-set-up-the-azure-client-id-and-azure-client-secret) -3. [Configure App Registration Permissions](#3-configure-app-registration-permissions) -4. [Set up Azure Network Security Group Name](#4-set-up-azure-network-security-group-name) - -### 1. Set up the Azure Tenant ID - -Visit [Azure portal](https://portal.azure.com), login and go to **Azure Active Directory** and select **Properties**. Your **Directory ID** is your **Tenant ID** (tenantID). - -If you want to use the Azure CLI, you can run the command `az account show` to get the information. - -### 2. Set up the Azure Client ID and Azure Client Secret - -Visit [Azure portal](https://portal.azure.com), login and follow the steps below to create an **App Registration** and the corresponding **Azure Client ID** (aadClientId) and **Azure Client Secret** (aadClientSecret). - -1. Select **Azure Active Directory**. -1. Select **App registrations**. -1. Select **New application registration**. -1. Choose a **Name**, select `Web app / API` as **Application Type** and a **Sign-on URL** which can be anything in this case. -1. Select **Create**. - -In the **App registrations** view, you should see your created App registration. The value shown in the column **APPLICATION ID** is what you need to use as **Azure Client ID**. - -The next step is to generate the **Azure Client Secret**: - -1. Open your created App registration. -1. In the **Settings** view, open **Keys**. -1. Enter a **Key description**, select an expiration time and select **Save**. -1. The generated value shown in the column **Value** is what you need to use as **Azure Client Secret**. This value will only be shown once. - -### 3. Configure App Registration Permissions - -The last thing you will need to do, is assign the appropriate permissions to your App registration. - -1. Go to **More services**, search for **Subscriptions** and open it. -1. Open **Access control (IAM)**. -1. Select **Add**. -1. For **Role**, select `Contributor`. -1. For **Select**, select your created App registration name. -1. Select **Save**. - -### 4. Set up Azure Network Security Group Name - -A custom Azure Network Security Group (securityGroupName) is needed to allow Azure Load Balancers to work. - -If you provision hosts using Rancher Machine Azure driver, you will need to edit them manually to assign them to this Network Security Group. - -You should already assign custom hosts to this Network Security Group during provisioning. - -Only hosts expected to be load balancer back ends need to be in this group. 
+The kubelet component has the ability to automatically obtain ECR credentials, when the IAM profile mentioned in [Create an IAM Role and attach to the instances](#1-create-an-iam-role-and-attach-to-the-instances) is attached to the instance(s). When using a Kubernetes version older than v1.15.0, the Amazon cloud provider needs be configured in the cluster. Starting with Kubernetes version v1.15.0, the kubelet can obtain ECR credentials without having the Amazon cloud provider configured in the cluster. \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/azure/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/azure/_index.md new file mode 100644 index 00000000000..25884572579 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/azure/_index.md @@ -0,0 +1,70 @@ +--- +title: Setting up the Azure Cloud Provider +weight: 2 +--- + +When using the `Azure` cloud provider, you can leverage the following capabilities: + +- **Load Balancers:** Launches an Azure Load Balancer within a specific Network Security Group. + +- **Persistent Volumes:** Supports using Azure Blob disks and Azure Managed Disks with standard and premium storage accounts. + +- **Network Storage:** Support Azure Files via CIFS mounts. + +The following account types are not supported for Azure Subscriptions: + +- Single tenant accounts (i.e. accounts with no subscriptions). +- Multi-subscription accounts. + +To set up the Azure cloud provider following credentials need to be configured: + +1. [Set up the Azure Tenant ID](#1-set-up-the-azure-tenant-id) +2. [Set up the Azure Client ID and Azure Client Secret](#2-set-up-the-azure-client-id-and-azure-client-secret) +3. [Configure App Registration Permissions](#3-configure-app-registration-permissions) +4. [Set up Azure Network Security Group Name](#4-set-up-azure-network-security-group-name) + +### 1. Set up the Azure Tenant ID + +Visit [Azure portal](https://portal.azure.com), login and go to **Azure Active Directory** and select **Properties**. Your **Directory ID** is your **Tenant ID** (tenantID). + +If you want to use the Azure CLI, you can run the command `az account show` to get the information. + +### 2. Set up the Azure Client ID and Azure Client Secret + +Visit [Azure portal](https://portal.azure.com), login and follow the steps below to create an **App Registration** and the corresponding **Azure Client ID** (aadClientId) and **Azure Client Secret** (aadClientSecret). + +1. Select **Azure Active Directory**. +1. Select **App registrations**. +1. Select **New application registration**. +1. Choose a **Name**, select `Web app / API` as **Application Type** and a **Sign-on URL** which can be anything in this case. +1. Select **Create**. + +In the **App registrations** view, you should see your created App registration. The value shown in the column **APPLICATION ID** is what you need to use as **Azure Client ID**. + +The next step is to generate the **Azure Client Secret**: + +1. Open your created App registration. +1. In the **Settings** view, open **Keys**. +1. Enter a **Key description**, select an expiration time and select **Save**. +1. The generated value shown in the column **Value** is what you need to use as **Azure Client Secret**. This value will only be shown once. + +### 3. Configure App Registration Permissions + +The last thing you will need to do, is assign the appropriate permissions to your App registration. + +1. 
Go to **More services**, search for **Subscriptions** and open it. +1. Open **Access control (IAM)**. +1. Select **Add**. +1. For **Role**, select `Contributor`. +1. For **Select**, select your created App registration name. +1. Select **Save**. + +### 4. Set up Azure Network Security Group Name + +A custom Azure Network Security Group (securityGroupName) is needed to allow Azure Load Balancers to work. + +If you provision hosts using Rancher Machine Azure driver, you will need to edit them manually to assign them to this Network Security Group. + +You should already assign custom hosts to this Network Security Group during provisioning. + +Only hosts expected to be load balancer back ends need to be in this group. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/gce/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/gce/_index.md new file mode 100644 index 00000000000..000b537c110 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/gce/_index.md @@ -0,0 +1,54 @@ +--- +title: Setting up the Google Compute Engine Cloud Provider +weight: 3 +--- + +In this section, you'll learn how to enable the Google Compute Engine (GCE) cloud provider for custom clusters in Rancher. A custom cluster is one in which Rancher installs Kubernetes on existing nodes. + +The official Kubernetes documentation for the GCE cloud provider is [here.](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#gce) + +> **Prerequisites:** The service account of `Identity and API` access on GCE needs the `Computer Admin` permission. + +If you are using Calico, + +1. Go to the cluster view in the Rancher UI, and click **⋮ > Edit.** +1. Click **Edit as YAML,** and enter the following configuration: + + ``` + rancher_kubernetes_engine_config: + cloud_provider: + name: gce + customCloudProvider: |- + [Global] + project-id= + network-name= + subnetwork-name= + node-instance-prefix= + node-tags= + network: + options: + calico_cloud_provider: "gce" + plugin: "calico" + ``` + +If you are using Canal or Flannel, + +1. Go to the cluster view in the Rancher UI, and click **⋮ > Edit.** +1. Click **Edit as YAML,** and enter the following configuration: + + ``` + rancher_kubernetes_engine_config: + cloud_provider: + name: gce + customCloudProvider: |- + [Global] + project-id= + network-name= + subnetwork-name= + node-instance-prefix= + node-tags= + services: + kube_controller: + extra_args: + configure-cloud-routes: true # we need to allow the cloud provider configure the routes for the hosts + ``` \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md index 9835c53a18c..c964b58e162 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md @@ -53,18 +53,18 @@ Provision the host according to the [installation requirements]({{}}/ra >**Using Windows nodes as Kubernetes workers?** > - >- See [Enable the Windows Support Option]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/#enable-the-windows-support-option). - >- The only Network Provider available for clusters with Windows support is Flannel. 
See [Networking Option]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/#networking-option). + >- See [Enable the Windows Support Option]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/#enable-the-windows-support-option). + >- The only Network Provider available for clusters with Windows support is Flannel. See [Networking Option]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/#networking-option). 6. Click **Next**. 7. From **Node Role**, choose the roles that you want filled by a cluster node. >**Notes:** > - >- Using Windows nodes as Kubernetes workers? See [Node Configuration]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/#node-configuration). + >- Using Windows nodes as Kubernetes workers? See [Node Configuration]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/#node-configuration). >- Bare-Metal Server Reminder: If you plan on dedicating bare-metal servers to each role, you must provision a bare-metal server for each role (i.e. provision multiple bare-metal servers). -8. **Optional**: Click **[Show advanced options]({{< baseurl >}}/rancher/v2.x/en/admin-settings/agent-options/)** to specify IP address(es) to use when registering the node, override the hostname of the node, or to add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node. +8. **Optional**: Click **[Show advanced options]({{}}/rancher/v2.x/en/admin-settings/agent-options/)** to specify IP address(es) to use when registering the node, override the hostname of the node, or to add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node. 9. Copy the command displayed on screen to your clipboard. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/_index.md index 0d7a0e5ab67..fbac23befac 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/_index.md @@ -16,6 +16,7 @@ This section covers the following topics: - [Node templates](#node-templates) - [Node labels](#node-labels) - [Node taints](#node-taints) + - [Administrator control of node templates](#administrator-control-of-node-templates) - [Node pools](#node-pools) - [Node pool taints](#node-pool-taints) - [About node auto-replace](#about-node-auto-replace) @@ -42,11 +43,24 @@ You can add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and Since taints can be added at a node template and node pool, if there is no conflict with the same key and effect of the taints, all taints will be added to the nodes. If there are taints with the same key and different effect, the taints from the node pool will override the taints from the node template. +### Administrator Control of Node Templates + +_Available as of v2.3.3_ + +Administrators can control all node templates. Admins can now maintain all the node templates within Rancher. When a node template owner is no longer using Rancher, the node templates created by them can be managed by administrators so the cluster can continue to be updated and maintained. 
+ +To access all node templates, an administrator will need to do the following: + +1. In the Rancher UI, click the user profile icon in the upper right corner. +1. Click **Node Templates.** + +**Result:** All node templates are listed and grouped by owner. The templates can be edited or cloned by clicking the **⋮.** + # Node Pools Using Rancher, you can create pools of nodes based on a [node template](#node-templates). The benefit of using a node pool is that if a node is destroyed or deleted, you can increase the number of live nodes to compensate for the node that was lost. The node pool helps you ensure that the count of the node pool is as expected. -Each node pool is assigned with a [node component]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) to specify how these nodes should be configured for the Kubernetes cluster. +Each node pool is assigned with a [node component]({{}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) to specify how these nodes should be configured for the Kubernetes cluster. ### Node Pool Taints @@ -83,7 +97,7 @@ When you create the node pool, you can specify the amount of time in minutes tha You can also enable node auto-replace after the cluster is created with the following steps: 1. From the Global view, click the Clusters tab. -1. Go to the cluster where you want to enable node auto-replace, click the vertical ellipsis **(…)**, and click **Edit.** +1. Go to the cluster where you want to enable node auto-replace, click the vertical ⋮ **(…)**, and click **Edit.** 1. In the **Node Pools** section, go to the node pool where you want to enable node auto-replace. In the **Recreate Unreachable After** field, enter the number of minutes that Rancher should wait for a node to respond before replacing the node. 1. Click **Save.** @@ -94,7 +108,7 @@ You can also enable node auto-replace after the cluster is created with the foll You can disable node auto-replace from the Rancher UI with the following steps: 1. From the Global view, click the Clusters tab. -1. Go to the cluster where you want to enable node auto-replace, click the vertical ellipsis **(…)**, and click **Edit.** +1. Go to the cluster where you want to enable node auto-replace, click the vertical ⋮ **(…)**, and click **Edit.** 1. In the **Node Pools** section, go to the node pool where you want to enable node auto-replace. In the **Recreate Unreachable After** field, enter 0. 1. Click **Save.** @@ -112,9 +126,9 @@ Node templates can use cloud credentials to store credentials for launching node - Multiple node templates can share the same cloud credential to create node pools. If your key is compromised or expired, the cloud credential can be updated in a single place, which allows all node templates that are using it to be updated at once. -> **Note:** As of v2.2.0, the default `active` [node drivers]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/node-drivers/) and any node driver, that has fields marked as `password`, are required to use cloud credentials. If you have upgraded to v2.2.0, existing node templates will continue to work with the previous account access information, but when you edit the node template, you will be required to create a cloud credential and the node template will start using it. +> **Note:** As of v2.2.0, the default `active` [node drivers]({{}}/rancher/v2.x/en/admin-settings/drivers/node-drivers/) and any node driver, that has fields marked as `password`, are required to use cloud credentials. 
If you have upgraded to v2.2.0, existing node templates will continue to work with the previous account access information, but when you edit the node template, you will be required to create a cloud credential and the node template will start using it. -After cloud credentials are created, the user can start [managing the cloud credentials that they created]({{< baseurl >}}/rancher/v2.x/en/user-settings/cloud-credentials/). +After cloud credentials are created, the user can start [managing the cloud credentials that they created]({{}}/rancher/v2.x/en/user-settings/cloud-credentials/). # Node Drivers diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md index 94a58722719..4754951afde 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md @@ -11,7 +11,7 @@ Use Rancher to create a Kubernetes cluster in Amazon EC2. - **AWS EC2 Access Key and Secret Key** that will be used to create the instances. See [Amazon Documentation: Creating Access Keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) how to create an Access Key and Secret Key. - **IAM Policy created** to add to the user of the Access Key And Secret Key. See [Amazon Documentation: Creating IAM Policies (Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html#access_policies_create-start) how to create an IAM policy. See our three example JSON policies below: - [Example IAM Policy](#example-iam-policy) - - [Example IAM Policy with PassRole](#example-iam-policy-with-passrole) (needed if you want to use [Kubernetes Cloud Provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers) or want to pass an IAM Profile to an instance) + - [Example IAM Policy with PassRole](#example-iam-policy-with-passrole) (needed if you want to use [Kubernetes Cloud Provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers) or want to pass an IAM Profile to an instance) - [Example IAM Policy to allow encrypted EBS volumes](#example-iam-policy-to-allow-encrypted-ebs-volumes) - **IAM Policy added as Permission** to the user. See [Amazon Documentation: Adding Permissions to a User (Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) how to attach it to an user. @@ -99,7 +99,7 @@ Optional: In the **Engine Options** section of the node template, you can config - **Security Groups** creates or configures the Security Groups applied to your nodes. Please refer to [Amazon EC2 security group when using Node Driver]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#security-group-for-nodes-on-aws-ec2) to see what rules are created in the `rancher-nodes` Security Group. - **Instance** configures the instances that will be created. Make sure you configure the correct **SSH User** for the configured AMI.

- If you need to pass an **IAM Instance Profile Name** (not ARN), for example, when you want to use a [Kubernetes Cloud Provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers), you will need an additional permission in your policy. See [Example IAM policy with PassRole](#example-iam-policy-with-passrole) for an example policy. + If you need to pass an **IAM Instance Profile Name** (not ARN), for example, when you want to use a [Kubernetes Cloud Provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers), you will need an additional permission in your policy. See [Example IAM policy with PassRole](#example-iam-policy-with-passrole) for an example policy. 1. {{< step_rancher-template >}} 1. Click **Create**. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md index ecee7787ccc..f59e5e35b86 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md @@ -162,7 +162,7 @@ Only VMs booting from RancherOS ISO are supported. Ensure that the [OS ISO URL](#instance-options) contains the URL of the VMware ISO release for RancherOS: `rancheros-vmware.iso`. - ![image]({{< baseurl >}}/img/rancher/vsphere-node-template-1.png) + ![image]({{}}/img/rancher/vsphere-node-template-1.png) {{% /tab %}} {{% /tabs %}} @@ -226,7 +226,7 @@ To make use of cloud-init initialization, create a cloud config file using valid {{% /tab %}} {{% tab "Rancher prior to v2.3.3" %}} -You may specify the URL of a RancherOS cloud-config.yaml file in the the **Cloud Init** field. Refer to the [RancherOS Documentation]https://rancher.com/docs/os/v1.x/en/installation/configuration/#cloud-config) for details on the supported configuration directives. Note that the URL must be network accessible from the VMs created by the template. +You may specify the URL of a RancherOS cloud-config.yaml file in the the **Cloud Init** field. Refer to the [RancherOS Documentation]https://rancher.com/docs/os/v1.x/en/configuration/#cloud-config) for details on the supported configuration directives. Note that the URL must be network accessible from the VMs created by the template. {{% /tab %}} {{% /tabs %}} diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference/_index.md index cdc3d70e232..adf7cdbe8d4 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference/_index.md @@ -48,7 +48,7 @@ The options for creating and configuring an instance are different depending on | Creation method | * | The method for setting up an operating system on the node. The operating system can be installed from an ISO or from a VM template. Depending on the creation method, you will also have to specify a VM template, content library, existing VM, or ISO. 
For more information on creation methods, refer to the section on [configuring instances.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/#c-configure-instances-and-operating-systems) | | Cloud Init | | URL of a `cloud-config.yml` file or URL to provision VMs with. This file allows further customization of the operating system, such as network configuration, DNS servers, or system daemons. The operating system must support `cloud-init`. | | Networks | | Name(s) of the network to attach the VM to. | -| Configuration Parameters used for guestinfo | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). | +| Configuration Parameters used for guestinfo | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). | {{% /tab %}} {{% tab "Rancher prior to v2.3.3" %}} @@ -58,9 +58,9 @@ The options for creating and configuring an instance are different depending on | CPUs | * | Number of vCPUS to assign to VMs. | | Memory | * | Amount of memory to assign to VMs. | | Disk | * | Size of the disk (in MB) to attach to the VMs. | -| Cloud Init | | URL of a [RancherOS cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/) file to provision VMs with. This file allows further customization of the RancherOS operating system, such as network configuration, DNS servers, or system daemons.| +| Cloud Init | | URL of a [RancherOS cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/) file to provision VMs with. This file allows further customization of the RancherOS operating system, such as network configuration, DNS servers, or system daemons.| | OS ISO URL | * | URL of a RancherOS vSphere ISO file to boot the VMs from. You can find URLs for specific versions in the [Rancher OS GitHub Repo](https://github.com/rancher/os). | -| Configuration Parameters | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). | +| Configuration Parameters | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). 
| {{% /tab %}} {{% /tabs %}} diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md index bd6c563029c..20922c7d3b2 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md @@ -36,7 +36,7 @@ This section is a cluster configuration reference, covering the following topics # Rancher UI Options -When creating a cluster using one of the options described in [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters), you can configure basic Kubernetes options using the **Cluster Options** section. +When creating a cluster using one of the options described in [Rancher Launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters), you can configure basic Kubernetes options using the **Cluster Options** section. ### Kubernetes Version @@ -44,7 +44,7 @@ The version of Kubernetes installed on your cluster nodes. Rancher packages its ### Network Provider -The [Network Provider](https://kubernetes.io/docs/concepts/cluster-administration/networking/) that the cluster uses. For more details on the different networking providers, please view our [Networking FAQ]({{< baseurl >}}/rancher/v2.x/en/faq/networking/cni-providers/). +The [Network Provider](https://kubernetes.io/docs/concepts/cluster-administration/networking/) that the cluster uses. For more details on the different networking providers, please view our [Networking FAQ]({{}}/rancher/v2.x/en/faq/networking/cni-providers/). >**Note:** After you launch the cluster, you cannot change your network provider. Therefore, choose which network provider you want to use carefully, as Kubernetes doesn't allow switching between network providers. Once a cluster is created with a network provider, changing network providers would require you tear down the entire cluster and all its applications. @@ -57,9 +57,9 @@ Out of the box, Rancher is compatible with the following network providers: **Notes on Canal:** -In v2.0.0 - v2.0.4 and v2.0.6, this was the default option for these clusters was Canal with network isolation. With the network isolation automatically enabled, it prevented any pod communication between [projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). +In v2.0.0 - v2.0.4 and v2.0.6, this was the default option for these clusters was Canal with network isolation. With the network isolation automatically enabled, it prevented any pod communication between [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). -As of v2.0.7, if you use Canal, you also have the option of using **Project Network Isolation**, which will enable or disable communication between pods in different [projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). +As of v2.0.7, if you use Canal, you also have the option of using **Project Network Isolation**, which will enable or disable communication between pods in different [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). >**Attention Rancher v2.0.0 - v2.0.6 Users** > @@ -72,13 +72,13 @@ In v2.0.5, this was the default option, which did not prevent any network isolat **Notes on Weave:** -When Weave is selected as network provider, Rancher will automatically enable encryption by generating a random password. 
If you want to specify the password manually, please see how to configure your cluster using a [Config File]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) and the [Weave Network Plug-in Options]({{< baseurl >}}/rke/latest/en/config-options/add-ons/network-plugins/#weave-network-plug-in-options). +When Weave is selected as network provider, Rancher will automatically enable encryption by generating a random password. If you want to specify the password manually, please see how to configure your cluster using a [Config File]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) and the [Weave Network Plug-in Options]({{}}/rke/latest/en/config-options/add-ons/network-plugins/#weave-network-plug-in-options). ### Kubernetes Cloud Providers You can configure a [Kubernetes cloud provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers). If you want to use [volumes and storage]({{}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/) in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the `aws` cloud provider. ->**Note:** If the cloud provider you want to use is not listed as an option, you will need to use the [config file option](#config-file) to configure the cloud provider. Please reference the [RKE cloud provider documentation]({{< baseurl >}}/rke/latest/en/config-options/cloud-providers/) on how to configure the cloud provider. +>**Note:** If the cloud provider you want to use is not listed as an option, you will need to use the [config file option](#config-file) to configure the cloud provider. Please reference the [RKE cloud provider documentation]({{}}/rke/latest/en/config-options/cloud-providers/) on how to configure the cloud provider. If you want to see all the configuration options for a cluster, please click **Show advanced options** on the bottom right. The advanced options are described below: @@ -119,7 +119,7 @@ The following options are available when you create clusters in the Rancher UI. ### NGINX Ingress -Option to enable or disable the [NGINX ingress controller]({{< baseurl >}}/rke/latest/en/config-options/add-ons/ingress-controllers/). +Option to enable or disable the [NGINX ingress controller]({{}}/rke/latest/en/config-options/add-ons/ingress-controllers/). ### Node Port Range @@ -127,15 +127,15 @@ Option to change the range of ports that can be used for [NodePort services](htt ### Metrics Server Monitoring -Option to enable or disable [Metrics Server]({{< baseurl >}}/rke/latest/en/config-options/add-ons/metrics-server/). +Option to enable or disable [Metrics Server]({{}}/rke/latest/en/config-options/add-ons/metrics-server/). ### Pod Security Policy Support -Option to enable and select a default [Pod Security Policy]({{< baseurl >}}/rancher/v2.x/en/admin-settings/pod-security-policies). You must have an existing Pod Security Policy configured before you can use this option. +Option to enable and select a default [Pod Security Policy]({{}}/rancher/v2.x/en/admin-settings/pod-security-policies). You must have an existing Pod Security Policy configured before you can use this option. ### Docker Version on Nodes -Option to require [a supported Docker version]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/) installed on the cluster nodes that are added to the cluster, or to allow unsupported Docker versions installed on the cluster nodes. 
+Option to require [a supported Docker version]({{}}/rancher/v2.x/en/installation/requirements/) installed on the cluster nodes that are added to the cluster, or to allow unsupported Docker versions installed on the cluster nodes. ### Docker Root Directory @@ -143,7 +143,7 @@ If the nodes you are adding to the cluster have Docker configured with a non-def ### Recurring etcd Snapshots -Option to enable or disable [recurring etcd snapshots]({{< baseurl >}}/rke/latest/en/etcd-snapshots/#etcd-recurring-snapshots). +Option to enable or disable [recurring etcd snapshots]({{}}/rke/latest/en/etcd-snapshots/#etcd-recurring-snapshots). # Cluster Config File @@ -154,7 +154,7 @@ Instead of using the Rancher UI to choose Kubernetes options for the cluster, ad - To edit an RKE config file directly from the Rancher UI, click **Edit as YAML**. - To read from an existing RKE file, click **Read from a file**. -![image]({{< baseurl >}}/img/rancher/cluster-options-yaml.png) +![image]({{}}/img/rancher/cluster-options-yaml.png) The structure of the config file is different depending on your version of Rancher. Below are example config files for Rancher v2.0.0-v2.2.x and for Rancher v2.3.0+. @@ -341,7 +341,7 @@ ssh_agent_auth: false ### Default DNS provider -The table below indicates what DNS provider is deployed by default. See [RKE documentation on DNS provider]({{< baseurl >}}/rke/latest/en/config-options/add-ons/dns/) for more information how to configure a different DNS provider. CoreDNS can only be used on Kubernetes v1.12.0 and higher. +The table below indicates what DNS provider is deployed by default. See [RKE documentation on DNS provider]({{}}/rke/latest/en/config-options/add-ons/dns/) for more information how to configure a different DNS provider. CoreDNS can only be used on Kubernetes v1.12.0 and higher. | Rancher version | Kubernetes version | Default DNS provider | |-------------|--------------------|----------------------| @@ -361,7 +361,7 @@ See [Docker Root Directory](#docker-root-directory). ### enable_cluster_monitoring -Option to enable or disable [Cluster Monitoring]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/). +Option to enable or disable [Cluster Monitoring]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/). ### enable_network_policy diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/pod-security-policies/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/pod-security-policies/_index.md index f4567141247..009fca03abb 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/pod-security-policies/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/pod-security-policies/_index.md @@ -7,10 +7,10 @@ _Pod Security Policies_ are objects that control security-sensitive aspects of p ## Adding a Default Pod Security Policy -When you create a new cluster, you can configure it to apply a PSP immediately. As you create the cluster, use the **Cluster Options** to enable a PSP. The PSP assigned to the cluster will be the default PSP for projects within the cluster. +When you create a new cluster with RKE, you can configure it to apply a PSP immediately. As you create the cluster, use the **Cluster Options** to enable a PSP. The PSP assigned to the cluster will be the default PSP for projects within the cluster. >**Prerequisite:** ->Create a Pod Security Policy within Rancher. Before you can assign a default PSP to a new cluster, you must have a PSP available for assignment. 
For instruction, see [Creating Pod Security Policies]({{< baseurl >}}/rancher/v2.x/en/admin-settings/pod-security-policies/). +>Create a Pod Security Policy within Rancher. Before you can assign a default PSP to a new cluster, you must have a PSP available for assignment. For instruction, see [Creating Pod Security Policies]({{}}/rancher/v2.x/en/admin-settings/pod-security-policies/). >**Note:** >For security purposes, we recommend assigning a PSP as you create your clusters. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/rancher-agents/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/rancher-agents/_index.md index de3e9ba5058..0c5c967613c 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/rancher-agents/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/rancher-agents/_index.md @@ -12,11 +12,11 @@ For a conceptual overview of how the Rancher server provisions clusters and comm ### cattle-cluster-agent -The `cattle-cluster-agent` is used to connect to the Kubernetes API of [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters. The `cattle-cluster-agent` is deployed using a Deployment resource. +The `cattle-cluster-agent` is used to connect to the Kubernetes API of [Rancher Launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters. The `cattle-cluster-agent` is deployed using a Deployment resource. ### cattle-node-agent -The `cattle-node-agent` is used to interact with nodes in a [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) cluster when performing cluster operations. Examples of cluster operations are upgrading Kubernetes version and creating/restoring etcd snapshots. The `cattle-node-agent` is deployed using a DaemonSet resource to make sure it runs on every node. The `cattle-node-agent` is used as fallback option to connect to the Kubernetes API of [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters when `cattle-cluster-agent` is unavailable. +The `cattle-node-agent` is used to interact with nodes in a [Rancher Launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) cluster when performing cluster operations. Examples of cluster operations are upgrading Kubernetes version and creating/restoring etcd snapshots. The `cattle-node-agent` is deployed using a DaemonSet resource to make sure it runs on every node. The `cattle-node-agent` is used as fallback option to connect to the Kubernetes API of [Rancher Launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters when `cattle-cluster-agent` is unavailable. > **Note:** In Rancher v2.2.4 and lower, the `cattle-node-agent` pods did not tolerate all taints, causing Kubernetes upgrades to fail on these nodes. The fix for this has been included in Rancher v2.2.5 and higher. 
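A quick way to inspect both agents in a downstream cluster (assuming the standard `cattle-system` namespace; the label selector shown is an assumption and may vary by Rancher version):

```
kubectl -n cattle-system get deployment cattle-cluster-agent          # deployed as a Deployment
kubectl -n cattle-system get daemonset cattle-node-agent              # deployed as a DaemonSet on every node
kubectl -n cattle-system logs -l app=cattle-cluster-agent --tail=20   # label selector is an assumption; adjust as needed
```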
diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md index 337a4452bcc..17aeed8c00b 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md @@ -5,7 +5,7 @@ weight: 2240 _Available as of v2.3.0_ -When provisioning a [custom cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/) using Rancher, Rancher uses RKE (the Rancher Kubernetes Engine) to provision the Kubernetes custom cluster on your existing infrastructure. +When provisioning a [custom cluster]({{}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/) using Rancher, Rancher uses RKE (the Rancher Kubernetes Engine) to provision the Kubernetes custom cluster on your existing infrastructure. You can use a mix of Linux and Windows hosts as your cluster nodes. Windows nodes can only be used for deploying workloads, while Linux nodes are required for cluster management. @@ -19,26 +19,20 @@ This guide covers the following topics: -- [Prerequisites](#prerequisites) - [Requirements](#requirements-for-windows-clusters) - [OS and Docker](#os-and-docker-requirements) - [Nodes](#node-requirements) - [Networking](#networking-requirements) - [Architecture](#architecture-requirements) - [Containers](#container-requirements) + - [Cloud Providers](#cloud-providers) - [Tutorial: How to Create a Cluster with Windows Support](#tutorial-how-to-create-a-cluster-with-windows-support) - [Configuration for Storage Classes in Azure](#configuration-for-storage-classes-in-azure) -# Prerequisites - -Before provisioning a new cluster, be sure that you have already installed Rancher on a device that accepts inbound network traffic. This is required in order for the cluster nodes to communicate with Rancher. If you have not already installed Rancher, please refer to the [installation documentation]({{< baseurl >}}/rancher/v2.x/en/installation/) before proceeding with this guide. - -> **Note on Cloud Providers:** If you set a Kubernetes cloud provider in your cluster, some additional steps are required. You might want to set a cloud provider if you want to want to leverage a cloud provider's capabilities, for example, to automatically provision storage, load balancers, or other infrastructure for your cluster. Refer to [this page]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) for details on how to configure a cloud provider cluster of nodes that meet the prerequisites. - # Requirements for Windows Clusters -For a custom cluster, the general node requirements for networking, operating systems, and Docker are the same as the node requirements for a [Rancher installation]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/). +For a custom cluster, the general node requirements for networking, operating systems, and Docker are the same as the node requirements for a [Rancher installation]({{}}/rancher/v2.x/en/installation/requirements/). ### OS and Docker Requirements @@ -64,6 +58,8 @@ Rancher will not provision the node if the node does not meet these requirements ### Networking Requirements +Before provisioning a new cluster, be sure that you have already installed Rancher on a device that accepts inbound network traffic. This is required in order for the cluster nodes to communicate with Rancher. 
If you have not already installed Rancher, please refer to the [installation documentation]({{}}/rancher/v2.x/en/installation/) before proceeding with this guide. + Rancher only supports Windows using Flannel as the network provider. There are two network options: [**Host Gateway (L2bridge)**](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw) and [**VXLAN (Overlay)**](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan). The default option is **VXLAN (Overlay)** mode. @@ -84,14 +80,23 @@ We recommend the minimum three-node architecture listed in the table below, but | Node | Operating System | Kubernetes Cluster Role(s) | Purpose | | ------ | --------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- | -| Node 1 | Linux (Ubuntu Server 18.04 recommended) | [Control Plane]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#control-plane-nodes), [etcd]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#etcd-nodes), [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) | Manage the Kubernetes cluster | -| Node 2 | Linux (Ubuntu Server 18.04 recommended) | [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) | Support the Rancher Cluster agent, Metrics server, DNS, and Ingress for the cluster | -| Node 3 | Windows (Windows Server core version 1809 or above) | [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) | Run your Windows containers | +| Node 1 | Linux (Ubuntu Server 18.04 recommended) | [Control Plane]({{}}/rancher/v2.x/en/cluster-provisioning/#control-plane-nodes), [etcd]({{}}/rancher/v2.x/en/cluster-provisioning/#etcd-nodes), [Worker]({{}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) | Manage the Kubernetes cluster | +| Node 2 | Linux (Ubuntu Server 18.04 recommended) | [Worker]({{}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) | Support the Rancher Cluster agent, Metrics server, DNS, and Ingress for the cluster | +| Node 3 | Windows (Windows Server core version 1809 or above) | [Worker]({{}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) | Run your Windows containers | ### Container Requirements Windows requires that containers must be built on the same Windows Server version that they are being deployed on. Therefore, containers must be built on Windows Server core version 1809 or above. If you have existing containers built for an earlier Windows Server core version, they must be re-built on Windows Server core version 1809 or above. +### Cloud Providers + +If you set a Kubernetes cloud provider in your cluster, some additional steps are required. You might want to set a cloud provider if you want to want to leverage a cloud provider's capabilities, for example, to automatically provision storage, load balancers, or other infrastructure for your cluster. Refer to [this page]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) for details on how to configure a cloud provider cluster of nodes that meet the prerequisites. 
+ +If you are using the GCE (Google Compute Engine) cloud provider, you must do the following: + +- Enable the GCE cloud provider in the `cluster.yml` by following [these steps.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/gce) +- When provisioning the cluster in Rancher, choose **Custom cloud provider** as the cloud provider in the Rancher UI. + # Tutorial: How to Create a Cluster with Windows Support This tutorial describes how to create a Rancher-provisioned cluster with the three nodes in the [recommended architecture.](#guide-architecture) @@ -130,11 +135,11 @@ You will provision three nodes: | Node 2 | Linux (Ubuntu Server 18.04 recommended) | | Node 3 | Windows (Windows Server core version 1809 or above required) | -If your nodes are hosted by a **Cloud Provider** and you want automation support such as loadbalancers or persistent storage devices, your nodes have additional configuration requirements. For details, see [Selecting Cloud Providers.]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers) +If your nodes are hosted by a **Cloud Provider** and you want automation support such as loadbalancers or persistent storage devices, your nodes have additional configuration requirements. For details, see [Selecting Cloud Providers.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers) # 2. Create the Custom Cluster -The instructions for creating a custom cluster that supports Windows nodes are very similar to the general [instructions for creating a custom cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster) with some Windows-specific requirements. +The instructions for creating a custom cluster that supports Windows nodes are very similar to the general [instructions for creating a custom cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster) with some Windows-specific requirements. Windows support only be enabled if the cluster uses Kubernetes v1.15+ and the Flannel network provider. @@ -170,7 +175,7 @@ In this section, we fill out a form on the Rancher UI to get a custom command to 1. In the **Node Role** section, choose at least **etcd** and **Control Plane**. We recommend selecting all three. -1. Optional: If you click **Show advanced options,** you can customize the settings for the [Rancher agent]({{< baseurl >}}/rancher/v2.x/en/admin-settings/agent-options/) and [node labels.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) +1. Optional: If you click **Show advanced options,** you can customize the settings for the [Rancher agent]({{}}/rancher/v2.x/en/admin-settings/agent-options/) and [node labels.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) 1. Copy the command displayed on the screen to your clipboard. @@ -188,7 +193,7 @@ After the initial provisioning of your custom cluster, your cluster only has a s 1. From the **Global** view, click **Clusters.** -1. Go to the custom cluster that you created and click **Ellipsis (...) > Edit.** +1. Go to the custom cluster that you created and click **⋮ > Edit.** 1. Scroll down to **Node Operating System**. Choose **Linux**. @@ -216,7 +221,7 @@ You can add Windows hosts to a custom cluster by editing the cluster and choosin 1. From the **Global** view, click **Clusters.** -1. Go to the custom cluster that you created and click **Ellipsis (...) > Edit.** +1. 
Go to the custom cluster that you created and click **⋮ > Edit.** 1. Scroll down to **Node Operating System**. Choose **Windows**. Note: You will see that the **worker** role is the only available role. @@ -239,11 +244,11 @@ After creating your cluster, you can access it through the Rancher UI. As a best # Configuration for Storage Classes in Azure -If you are using Azure VMs for your nodes, you can use [Azure files](https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv) as a [storage class]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/#adding-storage-classes) for the cluster. +If you are using Azure VMs for your nodes, you can use [Azure files](https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv) as a [storage class]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/#adding-storage-classes) for the cluster. In order to have the Azure platform create the required storage resources, follow these steps: -1. [Configure the Azure cloud provider.]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/#azure) +1. [Configure the Azure cloud provider.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/#azure) 1. Configure `kubectl` to connect to your cluster. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/docs-for-2.1-and-2.2/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/docs-for-2.1-and-2.2/_index.md index e9986f6abae..988427179b4 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/docs-for-2.1-and-2.2/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/docs-for-2.1-and-2.2/_index.md @@ -5,9 +5,9 @@ weight: 9100 _Available from v2.1.0 to v2.1.9 and v2.2.0 to v2.2.3_ -This section describes how to provision Windows clusters in Rancher v2.1.x and v2.2.x. If you are using Rancher v2.3.0 or later, please refer to the new documentation for [v2.3.0 or later]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/). +This section describes how to provision Windows clusters in Rancher v2.1.x and v2.2.x. If you are using Rancher v2.3.0 or later, please refer to the new documentation for [v2.3.0 or later]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/). -When you create a [custom cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/), Rancher uses RKE (the Rancher Kubernetes Engine) to provision the Kubernetes cluster on your existing infrastructure. +When you create a [custom cluster]({{}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/), Rancher uses RKE (the Rancher Kubernetes Engine) to provision the Kubernetes cluster on your existing infrastructure. You can provision a custom Windows cluster using Rancher by using a mix of Linux and Windows hosts as your cluster nodes. @@ -43,23 +43,23 @@ When setting up a custom cluster with support for Windows nodes and containers, ## 1. Provision Hosts -To begin provisioning a custom cluster with Windows support, prepare your host servers. Provision three nodes according to our [requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/)—two Linux, one Windows. Your hosts can be: +To begin provisioning a custom cluster with Windows support, prepare your host servers. Provision three nodes according to our [requirements]({{}}/rancher/v2.x/en/installation/requirements/)—two Linux, one Windows. 
Your hosts can be: - Cloud-hosted VMs - VMs from virtualization clusters - Bare-metal servers -The table below lists the [Kubernetes roles]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) you'll assign to each host, although you won't enable these roles until further along in the configuration process—we're just informing you of each node's purpose. The first node, a Linux host, is primarily responsible for managing the Kubernetes control plane, although, in this use case, we're installing all three roles on this node. Node 2 is also a Linux worker, which is responsible for Ingress support. Finally, the third node is your Windows worker, which will run your Windows applications. +The table below lists the [Kubernetes roles]({{}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) you'll assign to each host, although you won't enable these roles until further along in the configuration process—we're just informing you of each node's purpose. The first node, a Linux host, is primarily responsible for managing the Kubernetes control plane, although, in this use case, we're installing all three roles on this node. Node 2 is also a Linux worker, which is responsible for Ingress support. Finally, the third node is your Windows worker, which will run your Windows applications. Node | Operating System | Future Cluster Role(s) --------|------------------|------ -Node 1 | Linux (Ubuntu Server 16.04 recommended) | [Control Plane]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#control-plane-nodes), [etcd]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#etcd), [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) -Node 2 | Linux (Ubuntu Server 16.04 recommended) | [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) (This node is used for Ingress support) -Node 3 | Windows (Windows Server core version 1809 or above) | [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) +Node 1 | Linux (Ubuntu Server 16.04 recommended) | [Control Plane]({{}}/rancher/v2.x/en/cluster-provisioning/#control-plane-nodes), [etcd]({{}}/rancher/v2.x/en/cluster-provisioning/#etcd), [Worker]({{}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) +Node 2 | Linux (Ubuntu Server 16.04 recommended) | [Worker]({{}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) (This node is used for Ingress support) +Node 3 | Windows (Windows Server core version 1809 or above) | [Worker]({{}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) ### Requirements -- You can view node requirements for Linux and Windows nodes in the [installation section]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/). +- You can view node requirements for Linux and Windows nodes in the [installation section]({{}}/rancher/v2.x/en/installation/requirements/). - All nodes in a virtualization cluster or a bare metal cluster must be connected using a layer 2 network. - To support [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/), your cluster must include at least one Linux node dedicated to the worker role. - Although we recommend the three node architecture listed in the table above, you can add additional Linux and Windows workers to scale up your cluster for redundancy. @@ -79,20 +79,20 @@ Azure VM | [Enable or Disable IP Forwarding](https://docs.microsoft.com/en-us/az ## 3. 
Create the Custom Cluster -To create a custom cluster that supports Windows nodes, follow the instructions in [Creating a Cluster with Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster), starting from [2. Create the Custom Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster). While completing the linked instructions, look for steps that requires special actions for Windows nodes, which are flagged with a note. These notes will link back here, to the special Windows instructions listed in the subheadings below. +To create a custom cluster that supports Windows nodes, follow the instructions in [Creating a Cluster with Custom Nodes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster), starting from [2. Create the Custom Cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster). While completing the linked instructions, look for steps that require special actions for Windows nodes, which are flagged with a note. These notes will link back here, to the special Windows instructions listed in the subheadings below. ### Enable the Windows Support Option While choosing **Cluster Options**, set **Windows Support (Experimental)** to **Enabled**. -After you select this option, resume [Creating a Cluster with Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#create-the-custom-cluster) from [step 6]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#step-6). +After you select this option, resume [Creating a Cluster with Custom Nodes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#create-the-custom-cluster) from [step 6]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#step-6). ### Networking Option When choosing a network provider for a cluster that supports Windows, the only option available is Flannel, as [host-gw](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw) is needed for IP routing. -If your nodes are hosted by a cloud provider and you want automation support such as load balancers or persistent storage devices, see [Selecting Cloud Providers]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers) for configuration info. +If your nodes are hosted by a cloud provider and you want automation support such as load balancers or persistent storage devices, see [Selecting Cloud Providers]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers) for configuration info. ### Node Configuration @@ -103,7 +103,7 @@ Option | Setting Node Operating System | Linux Node Roles | etcd
Control Plane
Worker -When you're done with these configurations, resume [Creating a Cluster with Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#create-the-custom-cluster) from [step 8]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#step-8). +When you're done with these configurations, resume [Creating a Cluster with Custom Nodes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#create-the-custom-cluster) from [step 8]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#step-8). diff --git a/content/rancher/v2.x/en/contributing/_index.md b/content/rancher/v2.x/en/contributing/_index.md index 3965c2e7783..1cbf8bd694e 100644 --- a/content/rancher/v2.x/en/contributing/_index.md +++ b/content/rancher/v2.x/en/contributing/_index.md @@ -38,7 +38,7 @@ loglevel repository | https://github.com/rancher/loglevel | This repository is t To see all libraries/projects used in Rancher, see the [`go.mod` file](https://github.com/rancher/rancher/blob/master/go.mod) in the `rancher/rancher` repository. -![Rancher diagram]({{< baseurl >}}/img/rancher/ranchercomponentsdiagram.svg)
+![Rancher diagram]({{}}/img/rancher/ranchercomponentsdiagram.svg)
Rancher components used for provisioning/managing Kubernetes clusters. # Building diff --git a/content/rancher/v2.x/en/faq/networking/_index.md b/content/rancher/v2.x/en/faq/networking/_index.md index ef4a030f7a8..863ad97169d 100644 --- a/content/rancher/v2.x/en/faq/networking/_index.md +++ b/content/rancher/v2.x/en/faq/networking/_index.md @@ -5,5 +5,5 @@ weight: 8005 Networking FAQ's -- [CNI Providers]({{< baseurl >}}/rancher/v2.x/en/faq/networking/cni-providers/) +- [CNI Providers]({{}}/rancher/v2.x/en/faq/networking/cni-providers/) diff --git a/content/rancher/v2.x/en/faq/networking/cni-providers/_index.md b/content/rancher/v2.x/en/faq/networking/cni-providers/_index.md index 08ae7cf4f70..ec07fe5018d 100644 --- a/content/rancher/v2.x/en/faq/networking/cni-providers/_index.md +++ b/content/rancher/v2.x/en/faq/networking/cni-providers/_index.md @@ -10,7 +10,7 @@ CNI (Container Network Interface), a [Cloud Native Computing Foundation project] Kubernetes uses CNI as an interface between network providers and Kubernetes pod networking. -![CNI Logo]({{< baseurl >}}/img/rancher/cni-logo.png) +![CNI Logo]({{}}/img/rancher/cni-logo.png) For more information visit [CNI GitHub project](https://github.com/containernetworking/cni). @@ -28,7 +28,7 @@ This network model is used when an extended L2 bridge is preferred. This network CNI network providers using this network model include Flannel, Canal, and Weave. -![Encapsulated Network]({{< baseurl >}}/img/rancher/encapsulated-network.png) +![Encapsulated Network]({{}}/img/rancher/encapsulated-network.png) #### What is an Unencapsulated Network? @@ -40,7 +40,7 @@ This network model is used when a routed L3 network is preferred. This mode dyna CNI network providers using this network model include Calico and Romana. -![Unencapsulated Network]({{< baseurl >}}/img/rancher/unencapsulated-network.png) +![Unencapsulated Network]({{}}/img/rancher/unencapsulated-network.png) ### What CNI Providers are Provided by Rancher? @@ -48,7 +48,7 @@ Out-of-the-box, Rancher provides the following CNI network providers for Kuberne #### Canal -![Canal Logo]({{< baseurl >}}/img/rancher/canal-logo.png) +![Canal Logo]({{}}/img/rancher/canal-logo.png) Canal is a CNI network provider that gives you the best of Flannel and Calico. It allows users to easily deploy Calico and Flannel networking together as a unified networking solution, combining Calico’s network policy enforcement with the rich superset of Calico (unencapsulated) and/or Flannel (encapsulated) network connectivity options. @@ -62,7 +62,7 @@ For more information, see the [Canal GitHub Page.](https://github.com/projectcal #### Flannel -![Flannel Logo]({{< baseurl >}}/img/rancher/flannel-logo.png) +![Flannel Logo]({{}}/img/rancher/flannel-logo.png) Flannel is a simple and easy way to configure L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms, with the default encapsulation being [VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan). @@ -70,13 +70,13 @@ Encapsulated traffic is unencrypted by default. 
Therefore, flannel provides an e Kubernetes workers should open UDP port `8472` (VXLAN) and TCP port `9099` (healthcheck). See [the port requirements for user clusters]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#networking-requirements/) for more details. -![Flannel Diagram]({{< baseurl >}}/img/rancher/flannel-diagram.png) +![Flannel Diagram]({{}}/img/rancher/flannel-diagram.png) For more information, see the [Flannel GitHub Page](https://github.com/coreos/flannel). #### Calico -![Calico Logo]({{< baseurl >}}/img/rancher/calico-logo.png) +![Calico Logo]({{}}/img/rancher/calico-logo.png) Calico enables networking and network policy in Kubernetes clusters across the cloud. Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads are able to communicate over both cloud infrastructure and on-premise using BGP. @@ -84,7 +84,7 @@ Calico also provides a stateless IP-in-IP encapsulation mode that can be used, i Kubernetes workers should open TCP port `179` (BGP). See [the port requirements for user clusters]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#networking-requirements/) for more details. -![Calico Diagram]({{< baseurl >}}/img/rancher/calico-diagram.svg) +![Calico Diagram]({{}}/img/rancher/calico-diagram.svg) For more information, see the following pages: @@ -94,7 +94,7 @@ For more information, see the following pages: #### Weave -![Weave Logo]({{< baseurl >}}/img/rancher/weave-logo.png) +![Weave Logo]({{}}/img/rancher/weave-logo.png) _Available as of v2.2.0_ @@ -151,4 +151,4 @@ As of Rancher v2.0.7, Canal is the default CNI network provider. We recommend it ### How can I configure a CNI network provider? -Please see [Cluster Options]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/) on how to configure a network provider for your cluster. For more advanced configuration options, please see how to configure your cluster using a [Config File]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) and the options for [Network Plug-ins]({{< baseurl >}}/rke/latest/en/config-options/add-ons/network-plugins/). +Please see [Cluster Options]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/) on how to configure a network provider for your cluster. For more advanced configuration options, please see how to configure your cluster using a [Config File]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) and the options for [Network Plug-ins]({{}}/rke/latest/en/config-options/add-ons/network-plugins/). diff --git a/content/rancher/v2.x/en/faq/removing-rancher/_index.md b/content/rancher/v2.x/en/faq/removing-rancher/_index.md index 01b53b46358..897fdcfce49 100644 --- a/content/rancher/v2.x/en/faq/removing-rancher/_index.md +++ b/content/rancher/v2.x/en/faq/removing-rancher/_index.md @@ -6,7 +6,6 @@ aliases: - /rancher/v2.x/en/installation/removing-rancher/ - /rancher/v2.x/en/admin-settings/removing-rancher/ - /rancher/v2.x/en/admin-settings/removing-rancher/rancher-cluster-nodes/ - - /rancher/v2.x/en/removing-rancher/ --- This page is intended to answer questions about what happens if you don't want Rancher anymore, if you don't want a cluster to be managed by Rancher anymore, or if the Rancher server is deleted. @@ -44,7 +43,7 @@ If an imported cluster is deleted from the Rancher UI, the cluster is detached f To detach the cluster, 1. 
From the **Global** view in Rancher, go to the **Clusters** tab. -2. Go to the imported cluster that should be detached from Rancher and click **Ellipsis (...) > Delete.** +2. Go to the imported cluster that should be detached from Rancher and click **⋮ > Delete.** 3. Click **Delete.** **Result:** The imported cluster is detached from Rancher and functions normally outside of Rancher. @@ -55,4 +54,4 @@ At this time, there is no functionality to detach these clusters from Rancher. I The capability to manage these clusters without Rancher is being tracked in this [issue.](https://github.com/rancher/rancher/issues/25234) -For information about how to access clusters if the Rancher server is deleted, refer to [this section.](#if-the-rancher-server-is-deleted-how-do-i-access-my-downstream-clusters) \ No newline at end of file +For information about how to access clusters if the Rancher server is deleted, refer to [this section.](#if-the-rancher-server-is-deleted-how-do-i-access-my-downstream-clusters) diff --git a/content/rancher/v2.x/en/faq/security/_index.md b/content/rancher/v2.x/en/faq/security/_index.md index 733b79dbf05..f9d6ec86452 100644 --- a/content/rancher/v2.x/en/faq/security/_index.md +++ b/content/rancher/v2.x/en/faq/security/_index.md @@ -6,10 +6,10 @@ weight: 8007 **Is there a Hardening Guide?** -The Hardening Guide is now located in the main [Security]({{< baseurl >}}/rancher/v2.x/en/security/) section. +The Hardening Guide is now located in the main [Security]({{}}/rancher/v2.x/en/security/) section.
**What are the results of Rancher's Kubernetes cluster when it is CIS benchmarked?** -We have run the CIS Kubernetes benchmark against a hardened Rancher Kubernetes cluster. The results of that assessment can be found in the main [Security]({{< baseurl >}}/rancher/v2.x/en/security/) section. +We have run the CIS Kubernetes benchmark against a hardened Rancher Kubernetes cluster. The results of that assessment can be found in the main [Security]({{}}/rancher/v2.x/en/security/) section. diff --git a/content/rancher/v2.x/en/faq/technical/_index.md b/content/rancher/v2.x/en/faq/technical/_index.md index e901475ca57..1151e35489c 100644 --- a/content/rancher/v2.x/en/faq/technical/_index.md +++ b/content/rancher/v2.x/en/faq/technical/_index.md @@ -56,55 +56,7 @@ New password for default admin user (user-xxxxx): ### How can I enable debug logging? -* Docker Install - * Enable -``` -$ docker exec -ti loglevel --set debug -OK -$ docker logs -f -``` - - * Disable -``` -$ docker exec -ti loglevel --set info -OK -``` - -* Kubernetes install (Helm) - * Enable -``` -$ KUBECONFIG=./kube_config_rancher-cluster.yml -$ kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | awk '{ print $1 }' | xargs -I{} kubectl --kubeconfig $KUBECONFIG -n cattle-system exec {} -- loglevel --set debug -OK -OK -OK -$ kubectl --kubeconfig $KUBECONFIG -n cattle-system logs -l app=rancher -``` - - * Disable -``` -$ KUBECONFIG=./kube_config_rancher-cluster.yml -$ kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | awk '{ print $1 }' | xargs -I{} kubectl --kubeconfig $KUBECONFIG -n cattle-system exec {} -- loglevel --set info -OK -OK -OK -``` - -* Kubernetes install (RKE add-on) - * Enable -``` -$ KUBECONFIG=./kube_config_rancher-cluster.yml -$ kubectl --kubeconfig $KUBECONFIG exec -n cattle-system $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name=="cattle-server") | .metadata.name') -- loglevel --set debug -OK -$ kubectl --kubeconfig $KUBECONFIG logs -n cattle-system -f $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name="cattle-server") | .metadata.name') -``` - - * Disable -``` -$ KUBECONFIG=./kube_config_rancher-cluster.yml -$ kubectl --kubeconfig $KUBECONFIG exec -n cattle-system $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name=="cattle-server") | .metadata.name') -- loglevel --set info -OK -``` +See [Troubleshooting: Logging]({{}}/rancher/v2.x/en/troubleshooting/logging/) ### My ClusterIP does not respond to ping @@ -116,7 +68,7 @@ Node Templates can be accessed by opening your account menu (top right) and sele ### Why is my Layer-4 Load Balancer in `Pending` state? -The Layer-4 Load Balancer is created as `type: LoadBalancer`. In Kubernetes, this needs a cloud provider or controller that can satisfy these requests, otherwise these will be in `Pending` state forever. More information can be found on [Cloud Providers]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) or [Create External Load Balancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/) +The Layer-4 Load Balancer is created as `type: LoadBalancer`. In Kubernetes, this needs a cloud provider or controller that can satisfy these requests, otherwise these will be in `Pending` state forever. 
More information can be found on [Cloud Providers]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) or [Create External Load Balancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/) ### Where is the state of Rancher stored? @@ -129,9 +81,9 @@ We follow the validated Docker versions for upstream Kubernetes releases. The va ### How can I access nodes created by Rancher? -SSH keys to access the nodes created by Rancher can be downloaded via the **Nodes** view. Choose the node which you want to access and click on the vertical ellipsis button at the end of the row, and choose **Download Keys** as shown in the picture below. +SSH keys to access the nodes created by Rancher can be downloaded via the **Nodes** view. Choose the node which you want to access and click on the vertical ⋮ button at the end of the row, and choose **Download Keys** as shown in the picture below. -![Download Keys]({{< baseurl >}}/img/rancher/downloadsshkeys.png) +![Download Keys]({{}}/img/rancher/downloadsshkeys.png) Unzip the downloaded zip file, and use the file `id_rsa` to connect to you host. Be sure to use the correct username (`rancher` or `docker` for RancherOS, `ubuntu` for Ubuntu, `ec2-user` for Amazon Linux) @@ -150,13 +102,13 @@ The UI consists of static files, and works based on responses of the API. That m A node is required to have a static IP configured (or a reserved IP via DHCP). If the IP of a node has changed, you will have to remove it from the cluster and readd it. After it is removed, Rancher will update the cluster to the correct state. If the cluster is no longer in `Provisioning` state, the node is removed from the cluster. -When the IP address of the node changed, Rancher lost connection to the node, so it will be unable to clean the node properly. See [Cleaning cluster nodes]({{< baseurl >}}/rancher/v2.x/en/faq/cleaning-cluster-nodes/) to clean the node. +When the IP address of the node changed, Rancher lost connection to the node, so it will be unable to clean the node properly. See [Cleaning cluster nodes]({{}}/rancher/v2.x/en/faq/cleaning-cluster-nodes/) to clean the node. When the node is removed from the cluster, and the node is cleaned, you can readd the node to the cluster. ### How can I add additional arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? -You can add additional arguments/binds/environment variables via the [Config File]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables]({{< baseurl >}}/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls]({{< baseurl >}}/rke/latest/en/example-yamls/). +You can add additional arguments/binds/environment variables via the [Config File]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables]({{}}/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls]({{}}/rke/latest/en/example-yamls/). ### How do I check if my certificate chain is valid? 
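Purely as a generic illustration of one way to sanity-check a chain with `openssl` (the file names and hostname below are placeholders and this is not the documentation's own answer):

```
# Check that the server certificate validates against the CA / intermediate bundle
openssl verify -CAfile ca.pem rancher.example.com.pem

# Inspect the chain actually presented by a running endpoint
openssl s_client -connect rancher.example.com:443 -showcerts </dev/null
```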
diff --git a/content/rancher/v2.x/en/installation/_index.md b/content/rancher/v2.x/en/installation/_index.md index 234d9457590..eb463283c70 100644 --- a/content/rancher/v2.x/en/installation/_index.md +++ b/content/rancher/v2.x/en/installation/_index.md @@ -2,6 +2,8 @@ title: Installing Rancher description: Learn how to install Rancher in development and production environments. Read about single node and high availability installation weight: 50 +aliases: + - /rancher/v2.x/en/installation/how-ha-works/ --- This section provides an overview of the architecture options of installing Rancher, describing advantages of each option. @@ -10,36 +12,39 @@ This section provides an overview of the architecture options of installing Ranc In this section, -**The Rancher server** manages and provisions Kubernetes clusters. You can interact with downstream Kubernetes clusters through the Rancher server's user interface. - -**RKE (Rancher Kubernetes Engine)** is a certified Kubernetes distribution and CLI/library which creates and manages a Kubernetes cluster. When you create a cluster in the Rancher UI, it calls RKE as a library to provision Rancher-launched Kubernetes clusters. +- **The Rancher server** manages and provisions Kubernetes clusters. You can interact with downstream Kubernetes clusters through the Rancher server's user interface. +- **RKE (Rancher Kubernetes Engine)** is a certified Kubernetes distribution and CLI/library which creates and manages a Kubernetes cluster. +- **K3s (5 less than K8s)** is also a fully compliant Kubernetes distribution. It is newer than RKE, easier to use, and more lightweight, with a binary size of less than 50 MB. As of Rancher v2.4, Rancher can be installed on a K3s cluster. ### Overview of Installation Options -If you use Rancher to deploy Kubernetes clusters, it is important to ensure that the Rancher server doesn't fail, because if it goes down, you could lose access to the Kubernetes clusters that are managed by Rancher. For that reason, we recommend that for a production-grade architecture, you should set up a Kubernetes cluster with RKE, then install Rancher on it. After Rancher is installed, you can use Rancher to deploy and manage Kubernetes clusters. - -For testing or demonstration purposes, you can install Rancher in single Docker container. In this installation, you can use Rancher to set up Kubernetes clusters out-of-the-box. - -Our [instructions for installing Rancher on Kubernetes]({{}}/rancher/v2.x/en/installation/k8s-install) describe how to first use RKE to create and manage a cluster, then install Rancher onto that cluster. For this type of architecture, you will need to deploy three nodes - typically virtual machines - in the infrastructure provider of your choice. You will also need to configure a load balancer to direct front-end traffic to the three nodes. When the nodes are running and fulfill the [node requirements,]({{}}/rancher/v2.x/en/installation/requirements) you can use RKE to deploy Kubernetes onto them, then use Helm to deploy Rancher onto Kubernetes. 
- -For a longer discussion of Rancher architecture, refer to the [architecture overview,]({{}}/rancher/v2.x/en/overview/architecture) [recommendations for production-grade architecture,]({{}}/rancher/v2.x/en/overview/architecture-recommendations) or our [best practices guide.]({{}}/rancher/v2.x/en/best-practices/deployment-types) - Rancher can be installed on these main architectures: -- **High-availability Kubernetes Install:** We recommend using [Helm,]({{}}/rancher/v2.x/en/overview/concepts/#about-helm) a Kubernetes package manager, to install Rancher on a dedicated Kubernetes cluster. We recommend using three nodes in the cluster because increased availability is achieved by running Rancher on multiple nodes. +- **High-availability Kubernetes Install:** We recommend using [Helm,]({{}}/rancher/v2.x/en/overview/concepts/#about-helm) a Kubernetes package manager, to install Rancher on multiple nodes on a dedicated Kubernetes cluster. For RKE clusters, three nodes are required to achieve a high-availability cluster. For K3s clusters, only two nodes are required. - **Single-node Kubernetes Install:** Another option is to install Rancher with Helm on a Kubernetes cluster, but to only use a single node in the cluster. In this case, the Rancher server doesn't have high availability, which is important for running Rancher in production. However, this option is useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path. In the future, you can add nodes to the cluster to get a high-availability Rancher server. - **Docker Install:** For test and demonstration purposes, Rancher can be installed with Docker on a single node. This installation works out-of-the-box, but there is no migration path from a Docker installation to a high-availability installation on a Kubernetes cluster. Therefore, you may want to use a Kubernetes installation from the start. -The single-node Kubernetes install is achieved by describing only one node in the `cluster.yml` when provisioning the Kubernetes cluster with RKE. The single node should have all three roles: `etcd`, `controlplane`, and `worker`. Then Rancher can be installed with Helm on the cluster in the same way that it would be installed on any other cluster. 
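For illustration, a minimal sketch of the single-node `cluster.yml` described above might look like the following; the node address and SSH user are placeholders, and a real cluster will usually need more options:

```
# Hypothetical single-node cluster.yml: one host carrying all three roles
cat > cluster.yml <<'EOF'
nodes:
  - address: 203.0.113.10              # placeholder IP of the single node
    user: ubuntu                       # SSH user that can run Docker on that node
    role: [controlplane, worker, etcd]
EOF

# Provision Kubernetes on that node, then install Rancher with Helm as usual
rke up --config cluster.yml
```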
There are also separate instructions for installing Rancher in an air gap environment or behind an HTTP proxy: | Level of Internet Access | Kubernetes Installation - Strongly Recommended | Docker Installation | | ---------------------------------- | ------------------------------ | ---------- | | With direct access to the Internet | [Docs]({{}}/rancher/v2.x/en/installation/k8s-install/) | [Docs]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) | -| Behind an HTTP proxy | These [docs,]({{}}/rancher/v2.x/en/installation/k8s-install/) plus this [configuration]({{}}/rancher/v2.x/en/installation/options/chart-options/#http-proxy) | These [docs,]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node) plus this [configuration]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node/proxy/) | +| Behind an HTTP proxy | These [docs,]({{}}/rancher/v2.x/en/installation/k8s-install/) plus this [configuration]({{}}/rancher/v2.x/en/installation/options/chart-options/#http-proxy) | These [docs,]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) plus this [configuration]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/proxy/) | | In an air gap environment | [Docs]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap) | [Docs]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap) | +We recommend installing Rancher on a Kubernetes cluster, because in a multi-node cluster, the Rancher management server becomes highly available. This high-availability configuration helps maintain consistent access to the downstream Kubernetes clusters that Rancher will manage. + +For that reason, we recommend that for a production-grade architecture, you should set up a high-availability Kubernetes cluster using either RKE or K3s, then install Rancher on it. After Rancher is installed, you can use Rancher to deploy and manage Kubernetes clusters. + +For testing or demonstration purposes, you can install Rancher in single Docker container. In this Docker install, you can use Rancher to set up Kubernetes clusters out-of-the-box. + +Our [instructions for installing Rancher on Kubernetes]({{}}/rancher/v2.x/en/installation/k8s-install) describe how to first use K3s or RKE to create and manage a Kubernetes cluster, then install Rancher onto that cluster. + +When the nodes in your Kubernetes cluster are running and fulfill the [node requirements,]({{}}/rancher/v2.x/en/installation/requirements) you will use Helm to deploy Rancher onto Kubernetes. Helm uses Rancher's Helm chart to install a replica of Rancher on each node in the Kubernetes cluster. We recommend using a load balancer to direct traffic to each replica of Rancher in the cluster. 
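As a rough sketch of that Helm-based install once the cluster, DNS name, and load balancer exist (the hostname below is a placeholder, and the TLS/cert-manager choices covered by the chart options still apply):

```
# Add the Rancher chart repository and the namespace the chart expects
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system

# Install Rancher; its replicas are scheduled across the cluster nodes,
# and the load balancer / Ingress routes traffic for this hostname to them
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com
```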
+ +For a longer discussion of Rancher architecture, refer to the [architecture overview,]({{}}/rancher/v2.x/en/overview/architecture) [recommendations for production-grade architecture,]({{}}/rancher/v2.x/en/overview/architecture-recommendations) or our [best practices guide.]({{}}/rancher/v2.x/en/best-practices/deployment-types) + ### Prerequisites Before installing Rancher, make sure that your nodes fulfill all of the [installation requirements.]({{}}/rancher/v2.x/en/installation/requirements/) @@ -57,12 +62,15 @@ Refer to the [Helm chart options]({{}}/rancher/v2.x/en/installation/opt - With [TLS termination on a load balancer]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination) - With a [custom Ingress]({{}}/rancher/v2.x/en/installation/options/chart-options/#customizing-your-ingress) -In the Rancher installation instructions, we recommend using RKE (Rancher Kubernetes Engine) to set up a Kubernetes cluster before installing Rancher on the cluster. RKE has many configuration options for customizing the Kubernetes cluster to suit your specific environment. Please see the [RKE Documentation]({{}}/rke/latest/en/config-options/) for the full list of options and capabilities. +In the Rancher installation instructions, we recommend using K3s or RKE to set up a Kubernetes cluster before installing Rancher on the cluster. Both K3s and RKE have many configuration options for customizing the Kubernetes cluster to suit your specific environment. For the full list of their capabilities, refer to their documentation: + +- [RKE configuration options]({{}}/rke/latest/en/config-options/) +- [K3s configuration options]({{}}/k3s/latest/en/installation/install-options/) ### More Options for Installations with Docker -Refer to the [Docker installation docs]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) for details other configurations including: +Refer to the [docs about options for Docker installs]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) for details about other configurations including: - With [API auditing to record all transactions]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/#api-audit-log) -- With an [external load balancer]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-install-external-lb/) +- With an [external load balancer]({{}}/rancher/v2.x/en/installation/options/single-node-install-external-lb/) - With a [persistent data store]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/#persistent-data) diff --git a/content/rancher/v2.x/en/installation/k8s-install/_index.md b/content/rancher/v2.x/en/installation/k8s-install/_index.md index 36d6949e90d..4a51dbf90b3 100644 --- a/content/rancher/v2.x/en/installation/k8s-install/_index.md +++ b/content/rancher/v2.x/en/installation/k8s-install/_index.md @@ -8,7 +8,7 @@ aliases: For production environments, we recommend installing Rancher in a high-availability configuration so that your user base can always access Rancher Server. When installed in a Kubernetes cluster, Rancher will integrate with the cluster's etcd database and take advantage of Kubernetes scheduling for high-availability. -This section describes how to first use RKE to create and manage a cluster, then install Rancher onto that cluster. For this type of architecture, you will need to deploy three VMs in the infrastructure provider of your choice. 
You will also need to configure a load balancer to direct front-end traffic to the three VMs. When the VMs are running and fulfill the [node requirements,]({{}}/rancher/v2.x/en/installation/requirements) you can use RKE to deploy Kubernetes onto them, then use the Helm package manager to deploy Rancher onto Kubernetes. +This section describes how to create and manage a Kubernetes cluster, then install Rancher onto that cluster. For this type of architecture, you will need to deploy nodes - typically virtual machines - in the infrastructure provider of your choice. You will also need to configure a load balancer to direct front-end traffic to the three VMs. When the VMs are running and fulfill the [node requirements,]({{}}/rancher/v2.x/en/installation/requirements) you can use RKE or K3s to deploy Kubernetes onto them, then use the Helm package manager to deploy Rancher onto Kubernetes. ### Optional: Installing Rancher on a Single-node Kubernetes Cluster @@ -16,37 +16,24 @@ If you only have one node, but you want to use the Rancher server in production One option is to install Rancher with Helm on a Kubernetes cluster, but to only use a single node in the cluster. In this case, the Rancher server does not have high availability, which is important for running Rancher in production. However, this option is useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path. In the future, you can add nodes to the cluster to get a high-availability Rancher server. -The single-node Kubernetes install can be achieved by describing only one node in the `cluster.yml` when provisioning the Kubernetes cluster with RKE. The single node would have all three roles: `etcd`, `controlplane`, and `worker`. Then Rancher would be installed with Helm on the cluster in the same way that it would be installed on any other cluster. +To set up a single-node RKE cluster, configure only one node in the `cluster.yml` . The single node should have all three roles: `etcd`, `controlplane`, and `worker`. + +To set up a single-node K3s cluster, run the Rancher server installation command on just one node instead of two nodes. + +In both single-node Kubernetes setups, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster. ### Important Notes on Architecture -The Rancher management server can only be run on an RKE-managed Kubernetes cluster. Use of Rancher on hosted Kubernetes or other providers is not supported. +The Rancher management server can only be run on Kubernetes cluster in an infrastructure provider where Kubernetes is installed using K3s or RKE. Use of Rancher on hosted Kubernetes providers, such as EKS, is not supported. For the best performance and security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads. -We recommend the following architecture and configurations for the load balancer and Ingress controllers: - -- DNS for Rancher should resolve to a Layer 4 load balancer (TCP) -- The Load Balancer should forward port TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster. -- The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443. 
-- The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment. - -For more information on how a Kubernetes Installation works, refer to [this page.]({{}}/rancher/v2.x/en/installation/how-ha-works) - For information on how Rancher works, regardless of the installation method, refer to the [architecture section.]({{}}/rancher/v2.x/en/overview/architecture) -## Required CLI Tools - -The following CLI tools are required for this install. Please make sure these tools are installed and available in your `$PATH` - -- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) - Kubernetes command-line tool. -- [rke]({{}}/rke/latest/en/installation/) - Rancher Kubernetes Engine, cli for building Kubernetes clusters. -- [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes. Refer to the [Helm version requirements]({{}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher. - ## Installation Outline -- [Create Nodes and Load Balancer]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/) -- [Install Kubernetes with RKE]({{}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/) +- [Set up Infrastructure]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/) +- [Set up a Kubernetes Cluster]({{}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/) - [Install Rancher]({{}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/) ## Additional Install Options diff --git a/content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/_index.md b/content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/_index.md index 97c3e200657..0dfb09491d2 100644 --- a/content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/_index.md +++ b/content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/_index.md @@ -1,32 +1,130 @@ --- -title: '1. Create Nodes and Load Balancer' +title: '1. Set up Infrastructure' weight: 185 aliases: - /rancher/v2.x/en/installation/ha/create-nodes-lb --- -Use your infrastructure provider of choice to provision three nodes and a load balancer endpoint for your RKE install. +In this section, you will provision the underlying infrastructure for your Rancher management server. -> **Note:** These nodes must be in the same region/datacenter. You may place these servers in separate availability zones. +The recommended infrastructure for the Rancher-only Kubernetes cluster differs depending on whether Rancher will be installed on a K3s Kubernetes cluster, an RKE Kubernetes cluster, or a single Docker container. -### Requirements for OS, Docker, Hardware, and Networking +For more information about each installation option, refer to [this page.]({{}}/rancher/v2.x/en/installation) -Make sure that your nodes fulfill the general [installation requirements.]({{}}/rancher/v2.x/en/installation/requirements/) +> **Note:** These nodes must be in the same region. You may place these servers in separate availability zones (datacenter). -View the OS requirements for RKE at [RKE Requirements.]({{}}/rke/latest/en/os/) +{{% tabs %}} +{{% tab "K3s" %}} +To install the Rancher management server on a high-availability K3s cluster, we recommend setting up the following infrastructure: -### Load Balancer +- **Two Linux nodes,** typically virtual machines, in the infrastructure provider of your choice. +- **An external database** to store the cluster data. We recommend MySQL. 
+- **A load balancer** to direct traffic to the two nodes. +- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it. -RKE will configure an Ingress controller pod, on each of your nodes. The Ingress controller pods are bound to ports TCP/80 and TCP/443 on the host network and are the entry point for HTTPS traffic to the Rancher server. +### 1. Set up Linux Nodes -Configure a load balancer as a basic Layer 4 TCP forwarder. The exact configuration will vary depending on your environment. +Make sure that your nodes fulfill the general installation requirements for [OS, Docker, hardware, and networking.]({{}}/rancher/v2.x/en/installation/requirements/) + +For an example of one way to set up Linux nodes, refer to this [tutorial]({{}}/rancher/v2.x/en/installation/options/ec2-node) for setting up nodes as instances in Amazon EC2. + +### 2. Set up External Datastore + +The ability to run Kubernetes using a datastore other than etcd sets K3s apart from other Kubernetes distributions. This feature provides flexibility to Kubernetes operators. The available options allow you to select a datastore that best fits your use case. + +For a high-availability K3s installation, you will need to set a [MySQL](https://www.mysql.com/) external database. Rancher has been tested on K3s Kubernetes clusters using MySQL version 5.7 as the datastore. + +When you install Kubernetes using the K3s installation script, you will pass in details for K3s to connect to the database. + +For an example of one way to set up the MySQL database, refer to this [tutorial]({{}}/rancher/v2.x/en/installation/options/rds/) for setting up MySQL on Amazon's RDS service. + +For the complete list of options that are available for configuring a K3s cluster datastore, refer to the [K3s documentation.]({{}}/k3s/latest/en/installation/datastore/) + +### 3. Set up the Load Balancer + +You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server. + +When Kubernetes gets set up in a later step, the K3s tool will deploy a Traefik Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames. + +When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the Traefik Ingress controller to listen for traffic destined for the Rancher hostname. The Traefik Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster. + +For your implementation, consider if you want or need to use a Layer-4 or Layer-7 load balancer: + +- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic to port TCP/80 to the Ingress pod in the Rancher deployment. +- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. 
For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination) + +For an example showing how to set up an NGINX load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nginx/) + +For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nlb/) > **Important:** > Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications. -#### How-to Guides +### 4. Set up the DNS Record -- For an example showing how to set up an NGINX load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nginx/) -- For an example showing how to setup an Amazon NLB load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nlb/) +Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer. -### [Next: Install Kubernetes with RKE]({{}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/) +Depending on your environment, this may be an A record pointing to the load balancer IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on. + +You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one. + +For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer) + + + +{{% /tab %}} +{{% tab "RKE" %}} +To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure: + +- **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon's EC2, Google Compute Engine, or vSphere. +- **A load balancer** to direct front-end traffic to the three nodes. +- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it. + +These nodes must be in the same region/data center. You may place these servers in separate availability zones. + +### Why three nodes? + +In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes. 
+ +The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes. + +### 1. Set up Linux Nodes + +Make sure that your nodes fulfill the general installation requirements for [OS, Docker, hardware, and networking.]({{}}/rancher/v2.x/en/installation/requirements/) + +For an example of one way to set up Linux nodes, refer to this [tutorial]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/ec2-node) for setting up nodes as instances in Amazon EC2. + +### 2. Set up the Load Balancer + +You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server. + +When Kubernetes gets set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames. + +When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster. + +For your implementation, consider if you want or need to use a Layer-4 or Layer-7 load balancer: + +- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic to port TCP/80 to the Ingress pod in the Rancher deployment. +- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. 
For more information, refer to the [Rancher Helm chart options.]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination) + +For an example showing how to set up an NGINX load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nginx/) + +For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nlb/) + +> **Important:** +> Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications. + +### 3. Set up the DNS Record + +Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer. + +Depending on your environment, this may be an A record pointing to the LB IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on. + +You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one. + +For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer) + +{{% /tab %}} +{{% /tabs %}} + +### [Next: Set up a Kubernetes Cluster]({{}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/) \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/k8s-install/helm-rancher/_index.md b/content/rancher/v2.x/en/installation/k8s-install/helm-rancher/_index.md index ea0fcda275f..84b002d3f8f 100644 --- a/content/rancher/v2.x/en/installation/k8s-install/helm-rancher/_index.md +++ b/content/rancher/v2.x/en/installation/k8s-install/helm-rancher/_index.md @@ -16,13 +16,29 @@ To choose a Rancher version to install, refer to [Choosing a Rancher Version.]({ To choose a version of Helm to install Rancher with, refer to the [Helm version requirements]({{}}/rancher/v2.x/en/installation/options/helm-version) -> **Note:** The installation instructions assume you are using Helm 3. For migration of installs started with Helm 2, refer to the official [Helm 2 to 3 migration docs.](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) This [section]({{}}/rancher/v2.x/en/installation/options/helm2) provides a copy of the older installation instructions for Rancher installed on Kubernetes with Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible. +> **Note:** The installation instructions assume you are using Helm 3. For migration of installs started with Helm 2, refer to the official [Helm 2 to 3 migration docs.](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) This [section]({{}}/rancher/v2.x/en/installation/options/helm2) provides a copy of the older installation instructions for Rancher installed on an RKE Kubernetes cluster with Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible. -### Install Helm +To set up Rancher, -Helm requires a simple CLI tool to be installed. Refer to the [instructions provided by the Helm project](https://helm.sh/docs/intro/install/) for your specific platform. +1. 
[Install the required CLI tools](#1-install-the-required-cli-tools) +2. [Add the Helm chart repository](#2-add-the-helm-chart-repository) +3. [Create a namespace for Rancher](#3-create-a-namespace-for-rancher) +4. [Choose your SSL configuration](#4-choose-your-ssl-configuration) +5. [Install cert-manager](#5-install-cert-manager) (unless you are bringing your own certificates, or TLS will be terminated on a load balancer) +6. [Install Rancher with Helm and your chosen certificate option](#6-install-rancher-with-helm-and-your-chosen-certificate-option) +7. [Verify that the Rancher server is successfully deployed](#7-verify-that-the-rancher-server-is-successfully-deployed) +8. [Save your options](#8-save-your-options) -### Add the Helm Chart Repository +### 1. Install the Required CLI Tools + +The following CLI tools are required for setting up the Kubernetes cluster. Please make sure these tools are installed and available in your `$PATH`. + +Refer to the [instructions provided by the Helm project](https://helm.sh/docs/intro/install/) for your specific platform. + +- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) - Kubernetes command-line tool. +- [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes. Refer to the [Helm version requirements]({{}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher. + +### 2. Add the Helm Chart Repository Use `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher]({{}}/rancher/v2.x/en/installation/options/server-tags/#helm-chart-repositories). @@ -32,40 +48,42 @@ Use `helm repo add` command to add the Helm chart repository that contains chart helm repo add rancher- https://releases.rancher.com/server-charts/ ``` -### Create a Namespace for Rancher +### 3. Create a Namespace for Rancher -We'll need to define a namespace where the resources created by the Chart should be installed. This should always be `cattle-system`: +We'll need to define a Kubernetes namespace where the resources created by the Chart should be installed. This should always be `cattle-system`: ``` kubectl create namespace cattle-system ``` -### Choose your SSL Configuration +### 4. Choose your SSL Configuration -Rancher Server is designed to be secure by default and requires SSL/TLS configuration. - -There are three recommended options for the source of the certificate. +The Rancher management server is designed to be secure by default and requires SSL/TLS configuration. > **Note:** If you want terminate SSL/TLS externally, see [TLS termination on an External Load Balancer]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination). -| Configuration | Chart option | Description | Requires cert-manager | -| ------------------------------ | -------------------------------- | ------------------------------------------------------------------------------------------- | ------------------------------------- | -| Rancher Generated Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed)
This is the **default** | [yes](#optional-install-cert-manager) | -| Let’s Encrypt | `ingress.tls.source=letsEncrypt` | Use Let's Encrypt to issue a certificate | [yes](#optional-install-cert-manager) | -| Certificates from Files | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s) | no | +There are three recommended options for the source of the certificate used for TLS termination at the Rancher server: -### Optional: Install cert-manager +- **Rancher-generated TLS certificate:** In this case, you will need to install `cert-manager` into the cluster. Rancher utilizes `cert-manager` to issue and maintain its certificates. Rancher will generate a CA certificate of its own, and sign a cert using that CA. `cert-manager` is then responsible for managing that certificate. +- **Let's Encrypt:** The Let's Encrypt option also uses `cert-manager`. However, in this case, cert-manager is combined with a special Issuer for Let's Encrypt that performs all actions (including request and validation) necessary for getting a Let's Encrypt issued cert. This configuration uses HTTP validation (`HTTP-01`), so the load balancer must have a public DNS record and be accessible from the internet. +- **Bring your own certificate:** This option allows you to bring your own public- or private-CA signed certificate. Rancher will use that certificate to secure websocket and HTTPS traffic. In this case, you must upload this certificate (and associated key) as PEM-encoded files with the name `tls.crt` and `tls.key`. If you are using a private CA, you must also upload that certificate. This is due to the fact that this private CA may not be trusted by your nodes. Rancher will take that CA certificate, and generate a checksum from it, which the various Rancher components will use to validate their connection to Rancher. -Rancher relies on [cert-manager](https://github.com/jetstack/cert-manager) to issue certificates from Rancher's own generated CA or to request Let's Encrypt certificates. -`cert-manager` is only required for certificates issued by Rancher's generated CA (`ingress.tls.source=rancher`) and Let's Encrypt issued certificates (`ingress.tls.source=letsEncrypt`). You should skip this step if you are using your own certificate files (option `ingress.tls.source=secret`) or if you use [TLS termination on an External Load Balancer]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination). +| Configuration | Helm Chart Option | Requires cert-manager | +| ------------------------------ | ----------------------- | ------------------------------------- | +| Rancher Generated Certificates (Default) | `ingress.tls.source=rancher` | [yes](#5-install-cert-manager) | +| Let’s Encrypt | `ingress.tls.source=letsEncrypt` | [yes](#5-install-cert-manager) | +| Certificates from Files | `ingress.tls.source=secret` | no | + +### 5. Install cert-manager + +> You should skip this step if you are bringing your own certificate files (option `ingress.tls.source=secret`), or if you use [TLS termination on an external load balancer]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination). + +This step is only required to use certificates issued by Rancher's generated CA (`ingress.tls.source=rancher`) or to request Let's Encrypt issued certificates (`ingress.tls.source=letsEncrypt`). 
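If you are not sure whether cert-manager is already running in the cluster (for example, from an earlier installation), you can check for its pods before installing it again. This is only a quick sanity check, and it assumes cert-manager was deployed to its default `cert-manager` namespace:

```
# List any cert-manager pods that are already deployed
kubectl get pods --namespace cert-manager
```

If this returns running cert-manager pods, check the version of the existing installation before reinstalling; the upgrade note below applies to versions older than v0.11.0.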
{{% accordion id="cert-manager" label="Click to Expand" %}} -> **Important:** -> Due to an issue with Helm v2.12.0 and cert-manager, please use Helm v2.12.1 or higher. - -> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.11.0, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). +> **Important:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.11.0, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). These instructions are adapted from the [official cert-manager documentation](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm). @@ -73,8 +91,15 @@ These instructions are adapted from the [official cert-manager documentation](ht # Install the CustomResourceDefinition resources separately kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml -> **Important:** -> If you are running Kubernetes v1.15 or below, you will need to add the `--validate=false flag to your kubectl apply command above else you will receive a validation error relating to the x-kubernetes-preserve-unknown-fields field in cert-manager’s CustomResourceDefinition resources. This is a benign error and occurs due to the way kubectl performs resource validation. +# **Important:** +# If you are running Kubernetes v1.15 or below, you +# will need to add the `--validate=false` flag to your +# kubectl apply command, or else you will receive a +# validation error relating to the +# x-kubernetes-preserve-unknown-fields field in +# cert-manager’s CustomResourceDefinition resources. +# This is a benign error and occurs due to the way kubectl +# performs resource validation. # Create the namespace for cert-manager kubectl create namespace cert-manager @@ -105,16 +130,20 @@ cert-manager-webhook-787858fcdb-nlzsq 1/1 Running 0 2m {{% /accordion %}} -### Install Rancher with Helm and Your Chosen Certificate Option +### 6. Install Rancher with Helm and Your Chosen Certificate Option + +The exact command to install Rancher differs depending on the certificate configuration. {{% tabs %}} {{% tab "Rancher-generated Certificates" %}} -> **Note:** You need to have [cert-manager](#optional-install-cert-manager) installed before proceeding. -The default is for Rancher to generate a CA and uses `cert-manager` to issue the certificate for access to the Rancher server interface. Because `rancher` is the default option for `ingress.tls.source`, we are not specifying `ingress.tls.source` when running the `helm install` command. +The default is for Rancher to generate a CA and uses `cert-manager` to issue the certificate for access to the Rancher server interface. + +Because `rancher` is the default option for `ingress.tls.source`, we are not specifying `ingress.tls.source` when running the `helm install` command. - Set the `hostname` to the DNS name you pointed at your load balancer. +- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. ``` helm install rancher rancher-/rancher \ @@ -133,11 +162,14 @@ deployment "rancher" successfully rolled out {{% /tab %}} {{% tab "Let's Encrypt" %}} -> **Note:** You need to have [cert-manager](#optional-install-cert-manager) installed before proceeding. 
+This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate as Let's Encrypt is a trusted CA. -This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate as Let's Encrypt is a trusted CA. This configuration uses HTTP validation (`HTTP-01`) so the load balancer must have a public DNS record and be accessible from the internet. +In the following command, -- Set `hostname` to the public DNS record, set `ingress.tls.source` to `letsEncrypt` and `letsEncrypt.email` to the email address used for communication about your certificate (for example, expiry notices) +- `hostname` is set to the public DNS record, +- `ingress.tls.source` is set to `letsEncrypt` +- `letsEncrypt.email` is set to the email address used for communication about your certificate (for example, expiry notices) +- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. ``` helm install rancher rancher-/rancher \ @@ -157,12 +189,18 @@ deployment "rancher" successfully rolled out {{% /tab %}} {{% tab "Certificates from Files" %}} -Create Kubernetes secrets from your own certificates for Rancher to use. +In this option, Kubernetes secrets are created from your own certificates for Rancher to use. -> **Note:** The `Common Name` or a `Subject Alternative Names` entry in the server certificate must match the `hostname` option, or the ingress controller will fail to configure correctly. Although an entry in the `Subject Alternative Names` is technically required, having a matching `Common Name` maximizes compatibility with older browsers/applications. If you want to check if your certificates are correct, see [How do I check Common Name and Subject Alternative Names in my server certificate?]({{}}/rancher/v2.x/en/faq/technical/#how-do-i-check-common-name-and-subject-alternative-names-in-my-server-certificate) +When you run this command, the `hostname` option must match the `Common Name` or a `Subject Alternative Names` entry in the server certificate or the Ingress controller will fail to configure correctly. -- Set `hostname` and set `ingress.tls.source` to `secret`. +Although an entry in the `Subject Alternative Names` is technically required, having a matching `Common Name` maximizes compatibility with older browsers and applications. + +> If you want to check if your certificates are correct, see [How do I check Common Name and Subject Alternative Names in my server certificate?]({{}}/rancher/v2.x/en/faq/technical/#how-do-i-check-common-name-and-subject-alternative-names-in-my-server-certificate) + +- Set the `hostname`. +- Set `ingress.tls.source` to `secret`. - If you are using a Private CA signed certificate , add `--set privateCA=true` to the command shown below. +- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. ``` helm install rancher rancher-/rancher \ @@ -171,7 +209,20 @@ helm install rancher rancher-/rancher \ --set ingress.tls.source=secret ``` -Now that Rancher is deployed, see [Adding TLS Secrets]({{}}/rancher/v2.x/en/installation/options/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them. 
+Now that Rancher is deployed, see [Adding TLS Secrets]({{}}/rancher/v2.x/en/installation/options/tls-secrets/) to publish the certificate files so Rancher and the Ingress controller can use them. +{{% /tab %}} +{{% /tabs %}} + +The Rancher chart configuration has many options for customizing the installation to suit your specific environment. Here are some common advanced scenarios. + +- [HTTP Proxy]({{}}/rancher/v2.x/en/installation/options/chart-options/#http-proxy) +- [Private Docker Image Registry]({{}}/rancher/v2.x/en/installation/options/chart-options/#private-registry-and-air-gap-installs) +- [TLS Termination on an External Load Balancer]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination) + +See the [Chart Options]({{}}/rancher/v2.x/en/installation/options/chart-options/) for the full list of options. + + +### 7. Verify that the Rancher Server is Successfully Deployed After adding the secrets, check if Rancher was rolled out successfully: @@ -190,25 +241,15 @@ rancher 3 3 3 3 3m ``` It should show the same count for `DESIRED` and `AVAILABLE`. -{{% /tab %}} -{{% /tabs %}} -### Advanced Configurations - -The Rancher chart configuration has many options for customizing the install to suit your specific environment. Here are some common advanced scenarios. - -- [HTTP Proxy]({{}}/rancher/v2.x/en/installation/options/chart-options/#http-proxy) -- [Private Docker Image Registry]({{}}/rancher/v2.x/en/installation/options/chart-options/#private-registry-and-air-gap-installs) -- [TLS Termination on an External Load Balancer]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination) - -See the [Chart Options]({{}}/rancher/v2.x/en/installation/options/chart-options/) for the full list of options. - -### Save your options +### 8. Save Your Options Make sure you save the `--set` options you used. You will need to use the same options when you upgrade Rancher to new versions with Helm. ### Finishing Up -That's it you should have a functional Rancher server. Point a browser at the hostname you picked and you should be greeted by the colorful login page. +That's it. You should have a functional Rancher server. + +In a web browser, go to the DNS name that forwards traffic to your load balancer. Then you should be greeted by the colorful login page. Doesn't work? Take a look at the [Troubleshooting]({{}}/rancher/v2.x/en/installation/options/troubleshooting/) Page diff --git a/content/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/_index.md b/content/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/_index.md index 5ba0ca0240d..8f62c8450ec 100644 --- a/content/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/_index.md +++ b/content/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/_index.md @@ -6,21 +6,142 @@ aliases: - /rancher/v2.x/en/installation/ha/kubernetes-rke/ --- -This section describes how to install a Kubernetes cluster on your three nodes according to our [best practices for the Rancher server environment.]({{}}/rancher/v2.x/en/overview/architecture-recommendations/#environment-for-kubernetes-installations) This cluster should be dedicated to run only the Rancher server. We recommend using RKE to install Kubernetes on this cluster. Hosted Kubernetes providers such as EKS should not be used. 
+This section describes how to install a Kubernetes cluster according to our [best practices for the Rancher server environment.]({{}}/rancher/v2.x/en/overview/architecture-recommendations/#environment-for-kubernetes-installations) This cluster should be dedicated to run only the Rancher server. + +For Rancher prior to v2.4, Rancher should be installed on an RKE Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. + +As of Rancher v2.4, the Rancher management server can be installed on either an RKE cluster or a K3s Kubernetes cluster. K3s is also a fully certified Kubernetes distribution released by Rancher, but is newer than RKE. We recommend installing Rancher on K3s because K3s is easier to use, and more lightweight, with a binary size of less than 50 MB. Note: After Rancher is installed on an RKE cluster, there is no migration path to a K3s setup at this time. + +The Rancher management server can only be run on Kubernetes cluster in an infrastructure provider where Kubernetes is installed using RKE or K3s. Use of Rancher on hosted Kubernetes providers, such as EKS, is not supported. For systems without direct internet access, refer to [Air Gap: Kubernetes install.]({{}}/rancher/v2.x/en/installation/air-gap-high-availability/) > **Single-node Installation Tip:** > In a single-node Kubernetes cluster, the Rancher server does not have high availability, which is important for running Rancher in production. However, installing Rancher on a single-node cluster can be useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path. > -> To set up a single-node cluster, configure only one node in the `cluster.yml` when provisioning the cluster with RKE. The single node should have all three roles: `etcd`, `controlplane`, and `worker`. Then Rancher can be installed with Helm on the cluster in the same way that it would be installed on any other cluster. +> To set up a single-node RKE cluster, configure only one node in the `cluster.yml` . The single node should have all three roles: `etcd`, `controlplane`, and `worker`. +> +> To set up a single-node K3s cluster, run the Rancher server installation command on just one node instead of two nodes. +> +> In both single-node setups, Rancher can be installed with Helm on the Kubernetes cluster in the same way that it would be installed on any other cluster. -### Create the `rancher-cluster.yml` File +# Installing Kubernetes -Using the sample below, create the `rancher-cluster.yml` file. Replace the IP Addresses in the `nodes` list with the IP address or DNS names of the 3 nodes you created. + +The steps to set up the Kubernetes cluster differ depending on whether you are using RKE or K3s. + +{{% tabs %}} +{{% tab "K3s" %}} + +### 1. Install Kubernetes and Set up the K3s Server + +When running the command to start the K3s Kubernetes API server, you will pass in an option to use the external datastore that you set up earlier. + +1. Connect to one of the Linux nodes that you have prepared to run the Rancher server. +1. On the Linux node, run this command to start the K3s server and connect it to the external datastore: + ``` + curl -sfL https://get.k3s.io | sh -s - server \ + --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name" + ``` + Note: The datastore endpoint can also be passed in using the environment variable `$K3S_DATASTORE_ENDPOINT`. + +1. 
Repeat the same command on your second K3s server node. + +### 2. Confirm that K3s is Running + +To confirm that K3s has been set up successfully, run the following command on either of the K3s server nodes: +``` +sudo k3s kubectl get nodes +``` + +Then you should see two nodes with the master role: +``` +ubuntu@ip-172-31-60-194:~$ sudo k3s kubectl get nodes +NAME STATUS ROLES AGE VERSION +ip-172-31-60-194 Ready master 44m v1.17.2+k3s1 +ip-172-31-63-88 Ready master 6m8s v1.17.2+k3s1 +``` + +Then test the health of the cluster pods: +``` +sudo k3s kubectl get pods --all-namespaces +``` + +**Result:** You have successfully set up a K3s Kubernetes cluster. + +### 3. Save and Start Using the kubeconfig File + +When you installed K3s on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/k3s/k3s.yaml`. This file contains credentials for full access to the cluster, and you should save this file in a secure location. + +To use this `kubeconfig` file, + +1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool. +2. Copy the file at `/etc/rancher/k3s/k3s.yaml` and save it to the directory `~/.kube/config` on your local machine. +3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your load balancer, referring to port 6443. (The Kubernetes API server will be reached at port 6443, while the Rancher server will be reached at ports 80 and 443.) Here is an example `k3s.yaml`: + +``` +apiVersion: v1 +clusters: +- cluster: + certificate-authority-data: [CERTIFICATE-DATA] + server: [LOAD-BALANCER-DNS]:6443 # Edit this line + name: default +contexts: +- context: + cluster: default + user: default + name: default +current-context: default +kind: Config +preferences: {} +users: +- name: default + user: + password: [PASSWORD] + username: admin +``` + +**Result:** You can now use `kubectl` to manage your K3s cluster. If you have more than one kubeconfig file, you can specify which one you want to use by passing in the path to the file when using `kubectl`: + +``` +kubectl --kubeconfig ~/.kube/config/k3s.yaml get pods --all-namespaces +``` + +For more information about the `kubeconfig` file, refer to the [K3s documentation]({{}}/k3s/latest/en/cluster-access/) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files. + +### 4. Check the Health of Your Cluster Pods + +Now that you have set up the `kubeconfig` file, you can use `kubectl` to access the cluster from your local machine. + +Check that all the required pods and containers are healthy are ready to continue: +``` +ubuntu@ip-172-31-60-194:~$ sudo kubectl get pods --all-namespaces +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system metrics-server-6d684c7b5-bw59k 1/1 Running 0 8d +kube-system local-path-provisioner-58fb86bdfd-fmkvd 1/1 Running 0 8d +kube-system coredns-d798c9dd-ljjnf 1/1 Running 0 8d +``` + +**Result:** You have confirmed that you can access the cluster with `kubectl` and the K3s cluster is running successfully. Now the Rancher management server can be installed on the cluster. +{{% /tab %}} +{{% tab "RKE" %}} + +### Required CLI Tools + +Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool. 
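To confirm that `kubectl` is installed and available in your `$PATH` before continuing, you can print its client version. This is only a quick sanity check:

```
# Print the kubectl client version without contacting a cluster
kubectl version --client
```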
+ +Also install [RKE,]({{}}/rke/latest/en/installation/) the Rancher Kubernetes Engine, a Kubernetes distribution and command-line tool. + +### 1. Create the cluster configuration file + +In this section, you will create a Kubernetes cluster configuration file called `rancher-cluster.yml`. In a later step, when you set up the cluster with an RKE command, it will use this file to install Kubernetes on your nodes. + +Using the sample below as a guide, create the `rancher-cluster.yml` file. Replace the IP addresses in the `nodes` list with the IP address or DNS names of the 3 nodes you created. If your node has public and internal addresses, it is recommended to set the `internal_address:` so Kubernetes will use it for intra-cluster communication. Some services like AWS EC2 require setting the `internal_address:` if you want to use self-referencing security groups or firewalls. +RKE will need to connect to each node over SSH, and it will look for a private key in the default location of `~/.ssh/id_rsa`. If your private key for a certain node is in a different location than the default, you will also need to configure the `ssh_key_path` option for that node. + ```yaml nodes: - address: 165.227.114.63 @@ -40,7 +161,7 @@ services: etcd: snapshot: true creation: 6h - retention: 24h + retention: 24 # Required for external TLS termination with # ingress-nginx v0.22+ @@ -50,7 +171,7 @@ ingress: use-forwarded-headers: "true" ``` -#### Common RKE Nodes Options +
#### Common RKE Nodes Options
| Option | Required | Description | | ------------------ | -------- | -------------------------------------------------------------------------------------- | @@ -60,15 +181,13 @@ ingress: | `internal_address` | no | The private DNS or IP address for internal cluster traffic | | `ssh_key_path` | no | Path to SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`) | -#### Advanced Configurations +> **Advanced Configurations:** RKE has many configuration options for customizing the install to suit your specific environment. +> +> Please see the [RKE Documentation]({{}}/rke/latest/en/config-options/) for the full list of options and capabilities. +> +> For tuning your etcd cluster for larger Rancher installations, see the [etcd settings guide]({{}}/rancher/v2.x/en/installation/options/etcd/). -RKE has many configuration options for customizing the install to suit your specific environment. - -Please see the [RKE Documentation]({{}}/rke/latest/en/config-options/) for the full list of options and capabilities. - -For tuning your etcd cluster for larger Rancher installations see the [etcd settings guide]({{}}/rancher/v2.x/en/installation/options/etcd/). - -### Run RKE +### 2. Run RKE ``` rke up --config ./rancher-cluster.yml @@ -76,19 +195,23 @@ rke up --config ./rancher-cluster.yml When finished, it should end with the line: `Finished building Kubernetes cluster successfully`. -### Testing Your Cluster +### 3. Test Your Cluster -RKE should have created a file `kube_config_rancher-cluster.yml`. This file has the credentials for `kubectl` and `helm`. +This section describes how to set up your workspace so that you can interact with this cluster using the `kubectl` command-line tool. + +Assuming you have installed `kubectl`, you need to place the `kubeconfig` file in a location where `kubectl` can reach it. The `kubeconfig` file contains the credentials necessary to access your cluster with `kubectl`. + +When you ran `rke up`, RKE should have created a `kubeconfig` file named `kube_config_rancher-cluster.yml`. This file has the credentials for `kubectl` and `helm`. > **Note:** If you have used a different file name from `rancher-cluster.yml`, then the kube config file will be named `kube_config_.yml`. -You can copy this file to `$HOME/.kube/config` or if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environmental variable to the path of `kube_config_rancher-cluster.yml`. +Move this file to `$HOME/.kube/config`, or if you are working with multiple Kubernetes clusters, set the `KUBECONFIG` environmental variable to the path of `kube_config_rancher-cluster.yml`: ``` export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml ``` -Test your connectivity with `kubectl` and see if all your nodes are in `Ready` state. +Test your connectivity with `kubectl` and see if all your nodes are in `Ready` state: ``` kubectl get nodes @@ -99,7 +222,7 @@ NAME STATUS ROLES AGE VER 165.227.127.226 Ready controlplane,etcd,worker 11m v1.13.5 ``` -### Check the Health of Your Cluster Pods +### 4. Check the Health of Your Cluster Pods Check that all the required pods and containers are healthy are ready to continue. @@ -126,7 +249,9 @@ kube-system rke-metrics-addon-deploy-job-7ljkc 0/1 Completed kube-system rke-network-plugin-deploy-job-6pbgj 0/1 Completed 0 30s ``` -### Save Your Files +This confirms that you have successfully installed a Kubernetes cluster that the Rancher server will run on. + +### 5. 
Save Your Files > **Important** > The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster. @@ -137,8 +262,12 @@ Save a copy of the following files in a secure location: - `kube_config_rancher-cluster.yml`: The [Kubeconfig file]({{}}/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster. - `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{}}/rke/latest/en/installation/#kubernetes-cluster-state), this file contains credentials for full access to the cluster.

_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._ +> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file. + ### Issues or errors? See the [Troubleshooting]({{}}/rancher/v2.x/en/installation/options/troubleshooting/) page. +{{% /tab %}} +{{% /tabs %}} ### [Next: Install Rancher]({{}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/) diff --git a/content/rancher/v2.x/en/installation/options/air-gap-helm2/install-rancher/_index.md b/content/rancher/v2.x/en/installation/options/air-gap-helm2/install-rancher/_index.md index 619c44dcc14..f5895de9604 100644 --- a/content/rancher/v2.x/en/installation/options/air-gap-helm2/install-rancher/_index.md +++ b/content/rancher/v2.x/en/installation/options/air-gap-helm2/install-rancher/_index.md @@ -5,7 +5,6 @@ aliases: - /rancher/v2.x/en/installation/air-gap-installation/install-rancher/ - /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-system-charts/ - /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-for-private-reg/ - - /rancher/v2.x/en/installation/air-gap-high-availability/install-rancher/ - /rancher/v2.x/en/installation/air-gap-single-node/install-rancher - /rancher/v2.x/en/installation/air-gap/install-rancher --- diff --git a/content/rancher/v2.x/en/installation/options/air-gap-helm2/launch-kubernetes/_index.md b/content/rancher/v2.x/en/installation/options/air-gap-helm2/launch-kubernetes/_index.md index a231b04df6f..3faa3ac73c7 100644 --- a/content/rancher/v2.x/en/installation/options/air-gap-helm2/launch-kubernetes/_index.md +++ b/content/rancher/v2.x/en/installation/options/air-gap-helm2/launch-kubernetes/_index.md @@ -77,4 +77,6 @@ Save a copy of the following files in a secure location: - `kube_config_rancher-cluster.yml`: The [Kubeconfig file]({{}}/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster. - `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{}}/rke/latest/en/installation/#kubernetes-cluster-state), this file contains credentials for full access to the cluster.

_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._ +> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file. + ### [Next: Install Rancher]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher) diff --git a/content/rancher/v2.x/en/installation/options/air-gap-helm2/populate-private-registry/_index.md b/content/rancher/v2.x/en/installation/options/air-gap-helm2/populate-private-registry/_index.md index 6a286a8656a..b96ca100b47 100644 --- a/content/rancher/v2.x/en/installation/options/air-gap-helm2/populate-private-registry/_index.md +++ b/content/rancher/v2.x/en/installation/options/air-gap-helm2/populate-private-registry/_index.md @@ -13,7 +13,7 @@ aliases: > > **Note:** Populating the private registry with images is the same process for HA and Docker installations, the differences in this section is based on whether or not you are planning to provision a Windows cluster or not. -By default, all images used to [provision Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/) or launch any [tools]({{}}/rancher/v2.x/en/tools/) in Rancher, e.g. monitoring, pipelines, alerts, are pulled from Docker Hub. In an air gap installation of Rancher, you will need a private registry that is located somewhere accessible by your Rancher server. Then, you will load the registry with all the images. +By default, all images used to [provision Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/) or launch any [tools]({{}}/rancher/v2.x/en/cluster-admin/tools/) in Rancher, e.g. monitoring, pipelines, alerts, are pulled from Docker Hub. In an air gap installation of Rancher, you will need a private registry that is located somewhere accessible by your Rancher server. Then, you will load the registry with all the images. This section describes how to set up your private registry so that when you install Rancher, Rancher will pull all the required images from this registry. @@ -33,11 +33,13 @@ D. Populate the private registry These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space. +If you will use ARM64 hosts, the registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests. + ### A. Find the required assets for your Rancher version -1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. +1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. Click **Assets*.* -2. From the release's **Assets** section (pictured above), download the following files, which are required to install Rancher in an air gap environment: +2. From the release's **Assets** section, download the following files: | Release File | Description | | ---------------- | -------------- | @@ -51,12 +53,12 @@ In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS 1. Fetch the latest `cert-manager` Helm chart and parse the template for image details: - > **Note:** Recent changes to cert-manager require an upgrade. 
If you are upgrading Rancher and using a version of cert-manager older than v0.9.1, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). + > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). ```plain helm repo add jetstack https://charts.jetstack.io helm repo update - helm fetch jetstack/cert-manager --version v0.9.1 + helm fetch jetstack/cert-manager --version v0.12.0 helm template ./cert-manager-.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt ``` @@ -120,17 +122,19 @@ These steps expect you to use a Windows Server 1809 workstation that has interne The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters. +Your registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests. + ### A. Find the required assets for your Rancher version 1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. 2. From the release's "Assets" section, download the following files: - | Release File | Description | - | ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | - | `rancher-windows-images.txt` | This file contains a list of Windows images needed to provision Windows clusters. | - | `rancher-save-images.ps1` | This script pulls all the images in the `rancher-windows-images.txt` from Docker Hub and saves all of the images as `rancher-windows-images.tar.gz`. | - | `rancher-load-images.ps1` | This script loads the images from the `rancher-windows-images.tar.gz` file and pushes them to your private registry. | +| Release File | Description | +|------------------------|-------------------| +| `rancher-windows-images.txt` | This file contains a list of Windows images needed to provision Windows clusters. | +| `rancher-save-images.ps1` | This script pulls all the images in the `rancher-windows-images.txt` from Docker Hub and saves all of the images as `rancher-windows-images.tar.gz`. | +| `rancher-load-images.ps1` | This script loads the images from the `rancher-windows-images.tar.gz` file and pushes them to your private registry. | ### B. Save the images to your Windows Server workstation @@ -146,9 +150,9 @@ The workstation must have Docker 18.02+ in order to support manifests, which are ### C. Prepare the Docker daemon -1. Append your private registry address to the `allow-nondistributable-artifacts` config field in the Docker daemon (`C:\ProgramData\Docker\config\daemon.json`). Since the base image of Windows images are maintained by the `mcr.microsoft.com` registry, this step is required as the layers in the Microsoft registry are missing from Docker Hub and need to be pulled into the private registry. +Append your private registry address to the `allow-nondistributable-artifacts` config field in the Docker daemon (`C:\ProgramData\Docker\config\daemon.json`). 
Since the base image of Windows images are maintained by the `mcr.microsoft.com` registry, this step is required as the layers in the Microsoft registry are missing from Docker Hub and need to be pulled into the private registry. - ```json + ``` { ... "allow-nondistributable-artifacts": [ @@ -164,13 +168,11 @@ The workstation must have Docker 18.02+ in order to support manifests, which are Move the images in the `rancher-windows-images.tar.gz` to your private registry using the scripts to load the images. The `rancher-windows-images.txt` is expected to be on the workstation in the same directory that you are running the `rancher-load-images.ps1` script. 1. Using `powershell`, log into your private registry if required: - ```plain docker login ``` 1. Using `powershell`, use `rancher-load-images.ps1` to extract, tag and push the images from `rancher-images.tar.gz` to your private registry: - ```plain ./rancher-load-images.ps1 --registry ``` @@ -200,32 +202,29 @@ The workstation must have Docker 18.02+ in order to support manifests, which are 1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. -2. From the release's **Assets** section (pictured above), download the following files, which are required to install Rancher in an air gap environment: +2. From the release's **Assets** section, download the following files, which are required to install Rancher in an air gap environment: - | Release File | Description | - | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | - | `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters and user Rancher tools. | - | `rancher-windows-images.txt` | This file contains a list of images needed to provision Windows clusters. | - | `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. | - | `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. | +| Release File | Description | +|----------------------------|------| +| `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters and user Rancher tools. | +| `rancher-windows-images.txt` | This file contains a list of images needed to provision Windows clusters. | +| `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. | +| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. | ### B. Collect all the required images -1. **For Kubernetes Installs using Rancher Generated Self-Signed Certificate:** In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. You skip this step if you are using you using your own certificates. 
+**For Kubernetes Installs using Rancher Generated Self-Signed Certificate:** In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. You skip this step if you are using you using your own certificates. 1. Fetch the latest `cert-manager` Helm chart and parse the template for image details: - - > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.9.1, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). - + > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). ```plain helm repo add jetstack https://charts.jetstack.io helm repo update - helm fetch jetstack/cert-manager --version v0.9.1 + helm fetch jetstack/cert-manager --version v0.12.0 helm template ./cert-manager-.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt ``` 2. Sort and unique the images list to remove any overlap between the sources: - ```plain sort -u rancher-images.txt -o rancher-images.txt ``` @@ -233,37 +232,32 @@ The workstation must have Docker 18.02+ in order to support manifests, which are ### C. Save the images to your workstation 1. Make `rancher-save-images.sh` an executable: - ``` chmod +x rancher-save-images.sh ``` 1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images: - ```plain ./rancher-save-images.sh --image-list ./rancher-images.txt ``` - **Step Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-images.tar.gz`. Check that the output is in the directory. + **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-images.tar.gz`. Check that the output is in the directory. ### D. Populate the private registry Move the images in the `rancher-images.tar.gz` to your private registry using the `rancher-load-images.sh script` to load the images. The `rancher-images.txt` / `rancher-windows-images.txt` image list is expected to be on the workstation in the same directory that you are running the `rancher-load-images.sh` script. 1. Log into your private registry if required: - ```plain docker login ``` 1. Make `rancher-load-images.sh` an executable: - ``` chmod +x rancher-load-images.sh ``` 1. 
Use `rancher-load-images.sh` to extract, tag and push the images from `rancher-images.tar.gz` to your private registry: - ```plain ./rancher-load-images.sh --image-list ./rancher-images.txt \ --windows-image-list ./rancher-windows-images.txt \ diff --git a/content/rancher/v2.x/en/installation/options/air-gap-helm2/prepare-nodes/_index.md b/content/rancher/v2.x/en/installation/options/air-gap-helm2/prepare-nodes/_index.md index ff9080548ef..554c05bd98b 100644 --- a/content/rancher/v2.x/en/installation/options/air-gap-helm2/prepare-nodes/_index.md +++ b/content/rancher/v2.x/en/installation/options/air-gap-helm2/prepare-nodes/_index.md @@ -81,8 +81,8 @@ You will need to configure a load balancer as a basic Layer 4 TCP forwarder to d **Load Balancer Configuration Samples:** -- For an example showing how to set up an NGINX load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nginx) -- For an example showing how to set up an Amazon NLB load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nlb) +- For an example showing how to set up an NGINX load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nginx) +- For an example showing how to set up an Amazon NLB load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nlb) {{% /tab %}} {{% tab "Docker Install" %}} diff --git a/content/rancher/v2.x/en/installation/options/api-audit-log/_index.md b/content/rancher/v2.x/en/installation/options/api-audit-log/_index.md index cdb9c07ac57..e465c60eb6c 100644 --- a/content/rancher/v2.x/en/installation/options/api-audit-log/_index.md +++ b/content/rancher/v2.x/en/installation/options/api-audit-log/_index.md @@ -70,7 +70,7 @@ kubectl -n cattle-system logs -f rancher-84d886bdbb-s4s69 rancher-audit-log ![Rancher Workload]({{}}/img/rancher/audit_logs_gui/rancher_workload.png) -1. Pick one of the `rancher` pods and select **Ellipsis (...) > View Logs**. +1. Pick one of the `rancher` pods and select **⋮ > View Logs**. ![View Logs]({{}}/img/rancher/audit_logs_gui/view_logs.png) @@ -80,7 +80,7 @@ kubectl -n cattle-system logs -f rancher-84d886bdbb-s4s69 rancher-audit-log #### Shipping the Audit Log -You can enable Rancher's built in log collection and shipping for the cluster to ship the audit and other services logs to a supported collection endpoint. See [Rancher Tools - Logging]({{}}/rancher/v2.x/en/tools/logging) for details. +You can enable Rancher's built in log collection and shipping for the cluster to ship the audit and other services logs to a supported collection endpoint. See [Rancher Tools - Logging]({{}}/rancher/v2.x/en/cluster-admin/tools/logging) for details. ## Audit Log Samples diff --git a/content/rancher/v2.x/en/installation/options/chart-options/_index.md b/content/rancher/v2.x/en/installation/options/chart-options/_index.md index 7bc68cce822..85978052274 100644 --- a/content/rancher/v2.x/en/installation/options/chart-options/_index.md +++ b/content/rancher/v2.x/en/installation/options/chart-options/_index.md @@ -51,13 +51,13 @@ weight: 276 Enabling the [API Audit Log]({{}}/rancher/v2.x/en/installation/api-auditing/). -You can collect this log as you would any container log. Enable the [Logging service under Rancher Tools]({{}}/rancher/v2.x/en/tools/logging/) for the `System` Project on the Rancher server cluster. +You can collect this log as you would any container log. 
Enable the [Logging service under Rancher Tools]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/) for the `System` Project on the Rancher server cluster. ```plain --set auditLog.level=1 ``` -By default enabling Audit Logging will create a sidecar container in the Rancher pod. This container (`rancher-audit-log`) will stream the log to `stdout`. You can collect this log as you would any container log. When using the sidecar as the audit log destination, the `hostPath`, `maxAge`, `maxBackups`, and `maxSize` options do not apply. It's advised to use your OS or Docker daemon's log rotation features to control disk space use. Enable the [Logging service under Rancher Tools]({{}}/rancher/v2.x/en/tools/logging/) for the Rancher server cluster or System Project. +By default enabling Audit Logging will create a sidecar container in the Rancher pod. This container (`rancher-audit-log`) will stream the log to `stdout`. You can collect this log as you would any container log. When using the sidecar as the audit log destination, the `hostPath`, `maxAge`, `maxBackups`, and `maxSize` options do not apply. It's advised to use your OS or Docker daemon's log rotation features to control disk space use. Enable the [Logging service under Rancher Tools]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/) for the Rancher server cluster or System Project. Set the `auditLog.destination` to `hostPath` to forward logs to volume shared with the host system instead of streaming to a sidecar container. When setting the destination to `hostPath` you may want to adjust the other auditLog parameters for log rotation. diff --git a/content/rancher/v2.x/en/installation/options/ec2-node/_index.md b/content/rancher/v2.x/en/installation/options/ec2-node/_index.md new file mode 100644 index 00000000000..0df051accda --- /dev/null +++ b/content/rancher/v2.x/en/installation/options/ec2-node/_index.md @@ -0,0 +1,64 @@ +--- +title: Setting up Nodes in Amazon EC2 +weight: 280 +--- + +In this tutorial, you will learn one way to set up Linux nodes for the Rancher management server. These nodes will fulfill the node requirements for [OS, Docker, hardware, and networking.]({{}}/rancher/v2.x/en/installation/requirements/) + +If the Rancher server will be installed on an RKE Kubernetes cluster, you should provision three instances. + +If the Rancher server will be installed on a K3s Kubernetes cluster, you only need to provision two instances. + +If the Rancher server is installed in a single Docker container, you only need one instance. + +### 1. Optional Preparation + +- **Create IAM role:** To allow Rancher to manipulate AWS resources, such as provisioning new storage or new nodes, you will need to configure Amazon as a cloud provider. There are several things you'll need to do to set up the cloud provider on EC2, but part of this process is setting up an IAM role for the Rancher server nodes. For the full details on setting up the cloud provider, refer to this [page.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) +- **Create security group:** We also recommend setting up a security group for the Rancher nodes that complies with the [port requirements for Rancher nodes.]({{}}/rancher/v2.x/en/installation/requirements/#port-requirements) The exact requirements will differ depending on whether Kubernetes is installed with RKE or K3s. + +### 2. Provision Instances + +1. Log into the [Amazon AWS EC2 Console](https://console.aws.amazon.com/ec2/) to get started. 
Make sure to take note of the **Region** where your EC2 instances (Linux nodes) are created, because all of the infrastructure for the Rancher management server should be in the same region. +1. In the left panel, click **Instances.** +1. Click **Launch Instance.** +1. In the section called **Step 1: Choose an Amazon Machine Image (AMI),** we will use Ubuntu 18.04 as the Linux OS, using `ami-0d1cd67c26f5fca19 (64-bit x86)`. Go to the Ubuntu AMI and click **Select.** +1. In the **Step 2: Choose an Instance Type** section, select the `t2.medium` type. +1. Click **Next: Configure Instance Details.** +1. In the **Number of instances** field, enter the number of instances. A high-availability K3s cluster requires only two instances, while a high-availability RKE cluster requires three instances. +1. Optional: If you created an IAM role for Rancher to manipulate AWS resources, select the new IAM role in the **IAM role** field. +1. Click **Next: Add Storage,** **Next: Add Tags,** and **Next: Configure Security Group.** +1. In **Step 6: Configure Security Group,** select a security group that complies with the [port requirements]({{}}/rancher/v2.x/en/installation/requirements/#port-requirements) for Rancher nodes. +1. Click **Review and Launch.** +1. Click **Launch.** +1. Choose a new or existing key pair that you will use to connect to your instance later. If you are using an existing key pair, make sure you already have access to the private key. +1. Click **Launch Instances.** + +**Result:** You have created Rancher nodes that satisfy the requirements for OS, hardware, and networking. Next, you will install Docker on each node. + +### 3. Install Docker and Create User + +1. From the [AWS EC2 console,](https://console.aws.amazon.com/ec2/) click **Instances** in the left panel. +1. Go to the instance that you want to install Docker on. Select the instance and click **Actions > Connect.** +1. Connect to the instance by following the instructions on the screen that appears. Copy the Public DNS of the instance. An example command to SSH into the instance is as follows: +``` +sudo ssh -i [path-to-private-key] ubuntu@[public-DNS-of-instance] +``` +1. When you are connected to the instance, run the following command on the instance to install Docker with one of Rancher's installation scripts: +``` +curl https://releases.rancher.com/install-docker/18.09.sh | sh +``` +1. After Docker is installed, run the following command on the instance to add the `ubuntu` user to the `docker` group so that it can run Docker commands: +``` +sudo usermod -aG docker ubuntu +``` +1. Repeat these steps so that Docker is installed on each node that will eventually run the Rancher management server. + +> To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher’s Docker installation scripts. + +**Result:** You have set up Rancher server nodes that fulfill all the node requirements for OS, Docker, hardware and networking. + +### Next Steps for RKE Kubernetes Cluster Nodes + +If you are going to install an RKE cluster on the new nodes, take note of the **IPv4 Public IP** and **Private IP** of each node. This information can be found on the **Description** tab for each node after it is created. The public and private IP will be used to populate the `address` and `internal_address` of each node in the RKE cluster configuration file, `rancher-cluster.yml`. + +RKE will also need access to the private key to connect to each node.
Therefore, you might want to take note of the path to your private keys to connect to the nodes, which can also be included in the `rancher-cluster.yml` under the `ssh_key_path` directive for each node. \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/options/feature-flags/_index.md b/content/rancher/v2.x/en/installation/options/feature-flags/_index.md index 655598c04a5..7df5e71056c 100644 --- a/content/rancher/v2.x/en/installation/options/feature-flags/_index.md +++ b/content/rancher/v2.x/en/installation/options/feature-flags/_index.md @@ -24,18 +24,24 @@ Because the API sets the actual value and the command line sets the default valu For example, if you install Rancher, then set a feature flag to true with the Rancher API, then upgrade Rancher with a command that sets the feature flag to false, the default value will still be false, but the feature will still be enabled because it was set with the Rancher API. If you then deleted the set value (true) with the Rancher API, setting it to NULL, the default value (false) would take effect. +> **Note:** As of v2.4.0, there are some feature flags that may require a restart of the Rancher server container. The features that require a restart are marked as such in the table below and in the UI. + The following is a list of the feature flags available in Rancher: -- `unsupported-storage-drivers`: This feature [allows unsupported storage drivers.]({{}}/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers) In other words, it enables types for storage providers and provisioners that are not enabled by default. +- `dashboard`: This feature enables the new experimental UI that has a new look and feel. The dashboard also leverages a new API in Rancher which allows the UI to access the default Kubernetes resources without any intervention from Rancher. - `istio-virtual-service-ui`: This feature enables a [UI to create, read, update, and delete Istio virtual services and destination rules]({{}}/rancher/v2.x/en/installation/options/feature-flags/istio-virtual-service-ui), which are traffic management features of Istio. +- `proxy`: This feature enables Rancher to use a new simplified code base for the proxy, which can help enhance performance and security. The proxy feature is known to have issues with Helm deployments; it prevents any catalog applications from being deployed, including Rancher's tools such as monitoring, logging, and Istio. +- `unsupported-storage-drivers`: This feature [allows unsupported storage drivers.]({{}}/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers) In other words, it enables types for storage providers and provisioners that are not enabled by default. The below table shows the availability and default value for feature flags in Rancher: -| Feature Flag Name | Default Value | Status | Available as of | -| ----------------------------- | ------------- | ------------ | --------------- | -| `unsupported-storage-drivers` | `false` | Experimental | v2.3.0 | -| `istio-virtual-service-ui` | `false` | Experimental | v2.3.0 | -| `istio-virtual-service-ui` | `true` | GA | v2.3.2 | +| Feature Flag Name | Default Value | Status | Available as of | Rancher Restart Required?
| +| ----------------------------- | ------------- | ------------ | --------------- |---| +| `dashboard` | `true` | Experimental | v2.4.0 | x | +| `istio-virtual-service-ui` | `false` | Experimental | v2.3.0 | | +| `istio-virtual-service-ui` | `true` | GA | v2.3.2 | | +| `proxy` | `false` | Experimental | v2.4.0 | | +| `unsupported-storage-drivers` | `false` | Experimental | v2.3.0 | | # Enabling Features when Starting Rancher @@ -56,6 +62,8 @@ helm install rancher-latest/rancher \ --set 'extraEnv[0].value==true,=true' # Available as of v2.3.0 ``` +Note: If you are installing an alpha version, Helm requires adding the `--devel` option to the command. + ### Rendering the Helm Chart for Air Gap Installations For an air gap installation of Rancher, you need to add a Helm chart repository and render a Helm template before installing Rancher with Helm. For details, refer to the [air gap installation documentation.]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher) @@ -111,7 +119,7 @@ _Available as of Rancher v2.3.3_ 1. Go to the **Global** view and click **Settings.** 1. Click the **Feature Flags** tab. You will see a list of experimental features. -1. To enable a feature, go to the disabled feature you want to enable and click **Ellipsis (...) > Activate.** +1. To enable a feature, go to the disabled feature you want to enable and click **⋮ > Activate.** **Result:** The feature is enabled. @@ -119,7 +127,7 @@ _Available as of Rancher v2.3.3_ 1. Go to the **Global** view and click **Settings.** 1. Click the **Feature Flags** tab. You will see a list of experimental features. -1. To disable a feature, go to the enabled feature you want to disable and click **Ellipsis (...) > Deactivate.** +1. To disable a feature, go to the enabled feature you want to disable and click **⋮ > Deactivate.** **Result:** The feature is disabled. diff --git a/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/_index.md b/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/_index.md index 0690c435343..239ed927a3b 100644 --- a/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/_index.md +++ b/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/_index.md @@ -42,7 +42,7 @@ There are three recommended options for the source of the certificate. > **Important:** > Due to an issue with Helm v2.12.0 and cert-manager, please use Helm v2.12.1 or higher. -> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.9.1, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). +> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). Rancher relies on [cert-manager](https://github.com/jetstack/cert-manager) to issue certificates from Rancher's own generated CA or to request Let's Encrypt certificates. 
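Before deciding whether an upgrade is needed, it can help to confirm which cert-manager version is currently running. The check below is only a sketch; it assumes cert-manager was installed into the `cert-manager` namespace with the default deployment name (substitute `kube-system` if it was originally installed there).

```plain
# Print the image (and therefore the version) of the running cert-manager controller
kubectl -n cert-manager get deployment cert-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```

If the reported version is older than v0.12.0, follow the upgrade documentation linked above before installing or upgrading Rancher.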
@@ -79,7 +79,7 @@ These instructions are adapted from the [official cert-manager documentation](ht helm install \ --name cert-manager \ --namespace cert-manager \ - --version v0.9.1 \ + --version v0.12.0 \ jetstack/cert-manager ``` @@ -105,6 +105,7 @@ If the ‘webhook’ pod (2nd line) is in a ContainerCreating state, it may stil The default is for Rancher to generate a CA and uses `cert-manager` to issue the certificate for access to the Rancher server interface. Because `rancher` is the default option for `ingress.tls.source`, we are not specifying `ingress.tls.source` when running the `helm install` command. - Set the `hostname` to the DNS name you pointed at your load balancer. +- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. ``` helm install rancher-/rancher \ @@ -128,6 +129,7 @@ deployment "rancher" successfully rolled out This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate as Let's Encrypt is a trusted CA. This configuration uses HTTP validation (`HTTP-01`) so the load balancer must have a public DNS record and be accessible from the internet. - Set `hostname` to the public DNS record, set `ingress.tls.source` to `letsEncrypt` and `letsEncrypt.email` to the email address used for communication about your certificate (for example, expiry notices) +- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. ``` helm install rancher-/rancher \ @@ -155,6 +157,7 @@ Create Kubernetes secrets from your own certificates for Rancher to use. - Set `hostname` and set `ingress.tls.source` to `secret`. - If you are using a Private CA signed certificate , add `--set privateCA=true` to the command shown below. +- If you are installing an alpha version, Helm requires adding the `--devel` option to the command. ``` helm install rancher-/rancher \ diff --git a/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/_index.md b/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/_index.md index b9940f9cac5..e773074fc13 100644 --- a/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/_index.md +++ b/content/rancher/v2.x/en/installation/options/helm2/helm-rancher/chart-options/_index.md @@ -49,13 +49,13 @@ weight: 276 Enabling the [API Audit Log]({{}}/rancher/v2.x/en/installation/api-auditing/). -You can collect this log as you would any container log. Enable the [Logging service under Rancher Tools]({{}}/rancher/v2.x/en/tools/logging/) for the `System` Project on the Rancher server cluster. +You can collect this log as you would any container log. Enable the [Logging service under Rancher Tools]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/) for the `System` Project on the Rancher server cluster. ```plain --set auditLog.level=1 ``` -By default enabling Audit Logging will create a sidecar container in the Rancher pod. This container (`rancher-audit-log`) will stream the log to `stdout`. You can collect this log as you would any container log. When using the sidecar as the audit log destination, the `hostPath`, `maxAge`, `maxBackups`, and `maxSize` options do not apply. It's advised to use your OS or Docker daemon's log rotation features to control disk space use. Enable the [Logging service under Rancher Tools]({{}}/rancher/v2.x/en/tools/logging/) for the Rancher server cluster or System Project. 
+By default enabling Audit Logging will create a sidecar container in the Rancher pod. This container (`rancher-audit-log`) will stream the log to `stdout`. You can collect this log as you would any container log. When using the sidecar as the audit log destination, the `hostPath`, `maxAge`, `maxBackups`, and `maxSize` options do not apply. It's advised to use your OS or Docker daemon's log rotation features to control disk space use. Enable the [Logging service under Rancher Tools]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/) for the Rancher server cluster or System Project. Set the `auditLog.destination` to `hostPath` to forward logs to volume shared with the host system instead of streaming to a sidecar container. When setting the destination to `hostPath` you may want to adjust the other auditLog parameters for log rotation. diff --git a/content/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/_index.md b/content/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/_index.md index a88ad2801d9..10efe3341a3 100644 --- a/content/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/_index.md +++ b/content/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/_index.md @@ -123,6 +123,8 @@ Save a copy of the following files in a secure location: - `kube_config_rancher-cluster.yml`: The [Kubeconfig file]({{}}/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster. - `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{}}/rke/latest/en/installation/#kubernetes-cluster-state), this file contains credentials for full access to the cluster.

_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._ +> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file. + ### Issues or errors? See the [Troubleshooting]({{}}/rancher/v2.x/en/installation/options/helm2/kubernetes-rke/troubleshooting/) page. diff --git a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb/_index.md b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb/_index.md index fecaef3d2b3..f3c16cb9404 100644 --- a/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb/_index.md +++ b/content/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb/_index.md @@ -1,8 +1,6 @@ --- title: Kubernetes Install with External Load Balancer (TCP/Layer 4) weight: 275 -aliases: -- /rancher/v2.x/en/installation/k8s-install-server-install/ --- > #### **Important: RKE add-on install is only supported up to Rancher v2.0.8** diff --git a/content/rancher/v2.x/en/installation/options/local-system-charts/_index.md b/content/rancher/v2.x/en/installation/options/local-system-charts/_index.md index b2b84f724f3..82def8c7c92 100644 --- a/content/rancher/v2.x/en/installation/options/local-system-charts/_index.md +++ b/content/rancher/v2.x/en/installation/options/local-system-charts/_index.md @@ -37,7 +37,7 @@ In the catalog management page in the Rancher UI, follow these steps: 1. Click **Tools > Catalogs.** -1. The system chart is displayed under the name `system-library`. To edit the configuration of the system chart, click **Ellipsis (...) > Edit.** +1. The system chart is displayed under the name `system-library`. To edit the configuration of the system chart, click **⋮ > Edit.** 1. In the **Catalog URL** field, enter the location of the Git mirror of the `system-charts` repository. diff --git a/content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nginx/_index.md b/content/rancher/v2.x/en/installation/options/nginx/_index.md similarity index 73% rename from content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nginx/_index.md rename to content/rancher/v2.x/en/installation/options/nginx/_index.md index 49a77c9010e..02beb3f87ae 100644 --- a/content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nginx/_index.md +++ b/content/rancher/v2.x/en/installation/options/nginx/_index.md @@ -3,14 +3,16 @@ title: Setting up an NGINX Load Balancer weight: 270 aliases: - /rancher/v2.x/en/installation/ha/create-nodes-lb/nginx + - /rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nginx --- NGINX will be configured as Layer 4 load balancer (TCP) that forwards connections to one of your Rancher nodes. -> **Note:** -> In this configuration, the load balancer is positioned in front of your nodes. The load balancer can be any host capable of running NGINX. -> -> One caveat: do not use one of your Rancher nodes as the load balancer. +In this configuration, the load balancer is positioned in front of your nodes. The load balancer can be any host capable of running NGINX. + +One caveat: do not use one of your Rancher nodes as the load balancer. + +> These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on an RKE Kubernetes cluster, three nodes are required. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required. 
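The configuration shown later on this page relies on NGINX's `stream` module for TCP (Layer 4) forwarding. The commands below are only a sketch, assuming NGINX is already installed on the load balancer host; if the first command prints nothing, install an NGINX build or package that includes stream support.

```plain
# Confirm that this NGINX build was compiled with stream support
nginx -V 2>&1 | grep -o with-stream

# After updating nginx.conf, validate the configuration and reload NGINX
nginx -t && nginx -s reload
```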
## Install NGINX @@ -34,20 +36,20 @@ After installing NGINX, you need to update the NGINX configuration file, `nginx. worker_rlimit_nofile 40000; events { - worker_connections 8192; + worker_connections 8192; } stream { - upstream rancher_servers_http { - least_conn; - server :80 max_fails=3 fail_timeout=5s; - server :80 max_fails=3 fail_timeout=5s; - server :80 max_fails=3 fail_timeout=5s; - } - server { - listen 80; - proxy_pass rancher_servers_http; - } + upstream rancher_servers_http { + least_conn; + server :80 max_fails=3 fail_timeout=5s; + server :80 max_fails=3 fail_timeout=5s; + server :80 max_fails=3 fail_timeout=5s; + } + server { + listen 80; + proxy_pass rancher_servers_http; + } upstream rancher_servers_https { least_conn; @@ -61,10 +63,8 @@ After installing NGINX, you need to update the NGINX configuration file, `nginx. } } - ``` - ``` 3. Save `nginx.conf` to your load balancer at the following path: `/etc/nginx/nginx.conf`. diff --git a/content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nlb/_index.md b/content/rancher/v2.x/en/installation/options/nlb/_index.md similarity index 92% rename from content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nlb/_index.md rename to content/rancher/v2.x/en/installation/options/nlb/_index.md index 29aca8a2e39..35d00153769 100644 --- a/content/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nlb/_index.md +++ b/content/rancher/v2.x/en/installation/options/nlb/_index.md @@ -1,13 +1,16 @@ --- -title: Setting up an Amazon NLB Load Balancer +title: Setting up an Amazon ELB Network Load Balancer weight: 277 aliases: - /rancher/v2.x/en/installation/ha/create-nodes-lb/nlb + - /rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nlb --- -This how-to guide describes how to set up a load balancer in Amazon's EC2 service that will direct traffic to multiple instances on EC2. +This how-to guide describes how to set up a Network Load Balancer (NLB) in Amazon's EC2 service that will direct traffic to multiple instances on EC2. -> **Note:** Rancher only supports using the Amazon NLB when terminating traffic in `tcp` mode for port 443 rather than `tls` mode. This is due to the fact that the NLB does not inject the correct headers into requests when terminated at the NLB. This means that if you want to use certificates managed by the Amazon Certificate Manager (ACM), you should use an ELB or ALB. +These examples show the load balancer being configured to direct traffic to three Rancher server nodes. If Rancher is installed on an RKE Kubernetes cluster, three nodes are required. If Rancher is installed on a K3s Kubernetes cluster, only two nodes are required. + +> **Note:** Rancher only supports using the Amazon NLB when terminating traffic in `tcp` mode for port 443 rather than `tls` mode. This is due to the fact that the NLB does not inject the correct headers into requests when terminated at the NLB. This means that if you want to use certificates managed by the Amazon Certificate Manager (ACM), you should use an ALB. Configuring an Amazon NLB is a multistage process: diff --git a/content/rancher/v2.x/en/installation/options/rds/_index.md b/content/rancher/v2.x/en/installation/options/rds/_index.md new file mode 100644 index 00000000000..41d7b8eb501 --- /dev/null +++ b/content/rancher/v2.x/en/installation/options/rds/_index.md @@ -0,0 +1,34 @@ +--- +title: Setting up a MySQL Database in Amazon RDS +weight: 290 +--- +This tutorial describes how to set up a MySQL database in Amazon's RDS. 
+ +This database can later be used as an external datastore for a high-availability K3s Kubernetes cluster. + +1. Log into the [Amazon AWS RDS Console](https://console.aws.amazon.com/rds/) to get started. Make sure to select the **Region** where your EC2 instances (Linux nodes) are created. +1. In the left panel, click **Databases.** +1. Click **Create database.** +1. In the **Engine type** section, click **MySQL.** +1. In the **Version** section, choose **MySQL 5.7.22.** +1. In **Settings** section, under **Credentials Settings,** enter a master password for the **admin** master username. Confirm the password. +1. Expand the **Additional configuration** section. In the **Initial database name** field, enter a name. The name can have only letters, numbers, and underscores. This name will be used to connect to the database. +1. Click **Create database.** + +You'll need to capture the following information about the new database so that the K3s Kubernetes cluster can connect to it. + +To see this information in the Amazon RDS console, click **Databases,** and click the name of the database that you created. + +- **Username:** Use the admin username. +- **Password:** Use the admin password. +- **Hostname:** Use the **Endpoint** as the hostname. The endpoint is available in the **Connectivity & security** section. +- **Port:** The port should be 3306 by default. You can confirm it in the **Connectivity & security** section. +- **Database name:** Confirm the name by going to the **Configuration** tab. The name is listed under **DB name.** + +This information will be used to connect to the database in the following format: + +``` +mysql://username:password@tcp(hostname:3306)/database-name +``` + +For more information on configuring the datastore for K3s, refer to the [K3s documentation.]({{}}/k3s/latest/en/installation/datastore/) \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-install-external-lb/_index.md b/content/rancher/v2.x/en/installation/options/single-node-install-external-lb/_index.md similarity index 90% rename from content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-install-external-lb/_index.md rename to content/rancher/v2.x/en/installation/options/single-node-install-external-lb/_index.md index cbc9d67ab9f..c2aa176b058 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-install-external-lb/_index.md +++ b/content/rancher/v2.x/en/installation/options/single-node-install-external-lb/_index.md @@ -1,11 +1,16 @@ --- -title: Docker Install with External Load Balancer +title: Docker Install with TLS Termination at Layer-7 NGINX Load Balancer weight: 252 aliases: - /rancher/v2.x/en/installation/single-node/single-node-install-external-lb/ + - /rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-install-external-lb --- -For development and testing environments that have a special requirement to terminate TLS/SSL at a load balancer instead of your Rancher Server container, deploy Rancher and configure a load balancer to work with it conjunction. This install procedure walks you through deployment of Rancher using a single container, and then provides a sample configuration for a layer 7 Nginx load balancer. 
+For development and testing environments that have a special requirement to terminate TLS/SSL at a load balancer instead of your Rancher Server container, deploy Rancher and configure a load balancer to work in conjunction with it. + +A layer-7 load balancer can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also allows your load balancer to make decisions based on HTTP attributes, such as cookies, that a layer-4 load balancer cannot inspect. + +This install procedure walks you through deployment of Rancher using a single container, and then provides a sample configuration for a layer-7 NGINX load balancer. > **Want to skip the external load balancer?** > See [Docker Installation]({{}}/rancher/v2.x/en/installation/single-node) instead. @@ -98,11 +103,11 @@ The load balancer or proxy has to be configured to support the following: | `X-Forwarded-Proto` | `https` | To identify the protocol that a client used to connect to the load balancer or proxy.

**Note:** If this header is present, `rancher/rancher` does not redirect HTTP to HTTPS. | `X-Forwarded-Port` | Port used to reach Rancher. | To identify the protocol that the client used to connect to the load balancer or proxy. | `X-Forwarded-For` | IP of the client connection. | To identify the originating IP address of a client. -### Example Nginx configuration +### Example NGINX configuration This NGINX configuration is tested on NGINX 1.14. -> **Note:** This Nginx configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - HTTP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/). +> **Note:** This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - HTTP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/). - Replace `rancher-server` with the IP address or hostname of the node running the Rancher container. - Replace both occurrences of `FQDN` with the DNS name for Rancher. @@ -192,9 +197,9 @@ If you are visiting this page to complete an [Air Gap Installation]({{} {{< persistentdata >}} -This layer 7 Nginx configuration is tested on Nginx version 1.13 (mainline) and 1.14 (stable). +This layer 7 NGINX configuration is tested on NGINX version 1.13 (mainline) and 1.14 (stable). -> **Note:** This Nginx configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - TCP and UDP Load Balancer](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/). +> **Note:** This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - TCP and UDP Load Balancer](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/). ``` upstream rancher { diff --git a/content/rancher/v2.x/en/installation/options/troubleshooting/_index.md b/content/rancher/v2.x/en/installation/options/troubleshooting/_index.md index 556800e0432..001b49dc0f0 100644 --- a/content/rancher/v2.x/en/installation/options/troubleshooting/_index.md +++ b/content/rancher/v2.x/en/installation/options/troubleshooting/_index.md @@ -174,6 +174,7 @@ SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.10 ### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: Error configuring SSH: ssh: no key found +* The key file specified as `ssh_key_path` cannot be accessed. Make sure that you specified the private key file (not the public key, `.pub`), and that the user that is running the `rke` command can access the private key file.
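A quick way to rule this out is to test the key directly from the machine where you run `rke`. The commands below are only a sketch; the key path, user, and node address are placeholders for your own values.

```plain
# Confirm the private key (not the .pub file) exists and is readable by the current user
ls -l ~/.ssh/id_rsa

# Try the same key manually; if this login works, RKE should be able to use the key as well
ssh -i ~/.ssh/id_rsa [user]@[node-address] docker version
```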
+ +### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain + +* The key file specified as `ssh_key_path` is not correct for accessing the node. Double-check if you specified the correct `ssh_key_path` for the node and if you specified the correct user to connect with. + +### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: Error configuring SSH: ssh: cannot decode encrypted private keys + +* If you want to use encrypted private keys, you should use `ssh-agent` to load your keys with your passphrase. If the `SSH_AUTH_SOCK` environment variable is found in the environment where the `rke` command is run, it will be used automatically to connect to the node. + +### Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? + +* The node is not reachable on the configured `address` and `port`. diff --git a/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/_index.md b/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/_index.md index d2ec4366763..2f224f311b3 100644 --- a/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/_index.md +++ b/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/_index.md @@ -25,10 +25,7 @@ To address these changes, this guide will do two things: > For reinstalling Rancher with Helm, please check [Option B: Reinstalling Rancher Chart]({{}}/rancher/v2.x/en/upgrades/upgrades/ha/#c-upgrade-rancher) under the upgrade Rancher section. -## Upgrade Cert-Manager Only - -> **Note:** -> These instructions are applied if you have no plan to upgrade Rancher. +## Upgrade Cert-Manager The namespace used in these instructions depends on the namespace cert-manager is currently installed in. If it is in kube-system use that in the instructions below. You can verify by running `kubectl get pods --all-namespaces` and checking which namespace the cert-manager-\* pods are listed in. Do not change the namespace cert-manager is running in or this can cause issues. @@ -50,7 +47,7 @@ In order to upgrade cert-manager, follow these instructions: 1. [Uninstall existing deployment](https://cert-manager.io/docs/installation/uninstall/kubernetes/#uninstalling-with-helm) ```plain - helm delete --purge cert-manager + helm uninstall cert-manager ``` Delete the CustomResourceDefinition using the link to the version vX.Y you installed diff --git a/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/helm-2-instructions/_index.md b/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/helm-2-instructions/_index.md index 3ea49690f27..3299f50c08d 100644 --- a/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/helm-2-instructions/_index.md +++ b/content/rancher/v2.x/en/installation/options/upgrading-cert-manager/helm-2-instructions/_index.md @@ -74,7 +74,7 @@ In order to upgrade cert-manager, follow these instructions: 1. Install the new version of cert-manager ```plain - helm install --version 0.9.1 --name cert-manager --namespace kube-system jetstack/cert-manager + helm install --version 0.12.0 --name cert-manager --namespace kube-system jetstack/cert-manager ``` {{% /accordion %}} @@ -95,13 +95,13 @@ Before you can perform the upgrade, you must prepare your air gapped environment 1.
Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager). ```plain - helm fetch jetstack/cert-manager --version v0.9.1 + helm fetch jetstack/cert-manager --version v0.12.0 ``` 1. Render the cert manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files. ```plain - helm template ./cert-manager-v0.9.1.tgz --output-dir . \ + helm template ./cert-manager-v0.12.0.tgz --output-dir . \ --name cert-manager --namespace kube-system \ --set image.repository=/quay.io/jetstack/cert-manager-controller --set webhook.image.repository=/quay.io/jetstack/cert-manager-webhook diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/_index.md index 0ebdced73b0..fb264adc5e9 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/_index.md @@ -9,29 +9,19 @@ aliases: This section is about installations of Rancher server in an air gapped environment. An air gapped environment could be where Rancher server will be installed offline, behind a firewall, or behind a proxy. -Throughout the installations instructions, there will be _tabs_ for either a high availability Kubernetes installation or a single-node Docker installation. +The installation steps differ depending on whether Rancher is installed on an RKE Kubernetes cluster, a K3s Kubernetes cluster, or a single Docker container. -### Air Gapped Kubernetes Installations +For more information on each installation option, refer to [this page.]({{}}/rancher/v2.x/en/installation/) -This section covers how to install Rancher on a Kubernetes cluster in an air gapped environment. - -A Kubernetes install is composed of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails. - -### Air Gapped Docker Installations - -These instructions also cover how to install Rancher on a single node in an air gapped environment. - -The Docker installation is for Rancher users that are wanting to test out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. +Throughout the installation instructions, there will be _tabs_ for each installation option. > **Important:** If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker Installation to a Kubernetes Installation. -Instead of running the Docker installation, you have the option to follow the Kubernetes Install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a Kubernetes Installation. - # Installation Outline -- [1. Prepare your Node(s)]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/) -- [2. 
Collect and Publish Images to your Private Registry]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/) -- [3. Launch a Kubernetes Cluster with RKE]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/) -- [4. Install Rancher]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/) +1. [Set up infrastructure and private registry]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/) +2. [Collect and publish images to your private registry]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/) +3. [Set up a Kubernetes cluster (Skip this step for Docker installations)]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/) +4. [Install Rancher]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/) ### [Next: Prepare your Node(s)]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/) diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md index c12809c0695..bb1b4ae209c 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md @@ -4,7 +4,6 @@ weight: 400 aliases: - /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-system-charts/ - /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-for-private-reg/ - - /rancher/v2.x/en/installation/air-gap-high-availability/install-rancher/ - /rancher/v2.x/en/installation/air-gap-single-node/install-rancher - /rancher/v2.x/en/installation/air-gap/install-rancher --- diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/_index.md index 36f56180c1a..cf96f997ea1 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/_index.md @@ -1,21 +1,155 @@ --- -title: '3. Install Kubernetes with RKE (Kubernetes Installs Only)' +title: '3. Install Kubernetes (RKE and K3s installs only)' weight: 300 aliases: - /rancher/v2.x/en/installation/air-gap-high-availability/install-kube --- -This section is about how to prepare to launch a Kubernetes cluster which is used to deploy Rancher server for your air gapped environment. +> Skip this section if you are installing Rancher on a single node with Docker. -Since a Kubernetes Installation requires a Kubernetes cluster, we will create a Kubernetes cluster using [Rancher Kubernetes Engine]({{}}/rke/latest/en/) (RKE). Before being able to start your Kubernetes cluster, you'll need to [install RKE]({{}}/rke/latest/en/installation/) and create a RKE config file. +This section describes how to install a Kubernetes cluster according to our [best practices for the Rancher server environment.]({{}}/rancher/v2.x/en/overview/architecture-recommendations/#environment-for-kubernetes-installations) This cluster should be dedicated to run only the Rancher server. -- [A. 
Create an RKE Config File](#a-create-an-rke-config-file) -- [B. Run RKE](#b-run-rke) -- [C. Save Your Files](#c-save-your-files) +For Rancher prior to v2.4, Rancher should be installed on an [RKE]({{}}/rke/latest/en/) (Rancher Kubernetes Engine) Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. -### A. Create an RKE Config File +As of Rancher v2.4, the Rancher management server can be installed on either an RKE cluster or a K3s Kubernetes cluster. K3s is also a fully certified Kubernetes distribution released by Rancher, but is newer than RKE. We recommend installing Rancher on K3s because K3s is easier to use, and more lightweight, with a binary size of less than 50 MB. Note: After Rancher is installed on an RKE cluster, there is no migration path to a K3s setup at this time. -From a system that can access ports 22/tcp and 6443/tcp on your host nodes, use the sample below to create a new file named `rancher-cluster.yml`. This file is a Rancher Kubernetes Engine configuration file (RKE config file), which is a configuration for the cluster you're deploying Rancher to. +The Rancher management server can only be run on Kubernetes cluster in an infrastructure provider where Kubernetes is installed using RKE or K3s. Use of Rancher on hosted Kubernetes providers, such as EKS, is not supported. + +The steps to set up an air-gapped Kubernetes cluster depend on whether RKE or K3s is used to install Kubernetes. + +{{% tabs %}} +{{% tab "K3s" %}} + +In this guide, we are assuming you have created your nodes in your air gapped environment and have a secure Docker private registry on your bastion server. + +### Installation Outline + +1. [Prepare Images Directory](#1-prepare-images-directory) +2. [Create Registry YAML](#2-create-registry-yaml) +3. [Install K3s](#3-install-k3s) +4. [Save and Start Using the kubeconfig File](#4-save-and-start-using-the-kubeconfig-file) + +### 1. Prepare Images Directory +Obtain the images tar file for your architecture from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be running. + +Place the tar file in the `images` directory before starting K3s on each node, for example: + +```sh +sudo mkdir -p /var/lib/rancher/k3s/agent/images/ +sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/ +``` + +### 2. Create Registry YAML +Create the registries.yaml file at `/etc/rancher/k3s/registries.yaml`. This will tell K3s the necessary details to connect to your private registry. + +The registries.yaml file should look like this before plugging in the necessary information: + +``` +--- +mirrors: + customreg: + endpoint: + - "https://ip-to-server:5000" +configs: + customreg: + auth: + username: xxxxxx # this is the registry username + password: xxxxxx # this is the registry password + tls: + cert_file: + key_file: + ca_file: +``` + +Note, at this time only secure registries are supported with K3s (SSL with custom CA). + +For more information on private registries configuration file for K3s, refer to the [K3s documentation.]({{}}/k3s/latest/en/installation/private-registry/) + +### 3. Install K3s + +Obtain the K3s binary from the [releases](https://github.com/rancher/k3s/releases) page, matching the same version used to get the airgap images tar. +Also obtain the K3s install script at https://get.k3s.io + +Place the binary in `/usr/local/bin` on each node. +Place the install script anywhere on each node, and name it `install.sh`. 
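Both the binary and the install script need to be executable before the installer is run. A minimal sketch, assuming the locations and file name described above:

```sh
# Make the K3s binary and the install script executable
sudo chmod +x /usr/local/bin/k3s
chmod +x ./install.sh
```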
+ +Install K3s on each server: + +``` +INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh +``` + +Install K3s on each agent: + +``` +INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken ./install.sh +``` + +Note, take care to ensure you replace `myserver` with the IP or valid DNS of the server and replace `mynodetoken` with the node-token from the server. +The node-token is on the server at `/var/lib/rancher/k3s/server/node-token` + +>**Note:** K3s additionally provides a `--resolv-conf` flag for kubelets, which may help with configuring DNS in air-gap networks. + +### 4. Save and Start Using the kubeconfig File + +When you installed K3s on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/k3s/k3s.yaml`. This file contains credentials for full access to the cluster, and you should save this file in a secure location. + +To use this `kubeconfig` file, + +1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool. +2. Copy the file at `/etc/rancher/k3s/k3s.yaml` and save it to the directory `~/.kube/config` on your local machine. +3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your load balancer, referring to port 6443. (The Kubernetes API server will be reached at port 6443, while the Rancher server will be reached at ports 80 and 443.) Here is an example `k3s.yaml`: + +``` +apiVersion: v1 +clusters: +- cluster: + certificate-authority-data: [CERTIFICATE-DATA] + server: [LOAD-BALANCER-DNS]:6443 # Edit this line + name: default +contexts: +- context: + cluster: default + user: default + name: default +current-context: default +kind: Config +preferences: {} +users: +- name: default + user: + password: [PASSWORD] + username: admin +``` + +**Result:** You can now use `kubectl` to manage your K3s cluster. If you have more than one kubeconfig file, you can specify which one you want to use by passing in the path to the file when using `kubectl`: + +``` +kubectl --kubeconfig ~/.kube/config/k3s.yaml get pods --all-namespaces +``` + +For more information about the `kubeconfig` file, refer to the [K3s documentation]({{}}/k3s/latest/en/cluster-access/) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files. + +### Note on Upgrading + +Upgrading an air-gap environment can be accomplished in the following manner: + +1. Download the new air-gap images (tar file) from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be upgrading to. Place the tar in the `/var/lib/rancher/k3s/agent/images/` directory on each node. Delete the old tar file. +2. Copy and replace the old K3s binary in `/usr/local/bin` on each node. Copy over the install script at https://get.k3s.io (as it is possible it has changed since the last release). Run the script again just as you had done in the past with the same environment variables. +3. Restart the K3s service (if not restarted automatically by installer). +{{% /tab %}} +{{% tab "RKE" %}} +We will create a Kubernetes cluster using Rancher Kubernetes Engine (RKE). Before being able to start your Kubernetes cluster, you’ll need to install RKE and create a RKE config file. + +### 1. Install RKE + +Install RKE by following the instructions in the [RKE documentation.]({{}}/rke/latest/en/installation/) + +### 2. 
Create an RKE Config File + +From a system that can access ports 22/TCP and 6443/TCP on the Linux host node(s) that you set up in a previous step, use the sample below to create a new file named `rancher-cluster.yml`. + +This file is an RKE configuration file, which is a configuration for the cluster you're deploying Rancher to. Replace values in the code sample below with help of the _RKE Options_ table. Use the IP address or DNS names of the [3 nodes]({{}}/rancher/v2.x/en/installation/air-gap-high-availability/provision-hosts) you created. @@ -25,11 +159,11 @@ Replace values in the code sample below with help of the _RKE Options_ table. Us | Option | Required | Description | | ------------------ | -------------------- | --------------------------------------------------------------------------------------- | -| `address` | ✓ | The DNS or IP address for the node within the air gap network. | -| `user` | ✓ | A user that can run docker commands. | +| `address` | ✓ | The DNS or IP address for the node within the air gapped network. | +| `user` | ✓ | A user that can run Docker commands. | | `role` | ✓ | List of Kubernetes roles assigned to the node. | | `internal_address` | optional1 | The DNS or IP address used for internal cluster traffic. | -| `ssh_key_path` | | Path to SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`). | +| `ssh_key_path` | | Path to the SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`). | > 1 Some services like AWS EC2 require setting the `internal_address` if you want to use self-referencing security groups or firewalls. @@ -58,7 +192,7 @@ private_registries: is_default: true ``` -### B. Run RKE +### 3. Run RKE After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster: @@ -66,7 +200,7 @@ After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster: rke up --config ./rancher-cluster.yml ``` -### C. Save Your Files +### 4. Save Your Files > **Important** > The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster. @@ -76,9 +210,13 @@ Save a copy of the following files in a secure location: - `rancher-cluster.yml`: The RKE cluster configuration file. - `kube_config_rancher-cluster.yml`: The [Kubeconfig file]({{}}/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster. - `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{}}/rke/latest/en/installation/#kubernetes-cluster-state), this file contains the current state of the cluster including the RKE configuration and the certificates.

_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._ +{{% /tab %}} +{{% /tabs %}} + +> **Note:** The "rancher-cluster" parts of the latter two file names are dependent on how you name the RKE cluster configuration file. ### Issues or errors? See the [Troubleshooting]({{}}/rancher/v2.x/en/installation/options/troubleshooting/) page. -### [Next: Install Rancher]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher) +### [Next: Install Rancher](../install-rancher) diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/_index.md index cc490aa4ca6..6cef213e1ae 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/_index.md @@ -8,35 +8,37 @@ aliases: - /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-for-private-reg/ --- -> **Prerequisites:** You must have a [private registry](https://docs.docker.com/registry/deploying/) available to use. -> -> **Note:** Populating the private registry with images is the same process for HA and Docker installations, the differences in this section is based on whether or not you are planning to provision a Windows cluster or not. - -By default, all images used to [provision Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/) or launch any [tools]({{}}/rancher/v2.x/en/tools/) in Rancher, e.g. monitoring, pipelines, alerts, are pulled from Docker Hub. In an air gap installation of Rancher, you will need a private registry that is located somewhere accessible by your Rancher server. Then, you will load the registry with all the images. - This section describes how to set up your private registry so that when you install Rancher, Rancher will pull all the required images from this registry. -By default, we provide the steps of how to populate your private registry assuming you are provisioning Linux only clusters, but if you plan on provisioning any [Windows clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/), there are separate instructions to support the images needed for a Windows cluster. +By default, all images used to [provision Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/) or launch any [tools]({{}}/rancher/v2.x/en/cluster-admin/tools/) in Rancher, e.g. monitoring, pipelines, alerts, are pulled from Docker Hub. In an air gapped installation of Rancher, you will need a private registry that is located somewhere accessible by your Rancher server. Then, you will load the registry with all the images. + +Populating the private registry with images is the same process for installing Rancher with Docker and for installing Rancher on a Kubernetes cluster. + +The steps in this section differ depending on whether you are planning to use Rancher to provision a downstream cluster with Windows nodes. By default, we provide the steps of how to populate your private registry assuming that Rancher will provision downstream Kubernetes clusters with only Linux nodes.
But if you plan on provisioning any [downstream Kubernetes clusters using Windows nodes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/), there are separate instructions to support the images needed. + +> **Prerequisites:** You must have a [private registry](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry) available to use. {{% tabs %}} {{% tab "Linux Only Clusters" %}} For Rancher servers that will only provision Linux clusters, these are the steps to populate your private registry. -A. Find the required assets for your Rancher version
-B. Collect all the required images
-C. Save the images to your workstation
-D. Populate the private registry +1. [Find the required assets for your Rancher version](#1-find-the-required-assets-for-your-rancher-version) +2. [Collect the cert-manager image](#2-collect-the-cert-manager-image) (unless you are bringing your own certificates or terminating TLS on a load balancer) +3. [Save the images to your workstation](#3-save-the-images-to-your-workstation) +4. [Populate the private registry](#4-populate-the-private-registry) ### Prerequisites These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space. -### A. Find the required assets for your Rancher version +If you will use ARM64 hosts, the registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests. -1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. +### 1. Find the required assets for your Rancher version -2. From the release's **Assets** section (pictured above), download the following files, which are required to install Rancher in an air gap environment: +1. Go to our [releases page,](https://github.com/rancher/rancher/releases) find the Rancher v2.x.x release that you want to install, and click **Assets.** Note: Don't use releases marked `rc` or `Pre-release`, as they are not stable for production environments. + +2. From the release's **Assets** section, download the following files, which are required to install Rancher in an air gap environment: | Release File | Description | | ---------------- | -------------- | @@ -44,18 +46,20 @@ These steps expect you to use a Linux workstation that has internet access, acce | `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. | | `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. | -### B. Collect all the required images (For Kubernetes Installs using Rancher Generated Self-Signed Certificate) +### 2. Collect the cert-manager image -In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. You skip this step if you are using you using your own certificates. +> Skip this step if you are using your own certificates, or if you are terminating TLS on an external load balancer. + +In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. 1. Fetch the latest `cert-manager` Helm chart and parse the template for image details: - > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.9.1, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). + > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). 
```plain helm repo add jetstack https://charts.jetstack.io helm repo update - helm fetch jetstack/cert-manager --version v0.9.1 + helm fetch jetstack/cert-manager --version v0.12.0 helm template ./cert-manager-.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt ``` @@ -65,7 +69,7 @@ In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS sort -u rancher-images.txt -o rancher-images.txt ``` -### C. Save the images to your workstation +### 3. Save the images to your workstation 1. Make `rancher-save-images.sh` an executable: ``` @@ -78,23 +82,27 @@ In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS ``` **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-images.tar.gz`. Check that the output is in the directory. -### D. Populate the private registry +### 4. Populate the private registry -Move the images in the `rancher-images.tar.gz` to your private registry using the scripts to load the images. The `rancher-images.txt` is expected to be on the workstation in the same directory that you are running the `rancher-load-images.sh` script. +Next, you will move the images in the `rancher-images.tar.gz` to your private registry using the scripts to load the images. + +The `rancher-images.txt` is expected to be on the workstation in the same directory that you are running the `rancher-load-images.sh` script. The `rancher-images.tar.gz` should also be in the same directory. 1. Log into your private registry if required: - ```plain - docker login - ``` + ```plain + docker login + ``` 1. Make `rancher-load-images.sh` an executable: - ``` - chmod +x rancher-load-images.sh - ``` + ``` + chmod +x rancher-load-images.sh + ``` 1. Use `rancher-load-images.sh` to extract, tag and push `rancher-images.txt` and `rancher-images.tar.gz` to your private registry: - ```plain - ./rancher-load-images.sh --image-list ./rancher-images.txt --registry - ``` + ```plain + ./rancher-load-images.sh --image-list ./rancher-images.txt --registry + ``` {{% /tab %}} {{% tab "Linux and Windows Clusters" %}} @@ -119,35 +127,36 @@ These steps expect you to use a Windows Server 1809 workstation that has interne The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters. +Your registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests. + ### A. Find the required assets for your Rancher version 1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. 2. From the release's "Assets" section, download the following files: - | Release File | Description | - | ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | - | `rancher-windows-images.txt` | This file contains a list of Windows images needed to provision Windows clusters. 
| - | `rancher-save-images.ps1` | This script pulls all the images in the `rancher-windows-images.txt` from Docker Hub and saves all of the images as `rancher-windows-images.tar.gz`. | - | `rancher-load-images.ps1` | This script loads the images from the `rancher-windows-images.tar.gz` file and pushes them to your private registry. | +| Release File | Description | +|----------------------------|------------------| +| `rancher-windows-images.txt` | This file contains a list of Windows images needed to provision Windows clusters. | +| `rancher-save-images.ps1` | This script pulls all the images in the `rancher-windows-images.txt` from Docker Hub and saves all of the images as `rancher-windows-images.tar.gz`. | +| `rancher-load-images.ps1` | This script loads the images from the `rancher-windows-images.tar.gz` file and pushes them to your private registry. | ### B. Save the images to your Windows Server workstation 1. Using `powershell`, go to the directory that has the files that were downloaded in the previous step. 1. Run `rancher-save-images.ps1` to create a tarball of all the required images: - ```plain ./rancher-save-images.ps1 ``` - **Step Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-windows-images.tar.gz`. Check that the output is in the directory. + **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-windows-images.tar.gz`. Check that the output is in the directory. ### C. Prepare the Docker daemon -1. Append your private registry address to the `allow-nondistributable-artifacts` config field in the Docker daemon (`C:\ProgramData\Docker\config\daemon.json`). Since the base image of Windows images are maintained by the `mcr.microsoft.com` registry, this step is required as the layers in the Microsoft registry are missing from Docker Hub and need to be pulled into the private registry. +Append your private registry address to the `allow-nondistributable-artifacts` config field in the Docker daemon (`C:\ProgramData\Docker\config\daemon.json`). Since the base image of Windows images are maintained by the `mcr.microsoft.com` registry, this step is required as the layers in the Microsoft registry are missing from Docker Hub and need to be pulled into the private registry. - ```json + ``` { ... "allow-nondistributable-artifacts": [ @@ -160,16 +169,16 @@ The workstation must have Docker 18.02+ in order to support manifests, which are ### D. Populate the private registry -Move the images in the `rancher-windows-images.tar.gz` to your private registry using the scripts to load the images. The `rancher-windows-images.txt` is expected to be on the workstation in the same directory that you are running the `rancher-load-images.ps1` script. +Move the images in the `rancher-windows-images.tar.gz` to your private registry using the scripts to load the images. + +The `rancher-windows-images.txt` is expected to be on the workstation in the same directory that you are running the `rancher-load-images.ps1` script. The `rancher-windows-images.tar.gz` should also be in the same directory. 1. Using `powershell`, log into your private registry if required: - ```plain docker login ``` 1. 
Using `powershell`, use `rancher-load-images.ps1` to extract, tag and push the images from `rancher-images.tar.gz` to your private registry: - ```plain ./rancher-load-images.ps1 --registry ``` @@ -197,34 +206,31 @@ The workstation must have Docker 18.02+ in order to support manifests, which are ### A. Find the required assets for your Rancher version -1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. +1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. Click **Assets.** -2. From the release's **Assets** section (pictured above), download the following files, which are required to install Rancher in an air gap environment: +2. From the release's **Assets** section, download the following files: - | Release File | Description | - | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | - | `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters and user Rancher tools. | - | `rancher-windows-images.txt` | This file contains a list of images needed to provision Windows clusters. | - | `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. | - | `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. | +| Release File | Description | +|----------------------------| -------------------------- | +| `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters, and use Rancher tools. | +| `rancher-windows-images.txt` | This file contains a list of images needed to provision Windows clusters. | +| `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. | +| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. | ### B. Collect all the required images -1. **For Kubernetes Installs using Rancher Generated Self-Signed Certificate:** In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. You skip this step if you are using you using your own certificates. +**For Kubernetes Installs using Rancher Generated Self-Signed Certificate:** In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. Skip this step if you are using your own certificates. 1. Fetch the latest `cert-manager` Helm chart and parse the template for image details: - - > **Note:** Recent changes to cert-manager require an upgrade. 
If you are upgrading Rancher and using a version of cert-manager older than v0.9.1, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). - + > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). ```plain helm repo add jetstack https://charts.jetstack.io helm repo update - helm fetch jetstack/cert-manager --version v0.9.1 + helm fetch jetstack/cert-manager --version v0.12.0 helm template ./cert-manager-.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt ``` 2. Sort and unique the images list to remove any overlap between the sources: - ```plain sort -u rancher-images.txt -o rancher-images.txt ``` @@ -232,37 +238,34 @@ The workstation must have Docker 18.02+ in order to support manifests, which are ### C. Save the images to your workstation 1. Make `rancher-save-images.sh` an executable: - ``` chmod +x rancher-save-images.sh ``` 1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images: - ```plain ./rancher-save-images.sh --image-list ./rancher-images.txt ``` - **Step Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-images.tar.gz`. Check that the output is in the directory. + **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-images.tar.gz`. Check that the output is in the directory. ### D. Populate the private registry -Move the images in the `rancher-images.tar.gz` to your private registry using the `rancher-load-images.sh script` to load the images. The `rancher-images.txt` / `rancher-windows-images.txt` image list is expected to be on the workstation in the same directory that you are running the `rancher-load-images.sh` script. +Move the images in the `rancher-images.tar.gz` to your private registry using the `rancher-load-images.sh` script to load the images. + +The image list, `rancher-images.txt` or `rancher-windows-images.txt`, is expected to be on the workstation in the same directory that you are running the `rancher-load-images.sh` script. The `rancher-images.tar.gz` should also be in the same directory. 1. Log into your private registry if required: - ```plain docker login ``` 1. Make `rancher-load-images.sh` an executable: - ``` chmod +x rancher-load-images.sh ``` 1. 
Use `rancher-load-images.sh` to extract, tag and push the images from `rancher-images.tar.gz` to your private registry: - ```plain ./rancher-load-images.sh --image-list ./rancher-images.txt \ --windows-image-list ./rancher-windows-images.txt \ @@ -274,6 +277,6 @@ Move the images in the `rancher-images.tar.gz` to your private registry using th {{% /tab %}} {{% /tabs %}} -### [Next: Kubernetes Installs - Launch a Kubernetes Cluster with RKE]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/) +### [Next step for Kubernetes Installs - Launch a Kubernetes Cluster]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/) -### [Next: Docker Installs - Install Rancher]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/) +### [Next step for Docker Installs - Install Rancher]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/) diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/_index.md index 791d25bfa4b..ec59eb1582a 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/_index.md @@ -1,102 +1,174 @@ --- -title: '1. Prepare your Node(s)' +title: '1. Set up Infrastructure and Private Registry' weight: 100 aliases: - /rancher/v2.x/en/installation/air-gap-single-node/provision-host --- -This section is about how to prepare your node(s) to install Rancher for your air gapped environment. An air gapped environment could be where Rancher server will be installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a Docker installation. +In this section, you will provision the underlying infrastructure for your Rancher management server in an air gapped environment. You will also set up the private Docker registry that must be available to your Rancher node(s). -# Prerequisites +An air gapped environment is an environment where the Rancher server is installed offline or behind a firewall. + +The infrastructure depends on whether you are installing Rancher on a K3s Kubernetes cluster, an RKE Kubernetes cluster, or a single Docker container. For more information on each installation option, refer to [this page.]({{}}/rancher/v2.x/en/installation/) {{% tabs %}} -{{% tab "Kubernetes Install (Recommended)" %}} +{{% tab "K3s" %}} +We recommend setting up the following infrastructure for a high-availability installation: -### OS, Docker, Hardware, and Networking +- **Two Linux nodes,** typically virtual machines, in the infrastructure provider of your choice. +- **An external database** to store the cluster data. PostgreSQL, MySQL, and etcd are supported. +- **A load balancer** to direct traffic to the two nodes. +- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it. +- **A private Docker registry** to distribute Docker images to your machines. -Make sure that your node(s) fulfill the general [installation requirements.]({{}}/rancher/v2.x/en/installation/requirements/) - -### Private Registry - -Rancher supports air gap installs using a private registry. 
You must have your own private registry or other means of distributing Docker images to your machines. - -If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/). - -### CLI Tools - -The following CLI tools are required for the Kubernetes Install. Make sure these tools are installed on your workstation and available in your `$PATH`. - -- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) - Kubernetes command-line tool. -- [rke]({{}}/rke/latest/en/installation/) - Rancher Kubernetes Engine, cli for building Kubernetes clusters. -- [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes. Refer to the [Helm version requirements]({{}}/rancher/v2.x/en/installation/options/helm-version) to choose a version of Helm to install Rancher. - -{{% /tab %}} -{{% tab "Docker Install" %}} - -### OS, Docker, Hardware, and Networking - -Make sure that your node(s) fulfill the general [installation requirements.]({{}}/rancher/v2.x/en/installation/requirements/) - -### Private Registry - -Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines. - -If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/). -{{% /tab %}} -{{% /tabs %}} - -# Set up Infrastructure - -{{% tabs %}} -{{% tab "Kubernetes Install (Recommended)" %}} - -Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes install is comprised of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails. - -### Recommended Architecture - -- DNS for Rancher should resolve to a layer 4 load balancer -- The Load Balancer should forward port TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster. -- The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443. -- The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment. - -
Rancher installed on a Kubernetes cluster with layer 4 load balancer, depicting SSL termination at ingress controllers
- -![Rancher HA]({{}}/img/rancher/ha/rancher2ha.svg) - -### A. Provision three air gapped Linux hosts according to our requirements +### 1. Set up Linux Nodes These hosts will be disconnected from the internet, but require being able to connect with your private registry. -View hardware and software requirements for each of your cluster nodes in [Requirements]({{}}/rancher/v2.x/en/installation/requirements). +Make sure that your nodes fulfill the general installation requirements for [OS, Docker, hardware, and networking.]({{}}/rancher/v2.x/en/installation/requirements/) -### B. Set up your Load Balancer +For an example of one way to set up Linux nodes, refer to this [tutorial]({{}}/rancher/v2.x/en/installation/options/ec2-node) for setting up nodes as instances in Amazon EC2. -When setting up the Kubernetes cluster that will run the Rancher server components, an Ingress controller pod will be deployed on each of your nodes. The Ingress controller pods are bound to ports TCP/80 and TCP/443 on the host network and are the entry point for HTTPS traffic to the Rancher server. +### 2. Set up External Datastore -You will need to configure a load balancer as a basic Layer 4 TCP forwarder to direct traffic to these ingress controller pods. The exact configuration will vary depending on your environment. +The ability to run Kubernetes using a datastore other than etcd sets K3s apart from other Kubernetes distributions. This feature provides flexibility to Kubernetes operators. The available options allow you to select a datastore that best fits your use case. + +For a high-availability K3s installation, you will need to set up one of the following external databases: + +* [PostgreSQL](https://www.postgresql.org/) (certified against versions 10.7 and 11.5) +* [MySQL](https://www.mysql.com/) (certified against version 5.7) +* [etcd](https://etcd.io/) (certified against version 3.3.15) + +When you install Kubernetes, you will pass in details for K3s to connect to the database. + +For an example of one way to set up the database, refer to this [tutorial]({{}}/rancher/v2.x/en/installation/options/rds) for setting up a MySQL database on Amazon's RDS service. + +For the complete list of options that are available for configuring a K3s cluster datastore, refer to the [K3s documentation.]({{}}/k3s/latest/en/installation/datastore/) + +### 3. Set up the Load Balancer + +You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server. + +When Kubernetes gets set up in a later step, the K3s tool will deploy a Traefik Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames. + +When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the Traefik Ingress controller to listen for traffic destined for the Rancher hostname. The Traefik Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster. + +For your implementation, consider if you want or need to use a Layer-4 or Layer-7 load balancer: + +- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. 
We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic to port TCP/80 to the Ingress pod in the Rancher deployment. +- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination) + +For an example showing how to set up an NGINX load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nginx/) + +For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nlb/) > **Important:** -> Only use this load balancer (i.e, the `local` cluster Ingress) to load balance the Rancher server. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. +> Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications. -**Load Balancer Configuration Samples:** +### 4. Set up the DNS Record -- For an example showing how to set up an NGINX load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nginx) -- For an example showing how to set up an Amazon NLB load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nlb) +Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer. +Depending on your environment, this may be an A record pointing to the load balancer IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on. + +You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one. + +For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer) + +### 5. Set up a Private Docker Registry + +Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines. 
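If you do not already have a registry and only need something minimal to stage images for testing, the sketch below shows one way to run one. It assumes Docker is already installed on the registry host; the port, storage path, and test image are placeholders only, and a production registry should add TLS and authentication as described in the Docker documentation linked later in this section.

```bash
# Minimal sketch: run a local Docker registry that stores its data under /opt/registry.
docker run -d --restart=always --name registry \
  -p 5000:5000 \
  -v /opt/registry:/var/lib/registry \
  registry:2

# Quick check that the registry accepts pushes (busybox is only a test image).
docker pull busybox
docker tag busybox localhost:5000/busybox
docker push localhost:5000/busybox
```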
+ +In a later step, when you set up your K3s Kubernetes cluster, you will create a [private registries configuration file]({{}}/k3s/latest/en/installation/private-registry/) with details from this registry. + +If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry) {{% /tab %}} -{{% tab "Docker Install" %}} +{{% tab "RKE" %}} -The Docker installation is for Rancher users that are wanting to test out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. +To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure: -> **Important:** If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker installation to a Kubernetes Installation. +- **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon's EC2, Google Compute Engine, or vSphere. +- **A load balancer** to direct front-end traffic to the three nodes. +- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it. +- **A private Docker registry** to distribute Docker images to your machines. -Instead of running the Docker installation, you have the option to follow the Kubernetes Install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a Kubernetes Installation. +These nodes must be in the same region/data center. You may place these servers in separate availability zones. -### A. Provision a single, air gapped Linux host according to our Requirements +### Why three nodes? + +In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes. + +The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes. + +### 1. Set up Linux Nodes These hosts will be disconnected from the internet, but require being able to connect with your private registry. -View hardware and software requirements for each of your cluster nodes in [Requirements]({{}}/rancher/v2.x/en/installation/requirements). +Make sure that your nodes fulfill the general installation requirements for [OS, Docker, hardware, and networking.]({{}}/rancher/v2.x/en/installation/requirements/) + +For an example of one way to set up Linux nodes, refer to this [tutorial]({{}}/rancher/v2.x/en/installation/options/ec2-node) for setting up nodes as instances in Amazon EC2. + +### 2. Set up the Load Balancer + +You will also need to set up a load balancer to direct traffic to the Rancher replicas on all three nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
+ +When Kubernetes gets set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames. + +When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster. + +For your implementation, consider if you want or need to use a Layer-4 or Layer-7 load balancer: + +- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic to port TCP/80 to the Ingress pod in the Rancher deployment. +- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.]({{}}/rancher/v2.x/en/installation/options/chart-options/#external-tls-termination) + +For an example showing how to set up an NGINX load balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nginx/) + +For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.]({{}}/rancher/v2.x/en/installation/options/nlb/) + +> **Important:** +> Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications. + +### 3. Set up the DNS Record + +Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer. + +Depending on your environment, this may be an A record pointing to the LB IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on. + +You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one. + +For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer) + +### 4. 
Set up a Private Docker Registry + +Rancher supports air gap installs using a secure Docker private registry. You must have your own private registry or other means of distributing Docker images to your machines. + +In a later step, when you set up your RKE Kubernetes cluster, you will configure it to pull images from this private registry. + +If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry) + +{{% /tab %}} +{{% tab "Docker" %}} +> The Docker installation is for Rancher users who want to test out Rancher. Since there is only one node and a single Docker container, if the node goes down, you will lose all the data of your Rancher server. +> +> For running Rancher in production, we recommend installing Rancher on a high-availability Kubernetes cluster. There is no upgrade path to transition your Docker installation to a Kubernetes Installation. +> +> If you want to save resources by using a single node in the short term, while preserving a migration path to a high-availability installation, we recommend installing Rancher on a single-node Kubernetes cluster. + +### 1. Set up a Linux Node + +This host will be disconnected from the Internet, but needs to be able to connect to your private registry. + +Make sure that your node fulfills the general installation requirements for [OS, Docker, hardware, and networking.]({{}}/rancher/v2.x/en/installation/requirements/) + +For an example of one way to set up Linux nodes, refer to this [tutorial]({{}}/rancher/v2.x/en/installation/options/ec2-node) for setting up nodes as instances in Amazon EC2. + +### 2. Set up a Private Docker Registry + +Rancher supports air gap installs using a Docker private registry on your bastion server. You must have your own private registry or other means of distributing Docker images to your machines. + +In a later step, when you install Rancher, you will pull the Rancher images from this private registry. + +If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/) {{% /tab %}} {{% /tabs %}} diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md index 0f6386a6f62..ac06ed134eb 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md @@ -13,7 +13,7 @@ For development and testing environments only, Rancher can be installed by runni In this installation scenario, you'll install Docker on a single Linux host, and then deploy Rancher on your host using a single Docker container. > **Want to use an external load balancer?** -> See [Docker Install with an External Load Balancer]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-install-external-lb) instead. +> See [Docker Install with an External Load Balancer]({{}}/rancher/v2.x/en/installation/options/single-node-install-external-lb) instead. 
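Docker itself must be present on the host before you run any of the installation commands below. One way to install a supported version is with one of Rancher's Docker install scripts; the script URL and version shown here are only an example, so pick one that matches the requirements for your Rancher version.

```bash
# Sketch: install Docker 19.03 with Rancher's convenience script (the version is illustrative).
curl https://releases.rancher.com/install-docker/19.03.sh | sh

# Optionally let the current non-root user run docker commands.
sudo usermod -aG docker "$USER"
```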
# Requirements for OS, Docker, Hardware, and Networking @@ -44,8 +44,8 @@ Log into your Linux host, and then run the minimum installation command below. ```bash docker run -d --restart=unless-stopped \ - -p 80:80 -p 443:443 \ - rancher/rancher:latest + -p 80:80 -p 443:443 \ + rancher/rancher:latest ``` ### Option B: Bring Your Own Certificate, Self-signed @@ -68,11 +68,11 @@ After creating your certificate, run the Docker command below to install Rancher ```bash docker run -d --restart=unless-stopped \ - -p 80:80 -p 443:443 \ - -v //:/etc/rancher/ssl/cert.pem \ - -v //:/etc/rancher/ssl/key.pem \ - -v //:/etc/rancher/ssl/cacerts.pem \ - rancher/rancher:latest + -p 80:80 -p 443:443 \ + -v //:/etc/rancher/ssl/cert.pem \ + -v //:/etc/rancher/ssl/key.pem \ + -v //:/etc/rancher/ssl/cacerts.pem \ + rancher/rancher:latest ``` ### Option C: Bring Your Own Certificate, Signed by a Recognized CA @@ -97,11 +97,11 @@ After obtaining your certificate, run the Docker command below. ```bash docker run -d --restart=unless-stopped \ - -p 80:80 -p 443:443 \ - -v //:/etc/rancher/ssl/cert.pem \ - -v //:/etc/rancher/ssl/key.pem \ - rancher/rancher:latest \ - --no-cacerts + -p 80:80 -p 443:443 \ + -v //:/etc/rancher/ssl/cert.pem \ + -v //:/etc/rancher/ssl/key.pem \ + rancher/rancher:latest \ + --no-cacerts ``` ### Option D: Let's Encrypt Certificate @@ -124,9 +124,9 @@ After you fulfill the prerequisites, you can install Rancher using a Let's Encry ``` docker run -d --restart=unless-stopped \ - -p 80:80 -p 443:443 \ - rancher/rancher:latest \ - --acme-domain + -p 80:80 -p 443:443 \ + rancher/rancher:latest \ + --acme-domain ``` ## Advanced Options diff --git a/content/rancher/v2.x/en/installation/requirements/_index.md b/content/rancher/v2.x/en/installation/requirements/_index.md index 65c9a808a6a..ec293b2fa0e 100644 --- a/content/rancher/v2.x/en/installation/requirements/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/_index.md @@ -13,6 +13,7 @@ Make sure the node(s) for the Rancher server fulfill the following requirements: - [Operating Systems and Docker Requirements](#operating-systems-and-docker-requirements) - [Hardware Requirements](#hardware-requirements) - [CPU and Memory](#cpu-and-memory) + - [CPU and Memory for Rancher prior to v2.4.0](#cpu-and-memory-for-rancher-prior-to-v2-4-0) - [Disks](#disks) - [Networking Requirements](#networking-requirements) - [Node IP Addresses](#node-ip-addresses) @@ -26,7 +27,15 @@ The Rancher UI works best in Firefox or Chrome. Rancher should work with any modern Linux distribution and any modern Docker version. -Rancher has been tested and is supported with Ubuntu, CentOS, Oracle Linux, RancherOS, and RedHat Enterprise Linux. +Rancher and RKE have been tested and are supported on Ubuntu, CentOS, Oracle Linux, RancherOS, and RedHat Enterprise Linux. + +K3s should run on just about any flavor of Linux. However, K3s is tested on the following operating systems and their subsequent non-major releases: + +- Ubuntu 16.04 (amd64) +- Ubuntu 18.04 (amd64) +- Raspbian Buster (armhf) + +If you are installing Rancher on a K3s cluster with Alpine Linux, follow [these steps]({{}}/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup) for additional setup. For details on which OS and Docker versions were tested with each Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/) @@ -34,7 +43,7 @@ All supported operating systems are 64-bit x86. The `ntp` (Network Time Protocol) package should be installed. 
This prevents errors with certificate validation that can occur when the time is not synchronized between the client and server. -Some distributions of Linux derived from RHEL, including Oracle Linux, may have default firewall rules that block communication with Helm. This [how-to guide]({{}}/rancher/v2.x/en/installation/options/firewall) shows how to check the default firewall rules and how to open the ports with `firewalld` if necessary. +Some distributions of Linux may have default firewall rules that block communication with Helm. This [how-to guide]({{}}/rancher/v2.x/en/installation/options/firewall) shows how to check the default firewall rules for Oracle Linux and how to open the ports with `firewalld` if necessary. If you plan to run Rancher on ARM64, see [Running on ARM64 (Experimental).]({{}}/rancher/v2.x/en/installation/options/arm64-platform/) @@ -48,25 +57,45 @@ This section describes the CPU, memory, and disk requirements for the nodes wher ### CPU and Memory -Hardware requirements scale based on the size of your Rancher deployment. Provision each individual node according to the requirements. The requirements are different depending on if you are installing Rancher with Docker or on a Kubernetes cluster. +Hardware requirements scale based on the size of your Rancher deployment. Provision each individual node according to the requirements. The requirements are different depending on if you are installing Rancher in a single container with Docker, or if you are installing Rancher on a Kubernetes cluster. {{% tabs %}} -{{% tab "Nodes in Kubernetes Install" %}} +{{% tab "RKE Install Requirements" %}} -These requirements apply to [installing Rancher on a Kubernetes cluster.]({{}}/rancher/v2.x/en/installation/k8s-install/) +These requirements apply to each host in an [RKE Kubernetes cluster where the Rancher server is installed.]({{}}/rancher/v2.x/en/installation/k8s-install/) -| Deployment Size | Clusters | Nodes | vCPUs | RAM | -| --------------- | --------- | ---------- | ----------------------------------------------- | ----------------------------------------------- | -| Small | Up to 5 | Up to 50 | 2 | 8 GB | -| Medium | Up to 15 | Up to 200 | 4 | 16 GB | -| Large | Up to 50 | Up to 500 | 8 | 32 GB | -| X-Large | Up to 100 | Up to 1000 | 32 | 128 GB | -| XX-Large | 100+ | 1000+ | [Contact Rancher](https://rancher.com/contact/) | [Contact Rancher](https://rancher.com/contact/) | +Performance increased in Rancher v2.4.0. For the requirements of Rancher prior to v2.4.0, refer to [this section.](#cpu-and-memory-for-rancher-prior-to-v2-4-0) + +| Deployment Size | Clusters | Nodes | vCPUs | RAM | +| --------------- | --------- | ---------- | -------| ------- | +| Small | Up to 150 | Up to 1500 | 2 | 8 GB | +| Medium | Up to 300 | Up to 3000 | 4 | 16 GB | +| Large | Up to 500 | Up to 5000 | 8 | 32 GB | +| X-Large | Up to 1000 | Up to 10000 | 16 | 64 GB | +| XX-Large | Up to 2000 | Up to 20000 | 32 | 128GB | + +[Contact Rancher](https://rancher.com/contact/) for more than 2000 clusters and/or 20000 nodes. 
+{{% /tab %}} + +{{% tab "K3s Install Requirements" %}} + +These requirements apply to each host in a [K3s Kubernetes cluster where the Rancher server is installed.]({{}}/rancher/v2.x/en/installation/k8s-install/) + +| Deployment Size | Clusters | Nodes | vCPUs | RAM | Database Size | +| --------------- | ---------- | ------------ | -------| ---------| ------------ | +| Small | Up to 150 | Up to 1500 | 2 | 8 GB | 2 cores 4GB + 1000 IOPS | +| Medium | Up to 300 | Up to 3000 | 4 | 16 GB | 2 cores 4GB + 1000 IOPS | +| Large | Up to 500 | Up to 5000 | 8 | 32 GB | 2 cores 4GB + 1000 IOPS | +| X-Large | Up to 1000 | Up to 10000 | 16 | 64 GB | 2 cores 4GB + 1000 IOPS | +| XX-Large | Up to 2000 | Up to 20000 | 32 | 128GB | 2 cores 4GB + 1000 IOPS | + +[Contact Rancher](https://rancher.com/contact/) for more than 2000 clusters and/or 20000 nodes. {{% /tab %}} -{{% tab "Node in Docker Install" %}} -These requirements apply to [single node]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) installations of Rancher. +{{% tab "Docker Install Requirements" %}} + +These requirements apply to a host with a [single-node]({{}}/rancher/v2.x/en/installation/other-installation-methods/single-node-docker) installation of Rancher. | Deployment Size | Clusters | Nodes | vCPUs | RAM | | --------------- | -------- | --------- | ----- | ---- | @@ -76,6 +105,20 @@ These requirements apply to [single node]({{}}/rancher/v2.x/en/installa {{% /tab %}} {{% /tabs %}} +### CPU and Memory for Rancher prior to v2.4.0 + +{{% accordion label="Click to expand" %}} +These requirements apply to installing Rancher on an RKE Kubernetes cluster prior to Rancher v2.4.0: + +| Deployment Size | Clusters | Nodes | vCPUs | RAM | +| --------------- | --------- | ---------- | ----------------------------------------------- | ----------------------------------------------- | +| Small | Up to 5 | Up to 50 | 2 | 8 GB | +| Medium | Up to 15 | Up to 200 | 4 | 16 GB | +| Large | Up to 50 | Up to 500 | 8 | 32 GB | +| X-Large | Up to 100 | Up to 1000 | 32 | 128 GB | +| XX-Large | 100+ | 1000+ | [Contact Rancher](https://rancher.com/contact/) | [Contact Rancher](https://rancher.com/contact/) | +{{% /accordion %}} + ### Disks Rancher performance depends on etcd in the cluster performance. To ensure optimal speed, we recommend always using SSD disks to back your Rancher management Kubernetes cluster. On cloud providers, you will also want to use the minimum size that allows the maximum IOPS. In larger clusters, consider using dedicated storage devices for etcd data and wal directories. @@ -92,13 +135,65 @@ Each node used should have a static IP configured, regardless of whether you are This section describes the port requirements for nodes running the `rancher/rancher` container. -The port requirements are different depending on whether you are installing Rancher on a single node or on a high-availability Kubernetes cluster. - -- **For a Docker installation,** you only need to open the ports required to enable Rancher to communicate with downstream user clusters. -- **For a high-availability installation,** the same ports need to be opened, as well as additional ports required to set up the Kubernetes cluster that Rancher is installed on. +The port requirements are different depending on whether you are installing Rancher on a K3s cluster, on an RKE cluster, or in a single Docker container. 
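On hosts that use `firewalld`, the general pattern for opening a required port looks like the sketch below; this is only an illustration, so open exactly the ports listed for your installation type in the tabs that follow.

```bash
# Example: allow inbound HTTP/HTTPS to a Rancher server node, then reload the firewall rules.
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --reload
```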
{{% tabs %}} -{{% tab "Kubernetes Install Port Requirements" %}} +{{% tab "K3s" %}} +### Ports for Communication with Downstream Clusters + +To communicate with downstream clusters, Rancher requires different ports to be open depending on the infrastructure you are using. + +For example, if you are deploying Rancher on nodes hosted by an infrastructure provider, port `22` must be open for SSH. + +The following diagram depicts the ports that are opened for each [cluster type]({{}}/rancher/v2.x/en/cluster-provisioning). + +
Port Requirements for the Rancher Management Plane
+ +![Basic Port Requirements]({{}}/img/rancher/port-communications.svg) + +The following tables break down the port requirements for inbound and outbound traffic: + +
Inbound Rules for Rancher Nodes
+ +| Protocol | Port | Source | Description | +| -------- | ---- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------- | +| TCP | 80 | Load balancer/proxy that does external SSL termination | Rancher UI/API when external SSL termination is used | +| TCP | 443 |
  • server nodes
  • agent nodes
  • hosted/imported Kubernetes
  • any source that needs to be able to use the Rancher UI or API
| Rancher agent, Rancher UI/API, kubectl | + +
Outbound Rules for Rancher Nodes
+ +| Protocol | Port | Destination | Description | +| -------- | ---- | -------------------------------------------------------- | --------------------------------------------- | +| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | +| TCP | 443 | `35.160.43.145/32`, `35.167.242.46/32`, `52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 2376 | Any node IP from a node created using Node driver | Docker daemon TLS port used by Docker Machine | +| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | + +**Note** Rancher nodes may also require additional outbound access for any external [authentication provider]({{}}/rancher/v2.x/en/admin-settings/authentication/) which is configured (LDAP for example). + +### Additional Port Requirements for Nodes in a K3s Kubernetes Cluster + +You will need to open additional ports to launch the Kubernetes cluster that is required for a high-availability installation of Rancher. + +The K3s server needs port 6443 to be accessible by the nodes. + +The nodes need to be able to reach other nodes over UDP port 8472 when Flannel VXLAN is used. The node should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel. However, if you do not use Flannel and provide your own custom CNI, then port 8472 is not needed by K3s. + +If you wish to utilize the metrics server, you will need to open port 10250 on each node. + +> **Important:** The VXLAN port on nodes should not be exposed to the world as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disables access to port 8472. + +
Inbound Rules for Rancher Server Nodes
+ +| Protocol | Port | Source | Description +|-----|-----|----------------|---| +| TCP | 6443 | K3s server nodes | Kubernetes API +| UDP | 8472 | K3s server and agent nodes | Required only for Flannel VXLAN +| TCP | 10250 | K3s server and agent nodes | kubelet + +Typically all outbound traffic is allowed. +{{% /tab %}} +{{% tab "RKE" %}} ### Ports for Communication with Downstream Clusters To communicate with downstream clusters, Rancher requires different ports to be open depending on the infrastructure you are using. @@ -131,11 +226,13 @@ The following tables break down the port requirements for inbound and outbound t **Note** Rancher nodes may also require additional outbound access for any external [authentication provider]({{}}/rancher/v2.x/en/admin-settings/authentication/) which is configured (LDAP for example). -### Additional Port Requirements for Nodes in an HA/Kubernetes Cluster +### Additional Port Requirements for Nodes in an RKE Kubernetes Cluster -You will need to open additional ports to launch the Kubernetes cluster that are required for a high-availability installation of Rancher. +You will need to open additional ports to launch the Kubernetes cluster that is required for a high-availability installation of Rancher. -If you follow the Rancher installation documentation for setting up a Kubernetes cluster using RKE, you will set up a cluster in which all three nodes have all three roles: etcd, controlplane, and worker. In that case, you can refer to this list of requirements for each node with all three roles: +If you follow the Rancher installation documentation for setting up a Kubernetes cluster using RKE, you will set up a cluster in which all three nodes have all three roles: etcd, controlplane, and worker. In that case, you can refer to this list of requirements for each node with all three roles. + +If you installed Rancher on a Kubernetes cluster that doesn't have all three roles on each node, refer to the [port requirements for the Rancher Kubernetes Engine (RKE).]({{}}/rke/latest/en/os/#ports) The RKE docs show a breakdown of the port requirements for each role.
Inbound Rules for Nodes with All Three Roles: etcd, Controlplane, and Worker
@@ -170,14 +267,13 @@ TCP | 9099 | the node itself (local traffic, not across nodes) | Canal/Flannel l TCP | 10250 | etcd nodes, controlplane nodes, and worker nodes | kubelet | TCP | 10254 | the node itself (local traffic, not across nodes) | Ingress controller livenessProbe/readinessProbe -The ports that need to be opened for each node depend on the node's Kubernetes role: etcd, controlplane, or worker. If you installed Rancher on a Kubernetes cluster that doesn't have all three roles on each node, refer to the [port requirements for the Rancher Kubernetes Engine (RKE).]({{}}/rke/latest/en/os/#ports) The RKE docs show a breakdown of the port requirements for each role. {{% /tab %}} -{{% tab "Single Node Port Requirements" %}} +{{% tab "Docker" %}} ### Ports for Communication with Downstream Clusters -To communicate with downstream clusters, Rancher requires different ports to be open depending on the infrastructure you are using. +For a Docker installation, you only need to open the ports required to enable Rancher to communicate with downstream user clusters. -For example, if you are deploying Rancher on nodes hosted by an infrastructure provider, port `22` must be open for SSH. +The port requirements depend on the infrastructure you are using. For example, if you are deploying Rancher on nodes hosted by an infrastructure provider, port `22` must be open for SSH. The following diagram depicts the ports that are opened for each [cluster type]({{}}/rancher/v2.x/en/cluster-provisioning). @@ -185,12 +281,12 @@ The following diagram depicts the ports that are opened for each [cluster type]( ![Basic Port Requirements]({{}}/img/rancher/port-communications.svg) -The following tables break down the port requirements for inbound and outbound traffic: +The following tables break down the port requirements for Rancher nodes, for inbound and outbound traffic: **Note** Rancher nodes may also require additional outbound access for any external [authentication provider]({{}}/rancher/v2.x/en/admin-settings/authentication/) which is configured (LDAP for example). -
Inbound Rules for Rancher Nodes
+
Inbound Rules
| Protocol | Port | Source | Description | | -------- | ---- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------- | @@ -198,7 +294,7 @@ The following tables break down the port requirements for inbound and outbound t | TCP | 443 |
  • etcd nodes
  • controlplane nodes
  • worker nodes
  • hosted/imported Kubernetes
  • any source that needs to be able to use the Rancher UI or API
| Rancher agent, Rancher UI/API, kubectl | -
Outbound Rules for Rancher Nodes
+
Outbound Rules
| Protocol | Port | Source | Description | | -------- | ---- | -------------------------------------------------------- | --------------------------------------------- | diff --git a/content/rancher/v2.x/en/installation/requirements/ports/_index.md b/content/rancher/v2.x/en/installation/requirements/ports/_index.md index 7a2a7ec8dce..26278915bf0 100644 --- a/content/rancher/v2.x/en/installation/requirements/ports/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/ports/_index.md @@ -8,15 +8,82 @@ To operate properly, Rancher requires a number of ports to be open on Rancher no ## Rancher Nodes -The following table lists the ports that need to be open to and from nodes that are running the Rancher server container for [Docker installs]({{}}/rancher/v2.x/en/installation/single-node-install/) or pods for [installing Rancher on Kubernetes]({{}}/rancher/v2.x/en/installation/k8s-install/). +The following table lists the ports that need to be open to and from nodes that are running the Rancher server. -{{< ports-rancher-nodes >}} +The port requirements differ based on whether Rancher is installed in a K3s Kubernetes cluster, an RKE Kubernetes cluster, or a single Docker container. -**Note** Rancher nodes may also require additional outbound access for any external authentication provider which is configured (LDAP for example). +{{% tabs %}} +{{% tab "K3s" %}} + +The K3s server needs port 6443 to be accessible by the nodes. + +The nodes need to be able to reach other nodes over UDP port 8472 when Flannel VXLAN is used. The node should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel. However, if you do not use Flannel and provide your own custom CNI, then port 8472 is not needed by K3s. + +If you wish to utilize the metrics server, you will need to open port 10250 on each node. + +> **Important:** The VXLAN port on nodes should not be exposed to the world as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disables access to port 8472. + +
Inbound Rules for Rancher Server Nodes
+ +| Protocol | Port | Source | Description +|-----|-----|----------------|---| +| TCP | 6443 | K3s server nodes | Kubernetes API +| UDP | 8472 | K3s server and agent nodes | Required only for Flannel VXLAN. +| TCP | 10250 | K3s server and agent nodes | kubelet + +Typically all outbound traffic is allowed. + +{{% /tab %}} +{{% tab "RKE" %}} +
Inbound Rules for Rancher Nodes
+
+| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 80 | Load Balancer/Reverse Proxy | HTTP traffic to Rancher UI/API |
| TCP | 443 |
  • Load Balancer/Reverse Proxy
  • IPs of all cluster nodes and other API/UI clients
| HTTPS traffic to Rancher UI/API | + +
Outbound Rules for Rancher Nodes
+
+| Protocol | Port | Destination | Description |
+|-----|-----|----------------|---|
+| TCP | 443 | `35.160.43.145`,`35.167.242.46`,`52.33.59.17` | Rancher catalog (git.rancher.io) |
+| TCP | 22 | Any node created using a node driver | SSH provisioning of node by node driver |
+| TCP | 2376 | Any node created using a node driver | Docker daemon TLS port used by node driver |
+| TCP | Provider dependent | Port of the Kubernetes API endpoint in hosted cluster | Kubernetes API |
+
+{{% /tab %}}
+{{% tab "Docker" %}}
+
+
Inbound Rules for Rancher Node
+
+| Protocol | Port | Source | Description
|-----|-----|----------------|---|
| TCP | 80 | Load balancer/proxy that does external SSL termination | Rancher UI/API when external SSL termination is used
| TCP | 443 |
  • hosted/imported Kubernetes
  • any source that needs to be able to use the Rancher UI or API
| Rancher agent, Rancher UI/API, kubectl + +
Outbound Rules for Rancher Node
+ +| Protocol | Port | Source | Description | +|-----|-----|----------------|---| +| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver | +| TCP | 443 | `35.160.43.145/32`,`35.167.242.46/32`,`52.33.59.17/32` | git.rancher.io (catalogs) | +| TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine | +| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server | + +{{% /tab %}} +{{% /tabs %}} + +> **Notes:** +> +> - Rancher nodes may also require additional outbound access for any external authentication provider which is configured (LDAP for example). +> - Kubernetes recommends TCP 30000-32767 for node port services. +> - For firewalls, traffic may need to be enabled within the cluster and pod CIDR. ## Downstream Kubernetes Cluster Nodes -The ports required to be open for cluster nodes changes depending on how the cluster was launched. Each of the tabs below list the ports that need to be opened for different [cluster creation options]({{}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-options). +Downstream Kubernetes clusters run your apps and services. This section describes what ports need to be opened on the nodes in downstream clusters so that Rancher can communicate with them. + +The port requirements differ depending on how the downstream cluster was launched. Each of the tabs below list the ports that need to be opened for different [cluster types]({{}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-options). >**Tip:** > diff --git a/content/rancher/v2.x/en/k8s-in-rancher/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/_index.md index 5b112b5725e..71830fc1f00 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/_index.md @@ -7,19 +7,19 @@ aliases: - /rancher/v2.x/en/concepts/resources/ --- -When your project is set up, [project members]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can start managing their applications and all the components that comprise it. +When your project is set up, [project members]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can start managing their applications and all the components that comprise it. ## Workloads -Deploy applications to your cluster nodes using [workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/), which are objects that contain pods that run your apps, along with metadata that set rules for the deployment's behavior. Workloads can be deployed within the scope of the entire clusters or within a namespace. +Deploy applications to your cluster nodes using [workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/), which are objects that contain pods that run your apps, along with metadata that set rules for the deployment's behavior. Workloads can be deployed within the scope of the entire clusters or within a namespace. -When deploying a workload, you can deploy from any image. There are a variety of [workload types]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/#workload-types) to choose from which determine how your application should run. +When deploying a workload, you can deploy from any image. There are a variety of [workload types]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/#workload-types) to choose from which determine how your application should run. Following a workload deployment, you can continue working with it. 
You can: -- [Upgrade]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads) the workload to a newer version of the application it's running. -- [Roll back]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads) a workload to a previous version, if an issue occurs during upgrade. -- [Add a sidecar]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar), which is a workload that supports a primary workload. +- [Upgrade]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads) the workload to a newer version of the application it's running. +- [Roll back]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads) a workload to a previous version, if an issue occurs during upgrade. +- [Add a sidecar]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar), which is a workload that supports a primary workload. ## Load Balancing and Ingress @@ -31,10 +31,10 @@ If you want your applications to be externally accessible, you must add a load b Rancher supports two types of load balancers: -- [Layer-4 Load Balancers]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#layer-4-load-balancer) -- [Layer-7 Load Balancers]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#layer-7-load-balancer) +- [Layer-4 Load Balancers]({{}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#layer-4-load-balancer) +- [Layer-7 Load Balancers]({{}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#layer-7-load-balancer) -For more information, see [load balancers]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers). +For more information, see [load balancers]({{}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers). #### Ingress @@ -42,29 +42,29 @@ Load Balancers can only handle one IP address per service, which means if you ru Ingress is a set or rules that act as a load balancer. Ingress works in conjunction with one or more ingress controllers to dynamically route service requests. When the ingress receives a request, the ingress controller(s) in your cluster program the load balancer to direct the request to the correct service based on service subdomains or path rules that you've configured. -For more information, see [Ingress]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress). +For more information, see [Ingress]({{}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress). When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a Global DNS entry. -For more information, see [Global DNS]({{< baseurl >}}/rancher/v2.x/en/catalog/globaldns/). +For more information, see [Global DNS]({{}}/rancher/v2.x/en/catalog/globaldns/). ## Service Discovery After you expose your cluster to external requests using a load balancer and/or ingress, it's only available by IP address. To create a resolveable hostname, you must create a service record, which is a record that maps an IP address, external hostname, DNS record alias, workload(s), or labelled pods to a specific hostname. -For more information, see [Service Discovery]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/service-discovery). +For more information, see [Service Discovery]({{}}/rancher/v2.x/en/k8s-in-rancher/service-discovery). 
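
As a compact illustration of the ingress behavior described in the sections above (host and path routing plus SSL termination), the shape of a single Ingress object might look like the sketch below. The hostname, service name, and certificate secret are hypothetical, and the API version shown assumes a Kubernetes release from this documentation's era; Rancher creates an equivalent object for you when you add an ingress through the UI.

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress            # hypothetical name
  namespace: default
spec:
  tls:
  - hosts:
    - hello.example.com          # SSL termination for this hostname
    secretName: hello-cert       # certificate stored as a Kubernetes secret
  rules:
  - host: hello.example.com      # requests for this hostname...
    http:
      paths:
      - path: /                  # ...and this path...
        backend:
          serviceName: hello-world   # ...are routed to this service
          servicePort: 80
```
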
## Pipelines -After your project has been [configured to a version control provider]({{< baseurl >}}/rancher/v2.x/en/project-admin/pipelines/#version-control-providers), you can add the repositories and start configuring a pipeline for each repository. +After your project has been [configured to a version control provider]({{}}/rancher/v2.x/en/project-admin/pipelines/#version-control-providers), you can add the repositories and start configuring a pipeline for each repository. -For more information, see [Pipelines]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/pipelines/). +For more information, see [Pipelines]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/). ## Applications Besides launching individual components of an application, you can use the Rancher catalog to start launching applications, which are Helm charts. -For more information, see [Applications in a Project]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/). +For more information, see [Applications in a Project]({{}}/rancher/v2.x/en/catalog/apps/). ## Kubernetes Resources @@ -72,7 +72,7 @@ Within the context of a Rancher project or namespace, _resources_ are files and Resources include: -- [Certificates]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/certificates/): Files used to encrypt/decrypt data entering or leaving the cluster. -- [ConfigMaps]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/configmaps/): Files that store general configuration information, such as a group of config files. -- [Secrets]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/secrets/): Files that store sensitive data like passwords, tokens, or keys. -- [Registries]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/registries/): Files that carry credentials used to authenticate with private registries. +- [Certificates]({{}}/rancher/v2.x/en/k8s-in-rancher/certificates/): Files used to encrypt/decrypt data entering or leaving the cluster. +- [ConfigMaps]({{}}/rancher/v2.x/en/k8s-in-rancher/configmaps/): Files that store general configuration information, such as a group of config files. +- [Secrets]({{}}/rancher/v2.x/en/k8s-in-rancher/secrets/): Files that store sensitive data like passwords, tokens, or keys. +- [Registries]({{}}/rancher/v2.x/en/k8s-in-rancher/registries/): Files that carry credentials used to authenticate with private registries. diff --git a/content/rancher/v2.x/en/k8s-in-rancher/certificates/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/certificates/_index.md index e4c3b501564..0bf10731b0f 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/certificates/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/certificates/_index.md @@ -18,13 +18,13 @@ Add SSL certificates to either projects, namespaces, or both. A project scoped c 1. Enter a **Name** for the certificate. - >**Note:** Kubernetes classifies SSL certificates as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your SSL certificate must have a unique name among the other certificates, ConfigMaps, registries, and secrets within your project/workspace. + >**Note:** Kubernetes classifies SSL certificates as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your SSL certificate must have a unique name among the other certificates, registries, and secrets within your project/workspace. 1. Select the **Scope** of the certificate. 
- **Available to all namespaces in this project:** The certificate is available for any deployment in any namespaces in the project. - - **Available to a single namespace:** The certificate is only available for the deployments in one [namespace]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces). If you choose this option, select a **Namespace** from the drop-down list or click **Add to a new namespace** to add the certificate to a namespace you create on the fly. + - **Available to a single namespace:** The certificate is only available for the deployments in one [namespace]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces). If you choose this option, select a **Namespace** from the drop-down list or click **Add to a new namespace** to add the certificate to a namespace you create on the fly. 1. From **Private Key**, either copy and paste your certificate's private key into the text box (include the header and footer), or click **Read from a file** to browse to the private key on your file system. If possible, we recommend using **Read from a file** to reduce likelihood of error. @@ -42,4 +42,4 @@ Add SSL certificates to either projects, namespaces, or both. A project scoped c ## What's Next? -Now you can add the certificate when launching an ingress within the current project or namespace. For more information, see [Adding Ingress]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/). +Now you can add the certificate when launching an ingress within the current project or namespace. For more information, see [Adding Ingress]({{}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/). diff --git a/content/rancher/v2.x/en/k8s-in-rancher/configmaps/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/configmaps/_index.md index ea62cc86e4f..20419b97d07 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/configmaps/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/configmaps/_index.md @@ -18,7 +18,7 @@ ConfigMaps accept key value pairs in common string formats, like config files or 1. Enter a **Name** for the Config Map. - >**Note:** Kubernetes classifies ConfigMaps as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your ConfigMaps must have a unique name among the other certificates, ConfigMaps, registries, and secrets within your workspace. + >**Note:** Kubernetes classifies ConfigMaps as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your ConfigMaps must have a unique name among the other certificates, registries, and secrets within your workspace. 1. Select the **Namespace** you want to add Config Map to. You can also add a new namespace on the fly by clicking **Add to a new namespace**. @@ -26,7 +26,7 @@ ConfigMaps accept key value pairs in common string formats, like config files or 1. Click **Save**. - >**Note:** Don't use ConfigMaps to store sensitive data [use a secret]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/secrets/). + >**Note:** Don't use ConfigMaps to store sensitive data [use a secret]({{}}/rancher/v2.x/en/k8s-in-rancher/secrets/). > >**Tip:** You can add multiple key value pairs to the ConfigMap by copying and pasting. 
> @@ -41,4 +41,4 @@ Now that you have a ConfigMap added to a namespace, you can add it to a workload - Application environment variables. - Specifying parameters for a Volume mounted to the workload. -For more information on adding ConfigMaps to a workload, see [Deploying Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). +For more information on adding ConfigMaps to a workload, see [Deploying Workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md index 2301619cd7a..b5f6ea2d0b2 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md @@ -19,17 +19,17 @@ The way that you manage HPAs is different based on your version of the Kubernete HPAs are also managed differently based on your version of Rancher: -- **For Rancher v2.3.0+**: You can create, manage, and delete HPAs using the Rancher UI. From the Rancher UI you can configure the HPA to scale based on CPU and memory utilization. For more information, refer to [Managing HPAs with the Rancher UI]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). To scale the HPA based on custom metrics, you still need to use `kubectl`. For more information, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus). -- **For Rancher Prior to v2.3.0:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl). +- **For Rancher v2.3.0+**: You can create, manage, and delete HPAs using the Rancher UI. From the Rancher UI you can configure the HPA to scale based on CPU and memory utilization. For more information, refer to [Managing HPAs with the Rancher UI]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). To scale the HPA based on custom metrics, you still need to use `kubectl`. For more information, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus). +- **For Rancher Prior to v2.3.0:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl). You might have additional HPA installation steps if you are using an older version of Rancher: - **For Rancher v2.0.7+:** Clusters created in Rancher v2.0.7 and higher automatically have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use HPA. -- **For Rancher Prior to v2.0.7:** Clusters created in Rancher prior to v2.0.7 don't automatically have the requirements needed to use HPA. 
For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7). +- **For Rancher Prior to v2.0.7:** Clusters created in Rancher prior to v2.0.7 don't automatically have the requirements needed to use HPA. For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7). ## Testing HPAs with a Service Deployment -In Rancher v2.3.x+, you can see your HPA's current number of replicas by going to your project and clicking **Resources > HPA.** For more information, refer to [Get HPA Metrics and Status]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/). +In Rancher v2.3.x+, you can see your HPA's current number of replicas by going to your project and clicking **Resources > HPA.** For more information, refer to [Get HPA Metrics and Status]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/). You can also use `kubectl` to get the status of HPAs that you test with your load testing tool. For more information, refer to [Testing HPAs with kubectl] -({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/). +({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/). diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background/_index.md index 222b0cb3d8c..d0d487a49ed 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background/_index.md @@ -20,7 +20,7 @@ HPA improves your services by: ## How HPA Works -![HPA Schema]({{< baseurl >}}/img/rancher/horizontal-pod-autoscaler.jpg) +![HPA Schema]({{}}/img/rancher/horizontal-pod-autoscaler.jpg) HPA is implemented as a control loop, with a period controlled by the `kube-controller-manager` flags below: diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md index 1d6d4584a0b..ab9b55db752 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md @@ -19,7 +19,7 @@ Be sure that your Kubernetes cluster services are running with these flags at mi - `horizontal-pod-autoscaler-upscale-delay: "3m0s"` - `horizontal-pod-autoscaler-sync-period: "30s"` -For an RKE Kubernetes cluster definition, add this snippet in the `services` section. To add this snippet using the Rancher v2.0 UI, open the **Clusters** view and select **Ellipsis (...) > Edit** for the cluster in which you want to use HPA. Then, from **Cluster Options**, click **Edit as YAML**. Add the following snippet to the `services` section: +For an RKE Kubernetes cluster definition, add this snippet in the `services` section. 
To add this snippet using the Rancher v2.0 UI, open the **Clusters** view and select **⋮ > Edit** for the cluster in which you want to use HPA. Then, from **Cluster Options**, click **Edit as YAML**. Add the following snippet to the `services` section: ``` services: diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md index 2d3cf10c87c..0d19fa185e2 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md @@ -13,11 +13,11 @@ This section describes HPA management with `kubectl`. This document has instruct ### Note For Rancher v2.3.x -In Rancher v2.3.x, you can create, view, and delete HPAs from the Rancher UI. You can also configure them to scale based on CPU or memory usage from the Rancher UI. For more information, refer to [Managing HPAs with the Rancher UI]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). For scaling HPAs based on other metrics than CPU or memory, you still need `kubectl`. +In Rancher v2.3.x, you can create, view, and delete HPAs from the Rancher UI. You can also configure them to scale based on CPU or memory usage from the Rancher UI. For more information, refer to [Managing HPAs with the Rancher UI]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). For scaling HPAs based on other metrics than CPU or memory, you still need `kubectl`. ### Note For Rancher Prior to v2.0.7 -Clusters created with older versions of Rancher don't automatically have all the requirements to create an HPA. To install an HPA on these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7). +Clusters created with older versions of Rancher don't automatically have all the requirements to create an HPA. To install an HPA on these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7). ##### Basic kubectl Command for Managing HPAs diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md index 5a3af016138..b08eb8f8624 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md @@ -7,7 +7,7 @@ _Available as of v2.3.0_ The Rancher UI supports creating, managing, and deleting HPAs. You can configure CPU or memory usage as the metric that the HPA uses to scale. -If you want to create HPAs that scale based on other metrics than CPU and memory, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus). 
+If you want to create HPAs that scale based on other metrics than CPU and memory, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus). ## Creating an HPA @@ -25,7 +25,7 @@ If you want to create HPAs that scale based on other metrics than CPU and memory 1. Specify the **Minimum Scale** and **Maximum Scale** for the HPA. -1. Configure the metrics for the HPA. You can choose memory or CPU usage as the metric that will cause the HPA to scale the service up or down. In the **Quantity** field, enter the percentage of the workload's memory or CPU usage that will cause the HPA to scale the service. To configure other HPA metrics, including metrics available from Prometheus, you need to [manage HPAs using kubectl]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus). +1. Configure the metrics for the HPA. You can choose memory or CPU usage as the metric that will cause the HPA to scale the service up or down. In the **Quantity** field, enter the percentage of the workload's memory or CPU usage that will cause the HPA to scale the service. To configure other HPA metrics, including metrics available from Prometheus, you need to [manage HPAs using kubectl]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus). 1. Click **Create** to create the HPA. @@ -48,7 +48,7 @@ If you want to create HPAs that scale based on other metrics than CPU and memory 1. Find the HPA which you would like to delete. -1. Click **Ellipsis (...) > Delete**. +1. Click **⋮ > Delete**. 1. Click **Delete** to confirm. diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md index cb49344658d..7df9409d618 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md @@ -3,7 +3,7 @@ title: Testing HPAs with kubectl weight: 3031 --- -This document describes how to check the status of your HPAs after scaling them up or down with your load testing tool. For information on how to check the status from the Rancher UI (at least version 2.3.x), refer to [Managing HPAs with the Rancher UI]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/). +This document describes how to check the status of your HPAs after scaling them up or down with your load testing tool. For information on how to check the status from the Rancher UI (at least version 2.3.x), refer to [Managing HPAs with the Rancher UI]({{}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/). For HPA to work correctly, service deployments should have resources request definitions for containers. Follow this hello-world example to test if HPA is working correctly. 
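
To make that requirement concrete, a minimal sketch might pair a Deployment that declares CPU requests with an HPA that targets it. The names, image, and thresholds below are hypothetical and only illustrate the shape of the objects:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world              # hypothetical workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: rancher/hello-world   # illustrative image
        resources:
          requests:
            cpu: 100m            # the HPA computes utilization against this request
            memory: 64Mi
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-world
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-world
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50   # scale up once average CPU use passes 50% of the request
```
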
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/_index.md index 096c69a6c17..6c56007e544 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/_index.md @@ -14,10 +14,10 @@ If you want your applications to be externally accessible, you must add a load b Rancher supports two types of load balancers: -- [Layer-4 Load Balancers]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#layer-4-load-balancer) -- [Layer-7 Load Balancers]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#layer-7-load-balancer) +- [Layer-4 Load Balancers]({{}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#layer-4-load-balancer) +- [Layer-7 Load Balancers]({{}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#layer-7-load-balancer) -For more information, see [load balancers]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers). +For more information, see [load balancers]({{}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers). ### Load Balancer Limitations @@ -28,9 +28,9 @@ Load Balancers have a couple of limitations you should be aware of: - If you want to use a load balancer with a Hosted Kubernetes cluster (i.e., clusters hosted in GKE, EKS, or AKS), the load balancer must be running within that cloud provider's infrastructure. Please review the compatibility tables regarding support for load balancers based on how you've provisioned your clusters: - - [Support for Layer-4 Load Balancing]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#support-for-layer-4-load-balancing) + - [Support for Layer-4 Load Balancing]({{}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#support-for-layer-4-load-balancing) - - [Support for Layer-7 Load Balancing]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#support-for-layer-7-load-balancing) + - [Support for Layer-7 Load Balancing]({{}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#support-for-layer-7-load-balancing) ## Ingress @@ -48,7 +48,7 @@ Ingress works in conjunction with one or more ingress controllers to dynamically Each Kubernetes Ingress resource corresponds roughly to a file in `/etc/nginx/sites-available/` containing a `server{}` configuration block, where requests for specific files and folders are configured. -Your ingress, which creates a port of entry to your cluster similar to a load balancer, can reside within your cluster or externally. Ingress and ingress controllers residing in RKE-launcher clusters are powered by [Nginx](https://www.nginx.com/). +Your ingress, which creates a port of entry to your cluster similar to a load balancer, can reside within your cluster or externally. Ingress and ingress controllers residing in RKE-launched clusters are powered by [Nginx](https://www.nginx.com/). Ingress can provide other functionality as well, such as SSL termination, name-based virtual hosting, and more. @@ -56,6 +56,6 @@ Ingress can provide other functionality as well, such as SSL termination, name-b > >Refrain from adding an Ingress to the `local` cluster. 
The Nginx Ingress Controller that Rancher uses acts as a global entry point for _all_ clusters managed by Rancher, including the `local` cluster. Therefore, when users try to access an application, your Rancher connection may drop due to the Nginx configuration being reloaded. We recommend working around this issue by deploying applications only in clusters that you launch using Rancher. -- For more information on how to set up ingress in Rancher, see [Ingress]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress). +- For more information on how to set up ingress in Rancher, see [Ingress]({{}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress). - For complete information about ingress and ingress controllers, see the [Kubernetes Ingress Documentation](https://kubernetes.io/docs/concepts/services-networking/ingress/) -- When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a Global DNS entry, see [Global DNS]({{< baseurl >}}/rancher/v2.x/en/catalog/globaldns/). +- When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a Global DNS entry, see [Global DNS]({{}}/rancher/v2.x/en/catalog/globaldns/). diff --git a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md index d90fc336f02..4392f9fedd8 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md @@ -6,7 +6,7 @@ aliases: - /rancher/v2.x/en/tasks/workloads/add-ingress/ --- -Ingress can be added for workloads to provide load balancing, SSL termination and host/path based routing. When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a [Global DNS entry]({{< baseurl >}}/rancher/v2.x/en/catalog/globaldns/). +Ingress can be added for workloads to provide load balancing, SSL termination and host/path based routing. When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a [Global DNS entry]({{}}/rancher/v2.x/en/catalog/globaldns/). 1. From the **Global** view, open the project that you want to add ingress to. @@ -14,7 +14,7 @@ Ingress can be added for workloads to provide load balancing, SSL termination an 1. Enter a **Name** for the ingress. -1. Select an existing **Namespace** from the drop-down list. Alternatively, you can create a new [namespace]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) on the fly by clicking **Add to a new namespace**. +1. Select an existing **Namespace** from the drop-down list. Alternatively, you can create a new [namespace]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) on the fly by clicking **Add to a new namespace**. 1. Create ingress forwarding **Rules**. @@ -65,7 +65,7 @@ Ingress can be added for workloads to provide load balancing, SSL termination an 1. If any of your ingress rules handle requests for encrypted ports, add a certificate to encrypt/decrypt communications. - >**Note:** You must have an SSL certificate that the ingress can use to encrypt/decrypt communications. For more information see [Adding SSL Certificates]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/certificates/). 
+ >**Note:** You must have an SSL certificate that the ingress can use to encrypt/decrypt communications. For more information see [Adding SSL Certificates]({{}}/rancher/v2.x/en/k8s-in-rancher/certificates/). 1. Click **Add Certificate**. diff --git a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/_index.md index 9edfc95f878..7ae7742018d 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/_index.md @@ -67,8 +67,8 @@ The benefit of using xip.io is that you obtain a working entrypoint URL immediat #### Tutorials -- [Kubernetes installation with External Load Balancer (HTTPS/Layer 7)]({{< baseurl >}}/rancher/v2.x/en/installation/ha-server-install-external-lb) -- [Kubernetes installation with External Load Balancer (TCP/Layer 4)]({{< baseurl >}}/rancher/v2.x/en/installation/ha-server-install) -- [Docker Installation with External Load Balancer]({{< baseurl >}}/rancher/v2.x/en/installation/single-node-install-external-lb) +- [Kubernetes installation with External Load Balancer (HTTPS/Layer 7)]({{}}/rancher/v2.x/en/installation/ha-server-install-external-lb) +- [Kubernetes installation with External Load Balancer (TCP/Layer 4)]({{}}/rancher/v2.x/en/installation/ha-server-install) +- [Docker Installation with External Load Balancer]({{}}/rancher/v2.x/en/installation/single-node-install-external-lb) diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/_index.md index e39437f23a7..e20e1794245 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/_index.md @@ -6,36 +6,176 @@ aliases: --- +Rancher's pipeline provides a simple CI/CD experience. Use it to automatically checkout code, run builds or scripts, publish Docker images or catalog applications, and deploy the updated software to users. + +Setting up a pipeline can help developers deliver new software as quickly and efficiently as possible. Using Rancher, you can integrate with a GitHub repository to setup a continuous integration (CI) pipeline. + +After configuring Rancher and GitHub, you can deploy containers running Jenkins to automate a pipeline execution: + +- Build your application from code to image. +- Validate your builds. +- Deploy your build images to your cluster. +- Run unit tests. +- Run regression tests. + >**Notes:** > ->- Pipelines are new and improved for Rancher v2.1! Therefore, if you configured pipelines while using v2.0.x, you'll have to reconfigure them after upgrading to v2.1. ->- Still using v2.0.x? See the pipeline documentation for [previous versions]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/docs-for-v2.0.x). +>- Pipelines improved in Rancher v2.1. Therefore, if you configured pipelines while using v2.0.x, you'll have to reconfigure them after upgrading to v2.1. +>- Still using v2.0.x? See the pipeline documentation for [previous versions]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/docs-for-v2.0.x). +>- Rancher's pipeline provides a simple CI/CD experience, but it does not offer the full power and flexibility of and is not a replacement of enterprise-grade Jenkins or other CI tools your team uses. 
-Before setting up any pipelines, review the [pipeline overview]({{< baseurl >}}/rancher/v2.x/en/project-admin/pipelines/) and ensure that the project has [configured authentication to your version control provider]({{< baseurl >}}/rancher/v2.x/en/project-admin/pipelines/#version-control-providers), e.g. GitHub, GitLab, Bitbucket. If you haven't configured a version control provider, you can always use [Rancher's example repositories]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/pipelines/example/) to view some common pipeline deployments. +This section covers the following topics: -If you can access a project, you can enable repositories to start building pipelines. Only an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owner or member]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can authorize version control providers. +- [Concepts](#concepts) +- [How Pipelines Work](#how-pipelines-work) +- [Roles-based Access Control for Pipelines](#roles-based-access-control-for-pipelines) +- [Setting up Pipelines](#setting-up-pipelines) + - [Configure version control providers](#1-configure-version-control-providers) + - [Configure repositories](#2-configure-repositories) + - [Configure the pipeline](#3-configure-the-pipeline) +- [Pipeline Configuration Reference](#pipeline-configuration-reference) +- [Running your Pipelines](#running-your-pipelines) +- [Triggering a Pipeline](#triggering-a-pipeline) + - [Modifying the Event Triggers for the Repository](#modifying-the-event-triggers-for-the-repository) -## Concepts +# Concepts -When setting up a pipeline, it's helpful to know a few related terms. +For an explanation of concepts and terminology used in this section, refer to [this page.]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/concepts) -- **Pipeline:** +# How Pipelines Work - A pipeline consists of stages and steps. It is based on a specific repository. It defines the process to build, test, and deploy your code. Rancher uses the [pipeline as code](https://jenkins.io/doc/book/pipeline-as-code/) model. Pipeline configuration is represented as a pipeline file in the source code repository, using the file name `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`. +After enabling the ability to use pipelines in a project, you can configure multiple pipelines in each project. Each pipeline is unique and can be configured independently. -- **Stages:** +A pipeline is configured off of a group of files that are checked into source code repositories. Users can configure their pipelines either through the Rancher UI or by adding a `.rancher-pipeline.yml` into the repository. - A pipeline stage consists of multiple steps. Stages are executed in the order defined in the pipeline file. The steps in a stage are executed concurrently. A stage starts when all steps in the former stage finish without failure. +Before pipelines can be configured, you will need to configure authentication to your version control provider, e.g. GitHub, GitLab, Bitbucket. If you haven't configured a version control provider, you can always use [Rancher's example repositories]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/example-repos/) to view some common pipeline deployments. -- **Steps:** +When you configure a pipeline in one of your projects, a namespace specifically for the pipeline is automatically created. 
The following components are deployed to it: - A pipeline step is executed inside a specified stage. A step fails if it exits with a code other than `0`. If a step exits with this failure code, the entire pipeline fails and terminates. + - **Jenkins:** -- **Workspace:** + The pipeline's build engine. Because project users do not directly interact with Jenkins, it's managed and locked. - The workspace is the working directory shared by all pipeline steps. In the beginning of a pipeline, source code is checked out to the workspace. The command for every step bootstraps in the workspace. During a pipeline execution, the artifacts from a previous step will be available in future steps. The working directory is an ephemeral volume and will be cleaned out with the executor pod when a pipeline execution is finished. + >**Note:** There is no option to use existing Jenkins deployments as the pipeline engine. -## Configuring Repositories + - **Docker Registry:** + + Out-of-the-box, the default target for your build-publish step is an internal Docker Registry. However, you can make configurations to push to a remote registry instead. The internal Docker Registry is only accessible from cluster nodes and cannot be directly accessed by users. Images are not persisted beyond the lifetime of the pipeline and should only be used in pipeline runs. If you need to access your images outside of pipeline runs, please push to an external registry. + + - **Minio:** + + Minio storage is used to store the logs for pipeline executions. + + >**Note:** The managed Jenkins instance works statelessly, so don't worry about its data persistency. The Docker Registry and Minio instances use ephemeral volumes by default, which is fine for most use cases. If you want to make sure pipeline logs can survive node failures, you can configure persistent volumes for them, as described in [data persistency for pipeline components]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/storage). + +# Roles-based Access Control for Pipelines + +If you can access a project, you can enable repositories to start building pipelines. + +Only [administrators]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owners or members]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owners]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can configure version control providers and manage global pipeline execution settings. + +Project members can only configure repositories and pipelines. + +# Setting up Pipelines + +To set up pipelines, you will need to do the following: + +1. [Configure version control providers](#1-configure-version-control-providers) +2. [Configure repositories](#2-configure-repositories) +3. [Configure the pipeline](#3-configure-the-pipeline) + +### 1. Configure Version Control Providers + +Before you can start configuring a pipeline for your repository, you must configure and authorize a version control provider. + +| Provider | Available as of | +| --- | --- | +| GitHub | v2.0.0 | +| GitLab | v2.1.0 | +| Bitbucket | v2.2.0 | + +Select your provider's tab below and follow the directions. + +{{% tabs %}} +{{% tab "GitHub" %}} +1. From the **Global** view, navigate to the project that you want to configure pipelines. + +1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**. + +1. Follow the directions displayed to **Setup a Github application**. 
Rancher redirects you to Github to setup an OAuth App in Github. + +1. From GitHub, copy the **Client ID** and **Client Secret**. Paste them into Rancher. + +1. If you're using GitHub for enterprise, select **Use a private github enterprise installation**. Enter the host address of your GitHub installation. + +1. Click **Authenticate**. + +{{% /tab %}} +{{% tab "GitLab" %}} + +_Available as of v2.1.0_ + +1. From the **Global** view, navigate to the project that you want to configure pipelines. + +1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**. + +1. Follow the directions displayed to **Setup a GitLab application**. Rancher redirects you to GitLab. + +1. From GitLab, copy the **Application ID** and **Secret**. Paste them into Rancher. + +1. If you're using GitLab for enterprise setup, select **Use a private gitlab enterprise installation**. Enter the host address of your GitLab installation. + +1. Click **Authenticate**. + +>**Note:** +> 1. Pipeline uses Gitlab [v4 API](https://docs.gitlab.com/ee/api/v3_to_v4.html) and the supported Gitlab version is 9.0+. +> 2. If you use GitLab 10.7+ and your Rancher setup is in a local network, enable the **Allow requests to the local network from hooks and services** option in GitLab admin settings. +{{% /tab %}} +{{% tab "Bitbucket Cloud" %}} + +_Available as of v2.2.0_ + +1. From the **Global** view, navigate to the project that you want to configure pipelines. + +1. Select **Tools > Pipelines** in the navigation bar. + +1. Choose the **Use public Bitbucket Cloud** option. + +1. Follow the directions displayed to **Setup a Bitbucket Cloud application**. Rancher redirects you to Bitbucket to setup an OAuth consumer in Bitbucket. + +1. From Bitbucket, copy the consumer **Key** and **Secret**. Paste them into Rancher. + +1. Click **Authenticate**. + +{{% /tab %}} +{{% tab "Bitbucket Server" %}} + +_Available as of v2.2.0_ + +1. From the **Global** view, navigate to the project that you want to configure pipelines. + +1. Select **Tools > Pipelines** in the navigation bar. + +1. Choose the **Use private Bitbucket Server setup** option. + +1. Follow the directions displayed to **Setup a Bitbucket Server application**. + +1. Enter the host address of your Bitbucket server installation. + +1. Click **Authenticate**. + +>**Note:** +> Bitbucket server needs to do SSL verification when sending webhooks to Rancher. Please ensure that Rancher server's certificate is trusted by the Bitbucket server. There are two options: +> +> 1. Setup Rancher server with a certificate from a trusted CA. +> 1. If you're using self-signed certificates, import Rancher server's certificate to the Bitbucket server. For instructions, see the Bitbucket server documentation for [configuring self-signed certificates](https://confluence.atlassian.com/bitbucketserver/if-you-use-self-signed-certificates-938028692.html). +> +{{% /tab %}} +{{% /tabs %}} + +**Result:** After the version control provider is authenticated, you will be automatically re-directed to start configuring which repositories you want start using with a pipeline. + +### 2. Configure Repositories After the version control provider is authorized, you are automatically re-directed to start configuring which repositories that you want start using pipelines with. Even if someone else has set up the version control provider, you will see their repositories and can build a pipeline. 
@@ -53,187 +193,58 @@ After the version control provider is authorized, you are automatically re-direc **Results:** You have a list of repositories that you can start configuring pipelines for. -## Pipeline Configuration +### 3. Configure the Pipeline -Now that repositories are added to your project, you can start configuring the pipeline by adding automated stages and steps. For your convenience, there are multiple built-in [step types](#step-types) for dedicated tasks. +Now that repositories are added to your project, you can start configuring the pipeline by adding automated stages and steps. For your convenience, there are multiple built-in step types for dedicated tasks. 1. From the **Global** view, navigate to the project that you want to configure pipelines. 1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** -1. Find the repository that you want to set up a pipeline for. Pipelines can be configured either through the UI or using a yaml file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`. Throughout the next couple of steps, we'll provide the options of how to do pipeline configuration through the UI or the YAML file. +1. Find the repository that you want to set up a pipeline for. - * If you are going to use the UI, select the vertical **Ellipsis (...) > Edit Config** to configure the pipeline using the UI. After the pipeline is configured, you must view the YAML file and push it to the repository. - * If you are going to use the YAML file, select the vertical **Ellipsis (...) **View/Edit YAML** to configure the pipeline. If you choose to use a YAML file, you need to push it to the repository after any changes in order for it to be updated in the repository. +1. Configure the pipeline through the UI or using a yaml file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`. Pipeline configuration is split into stages and steps. Stages must fully complete before moving onto the next stage, but steps in a stage run concurrently. For each stage, you can add different step types. Note: As you build out each step, there are different advanced options based on the step type. Advanced options include trigger rules, environment variables, and secrets. For more information on configuring the pipeline through the UI or the YAML file, refer to the [pipeline configuration reference.]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/config) - >**Note:** When editing the pipeline configuration, it takes a few moments for Rancher to check for an existing pipeline configuration. + * If you are going to use the UI, select the vertical **⋮ > Edit Config** to configure the pipeline using the UI. After the pipeline is configured, you must view the YAML file and push it to the repository. + * If you are going to use the YAML file, select the vertical **⋮ > View/Edit YAML** to configure the pipeline. If you choose to use a YAML file, you need to push it to the repository after any changes in order for it to be updated in the repository. When editing the pipeline configuration, it takes a few moments for Rancher to check for an existing pipeline configuration. 1. Select which `branch` to use from the list of branches. -1. Pipeline configuration is split into stages and [steps](#step-types). Remember that stages must fully complete before moving onto the next stage, but steps in a stage run concurrently. +1. _Available as of v2.2.0_ Optional: Set up notifications. 
- For each stage, you can add different step types. Learn more about how to configure each step type: +1. Set up the trigger rules for the pipeline. - - [Run Script](#run-script) - - [Build and Publish Images](#build-and-publish-images) - - [Publish Catalog Template](#publish-catalog-template) - - [Deploy YAML](#deploy-yaml) - - [Deploy Catalog App](#deploy-catalog-app) - - >**Note:** As you build out each step, there are different [advanced options](#advanced-options) based on the step type. - - {{% accordion id="stages-and-steps" label="Adding Stages and Steps" %}} -{{% tabs %}} -{{% tab "By UI" %}} -
-If you haven't added any stages, click **Configure pipeline for this branch** to configure the pipeline through the UI. - -1. Add stages to your pipeline execution by clicking **Add Stage**. - - 1. Enter a **Name** for each stage of your pipeline. - 1. For each stage, you can configure [trigger rules](#trigger-rules) by clicking on **Show Advanced Options**. Note: this can always be updated at a later time. - -1. After you've created a stage, start [adding steps](#step-types) by clicking **Add a Step**. You can add multiple steps to each stage. -
-
-{{% /tab %}} -{{% tab "By YAML" %}} -
-For each stage, you can add multiple steps. Read more about each [step type](#step-types) and the [advanced options](#advanced-options) to get all the details on how to configure the YAML. This is only a small example of how to have multiple stages with a singular step in each stage. - -```yaml -# example -stages: - - name: Build something - # Conditions for stages - when: - branch: master - event: [ push, pull_request ] - # Multiple steps run concurrently - steps: - - runScriptConfig: - image: busybox - shellScript: date -R - - name: Publish my image - steps: - - publishImageConfig: - dockerfilePath: ./Dockerfile - buildContext: . - tag: rancher/rancher:v2.0.0 - # Optionally push to remote registry - pushRemote: true - registry: reg.example.com -``` -
-{{% /tab %}} -{{% /tabs %}} - {{% /accordion %}} - -1. _Available as of v2.2.0_ - - **Notifications:** Decide if you want to set up notifications for your pipeline. You can enable notifications to any [notifiers]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) based on the build status of a pipeline. Before enabling notifications, Rancher recommends [setting up notifiers]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/notifiers/#adding-notifiers) so it will be easy to add recipients immediately. - - {{% accordion id="notification" label="Configuring Notifications" %}} - - -{{% tabs %}} -{{% tab "By UI" %}} -
-_Available as of v2.2.0_ - -1. Within the **Notification** section, turn on notifications by clicking **Enable**. - -1. Select the conditions for the notification. You can select to get a notification for the following statuses: `Failed`, `Success`, `Changed`. For example, if you want to receive notifications when an execution fails, select **Failed**. - -1. If you don't have any existing [notifiers]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/notifiers), Rancher will provide a warning that no notifiers are set up and provide a link to be able to go to the notifiers page. Follow the [instructions]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/notifiers/#adding-notifiers) to add a notifier. If you already have notifiers, you can add them to the notification by clicking the **Add Recipient** button. - - > **Note:** Notifiers are configured at a cluster level and require a different level of permissions. - -1. For each recipient, select which notifier type from the dropdown. Based on the type of notifier, you can use the default recipient or override the recipient with a different one. For example, if you have a notifier for _Slack_, you can update which channel to send the notification to. You can add additional notifiers by clicking **Add Recipient**. -
-
-{{% /tab %}} -{{% tab "By YAML" %}} -
-_Available as of v2.2.0_ - -In the `notification` section, you will provide the following information: - -* **Recipients:** This will be the list of notifiers/recipients that will receive the notification. - * **Notifier:** The ID of the notifier. This can be found by finding the notifier and selecting **View in API** to get the ID. - * **Recipient:** Depending on the type of the notifier, the "default recipient" can be used or you can override this with a different recipient. For example, when configuring a slack notifier, you select a channel as your default recipient, but if you wanted to send notifications to a different channel, you can select a different recipient. -* **Condition:** Select which conditions of when you want the notification to be sent. -* **Message (Optional):** If you want to change the default notification message, you can edit this in the yaml. Note: This option is not available in the UI. - -```yaml -# Example -stages: - - name: Build something - steps: - - runScriptConfig: - image: busybox - shellScript: ls -notification: - recipients: - - # Recipient - recipient: "#mychannel" - # ID of Notifier - notifier: "c-wdcsr:n-c9pg7" - - recipient: "test@example.com" - notifier: "c-wdcsr:n-lkrhd" - # Select which statuses you want the notification to be sent - condition: ["Failed", "Success", "Changed"] - # Ability to override the default message (Optional) - message: "my-message" -``` -
-{{% /tab %}} -{{% /tabs %}} - - {{% /accordion %}} - -1. Set up the **[Trigger Rules](#trigger-rules)** for the pipeline. - -1. Enter a **Timeout** for the pipeline. By default, each pipeline execution has a timeout of 60 minutes. If the pipeline execution cannot complete within its timeout period, the pipeline is aborted. - - {{% accordion id="timeout" label="Setting up Timeout" %}} - -{{% tabs %}} -{{% tab "By UI" %}} -
-Enter a new value in the **Timeout** field. -
-
-{{% /tab %}} -{{% tab "By YAML" %}} -
-In the `timeout` section, enter the timeout value in minutes. -```yaml -# example -stages: - - name: Build something - steps: - - runScriptConfig: - image: busybox - shellScript: ls -# timeout in minutes -timeout: 30 -``` -
-{{% /tab %}} -{{% /tabs %}} - - {{% /accordion %}} +1. Enter a **Timeout** for the pipeline. 1. When all the stages and steps are configured, click **Done**. **Results:** Your pipeline is now configured and ready to be run. -## Running your Pipelines -Run your pipeline for the first time. From the project view in Rancher, go to **Resources > Pipelines.** (In versions prior to v2.3.0, go to the **Pipelines** tab.) Find your pipeline and select the vertical **Ellipsis (...) > Run**. +# Pipeline Configuration Reference -During this initial run, your pipeline is tested, and the following [pipeline components]({{< baseurl >}}/rancher/v2.x/en/project-admin/pipelines/#how-pipelines-work) are deployed to your project as workloads in a new namespace dedicated to the pipeline: +Refer to [this page]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/config) for details on how to configure a pipeline to: + +- Run a script +- Build and publish images +- Publish catalog templates +- Deploy YAML +- Deploy a catalog app + +The configuration reference also covers how to configure: + +- Notifications +- Timeouts +- The rules that trigger a pipeline +- Environment variables +- Secrets + + +# Running your Pipelines + +Run your pipeline for the first time. From the project view in Rancher, go to **Resources > Pipelines.** (In versions prior to v2.3.0, go to the **Pipelines** tab.) Find your pipeline and select the vertical **⋮ > Run**. + +During this initial run, your pipeline is tested, and the following pipeline components are deployed to your project as workloads in a new namespace dedicated to the pipeline: - `docker-registry` - `jenkins` @@ -241,7 +252,7 @@ During this initial run, your pipeline is tested, and the following [pipeline co This process takes several minutes. When it completes, you can view each pipeline component from the project **Workloads** tab. -## Pipeline Setting +# Triggering a Pipeline When a repository is enabled, a webhook is automatically set in the version control provider. By default, the pipeline is triggered by a **push** event to a repository, but you can modify the event(s) that trigger running the pipeline. @@ -251,7 +262,7 @@ Available Events: * **Pull Request**: Whenever a pull request is made to the repository, the pipeline is triggered. * **Tag**: When a tag is created in the repository, the pipeline is triggered. -> **Note:** This option doesn't exist for Rancher's [example repositories]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/pipelines/example/). +> **Note:** This option doesn't exist for Rancher's [example repositories]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/example-repos/). ### Modifying the Event Triggers for the Repository @@ -259,498 +270,8 @@ Available Events: 1. 1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** -1. Find the repository that you want to modify the event triggers. Select the vertical **Ellipsis (...) > Setting**. +1. Find the repository that you want to modify the event triggers. Select the vertical **⋮ > Setting**. 1. Select which event triggers (**Push**, **Pull Request** or **Tag**) you want for the repository. -1. Click **Save**. - -## Step Types - -Within each stage, you can add as many steps as you'd like. When there are multiple steps in one stage, they run concurrently. 
- -- [Run Script](#run-script) -- [Build and Publish Images](#build-and-publish-images) -- [Publish Catalog Template](#publish-catalog-template) -- [Deploy YAML](#deploy-yaml) -- [Deploy Catalog App](#deploy-catalog-app) - - - -### Run Script - -The **Run Script** step executes arbitrary commands in the workspace inside a specified container. You can use it to build, test and do more, given whatever utilities the base image provides. For your convenience, you can use variables to refer to metadata of a pipeline execution. Please refer to the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) for the list of available variables. - -{{% tabs %}} - -{{% tab "By UI" %}} - -1. From the **Step Type** drop-down, choose **Run Script** and fill in the form. - -1. Click **Add**. - -{{% /tab %}} - -{{% tab "By YAML" %}} -
-```yaml -# example -stages: -- name: Build something - steps: - - runScriptConfig: - image: golang - shellScript: go build -``` -
-{{% /tab %}} - -{{% /tabs %}} - -### Build and Publish Images - -The **Build and Publish Image** step builds and publishes a Docker image. This process requires a Dockerfile in your source code's repository to complete successfully. - -_Available as of Rancher v2.1.0_ - -The option to publish an image to an insecure registry is not exposed in the UI, but you can specify an environment variable in the YAML that allows you to publish an image insecurely. - -{{% tabs %}} - -{{% tab "By UI" %}} -1. From the **Step Type** drop-down, choose **Build and Publish**. - -1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**. - - Field | Description | - ---------|----------| - Dockerfile Path | The relative path to the Dockerfile in the source code repo. By default, this path is `./Dockerfile`, which assumes the Dockerfile is in the root directory. You can set it to other paths in different use cases (`./path/to/myDockerfile` for example). | - Image Name | The image name in `name:tag` format. The registry address is not required. For example, to build `example.com/repo/my-image:dev`, enter `repo/my-image:dev`. | - Push image to remote repository | An option to set the registry that publishes the image that's built. To use this option, enable it and choose a registry from the drop-down. If this option is disabled, the image is pushed to the internal registry. | - Build Context

(**Show advanced options**)| By default, the root directory of the source code (`.`). For more details, see the Docker [build command documentation](https://docs.docker.com/engine/reference/commandline/build/). - -{{% /tab %}} - -{{% tab "By YAML" %}} - -You can use specific arguments for Docker daemon and the build. They are not exposed in the UI, but they are available in pipeline YAML format, as indicated in the example below. Available environment variables include: - -Variable Name | Description -------------------------|------------------------------------------------------------ -PLUGIN_DRY_RUN | Disable docker push -PLUGIN_DEBUG | Docker daemon executes in debug mode -PLUGIN_MIRROR | Docker daemon registry mirror -PLUGIN_INSECURE | Docker daemon allows insecure registries -PLUGIN_BUILD_ARGS | Docker build args, a comma separated list - -
-```yaml -# This example shows an environment variable being used -# in the Publish Image step. This variable allows you to -# publish an image to an insecure registry: - -stages: -- name: Publish Image - steps: - - publishImageConfig: - dockerfilePath: ./Dockerfile - buildContext: . - tag: repo/app:v1 - pushRemote: true - registry: example.com - env: - PLUGIN_INSECURE: "true" -``` -
-{{% /tab %}} - -{{% /tabs %}} - -### Publish Catalog Template - -_Available as of v2.2.0_ - -The **Publish Catalog Template** step publishes a version of a catalog app template (i.e. Helm chart) to a [git hosted chart repository]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/). It generates a git commit and pushes it to your chart repository. This process requires a chart folder in your source code's repository and a pre-configured secret in the dedicated pipeline namespace to complete successfully. Any variables in the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) is supported for any file in the chart folder. - -{{% tabs %}} - -{{% tab "By UI" %}} -
- -1. From the **Step Type** drop-down, choose **Publish Catalog Template**. - -1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**. - - Field | Description | - ---------|----------| - Chart Folder | The relative path to the chart folder in the source code repo, where the `Chart.yaml` file is located. | - Catalog Template Name | The name of the template. For example, wordpress. | - Catalog Template Version | The version of the template you want to publish, it should be consistent with the version defined in the `Chart.yaml` file. | - Protocol | You can choose to publish via HTTP(S) or SSH protocol. | - Secret | The secret that stores your Git credentials. You need to create a secret in dedicated pipeline namespace in the project before adding this step. If you use HTTP(S) protocol, store Git username and password in `USERNAME` and `PASSWORD` key of the secret. If you use SSH protocol, store Git deploy key in `DEPLOY_KEY` key of the secret. After the secret is created, select it in this option. | - Git URL | The Git URL of the chart repository that the template will be published to. | - Git Branch | The Git branch of the chart repository that the template will be published to. | - Author Name | The author name used in the commit message. | - Author Email | The author email used in the commit message. | - - -{{% /tab %}} - -{{% tab "By YAML" %}} -
-You can add **Publish Catalog Template** steps directly in the `.rancher-pipeline.yml` file. - -Under the `steps` section, add a step with `publishCatalogConfig`. You will provide the following information: - -* Path: The relative path to the chart folder in the source code repo, where the `Chart.yaml` file is located. -* CatalogTemplate: The name of the template. -* Version: The version of the template you want to publish, it should be consistent with the version defined in the `Chart.yaml` file. -* GitUrl: The git URL of the chart repository that the template will be published to. -* GitBranch: The git branch of the chart repository that the template will be published to. -* GitAuthor: The author name used in the commit message. -* GitEmail: The author email used in the commit message. -* Credentials: You should provide Git credentials by referencing secrets in dedicated pipeline namespace. If you publish via SSH protocol, inject your deploy key to the `DEPLOY_KEY` environment variable. If you publish via HTTP(S) protocol, inject your username and password to `USERNAME` and `PASSWORD` environment variables. - -```yaml -# example -stages: -- name: Publish Wordpress Template - steps: - - publishCatalogConfig: - path: ./charts/wordpress/latest - catalogTemplate: wordpress - version: ${CICD_GIT_TAG} - gitUrl: git@github.com:myrepo/charts.git - gitBranch: master - gitAuthor: example-user - gitEmail: user@example.com - envFrom: - - sourceName: publish-keys - sourceKey: DEPLOY_KEY -``` - -
-{{% /tab %}} - -{{% /tabs %}} - -### Deploy YAML - -This step deploys arbitrary Kubernetes resources to the project. This deployment requires a Kubernetes manifest file to be present in the source code repository. Pipeline variable substitution is supported in the manifest file. You can view an example file at [GitHub](https://github.com/rancher/pipeline-example-go/blob/master/deployment.yaml). Please refer to the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) for the list of available variables. - -{{% tabs %}} - -{{% tab "By UI" %}} - -1. From the **Step Type** drop-down, choose **Deploy YAML** and fill in the form. - -1. Enter the **YAML Path**, which is the path to the manifest file in the source code. - -1. Click **Add**. - -{{% /tab %}} - -{{% tab "By YAML" %}} -
-```yaml -# example -stages: -- name: Deploy - steps: - - applyYamlConfig: - path: ./deployment.yaml -``` -
-{{% /tab %}} - -{{% /tabs %}} - -### Deploy Catalog App - -_Available as of v2.2.0_ - -The **Deploy Catalog App** step deploys a catalog app in the project. It will install a new app if it is not present, or upgrade an existing one. - -{{% tabs %}} - -{{% tab "By UI" %}} - -1. From the **Step Type** drop-down, choose **Deploy Catalog App**. - -1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**. - - Field | Description | - ---------|----------| - Catalog | The catalog from which the app template will be used. | - Template Name | The name of the app template. For example, wordpress. | - Template Version | The version of the app template you want to deploy. | - Namespace | The target namespace where you want to deploy the app. | - App Name | The name of the app you want to deploy. | - Answers | Key-value pairs of answers used to deploy the app. | - - -{{% /tab %}} - -{{% tab "By YAML" %}} -
-You can add **Deploy Catalog App** steps directly in the `.rancher-pipeline.yml` file. - -Under the `steps` section, add a step with `applyAppConfig`. You will provide the following information: - -* CatalogTemplate: The ID of the template. This can be found by clicking `Launch app` and selecting `View details` for the app. It is the last part of the URL. -* Version: The version of the template you want to deploy. -* Answers: Key-value pairs of answers used to deploy the app. -* Name: The name of the app you want to deploy. -* TargetNamespace: The target namespace where you want to deploy the app. - -```yaml -# example -stages: -- name: Deploy App - steps: - - applyAppConfig: - catalogTemplate: cattle-global-data:library-mysql - version: 0.3.8 - answers: - persistence.enabled: "false" - name: testmysql - targetNamespace: test -``` -
-{{% /tab %}} -{{% /tabs %}} - -## Advanced Options - -Within a pipeline, there are multiple advanced options for different parts of the pipeline. - -- [Trigger Rules](#trigger-rules) -- [Environment Variables](#environment-variables) -- [Secrets](#secrets) - -### Trigger Rules - -Trigger rules can be created to have fine-grained control of pipeline executions in your pipeline configuration. Trigger rules come in two types: - -- **Run this when:** - - This type of rule starts the pipeline, stage, or step when a trigger explicitly occurs. - -- **Do Not Run this when:** - - This type of rule skips the pipeline, stage, or step when a trigger explicitly occurs. - -If all conditions evaluate to `true`, then the pipeline/stage/step is executed. Otherwise it is skipped. When a pipeline is skipped, none of the pipeline is executed. When a stage/step is skipped, it is considered successful and follow-up stages/steps continue to run. - -Wildcard character (`*`) expansion is supported in `branch` conditions. - -{{% tabs %}} -{{% tab "Pipeline Trigger" %}} - -1. From the **Global** view, navigate to the project that you want to configure a pipeline trigger rule. - -1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** - -1. From the repository for which you want to manage trigger rules, select the vertical **Ellipsis (...) > Edit Config**. - -1. Click on **Show Advanced Options**. - -1. In the **Trigger Rules** section, configure rules to run or skip the pipeline. - - 1. Click **Add Rule**. In the **Value** field, enter the name of the branch that triggers the pipeline. - - 1. **Optional:** Add more branches that trigger a build. - -1. Click **Done.** - -{{% /tab %}} -{{% tab "Stage Trigger" %}} -1. From the **Global** view, navigate to the project that you want to configure a stage trigger rule. - -1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** - -1. From the repository for which you want to manage trigger rules, select the vertical **Ellipsis (...) > Edit Config**. - -1. Find the **stage** that you want to manage trigger rules, click the **Edit** icon for that stage. - -1. Click **Show advanced options**. - -1. In the **Trigger Rules** section, configure rules to run or skip the stage. - - 1. Click **Add Rule**. - - 1. Choose the **Type** that triggers the stage and enter a value. - - | Type | Value | - | ------ | -------------------------------------------------------------------- | - | Branch | The name of the branch that triggers the stage. | - | Event | The type of event that triggers the stage. Values are: `Push`, `Pull Request`, `Tag` | - -1. Click **Save**. - -{{% /tab %}} -{{% tab "Step Trigger" %}} -1. From the **Global** view, navigate to the project that you want to configure a stage trigger rule. - -1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** - -1. From the repository for which you want to manage trigger rules, select the vertical **Ellipsis (...) > Edit Config**. - -1. Find the **step** that you want to manage trigger rules, click the **Edit** icon for that step. - -1. Click **Show advanced options**. - -1. In the **Trigger Rules** section, configure rules to run or skip the step. - - 1. Click **Add Rule**. - - 1. Choose the **Type** that triggers the step and enter a value. - - | Type | Value | - | ------ | -------------------------------------------------------------------- | - | Branch | The name of the branch that triggers the step. 
| - | Event | The type of event that triggers the step. Values are: `Push`, `Pull Request`, `Tag` | - -1. Click **Save**. - -{{% /tab %}} -{{% tab "By YAML" %}} -
-```yaml -# example -stages: - - name: Build something - # Conditions for stages - when: - branch: master - event: [ push, pull_request ] - # Multiple steps run concurrently - steps: - - runScriptConfig: - image: busybox - shellScript: date -R - # Conditions for steps - when: - branch: [ master, dev ] - event: push -# branch conditions for the pipeline -branch: - include: [ master, feature/*] - exclude: [ dev ] -``` -
-{{% /tab %}} -{{% /tabs %}} - -### Environment Variables - -When configuring a pipeline, certain [step types](#step-types) allow you to use environment variables to configure the step's script. - -{{% tabs %}} -{{% tab "By UI" %}} -1. From the **Global** view, navigate to the project that you want to configure pipelines. - -1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** - -1. From the pipeline for which you want to edit build triggers, select **Ellipsis (...) > Edit Config**. - -1. Within one of the stages, find the **step** that you want to add an environment variable for, click the **Edit** icon. - -1. Click **Show advanced options**. - -1. Click **Add Variable**, and then enter a key and value in the fields that appear. Add more variables if needed. - -1. Add your environment variable(s) into either the script or file. - -1. Click **Save**. - -{{% /tab %}} - -{{% tab "By YAML" %}} -
-```yaml -# example -stages: - - name: Build something - steps: - - runScriptConfig: - image: busybox - shellScript: echo ${FIRST_KEY} && echo ${SECOND_KEY} - env: - FIRST_KEY: VALUE - SECOND_KEY: VALUE2 -``` -
-{{% /tab %}} - -{{% /tabs %}} - -### Secrets - -If you need to use security-sensitive information in your pipeline scripts (like a password), you can pass them in using Kubernetes [secrets]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/secrets/). - -#### Prerequisite -Create a secret in the same project as your pipeline, or explicitly in the namespace where pipeline build pods run. -
- ->**Note:** Secret injection is disabled on [pull request events](#pipeline-setting). - -{{% tabs %}} -{{% tab "By UI" %}} -1. From the **Global** view, navigate to the project that you want to configure pipelines. - -1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** - -1. From the pipeline for which you want to edit build triggers, select **Ellipsis (...) > Edit Config**. - -1. Within one of the stages, find the **step** that you want to use a secret for, click the **Edit** icon. - -1. Click **Show advanced options**. - -1. Click **Add From Secret**. Select the secret file that you want to use. Then choose a key. Optionally, you can enter an alias for the key. - -1. Click **Save**. - -{{% /tab %}} -{{% tab "By YAML" %}} -
-```yaml -# example -stages: - - name: Build something - steps: - - runScriptConfig: - image: busybox - shellScript: echo ${ALIAS_ENV} - # environment variables from project secrets - envFrom: - - sourceName: my-secret - sourceKey: secret-key - targetKey: ALIAS_ENV -``` -
-{{% /tab %}} -{{% /tabs %}} - -## Pipeline Variable Substitution Reference - -For your convenience, the following variables are available for your pipeline configuration scripts. During pipeline executions, these variables are replaced by metadata. You can reference them in the form of `${VAR_NAME}`. - -Variable Name | Description -------------------------|------------------------------------------------------------ -`CICD_GIT_REPO_NAME` | Repository name (Github organization omitted). -`CICD_GIT_URL` | URL of the Git repository. -`CICD_GIT_COMMIT` | Git commit ID being executed. -`CICD_GIT_BRANCH` | Git branch of this event. -`CICD_GIT_REF` | Git reference specification of this event. -`CICD_GIT_TAG` | Git tag name, set on tag event. -`CICD_EVENT` | Event that triggered the build (`push`, `pull_request` or `tag`). -`CICD_PIPELINE_ID` | Rancher ID for the pipeline. -`CICD_EXECUTION_SEQUENCE` | Build number of the pipeline. -`CICD_EXECUTION_ID` | Combination of `{CICD_PIPELINE_ID}-{CICD_EXECUTION_SEQUENCE}`. -`CICD_REGISTRY` | Address for the Docker registry for the previous publish image step, available in the Kubernetes manifest file of a `Deploy YAML` step. -`CICD_IMAGE` | Name of the image built from the previous publish image step, available in the Kubernetes manifest file of a `Deploy YAML` step. It does not contain the image tag.

[Example](https://github.com/rancher/pipeline-example-go/blob/master/deployment.yaml) +1. Click **Save**. \ No newline at end of file diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/concepts/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/concepts/_index.md new file mode 100644 index 00000000000..db8e3a24a58 --- /dev/null +++ b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/concepts/_index.md @@ -0,0 +1,36 @@ +--- +title: Concepts +weight: 1 +--- + +The purpose of this page is to explain common concepts and terminology related to pipelines. + +- **Pipeline:** + + A _pipeline_ is a software delivery process that is broken into different stages and steps. Setting up a pipeline can help developers deliver new software as quickly and efficiently as possible. Within Rancher, you can configure pipelines for each of your Rancher projects. A pipeline is based on a specific repository. It defines the process to build, test, and deploy your code. Rancher uses the [pipeline as code](https://jenkins.io/doc/book/pipeline-as-code/) model. Pipeline configuration is represented as a pipeline file in the source code repository, using the file name `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`. + +- **Stages:** + + A pipeline stage consists of multiple steps. Stages are executed in the order defined in the pipeline file. The steps in a stage are executed concurrently. A stage starts when all steps in the former stage finish without failure. + +- **Steps:** + + A pipeline step is executed inside a specified stage. A step fails if it exits with a code other than `0`. If a step exits with this failure code, the entire pipeline fails and terminates. + +- **Workspace:** + + The workspace is the working directory shared by all pipeline steps. In the beginning of a pipeline, source code is checked out to the workspace. The command for every step bootstraps in the workspace. During a pipeline execution, the artifacts from a previous step will be available in future steps. The working directory is an ephemeral volume and will be cleaned out with the executor pod when a pipeline execution is finished. + +Typically, pipeline stages include: + +- **Build:** + + Each time code is checked into your repository, the pipeline automatically clones the repo and builds a new iteration of your software. Throughout this process, the software is typically reviewed by automated tests. + +- **Publish:** + + After the build is completed, either a Docker image is built and published to a Docker registry or a catalog template is published. + +- **Deploy:** + + After the artifacts are published, you would release your application so users could start using the updated product. \ No newline at end of file diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/config/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/config/_index.md new file mode 100644 index 00000000000..7443af2daad --- /dev/null +++ b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/config/_index.md @@ -0,0 +1,658 @@ +--- +title: Pipeline Configuration Reference +weight: 1 +--- + +In this section, you'll learn how to configure pipelines. 
+ +- [Step Types](#step-types) +- [Step Type: Run Script](#step-type-run-script) +- [Step Type: Build and Publish Images](#step-type-build-and-publish-images) +- [Step Type: Publish Catalog Template](#step-type-publish-catalog-template) +- [Step Type: Deploy YAML](#step-type-deploy-yaml) +- [Step Type: Deploy Catalog App](#step-type-deploy-catalog-app) +- [Notifications](#notifications) +- [Timeouts](#timeouts) +- [Triggers and Trigger Rules](#triggers-and-trigger-rules) +- [Environment Variables](#environment-variables) +- [Secrets](#secrets) +- [Pipeline Variable Substitution Reference](#pipeline-variable-substitution-reference) +- [Global Pipeline Execution Settings](#global-pipeline-execution-settings) + - [Executor Quota](#executor-quota) + - [Resource Quota for Executors](#resource-quota-for-executors) + - [Custom CA](#custom-ca) +- [Persistent Data for Pipeline Components](#persistent-data-for-pipeline-components) +- [Example rancher-pipeline.yml](#example-rancher-pipeline-yml) + +# Step Types + +Within each stage, you can add as many steps as you'd like. When there are multiple steps in one stage, they run concurrently. + +Step types include: + +- [Run Script](#step-type-run-script) +- [Build and Publish Images](#step-type-build-and-publish-images) +- [Publish Catalog Template](#step-type-publish-catalog-template) +- [Deploy YAML](#step-type-deploy-yaml) +- [Deploy Catalog App](#step-type-deploy-catalog-app) + + + +### Configuring Steps By UI + +If you haven't added any stages, click **Configure pipeline for this branch** to configure the pipeline through the UI. + +1. Add stages to your pipeline execution by clicking **Add Stage**. + + 1. Enter a **Name** for each stage of your pipeline. + 1. For each stage, you can configure [trigger rules](#triggers-and-trigger-rules) by clicking on **Show Advanced Options**. Note: this can always be updated at a later time. + +1. After you've created a stage, start [adding steps](#step-types) by clicking **Add a Step**. You can add multiple steps to each stage. + +### Configuring Steps by YAML + +For each stage, you can add multiple steps. Read more about each [step type](#step-types) and the advanced options to get all the details on how to configure the YAML. This is only a small example of how to have multiple stages with a singular step in each stage. + +```yaml +# example +stages: + - name: Build something + # Conditions for stages + when: + branch: master + event: [ push, pull_request ] + # Multiple steps run concurrently + steps: + - runScriptConfig: + image: busybox + shellScript: date -R + - name: Publish my image + steps: + - publishImageConfig: + dockerfilePath: ./Dockerfile + buildContext: . + tag: rancher/rancher:v2.0.0 + # Optionally push to remote registry + pushRemote: true + registry: reg.example.com +``` +# Step Type: Run Script + +The **Run Script** step executes arbitrary commands in the workspace inside a specified container. You can use it to build, test and do more, given whatever utilities the base image provides. For your convenience, you can use variables to refer to metadata of a pipeline execution. Please refer to the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) for the list of available variables. + +### Configuring Script by UI + +1. From the **Step Type** drop-down, choose **Run Script** and fill in the form. + +1. Click **Add**. 
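For instance, a minimal sketch of what such a step can look like once it is saved to `.rancher-pipeline.yml` (the image and command are placeholders, and the `${CICD_*}` variables are the substitution variables documented later on this page):

```yaml
# illustrative sketch only; image and command are placeholders
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo "Building ${CICD_GIT_BRANCH} at commit ${CICD_GIT_COMMIT}"
```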
+ +### Configuring Script by YAML +```yaml +# example +stages: +- name: Build something + steps: + - runScriptConfig: + image: golang + shellScript: go build +``` +# Step Type: Build and Publish Images + +_Available as of Rancher v2.1.0_ + +The **Build and Publish Image** step builds and publishes a Docker image. This process requires a Dockerfile in your source code's repository to complete successfully. + +The option to publish an image to an insecure registry is not exposed in the UI, but you can specify an environment variable in the YAML that allows you to publish an image insecurely. + +### Configuring Building and Publishing Images by UI +1. From the **Step Type** drop-down, choose **Build and Publish**. + +1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**. + + Field | Description | + ---------|----------| + Dockerfile Path | The relative path to the Dockerfile in the source code repo. By default, this path is `./Dockerfile`, which assumes the Dockerfile is in the root directory. You can set it to other paths in different use cases (`./path/to/myDockerfile` for example). | + Image Name | The image name in `name:tag` format. The registry address is not required. For example, to build `example.com/repo/my-image:dev`, enter `repo/my-image:dev`. | + Push image to remote repository | An option to set the registry that publishes the image that's built. To use this option, enable it and choose a registry from the drop-down. If this option is disabled, the image is pushed to the internal registry. | + Build Context

 (**Show advanced options**)| By default, the root directory of the source code (`.`). For more details, see the Docker [build command documentation](https://docs.docker.com/engine/reference/commandline/build/). + +### Configuring Building and Publishing Images by YAML + +You can use specific arguments for the Docker daemon and the build. They are not exposed in the UI, but they are available in pipeline YAML format, as indicated in the example below. Available environment variables include: + +Variable Name | Description +------------------------|------------------------------------------------------------ +PLUGIN_DRY_RUN | Disables docker push +PLUGIN_DEBUG | Docker daemon executes in debug mode +PLUGIN_MIRROR | Docker daemon registry mirror +PLUGIN_INSECURE | Docker daemon allows insecure registries +PLUGIN_BUILD_ARGS | Docker build args, a comma-separated list + +&#13;
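As a further sketch, build arguments could be passed through `PLUGIN_BUILD_ARGS`; the `FOO` and `BAR` names below are placeholders and assume matching `ARG` declarations in your Dockerfile:

```yaml
# illustrative sketch only; FOO and BAR are placeholder build args
stages:
- name: Publish Image
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: repo/app:v1
      env:
        PLUGIN_BUILD_ARGS: "FOO=foo,BAR=bar"
```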
 + +```yaml +# This example shows an environment variable being used +# in the Publish Image step. This variable allows you to +# publish an image to an insecure registry: + +stages: +- name: Publish Image + steps: + - publishImageConfig: + dockerfilePath: ./Dockerfile + buildContext: . + tag: repo/app:v1 + pushRemote: true + registry: example.com + env: + PLUGIN_INSECURE: "true" +``` + +# Step Type: Publish Catalog Template + +_Available as of v2.2.0_ + +The **Publish Catalog Template** step publishes a version of a catalog app template (i.e. Helm chart) to a [git-hosted chart repository]({{}}/rancher/v2.x/en/catalog/custom/). It generates a git commit and pushes it to your chart repository. This process requires a chart folder in your source code's repository and a pre-configured secret in the dedicated pipeline namespace to complete successfully. Any variables in the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) are supported in any file in the chart folder. + +### Configuring Publishing a Catalog Template by UI + +1. From the **Step Type** drop-down, choose **Publish Catalog Template**. + +1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**. + + Field | Description | + ---------|----------| + Chart Folder | The relative path to the chart folder in the source code repo, where the `Chart.yaml` file is located. | + Catalog Template Name | The name of the template. For example, wordpress. | + Catalog Template Version | The version of the template you want to publish; it should be consistent with the version defined in the `Chart.yaml` file. | + Protocol | You can choose to publish via HTTP(S) or SSH protocol. | + Secret | The secret that stores your Git credentials. You need to create a secret in the dedicated pipeline namespace in the project before adding this step. If you use HTTP(S) protocol, store the Git username and password in the `USERNAME` and `PASSWORD` keys of the secret. If you use SSH protocol, store the Git deploy key in the `DEPLOY_KEY` key of the secret. After the secret is created, select it in this option. | + Git URL | The Git URL of the chart repository that the template will be published to. | + Git Branch | The Git branch of the chart repository that the template will be published to. | + Author Name | The author name used in the commit message. | + Author Email | The author email used in the commit message. | + + +### Configuring Publishing a Catalog Template by YAML + +You can add **Publish Catalog Template** steps directly in the `.rancher-pipeline.yml` file. + +Under the `steps` section, add a step with `publishCatalogConfig`. You will provide the following information: + +* Path: The relative path to the chart folder in the source code repo, where the `Chart.yaml` file is located. +* CatalogTemplate: The name of the template. +* Version: The version of the template you want to publish; it should be consistent with the version defined in the `Chart.yaml` file. +* GitUrl: The git URL of the chart repository that the template will be published to. +* GitBranch: The git branch of the chart repository that the template will be published to. +* GitAuthor: The author name used in the commit message. +* GitEmail: The author email used in the commit message. +* Credentials: You should provide Git credentials by referencing secrets in the dedicated pipeline namespace. If you publish via SSH protocol, inject your deploy key into the `DEPLOY_KEY` environment variable. &#13;
 If you publish via HTTP(S) protocol, inject your username and password into the `USERNAME` and `PASSWORD` environment variables. + +```yaml +# example +stages: +- name: Publish Wordpress Template + steps: + - publishCatalogConfig: + path: ./charts/wordpress/latest + catalogTemplate: wordpress + version: ${CICD_GIT_TAG} + gitUrl: git@github.com:myrepo/charts.git + gitBranch: master + gitAuthor: example-user + gitEmail: user@example.com + envFrom: + - sourceName: publish-keys + sourceKey: DEPLOY_KEY +``` + +# Step Type: Deploy YAML + +This step deploys arbitrary Kubernetes resources to the project. This deployment requires a Kubernetes manifest file to be present in the source code repository. Pipeline variable substitution is supported in the manifest file. You can view an example file at [GitHub](https://github.com/rancher/pipeline-example-go/blob/master/deployment.yaml). Please refer to the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) for the list of available variables. + +### Configuring Deploying YAML by UI + +1. From the **Step Type** drop-down, choose **Deploy YAML** and fill in the form. + +1. Enter the **YAML Path**, which is the path to the manifest file in the source code. + +1. Click **Add**. + +### Configuring Deploying YAML by YAML + +```yaml +# example +stages: +- name: Deploy + steps: + - applyYamlConfig: + path: ./deployment.yaml +``` + +# Step Type: Deploy Catalog App + +_Available as of v2.2.0_ + +The **Deploy Catalog App** step deploys a catalog app in the project. It will install a new app if it is not present, or upgrade an existing one. + +### Configuring Deploying Catalog App by UI + +1. From the **Step Type** drop-down, choose **Deploy Catalog App**. + +1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**. + + Field | Description | + ---------|----------| + Catalog | The catalog from which the app template will be used. | + Template Name | The name of the app template. For example, wordpress. | + Template Version | The version of the app template you want to deploy. | + Namespace | The target namespace where you want to deploy the app. | + App Name | The name of the app you want to deploy. | + Answers | Key-value pairs of answers used to deploy the app. | + + +### Configuring Deploying Catalog App by YAML + +You can add **Deploy Catalog App** steps directly in the `.rancher-pipeline.yml` file. + +Under the `steps` section, add a step with `applyAppConfig`. You will provide the following information: + +* CatalogTemplate: The ID of the template. This can be found by clicking `Launch app` and selecting `View details` for the app. It is the last part of the URL. +* Version: The version of the template you want to deploy. +* Answers: Key-value pairs of answers used to deploy the app. +* Name: The name of the app you want to deploy. +* TargetNamespace: The target namespace where you want to deploy the app. + +```yaml +# example +stages: +- name: Deploy App + steps: + - applyAppConfig: + catalogTemplate: cattle-global-data:library-mysql + version: 0.3.8 + answers: + persistence.enabled: "false" + name: testmysql + targetNamespace: test +``` + +# Timeouts + +By default, each pipeline execution has a timeout of 60 minutes. If the pipeline execution cannot complete within its timeout period, the pipeline is aborted. + +### Configuring Timeouts by UI + +Enter a new value in the **Timeout** field. &#13;
+### Configuring Timeouts by YAML + +In the `timeout` section, enter the timeout value in minutes. + +```yaml +# example +stages: + - name: Build something + steps: + - runScriptConfig: + image: busybox + shellScript: ls +# timeout in minutes +timeout: 30 +``` + +# Notifications + +You can enable notifications to any [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) based on the build status of a pipeline. Before enabling notifications, Rancher recommends [setting up notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/#adding-notifiers) so it will be easy to add recipients immediately. + +### Configuring Notifications by UI + +_Available as of v2.2.0_ + +1. Within the **Notification** section, turn on notifications by clicking **Enable**. + +1. Select the conditions for the notification. You can select to get a notification for the following statuses: `Failed`, `Success`, `Changed`. For example, if you want to receive notifications when an execution fails, select **Failed**. + +1. If you don't have any existing [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers), Rancher will provide a warning that no notifiers are set up and provide a link to the notifiers page. Follow the [instructions]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/#adding-notifiers) to add a notifier. If you already have notifiers, you can add them to the notification by clicking the **Add Recipient** button. + + > **Note:** Notifiers are configured at a cluster level and require a different level of permissions. + +1. For each recipient, select a notifier type from the dropdown. Based on the type of notifier, you can use the default recipient or override the recipient with a different one. For example, if you have a notifier for _Slack_, you can update which channel to send the notification to. You can add additional notifiers by clicking **Add Recipient**. + +### Configuring Notifications by YAML +_Available as of v2.2.0_ + +In the `notification` section, you will provide the following information: + +* **Recipients:** This will be the list of notifiers/recipients that will receive the notification. + * **Notifier:** The ID of the notifier. This can be found by finding the notifier and selecting **View in API** to get the ID. + * **Recipient:** Depending on the type of the notifier, the "default recipient" can be used or you can override this with a different recipient. For example, when configuring a Slack notifier, you select a channel as your default recipient, but if you wanted to send notifications to a different channel, you can select a different recipient. +* **Condition:** Select the conditions under which you want the notification to be sent. +* **Message (Optional):** If you want to change the default notification message, you can edit it in the YAML. Note: This option is not available in the UI. &#13;
+ +```yaml +# Example +stages: + - name: Build something + steps: + - runScriptConfig: + image: busybox + shellScript: ls +notification: + recipients: + - # Recipient + recipient: "#mychannel" + # ID of Notifier + notifier: "c-wdcsr:n-c9pg7" + - recipient: "test@example.com" + notifier: "c-wdcsr:n-lkrhd" + # Select which statuses you want the notification to be sent + condition: ["Failed", "Success", "Changed"] + # Ability to override the default message (Optional) + message: "my-message" +``` + +# Triggers and Trigger Rules + +After you configure a pipeline, you can trigger it using different methods: + +- **Manually:** + + After you configure a pipeline, you can trigger a build using the latest CI definition from Rancher UI. When a pipeline execution is triggered, Rancher dynamically provisions a Kubernetes pod to run your CI tasks and then remove it upon completion. + +- **Automatically:** + + When you enable a repository for a pipeline, webhooks are automatically added to the version control system. When project users interact with the repo by pushing code, opening pull requests, or creating a tag, the version control system sends a webhook to Rancher Server, triggering a pipeline execution. + + To use this automation, webhook management permission is required for the repository. Therefore, when users authenticate and fetch their repositories, only those on which they have webhook management permission will be shown. + +Trigger rules can be created to have fine-grained control of pipeline executions in your pipeline configuration. Trigger rules come in two types: + +- **Run this when:** This type of rule starts the pipeline, stage, or step when a trigger explicitly occurs. + +- **Do Not Run this when:** This type of rule skips the pipeline, stage, or step when a trigger explicitly occurs. + +If all conditions evaluate to `true`, then the pipeline/stage/step is executed. Otherwise it is skipped. When a pipeline is skipped, none of the pipeline is executed. When a stage/step is skipped, it is considered successful and follow-up stages/steps continue to run. + +Wildcard character (`*`) expansion is supported in `branch` conditions. + +This section covers the following topics: + +- [Configuring pipeline triggers](#configuring-pipeline-triggers) +- [Configuring stage triggers](#configuring-stage-triggers) +- [Configuring step triggers](#configuring-step-triggers) +- [Configuring triggers by YAML](#configuring-triggers-by-yaml) + +### Configuring Pipeline Triggers + +1. From the **Global** view, navigate to the project that you want to configure a pipeline trigger rule. + +1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** + +1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**. + +1. Click on **Show Advanced Options**. + +1. In the **Trigger Rules** section, configure rules to run or skip the pipeline. + + 1. Click **Add Rule**. In the **Value** field, enter the name of the branch that triggers the pipeline. + + 1. **Optional:** Add more branches that trigger a build. + +1. Click **Done.** + +### Configuring Stage Triggers + +1. From the **Global** view, navigate to the project that you want to configure a stage trigger rule. + +1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** + +1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**. + +1. 
 Find the **stage** for which you want to manage trigger rules and click the **Edit** icon for that stage. + +1. Click **Show advanced options**. + +1. In the **Trigger Rules** section, configure rules to run or skip the stage. + + 1. Click **Add Rule**. + + 1. Choose the **Type** that triggers the stage and enter a value. + + | Type | Value | + | ------ | -------------------------------------------------------------------- | + | Branch | The name of the branch that triggers the stage. | + | Event | The type of event that triggers the stage. Values are: `Push`, `Pull Request`, `Tag` | + +1. Click **Save**. + +### Configuring Step Triggers + +1. From the **Global** view, navigate to the project that you want to configure a step trigger rule. + +1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** + +1. From the repository for which you want to manage trigger rules, select the vertical **⋮ > Edit Config**. + +1. Find the **step** for which you want to manage trigger rules and click the **Edit** icon for that step. + +1. Click **Show advanced options**. + +1. In the **Trigger Rules** section, configure rules to run or skip the step. + + 1. Click **Add Rule**. + + 1. Choose the **Type** that triggers the step and enter a value. + + | Type | Value | + | ------ | -------------------------------------------------------------------- | + | Branch | The name of the branch that triggers the step. | + | Event | The type of event that triggers the step. Values are: `Push`, `Pull Request`, `Tag` | + +1. Click **Save**. + + +### Configuring Triggers by YAML + +```yaml +# example +stages: + - name: Build something + # Conditions for stages + when: + branch: master + event: [ push, pull_request ] + # Multiple steps run concurrently + steps: + - runScriptConfig: + image: busybox + shellScript: date -R + # Conditions for steps + when: + branch: [ master, dev ] + event: push +# branch conditions for the pipeline +branch: + include: [ master, feature/*] + exclude: [ dev ] +``` + +# Environment Variables + +When configuring a pipeline, certain [step types](#step-types) allow you to use environment variables to configure the step's script. + +### Configuring Environment Variables by UI + +1. From the **Global** view, navigate to the project that you want to configure pipelines. + +1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** + +1. From the pipeline for which you want to configure environment variables, select **⋮ > Edit Config**. + +1. Within one of the stages, find the **step** that you want to add an environment variable for, and click the **Edit** icon. + +1. Click **Show advanced options**. + +1. Click **Add Variable**, and then enter a key and value in the fields that appear. Add more variables if needed. + +1. Add your environment variable(s) into either the script or file. + +1. Click **Save**. + +### Configuring Environment Variables by YAML + +```yaml +# example +stages: + - name: Build something + steps: + - runScriptConfig: + image: busybox + shellScript: echo ${FIRST_KEY} && echo ${SECOND_KEY} + env: + FIRST_KEY: VALUE + SECOND_KEY: VALUE2 +``` + +# Secrets + +If you need to use security-sensitive information in your pipeline scripts (like a password), you can pass it in using Kubernetes [secrets]({{}}/rancher/v2.x/en/k8s-in-rancher/secrets/). + +### Prerequisite +Create a secret in the same project as your pipeline, or explicitly in the namespace where pipeline build pods run. +&#13;
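If you prefer to create this secret as a raw Kubernetes manifest instead of through the Rancher UI, a minimal sketch might look like the following; the namespace is a placeholder for wherever your pipeline build pods run, and the names match the YAML example further down:

```yaml
# illustrative sketch only; the namespace is a placeholder
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: <your-pipeline-namespace>
type: Opaque
stringData:
  secret-key: my-password
```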
 + +>**Note:** Secret injection is disabled on [pull request events](#triggers-and-trigger-rules). + +### Configuring Secrets by UI + +1. From the **Global** view, navigate to the project that you want to configure pipelines. + +1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** + +1. From the pipeline for which you want to configure secrets, select **⋮ > Edit Config**. + +1. Within one of the stages, find the **step** that you want to use a secret for, and click the **Edit** icon. + +1. Click **Show advanced options**. + +1. Click **Add From Secret**. Select the secret file that you want to use. Then choose a key. Optionally, you can enter an alias for the key. + +1. Click **Save**. + +### Configuring Secrets by YAML + +```yaml +# example +stages: + - name: Build something + steps: + - runScriptConfig: + image: busybox + shellScript: echo ${ALIAS_ENV} + # environment variables from project secrets + envFrom: + - sourceName: my-secret + sourceKey: secret-key + targetKey: ALIAS_ENV +``` + +# Pipeline Variable Substitution Reference + +For your convenience, the following variables are available for your pipeline configuration scripts. During pipeline executions, these variables are replaced by metadata. You can reference them in the form of `${VAR_NAME}`. + +Variable Name | Description +------------------------|------------------------------------------------------------ +`CICD_GIT_REPO_NAME` | Repository name (GitHub organization omitted). +`CICD_GIT_URL` | URL of the Git repository. +`CICD_GIT_COMMIT` | Git commit ID being executed. +`CICD_GIT_BRANCH` | Git branch of this event. +`CICD_GIT_REF` | Git reference specification of this event. +`CICD_GIT_TAG` | Git tag name, set on tag event. +`CICD_EVENT` | Event that triggered the build (`push`, `pull_request` or `tag`). +`CICD_PIPELINE_ID` | Rancher ID for the pipeline. +`CICD_EXECUTION_SEQUENCE` | Build number of the pipeline. +`CICD_EXECUTION_ID` | Combination of `{CICD_PIPELINE_ID}-{CICD_EXECUTION_SEQUENCE}`. +`CICD_REGISTRY` | Address for the Docker registry for the previous publish image step, available in the Kubernetes manifest file of a `Deploy YAML` step. +`CICD_IMAGE` | Name of the image built from the previous publish image step, available in the Kubernetes manifest file of a `Deploy YAML` step. It does not contain the image tag.&#13;

 [Example](https://github.com/rancher/pipeline-example-go/blob/master/deployment.yaml) + +# Global Pipeline Execution Settings + +After configuring a version control provider, several options can be configured globally to control how pipelines are executed in Rancher. These settings can be edited by selecting **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**. + +- [Executor Quota](#executor-quota) +- [Resource Quota for Executors](#resource-quota-for-executors) +- [Custom CA](#custom-ca) + +### Executor Quota + +Select the maximum number of pipeline executors. The _executor quota_ decides how many builds can run simultaneously in the project. If the number of triggered builds exceeds the quota, subsequent builds will queue until a vacancy opens. By default, the quota is `2`. A value of `0` or less removes the quota limit. + +### Resource Quota for Executors + +_Available as of v2.2.0_ + +Configure compute resources for Jenkins agent containers. When a pipeline execution is triggered, a build pod is dynamically provisioned to run your CI tasks. Under the hood, a build pod consists of one Jenkins agent container and one container for each pipeline step. You can [manage compute resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for every container in the pod. + +Edit the **Memory Reservation**, **Memory Limit**, **CPU Reservation** or **CPU Limit**, then click **Update Limit and Reservation**. + +To configure compute resources for pipeline-step containers, specify them in the `.rancher-pipeline.yml` file. + +In a [step type]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/#step-types), you will provide the following information: + +* **CPU Reservation (`CpuRequest`)**: CPU request for the container of a pipeline step. +* **CPU Limit (`CpuLimit`)**: CPU limit for the container of a pipeline step. +* **Memory Reservation (`MemoryRequest`)**: Memory request for the container of a pipeline step. +* **Memory Limit (`MemoryLimit`)**: Memory limit for the container of a pipeline step. + +```yaml +# example +stages: + - name: Build something + steps: + - runScriptConfig: + image: busybox + shellScript: ls + cpuRequest: 100m + cpuLimit: 1 + memoryRequest: 100Mi + memoryLimit: 1Gi + - publishImageConfig: + dockerfilePath: ./Dockerfile + buildContext: . + tag: repo/app:v1 + cpuRequest: 100m + cpuLimit: 1 + memoryRequest: 100Mi + memoryLimit: 1Gi +``` + +>**Note:** Rancher sets default compute resources for pipeline steps except for `Build and Publish Images` and `Run Script` steps. You can override the default value by specifying compute resources in the same way. + +### Custom CA + +_Available as of v2.2.0_ + +If you want to use a version control provider with a certificate from a custom/internal CA root, the CA root certificates need to be added as part of the version control provider configuration in order for the pipeline build pods to succeed. + +1. Click **Edit cacerts**. + +1. Paste in the CA root certificates and click **Save cacerts**. + +**Result:** Pipelines can be used and new pods will be able to work with the self-signed certificate. + +# Persistent Data for Pipeline Components + +The internal Docker registry and the Minio workloads use ephemeral volumes by default. &#13;
This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes. + +For details on setting up persistent storage for pipelines, refer to [this page.]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/storage) + +# Example rancher-pipeline.yml + +An example pipeline configuration file is on [this page.]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/example) \ No newline at end of file diff --git a/content/rancher/v2.x/en/project-admin/pipelines/docs-for-v2.0.x/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/docs-for-v2.0.x/_index.md similarity index 96% rename from content/rancher/v2.x/en/project-admin/pipelines/docs-for-v2.0.x/_index.md rename to content/rancher/v2.x/en/k8s-in-rancher/pipelines/docs-for-v2.0.x/_index.md index 5febdc414c6..412822f0f96 100644 --- a/content/rancher/v2.x/en/project-admin/pipelines/docs-for-v2.0.x/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/docs-for-v2.0.x/_index.md @@ -3,9 +3,10 @@ title: v2.0.x Pipeline Documentation weight: 9000 aliases: - /rancher/v2.x/en/project-admin/tools/pipelines/docs-for-v2.0.x + - /rancher/v2.x/en/project-admin/pipelines/docs-for-v2.0.x --- ->**Note:** This section describes the pipeline feature as implemented in Rancher v2.0.x. If you are using Rancher v2.1 or later, where pipelines have been significantly improved, please refer to the new documentation for [v2.1 or later]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines). +>**Note:** This section describes the pipeline feature as implemented in Rancher v2.0.x. If you are using Rancher v2.1 or later, where pipelines have been significantly improved, please refer to the new documentation for [v2.1 or later]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/). diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/example-repos/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/example-repos/_index.md index 00ddc2f207f..4b7fddc4b85 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/example-repos/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/example-repos/_index.md @@ -5,15 +5,21 @@ aliases: - /rancher/v2.x/en/tools/pipelines/quick-start-guide/ --- -Rancher ships with several example repositories that you can use to familiarize yourself with pipelines. We recommend configuring and testing the example repository that most resembles your environment before using pipelines with your own repositories in a production environment. Use this example repository as a sandbox for repo configuration, build demonstration, etc. Rancher includes example repositories for: +Rancher ships with several example repositories that you can use to familiarize yourself with pipelines. We recommend configuring and testing the example repository that most resembles your environment before using pipelines with your own repositories in a production environment. Use this example repository as a sandbox for repo configuration, build demonstration, etc. Rancher includes example repositories for: - Go - Maven - php -> **Note:** The example repositories are only available if you have not [configured a version control provider]({{< baseurl >}}/rancher/v2.x/en/project-admin/pipelines). 
+> **Note:** The example repositories are only available if you have not [configured a version control provider]({{}}/rancher/v2.x/en/project-admin/pipelines). -## Configure Repositories +To start using these example repositories, + +1. [Enable the example repositories](#1-enable-the-example-repositories) +2. [View the example pipeline](#2-view-the-example-pipeline) +3. [Run the example pipeline](#3-run-the-example-pipeline) + +### 1. Enable the Example Repositories By default, the example pipeline repositories are disabled. Enable one (or more) to test out the pipeline feature and see how it works. @@ -39,7 +45,7 @@ By default, the example pipeline repositories are disabled. Enable one (or more) - `jenkins` - `minio` -## View the Example Pipeline +### 2. View the Example Pipeline After enabling an example repository, review the pipeline to see how it is set up. @@ -47,11 +53,11 @@ After enabling an example repository, review the pipeline to see how it is set u 1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** -1. Find the example repository, select the vertical **Ellipsis (...)**. There are two ways to view the pipeline: +1. Find the example repository, select the vertical **⋮**. There are two ways to view the pipeline: * **Rancher UI**: Click on **Edit Config** to view the stages and steps of the pipeline. * **YAML**: Click on View/Edit YAML to view the `./rancher-pipeline.yml` file. -## Run the Example Pipeline +### 3. Run the Example Pipeline After enabling an example repository, run the pipeline to see how it works. @@ -59,12 +65,12 @@ After enabling an example repository, run the pipeline to see how it works. 1. Click **Resources > Pipelines.** In versions prior to v2.3.0, click **Workloads > Pipelines.** -1. Find the example repository, select the vertical **Ellipsis (...) > Run**. +1. Find the example repository, select the vertical **⋮ > Run**. >**Note:** When you run a pipeline the first time, it takes a few minutes to pull relevant images and provision necessary pipeline components. **Result:** The pipeline runs. You can see the results in the logs. -## What's Next? +### What's Next? -For detailed information about setting up your own pipeline for your repository, [configure a version control provider]({{< baseurl >}}/rancher/v2.x/en/project-admin/pipelines), [enable a repository](#configure-repositories) and finally [configure your pipeline]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/pipelines/#pipeline-configuration). +For detailed information about setting up your own pipeline for your repository, [configure a version control provider]({{}}/rancher/v2.x/en/project-admin/pipelines), [enable a repository](#configure-repositories) and finally [configure your pipeline]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/#pipeline-configuration). \ No newline at end of file diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/example/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/example/_index.md index 0b756ed4de9..512c87af456 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/example/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/example/_index.md @@ -7,7 +7,9 @@ aliases: Pipelines can be configured either through the UI or using a yaml file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`. -In the [pipeline configuration docs](), we provide examples of each available feature within pipelines. Here is a full example for those who want to jump right in. 
+In the [pipeline configuration reference]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines/config), we provide examples of how to configure each feature using the Rancher UI or using YAML configuration.
+
+Below is a full example `rancher-pipeline.yml` for those who want to jump right in.
```yaml
# example
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/pipelines/storage/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/storage/_index.md
new file mode 100644
index 00000000000..6fec0fa6ccb
--- /dev/null
+++ b/content/rancher/v2.x/en/k8s-in-rancher/pipelines/storage/_index.md
@@ -0,0 +1,103 @@
+---
+title: Configuring Persistent Data for Pipeline Components
+weight: 600
+---
+
+The internal [Docker registry](#how-pipelines-work) and the [Minio](#how-pipelines-work) workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.
+
+This section assumes that you understand how persistent storage works in Kubernetes. For more information, refer to the section on [how storage works.]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/how-storage-works/)
+
+>**Prerequisites (for both parts A and B):**
+>
+>[Persistent volumes]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/) must be available for the cluster.
+
+### A. Configuring Persistent Data for Docker Registry
+
+1. From the project that you're configuring a pipeline for, click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.
+
+1. Find the `docker-registry` workload and select **⋮ > Edit**.
+
+1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:
+
+    - **Add Volume > Add a new persistent volume (claim)**
+    - **Add Volume > Use an existing persistent volume (claim)**
+
+1. Complete the form that displays to choose a persistent volume for the internal Docker registry.
+{{% tabs %}}
+{{% tab "Add a new persistent volume" %}}
+
+1. Enter a **Name** for the volume claim. + +1. Select a volume claim **Source**: + + - If you select **Use a Storage Class to provision a new persistent volume**, select a storage class and enter a **Capacity**. + + - If you select **Use an existing persistent volume**, choose a **Persistent Volume** from the drop-down. +1. From the **Customize** section, choose the read/write access for the volume. + +1. Click **Define**. + +{{% /tab %}} + +{{% tab "Use an existing persistent volume" %}} +
+1. Enter a **Name** for the volume claim.
+
+1. Choose a **Persistent Volume Claim** from the drop-down.
+
+1. From the **Customize** section, choose the read/write access for the volume.
+
+1. Click **Define**.
+
+{{% /tab %}}
+
+{{% /tabs %}}
+
+1. From the **Mount Point** field, enter `/var/lib/registry`, which is the data storage path inside the Docker registry container.
+
+1. Click **Upgrade**.
+
+### B. Configuring Persistent Data for Minio
+
+1. From the project view, click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) Find the `minio` workload and select **⋮ > Edit**.
+
+1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:
+
+    - **Add Volume > Add a new persistent volume (claim)**
+    - **Add Volume > Use an existing persistent volume (claim)**
+
+1. Complete the form that displays to choose a persistent volume for Minio.
+{{% tabs %}}
+
+{{% tab "Add a new persistent volume" %}}
+
+1. Enter a **Name** for the volume claim. + +1. Select a volume claim **Source**: + + - If you select **Use a Storage Class to provision a new persistent volume**, select a storage class and enter a **Capacity**. + + - If you select **Use an existing persistent volume**, choose a **Persistent Volume** from the drop-down. +1. From the **Customize** section, choose the read/write access for the volume. + +1. Click **Define**. + +{{% /tab %}} +{{% tab "Use an existing persistent volume" %}} +
+1. Enter a **Name** for the volume claim. + +1. Choose a **Persistent Volume Claim** from the drop-down. + +1. From the **Customize** section, choose the read/write access for the volume. + +1. Click **Define**. + +{{% /tab %}} +{{% /tabs %}} + +1. From the **Mount Point** field, enter `/data`, which is the data storage path inside the Minio container. + +1. Click **Upgrade**. + +**Result:** Persistent storage is configured for your pipeline components. diff --git a/content/rancher/v2.x/en/k8s-in-rancher/registries/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/registries/_index.md index 80c621c65fb..8fdff42b006 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/registries/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/registries/_index.md @@ -28,9 +28,9 @@ Currently, deployments pull the private registry credentials automatically only 1. Enter a **Name** for the registry. - >**Note:** Kubernetes classifies secrets, certificates, ConfigMaps, and registries all as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your registry must have a unique name among all secrets within your workspace. + >**Note:** Kubernetes classifies secrets, certificates, and registries all as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your registry must have a unique name among all secrets within your workspace. -1. Select a **Scope** for the registry. You can either make the registry available for the entire project or a single [namespace]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces). +1. Select a **Scope** for the registry. You can either make the registry available for the entire project or a single [namespace]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces). 1. Select the website that hosts your private registry. Then enter credentials that authenticate with the registry. For example, if you use DockerHub, provide your DockerHub username and password. diff --git a/content/rancher/v2.x/en/k8s-in-rancher/secrets/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/secrets/_index.md index b6f31611d9e..e251e88271b 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/secrets/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/secrets/_index.md @@ -11,7 +11,7 @@ aliases: When configuring a workload, you'll be able to choose which secrets to include. Like config maps, secrets can be referenced by workloads as either an environment variable or a volume mount. -Any update to an active secrets won't automatically update the pods that are using it. Restart those pods to have them use the new secret. +Mounted secrets will be updated automatically unless they are mounted as subpath volumes. For details on how updated secrets are propagated, refer to the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/configuration/secret/#mounted-secrets-are-updated-automatically) # Creating Secrets @@ -23,9 +23,9 @@ When creating a secret, you can make it available for any deployment within a pr 3. Enter a **Name** for the secret. - >**Note:** Kubernetes classifies secrets, certificates, ConfigMaps, and registries all as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. 
Therefore, to prevent conflicts, your secret must have a unique name among all secrets within your workspace.
+   >**Note:** Kubernetes classifies secrets, certificates, and registries all as [secrets](https://kubernetes.io/docs/concepts/configuration/secret/), and no two secrets in a project or namespace can have duplicate names. Therefore, to prevent conflicts, your secret must have a unique name among all secrets within your workspace.
-4. Select a **Scope** for the secret. You can either make the registry available for the entire project or a single [namespace]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces).
+4. Select a **Scope** for the secret. You can either make the registry available for the entire project or a single [namespace]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces).
5. From **Secret Values**, click **Add Secret Value** to add a key value pair. Add as many values as you need.
@@ -37,10 +37,10 @@ When creating a secret, you can make it available for any deployment within a pr
**Result:** Your secret is added to the project or namespace, depending on the scope you chose. You can view the secret in the Rancher UI from the **Resources > Secrets** view.
-Any update to an active secrets won't automatically update the pods that are using it. Restart those pods to have them use the new secret.
+Mounted secrets will be updated automatically unless they are mounted as subpath volumes. For details on how updated secrets are propagated, refer to the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/configuration/secret/#mounted-secrets-are-updated-automatically)
# What's Next?
Now that you have a secret added to the project or namespace, you can add it to a workload that you deploy.
-For more information on adding secret to a workload, see [Deploying Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/).
+For more information on adding a secret to a workload, see [Deploying Workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/).
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/service-discovery/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/service-discovery/_index.md
index 6b0b289ef04..09334ecc2a0 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/service-discovery/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/service-discovery/_index.md
@@ -8,7 +8,7 @@ aliases:
For every workload created, a complementing Service Discovery entry is created. This Service Discovery entry enables DNS resolution for the workload's pods using the following naming convention: `<workload-name>.<namespace>.svc.cluster.local` (for example, a workload named `web` in the `default` namespace would resolve at `web.default.svc.cluster.local`).
-However, you also have the option of creating additional Service Discovery records. You can use these additional records so that a given [namespace]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) resolves with one or more external IP addresses, an external hostname, an alias to another DNS record, other workloads, or a set of pods that match a selector that you create.
+However, you also have the option of creating additional Service Discovery records. You can use these additional records so that a given [namespace]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) resolves with one or more external IP addresses, an external hostname, an alias to another DNS record, other workloads, or a set of pods that match a selector that you create.
1. From the **Global** view, open the project that you want to add a DNS record to.
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/workloads/_index.md
index 617929af284..3eed10c697d 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/workloads/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/workloads/_index.md
@@ -71,9 +71,9 @@ There are several types of services available in Rancher. The descriptions below
This section of the documentation contains instructions for deploying workloads and using workload options.
-- [Deploy Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/)
-- [Upgrade Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/)
-- [Rollback Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads/)
+- [Deploy Workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/)
+- [Upgrade Workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/)
+- [Rollback Workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads/)
## Related Links
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar/_index.md
index abe6b1c5fe7..36b4355c4f2 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar/_index.md
@@ -10,7 +10,7 @@ A _sidecar_ is a container that extends or enhances the main container in a pod.
1. Click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.
-1. Find the workload that you want to extend. Select **Ellipsis icon (...) > Add a Sidecar**.
+1. Find the workload that you want to extend. Select **⋮ > Add a Sidecar**.
1. Enter a **Name** for the sidecar.
@@ -30,7 +30,7 @@ A _sidecar_ is a container that extends or enhances the main container in a pod.
1. Click **Launch**.
-**Result:** The sidecar is deployed according to your parameters. Following its deployment, you can view the sidecar by selecting **Ellipsis icon (...) > Edit** for the main deployment.
+**Result:** The sidecar is deployed according to your parameters. Following its deployment, you can view the sidecar by selecting **⋮ > Edit** for the main deployment.
## Related Links
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/_index.md
index 123c2fd295f..b899d310fe6 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/_index.md
@@ -14,25 +14,25 @@ Deploy a workload to run an application in one or more containers.
1. Enter a **Name** for the workload.
-1. Select a [workload type]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/). The workload defaults to a scalable deployment, by can change the workload type by clicking **More options.**
+1. Select a [workload type]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/). The workload defaults to a scalable deployment, but you can change the workload type by clicking **More options.**
1. From the **Docker Image** field, enter the name of the Docker image that you want to deploy to the project, optionally prefacing it with the registry host (e.g. `quay.io`, `registry.gitlab.com`, etc.).
During deployment, Rancher pulls this image from the specified public or private registry. If no registry host is provided, Rancher will pull the image from [Docker Hub](https://hub.docker.com/explore/). Enter the name exactly as it appears in the registry server, including any required path, and optionally including the desired tag (e.g. `registry.gitlab.com/user/path/image:tag`). If no tag is provided, the `latest` tag will be automatically used. -1. Either select an existing [namespace]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces), or click **Add to a new namespace** and enter a new namespace. +1. Either select an existing [namespace]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces), or click **Add to a new namespace** and enter a new namespace. -1. Click **Add Port** to enter a port mapping, which enables access to the application inside and outside of the cluster . For more information, see [Services]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/#services). +1. Click **Add Port** to enter a port mapping, which enables access to the application inside and outside of the cluster . For more information, see [Services]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/#services). 1. Configure the remaining options: - **Environment Variables** - Use this section to either specify environment variables for your workload to consume on the fly, or to pull them from another source, such as a secret or [ConfigMap]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/configmaps/). + Use this section to either specify environment variables for your workload to consume on the fly, or to pull them from another source, such as a secret or [ConfigMap]({{}}/rancher/v2.x/en/k8s-in-rancher/configmaps/). - **Node Scheduling** - **Health Check** - **Volumes** - Use this section to add storage for your workload. You can manually specify the volume that you want to add, use a persistent volume claim to dynamically create a volume for the workload, or read data for a volume to use from a file such as a [ConfigMap]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/configmaps/). + Use this section to add storage for your workload. You can manually specify the volume that you want to add, use a persistent volume claim to dynamically create a volume for the workload, or read data for a volume to use from a file such as a [ConfigMap]({{}}/rancher/v2.x/en/k8s-in-rancher/configmaps/). When you are deploying a Stateful Set, you should use a Volume Claim Template when using Persistent Volumes. This will ensure that Persistent Volumes are created dynamically when you scale your Stateful Set. This option is available in the UI as of Rancher v2.2.0. @@ -44,7 +44,7 @@ Deploy a workload to run an application in one or more containers. > >- In [Amazon AWS](https://aws.amazon.com/), the nodes must be in the same Availability Zone and possess IAM permissions to attach/unattach volumes. > - >- The cluster must be using the [AWS cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws) option. For more information on enabling this option see [Creating an Amazon EC2 Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/) or [Creating a Custom Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/). + >- The cluster must be using the [AWS cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws) option. 
For more information on enabling this option see [Creating an Amazon EC2 Cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/) or [Creating a Custom Cluster]({{}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/). 1. Click **Show Advanced Options** and configure: diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads/_index.md index 4be9cd00eaf..d9ad17aab29 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads/_index.md @@ -9,7 +9,7 @@ Sometimes there is a need to rollback to the previous version of the application 1. From the **Global** view, open the project running the workload you want to rollback. -1. Find the workload that you want to rollback and select **Vertical Ellipsis (... ) > Rollback**. +1. Find the workload that you want to rollback and select **Vertical ⋮ (... ) > Rollback**. 1. Choose the revision that you want to roll back to. Click **Rollback**. diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md index 5d47c733ed4..bf9a17e4f3d 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md @@ -8,7 +8,7 @@ When a new version of an application image is released on Docker Hub, you can up 1. From the **Global** view, open the project running the workload you want to upgrade. -1. Find the workload that you want to upgrade and select **Vertical Ellipsis (... ) > Edit**. +1. Find the workload that you want to upgrade and select **Vertical ⋮ (... ) > Edit**. 1. Update the **Docker Image** to the updated version of the application image on Docker Hub. diff --git a/content/rancher/v2.x/en/overview/_index.md b/content/rancher/v2.x/en/overview/_index.md index 92c84b5cb81..9a6b66224c3 100644 --- a/content/rancher/v2.x/en/overview/_index.md +++ b/content/rancher/v2.x/en/overview/_index.md @@ -22,7 +22,7 @@ Rancher provides an intuitive user interface for DevOps engineers to manage thei The following figure illustrates the role Rancher plays in IT and DevOps organizations. Each team deploys their applications on the public or private clouds they choose. IT administrators gain visibility and enforce policies across all users, clusters, and clouds. -![Platform]({{< baseurl >}}/img/rancher/platform.png) +![Platform]({{}}/img/rancher/platform.png) # Features of the Rancher API Server @@ -54,7 +54,7 @@ The Rancher API server is built on top of an embedded Kubernetes API server and # Editing Downstream Clusters with Rancher -The options and settings available for an existing cluster change based on the method that you used to provision it. For example, only clusters [provisioned by RKE]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) have **Cluster Options** available for editing. +The options and settings available for an existing cluster change based on the method that you used to provision it. For example, only clusters [provisioned by RKE]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) have **Cluster Options** available for editing. 
After a cluster is created with Rancher, a cluster administrator can manage cluster membership, enable pod security policies, and manage node pools, among [other options.]({{}}/rancher/v2.x/en/cluster-admin/editing-clusters/) diff --git a/content/rancher/v2.x/en/overview/architecture-recommendations/_index.md b/content/rancher/v2.x/en/overview/architecture-recommendations/_index.md index 016f7a8ce62..f7bf7597106 100644 --- a/content/rancher/v2.x/en/overview/architecture-recommendations/_index.md +++ b/content/rancher/v2.x/en/overview/architecture-recommendations/_index.md @@ -20,20 +20,35 @@ A user cluster is a downstream Kubernetes cluster that runs your apps and servic If you have a Docker installation of Rancher, the node running the Rancher server should be separate from your downstream clusters. -In Kubernetes Installations of Rancher, the Rancher server cluster should also be separate from the user clusters. +In Kubernetes installations of Rancher, the Rancher server cluster should also be separate from the user clusters. ![Separation of Rancher Server from User Clusters]({{}}/img/rancher/rancher-architecture-separation-of-rancher-server.svg) # Why HA is Better for Rancher in Production -We recommend installing the Rancher server on a three-node Kubernetes cluster for production, primarily because it protects the Rancher server data. The Rancher server stores its data in etcd in both single-node and Kubernetes Installations. +We recommend installing the Rancher server on a high-availability Kubernetes cluster, primarily because it protects the Rancher server data. In a high-availability installation, a load balancer serves as the single point of contact for clients, distributing network traffic across multiple servers in the cluster and helping to prevent any one server from becoming a point of failure. -When Rancher is installed on a single node, if the node goes down, there is no copy of the etcd data available on other nodes and you could lose the data on your Rancher server. +We don't recommend installing Rancher in a single Docker container, because if the node goes down, there is no copy of the cluster data available on other nodes and you could lose the data on your Rancher server. -By contrast, in the high-availability installation, +Rancher needs to be installed on either a high-availability [RKE (Rancher Kubernetes Engine)]({{}}/rke/latest/en/) Kubernetes cluster, or a high-availability [K3s (5 less than K8s)]({{}}/k3s/latest/en/) Kubernetes cluster. Both RKE and K3s are fully certified Kubernetes distributions. -- The etcd data is replicated on three nodes in the cluster, providing redundancy and data duplication in case one of the nodes fails. -- A load balancer serves as the single point of contact for clients, distributing network traffic across multiple servers in the cluster and helping to prevent any one server from becoming a point of failure. Note: This [example]({{}}/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nginx/) of how to configure an NGINX server as a basic layer 4 load balancer (TCP). +### K3s Kubernetes Cluster Installations + +If you are installing Rancher v2.4 for the first time, we recommend installing it on a K3s Kubernetes cluster. One main advantage of this K3s architecture is that it allows an external datastore to hold the cluster data, allowing the K3s server nodes to be treated as ephemeral. + +The option to install Rancher on a K3s cluster is a feature introduced in Rancher v2.4. 
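As a rough sketch, assuming the standard K3s installation script and its `--datastore-endpoint` flag, pointing server nodes at a shared external database generally looks like the following; the endpoint, credentials, and database name are placeholders:

```
# Hedged sketch: each K3s server node points at the same external datastore,
# which is what allows the server nodes themselves to be treated as ephemeral.
# The MySQL endpoint, credentials, and database name below are placeholders.
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/kubernetes"
```

Running the same command on a second node gives two interchangeable server nodes in front of the shared datastore.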
K3s is easy to install, with half the memory of Kubernetes, all in a binary less than 50 MB. + +
Architecture of a K3s Kubernetes Cluster Running the Rancher Management Server
+![Architecture of a K3s Kubernetes Cluster Running the Rancher Management Server]({{}}/img/rancher/k3s-server-storage.svg) + +### RKE Kubernetes Cluster Installations + +If you are installing Rancher prior to v2.4, you will need to install Rancher on an RKE cluster, in which the cluster data is stored on each node with the etcd role. As of Rancher v2.4, there is no migration path to transition the Rancher server from an RKE cluster to a K3s cluster. All versions of the Rancher server, including v2.4+, can be installed on an RKE cluster. + +In an RKE installation, the cluster data is replicated on each of three etcd nodes in the cluster, providing redundancy and data duplication in case one of the nodes fails. + +
Architecture of an RKE Kubernetes Cluster Running the Rancher Management Server
+![Architecture of an RKE Kubernetes cluster running the Rancher management server]({{}}/img/rancher/rke-server-storage.svg) # Recommended Load Balancer Configuration for Kubernetes Installations @@ -44,29 +59,42 @@ We recommend the following configurations for the load balancer and Ingress cont * The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443. * The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment. -
Rancher installed on a Kubernetes cluster with layer 4 load balancer, depicting SSL termination at ingress controllers
-![Rancher HA]({{< baseurl >}}/img/rancher/ha/rancher2ha.svg) -Rancher installed on a Kubernetes cluster with Layer 4 load balancer (TCP), depicting SSL termination at ingress controllers +
Rancher installed on a Kubernetes cluster with layer 4 load balancer, depicting SSL termination at Ingress controllers
+![Rancher HA]({{}}/img/rancher/ha/rancher2ha.svg) # Environment for Kubernetes Installations It is strongly recommended to install Rancher on a Kubernetes cluster on hosted infrastructure such as Amazon's EC2 or Google Compute Engine. -For the best performance and greater security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads. +For the best performance and greater security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads. It is not recommended to install Rancher on top of a managed Kubernetes service such as Amazon’s EKS or Google Kubernetes Engine. These hosted Kubernetes solutions do not expose etcd to a degree that is manageable for Rancher, and their customizations can interfere with Rancher operations. -# Recommended Node Roles for Kubernetes Installations +# Recommended Node Roles for Kubernetes Installations -We recommend installing Rancher on a Kubernetes cluster in which each node has all three Kubernetes roles: etcd, controlplane, and worker. +Our recommendations for the roles of each node differ depending on whether Rancher is installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster. -### Comparing Node Roles for the Rancher Server Cluster and User Clusters +### K3s Cluster Roles -Our recommendation for node roles on the Rancher server cluster contrast with our recommendations for the downstream user clusters that run your apps and services. We recommend that each node in a user cluster should have a single role for stability and scalability. +In K3s clusters, there are two types of nodes: server nodes and agent nodes. Both servers and agents can have workloads scheduled on them. Server nodes run the Kubernetes master. + +For the cluster running the Rancher management server, we recommend using two server nodes. Agent nodes are not required. + +### RKE Cluster Roles + +If Rancher is installed on an RKE Kubernetes cluster, the cluster should have three nodes, and each node should have all three Kubernetes roles: etcd, controlplane, and worker. + +### Contrasting RKE Cluster Architecture for Rancher Server and for Downstream Kubernetes Clusters + +Our recommendation for RKE node roles on the Rancher server cluster contrasts with our recommendations for the downstream user clusters that run your apps and services. + +Rancher uses RKE as a library when provisioning downstream Kubernetes clusters. Note: The capability to provision downstream K3s clusters will be added in a future version of Rancher. + +For downstream Kubernetes clusters, we recommend that each node in a user cluster should have a single role for stability and scalability. ![Kubernetes Roles for Nodes in Rancher Server Cluster vs. User Clusters]({{}}/img/rancher/rancher-architecture-node-roles.svg) -Kubernetes only requires at least one node with each role and does not require nodes to be restricted to one role. 
However, for the clusters that run your apps, we recommend separate roles for each node so that workloads on worker nodes don't interfere with the Kubernetes master or cluster data as your services scale. +RKE only requires at least one node with each role and does not require nodes to be restricted to one role. However, for the clusters that run your apps, we recommend separate roles for each node so that workloads on worker nodes don't interfere with the Kubernetes master or cluster data as your services scale. We recommend that downstream user clusters should have at least: @@ -80,9 +108,9 @@ With that said, it is safe to use all three roles on three nodes when setting up * It maintains multiple instances of the master components by having multiple `controlplane` nodes. * No other workloads than Rancher itself should be created on this cluster. -Because no additional workloads will be deployed on the Rancher server cluster, in most cases it is not necessary to use the same architecture that we recommend for the scalability and reliability of user clusters. +Because no additional workloads will be deployed on the Rancher server cluster, in most cases it is not necessary to use the same architecture that we recommend for the scalability and reliability of downstream clusters. -For more best practices for user clusters, refer to the [production checklist]({{}}/rancher/v2.x/en/cluster-provisioning/production) or our [best practices guide.]({{}}/rancher/v2.x/en/best-practices/management/#tips-for-scaling-and-reliability) +For more best practices for downstream clusters, refer to the [production checklist]({{}}/rancher/v2.x/en/cluster-provisioning/production) or our [best practices guide.]({{}}/rancher/v2.x/en/best-practices/management/#tips-for-scaling-and-reliability) # Architecture for an Authorized Cluster Endpoint diff --git a/content/rancher/v2.x/en/overview/architecture/_index.md b/content/rancher/v2.x/en/overview/architecture/_index.md index c28ab874aa8..8c05752602d 100644 --- a/content/rancher/v2.x/en/overview/architecture/_index.md +++ b/content/rancher/v2.x/en/overview/architecture/_index.md @@ -31,13 +31,13 @@ The majority of Rancher 2.x software runs on the Rancher Server. Rancher Server The figure below illustrates the high-level architecture of Rancher 2.x. The figure depicts a Rancher Server installation that manages two downstream Kubernetes clusters: one created by RKE and another created by Amazon EKS (Elastic Kubernetes Service). -For the best performance and security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads. +For the best performance and security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads. The diagram below shows how users can manipulate both [Rancher-launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters and [hosted Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) clusters through Rancher's authentication proxy:
Managing Kubernetes Clusters through Rancher's Authentication Proxy
-![Architecture]({{< baseurl >}}/img/rancher/rancher-architecture-rancher-api-server.svg) +![Architecture]({{}}/img/rancher/rancher-architecture-rancher-api-server.svg) You can install Rancher on a single node, or on a high-availability Kubernetes cluster. @@ -128,7 +128,9 @@ The files mentioned below are needed to maintain, troubleshoot and upgrade your - `kube_config_rancher-cluster.yml`: The Kubeconfig file for the cluster, this file contains credentials for full access to the cluster. You can use this file to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down. - `rancher-cluster.rkestate`: The Kubernetes cluster state file. This file contains credentials for full access to the cluster. Note: This state file is only created when using RKE v0.2.0 or higher. -For more information on connecting to a cluster without the Rancher authentication proxy and other configuration options, refer to the [kubeconfig file]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubeconfig/) documentation. +> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file. + +For more information on connecting to a cluster without the Rancher authentication proxy and other configuration options, refer to the [kubeconfig file]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/) documentation. # Tools for Provisioning Kubernetes Clusters diff --git a/content/rancher/v2.x/en/project-admin/_index.md b/content/rancher/v2.x/en/project-admin/_index.md index 1fa9df84378..508e627147d 100644 --- a/content/rancher/v2.x/en/project-admin/_index.md +++ b/content/rancher/v2.x/en/project-admin/_index.md @@ -18,19 +18,19 @@ Rancher projects resolve this issue by allowing you to apply resources and acces You can use projects to perform actions like: -- [Assign users access to a group of namespaces]({{< baseurl >}}/rancher/v2.x/en/project-admin/project-members) -- Assign users [specific roles in a project]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles). A role can be owner, member, read-only, or [custom]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/) -- [Set resource quotas]({{< baseurl >}}/rancher/v2.x/en/project-admin/resource-quotas/) -- [Manage namespaces]({{< baseurl >}}/rancher/v2.x/en/project-admin/namespaces/) -- [Configure tools]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/) +- [Assign users access to a group of namespaces]({{}}/rancher/v2.x/en/project-admin/project-members) +- Assign users [specific roles in a project]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles). 
A role can be owner, member, read-only, or [custom]({{}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/) +- [Set resource quotas]({{}}/rancher/v2.x/en/project-admin/resource-quotas/) +- [Manage namespaces]({{}}/rancher/v2.x/en/project-admin/namespaces/) +- [Configure tools]({{}}/rancher/v2.x/en/project-admin/tools/) - [Set up pipelines for continuous integration and deployment]({{}}/rancher/v2.x/en/project-admin/pipelines) - [Configure pod security policies]({{}}/rancher/v2.x/en/project-admin/pod-security-policies) ### Authorization -Non-administrative users are only authorized for project access after an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owner or member]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) adds them to the project's **Members** tab. +Non-administrative users are only authorized for project access after an [administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owner or member]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) adds them to the project's **Members** tab. -Whoever creates the project automatically becomes a [project owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles). +Whoever creates the project automatically becomes a [project owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles). ## Switching between Projects diff --git a/content/rancher/v2.x/en/project-admin/namespaces/_index.md b/content/rancher/v2.x/en/project-admin/namespaces/_index.md index b8a400c9a79..82b308daf17 100644 --- a/content/rancher/v2.x/en/project-admin/namespaces/_index.md +++ b/content/rancher/v2.x/en/project-admin/namespaces/_index.md @@ -9,14 +9,14 @@ Although you assign resources at the project level so that each namespace in the Resources that you can assign directly to namespaces include: -- [Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/) -- [Load Balancers/Ingress]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/) -- [Service Discovery Records]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/service-discovery/) -- [Persistent Volume Claims]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/persistent-volume-claims/) -- [Certificates]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/certificates/) -- [ConfigMaps]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/configmaps/) -- [Registries]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/registries/) -- [Secrets]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/secrets/) +- [Workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/) +- [Load Balancers/Ingress]({{}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/) +- [Service Discovery Records]({{}}/rancher/v2.x/en/k8s-in-rancher/service-discovery/) +- [Persistent Volume Claims]({{}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/persistent-volume-claims/) +- [Certificates]({{}}/rancher/v2.x/en/k8s-in-rancher/certificates/) +- [ConfigMaps]({{}}/rancher/v2.x/en/k8s-in-rancher/configmaps/) +- [Registries]({{}}/rancher/v2.x/en/k8s-in-rancher/registries/) +- [Secrets]({{}}/rancher/v2.x/en/k8s-in-rancher/secrets/) To manage permissions in a vanilla Kubernetes 
cluster, cluster admins configure role-based access policies for each namespace. With Rancher, user permissions are assigned on the project level instead, and permissions are automatically inherited by any namespace owned by the particular project.
@@ -27,7 +27,7 @@ To manage permissions in a vanilla Kubernetes cluster, cluster admins configure
Create a new namespace to isolate apps and resources in a project.
-When working with project resources that you can assign to a namespace (i.e., [workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/), [certificates]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/certificates/), [ConfigMaps]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/configmaps), etc.) you can create a namespace on the fly.
+>**Tip:** When working with project resources that you can assign to a namespace (e.g., [workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/), [certificates]({{}}/rancher/v2.x/en/k8s-in-rancher/certificates/), [ConfigMaps]({{}}/rancher/v2.x/en/k8s-in-rancher/configmaps), etc.), you can create a namespace on the fly.
1. From the **Global** view, open the project where you want to create a namespace.
1. From the main menu, select **Namespace**. Then click **Add Namespace**.
-1. **Optional:** If your project has [Resource Quotas]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) in effect, you can override the default resource **Limits** (which places a cap on the resources that the namespace can consume).
+1. **Optional:** If your project has [Resource Quotas]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) in effect, you can override the default resource **Limits** (which places a cap on the resources that the namespace can consume).
1. Enter a **Name** and then click **Create**.
@@ -54,7 +54,7 @@ Cluster admins and members may occasionally need to move a namespace to another project.
>**Notes:**
>
>- Don't move the namespaces in the `System` project. Moving these namespaces can adversely affect cluster networking.
- >- You cannot move a namespace into a project that already has a [resource quota]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/) configured.
+ >- You cannot move a namespace into a project that already has a [resource quota]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/) configured.
>- If you move a namespace from a project that has a quota set to a project with no quota set, the quota is removed from the namespace.
1. Choose a new project for the new namespace and then click **Move**. Alternatively, you can remove the namespace from all projects by selecting **None**.
@@ -65,4 +65,4 @@ You can always override the namespace default limit to provide a specific namespace with access to more (or less) project resources.
-For more information, see how to [edit namespace resource quotas]({{< baseurl >}}/rancher/v2.x/en/project-admin//resource-quotas/override-namespace-default/#editing-namespace-resource-quotas).
\ No newline at end of file
+For more information, see how to [edit namespace resource quotas]({{}}/rancher/v2.x/en/project-admin//resource-quotas/override-namespace-default/#editing-namespace-resource-quotas).
\ No newline at end of file diff --git a/content/rancher/v2.x/en/project-admin/pipelines/_index.md b/content/rancher/v2.x/en/project-admin/pipelines/_index.md index 0c65147cd77..7eea9d66735 100644 --- a/content/rancher/v2.x/en/project-admin/pipelines/_index.md +++ b/content/rancher/v2.x/en/project-admin/pipelines/_index.md @@ -9,8 +9,6 @@ aliases: --- Using Rancher, you can integrate with a GitHub repository to setup a continuous integration (CI) pipeline. -To set up a pipeline, you'll first need to authorize Rancher using your GitHub settings. Directions are provided in the Rancher UI. After authorizing Rancher in GitHub, provide Rancher with a client ID and secret to authenticate. - After configuring Rancher and GitHub, you can deploy containers running Jenkins to automate a pipeline execution: - Build your application from code to image. @@ -19,346 +17,4 @@ After configuring Rancher and GitHub, you can deploy containers running Jenkins - Run unit tests. - Run regression tests. - - - - -A _pipeline_ is a software delivery process that is broken into different stages and steps. Setting up a pipeline can help developers deliver new software as quickly and efficiently as possible. Within Rancher, you can configure pipelines for each of your Rancher projects. - -Typically, pipeline stages include: - -- **Build:** - - Each time code is checked into your repository, the pipeline automatically clones the repo and builds a new iteration of your software. Throughout this process, the software is typically reviewed by automated tests. - -- **Publish:** - - After the build is completed, either a Docker image is built and published to a Docker registry or a catalog template is published. - -- **Deploy:** - - After the artifacts are published, you would release your application so users could start using the updated product. - -Only [administrators]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owners or members]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owners]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can [configure version control providers](#version-control-providers) and [manage global pipeline execution settings](#managing-global-pipeline-execution-settings). Project members can only configure [repositories]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/pipelines/#configuring-repositories) and [pipelines]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/pipelines/#pipeline-configuration). - - -> **Notes:** -> -> - Pipelines were improved in Rancher v2.1. Therefore, if you configured pipelines while using v2.0.x, you'll have to reconfigure them after upgrading to v2.1. -> - Still using v2.0.x? See the pipeline documentation for [previous versions]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/docs-for-v2.0.x). - -## Overview - -Rancher's pipeline provides a simple CI/CD experience. Use it to automatically checkout code, run builds or scripts, publish Docker images or catalog applications, and deploy the updated software to users. - -After enabling the ability to use pipelines in a project, you can configure multiple pipelines in each project. Each pipeline is unique and can be configured independently. - -A pipeline is configured off of a group of files that are checked into source code repositories. Users can configure their pipelines either through the Rancher UI or by adding a `.rancher-pipeline.yml` into the repository. 
- ->**Note:** Rancher's pipeline provides a simple CI/CD experience, but it does not offer the full power and flexibility of and is not a replacement of enterprise-grade Jenkins or other CI tools your team uses. - - -## How Pipelines Work - -When you configure a pipeline in one of your projects, a namespace specifically for the pipeline is automatically created. The following components are deployed to it: - - - **Jenkins:** - - The pipeline's build engine. Because project users do not directly interact with Jenkins, it's managed and locked. - - >**Note:** There is no option to use existing Jenkins deployments as the pipeline engine. - - - **Docker Registry:** - - Out-of-the-box, the default target for your build-publish step is an internal Docker Registry. However, you can make configurations to push to a remote registry instead. The internal Docker Registry is only accessible from cluster nodes and cannot be directly accessed by users. Images are not persisted beyond the lifetime of the pipeline and should only be used in pipeline runs. If you need to access your images outside of pipeline runs, please push to an external registry. - - - **Minio:** - - Minio storage is used to store the logs for pipeline executions. - - >**Note:** The managed Jenkins instance works statelessly, so don't worry about its data persistency. The Docker Registry and Minio instances use ephemeral volumes by default, which is fine for most use cases. If you want to make sure pipeline logs can survive node failures, you can configure persistent volumes for them, as described in [data persistency for pipeline components](#configuring-persistent-data-for-pipeline-components). - -## Pipeline Triggers - -After you configure a pipeline, you can trigger it using different methods: - - -- **Manually:** - - After you configure a pipeline, you can trigger a build using the latest CI definition from Rancher UI. When a pipeline execution is triggered, Rancher dynamically provisions a Kubernetes pod to run your CI tasks and then remove it upon completion. - -- **Automatically:** - - When you enable a repository for a pipeline, webhooks are automatically added to the version control system. When project users interact with the repo—push code, open pull requests, or create a tag—the version control system sends a webhook to Rancher Server, triggering a pipeline execution. - - To use this automation, webhook management permission is required for the repository. Therefore, when users authenticate and fetch their repositories, only those on which they have webhook management permission will be shown. - -## Version Control Providers - -Before you can start [configuring a pipeline]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/pipelines/) for your repository, you must configure and authorize a version control provider. - -| Provider | Available as of | -| --- | --- | -| GitHub | v2.0.0 | -| GitLab | v2.1.0 | -| Bitbucket | v2.2.0 | - -Select your provider's tab below and follow the directions. - -{{% tabs %}} -{{% tab "GitHub" %}} -1. From the **Global** view, navigate to the project that you want to configure pipelines. - -1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**. - -1. Follow the directions displayed to **Setup a Github application**. Rancher redirects you to Github to setup an OAuth App in Github. - -1. From GitHub, copy the **Client ID** and **Client Secret**. Paste them into Rancher. - -1. 
If you're using GitHub for enterprise, select **Use a private github enterprise installation**. Enter the host address of your GitHub installation. - -1. Click **Authenticate**. - -{{% /tab %}} -{{% tab "GitLab" %}} - -_Available as of v2.1.0_ - -1. From the **Global** view, navigate to the project that you want to configure pipelines. - -1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**. - -1. Follow the directions displayed to **Setup a GitLab application**. Rancher redirects you to GitLab. - -1. From GitLab, copy the **Application ID** and **Secret**. Paste them into Rancher. - -1. If you're using GitLab for enterprise setup, select **Use a private gitlab enterprise installation**. Enter the host address of your GitLab installation. - -1. Click **Authenticate**. - ->**Note:** -> 1. Pipeline uses Gitlab [v4 API](https://docs.gitlab.com/ee/api/v3_to_v4.html) and the supported Gitlab version is 9.0+. -> 2. If you use GitLab 10.7+ and your Rancher setup is in a local network, enable the **Allow requests to the local network from hooks and services** option in GitLab admin settings. -{{% /tab %}} -{{% tab "Bitbucket Cloud" %}} - -_Available as of v2.2.0_ - -1. From the **Global** view, navigate to the project that you want to configure pipelines. - -1. Select **Tools > Pipelines** in the navigation bar. - -1. Choose the **Use public Bitbucket Cloud** option. - -1. Follow the directions displayed to **Setup a Bitbucket Cloud application**. Rancher redirects you to Bitbucket to setup an OAuth consumer in Bitbucket. - -1. From Bitbucket, copy the consumer **Key** and **Secret**. Paste them into Rancher. - -1. Click **Authenticate**. - -{{% /tab %}} -{{% tab "Bitbucket Server" %}} - -_Available as of v2.2.0_ - -1. From the **Global** view, navigate to the project that you want to configure pipelines. - -1. Select **Tools > Pipelines** in the navigation bar. - -1. Choose the **Use private Bitbucket Server setup** option. - -1. Follow the directions displayed to **Setup a Bitbucket Server application**. - -1. Enter the host address of your Bitbucket server installation. - -1. Click **Authenticate**. - ->**Note:** -> Bitbucket server needs to do SSL verification when sending webhooks to Rancher. Please ensure that Rancher server's certificate is trusted by the Bitbucket server. There are two options: -> -> 1. Setup Rancher server with a certificate from a trusted CA. -> 1. If you're using self-signed certificates, import Rancher server's certificate to the Bitbucket server. For instructions, see the Bitbucket server documentation for [configuring self-signed certificates](https://confluence.atlassian.com/bitbucketserver/if-you-use-self-signed-certificates-938028692.html). -> -{{% /tab %}} -{{% /tabs %}} - -**Result:** After the version control provider is authenticated, you will be automatically re-directed to start [configuring which repositories]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/pipelines/#configuring-repositories) that you want start using with a pipeline. Once a repository is enabled, you can start to [configure the pipeline]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/pipelines/#pipeline-configuration). - -## Managing Global Pipeline Execution Settings - -After configuring a version control provider, there are several options that can be configured globally on how [pipelines]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/pipelines/) are executed in Rancher. - -1. 
From the **Global** view, navigate to the project that you want to configure pipelines.
-
-1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**.
-
-1. Edit the different settings:
-
-    {{% accordion id="executor-quota" label="Executor Quota" %}}
-
-Select the maximum number of pipeline executors. The _executor quota_ decides how many builds can run simultaneously in the project. If the number of triggered builds exceeds the quota, subsequent builds will queue until a vacancy opens. By default, the quota is `2`. A value of `0` or less removes the quota limit.
-    {{% /accordion %}}
-
-    {{% accordion id="resource-quota" label="Resource Quota for Executors" %}}
-
-_Available as of v2.2.0_
-
-Configure compute resources for Jenkins agent containers. When a pipeline execution is triggered, a build pod is dynamically provisioned to run your CI tasks. Under the hood, a build pod consists of one Jenkins agent container and one container for each pipeline step. You can [manage compute resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for every container in the pod.
-
-Edit the **Memory Reservation**, **Memory Limit**, **CPU Reservation** or **CPU Limit**, then click **Update Limit and Reservation**.
-
-To configure compute resources for pipeline-step containers:
-{{% tabs %}}
-{{% tab "By YAML" %}}
-
-You can configure compute resources for pipeline-step containers in the `.rancher-pipeline.yml` file.
-
-In a [step type]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/pipelines/#step-types), you will provide the following information:
-
-* **CPU Reservation (`CpuRequest`)**: CPU request for the container of a pipeline step.
-* **CPU Limit (`CpuLimit`)**: CPU limit for the container of a pipeline step.
-* **Memory Reservation (`MemoryRequest`)**: Memory request for the container of a pipeline step.
-* **Memory Limit (`MemoryLimit`)**: Memory limit for the container of a pipeline step.
-
-```yaml
-# example
-stages:
-  - name: Build something
-    steps:
-    - runScriptConfig:
-        image: busybox
-        shellScript: ls
-      cpuRequest: 100m
-      cpuLimit: 1
-      memoryRequest: 100Mi
-      memoryLimit: 1Gi
-    - publishImageConfig:
-        dockerfilePath: ./Dockerfile
-        buildContext: .
-        tag: repo/app:v1
-      cpuRequest: 100m
-      cpuLimit: 1
-      memoryRequest: 100Mi
-      memoryLimit: 1Gi
-```
-
->**Note:** Rancher sets default compute resources for pipeline steps except for `Build and Publish Images` and `Run Script` steps. You can override the default value by specifying compute resources in the same way.
-{{% /tab %}}
-{{% /tabs %}}
-
-    {{% /accordion %}}
-    {{% accordion id="cacerts" label="Custom CA" %}}
-
-_Available as of v2.2.0_
-
-If you want to use a version control provider with a certificate from a custom/internal CA root, the CA root certificates need to be added as part of the version control provider configuration in order for the pipeline build pods to succeed.
-
-1. Click **Edit cacerts**.
-
-1. Paste in the CA root certificates and click **Save cacerts**.
-
-**Result:** Pipelines can be used and new pods will be able to work with the self-signed certificate.
-
-    {{% /accordion %}}
-
-## Configuring Persistent Data for Pipeline Components
-
-The internal [Docker registry](#how-pipelines-work) and the [Minio](#how-pipelines-work) workloads use ephemeral volumes by default.
This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.
-
->**Prerequisites (for both parts A and B):**
->
->[Persistent volumes]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#persistent-volumes) must be available for the cluster.
-
-### A. Configuring Persistent Data for Docker Registry
-
-1. From the project that you're configuring a pipeline for, click **Resources > Workloads.** In versions prior to v2.3.0, select the **Workloads** tab.
-
-1. Find the `docker-registry` workload and select **Ellipsis (...) > Edit**.
-
-1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:
-
-    - **Add Volume > Add a new persistent volume (claim)**
-    - **Add Volume > Use an existing persistent volume (claim)**
-
-1. Complete the form that displays to choose a persistent volume for the internal Docker registry.
-{{% tabs %}}
-
-{{% tab "Add a new persistent volume" %}}
-
-1. Enter a **Name** for the volume claim. - -1. Select a volume claim **Source**: - - - If you select **Use a Storage Class to provision a new persistent volume**, select a [Storage Class]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#storage-classes) and enter a **Capacity**. - - - If you select **Use an existing persistent volume**, choose a **Persistent Volume** from the drop-down. -1. From the **Customize** section, choose the read/write access for the volume. - -1. Click **Define**. - -{{% /tab %}} - -{{% tab "Use an existing persistent volume" %}} -
-1. Enter a **Name** for the volume claim.
-
-1. Choose a **Persistent Volume Claim** from the drop-down.
-
-1. From the **Customize** section, choose the read/write access for the volume.
-
-1. Click **Define**.
-
-{{% /tab %}}
-
-{{% /tabs %}}
-
-1. From the **Mount Point** field, enter `/var/lib/registry`, which is the data storage path inside the Docker registry container.
-
-1. Click **Upgrade**.
-
-### B. Configuring Persistent Data for Minio
-
-1. From the project view, click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) Find the `minio` workload and select **Ellipsis (...) > Edit**.
-
-1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:
-
-    - **Add Volume > Add a new persistent volume (claim)**
-    - **Add Volume > Use an existing persistent volume (claim)**
-
-1. Complete the form that displays to choose a persistent volume for Minio.
-{{% tabs %}}
-
-{{% tab "Add a new persistent volume" %}}
-
-1. Enter a **Name** for the volume claim. - -1. Select a volume claim **Source**: - - - If you select **Use a Storage Class to provision a new persistent volume**, select a [Storage Class]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#storage-classes) and enter a **Capacity**. - - - If you select **Use an existing persistent volume**, choose a **Persistent Volume** from the drop-down. -1. From the **Customize** section, choose the read/write access for the volume. - -1. Click **Define**. - -{{% /tab %}} - -{{% tab "Use an existing persistent volume" %}} -
-1. Enter a **Name** for the volume claim. - -1. Choose a **Persistent Volume Claim** from the drop-down. - -1. From the **Customize** section, choose the read/write access for the volume. - -1. Click **Define**. - -{{% /tab %}} - -{{% /tabs %}} - -1. From the **Mount Point** field, enter `/data`, which is the data storage path inside the Minio container. - -1. Click **Upgrade**. - -**Result:** Persistent storage is configured for your pipeline components. +For details, refer to the [pipelines]({{}}/rancher/v2.x/en/k8s-in-rancher/pipelines) section. \ No newline at end of file diff --git a/content/rancher/v2.x/en/project-admin/pod-security-policies/_index.md b/content/rancher/v2.x/en/project-admin/pod-security-policies/_index.md index e92356c11c6..e7c01b2aec9 100644 --- a/content/rancher/v2.x/en/project-admin/pod-security-policies/_index.md +++ b/content/rancher/v2.x/en/project-admin/pod-security-policies/_index.md @@ -3,20 +3,20 @@ title: Pod Security Policies weight: 5600 --- -> These cluster options are only available for [clusters in which Rancher has launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). +> These cluster options are only available for [clusters in which Rancher has launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). You can always assign a pod security policy (PSP) to an existing project if you didn't assign one during creation. ### Prerequisites -- Create a Pod Security Policy within Rancher. Before you can assign a default PSP to an existing project, you must have a PSP available for assignment. For instruction, see [Creating Pod Security Policies]({{< baseurl >}}/rancher/v2.x/en/admin-settings/pod-security-policies/). -- Assign a default Pod Security Policy to the project's cluster. You can't assign a PSP to a project until one is already applied to the cluster. For more information, see [Existing Cluster: Adding a Pod Security Policy]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/editing-clusters/#adding-changing-a-pod-security-policy). +- Create a Pod Security Policy within Rancher. Before you can assign a default PSP to an existing project, you must have a PSP available for assignment. For instruction, see [Creating Pod Security Policies]({{}}/rancher/v2.x/en/admin-settings/pod-security-policies/). +- Assign a default Pod Security Policy to the project's cluster. You can't assign a PSP to a project until one is already applied to the cluster. For more information, see [Existing Cluster: Adding a Pod Security Policy]({{}}/rancher/v2.x/en/k8s-in-rancher/editing-clusters/#adding-changing-a-pod-security-policy). ### Applying a Pod Security Policy 1. From the **Global** view, find the cluster containing the project you want to apply a PSP to. 1. From the main menu, select **Projects/Namespaces**. -1. Find the project that you want to add a PSP to. From that project, select **Vertical Ellipsis (...) > Edit**. +1. Find the project that you want to add a PSP to. From that project, select **⋮ > Edit**. 1. From the **Pod Security Policy** drop-down, select the PSP you want to apply to the project. 
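The policy offered in that drop-down is an ordinary Kubernetes `PodSecurityPolicy` object that was created ahead of time (see the prerequisites above). As a rough, hypothetical illustration only — the name and rules below are placeholders, not a Rancher default — a restrictive policy might look like this:

```yaml
# Illustrative only: a restrictive PodSecurityPolicy that disallows privileged
# containers and host namespaces. The name "restricted-example" is hypothetical.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false
  allowPrivilegeEscalation: false
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - emptyDir
    - secret
    - persistentVolumeClaim
```
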
Assigning a PSP to a project will: diff --git a/content/rancher/v2.x/en/project-admin/project-members/_index.md b/content/rancher/v2.x/en/project-admin/project-members/_index.md index 00c97f2098a..c1848a0de7c 100644 --- a/content/rancher/v2.x/en/project-admin/project-members/_index.md +++ b/content/rancher/v2.x/en/project-admin/project-members/_index.md @@ -10,11 +10,11 @@ If you want to provide a user with access and permissions to _specific_ projects You can add members to a project as it is created, or add them to an existing project. ->**Tip:** Want to provide a user with access to _all_ projects within a cluster? See [Adding Cluster Members]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/cluster-members/) instead. +>**Tip:** Want to provide a user with access to _all_ projects within a cluster? See [Adding Cluster Members]({{}}/rancher/v2.x/en/cluster-provisioning/cluster-members/) instead. ### Adding Members to a New Project -You can add members to a project as you create it (recommended if possible). For details on creating a new project, refer to the [cluster administration section.]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/) +You can add members to a project as you create it (recommended if possible). For details on creating a new project, refer to the [cluster administration section.]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/) ### Adding Members to an Existing Project @@ -36,7 +36,7 @@ Following project creation, you can add users as project members so that they ca 1. Assign the user or group **Project** roles. - [What are Project Roles?]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/) + [What are Project Roles?]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/) >**Notes:** > @@ -44,8 +44,8 @@ Following project creation, you can add users as project members so that they ca > >- For `Custom` roles, you can modify the list of individual roles available for assignment. > - > - To add roles to the list, [Add a Custom Role]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles). - > - To remove roles from the list, [Lock/Unlock Roles]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/). + > - To add roles to the list, [Add a Custom Role]({{}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles). + > - To remove roles from the list, [Lock/Unlock Roles]({{}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/). **Result:** The chosen users are added to the project. diff --git a/content/rancher/v2.x/en/project-admin/resource-quotas/_index.md b/content/rancher/v2.x/en/project-admin/resource-quotas/_index.md index 03bcf25570a..e4df1e5fdee 100644 --- a/content/rancher/v2.x/en/project-admin/resource-quotas/_index.md +++ b/content/rancher/v2.x/en/project-admin/resource-quotas/_index.md @@ -9,15 +9,15 @@ In situations where several teams share a cluster, one team may overconsume the This page is a how-to guide for creating resource quotas in existing projects. -Resource quotas can also be set when a new project is created. For details, refer to the section on [creating new projects.]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/projects-and-namespaces/#creating-projects) +Resource quotas can also be set when a new project is created. 
For details, refer to the section on [creating new projects.]({{}}/rancher/v2.x/en/cluster-admin/projects-and-namespaces/#creating-projects) -> Resource quotas in Rancher include the same functionality as the [native version of Kubernetes](https://kubernetes.io/docs/concepts/policy/resource-quotas/). However, in Rancher, resource quotas have been extended so that you can apply them to [projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects). For details on how resource quotas work with projects in Rancher, refer to [this page.](./quotas-for-projects) +> Resource quotas in Rancher include the same functionality as the [native version of Kubernetes](https://kubernetes.io/docs/concepts/policy/resource-quotas/). However, in Rancher, resource quotas have been extended so that you can apply them to [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects). For details on how resource quotas work with projects in Rancher, refer to [this page.](./quotas-for-projects) ### Applying Resource Quotas to Existing Projects _Available as of v2.0.1_ -Edit [resource quotas]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) when: +Edit [resource quotas]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) when: - You want to limit the resources that a project and its namespaces can use. - You want to scale the resources available to a project up or down when a research quota is already in effect. @@ -26,11 +26,11 @@ Edit [resource quotas]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-a 1. From the main menu, select **Projects/Namespaces**. -1. Find the project that you want to add a resource quota to. From that project, select **Ellipsis (...) > Edit**. +1. Find the project that you want to add a resource quota to. From that project, select **⋮ > Edit**. 1. Expand **Resource Quotas** and click **Add Quota**. Alternatively, you can edit existing quotas. -1. Select a [Resource Type]({{< baseurl >}}/rancher/v2.x/en/project-admin/resource-quotas/#resource-quota-types). +1. Select a [Resource Type]({{}}/rancher/v2.x/en/project-admin/resource-quotas/#resource-quota-types). 1. Enter values for the **Project Limit** and the **Namespace Default Limit**. diff --git a/content/rancher/v2.x/en/project-admin/resource-quotas/override-container-default/_index.md b/content/rancher/v2.x/en/project-admin/resource-quotas/override-container-default/_index.md index bd9d1517459..5d3bf362301 100644 --- a/content/rancher/v2.x/en/project-admin/resource-quotas/override-container-default/_index.md +++ b/content/rancher/v2.x/en/project-admin/resource-quotas/override-container-default/_index.md @@ -13,14 +13,14 @@ To avoid setting these limits on each and every container during workload creati _Available as of v2.2.0_ -Edit [container default resource limit]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/#setting-container-default-resource-limit) when: +Edit [container default resource limit]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/#setting-container-default-resource-limit) when: - You have a CPU or Memory resource quota set on a project, and want to supply the corresponding default values for a container. - You want to edit the default container resource limit. 1. From the **Global** view, open the cluster containing the project to which you want to edit the container default resource limit. 1. 
From the main menu, select **Projects/Namespaces**. -1. Find the project that you want to edit the container default resource limit. From that project, select **Ellipsis (...) > Edit**. +1. Find the project that you want to edit the container default resource limit. From that project, select **⋮ > Edit**. 1. Expand **Container Default Resource Limit** and edit the values. ### Resource Limit Propagation diff --git a/content/rancher/v2.x/en/project-admin/resource-quotas/override-namespace-default/_index.md b/content/rancher/v2.x/en/project-admin/resource-quotas/override-namespace-default/_index.md index 0501008f985..f87f5612e06 100644 --- a/content/rancher/v2.x/en/project-admin/resource-quotas/override-namespace-default/_index.md +++ b/content/rancher/v2.x/en/project-admin/resource-quotas/override-namespace-default/_index.md @@ -5,26 +5,26 @@ weight: 2 Although the **Namespace Default Limit** propagates from the project to each namespace, in some cases, you may need to increase (or decrease) the performance for a specific namespace. In this situation, you can override the default limits by editing the namespace. -In the diagram below, the Rancher administrator has a resource quota in effect for their project. However, the administrator wants to override the namespace limits for `Namespace 3` so that it performs better. Therefore, the administrator [raises the namespace limits]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#editing-namespace-resource-quotas) for `Namespace 3` so that the namespace can access more resources. +In the diagram below, the Rancher administrator has a resource quota in effect for their project. However, the administrator wants to override the namespace limits for `Namespace 3` so that it performs better. Therefore, the administrator [raises the namespace limits]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#editing-namespace-resource-quotas) for `Namespace 3` so that the namespace can access more resources. Namespace Default Limit Override -![Namespace Default Limit Override]({{< baseurl >}}/img/rancher/rancher-resource-quota-override.svg) +![Namespace Default Limit Override]({{}}/img/rancher/rancher-resource-quota-override.svg) -How to: [Editing Namespace Resource Quotas]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#editing-namespace-resource-quotas) +How to: [Editing Namespace Resource Quotas]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#editing-namespace-resource-quotas) ### Editing Namespace Resource Quotas -If there is a [resource quota]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) configured for a project, you can override the namespace default limit to provide a specific namespace with access to more (or less) project resources. +If there is a [resource quota]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) configured for a project, you can override the namespace default limit to provide a specific namespace with access to more (or less) project resources. 1. From the **Global** view, open the cluster that contains the namespace for which you want to edit the resource quota. 1. From the main menu, select **Projects/Namespaces**. -1. Find the namespace for which you want to edit the resource quota. Select **Ellipsis (...) > Edit**. +1. Find the namespace for which you want to edit the resource quota. Select **⋮ > Edit**. 1. Edit the Resource Quota **Limits**. 
These limits determine the resources available to the namespace. The limits must be set within the configured project limits. - For more information about each **Resource Type**, see [Resource Quota Types]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/#resource-quota-types). + For more information about each **Resource Type**, see [Resource Quota Types]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/#resource-quota-types). >**Note:** > diff --git a/content/rancher/v2.x/en/project-admin/resource-quotas/quotas-for-projects/_index.md b/content/rancher/v2.x/en/project-admin/resource-quotas/quotas-for-projects/_index.md index 73d7c180f80..3b1691f60b0 100644 --- a/content/rancher/v2.x/en/project-admin/resource-quotas/quotas-for-projects/_index.md +++ b/content/rancher/v2.x/en/project-admin/resource-quotas/quotas-for-projects/_index.md @@ -3,16 +3,16 @@ title: How Resource Quotas Work in Rancher Projects weight: 1 --- -Resource quotas in Rancher include the same functionality as the [native version of Kubernetes](https://kubernetes.io/docs/concepts/policy/resource-quotas/). However, in Rancher, resource quotas have been extended so that you can apply them to [projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects). +Resource quotas in Rancher include the same functionality as the [native version of Kubernetes](https://kubernetes.io/docs/concepts/policy/resource-quotas/). However, in Rancher, resource quotas have been extended so that you can apply them to [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects). In a standard Kubernetes deployment, resource quotas are applied to individual namespaces. However, you cannot apply the quota to your namespaces simultaneously with a single action. Instead, the resource quota must be applied multiple times. In the following diagram, a Kubernetes administrator is trying to enforce a resource quota without Rancher. The administrator wants to apply a resource quota that sets the same CPU and memory limit to every namespace in his cluster (`Namespace 1-4`) . However, in the base version of Kubernetes, each namespace requires a unique resource quota. The administrator has to create four different resource quotas that have the same specs configured (`Resource Quota 1-4`) and apply them individually. Base Kubernetes: Unique Resource Quotas Being Applied to Each Namespace -![Native Kubernetes Resource Quota Implementation]({{< baseurl >}}/img/rancher/kubernetes-resource-quota.svg) +![Native Kubernetes Resource Quota Implementation]({{}}/img/rancher/kubernetes-resource-quota.svg) -Resource quotas are a little different in Rancher. In Rancher, you apply a resource quota to the [project]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects), and then the quota propagates to each namespace, whereafter Kubernetes enforces your limits using the native version of resource quotas. If you want to change the quota for a specific namespace, you can [override it](#overriding-the-default-limit-for-a-namespace). +Resource quotas are a little different in Rancher. In Rancher, you apply a resource quota to the [project]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects), and then the quota propagates to each namespace, whereafter Kubernetes enforces your limits using the native version of resource quotas. 
If you want to change the quota for a specific namespace, you can [override it](#overriding-the-default-limit-for-a-namespace). The resource quota includes two limits, which you set while creating or editing a project: @@ -28,7 +28,7 @@ The resource quota includes two limits, which you set while creating or editing In the following diagram, a Rancher administrator wants to apply a resource quota that sets the same CPU and memory limit for every namespace in their project (`Namespace 1-4`). However, in Rancher, the administrator can set a resource quota for the project (`Project Resource Quota`) rather than individual namespaces. This quota includes resource limits for both the entire project (`Project Limit`) and individual namespaces (`Namespace Default Limit`). Rancher then propagates the `Namespace Default Limit` quotas to each namespace (`Namespace Resource Quota`). Rancher: Resource Quotas Propagating to Each Namespace -![Rancher Resource Quota Implementation]({{< baseurl >}}/img/rancher/rancher-resource-quota.svg) +![Rancher Resource Quota Implementation]({{}}/img/rancher/rancher-resource-quota.svg) The following table explains the key differences between the two quota types. diff --git a/content/rancher/v2.x/en/project-admin/tools/alerts/_index.md b/content/rancher/v2.x/en/project-admin/tools/alerts/_index.md index fa9c7b0bdaa..786722a3827 100644 --- a/content/rancher/v2.x/en/project-admin/tools/alerts/_index.md +++ b/content/rancher/v2.x/en/project-admin/tools/alerts/_index.md @@ -9,7 +9,7 @@ Notifiers and alerts are built on top of the [Prometheus Alertmanager](https://p Before you can receive alerts, one or more [notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers) must be configured at the cluster level. -Only [administrators]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owners or members]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owners]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can manage project alerts. +Only [administrators]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owners or members]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owners]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can manage project alerts. This section covers the following topics: @@ -20,7 +20,7 @@ This section covers the following topics: ## Alerts Scope -The scope for alerts can be set at either the [cluster level]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/alerts/) or project level. +The scope for alerts can be set at either the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/) or project level. At the project level, Rancher monitors specific deployments and sends alerts for: @@ -123,13 +123,13 @@ This alert type monitors for the availability of all workloads marked with tags
_Available as of v2.2.4_ -If you enable [project monitoring]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/#monitoring), this alert type monitors for the overload from Prometheus expression querying. +If you enable [project monitoring]({{}}/rancher/v2.x/en/project-admin/tools/#monitoring), this alert type monitors for the overload from Prometheus expression querying. 1. Input or select an **Expression**, the drop down shows the original metrics from Prometheus, including: - [**Container**](https://github.com/google/cadvisor) - [**Kubernetes Resources**](https://github.com/kubernetes/kube-state-metrics) - - [**Customize**]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/monitoring/#project-metrics) + - [**Customize**]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#project-metrics) - [**Project Level Grafana**](http://docs.grafana.org/administration/metrics/) - **Project Level Prometheus** @@ -167,7 +167,7 @@ If you enable [project monitoring]({{< baseurl >}}/rancher/v2.x/en/project-admin 1. Continue adding more **Alert Rule** to the group. -1. Finally, choose the [notifiers]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) that send you alerts. +1. Finally, choose the [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) that send you alerts. - You can set up multiple notifiers. - You can change notifier recipients on the fly. diff --git a/content/rancher/v2.x/en/project-admin/tools/logging/_index.md b/content/rancher/v2.x/en/project-admin/tools/logging/_index.md index 5e842ce96c7..8c60ddf64eb 100644 --- a/content/rancher/v2.x/en/project-admin/tools/logging/_index.md +++ b/content/rancher/v2.x/en/project-admin/tools/logging/_index.md @@ -17,7 +17,7 @@ Rancher supports the following services: >**Note:** You can only configure one logging service per cluster or per project. -Only [administrators]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owners or members]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owners]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can configure Rancher to send Kubernetes logs to a logging service. +Only [administrators]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owners or members]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owners]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can configure Rancher to send Kubernetes logs to a logging service. ## Requirements @@ -41,7 +41,7 @@ Setting up a logging service to collect logs from your cluster/project has sever You can configure logging at either cluster level or project level. -- [Cluster logging]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/) writes logs for every pod in the cluster, i.e. in all the projects. For [RKE clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters), it also writes logs for all the Kubernetes system components. +- [Cluster logging]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/) writes logs for every pod in the cluster, i.e. in all the projects. For [RKE clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters), it also writes logs for all the Kubernetes system components. - Project logging writes logs for every pod in that particular project. @@ -59,11 +59,11 @@ Logs that are sent to your logging service are from the following locations: 1. 
Select a logging service and enter the configuration. Refer to the specific service for detailed configuration. Rancher supports the following services: - - [Elasticsearch]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/elasticsearch/) - - [Splunk]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/splunk/) - - [Kafka]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/kafka/) - - [Syslog]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/syslog/) - - [Fluentd]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/fluentd/) + - [Elasticsearch]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/elasticsearch/) + - [Splunk]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/splunk/) + - [Kafka]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/kafka/) + - [Syslog]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/syslog/) + - [Fluentd]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/fluentd/) 1. (Optional) Instead of using the UI to configure the logging services, you can enter custom advanced configurations by clicking on **Edit as File**, which is located above the logging targets. This link is only visible after you select a logging service. diff --git a/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md b/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md index 7174c065867..c5372b0ba6d 100644 --- a/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md +++ b/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md @@ -19,19 +19,19 @@ This section covers the following topics: ### Monitoring Scope -Using Prometheus, you can monitor Rancher at both the [cluster level]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and project level. For each cluster and project that is enabled for monitoring, Rancher deploys a Prometheus server. +Using Prometheus, you can monitor Rancher at both the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and project level. For each cluster and project that is enabled for monitoring, Rancher deploys a Prometheus server. -- [Cluster monitoring]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) allows you to view the health of your Kubernetes cluster. Prometheus collects metrics from the cluster components below, which you can view in graphs and charts. +- [Cluster monitoring]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) allows you to view the health of your Kubernetes cluster. Prometheus collects metrics from the cluster components below, which you can view in graphs and charts. - - [Kubernetes control plane]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#kubernetes-components-metrics) - - [etcd database]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#etcd-metrics) - - [All nodes (including workers)]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#cluster-metrics) + - [Kubernetes control plane]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#kubernetes-components-metrics) + - [etcd database]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#etcd-metrics) + - [All nodes (including workers)]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#cluster-metrics) - Project monitoring allows you to view the state of pods running in a given project. Prometheus collects metrics from the project's deployed HTTP and TCP/UDP workloads. 
### Permissions to Configure Project Monitoring -Only [administrators]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owners or members]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owners]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can configure project level monitoring. Project members can only view monitoring metrics. +Only [administrators]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), [cluster owners or members]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), or [project owners]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can configure project level monitoring. Project members can only view monitoring metrics. ### Enabling Project Monitoring @@ -41,7 +41,7 @@ Only [administrators]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global 1. Select **Tools > Monitoring** in the navigation bar. -1. Select **Enable** to show the [Prometheus configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/). Enter in your desired configuration options. +1. Select **Enable** to show the [Prometheus configuration options]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/). Enter in your desired configuration options. 1. Click **Save**. @@ -53,11 +53,11 @@ Prometheus|750m| 750Mi | 1000m | 1000Mi | Yes Grafana | 100m | 100Mi | 200m | 200Mi | No -**Result:** A single application,`project-monitoring`, is added as an [application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) to the project. After the application is `active`, you can start viewing [project metrics](#project-metrics) through the [Rancher dashboard]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#rancher-dashboard) or directly from [Grafana]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#grafana). +**Result:** A single application,`project-monitoring`, is added as an [application]({{}}/rancher/v2.x/en/catalog/apps/) to the project. After the application is `active`, you can start viewing [project metrics](#project-metrics) through the [Rancher dashboard]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#rancher-dashboard) or directly from [Grafana]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#grafana). ### Project Metrics -[Workload metrics]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#workload-metrics) are available for the project if monitoring is enabled at the [cluster level]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and at the [project level.](#enabling-project-monitoring) +[Workload metrics]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#workload-metrics) are available for the project if monitoring is enabled at the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and at the [project level.](#enabling-project-monitoring) You can monitor custom metrics from any [exporters.](https://prometheus.io/docs/instrumenting/exporters/) You can also expose some custom endpoints on deployments without needing to configure Prometheus for your project. 
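One common way to surface such a custom endpoint is to add the conventional `prometheus.io/*` scrape annotations to the workload's pod template. The sketch below is illustrative only: it assumes the project-level Prometheus honors these annotations and that your container already serves Prometheus-format metrics; the workload name, port, and path are placeholders.

```yaml
# A minimal sketch: annotate a deployment's pod template so a Prometheus server
# that honors the conventional prometheus.io/* annotations scrapes it.
# The workload name, port, and path below are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
    spec:
      containers:
        - name: my-app
          image: example/my-app:latest
          ports:
            - containerPort: 8080
```
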
diff --git a/content/rancher/v2.x/en/quick-start-guide/_index.md b/content/rancher/v2.x/en/quick-start-guide/_index.md index 630450f42d2..be103b469ef 100644 --- a/content/rancher/v2.x/en/quick-start-guide/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/_index.md @@ -4,14 +4,14 @@ metaDescription: Use this section to jump start your Rancher deployment and test short title: Use this section to jump start your Rancher deployment and testing. It contains instructions for a simple Rancher setup and some common use cases. weight: 25 --- ->**Note:** The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation]({{< baseurl >}}/rancher/v2.x/en/installation/). +>**Note:** The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation]({{}}/rancher/v2.x/en/installation/). Howdy buckaroos! Use this section of the docs to jump start your deployment and testing of Rancher 2.x! It contains instructions for a simple Rancher setup and some common use cases. We plan on adding more content to this section in the future. We have Quick Start Guides for: -- [Deploying Rancher Server]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/deployment/): Get started running Rancher using the method most convenient for you. +- [Deploying Rancher Server]({{}}/rancher/v2.x/en/quick-start-guide/deployment/): Get started running Rancher using the method most convenient for you. -- [Deploying Workloads]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/workload/): Deploy a simple workload and expose it, letting you access it from outside the cluster. +- [Deploying Workloads]({{}}/rancher/v2.x/en/quick-start-guide/workload/): Deploy a simple workload and expose it, letting you access it from outside the cluster. -- [Using the CLI]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/cli/): Use `kubectl` or Rancher command line interface (CLI) to interact with your Rancher instance. +- [Using the CLI]({{}}/rancher/v2.x/en/quick-start-guide/cli/): Use `kubectl` or Rancher command line interface (CLI) to interact with your Rancher instance. diff --git a/content/rancher/v2.x/en/quick-start-guide/deployment/_index.md b/content/rancher/v2.x/en/quick-start-guide/deployment/_index.md index f11ab6241fb..f7d4da476aa 100644 --- a/content/rancher/v2.x/en/quick-start-guide/deployment/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/deployment/_index.md @@ -7,6 +7,8 @@ Use one of the following guides to deploy and provision Rancher and a Kubernetes - [DigitalOcean](./digital-ocean-qs) (uses Terraform) - [AWS](./amazon-aws-qs) (uses Terraform) +- [Azure](./microsoft-azure-qs) (uses Terraform) +- [GCP](./google-gcp-qs) (uses Terraform) - [Vagrant](./quickstart-vagrant) If you prefer, the following guide will take you through the same process in individual steps. Use this if you want to run Rancher in a different provider, on prem, or if you would just like to see how easy it is. 
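For the Terraform-based guides above, the overall flow is the same regardless of provider. A condensed sketch, shown here for the AWS folder (substitute `do`, `azure`, or `gcp` and the provider-specific variables as appropriate):

```
git clone https://github.com/rancher/quickstart
cd quickstart/aws                              # or do / azure / gcp
mv terraform.tfvars.example terraform.tfvars   # then edit the required variables
terraform init
# install the RKE Terraform provider before applying (see each guide)
terraform apply --auto-approve
# ...and when you are done: terraform destroy --auto-approve
```
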
diff --git a/content/rancher/v2.x/en/quick-start-guide/deployment/amazon-aws-qs/_index.md b/content/rancher/v2.x/en/quick-start-guide/deployment/amazon-aws-qs/_index.md index 65fee61875a..3e9ddae02c2 100644 --- a/content/rancher/v2.x/en/quick-start-guide/deployment/amazon-aws-qs/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/deployment/amazon-aws-qs/_index.md @@ -3,7 +3,7 @@ title: Rancher AWS Quick Start Guide description: Read this step by step Rancher AWS guide to quickly deploy a Rancher Server with a single node cluster attached. weight: 100 --- -The following steps will quickly deploy a Rancher Server with a single node cluster attached. +The following steps will quickly deploy a Rancher Server on AWS with a single node cluster attached. ## Prerequisites @@ -20,38 +20,48 @@ The following steps will quickly deploy a Rancher Server with a single node clus 1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`. -2. Go into the AWS folder containing the terraform file by executing `cd quickstart/aws`. +1. Go into the AWS folder containing the terraform files by executing `cd quickstart/aws`. -3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`. +1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`. -4. Edit `terraform.tfvars` and customize the following variables at minimum. To change node counts and sizes, see `node sizes`. +1. Edit `terraform.tfvars` and customize the following variables: + - `aws_access_key` - Amazon AWS Access Key + - `aws_secret_key` - Amazon AWS Secret Key + - `rancher_server_admin_password` - Admin password for created Rancher server - - `aws_access_key` - Amazon AWS Access Key - - `aws_secret_key` - Amazon AWS Secret Key - - `ssh_key_name` - Amazon AWS Key Pair Name - - `prefix` - Resource Prefix - -5. **Optional:** Modify the count of the various node types within `terraform.tfvars`. See the [Quickstart Readme](https://github.com/rancher/quickstart) for more information on the variables. +1. **Optional:** Modify optional variables within `terraform.tfvars`. +See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [AWS Quickstart Readme](https://github.com/rancher/quickstart/tree/master/aws) for more information. +Suggestions include: + - `aws_region` - Amazon AWS region, choose the closest instead of the default + - `prefix` - Prefix for all created resources + - `instance_type` - EC2 instance size used, minimum is `t3a.medium` but `t3a.large` or `t3a.xlarge` could be used if within budget + - `ssh_key_file_name` - Use a specific SSH key instead of `~/.ssh/id_rsa` (public key is assumed to be `${ssh_key_file_name}.pub`) -6. Run `terraform init`. +1. Run `terraform init`. -7. To initiate the creation of the environment, run `terraform apply`. Then wait for the following output: +1. Install the [RKE terraform provider](https://github.com/rancher/terraform-provider-rke), see [installation instructions](https://github.com/rancher/terraform-provider-rke#using-the-provider). - ``` - Apply complete! Resources: 3 added, 0 changed, 0 destroyed. - Outputs: - rancher-url = [ - https://xxx.xxx.xxx.xxx - ] - ``` +1. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following: -8. Paste the `rancher-url` from the output above into the browser. Log in when prompted (default username is `admin`, and default password is `admin`). + ``` + Apply complete! 
Resources: 16 added, 0 changed, 0 destroyed. -**Result:** Rancher Server and your Kubernetes cluster is installed in Amazon AWS. + Outputs: + + rancher_node_ip = xx.xx.xx.xx + rancher_server_url = https://ec2-xx-xx-xx-xx.compute-1.amazonaws.com + workload_node_ip = yy.yy.yy.yy + ``` + +1. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`). + +#### Result + +Two Kubernetes clusters are deployed into your AWS account, one running Rancher Server and the other ready for experimentation deployments. ### What's Next? -Use Rancher to create a deployment. For more information, see [Creating Deployments]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/workload). +Use Rancher to create a deployment. For more information, see [Creating Deployments]({{}}/rancher/v2.x/en/quick-start-guide/workload). ## Destroying the Environment diff --git a/content/rancher/v2.x/en/quick-start-guide/deployment/digital-ocean-qs/_index.md b/content/rancher/v2.x/en/quick-start-guide/deployment/digital-ocean-qs/_index.md index 800757e7674..95b4820090d 100644 --- a/content/rancher/v2.x/en/quick-start-guide/deployment/digital-ocean-qs/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/deployment/digital-ocean-qs/_index.md @@ -1,8 +1,9 @@ --- -title: DigitalOcean Quick Start +title: Rancher DigitalOcean Quick Start Guide +description: Read this step by step Rancher DigitalOcean guide to quickly deploy a Rancher Server with a single node cluster attached. weight: 100 --- -The following steps will quickly deploy a Rancher Server with a single node cluster attached. +The following steps will quickly deploy a Rancher Server on DigitalOcean with a single node cluster attached. ## Prerequisites @@ -18,39 +19,50 @@ The following steps will quickly deploy a Rancher Server with a single node clus 1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`. -2. Go into the DigitalOcean folder containing the terraform file by executing `cd quickstart/do`. +1. Go into the DigitalOcean folder containing the terraform files by executing `cd quickstart/do`. -3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`. +1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`. -4. Edit `terraform.tfvars` to include your DigitalOcean Access Key. +1. Edit `terraform.tfvars` and customize the following variables: + - `do_token` - DigitalOcean access key + - `rancher_server_admin_password` - Admin password for created Rancher server -5. **Optional:** Edit `terraform.tfvars` to: +1. **Optional:** Modify optional variables within `terraform.tfvars`. +See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [DO Quickstart Readme](https://github.com/rancher/quickstart/tree/master/do) for more information. +Suggestions include: + - `do_region` - DigitalOcean region, choose the closest instead of the default + - `prefix` - Prefix for all created resources + - `droplet_size` - Droplet size used, minimum is `s-2vcpu-4gb` but `s-4vcpu-8g` could be used if within budget + - `ssh_key_file_name` - Use a specific SSH key instead of `~/.ssh/id_rsa` (public key is assumed to be `${ssh_key_file_name}.pub`) - - Change the number of nodes. (`count_agent_all_nodes`) - - Change the password of the `admin` user for logging into Rancher. (`admin_password`) +1. Run `terraform init`. -6. Run `terraform init`. +1. 
Install the [RKE terraform provider](https://github.com/rancher/terraform-provider-rke), see [installation instructions](https://github.com/rancher/terraform-provider-rke#using-the-provider). -7. To initiate the creation of the environment, run `terraform apply`. Then wait for the following output: +1. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following: - ``` - Apply complete! Resources: 2 added, 0 changed, 0 destroyed. - Outputs: - rancher-url = [ - https://xxx.xxx.xxx.xxx - ] - ``` + ``` + Apply complete! Resources: 15 added, 0 changed, 0 destroyed. -8. Paste the `rancher-url` from the output above into the browser. Log in when prompted (default password is `admin`). + Outputs: -**Result:** Rancher Server and your Kubernetes cluster is installed on DigitalOcean. + rancher_node_ip = xx.xx.xx.xx + rancher_server_url = https://rancher.xx.xx.xx.xx.xip.io + workload_node_ip = yy.yy.yy.yy + ``` + +1. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`). + +#### Result + +Two Kubernetes clusters are deployed into your DigitalOcean account, one running Rancher Server and the other ready for experimentation deployments. ### What's Next? -Use Rancher to create a deployment. For more information, see [Creating Deployments]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/workload). +Use Rancher to create a deployment. For more information, see [Creating Deployments]({{}}/rancher/v2.x/en/quick-start-guide/workload). ## Destroying the Environment -1. From the `quickstart/do` folder, execute `terraform destroy --force`. +1. From the `quickstart/aws` folder, execute `terraform destroy --auto-approve`. 2. Wait for confirmation that all resources have been destroyed. diff --git a/content/rancher/v2.x/en/quick-start-guide/deployment/google-gcp-qs/_index.md b/content/rancher/v2.x/en/quick-start-guide/deployment/google-gcp-qs/_index.md new file mode 100644 index 00000000000..76c83050fae --- /dev/null +++ b/content/rancher/v2.x/en/quick-start-guide/deployment/google-gcp-qs/_index.md @@ -0,0 +1,69 @@ +--- +title: Rancher GCP Quick Start Guide +description: Read this step by step Rancher GCP guide to quickly deploy a Rancher Server with a single node cluster attached. +weight: 100 +--- +The following steps will quickly deploy a Rancher server on GCP in a single-node RKE Kubernetes cluster, with a single-node downstream Kubernetes cluster attached. + +## Prerequisites + +>**Note** +>Deploying to Google GCP will incur charges. + +- [Google GCP Account](https://console.cloud.google.com/): A Google GCP Account is required to create resources for deploying Rancher and Kubernetes. +- [Google GCP Project](https://cloud.google.com/appengine/docs/standard/nodejs/building-app/creating-project): Use this link to follow a tutorial to create a GCP Project if you don't have one yet. +- [Google GCP Service Account](https://cloud.google.com/iam/docs/creating-managing-service-account-keys): Use this link and follow instructions to create a GCP service account and token file. +- [Terraform](https://www.terraform.io/downloads.html): Used to provision the server and cluster in Google GCP. + + +## Getting Started + +1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`. + +1. 
Go into the GCP folder containing the terraform files by executing `cd quickstart/gcp`. + +1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`. + +1. Edit `terraform.tfvars` and customize the following variables: + - `gcp_account_json` - GCP service account file path and file name + - `rancher_server_admin_password` - Admin password for created Rancher server + +1. **Optional:** Modify optional variables within `terraform.tfvars`. +See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [GCP Quickstart Readme](https://github.com/rancher/quickstart/tree/master/gcp) for more information. +Suggestions include: + - `gcp_region` - Google GCP region, choose the closest instead of the default + - `prefix` - Prefix for all created resources + - `machine_type` - Compute instance size used, minimum is `n1-standard-1` but `n1-standard-2` or `n1-standard-4` could be used if within budget + - `ssh_key_file_name` - Use a specific SSH key instead of `~/.ssh/id_rsa` (public key is assumed to be `${ssh_key_file_name}.pub`) + +1. Run `terraform init`. + +1. Install the [RKE terraform provider](https://github.com/rancher/terraform-provider-rke), see [installation instructions](https://github.com/rancher/terraform-provider-rke#using-the-provider). + +1. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following: + + ``` + Apply complete! Resources: 16 added, 0 changed, 0 destroyed. + + Outputs: + + rancher_node_ip = xx.xx.xx.xx + rancher_server_url = https://xx-xx-xx-xx.nip.io + workload_node_ip = yy.yy.yy.yy + ``` + +1. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`). + +#### Result + +Two Kubernetes clusters are deployed into your GCP account, one running Rancher Server and the other ready for experimentation deployments. + +### What's Next? + +Use Rancher to create a deployment. For more information, see [Creating Deployments]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/workload). + +## Destroying the Environment + +1. From the `quickstart/gcp` folder, execute `terraform destroy --auto-approve`. + +2. Wait for confirmation that all resources have been destroyed. diff --git a/content/rancher/v2.x/en/quick-start-guide/deployment/microsoft-azure-qs/_index.md b/content/rancher/v2.x/en/quick-start-guide/deployment/microsoft-azure-qs/_index.md new file mode 100644 index 00000000000..dffe9a9531b --- /dev/null +++ b/content/rancher/v2.x/en/quick-start-guide/deployment/microsoft-azure-qs/_index.md @@ -0,0 +1,74 @@ +--- +title: Rancher Azure Quick Start Guide +description: Read this step by step Rancher Azure guide to quickly deploy a Rancher Server with a single node cluster attached. +weight: 100 +--- + +The following steps will quickly deploy a Rancher server on Azure in a single-node RKE Kubernetes cluster, with a single-node downstream Kubernetes cluster attached. + +## Prerequisites + +>**Note** +>Deploying to Microsoft Azure will incur charges. + +- [Microsoft Azure Account](https://azure.microsoft.com/en-us/free/): A Microsoft Azure Account is required to create resources for deploying Rancher and Kubernetes. 
+- [Microsoft Azure Subscription](https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/create-subscription#create-a-subscription-in-the-azure-portal): Use this link to follow a tutorial to create a Microsoft Azure subscription if you don't have one yet. +- [Micsoroft Azure Tenant](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-create-new-tenant): Use this link and follow instructions to create a Microsoft Azure tenant. +- [Microsoft Azure Client ID/Secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal): Use this link and follow instructions to create a Microsoft Azure client and secret. +- [Terraform](https://www.terraform.io/downloads.html): Used to provision the server and cluster in Microsoft Azure. + + +## Getting Started + +1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`. + +1. Go into the Azure folder containing the terraform files by executing `cd quickstart/azure`. + +1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`. + +1. Edit `terraform.tfvars` and customize the following variables: + - `azure_subscription_id` - Microsoft Azure Subscription ID + - `azure_client_id` - Microsoft Azure Client ID + - `azure_client_secret` - Microsoft Azure Client Secret + - `azure_tenant_id` - Microsoft Azure Tenant ID + - `rancher_server_admin_password` - Admin password for created Rancher server + +2. **Optional:** Modify optional variables within `terraform.tfvars`. +See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [Azure Quickstart Readme](https://github.com/rancher/quickstart/tree/master/azure) for more information. +Suggestions include: + - `azure_location` - Microsoft Azure region, choose the closest instead of the default + - `prefix` - Prefix for all created resources + - `instance_type` - Compute instance size used, minimum is `Standard_DS2_v2` but `Standard_DS2_v3` or `Standard_DS3_v2` could be used if within budget + - `ssh_key_file_name` - Use a specific SSH key instead of `~/.ssh/id_rsa` (public key is assumed to be `${ssh_key_file_name}.pub`) + +1. Run `terraform init`. + +1. Install the [RKE terraform provider](https://github.com/rancher/terraform-provider-rke), see [installation instructions](https://github.com/rancher/terraform-provider-rke#using-the-provider). + +1. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following: + + ``` + Apply complete! Resources: 16 added, 0 changed, 0 destroyed. + + Outputs: + + rancher_node_ip = xx.xx.xx.xx + rancher_server_url = https://xx-xx-xx-xx.nip.io + workload_node_ip = yy.yy.yy.yy + ``` + +1. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`). + +#### Result + +Two Kubernetes clusters are deployed into your Azure account, one running Rancher Server and the other ready for experimentation deployments. + +### What's Next? + +Use Rancher to create a deployment. For more information, see [Creating Deployments]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/workload). + +## Destroying the Environment + +1. From the `quickstart/azure` folder, execute `terraform destroy --auto-approve`. + +2. Wait for confirmation that all resources have been destroyed. 
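For reference, a filled-in `terraform.tfvars` for this Azure quickstart might look roughly like the following. All values are placeholders; the optional entries are suggestions to check against `terraform.tfvars.example`:

```
# terraform.tfvars - illustrative values only
azure_subscription_id         = "00000000-0000-0000-0000-000000000000"
azure_client_id               = "00000000-0000-0000-0000-000000000000"
azure_client_secret           = "<client-secret>"
azure_tenant_id               = "00000000-0000-0000-0000-000000000000"
rancher_server_admin_password = "<a-strong-password>"

# Optional overrides
azure_location    = "eastus"
prefix            = "quickstart"
instance_type     = "Standard_DS2_v2"
ssh_key_file_name = "~/.ssh/id_rsa"
```
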
diff --git a/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md b/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md index fc89564232a..b4c2457eeaa 100644 --- a/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md @@ -3,7 +3,7 @@ title: Manual Quick Start weight: 300 --- Howdy Partner! This tutorial walks you through: - + - Installation of {{< product >}} 2.x - Creation of your first cluster - Deployment of an application, Nginx @@ -38,7 +38,7 @@ This Quick Start Guide is divided into different tasks for easier consumption. > > For a full list of port requirements, refer to [Docker Installation]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/). - Provision the host according to our [Requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/). + Provision the host according to our [Requirements]({{}}/rancher/v2.x/en/installation/requirements/). ### 2. Install Rancher @@ -49,7 +49,7 @@ To install Rancher on your host, connect to it and then use a shell to install. 2. From your shell, enter the following command: ``` - $ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher +sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher ``` **Result:** Rancher is installed. @@ -105,4 +105,4 @@ Congratulations! You have created your first cluster. #### What's Next? -Use Rancher to create a deployment. For more information, see [Creating Deployments]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/workload). +Use Rancher to create a deployment. For more information, see [Creating Deployments]({{}}/rancher/v2.x/en/quick-start-guide/workload). diff --git a/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-vagrant/_index.md b/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-vagrant/_index.md index c9bb875285a..bf8db298c3c 100644 --- a/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-vagrant/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-vagrant/_index.md @@ -29,7 +29,7 @@ The following steps quickly deploy a Rancher Server with a single node cluster a ### What's Next? -Use Rancher to create a deployment. For more information, see [Creating Deployments]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/workload). +Use Rancher to create a deployment. For more information, see [Creating Deployments]({{}}/rancher/v2.x/en/quick-start-guide/workload). ## Destroying the Environment diff --git a/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-ingress/_index.md b/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-ingress/_index.md index ebf52672472..df4b32406cc 100644 --- a/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-ingress/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-ingress/_index.md @@ -77,6 +77,6 @@ Congratulations! You have successfully deployed a workload exposed via an ingres When you're done using your sandbox, destroy the Rancher Server and your cluster. 
See one of the following: -- [Amazon AWS: Destroying the Environment]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/deployment/amazon-aws-qs/#destroying-the-environment) -- [DigitalOcean: Destroying the Environment]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/deployment/digital-ocean-qs/#destroying-the-environment) -- [Vagrant: Destroying the Environment]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/deployment/quickstart-vagrant/#destroying-the-environment) +- [Amazon AWS: Destroying the Environment]({{}}/rancher/v2.x/en/quick-start-guide/deployment/amazon-aws-qs/#destroying-the-environment) +- [DigitalOcean: Destroying the Environment]({{}}/rancher/v2.x/en/quick-start-guide/deployment/digital-ocean-qs/#destroying-the-environment) +- [Vagrant: Destroying the Environment]({{}}/rancher/v2.x/en/quick-start-guide/deployment/quickstart-vagrant/#destroying-the-environment) diff --git a/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-nodeport/_index.md b/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-nodeport/_index.md index ace03022684..71d79215dd9 100644 --- a/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-nodeport/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-nodeport/_index.md @@ -33,15 +33,15 @@ For this workload, you'll be deploying the application Rancher Hello-World. 9. From the **As a** drop-down, make sure that **NodePort (On every node)** is selected. - ![As a dropdown, NodePort (On every node selected)]({{< baseurl >}}/img/rancher/nodeport-dropdown.png) + ![As a dropdown, NodePort (On every node selected)]({{}}/img/rancher/nodeport-dropdown.png) 10. From the **On Listening Port** field, leave the **Random** value in place. - ![On Listening Port, Random selected]({{< baseurl >}}/img/rancher/listening-port-field.png) + ![On Listening Port, Random selected]({{}}/img/rancher/listening-port-field.png) 11. From the **Publish the container port** field, enter port `80`. - ![Publish the container port, 80 entered]({{< baseurl >}}/img/rancher/container-port-field.png) + ![Publish the container port, 80 entered]({{}}/img/rancher/container-port-field.png) 12. Leave the remaining options on their default setting. We'll tell you about them later. @@ -151,6 +151,6 @@ Congratulations! You have successfully deployed a workload exposed via a NodePor When you're done using your sandbox, destroy the Rancher Server and your cluster. 
See one of the following:
-- [Amazon AWS: Destroying the Environment]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/deployment/amazon-aws-qs/#destroying-the-environment)
-- [DigitalOcean: Destroying the Environment]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/deployment/digital-ocean-qs/#destroying-the-environment)
-- [Vagrant: Destroying the Environment]({{< baseurl >}}/rancher/v2.x/en/quick-start-guide/deployment/quickstart-vagrant/#destroying-the-environment)
+- [Amazon AWS: Destroying the Environment]({{}}/rancher/v2.x/en/quick-start-guide/deployment/amazon-aws-qs/#destroying-the-environment)
+- [DigitalOcean: Destroying the Environment]({{}}/rancher/v2.x/en/quick-start-guide/deployment/digital-ocean-qs/#destroying-the-environment)
+- [Vagrant: Destroying the Environment]({{}}/rancher/v2.x/en/quick-start-guide/deployment/quickstart-vagrant/#destroying-the-environment)
diff --git a/content/rancher/v2.x/en/security/_index.md b/content/rancher/v2.x/en/security/_index.md
index 67c99950877..384657223f3 100644
--- a/content/rancher/v2.x/en/security/_index.md
+++ b/content/rancher/v2.x/en/security/_index.md
@@ -33,13 +33,19 @@ On this page, we provide security-related documentation along with resources to

 ### Running a CIS Security Scan on a Kubernetes Cluster

-_Available as of v2.4.0-alpha1_
+_Available as of v2.4.0_

 Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS (Center for Internet Security) Kubernetes Benchmark.

-The CIS Kubernetes Benchmark is a reference document that can be used to establish a secure configuration baseline for Kubernetes. The Benchmark provides recommendations of two types: Scored and Not Scored. We run tests related to only Scored recommendations.
+The CIS Kubernetes Benchmark is a reference document that can be used to establish a secure configuration baseline for Kubernetes.

-When Rancher runs a CIS Security Scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests.
+The Center for Internet Security (CIS) is a 501(c)(3) nonprofit organization, formed in October 2000, whose mission is to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace."
+
+CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team.
+
+The Benchmark provides recommendations of two types: Scored and Not Scored. We run only the tests related to Scored recommendations.
+
+When Rancher runs a CIS security scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests.

 For details, refer to the section on [security scans.]({{}}/rancher/v2.x/en/security/security-scan)

@@ -65,7 +71,7 @@ Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes V

 The benchmark self-assessment is a companion to the Rancher security hardening guide.
While the hardening guide shows you how to harden the cluster, the benchmark guide is meant to help you evaluate the level of security of the hardened cluster. -Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher created clusters. The original benchmark documents can be downloaded from the [CIS website](https://www.cisecurity.org/benchmark/kubernetes/). +Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher created clusters. The original benchmark documents can be downloaded from the [CIS website](https://www.cisecurity.org/benchmark/kubernetes/). Each version of Rancher's self assessment guide corresponds to specific versions of the hardening guide, Rancher, Kubernetes, and the CIS Benchmark: @@ -92,7 +98,7 @@ Rancher is committed to informing the community of security issues in our produc | ID | Description | Date | Resolution | |----|-------------|------|------------| -| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater have specific [instructions]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/). | +| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater have specific [instructions]({{}}/rancher/v2.x/en/upgrades/rollbacks/). | | [CVE-2019-6287](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6287) | Project members continue to get access to namespaces from projects that they were removed from if they were added to more than one project. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) | | [CVE-2019-11202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11202) | The default admin, that is shipped with Rancher, will be re-created upon restart of Rancher despite being explicitly deleted. 
| 16 Apr 2019 | [Rancher v2.2.2](https://github.com/rancher/rancher/releases/tag/v2.2.2), [Rancher v2.1.9](https://github.com/rancher/rancher/releases/tag/v2.1.9) and [Rancher v2.0.14](https://github.com/rancher/rancher/releases/tag/v2.0.14) | | [CVE-2019-12274](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12274) | Nodes using the built-in node drivers using a file path option allows the machine to read arbitrary files including sensitive ones from inside the Rancher server container. | 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) | diff --git a/content/rancher/v2.x/en/security/benchmark-2.3.3/_index.md b/content/rancher/v2.x/en/security/benchmark-2.3.3/_index.md index 44086210d49..488d48686eb 100644 --- a/content/rancher/v2.x/en/security/benchmark-2.3.3/_index.md +++ b/content/rancher/v2.x/en/security/benchmark-2.3.3/_index.md @@ -11,7 +11,7 @@ Self Assessment Guide Version | Rancher Version | Hardening Guide Version | Kube ---------------------------|----------|---------|-------|----- Self Assessment Guide v2.3.3 | Rancher v2.3.3 | Hardening Guide v2.3.3 | Kubernetes v1.16 | Benchmark v1.4.1 -[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.3.3/Rancher_Benchmark_Assessment.pdf) +[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.3.x/Rancher_Benchmark_Assessment.pdf) > The CIS Benchmark version v1.4.1 covers the security posture of Kubernetes 1.13 clusters. This self-assessment has been run against Kubernetes 1.16, using the guidelines outlined in the CIS v1.4.1 benchmark. Updates to the CIS benchmarks will be applied to this document as they are released. diff --git a/content/rancher/v2.x/en/security/benchmark-2.3/_index.md b/content/rancher/v2.x/en/security/benchmark-2.3/_index.md index 74ff4c693a1..f383707019f 100644 --- a/content/rancher/v2.x/en/security/benchmark-2.3/_index.md +++ b/content/rancher/v2.x/en/security/benchmark-2.3/_index.md @@ -29,13 +29,6 @@ Scoring the commands is different in Rancher Labs than in the CIS Benchmark. Whe When performing the tests, you will need access to the Docker command line on the hosts of all three RKE roles. The commands also make use of the the `jq` command to provide human-readable formatting. -#### Known Scored Control Failures - -The following scored controls do not currently pass, and Rancher Labs is working towards addressing these through future enhancements to the product. - -- 1.1.21 - Ensure that the `--kubelet-certificate-authority` argument is set as appropriate (Scored) -- 2.1.8 - Ensure that the `--hostname-override` argument is not set (Scored) - ### Controls --- @@ -148,7 +141,7 @@ docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--profiling=false"). **Note:** This deprecated flag was removed in 1.14, so it cannot be set. 
-**Result:** Pass +**Result:** Not Applicable #### 1.1.10 - Ensure that the admission control plugin `AlwaysAdmit` is not set (Scored) @@ -326,7 +319,7 @@ docker inspect kube-apiserver | jq -e '.[0].Args[] | match("--kubelet-certificat **Returned Value:** none -**Result:** Fail (See Mitigation) +**Result:** Pass #### 1.1.22 - Ensure that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments are set as appropriate (Scored) @@ -756,17 +749,9 @@ docker inspect kube-controller-manager | jq -e '.[0].Args[] | match("--root-ca-f **Notes** -RKE does not yet support certificate rotation. This feature is due for the 0.1.12 release of RKE. +RKE handles certificate rotation through an external process. -**Audit** - -``` bash -docker inspect kube-controller-manager | jq -e '.[0].Args[] | match("--feature-gates=.*(RotateKubeletServerCertificate=true).*").captures[].string' -``` - -**Returned Value:** `RotateKubeletServerCertificate=true` - -**Result:** Pass +**Result:** Not Applicable #### 1.3.7 - Ensure that the `--address` argument is set to 127.0.0.1 (Scored) @@ -788,7 +773,7 @@ docker inspect kube-controller-manager | jq -e '.[0].Args[] | match("--address=1 RKE doesn't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. -**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 1.4.2 - Ensure that the API server pod specification file ownership is set to `root:root` (Scored) @@ -796,7 +781,7 @@ RKE doesn't require or maintain a configuration file for kube-apiserver. All con RKE doesn't require or maintain a configuration file for kube-apiserver. All configuration is passed in as arguments at container run time. -**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 1.4.3 - Ensure that the controller manager pod specification file permissions are set to `644` or more restrictive (Scored) @@ -804,7 +789,7 @@ RKE doesn't require or maintain a configuration file for kube-apiserver. All con RKE doesn't require or maintain a configuration file for `kube-controller-manager`. All configuration is passed in as arguments at container run time. -**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 1.4.4 - Ensure that the controller manager pod specification file ownership is set to `root:root` (Scored) @@ -812,7 +797,7 @@ RKE doesn't require or maintain a configuration file for `kube-controller-manage RKE doesn't require or maintain a configuration file for `kube-controller-manager`. All configuration is passed in as arguments at container run time. -**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 1.4.5 - Ensure that the scheduler pod specification file permissions are set to `644` or more restrictive (Scored) @@ -820,7 +805,7 @@ RKE doesn't require or maintain a configuration file for `kube-controller-manage RKE doesn't require or maintain a configuration file for `kube-scheduler`. All configuration is passed in as arguments at container run time. -**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 1.4.6 - Ensure that the scheduler pod specification file ownership is set to `root:root` (Scored) @@ -828,7 +813,7 @@ RKE doesn't require or maintain a configuration file for `kube-scheduler`. All c RKE doesn't require or maintain a configuration file for kube-scheduler. All configuration is passed in as arguments at container run time. 
-**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 1.4.7 - Ensure that the `etcd` pod specification file permissions are set to `644` or more restrictive (Scored) @@ -836,7 +821,7 @@ RKE doesn't require or maintain a configuration file for kube-scheduler. All con RKE doesn't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. -**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 1.4.8 - Ensure that the `etcd` pod specification file ownership is set to `root:root` (Scored) @@ -844,7 +829,7 @@ RKE doesn't require or maintain a configuration file for etcd. All configuration RKE doesn't require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. -**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 1.4.9 - Ensure that the Container Network Interface file permissions are set to `644` or more restrictive (Not Scored) @@ -965,7 +950,7 @@ stat -c %U:%G /var/lib/rancher/etcd RKE does not store the kubernetes default kubeconfig credentials file on the nodes. It's presented to user where RKE is run. We recommend that this kube_config_cluster.yml file be kept in secure store. -**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 1.4.14 - Ensure that ownership of `admin.conf` is set to `root:root` (Scored) @@ -973,7 +958,7 @@ RKE does not store the kubernetes default kubeconfig credentials file on the nod RKE does not store the default `kubectl` config credentials file on the nodes. It presents credentials to the user when `rke` is first run, and only on the device where the user ran the command. Rancher Labs recommends that this `kube_config_cluster.yml` file be kept in secure store. -**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 1.4.15 - Ensure that the file permissions for `scheduler.conf` are set to `644` or more restrictive (Scored) @@ -1509,15 +1494,7 @@ docker inspect kubelet | jq -e '.[0].Args[] | match("--make-iptables-util-chains **Notes** This is used by most cloud providers. Not setting this is not practical in most cases. -**Audit** - -``` bash -docker inspect kubelet | jq -e '.[0].Args[] | match("--hostname-override=.*").string' -``` - -**Returned Value:** `--hostname-override=` - -**Result:** Fail +**Result:** Not Applicable #### 2.1.9 - Ensure that the `--event-qps` argument is set to `0` (Scored) @@ -1581,19 +1558,15 @@ docker inspect kubelet | jq -e '.[0].Args[] | match("--rotate-certificates=true" **Returned Value:** `null` -**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 2.1.13 - Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored) -**Audit** +**Notes** -``` bash -docker inspect kubelet | jq -e '.[0].Args[] | match("--feature-gates=.*(RotateKubeletServerCertificate=true).*").captures[].string' -``` +RKE handles certificate rotation through an external process. -**Returned Value:** `RotateKubeletServerCertificate=true` - -**Result:** Pass +**Result:** Not Applicable #### 2.1.14 - Ensure that the kubelet only makes use of strong cryptographic ciphers (Not Scored) @@ -1719,7 +1692,7 @@ stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-node.yaml RKE doesn't require or maintain a configuration file for kubelet. All configuration is passed in as arguments at container run time. 
-**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 2.2.4 - Ensure that the kubelet service file ownership is set to `root:root` (Scored) @@ -1728,7 +1701,7 @@ RKE doesn't require or maintain a configuration file for kubelet. All configurat RKE doesn't require or maintain a configuration file for kubelet. All configuration is passed in as arguments at container run time. -**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 2.2.5 - Ensure that the proxy kubeconfig file permissions are set to `644` or more restrictive (Scored) @@ -1784,7 +1757,7 @@ stat -c %U:%G /etc/kubernetes/ssl/kube-ca.pem RKE doesn't require or maintain a configuration file for kubelet. All configuration is passed in as arguments at container run time. -**Result:** Pass (Not Applicable) +**Result:** Not Applicable #### 2.2.10 - Ensure that the kubelet configuration file permissions are set to `644` or more restrictive (Scored) @@ -1792,4 +1765,4 @@ RKE doesn't require or maintain a configuration file for kubelet. All configurat RKE doesn't require or maintain a configuration file for kubelet. All configuration is passed in as arguments at container run time. -**Result:** Pass (Not Applicable) +**Result:** Not Applicable diff --git a/content/rancher/v2.x/en/security/hardening-2.1/_index.md b/content/rancher/v2.x/en/security/hardening-2.1/_index.md index 890f17f35a8..0248d9f3f9d 100644 --- a/content/rancher/v2.x/en/security/hardening-2.1/_index.md +++ b/content/rancher/v2.x/en/security/hardening-2.1/_index.md @@ -15,7 +15,7 @@ Hardening Guide v2.1 | Rancher v2.1.x | Benchmark v1.3.0 | Kubernetes 1.11 [Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.1.x/Rancher_Hardening_Guide.pdf) -For more detail on how a hardened cluster scores against the official CIS benchmark, refer to the [CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.1.x]({{< baseurl >}}/rancher/v2.x/en/security/benchmark-2.1/). +For more detail on how a hardened cluster scores against the official CIS benchmark, refer to the [CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.1.x]({{}}/rancher/v2.x/en/security/benchmark-2.1/). ### Profile Definitions diff --git a/content/rancher/v2.x/en/security/hardening-2.2/_index.md b/content/rancher/v2.x/en/security/hardening-2.2/_index.md index 64db81ee176..de19613499f 100644 --- a/content/rancher/v2.x/en/security/hardening-2.2/_index.md +++ b/content/rancher/v2.x/en/security/hardening-2.2/_index.md @@ -15,7 +15,7 @@ Hardening Guide v2.2 | Rancher v2.2.x | Benchmark v1.4.1, 1.4.0 | Kubernetes 1.1 [Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.2.x/Rancher_Hardening_Guide.pdf) -For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.2.x]({{< baseurl >}}/rancher/v2.x/en/security/benchmark-2.2/). +For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.2.x]({{}}/rancher/v2.x/en/security/benchmark-2.2/). 
### Profile Definitions diff --git a/content/rancher/v2.x/en/security/hardening-2.3.3/_index.md b/content/rancher/v2.x/en/security/hardening-2.3.3/_index.md index 00eb91129ef..d25489d2e06 100644 --- a/content/rancher/v2.x/en/security/hardening-2.3.3/_index.md +++ b/content/rancher/v2.x/en/security/hardening-2.3.3/_index.md @@ -15,7 +15,7 @@ Hardening Guide v2.3.3 | Rancher v2.3.3 | Benchmark v1.4.1 | Kubernetes 1.14, 1. [Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.3.3/Rancher_Hardening_Guide.pdf) -For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS Benchmark Rancher Self-Assessment Guide v2.3.3]({{< baseurl >}}/rancher/v2.x/en/security/benchmark-2.3.3/). +For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS Benchmark Rancher Self-Assessment Guide v2.3.3]({{}}/rancher/v2.x/en/security/benchmark-2.3.3/). ### Profile Definitions @@ -149,7 +149,7 @@ Verify that the permissions are `700` or more restrictive. **Remediation** -Follow the steps as documented in [1.4.12]({{< baseurl >}}/rancher/v2.x/en/security/hardening-2.3.3/#1-4-12-ensure-that-the-etcd-data-directory-ownership-is-set-to-etcd-etcd) remediation. +Follow the steps as documented in [1.4.12]({{}}/rancher/v2.x/en/security/hardening-2.3.3/#1-4-12-ensure-that-the-etcd-data-directory-ownership-is-set-to-etcd-etcd) remediation. ### 1.4.12 - Ensure that the etcd data directory ownership is set to `etcd:etcd` @@ -613,7 +613,7 @@ addons: | kind: Group name: system:authenticated --- - apiVersion: extensions/v1beta1 + apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: restricted-psp diff --git a/content/rancher/v2.x/en/security/hardening-2.3.5/_index.md b/content/rancher/v2.x/en/security/hardening-2.3.5/_index.md index 91cb760826f..75a48a7ba50 100644 --- a/content/rancher/v2.x/en/security/hardening-2.3.5/_index.md +++ b/content/rancher/v2.x/en/security/hardening-2.3.5/_index.md @@ -22,6 +22,10 @@ This document provides prescriptive guidance for hardening a production installa For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.3.5]({{< baseurl >}}/rancher/v2.x/en/security/benchmark-2.3.5/). +#### Known Issues + +Rancher **exec shell** and **view logs** for pods are **not** functional in a cis 1.5 hardened setup when only public ip is provided when registering custom nodes. + ### Configure Kernel Runtime Parameters The following `sysctl` configuration is recommended for all nodes type in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`: @@ -43,7 +47,7 @@ A user account and group for the **etcd** service is required to be setup prior To create the **etcd** group run the following console commands. 
``` -addgroup --gid 52034 etcd +groupadd --gid 52034 etcd useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd ``` @@ -118,6 +122,10 @@ metadata: name: default-allow-all spec: podSelector: {} + ingress: + - {} + egress: + - {} policyTypes: - Ingress - Egress @@ -179,7 +187,6 @@ services: infra_container_image: "" cluster_dns_server: "" fail_swap_on: false - generate_serving_certificate: true kubeproxy: image: "" extra_args: {} @@ -511,7 +518,7 @@ rancher_kubernetes_engine_config: kind: Group name: system:authenticated --- - apiVersion: extensions/v1beta1 + apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: restricted diff --git a/content/rancher/v2.x/en/security/hardening-2.3/_index.md b/content/rancher/v2.x/en/security/hardening-2.3/_index.md index 3918cbefa70..f237643c192 100644 --- a/content/rancher/v2.x/en/security/hardening-2.3/_index.md +++ b/content/rancher/v2.x/en/security/hardening-2.3/_index.md @@ -14,7 +14,7 @@ Hardening Guide v2.3 | Rancher v2.3.0-v2.3.2 | Benchmark v1.4.1 | Kubernetes 1.1 [Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.3.x/Rancher_Hardening_Guide.pdf) -For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.3.x]({{< baseurl >}}/rancher/v2.x/en/security/benchmark-2.3/). +For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.3.x]({{}}/rancher/v2.x/en/security/benchmark-2.3/). ### Profile Definitions @@ -411,7 +411,7 @@ Verify that the permissions are `700` or more restrictive. **Remediation** -Follow the steps as documented in [1.4.12]({{< baseurl >}}/rancher/v2.x/en/security/hardening-2.3/#1-4-12-ensure-that-the-etcd-data-directory-ownership-is-set-to-etcd-etcd) remediation. +Follow the steps as documented in [1.4.12]({{}}/rancher/v2.x/en/security/hardening-2.3/#1-4-12-ensure-that-the-etcd-data-directory-ownership-is-set-to-etcd-etcd) remediation. ### 1.4.12 - Ensure that the etcd data directory ownership is set to `etcd:etcd` @@ -1266,6 +1266,7 @@ services: anonymous-auth: "false" feature-gates: "RotateKubeletServerCertificate=true" tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256" + generate_serving_certificate: true kube-api: pod_security_policy: true extra_args: diff --git a/content/rancher/v2.x/en/security/security-scan/_index.md b/content/rancher/v2.x/en/security/security-scan/_index.md index f2ba6ebb3bc..7ff5cb3bd20 100644 --- a/content/rancher/v2.x/en/security/security-scan/_index.md +++ b/content/rancher/v2.x/en/security/security-scan/_index.md @@ -3,27 +3,126 @@ title: Security Scans weight: 1 --- -_Available as of v2.4.0-alpha1_ +_Available as of v2.4.0_ -Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS (Center for Internet Security) Kubernetes Benchmark. +Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. 
-The CIS Kubernetes Benchmark is a reference document that can be used to establish a secure configuration baseline for Kubernetes. The Benchmark provides recommendations of two types: Scored and Not Scored. We run tests related to only Scored recommendations.
+The Center for Internet Security (CIS) is a 501(c)(3) nonprofit organization, formed in October 2000, whose mission is to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". The organization is headquartered in East Greenbush, New York, with members including large corporations, government agencies, and academic institutions.

-When Rancher runs a CIS Security Scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests.
+CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team.
+
+The Benchmark provides recommendations of two types: Scored and Not Scored. We run only the tests related to Scored recommendations.
+
+- [About the CIS Benchmark](#about-the-cis-benchmark)
+- [About the generated report](#about-the-generated-report)
+- [Test profiles](#test-profiles)
+- [Skipped and not applicable tests](#skipped-and-not-applicable-tests)
+  - [CIS Benchmark v1.4 skipped tests](#cis-benchmark-v1-4-skipped-tests)
+  - [CIS Benchmark v1.4 not applicable tests](#cis-benchmark-v1-4-not-applicable-tests)
+- [Prerequisites](#prerequisites)
+- [Running a scan](#running-a-scan)
+- [Scheduling recurring scans](#scheduling-recurring-scans)
+- [Skipping tests](#skipping-tests)
+- [Setting alerts](#setting-alerts)
+- [Deleting a report](#deleting-a-report)
+- [Downloading a report](#downloading-a-report)
+
+# About the CIS Benchmark
+
+The Center for Internet Security is a 501(c)(3) nonprofit organization, formed in October 2000, whose mission is to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". The organization is headquartered in East Greenbush, New York, with members including large corporations, government agencies, and academic institutions.
+
+CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team.
+
+The official Benchmark documents are available through the CIS website. The sign-up form to access the documents is [here.](https://learn.cisecurity.org/benchmarks)

 To check clusters for CIS Kubernetes Benchmark compliance, the security scan leverages [kube-bench,](https://github.com/aquasecurity/kube-bench) an open-source tool from Aqua Security.

-### About the Generated Report
+# About the Generated Report

 Each scan generates a report can be viewed in the Rancher UI and can be downloaded in CSV format.

-To determine which version of the [Benchmark](https://www.cisecurity.org/benchmark/kubernetes/) to use in the scan, Rancher chooses a version that is appropriate for the cluster's Kubernetes version.
The Benchmark version is included in the generated report.
+As of Rancher v2.4, the scan will use the CIS Benchmark v1.4. The Benchmark version is included in the generated report.

-Each test in the report is identified by its corresponding Scored test in the Benchmark. For example, if a cluster fails test 1.3.6, you can look up the description and rationale for the section 1.3.6 in the Benchmark itself, or in Rancher's [hardening guide for the Kubernetes version that the cluster is using.]({{}}/rancher/v2.x/en/security/#rancher-hardening-guide) Recommendations marked as Not Scored in the Benchmark are not included in the report.
+The Benchmark provides recommendations of two types: Scored and Not Scored. Recommendations marked as Not Scored in the Benchmark are not included in the generated report.

-Similarly, for information on how to manually audit the test result, you could look up section 1.3.6 in Rancher's [self-assessment guide for the corresponding Kubernetes version.]({{}}/rancher/v2.x/en/security/#the-cis-benchmark-and-self-assessment)
+Some tests are designated as "Not Applicable." These tests will not be run on any CIS scan because of the way that Rancher provisions RKE clusters. For information on how test results can be audited, and why some tests are designated as not applicable, refer to Rancher's [self-assessment guide for the corresponding Kubernetes version.]({{}}/rancher/v2.x/en/security/#the-cis-benchmark-and-self-assessment)

-### Prerequisites
+The report contains the following information:
+
+| Column in Report | Description |
+|------------------|-------------|
+| ID | The ID number of the CIS Benchmark. |
+| Description | The description of the CIS Benchmark test. |
+| Remediation | What needs to be fixed in order to pass the test. |
+| State of Test | Indicates if the test passed, failed, was skipped, or was not applicable. |
+| Node type | The node role, which affects which tests are run on the node. Master tests are run on controlplane nodes, etcd tests are run on etcd nodes, and node tests are run on the worker nodes. |
+| Nodes | The name(s) of the node that the test was run on. |
+| Passed_Nodes | The name(s) of the nodes that the test passed on. |
+| Failed_Nodes | The name(s) of the nodes that the test failed on. |
+
+Refer to [the table in the cluster hardening guide]({{}}/rancher/v2.x/en/security/#rancher-hardening-guide) for information on which versions of Kubernetes, the Benchmark, Rancher, and our cluster hardening guide correspond to each other. Also refer to the hardening guide for configuration files of CIS-compliant clusters and information on remediating failed tests.
+
+# Test Profiles
+
+For every CIS benchmark version, Rancher ships with two types of profiles. These profiles are named based on the type of cluster (e.g. `RKE`), the CIS benchmark version (e.g. CIS 1.4) and the profile type (e.g. `Permissive` or `Hardened`). For example, a full profile name would be `RKE-CIS-1.4-Permissive`.
+
+All profiles will have a set of not applicable tests that will be skipped during the CIS scan. These tests are not applicable based on how an RKE cluster manages Kubernetes.
+
+There are two types of profiles:
+
+- **Permissive:** This profile has a set of tests that will be skipped, as these tests will fail on a default RKE Kubernetes cluster. In addition to the skipped tests, this profile also does not run the not applicable tests.
+- **Hardened:** This profile will not skip any tests, except for the not applicable tests.
+
+In order to pass the "Hardened" profile, you will need to follow the steps on the [hardening guide]({{}}/rancher/v2.x/en/security/#rancher-hardening-guide) and use the `cluster.yml` defined in the hardening guide to provision a hardened cluster.
+
+# Skipped and Not Applicable Tests
+
+### CIS Benchmark v1.4 Skipped Tests
+
+Number | Description | Reason for Skipping
+---|---|---
+1.1.11 | "Ensure that the admission control plugin AlwaysPullImages is set (Scored)" | Enabling AlwaysPullImages can use significant bandwidth.
+1.1.21 | "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Scored)" | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
+1.1.24 | "Ensure that the admission control plugin PodSecurityPolicy is set (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
+1.1.34 | "Ensure that the --encryption-provider-config argument is set as appropriate (Scored)" | Enabling encryption changes how data can be recovered as data is encrypted.
+1.1.35 | "Ensure that the encryption provider is set to aescbc (Scored)" | Enabling encryption changes how data can be recovered as data is encrypted.
+1.1.36 | "Ensure that the admission control plugin EventRateLimit is set (Scored)" | EventRateLimit needs to be tuned depending on the cluster.
+1.2.2 | "Ensure that the --address argument is set to 127.0.0.1 (Scored)" | Adding this argument prevents Rancher's monitoring tool from collecting metrics on the scheduler.
+1.3.7 | "Ensure that the --address argument is set to 127.0.0.1 (Scored)" | Adding this argument prevents Rancher's monitoring tool from collecting metrics on the controller manager.
+1.4.12 | "Ensure that the etcd data directory ownership is set to etcd:etcd (Scored)" | A system service account is required for etcd data directory ownership. Refer to Rancher's hardening guide for more details on how to configure this ownership.
+1.7.2 | "Do not admit containers wishing to share the host process ID namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
+1.7.3 | "Do not admit containers wishing to share the host IPC namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
+1.7.4 | "Do not admit containers wishing to share the host network namespace (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
+1.7.5 | "Do not admit containers with allowPrivilegeEscalation (Scored)" | Enabling Pod Security Policy can cause applications to unexpectedly fail.
+2.1.6 | "Ensure that the --protect-kernel-defaults argument is set to true (Scored)" | System level configurations are required prior to provisioning the cluster in order for this argument to be set to true.
+2.1.10 | "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored)" | When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
+ +### CIS Benchmark v1.4 Not Applicable Tests + +Number | Description | Reason for being not applicable +---|---|--- +1.1.9 | "Ensure that the --repair-malformed-updates argument is set to false (Scored)" | The argument --repair-malformed-updates has been removed as of Kubernetes version 1.14 +1.3.6 | "Ensure that the RotateKubeletServerCertificate argument is set to true" | Cluster provisioned by RKE handles certificate rotation directly through RKE. +1.4.1 | "Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +1.4.2 | "Ensure that the API server pod specification file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. +1.4.3 | "Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +1.4.4 | "Ensure that the controller manager pod specification file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. +1.4.5 | "Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +1.4.6 | "Ensure that the scheduler pod specification file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. +1.4.7 | "Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd. +1.4.8 | "Ensure that the etcd pod specification file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd. +1.4.13 | "Ensure that the admin.conf file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. +1.4.14 | "Ensure that the admin.conf file ownership is set to root:root (Scored)" | Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. +2.1.8 | "Ensure that the --hostname-override argument is not set (Scored)" | Clusters provisioned by RKE clusters and most cloud providers require hostnames. +2.1.12 | "Ensure that the --rotate-certificates argument is not set to false (Scored)" | Cluster provisioned by RKE handles certificate rotation directly through RKE. +2.1.13 | "Ensure that the RotateKubeletServerCertificate argument is set to true (Scored)" | Cluster provisioned by RKE handles certificate rotation directly through RKE. +2.2.3 | "Ensure that the kubelet service file permissions are set to 644 or more restrictive (Scored)" | Cluster provisioned by RKE doesn’t require or maintain a configuration file for the kubelet service. +2.2.4 | "Ensure that the kubelet service file ownership is set to root:root (Scored)" | Cluster provisioned by RKE doesn’t require or maintain a configuration file for the kubelet service. +2.2.9 | "Ensure that the kubelet configuration file ownership is set to root:root (Scored)" | RKE doesn’t require or maintain a configuration file for the kubelet. 
+2.2.10 | "Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Scored)" | RKE doesn’t require or maintain a configuration file for the kubelet. + + +# Prerequisites To run security scans on a cluster and access the generated reports, you must be an [Administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [Cluster Owner.]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/) @@ -31,36 +130,124 @@ Rancher can only run security scans on clusters that were created with RKE, whic The security scan cannot run in a cluster that has Windows nodes. -### Running a Scan +You will only be able to see the CIS scan reports for clusters that you have access to. + +# Running a Scan 1. From the cluster view in Rancher, click **Tools > CIS Scans.** 1. Click **Run Scan.** +1. Choose a CIS scan profile. **Result:** A report is generated and displayed in the **CIS Scans** page. To see details of the report, click the report's name. -### Skipping a Test +# Scheduling Recurring Scans -1. From the cluster view in Rancher, click **Tools > CIS Scans.** -1. Click the name of the report that has tests you want to skip. -1. A **Skip** button is displayed next to each failed test. Click **Skip** for each test that should be skipped. +Recurring scans can be scheduled to run on any RKE Kubernetes cluster. -**Result:** The tests will be skipped on the next scan. +To enable recurring scans, edit the advanced options in the cluster configuration during cluster creation or after the cluster has been created. -To re-run the security scan, go to the top of the page and click **Run Scan.** +To schedule scans for an existing cluster: -### Un-skipping a Test +1. Go to the cluster view in Rancher. +1. Click **Tools > CIS Scans.** +1. Click **Add Schedule.** This takes you to the section of the cluster editing page that is applicable to configuring a schedule for CIS scans. (This section can also be reached by going to the cluster view, clicking **⋮ > Edit,** and going to the **Advanced Options.**) +1. In the **CIS Scan Enabled** field, click **Yes.** +1. In the **CIS Scan Profile** field, choose a **Permissive** or **Hardened** profile. The corresponding CIS Benchmark version is included in the profile name. Note: Any skipped tests [defined in a separate ConfigMap](#skipping-tests) will be skipped regardless of whether a **Permissive** or **Hardened** profile is selected. When selecting the the permissive profile, you should see which tests were skipped by Rancher (tests that are skipped by default for RKE clusters) and which tests were skipped by a Rancher user. In the hardened test profile, the only skipped tests will be skipped by users. +1. In the **CIS Scan Interval (cron)** job, enter a [cron expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) to define how often the cluster will be scanned. +1. In the **CIS Scan Report Retention** field, enter the number of past reports that should be kept. -1. From the cluster view in Rancher, click **Tools > CIS Scans.** -1. Click the name of the report that has tests you want to un-skip. -1. An **Unskip** button is displayed next to each skipped test. Click **Unskip** for each test that should not be skipped. +**Result:** The security scan will run and generate reports at the scheduled intervals. -**Result:** The tests will not be skipped on the next scan. 
+The test schedule can be configured in the `cluster.yml`: -To re-run the security scan, go to the top of the page and click **Run Scan.** +```yaml +scheduled_cluster_scan: +    enabled: true +    scan_config: +        cis_scan_config: +            override_benchmark_version: rke-cis-1.4 +            profile: permissive +    schedule_config: +        cron_schedule: 0 0 * * * +        retention: 24 +``` -### Deleting a Report + +# Skipping Tests + +You can define a set of tests that will be skipped by the CIS scan when the next report is generated. + +These tests will be skipped for subsequent CIS scans, including both manually triggered and scheduled scans, and the tests will be skipped with any profile. + +The skipped tests will be listed alongside the test profile name in the cluster configuration options when a test profile is selected for a recurring cluster scan. The skipped tests will also be shown every time a scan is triggered manually from the Rancher UI by clicking **Run Scan.** The display of skipped tests allows you to know ahead of time which tests will be run in each scan. + +To skip tests, you will need to define them in a Kubernetes ConfigMap resource. Each skipped CIS scan test is listed in the ConfigMap alongside the version of the CIS benchmark that the test belongs to. + +To skip tests by editing a ConfigMap resource, + +1. Create a `security-scan` namespace. +1. Create a ConfigMap named `security-scan-cfg`. +1. Enter the skip information under the key `config.json` in the following format. The CIS benchmark version is specified alongside the tests to be skipped for that version: + +```json +{ + "config.json": { + "skip": { + "rke-cis-1.4": [ "1.1.1", "1.2.2"] + } + } +} +``` + +**Result:** These tests will be skipped on subsequent scans that use the defined CIS Benchmark version. + +# Setting Alerts + +Rancher provides a set of alerts for cluster scans. which are not configured to have notifiers by default: + +- A manual cluster scan was completed +- A manual cluster scan has failures +- A scheduled cluster scan was completed +- A scheduled cluster scan has failures + +> **Prerequisite:** You need to configure a [notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) before configuring, sending, or receiving alerts. + +To activate an existing alert for a CIS scan result, + +1. From the cluster view in Rancher, click **Tools > Alerts.** +1. Go to the section called **A set of alerts for cluster scans.** +1. Go to the alert you want to activate and click **⋮ > Activate.** +1. Go to the alert rule group **A set of alerts for cluster scans** and click **⋮ > Edit.** +1. Scroll down to the **Alert** section. In the **To** field, select the notifier that you would like to use for sending alert notifications. +1. Optional: To limit the frequency of the notifications, click on **Show advanced options** and configure the time interval of the alerts. +1. Click **Save.** + +**Result:** The notifications will be triggered when the a scan is run on a cluster and the active alerts have satisfied conditions. + +To create a new alert, + +1. Go to the cluster view and click **Tools > CIS Scans.** +1. Click **Add Alert.** +1. Fill out the form. +1. Enter a name for the alert. +1. In the **Is** field, set the alert to be triggered when a scan is completed or when a scan has a failure. +1. In the **Send a** field, set the alert as a **Critical,** **Warning,** or **Info** alert level. +1. Choose a [notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) for the alert. 
+
+**Result:** The alert is created and activated. The notifications will be triggered when a scan is run on a cluster and the conditions of the active alerts are satisfied.
+
+For more information about alerts, refer to [this page.]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/)
+
+# Deleting a Report

 1. From the cluster view in Rancher, click **Tools > CIS Scans.**
 1. Go to the report that should be deleted.
-1. Click the **Ellipsis (...) > Delete.**
-1. Click **Delete.**
\ No newline at end of file
+1. Click the **⋮ > Delete.**
+1. Click **Delete.**
+
+# Downloading a Report
+
+1. From the cluster view in Rancher, click **Tools > CIS Scans.**
+1. Go to the report that you want to download. Click **⋮ > Download.**
+
+**Result:** The report is downloaded in CSV format. For more information on each column, refer to the [section about the generated report.](#about-the-generated-report)
diff --git a/content/rancher/v2.x/en/system-tools/_index.md b/content/rancher/v2.x/en/system-tools/_index.md
index 10a48611e45..257f73cf171 100644
--- a/content/rancher/v2.x/en/system-tools/_index.md
+++ b/content/rancher/v2.x/en/system-tools/_index.md
@@ -3,7 +3,7 @@ title: System Tools
 weight: 6001
 ---

-System Tools is a tool to perform operational tasks on [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters or [RKE cluster as used for installing Rancher on Kubernetes]({{< baseurl >}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/). The tasks include:
+System Tools is a tool to perform operational tasks on [Rancher Launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters or [installations of Rancher on an RKE cluster.]({{}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/) The tasks include:

 * Collect logging and system metrics from nodes.
 * Remove Kubernetes resources created by Rancher.

@@ -41,7 +41,7 @@ After you download the tools, complete the following actions:

 # Logs

-The logs subcommand will collect log files of core Kubernetes cluster components from nodes in [Rancher-launched Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) or nodes on an [RKE Kubernetes cluster that Rancher is installed on.]({{}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/). See [Troubleshooting]({{< baseurl >}}//rancher/v2.x/en/troubleshooting/) for a list of core Kubernetes cluster components.
+The logs subcommand will collect log files of core Kubernetes cluster components from nodes in [Rancher-launched Kubernetes clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) or nodes on an [RKE Kubernetes cluster that Rancher is installed on.]({{}}/rancher/v2.x/en/installation/k8s-install/kubernetes-rke/). See [Troubleshooting]({{}}//rancher/v2.x/en/troubleshooting/) for a list of core Kubernetes cluster components.

 System Tools will use the provided kubeconfig file to deploy a DaemonSet, that will copy all the logfiles from the core Kubernetes cluster components and add them to a single tar file (`cluster-logs.tar` by default). If you only want to collect logging from a single node, you can specify the node by using `--node NODENAME` or `-n NODENAME`.

@@ -81,7 +81,7 @@ The following are the options for the stats command:

 # Remove

->**Warning:** This command will remove data from your etcd nodes. Make sure you have created a [backup of etcd]({{< baseurl >}}/rancher/v2.x/en/backups/backups) before executing the command.
+>**Warning:** This command will remove data from your etcd nodes. Make sure you have created a [backup of etcd]({{}}/rancher/v2.x/en/backups/backups) before executing the command. When you install Rancher on a Kubernetes cluster, it will create Kubernetes resources to run and to store configuration data. If you want to remove Rancher from your cluster, you can use the `remove` subcommand to remove the Kubernetes resources. When you use the `remove` subcommand, the following resources will be removed: @@ -101,7 +101,7 @@ When you install Rancher on a Kubernetes cluster, it will create Kubernetes reso When you run the command below, all the resources listed [above](#remove) will be removed from the cluster. ->**Warning:** This command will remove data from your etcd nodes. Make sure you have created a [backup of etcd]({{< baseurl >}}/rancher/v2.x/en/backups/backups) before executing the command. +>**Warning:** This command will remove data from your etcd nodes. Make sure you have created a [backup of etcd]({{}}/rancher/v2.x/en/backups/backups) before executing the command. ``` ./system-tools remove --kubeconfig --namespace diff --git a/content/rancher/v2.x/en/troubleshooting/_index.md b/content/rancher/v2.x/en/troubleshooting/_index.md index 7f6b30c3891..edb5fb4f061 100644 --- a/content/rancher/v2.x/en/troubleshooting/_index.md +++ b/content/rancher/v2.x/en/troubleshooting/_index.md @@ -5,7 +5,7 @@ weight: 8100 This section contains information to help you troubleshoot issues when using Rancher. -- [Kubernetes components]({{< baseurl >}}/rancher/v2.x/en/troubleshooting/kubernetes-components/) +- [Kubernetes components]({{}}/rancher/v2.x/en/troubleshooting/kubernetes-components/) If you need help troubleshooting core Kubernetes cluster components like: * `etcd` @@ -16,22 +16,27 @@ This section contains information to help you troubleshoot issues when using Ran * `kube-proxy` * `nginx-proxy` -- [Kubernetes resources]({{< baseurl >}}/rancher/v2.x/en/troubleshooting/kubernetes-resources/) +- [Kubernetes resources]({{}}/rancher/v2.x/en/troubleshooting/kubernetes-resources/) Options for troubleshooting Kubernetes resources like Nodes, Ingress Controller and Rancher Agents are described in this section. -- [Networking]({{< baseurl >}}/rancher/v2.x/en/troubleshooting/networking/) +- [Networking]({{}}/rancher/v2.x/en/troubleshooting/networking/) Steps to troubleshoot networking issues can be found here. -- [DNS]({{< baseurl >}}/rancher/v2.x/en/troubleshooting/dns/) +- [DNS]({{}}/rancher/v2.x/en/troubleshooting/dns/) When you experience name resolution issues in your cluster. -- [Troubleshooting Rancher installed on Kubernetes]({{< baseurl >}}/rancher/v2.x/en/troubleshooting/rancherha/) +- [Troubleshooting Rancher installed on Kubernetes]({{}}/rancher/v2.x/en/troubleshooting/rancherha/) - If you experience issues with your [Rancher server installed on Kubernetes]({{< baseurl >}}/rancher/v2.x/en/installation/k8s-install/) + If you experience issues with your [Rancher server installed on Kubernetes]({{}}/rancher/v2.x/en/installation/k8s-install/) -- [Imported clusters]({{< baseurl >}}/rancher/v2.x/en/troubleshooting/imported-clusters/) +- [Imported clusters]({{}}/rancher/v2.x/en/troubleshooting/imported-clusters/) + + If you experience issues when [Importing Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/) + +- [Logging]({{}}/rancher/v2.x/en/troubleshooting/logging/) + + Read more about what log levels can be configured and how to configure a log level. 
- If you experience issues when [Importing Kubernetes Clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/) diff --git a/content/rancher/v2.x/en/troubleshooting/dns/_index.md b/content/rancher/v2.x/en/troubleshooting/dns/_index.md index f64f6e5729b..ecbe88a7588 100644 --- a/content/rancher/v2.x/en/troubleshooting/dns/_index.md +++ b/content/rancher/v2.x/en/troubleshooting/dns/_index.md @@ -7,7 +7,7 @@ The commands/steps listed on this page can be used to check name resolution issu Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_rancher-cluster.yml` for Rancher HA) or are using the embedded kubectl via the UI. -Before running the DNS checks, check the [default DNS provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#default-dns-provider) for your cluster and make sure that [the overlay network is functioning correctly]({{< baseurl >}}/rancher/v2.x/en/troubleshooting/networking/#check-if-overlay-network-is-functioning-correctly) as this can also be the reason why DNS resolution (partly) fails. +Before running the DNS checks, check the [default DNS provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#default-dns-provider) for your cluster and make sure that [the overlay network is functioning correctly]({{}}/rancher/v2.x/en/troubleshooting/networking/#check-if-overlay-network-is-functioning-correctly) as this can also be the reason why DNS resolution (partly) fails. ### Check if DNS pods are running @@ -196,7 +196,7 @@ services: > **Note:** As the `kubelet` is running inside a container, the path for files located in `/etc` and `/usr` are in `/host/etc` and `/host/usr` inside the `kubelet` container. -See [Editing Cluster as YAML]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/editing-clusters/#editing-cluster-as-yaml) how to apply this change. When the provisioning of the cluster has finished, you have to remove the kube-dns pod to activate the new setting in the pod: +See [Editing Cluster as YAML]({{}}/rancher/v2.x/en/k8s-in-rancher/editing-clusters/#editing-cluster-as-yaml) how to apply this change. When the provisioning of the cluster has finished, you have to remove the kube-dns pod to activate the new setting in the pod: ``` kubectl delete pods -n kube-system -l k8s-app=kube-dns diff --git a/content/rancher/v2.x/en/troubleshooting/kubernetes-components/_index.md b/content/rancher/v2.x/en/troubleshooting/kubernetes-components/_index.md index 0c73699ee9f..d2e32f91537 100644 --- a/content/rancher/v2.x/en/troubleshooting/kubernetes-components/_index.md +++ b/content/rancher/v2.x/en/troubleshooting/kubernetes-components/_index.md @@ -3,7 +3,7 @@ title: Kubernetes Components weight: 100 --- -The commands and steps listed in this section apply to the core Kubernetes components on [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters. +The commands and steps listed in this section apply to the core Kubernetes components on [Rancher Launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters. This section includes troubleshooting tips in the following categories: @@ -14,5 +14,5 @@ This section includes troubleshooting tips in the following categories: # Kubernetes Component Diagram -![Cluster diagram]({{< baseurl >}}/img/rancher/clusterdiagram.svg)
+![Cluster diagram]({{}}/img/rancher/clusterdiagram.svg)
Lines show the traffic flow between components. Colors are used purely for visual aid \ No newline at end of file diff --git a/content/rancher/v2.x/en/troubleshooting/kubernetes-components/controlplane/_index.md b/content/rancher/v2.x/en/troubleshooting/kubernetes-components/controlplane/_index.md index a94b1a04ee7..1ca42591cf2 100644 --- a/content/rancher/v2.x/en/troubleshooting/kubernetes-components/controlplane/_index.md +++ b/content/rancher/v2.x/en/troubleshooting/kubernetes-components/controlplane/_index.md @@ -29,7 +29,7 @@ bdf3898b8063 rancher/hyperkube:v1.11.5-rancher1 "/opt/rke-tools/en..." # Controlplane Container Logging -> **Note:** If you added multiple nodes with the `controlplane` role, both `kube-controller-manager` and `kube-scheduler` use a leader election process to determine the leader. Only the current leader will log the performed actions. See [Kubernetes leader election]({{< baseurl >}}/rancher/v2.x/en/troubleshooting/kubernetes-resources/#kubernetes-leader-election) how to retrieve the current leader. +> **Note:** If you added multiple nodes with the `controlplane` role, both `kube-controller-manager` and `kube-scheduler` use a leader election process to determine the leader. Only the current leader will log the performed actions. See [Kubernetes leader election]({{}}/rancher/v2.x/en/troubleshooting/kubernetes-resources/#kubernetes-leader-election) how to retrieve the current leader. The logging of the containers can contain information on what the problem could be. diff --git a/content/rancher/v2.x/en/troubleshooting/kubernetes-resources/_index.md b/content/rancher/v2.x/en/troubleshooting/kubernetes-resources/_index.md index c8eae70b743..f4a6b8aecf1 100644 --- a/content/rancher/v2.x/en/troubleshooting/kubernetes-resources/_index.md +++ b/content/rancher/v2.x/en/troubleshooting/kubernetes-resources/_index.md @@ -3,7 +3,7 @@ title: Kubernetes resources weight: 101 --- -The commands/steps listed on this page can be used to check the most important Kubernetes resources and apply to [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters. +The commands/steps listed on this page can be used to check the most important Kubernetes resources and apply to [Rancher Launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters. Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_rancher-cluster.yml` for Rancher HA) or are using the embedded kubectl via the UI. diff --git a/content/rancher/v2.x/en/troubleshooting/logging/_index.md b/content/rancher/v2.x/en/troubleshooting/logging/_index.md new file mode 100644 index 00000000000..50024334901 --- /dev/null +++ b/content/rancher/v2.x/en/troubleshooting/logging/_index.md @@ -0,0 +1,48 @@ +--- +title: Logging +weight: 110 +--- + +The following log levels are used in Rancher: + +| Name | Description | +|---------|-------------| +| `info` | Logs informational messages. This is the default log level. | +| `debug` | Logs more detailed messages that can be used to debug. | +| `trace` | Logs very detailed messages on internal functions. This is very verbose and can contain sensitive information. 
| + +### How to configure a log level + +* Kubernetes install + * Configure debug log level +``` +$ KUBECONFIG=./kube_config_rancher-cluster.yml +$ kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | awk '{ print $1 }' | while read rancherpod; do kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $rancherpod -- loglevel --set debug; done +OK +OK +OK +$ kubectl --kubeconfig $KUBECONFIG -n cattle-system logs -l app=rancher +``` + + * Configure info log level +``` +$ KUBECONFIG=./kube_config_rancher-cluster.yml +$ kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | awk '{ print $1 }' | while read rancherpod; do kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $rancherpod -- loglevel --set info; done +OK +OK +OK +``` + +* Docker Install + * Configure debug log level +``` +$ docker exec -ti loglevel --set debug +OK +$ docker logs -f +``` + + * Configure info log level +``` +$ docker exec -ti loglevel --set info +OK +``` diff --git a/content/rancher/v2.x/en/troubleshooting/networking/_index.md b/content/rancher/v2.x/en/troubleshooting/networking/_index.md index d76fbf67773..7259b61a3e0 100644 --- a/content/rancher/v2.x/en/troubleshooting/networking/_index.md +++ b/content/rancher/v2.x/en/troubleshooting/networking/_index.md @@ -112,7 +112,7 @@ If there is no output, the cluster is not affected. |------------|------------| | GitHub issue | [#15146](https://github.com/rancher/rancher/issues/15146) | -If pods in system namespaces cannot communicate with pods in other system namespaces, you will need to follow the instructions in [Upgrading to v2.0.7+ — Namespace Migration]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/) to restore connectivity. Symptoms include: +If pods in system namespaces cannot communicate with pods in other system namespaces, you will need to follow the instructions in [Upgrading to v2.0.7+ — Namespace Migration]({{}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/) to restore connectivity. Symptoms include: - NGINX ingress controller showing `504 Gateway Time-out` when accessed. - NGINX ingress controller logging `upstream timed out (110: Connection timed out) while connecting to upstream` when accessed. diff --git a/content/rancher/v2.x/en/upgrades/_index.md b/content/rancher/v2.x/en/upgrades/_index.md index 5fdcdc3dc16..4debea3156e 100644 --- a/content/rancher/v2.x/en/upgrades/_index.md +++ b/content/rancher/v2.x/en/upgrades/_index.md @@ -1,13 +1,11 @@ --- title: Upgrades and Rollbacks weight: 150 -aliases: - - /rancher/v2.x/en/backups/rollbacks/ --- ### Upgrading Rancher -- [Upgrades]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/) +- [Upgrades]({{}}/rancher/v2.x/en/upgrades/upgrades/) ### Rolling Back Unsuccessful Upgrades @@ -16,7 +14,7 @@ In the event that your Rancher Server does not upgrade successfully, you can rol - [Rollbacks for Rancher installed with Docker]({{}}/rancher/v2.x/en/upgrades/single-node-rollbacks) - [Rollbacks for Rancher installed on a Kubernetes cluster]({{}}/rancher/v2.x/en/upgrades/ha-server-rollbacks) -> **Note:** If you are rolling back to versions in either of these scenarios, you must follow some extra [instructions]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/) in order to get your clusters working. +> **Note:** If you are rolling back to versions in either of these scenarios, you must follow some extra [instructions]({{}}/rancher/v2.x/en/upgrades/rollbacks/) in order to get your clusters working. 
> >- Rolling back from v2.1.6+ to any version between v2.1.0 - v2.1.5 or v2.0.0 - v2.0.10. >- Rolling back from v2.0.11+ to any version between v2.0.0 - v2.0.10. diff --git a/content/rancher/v2.x/en/upgrades/rollbacks/_index.md b/content/rancher/v2.x/en/upgrades/rollbacks/_index.md index 245af441455..4a3c79a010a 100644 --- a/content/rancher/v2.x/en/upgrades/rollbacks/_index.md +++ b/content/rancher/v2.x/en/upgrades/rollbacks/_index.md @@ -32,7 +32,7 @@ Because of the changes necessary to address [CVE-2018-20321](https://cve.mitre.o 2. After executing the command a `tokens.json` file will be created. Important! Back up this file in a safe place.** You will need it to restore functionality to your clusters after rolling back Rancher. **If you lose this file, you may lose access to your clusters.** -3. Rollback Rancher following the [normal instructions]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/). +3. Rollback Rancher following the [normal instructions]({{}}/rancher/v2.x/en/upgrades/rollbacks/). 4. Once Rancher comes back up, every cluster managed by Rancher (except for Imported clusters) will be in an `Unavailable` state. diff --git a/content/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/_index.md b/content/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/_index.md index 3288777bd26..2cca7a4b78a 100644 --- a/content/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/_index.md +++ b/content/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/_index.md @@ -7,7 +7,7 @@ aliases: If you upgrade Rancher and the upgrade does not complete successfully, you may need to rollback your Rancher Server to its last healthy state. -To restore Rancher follow the procedure detailed here: [Restoring Backups — Kubernetes installs]({{< baseurl >}}/rancher/v2.x/en/backups/restorations/ha-restoration) +To restore Rancher follow the procedure detailed here: [Restoring Backups — Kubernetes installs]({{}}/rancher/v2.x/en/backups/restorations/ha-restoration) Restoring a snapshot of the Rancher Server cluster will revert Rancher to the version and state at the time of the snapshot. diff --git a/content/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/_index.md b/content/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/_index.md index 0a041e08ae8..4705d65d1d8 100644 --- a/content/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/_index.md +++ b/content/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/_index.md @@ -2,7 +2,6 @@ title: Docker Rollback weight: 1015 aliases: - - /rancher/v2.x/en/backups/rollbacks/single-node-rollbacks - /rancher/v2.x/en/upgrades/single-node-rollbacks --- @@ -24,7 +23,7 @@ In this command, `` is the version of Rancher you were ru Cross reference the image and reference table below to learn how to obtain this placeholder data. Write down or copy this information before starting the [procedure below](#creating-a-backup). Terminal `docker ps` Command, Displaying Where to Find `` and `` -![Placeholder Reference]({{< baseurl >}}/img/rancher/placeholder-ref-2.png) +![Placeholder Reference]({{}}/img/rancher/placeholder-ref-2.png) | Placeholder | Example | Description | | -------------------------- | -------------------------- | ------------------------------------------------------- | @@ -59,9 +58,9 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s ``` You can obtain the name for your Rancher container by entering `docker ps`. -1. 
Move the backup tarball that you created during completion of [Docker Upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/) onto your Rancher Server. Change to the directory that you moved it to. Enter `dir` to confirm that it's there. +1. Move the backup tarball that you created during completion of [Docker Upgrade]({{}}/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/) onto your Rancher Server. Change to the directory that you moved it to. Enter `dir` to confirm that it's there. - If you followed the naming convention we suggested in [Docker Upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/), it will have a name similar to (`rancher-data-backup--.tar.gz`). + If you followed the naming convention we suggested in [Docker Upgrade]({{}}/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/), it will have a name similar to (`rancher-data-backup--.tar.gz`). 1. Run the following command to replace the data in the `rancher-data` container with the data in the backup tarball, replacing the [placeholder](#before-you-start). Don't forget to close the quotes. diff --git a/content/rancher/v2.x/en/upgrades/upgrades/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/_index.md index d83b0af6f5a..68539cc09c1 100644 --- a/content/rancher/v2.x/en/upgrades/upgrades/_index.md +++ b/content/rancher/v2.x/en/upgrades/upgrades/_index.md @@ -14,17 +14,17 @@ The following table lists some of the most noteworthy issues to be considered wh Upgrade Scenario | Issue ---|--- Upgrading to v2.3.0+ | Any user provisioned cluster will be automatically updated upon any edit as tolerations were added to the images used for Kubernetes provisioning. -Upgrading to v2.2.0-v2.2.x | Rancher introduced the [system charts](https://github.com/rancher/system-charts) repository which contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. To be able to use these features in an air gap install, you will need to mirror the `system-charts` repository locally and configure Rancher to use that repository. Please follow the instructions to [configure Rancher system charts]({{< baseurl >}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0). +Upgrading to v2.2.0-v2.2.x | Rancher introduced the [system charts](https://github.com/rancher/system-charts) repository which contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. To be able to use these features in an air gap install, you will need to mirror the `system-charts` repository locally and configure Rancher to use that repository. Please follow the instructions to [configure Rancher system charts]({{}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0). Upgrading from v2.0.13 or earlier | If your cluster's certificates have expired, you will need to perform [additional steps]({{}}/rancher/v2.x/en/cluster-admin/certificate-rotation/#rotating-expired-certificates-after-upgrading-older-rancher-versions) to rotate the certificates. -Upgrading from v2.0.7 or earlier | Rancher introduced the `system` project, which is a project that's automatically created to store important namespaces that Kubernetes needs to operate. During upgrade to v2.0.7+, Rancher expects these namespaces to be unassigned from all projects. 
Before beginning upgrade, check your system namespaces to make sure that they're unassigned to [prevent cluster networking issues]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#preventing-cluster-networking-issues). +Upgrading from v2.0.7 or earlier | Rancher introduced the `system` project, which is a project that's automatically created to store important namespaces that Kubernetes needs to operate. During upgrade to v2.0.7+, Rancher expects these namespaces to be unassigned from all projects. Before beginning upgrade, check your system namespaces to make sure that they're unassigned to [prevent cluster networking issues]({{}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#preventing-cluster-networking-issues). ### Caveats -Upgrades _to_ or _from_ any chart in the [rancher-alpha repository]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/#helm-chart-repositories/) aren't supported. +Upgrades _to_ or _from_ any chart in the [rancher-alpha repository]({{}}/rancher/v2.x/en/installation/options/server-tags/#helm-chart-repositories/) aren't supported. ### RKE Add-on Installs **Important: RKE add-on install is only supported up to Rancher v2.0.8** -Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). +Please use the Rancher helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install - Installation Outline]({{}}/rancher/v2.x/en/installation/k8s-install/#installation-outline). -If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. +If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart. diff --git a/content/rancher/v2.x/en/upgrades/upgrades/ha/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/ha/_index.md index b2ff236b0d1..6b57e500f0a 100644 --- a/content/rancher/v2.x/en/upgrades/upgrades/ha/_index.md +++ b/content/rancher/v2.x/en/upgrades/upgrades/ha/_index.md @@ -8,7 +8,7 @@ aliases: The following instructions will guide you through using Helm to upgrade a Rancher server that was installed on a Kubernetes cluster. -To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{}}/rke/latest/en/config-options/services/) or [add-ons]({{< baseurl >}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE]({{}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine. +To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{}}/rke/latest/en/config-options/services/) or [add-ons]({{}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE]({{}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine. If you installed Rancher using the RKE Add-on yaml, follow the directions to [migrate or upgrade]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on). @@ -34,7 +34,7 @@ Follow the steps to upgrade Rancher server: ### A. 
Back up Your Kubernetes Cluster that is Running Rancher Server -[Take a one-time snapshot]({{< baseurl >}}/rancher/v2.x/en/backups/backups/ha-backups/#option-b-one-time-snapshots) +[Take a one-time snapshot]({{}}/rancher/v2.x/en/backups/backups/ha-backups/#option-b-one-time-snapshots) of your Kubernetes cluster running Rancher server. You'll use the snapshot as a restoration point if something goes wrong during upgrade. ### B. Update the Helm chart repository @@ -47,7 +47,7 @@ of your Kubernetes cluster running Rancher server. You'll use the snapshot as a 1. Get the repository name that you used to install Rancher. - For information about the repos and their differences, see [Helm Chart Repositories]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/#helm-chart-repositories). + For information about the repos and their differences, see [Helm Chart Repositories]({{}}/rancher/v2.x/en/installation/options/server-tags/#helm-chart-repositories). {{< release-channel >}} @@ -59,7 +59,7 @@ of your Kubernetes cluster running Rancher server. You'll use the snapshot as a rancher- https://releases.rancher.com/server-charts/ ``` - > **Note:** If you want to switch to a different Helm chart repository, please follow the [steps on how to switch repositories]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/#switching-to-a-different-helm-chart-repository). If you switch repositories, make sure to list the repositories again before continuing onto Step 3 to ensure you have the correct one added. + > **Note:** If you want to switch to a different Helm chart repository, please follow the [steps on how to switch repositories]({{}}/rancher/v2.x/en/installation/options/server-tags/#switching-to-a-different-helm-chart-repository). If you switch repositories, make sure to list the repositories again before continuing onto Step 3 to ensure you have the correct one added. 1. Fetch the latest chart to install Rancher from the Helm chart repository. @@ -80,14 +80,14 @@ This section describes how to upgrade normal (Internet-connected) or air gap ins Get the values, which were passed with `--set`, from the current Rancher Helm chart that is installed. ``` -helm get values rancher +helm get values rancher -n cattle-system hostname: rancher.my.org ``` > **Note:** There will be more values that are listed with this command. This is just an example of one of the values. -If you are also upgrading cert-manager to the latest version from a version older than 0.11.0, follow `Option B: Reinstalling Rancher`. Otherwise, follow `Option A: Upgrading Rancher`. +If you are also upgrading cert-manager to the latest version from a version older than 0.11.0, follow `Option B: Reinstalling Rancher and cert-manager`. Otherwise, follow `Option A: Upgrading Rancher`. {{% accordion label="Option A: Upgrading Rancher" %}} @@ -105,12 +105,10 @@ helm upgrade rancher rancher-/rancher \ {{% /accordion %}} -{{% accordion label="Option B: Reinstalling Rancher chart" %}} +{{% accordion label="Option B: Reinstalling Rancher and cert-manager" %}} If you are currently running the cert-manger whose version is older than v0.11, and want to upgrade both Rancher and cert-manager to a newer version, then you need to reinstall both Rancher and cert-manger due to the API change in cert-manger v0.11. -Please refer the [Upgrading Cert-Manager]({{< baseurl >}}/rancher/v2.x/en/installation/options/upgrading-cert-manager) page for more information. - 1. 
Uninstall Rancher ``` @@ -125,6 +123,8 @@ Please refer the [Upgrading Cert-Manager]({{< baseurl >}}/rancher/v2.x/en/instal --set hostname=rancher.my.org ``` +3. Uninstall and reinstall `cert-manager` according to the instructions on the [Upgrading Cert-Manager]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager) page. + {{% /accordion %}} {{% /tab %}} @@ -190,8 +190,8 @@ Log into Rancher to confirm that the upgrade succeeded. >**Having network issues following upgrade?** > -> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking). +> See [Restoring Cluster Networking]({{}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking). ## Rolling Back -Should something go wrong, follow the [roll back]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/) instructions to restore the snapshot you took before you preformed the upgrade. +Should something go wrong, follow the [roll back]({{}}/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/) instructions to restore the snapshot you took before you preformed the upgrade. diff --git a/content/rancher/v2.x/en/upgrades/upgrades/ha/helm2/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/ha/helm2/_index.md index 1c717cd0a03..da4d7ba5cd6 100644 --- a/content/rancher/v2.x/en/upgrades/upgrades/ha/helm2/_index.md +++ b/content/rancher/v2.x/en/upgrades/upgrades/ha/helm2/_index.md @@ -3,15 +3,15 @@ title: Upgrading Rancher Installed on Kubernetes with Helm 2 weight: 1050 --- -> After Helm 3 was released, the [instructions for upgrading Rancher on a Kubernetes cluster](./ha) were updated to use Helm 3. +> Helm 3 has been released. If you are using Helm 2, we recommend [migrating to Helm 3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) because it is simpler to use and more secure than Helm 2. > -> If you are using Helm 2, we recommend [migrating to Helm 3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) because it is simpler to use and more secure than Helm 2. +> The [current instructions for Upgrading Rancher Installed on Kubernetes](https://rancher.com/docs/rancher/v2.x/en/upgrades/upgrades/ha/) use Helm 3. > > This section provides a copy of the older instructions for upgrading Rancher with Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible. The following instructions will guide you through using Helm to upgrade a Rancher server that is installed on a Kubernetes cluster. -To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{}}/rke/latest/en/config-options/services/) or [add-ons]({{< baseurl >}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE]({{}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine. +To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services]({{}}/rke/latest/en/config-options/services/) or [add-ons]({{}}/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE]({{}}/rke/latest/en/upgrades/), the Rancher Kubernetes Engine. If you installed Rancher using the RKE Add-on yaml, follow the directions to [migrate or upgrade]({{}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on). @@ -37,7 +37,7 @@ Follow the steps to upgrade Rancher server: ### A. 
Back up Your Kubernetes Cluster that is Running Rancher Server -[Take a one-time snapshot]({{< baseurl >}}/rancher/v2.x/en/backups/backups/ha-backups/#option-b-one-time-snapshots) +[Take a one-time snapshot]({{}}/rancher/v2.x/en/backups/backups/ha-backups/#option-b-one-time-snapshots) of your Kubernetes cluster running Rancher server. You'll use the snapshot as a restoration point if something goes wrong during upgrade. ### B. Update the Helm chart repository @@ -50,7 +50,7 @@ of your Kubernetes cluster running Rancher server. You'll use the snapshot as a 1. Get the repository name that you used to install Rancher. - For information about the repos and their differences, see [Helm Chart Repositories]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/#helm-chart-repositories). + For information about the repos and their differences, see [Helm Chart Repositories]({{}}/rancher/v2.x/en/installation/options/server-tags/#helm-chart-repositories). {{< release-channel >}} @@ -62,7 +62,7 @@ of your Kubernetes cluster running Rancher server. You'll use the snapshot as a rancher- https://releases.rancher.com/server-charts/ ``` - > **Note:** If you want to switch to a different Helm chart repository, please follow the [steps on how to switch repositories]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/#switching-to-a-different-helm-chart-repository). If you switch repositories, make sure to list the repositories again before continuing onto Step 3 to ensure you have the correct one added. + > **Note:** If you want to switch to a different Helm chart repository, please follow the [steps on how to switch repositories]({{}}/rancher/v2.x/en/installation/options/server-tags/#switching-to-a-different-helm-chart-repository). If you switch repositories, make sure to list the repositories again before continuing onto Step 3 to ensure you have the correct one added. 1. Fetch the latest chart to install Rancher from the Helm chart repository. @@ -110,7 +110,7 @@ helm upgrade rancher-/rancher \ If you are currently running the cert-manger whose version is older than v0.11, and want to upgrade both Rancher and cert-manager to a newer version, then you need to reinstall both Rancher and cert-manger due to the API change in cert-manger v0.11. -Please refer the [Upgrading Cert-Manager]({{< baseurl >}}/rancher/v2.x/en/installation/options/upgrading-cert-manager) page for more information. +Please refer the [Upgrading Cert-Manager]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager) page for more information. 1. Uninstall Rancher @@ -192,8 +192,8 @@ Log into Rancher to confirm that the upgrade succeeded. >**Having network issues following upgrade?** > -> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking). +> See [Restoring Cluster Networking]({{}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking). ## Rolling Back -Should something go wrong, follow the [roll back]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/) instructions to restore the snapshot you took before you preformed the upgrade. +Should something go wrong, follow the [roll back]({{}}/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/) instructions to restore the snapshot you took before you preformed the upgrade. 
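Putting the Helm upgrade steps above together, a sketch assuming Helm 3 syntax, a chart repository added as `rancher-stable`, Rancher installed in the `cattle-system` namespace, and `rancher.my.org` as the hostname (all of these are example values):

```
# Optional: take a one-time etcd snapshot of the local cluster first (RKE-built clusters).
rke etcd snapshot-save --config rancher-cluster.yml --name pre-rancher-upgrade

# Refresh the chart repository and confirm the values currently in use.
helm repo update
helm get values rancher -n cattle-system

# Upgrade the release, re-passing the values shown by the previous command.
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org
```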
diff --git a/content/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/_index.md index c5e8091bdba..77b7a515e7a 100644 --- a/content/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/_index.md +++ b/content/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/_index.md @@ -57,7 +57,7 @@ kubectl -n cattle-system get secret cattle-keys-server -o jsonpath --template='{ Remove the Kubernetes objects created by the RKE install. -> **Note:** Removing these Kubernetes components will not affect the Rancher configuration or database, but with any maintenance it is a good idea to create a backup of the data before hand. See [Creating Backups-Kubernetes Install]({{< baseurl >}}/rancher/v2.x/en/backups/backups/ha-backups) for details. +> **Note:** Removing these Kubernetes components will not affect the Rancher configuration or database, but with any maintenance it is a good idea to create a backup of the data before hand. See [Creating Backups-Kubernetes Install]({{}}/rancher/v2.x/en/backups/backups/ha-backups) for details. ``` kubectl -n cattle-system delete ingress cattle-ingress-http @@ -105,5 +105,5 @@ addons: |- From here follow the standard install steps. -* [3 - Initialize Helm]({{< baseurl >}}/rancher/v2.x/en/installation/options/helm2/helm-init/) -* [4 - Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/) +* [3 - Initialize Helm]({{}}/rancher/v2.x/en/installation/options/helm2/helm-init/) +* [4 - Install Rancher]({{}}/rancher/v2.x/en/installation/options/helm2/helm-rancher/) diff --git a/content/rancher/v2.x/en/upgrades/upgrades/namespace-migration/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/namespace-migration/_index.md index 2d85fdad4d6..56855eb7b5e 100644 --- a/content/rancher/v2.x/en/upgrades/upgrades/namespace-migration/_index.md +++ b/content/rancher/v2.x/en/upgrades/upgrades/namespace-migration/_index.md @@ -52,11 +52,11 @@ You can prevent cluster networking issues from occurring during your upgrade to >1 Only displays if this feature is enabled for the cluster.
Moving namespaces out of projects
- ![Moving Namespaces]({{< baseurl >}}/img/rancher/move-namespaces.png) + ![Moving Namespaces]({{}}/img/rancher/move-namespaces.png) 1. Repeat these steps for each cluster where you've assigned system namespaces to projects. -**Result:** All system namespaces are moved out of Rancher projects. You can now safely begin the [upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades). +**Result:** All system namespaces are moved out of Rancher projects. You can now safely begin the [upgrade]({{}}/rancher/v2.x/en/upgrades/upgrades). ## Restoring Cluster Networking @@ -171,8 +171,8 @@ Reset the cluster nodes' network policies to restore connectivity.
If you can access Rancher, but one or more of the clusters that you launched using Rancher has no networking, you can repair them by moving the -- From the cluster's [embedded kubectl shell]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-shell). -- By [downloading the cluster kubeconfig file and running it]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-and-a-kubeconfig-file) from your workstation. +- From the cluster's [embedded kubectl shell]({{}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-shell). +- By [downloading the cluster kubeconfig file and running it]({{}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-and-a-kubeconfig-file) from your workstation. ``` for namespace in $(kubectl --kubeconfig kube_config_rancher-cluster.yml get ns -o custom-columns=NAME:.metadata.name --no-headers); do diff --git a/content/rancher/v2.x/en/upgrades/upgrades/single-node/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/single-node/_index.md index 6c5581e8f0b..3b1448cc156 100644 --- a/content/rancher/v2.x/en/upgrades/upgrades/single-node/_index.md +++ b/content/rancher/v2.x/en/upgrades/upgrades/single-node/_index.md @@ -28,7 +28,7 @@ In this command, `` is the name of your Rancher containe Cross reference the image and reference table below to learn how to obtain this placeholder data. Write down or copy this information before starting the upgrade. Terminal `docker ps` Command, Displaying Where to Find `` and `` -![Placeholder Reference]({{< baseurl >}}/img/rancher/placeholder-ref.png) +![Placeholder Reference]({{}}/img/rancher/placeholder-ref.png) | Placeholder | Example | Description | | -------------------------- | -------------------------- | --------------------------------------------------------- | @@ -95,7 +95,7 @@ Pull the image of the Rancher version that you want to upgrade to. Placeholder | Description ------------|------------- -`` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. +`` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. ``` docker pull rancher/rancher: @@ -129,13 +129,13 @@ If you have selected to use the Rancher generated self-signed certificate, you a Placeholder | Description ------------|------------- -`` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. +`` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. ``` docker run -d --volumes-from rancher-data \ --restart=unless-stopped \ -p 80:80 -p 443:443 \ - rancher/rancher: + rancher/rancher: ``` {{% /accordion %}} @@ -152,16 +152,16 @@ Placeholder | Description `` | The path to your full certificate chain. `` | The path to the private key for your certificate. `` | The path to the certificate authority's certificate. -`` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. +`` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. 
``` docker run -d --volumes-from rancher-data \ --restart=unless-stopped \ - -p 80:80 -p 443:443 \ - -v //:/etc/rancher/ssl/cert.pem \ - -v //:/etc/rancher/ssl/key.pem \ - -v //:/etc/rancher/ssl/cacerts.pem \ - rancher/rancher: + -p 80:80 -p 443:443 \ + -v //:/etc/rancher/ssl/cert.pem \ + -v //:/etc/rancher/ssl/key.pem \ + -v //:/etc/rancher/ssl/cacerts.pem \ + rancher/rancher: ``` {{% /accordion %}} @@ -176,15 +176,15 @@ Placeholder | Description `` | The path to the directory containing your certificate files. `` | The path to your full certificate chain. `` | The path to the private key for your certificate. -`` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. +`` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. ``` docker run -d --volumes-from rancher-data \ --restart=unless-stopped \ - -p 80:80 -p 443:443 \ - -v //:/etc/rancher/ssl/cert.pem \ - -v //:/etc/rancher/ssl/key.pem \ - rancher/rancher: \ + -p 80:80 -p 443:443 \ + -v //:/etc/rancher/ssl/cert.pem \ + -v //:/etc/rancher/ssl/key.pem \ + rancher/rancher: \ --no-cacerts ``` {{% /accordion %}} @@ -201,14 +201,14 @@ If you have selected to use [Let's Encrypt](https://letsencrypt.org/) certificat Placeholder | Description ------------|------------- -`` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. +`` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. `` | The domain address that you had originally started with ``` docker run -d --volumes-from rancher-data \ --restart=unless-stopped \ - -p 80:80 -p 443:443 \ - rancher/rancher: \ + -p 80:80 -p 443:443 \ + rancher/rancher: \ --acme-domain ``` @@ -230,7 +230,7 @@ If you have selected to use the Rancher generated self-signed certificate, you a Placeholder | Description ------------|------------- `` | Your private registry URL and port. -`` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/) that you want to to upgrade to. +`` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. ``` docker run -d --volumes-from rancher-data \ @@ -255,7 +255,7 @@ Placeholder | Description `` | The path to the private key for your certificate. `` | The path to the certificate authority's certificate. `` | Your private registry URL and port. -`` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. +`` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. ``` docker run -d --restart=unless-stopped \ @@ -281,7 +281,7 @@ Placeholder | Description `` | The path to your full certificate chain. `` | The path to the private key for your certificate. `` | Your private registry URL and port. -`` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. +`` | The release tag of the [Rancher version]({{}}/rancher/v2.x/en/installation/options/server-tags/) that you want to upgrade to. > **Note:** Use the `--no-cacerts` as argument to the container to disable the default CA certificate generated by Rancher.
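Because the angle-bracket placeholders in the commands above were stripped, here is a sketch of the certificate-based upgrade command with hypothetical paths and tag filled in (every value below is an example, not a default):

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/certs/fullchain.pem:/etc/rancher/ssl/cert.pem \
  -v /opt/certs/privkey.pem:/etc/rancher/ssl/key.pem \
  -v /opt/certs/cacerts.pem:/etc/rancher/ssl/cacerts.pem \
  rancher/rancher:v2.4.5   # example tag; use the version you are upgrading to
```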
@@ -308,7 +308,7 @@ Log into Rancher. Confirm that the upgrade succeeded by checking the version dis >**Having network issues in your user clusters following upgrade?** > -> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking). +> See [Restoring Cluster Networking]({{}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking). ### F. Clean up Your Old Rancher Server Container @@ -317,4 +317,4 @@ Remove the previous Rancher server container. If you only stop the previous Ranc ## Rolling Back -If your upgrade does not complete successfully, you can roll back Rancher server and its data back to its last healthy state. For more information, see [Docker Rollback]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/). +If your upgrade does not complete successfully, you can roll back Rancher server and its data back to its last healthy state. For more information, see [Docker Rollback]({{}}/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/). diff --git a/content/rancher/v2.x/en/user-settings/_index.md b/content/rancher/v2.x/en/user-settings/_index.md index 4fea8416f2c..c048530c560 100644 --- a/content/rancher/v2.x/en/user-settings/_index.md +++ b/content/rancher/v2.x/en/user-settings/_index.md @@ -7,12 +7,12 @@ aliases: Within Rancher, each user has a number of settings associated with their login: personal preferences, API keys, etc. You can configure these settings by choosing from the **User Settings** menu. You can open this menu by clicking your avatar, located within the main menu. -![User Settings Menu]({{< baseurl >}}/img/rancher/user-settings.png) +![User Settings Menu]({{}}/img/rancher/user-settings.png) The available user settings are: -- [API & Keys]({{< baseurl >}}/rancher/v2.x/en/user-settings/api-keys/): If you want to interact with Rancher programmatically, you need an API key. Follow the directions in this section to obtain a key.gferfgre -- [Cloud Credentials]({{< baseurl >}}/rancher/v2.x/en/user-settings/cloud-credentials/): Manage cloud credentials [used by node templates]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) to [provision nodes for clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters). Note: Available as of v2.2.0. -- [Node Templates]({{< baseurl >}}/rancher/v2.x/en/user-settings/node-templates): Manage templates [used by Rancher to provision nodes for clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters). -- [Preferences]({{< baseurl >}}/rancher/v2.x/en/user-settings/preferences): Sets superficial preferences for the Rancher UI. +- [API & Keys]({{}}/rancher/v2.x/en/user-settings/api-keys/): If you want to interact with Rancher programmatically, you need an API key. Follow the directions in this section to obtain a key.gferfgre +- [Cloud Credentials]({{}}/rancher/v2.x/en/user-settings/cloud-credentials/): Manage cloud credentials [used by node templates]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) to [provision nodes for clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters). Note: Available as of v2.2.0. +- [Node Templates]({{}}/rancher/v2.x/en/user-settings/node-templates): Manage templates [used by Rancher to provision nodes for clusters]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters). 
+- [Preferences]({{}}/rancher/v2.x/en/user-settings/preferences): Sets superficial preferences for the Rancher UI. - Log Out: Ends your user session. diff --git a/content/rancher/v2.x/en/user-settings/api-keys/_index.md b/content/rancher/v2.x/en/user-settings/api-keys/_index.md index a824b0d58f5..bddabe76c3c 100644 --- a/content/rancher/v2.x/en/user-settings/api-keys/_index.md +++ b/content/rancher/v2.x/en/user-settings/api-keys/_index.md @@ -29,7 +29,7 @@ API Keys are composed of four components: The API key won't be valid after expiration. Shorter expiration periods are more secure. - A scope will limit the API key so that it will only work against the Kubernetes API of the specified cluster. If the cluster is configured with an Authorized Cluster Endpoint, you will be able to use a scoped token directly against the cluster's API without proxying through the Rancher server. See [Authorized Cluster Endpoints]({{< baseurl >}}/rancher/v2.x/en/overview/architecture/#4-authorized-cluster-endpoint) for more information. + A scope will limit the API key so that it will only work against the Kubernetes API of the specified cluster. If the cluster is configured with an Authorized Cluster Endpoint, you will be able to use a scoped token directly against the cluster's API without proxying through the Rancher server. See [Authorized Cluster Endpoints]({{}}/rancher/v2.x/en/overview/architecture/#4-authorized-cluster-endpoint) for more information. 4. Click **Create**. @@ -43,7 +43,7 @@ API Keys are composed of four components: - Enter your API key information into the application that will send requests to the Rancher API. - Learn more about the Rancher endpoints and parameters by selecting **View in API** for an object in the Rancher UI. -- API keys are used for API calls and [Rancher CLI]({{< baseurl >}}/rancher/v2.x/en/cli). +- API keys are used for API calls and [Rancher CLI]({{}}/rancher/v2.x/en/cli). ## Deleting API Keys diff --git a/content/rancher/v2.x/en/user-settings/cloud-credentials/_index.md b/content/rancher/v2.x/en/user-settings/cloud-credentials/_index.md index 57884ad24d5..148f8f6783f 100644 --- a/content/rancher/v2.x/en/user-settings/cloud-credentials/_index.md +++ b/content/rancher/v2.x/en/user-settings/cloud-credentials/_index.md @@ -5,7 +5,7 @@ weight: 7011 _Available as of v2.2.0_ -When you create a cluster [hosted by an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools), [node templates]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. +When you create a cluster [hosted by an infrastructure provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools), [node templates]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. Node templates can use cloud credentials to access the credential information required to provision nodes in the infrastructure providers. The same cloud credential can be used by multiple node templates. By using a cloud credential, you do not have to re-enter access keys for the same cloud provider. Cloud credentials are stored as Kubernetes secrets. 
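As an illustration of calling the Rancher API with an API key as described in the API Keys section above (the server URL and token are placeholders; the Bearer token is the combined access key and secret key shown when the key is created):

```
# List the clusters visible to the key's owner via the v3 API.
curl -s -H "Authorization: Bearer token-abc12:xxxxxxxxxxxxxxxxxxxx" \
  https://rancher.example.com/v3/clusters
```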
@@ -13,7 +13,7 @@ Cloud credentials are only used by node templates if there are fields marked as You can create cloud credentials in two contexts: -- [During creation of a node template]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) for a cluster. +- [During creation of a node template]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) for a cluster. - In the **User Settings** All cloud credentials are bound to the user profile of who created it. They **cannot** be shared across users. @@ -23,29 +23,29 @@ All cloud credentials are bound to the user profile of who created it. They **ca 1. From your user settings, select **User Avatar > Cloud Credentials**. 1. Click **Add Cloud Credential**. 1. Enter a name for the cloud credential. -1. Select a **Cloud Credential Type** from the drop down. The values of this dropdown is based on the `active` [node drivers]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/node-drivers/) in Rancher. +1. Select a **Cloud Credential Type** from the drop down. The values of this dropdown is based on the `active` [node drivers]({{}}/rancher/v2.x/en/admin-settings/drivers/node-drivers/) in Rancher. 1. Based on the selected cloud credential type, enter the required values to authenticate with the infrastructure provider. 1. Click **Create**. -**Result:** The cloud credential is created and can immediately be used to [create node templates]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates). +**Result:** The cloud credential is created and can immediately be used to [create node templates]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates). ## Updating a Cloud Credential When access credentials are changed or compromised, updating a cloud credential allows you to rotate those credentials while keeping the same node template. 1. From your user settings, select **User Avatar > Cloud Credentials**. -1. Choose the cloud credential you want to edit and click the **Vertical Ellipsis (...) > Edit**. +1. Choose the cloud credential you want to edit and click the **⋮ > Edit**. 1. Update the credential information and click **Save**. -**Result:** The cloud credential is updated with the new access credentials. All existing node templates using this cloud credential will automatically use the updated information whenever [new nodes are added]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/). +**Result:** The cloud credential is updated with the new access credentials. All existing node templates using this cloud credential will automatically use the updated information whenever [new nodes are added]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/). ## Deleting a Cloud Credential -In order to delete cloud credentials, there must not be any node template associated with it. If you are unable to delete the cloud credential, [delete any node templates]({{< baseurl >}}/rancher/v2.x/en/user-settings/node-templates/#deleting-a-node-template) that are still associated to that cloud credential. +In order to delete cloud credentials, there must not be any node template associated with it. If you are unable to delete the cloud credential, [delete any node templates]({{}}/rancher/v2.x/en/user-settings/node-templates/#deleting-a-node-template) that are still associated to that cloud credential. 1. From your user settings, select **User Avatar > Cloud Credentials**. 1. 
You can either individually delete a cloud credential or bulk delete. - - To individually delete one, choose the cloud credential you want to edit and click the **Vertical Ellipsis (...) > Delete**. + - To individually delete one, choose the cloud credential you want to edit and click the **⋮ > Delete**. - To bulk delete cloud credentials, select one or more cloud credentials from the list. Click **Delete**. 1. Confirm that you want to delete these cloud credentials. diff --git a/content/rancher/v2.x/en/user-settings/node-templates/_index.md b/content/rancher/v2.x/en/user-settings/node-templates/_index.md index 2ebd89b0bd7..0b6f411fc76 100644 --- a/content/rancher/v2.x/en/user-settings/node-templates/_index.md +++ b/content/rancher/v2.x/en/user-settings/node-templates/_index.md @@ -3,9 +3,9 @@ title: Managing Node Templates weight: 7010 --- -When you provision a cluster [hosted by an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools), [node templates]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. You can create node templates in two contexts: +When you provision a cluster [hosted by an infrastructure provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools), [node templates]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. You can create node templates in two contexts: -- While [provisioning a node pool cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools). +- While [provisioning a node pool cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools). - At any time, from your [user settings](#creating-a-node-template-from-user-settings). When you create a node template, it is bound to your user profile. Node templates cannot be shared among users. You can delete stale node templates that you no longer user from your user settings. @@ -16,14 +16,14 @@ When you create a node template, it is bound to your user profile. Node template 1. Click **Add Template**. 1. Select one of the cloud providers available. Then follow the instructions on screen to configure the template. -**Result:** The template is configured. You can use the template later when you [provision a node pool cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools). +**Result:** The template is configured. You can use the template later when you [provision a node pool cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools). ## Updating a Node Template 1. From your user settings, select **User Avatar > Node Templates**. -1. Choose the node template that you want to edit and click the **Vertical Ellipsis (...) > Edit**. +1. Choose the node template that you want to edit and click the **⋮ > Edit**. - > **Note:** As of v2.2.0, the default `active` [node drivers]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/node-drivers/) and any node driver, that has fields marked as `password`, are required to use [cloud credentials]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#cloud-credentials). 
If you have upgraded to v2.2.0, existing node templates will continue to work with the previous account access information, but when you edit the node template, you will be required to create a cloud credential and the node template will start using it. + > **Note:** As of v2.2.0, the default `active` [node drivers]({{}}/rancher/v2.x/en/admin-settings/drivers/node-drivers/) and any node driver, that has fields marked as `password`, are required to use [cloud credentials]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#cloud-credentials). If you have upgraded to v2.2.0, existing node templates will continue to work with the previous account access information, but when you edit the node template, you will be required to create a cloud credential and the node template will start using it. 1. Edit the required information and click **Save**. @@ -34,10 +34,10 @@ When you create a node template, it is bound to your user profile. Node template When creating new node templates from your user settings, you can clone an existing template and quickly update its settings rather than creating a new one from scratch. Cloning templates saves you the hassle of re-entering access keys for the cloud provider. 1. From your user settings, select **User Avatar > Node Templates**. -1. Find the template you want to clone. Then select **Ellipsis > Clone**. +1. Find the template you want to clone. Then select **⋮ > Clone**. 1. Complete the rest of the form. -**Result:** The template is cloned and configured. You can use the template later when you [provision a node pool cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools). +**Result:** The template is cloned and configured. You can use the template later when you [provision a node pool cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools). ## Deleting a Node Template diff --git a/content/rancher/v2.x/en/v1.6-migration/_index.md b/content/rancher/v2.x/en/v1.6-migration/_index.md index 8d065e00458..0766c009821 100644 --- a/content/rancher/v2.x/en/v1.6-migration/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/_index.md @@ -13,20 +13,20 @@ This video demonstrates a complete walk through of migration from Rancher v1.6 t ## Migration Plan ->**Want to more about Kubernetes before getting started?** Read our [Kubernetes Introduction]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/kub-intro). +>**Want to more about Kubernetes before getting started?** Read our [Kubernetes Introduction]({{}}/rancher/v2.x/en/v1.6-migration/kub-intro). -- [1. Get Started]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/get-started) +- [1. Get Started]({{}}/rancher/v2.x/en/v1.6-migration/get-started) >**Already a Kubernetes user in v1.6?** > > _Get Started_ is the only section you need to review for migration to v2.x. You can skip everything else. -- [2. Migrate Your Services]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/run-migration-tool/) -- [3. Expose Your Services]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/expose-services/) -- [4. Configure Health Checks]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/monitor-apps) -- [5. Schedule Your Services]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/) -- [6. Service Discovery]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/discover-services/) -- [7. Load Balancing]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/load-balancing/) +- [2. Migrate Your Services]({{}}/rancher/v2.x/en/v1.6-migration/run-migration-tool/) +- [3. 
Expose Your Services]({{}}/rancher/v2.x/en/v1.6-migration/expose-services/) +- [4. Configure Health Checks]({{}}/rancher/v2.x/en/v1.6-migration/monitor-apps) +- [5. Schedule Your Services]({{}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/) +- [6. Service Discovery]({{}}/rancher/v2.x/en/v1.6-migration/discover-services/) +- [7. Load Balancing]({{}}/rancher/v2.x/en/v1.6-migration/load-balancing/) ## Migration Example Files @@ -48,4 +48,4 @@ During migration, we'll export these services from Rancher v1.6. The export gen A file for Rancher-specific functionality such as health checks and load balancers. These files cannot be read by Rancher v2.x, so don't worry about their contents—we're discarding them and recreating them using the v2.x UI. -### [Next: Get Started]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/get-started) +### [Next: Get Started]({{}}/rancher/v2.x/en/v1.6-migration/get-started) diff --git a/content/rancher/v2.x/en/v1.6-migration/discover-services/_index.md b/content/rancher/v2.x/en/v1.6-migration/discover-services/_index.md index 90112383200..0df7741ae6b 100644 --- a/content/rancher/v2.x/en/v1.6-migration/discover-services/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/discover-services/_index.md @@ -9,7 +9,7 @@ This document will also show you how to link the workloads and services that you
Resolve the output.txt Link Directive
-![Resolve Link Directive]({{< baseurl >}}/img/rancher/resolve-links.png) +![Resolve Link Directive]({{}}/img/rancher/resolve-links.png) ## In This Document @@ -27,7 +27,7 @@ This document will also show you how to link the workloads and services that you For Rancher v2.x, we've replaced the Rancher DNS microservice used in v1.6 with native [Kubernetes DNS support](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/), which provides equivalent service discovery for Kubernetes workloads and pods. Former Cattle users can replicate all the service discovery features from Rancher v1.6 in v2.x. There's no loss of functionality. -Kubernetes schedules a DNS pod and service in the cluster, which is similar to the [Rancher v1.6 DNS microservice]({{< baseurl >}}/rancher/v1.6/en/cattle/internal-dns-service/#internal-dns-service-in-cattle-environments). Kubernetes then configures its kubelets to route all DNS lookups to this DNS service, which is skyDNS, a flavor of the default Kube-DNS implementation. +Kubernetes schedules a DNS pod and service in the cluster, which is similar to the [Rancher v1.6 DNS microservice]({{}}/rancher/v1.6/en/cattle/internal-dns-service/#internal-dns-service-in-cattle-environments). Kubernetes then configures its kubelets to route all DNS lookups to this DNS service, which is skyDNS, a flavor of the default Kube-DNS implementation. The following table displays each service discovery feature available in the two Rancher releases. @@ -60,11 +60,11 @@ Pods can also be resolved using the `hostname` and `subdomain` fields if set in When you migrate v1.6 services to v2.x, Rancher does not automatically create a Kubernetes service record for each migrated deployment. Instead, you'll have to link the deployment and service together manually, using any of the methods listed below. -In the image below, the `web-deployment.yml` and `web-service.yml` files [created after parsing]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/run-migration-tool/#migration-example-file-output) our [migration example services]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/#migration-example-files) are linked together. +In the image below, the `web-deployment.yml` and `web-service.yml` files [created after parsing]({{}}/rancher/v2.x/en/v1.6-migration/run-migration-tool/#migration-example-file-output) our [migration example services]({{}}/rancher/v2.x/en/v1.6-migration/#migration-example-files) are linked together.
Linked Workload and Kubernetes Service
-![Linked Workload and Kubernetes Service]({{< baseurl >}}/img/rancher/linked-service-workload.png) +![Linked Workload and Kubernetes Service]({{}}/img/rancher/linked-service-workload.png) ### Service Name Alias Creation @@ -76,7 +76,7 @@ Using the v2.x UI, use the context menu to navigate to the `Project` view. Then Click **Add Record** to create new DNS records. Then view the various options supported to link to external services or to create aliases for another workload, DNS record, or set of pods.
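Under the hood, both a linked workload and an alias record are ordinary Kubernetes Service objects. The following is a rough sketch only; the names (`web`, `db.example.com`) are placeholders and are not taken from the migration example files:

```
# Hypothetical sketch: a selector-based Service that links to the pods of a
# migrated "web" deployment, making it resolvable by name within the namespace.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # must match the labels on the deployment's pod template
  ports:
    - port: 80
      targetPort: 80
---
# Hypothetical sketch: an alias that points to an external endpoint,
# similar to linking to an external service.
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
```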
Add Service Discovery Record
-![Add Service Discovery Record]({{< baseurl >}}/img/rancher/add-record.png) +![Add Service Discovery Record]({{}}/img/rancher/add-record.png) The following table indicates which alias options are implemented natively by Kubernetes and which options are implemented by Rancher leveraging Kubernetes. @@ -89,4 +89,4 @@ Pointing to another workload | | ✓ Create alias for another DNS record | | ✓ -### [Next: Load Balancing]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/load-balancing/) +### [Next: Load Balancing]({{}}/rancher/v2.x/en/v1.6-migration/load-balancing/) diff --git a/content/rancher/v2.x/en/v1.6-migration/expose-services/_index.md b/content/rancher/v2.x/en/v1.6-migration/expose-services/_index.md index 35c81ae5e58..5e7207b1630 100644 --- a/content/rancher/v2.x/en/v1.6-migration/expose-services/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/expose-services/_index.md @@ -9,7 +9,7 @@ Use this document to correct workloads that list `ports` in `output.txt`. You ca
Resolve ports for the web Workload
-![Resolve Ports]({{< baseurl >}}/img/rancher/resolve-ports.png) +![Resolve Ports]({{}}/img/rancher/resolve-ports.png) ## In This Document @@ -38,7 +38,7 @@ A _HostPort_ is a port exposed to the public on a _specific node_ running one or In the following diagram, a user is trying to access an instance of Nginx, which is running within a pod on port 80. However, the Nginx deployment is assigned a HostPort of 9890. The user can connect to this pod by browsing to its host IP address, followed by the HostPort in use (9890 in case). -![HostPort Diagram]({{< baseurl >}}/img/rancher/hostPort.svg) +![HostPort Diagram]({{}}/img/rancher/hostPort.svg) #### HostPort Pros @@ -71,7 +71,7 @@ NodePorts help you circumvent an IP address shortcoming. Although pods can be re In the following diagram, a user is trying to connect to an instance of Nginx running in a Kubernetes cluster managed by Rancher. Although he knows what NodePort Nginx is operating on (30216 in this case), he does not know the IP address of the specific node that the pod is running on. However, with NodePort enabled, he can connect to the pod using the IP address for _any_ node in the cluster. Kubeproxy will forward the request to the correct node and pod. -![NodePort Diagram]({{< baseurl >}}/img/rancher/nodePort.svg) +![NodePort Diagram]({{}}/img/rancher/nodePort.svg) NodePorts are available within your Kubernetes cluster on an internal IP. If you want to expose pods external to the cluster, use NodePorts in conjunction with an external load balancer. Traffic requests from outside your cluster for `:` are directed to the workload. The `` can be the IP address of any node in your Kubernetes cluster. @@ -101,4 +101,4 @@ For example, for the `web-deployment.yml` file parsed from v1.6 that we've been {{< img "/img/rancher/set-nodeport.gif" "Set NodePort" >}} -### [Next: Configure Health Checks]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/monitor-apps) +### [Next: Configure Health Checks]({{}}/rancher/v2.x/en/v1.6-migration/monitor-apps) diff --git a/content/rancher/v2.x/en/v1.6-migration/get-started/_index.md b/content/rancher/v2.x/en/v1.6-migration/get-started/_index.md index 4d4f2d9ad40..453f833724e 100644 --- a/content/rancher/v2.x/en/v1.6-migration/get-started/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/get-started/_index.md @@ -22,7 +22,7 @@ The first step in migrating from v1.6 to v2.x is to install the Rancher v2.x Ser New for v2.x, all communication to Rancher Server is encrypted. The procedures below instruct you not only on installation of Rancher, but also creation and installation of these certificates. -Before installing v2.x, provision one host or more to function as your Rancher Server(s). You can find the requirements for these hosts in [Server Requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/). +Before installing v2.x, provision one host or more to function as your Rancher Server(s). You can find the requirements for these hosts in [Server Requirements]({{}}/rancher/v2.x/en/installation/requirements/). After provisioning your node(s), install Rancher: @@ -34,19 +34,19 @@ After provisioning your node(s), install Rancher: For production environments where your user base requires constant access to your cluster, we recommend installing Rancher in a high availability Kubernetes installation. This installation procedure provisions a three-node cluster and installs Rancher on each node using a Helm chart. 
- >**Important Difference:** Although you could install Rancher v1.6 in a high-availability Kubernetes configuration using an external database and a Docker command on each node, Rancher v2.x in a Kubernetes install requires an existing Kubernetes cluster. Review [Kubernetes Install]({{< baseurl >}}/rancher/v2.x/en/installation/k8s-install/) for full requirements. + >**Important Difference:** Although you could install Rancher v1.6 in a high-availability Kubernetes configuration using an external database and a Docker command on each node, Rancher v2.x in a Kubernetes install requires an existing Kubernetes cluster. Review [Kubernetes Install]({{}}/rancher/v2.x/en/installation/k8s-install/) for full requirements. ## B. Configure Authentication -After your Rancher v2.x Server is installed, we recommend configuring external authentication (like Active Directory or GitHub) so that users can log into Rancher using their single sign-on. For a full list of supported authentication providers and instructions on how to configure them, see [Authentication]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication). +After your Rancher v2.x Server is installed, we recommend configuring external authentication (like Active Directory or GitHub) so that users can log into Rancher using their single sign-on. For a full list of supported authentication providers and instructions on how to configure them, see [Authentication]({{}}/rancher/v2.x/en/admin-settings/authentication).
Rancher v2.x Authentication
-![Rancher v2.x Authentication]({{< baseurl >}}/img/rancher/auth-providers.svg) +![Rancher v2.x Authentication]({{}}/img/rancher/auth-providers.svg) ### Local Users -Although we recommend using an external authentication provider, Rancher v1.6 and v2.x both offer support for users local to Rancher. However, these users cannot be migrated from Rancher v1.6 to v2.x. If you used local users in Rancher v1.6 and want to continue this practice in v2.x, you'll need to [manually recreate these user accounts]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/) and assign them access rights. +Although we recommend using an external authentication provider, Rancher v1.6 and v2.x both offer support for users local to Rancher. However, these users cannot be migrated from Rancher v1.6 to v2.x. If you used local users in Rancher v1.6 and want to continue this practice in v2.x, you'll need to [manually recreate these user accounts]({{}}/rancher/v2.x/en/admin-settings/authentication/) and assign them access rights. As a best practice, you should use a hybrid of external _and_ local authentication. This practice provides access to Rancher should your external authentication experience an interruption, as you can still log in using a local user account. Set up a few local accounts as administrative users of Rancher. @@ -61,7 +61,7 @@ Begin work in Rancher v2.x by using it to provision a new Kubernetes cluster, wh A cluster and project in combined together in Rancher v2.x is equivalent to a v1.6 environment. A _cluster_ is the compute boundary (i.e., your hosts) and a _project_ is an administrative boundary (i.e., a grouping of namespaces used to assign access rights to users). -There's more basic info on provisioning clusters in the headings below, but for full information, see [Provisioning Kubernetes Clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/). +There's more basic info on provisioning clusters in the headings below, but for full information, see [Provisioning Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-provisioning/). ### Clusters @@ -69,32 +69,32 @@ In Rancher v1.6, compute nodes were added to an _environment_. Rancher v2.x esch Rancher v2.x lets you launch a Kubernetes cluster anywhere. Host your cluster using: -- A [hosted Kubernetes provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/). -- A [pool of nodes from an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/). Rancher launches Kubernetes on the nodes. -- Any [custom node(s)]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/). Rancher can launch Kubernetes on the nodes, be they bare metal servers, virtual machines, or cloud hosts on a less popular infrastructure provider. +- A [hosted Kubernetes provider]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/). +- A [pool of nodes from an infrastructure provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/). Rancher launches Kubernetes on the nodes. +- Any [custom node(s)]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/). Rancher can launch Kubernetes on the nodes, be they bare metal servers, virtual machines, or cloud hosts on a less popular infrastructure provider. 
### Projects -Additionally, Rancher v2.x introduces [projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/), which are objects that divide clusters into different application groups that are useful for applying user permissions. This model of clusters and projects allow for multi-tenancy because hosts are owned by the cluster, and the cluster can be further divided into multiple projects where users can manage their apps, but not those of others. +Additionally, Rancher v2.x introduces [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/), which are objects that divide clusters into different application groups that are useful for applying user permissions. This model of clusters and projects allow for multi-tenancy because hosts are owned by the cluster, and the cluster can be further divided into multiple projects where users can manage their apps, but not those of others. When you create a cluster, two projects are automatically created: - The `System` project, which includes system namespaces where important Kubernetes resources are running (like ingress controllers and cluster dns services) - The `Default` project. -However, for production environments, we recommend [creating your own project]({{< baseurl >}}/rancher/v2.x/en/project-admin/namespaces/#creating-projects) and giving it a descriptive name. +However, for production environments, we recommend [creating your own project]({{}}/rancher/v2.x/en/project-admin/namespaces/#creating-projects) and giving it a descriptive name. -After provisioning a new cluster and project, you can authorize your users to access and use project resources. Similarly to Rancher v1.6 environments, Rancher v2.x allows you to [assign users to projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/editing-projects/). By assigning users to projects, you can limit what applications and resources a user can access. +After provisioning a new cluster and project, you can authorize your users to access and use project resources. Similarly to Rancher v1.6 environments, Rancher v2.x allows you to [assign users to projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/editing-projects/). By assigning users to projects, you can limit what applications and resources a user can access. ## D. Create Stacks -In Rancher v1.6, _stacks_ were used to group together the services that belong to your application. In v2.x, you need to [create namespaces]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#creating-namespaces), which are the v2.x equivalent of stacks, for the same purpose. +In Rancher v1.6, _stacks_ were used to group together the services that belong to your application. In v2.x, you need to [create namespaces]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#creating-namespaces), which are the v2.x equivalent of stacks, for the same purpose. In Rancher v2.x, namespaces are child objects to projects. When you create a project, a `default` namespace is added to the project, but you can create your own to parallel your stacks from v1.6. During migration, if you don't explicitly define which namespace a service should be deployed to, it's deployed to the `default` namespace. -Just like v1.6, Rancher v2.x supports service discovery within and across namespaces (we'll get to [service discovery]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/discover-services) soon). 
+Just like v1.6, Rancher v2.x supports service discovery within and across namespaces (we'll get to [service discovery]({{}}/rancher/v2.x/en/v1.6-migration/discover-services) soon). -### [Next: Migrate Your Services]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/run-migration-tool) +### [Next: Migrate Your Services]({{}}/rancher/v2.x/en/v1.6-migration/run-migration-tool) diff --git a/content/rancher/v2.x/en/v1.6-migration/kub-intro/_index.md b/content/rancher/v2.x/en/v1.6-migration/kub-intro/_index.md index e3b188466f7..a29115d4d13 100644 --- a/content/rancher/v2.x/en/v1.6-migration/kub-intro/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/kub-intro/_index.md @@ -36,4 +36,4 @@ Because Rancher v1.6 defaulted to our Cattle container orchestrator, it primaril More detailed information on Kubernetes concepts can be found in the [Kubernetes Concepts Documentation](https://kubernetes.io/docs/concepts/). -### [Next: Get Started]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/get-started/) +### [Next: Get Started]({{}}/rancher/v2.x/en/v1.6-migration/get-started/) diff --git a/content/rancher/v2.x/en/v1.6-migration/load-balancing/_index.md b/content/rancher/v2.x/en/v1.6-migration/load-balancing/_index.md index 6885d6794a1..183eef1bee3 100644 --- a/content/rancher/v2.x/en/v1.6-migration/load-balancing/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/load-balancing/_index.md @@ -5,13 +5,13 @@ weight: 700 If your applications are public-facing and consume significant traffic, you should place a load balancer in front of your cluster so that users can always access their apps without service interruption. Typically, you can fulfill a high volume of service requests by [horizontally scaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) your deployment, which spins up additional application containers as traffic ramps up. However, this technique requires routing that distributes traffic across your nodes efficiently. In cases where you need to accommodate public traffic that scales up and down, you'll need a load balancer. -As outlined in [its documentation]({{< baseurl >}}/rancher/v1.6/en/cattle/adding-load-balancers/), Rancher v1.6 provided rich support for load balancing using its own microservice powered by HAProxy, which supports HTTP, HTTPS, TCP hostname, and path-based routing. Most of these same features are available in v2.x. However, load balancers that you used with v1.6 cannot be migrated to v2.x. You'll have to manually recreate your v1.6 load balancer in v2.x. +As outlined in [its documentation]({{}}/rancher/v1.6/en/cattle/adding-load-balancers/), Rancher v1.6 provided rich support for load balancing using its own microservice powered by HAProxy, which supports HTTP, HTTPS, TCP hostname, and path-based routing. Most of these same features are available in v2.x. However, load balancers that you used with v1.6 cannot be migrated to v2.x. You'll have to manually recreate your v1.6 load balancer in v2.x. If you encounter the `output.txt` text below after parsing your v1.6 Compose files to Kubernetes manifests, you'll have to resolve it by manually creating a load balancer in v2.x.
output.txt Load Balancer Directive
-![Resolve Load Balancer Directive]({{< baseurl >}}/img/rancher/resolve-load-balancer.png) +![Resolve Load Balancer Directive]({{}}/img/rancher/resolve-load-balancer.png) ## In This Document @@ -35,7 +35,7 @@ By default, Rancher v2.x replaces the v1.6 load balancer microservice with the n ## Load Balancer Deployment -In Rancher v1.6, you could add port/service rules for configuring your HAProxy to load balance for target services. You could also configure the hostname/path-based routing rules. +In Rancher v1.6, you could add port/service rules for configuring your HAProxy to load balance for target services. You could also configure the hostname/path-based routing rules. Rancher v2.x offers similar functionality, but load balancing is instead handled by Ingress. An Ingress is a specification of rules that a controller component applies to your load balancer. The actual load balancer can run outside of your cluster or within it. @@ -43,7 +43,7 @@ By default, Rancher v2.x deploys NGINX Ingress Controller on clusters provisione RKE deploys NGINX Ingress Controller as a [Kubernetes DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), meaning that an NGINX instance is deployed on every node in the cluster. NGINX acts like an Ingress Controller listening to Ingress creation within your entire cluster, and it also configures itself as the load balancer to satisfy the Ingress rules. The DaemonSet is configured with hostNetwork to expose two ports: 80 and 443. -For more information NGINX Ingress Controller, their deployment as DaemonSets, deployment configuration options, see the [RKE documentation]({{< baseurl >}}/rke/latest/en/config-options/add-ons/ingress-controllers/). +For more information on the NGINX Ingress Controller, its deployment as a DaemonSet, and deployment configuration options, see the [RKE documentation]({{}}/rke/latest/en/config-options/add-ons/ingress-controllers/). ## Load Balancing Architecture @@ -55,13 +55,13 @@ In Rancher v1.6 you could deploy a scalable load balancer service within your st
Rancher v1.6 Load Balancing Architecture
-![Rancher v1.6 Load Balancing]({{< baseurl >}}/img/rancher/cattle-load-balancer.svg) +![Rancher v1.6 Load Balancing]({{}}/img/rancher/cattle-load-balancer.svg) The Rancher v2.x Ingress Controller is a DaemonSet; it is globally deployed on all schedulable nodes to serve your entire Kubernetes cluster. Therefore, when you program the Ingress rules, you must use a unique hostname and path to point to your workloads, as the load balancer node IP addresses and ports 80 and 443 are common access points for all workloads.
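For reference, such a rule is expressed as a standard Kubernetes Ingress. This is a minimal sketch only; the hostname, path, and service name are placeholders rather than values from the migration example:

```
# Hypothetical sketch: one Ingress rule with a unique hostname and path
# routing to a workload's Service on port 80.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /web
            backend:
              serviceName: web
              servicePort: 80
```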
Rancher v2.x Load Balancing Architecture
-![Rancher v2.x Load Balancing]({{< baseurl >}}/img/rancher/kubernetes-load-balancer.svg) +![Rancher v2.x Load Balancing]({{}}/img/rancher/kubernetes-load-balancer.svg) ## Ingress Caveats @@ -79,13 +79,13 @@ You can launch a new load balancer to replace your load balancer from v1.6. Usin >**Prerequisite:** Before deploying Ingress, you must have a workload deployed that's running a scale of two or more pods. > -![Workload Scale]({{< baseurl >}}/img/rancher/workload-scale.png) +![Workload Scale]({{}}/img/rancher/workload-scale.png) For balancing between these two pods, you must create a Kubernetes Ingress rule. To create this rule, navigate to your cluster and project, and click **Resources > Workloads > Load Balancing.** (In versions prior to v2.3.0, click **Workloads > Load Balancing.**) Then click **Add Ingress**. This GIF below depicts how to add Ingress to one of your projects.
Browsing to Load Balancer Tab and Adding Ingress
-![Adding Ingress]({{< baseurl >}}/img/rancher/add-ingress.gif) +![Adding Ingress]({{}}/img/rancher/add-ingress.gif) Similar to the service/port rules in Rancher v1.6, here you can specify rules targeting your workload's container port. The sections below demonstrate how to create Ingress rules. @@ -97,13 +97,13 @@ For example, let's say you have multiple workloads deployed to a single namespac
Ingress: Path-Based Routing Configuration
-![Ingress: Path-Based Routing Configuration]({{< baseurl >}}/img/rancher/add-ingress-form.png) +![Ingress: Path-Based Routing Configuration]({{}}/img/rancher/add-ingress-form.png) Rancher v2.x also places a convenient link to the workloads on the Ingress record. If you configure an external DNS to program the DNS records, this hostname can be mapped to the Kubernetes Ingress address.
Workload Links
-![Load Balancer Links to Workloads]({{< baseurl >}}/img/rancher/load-balancer-links.png) +![Load Balancer Links to Workloads]({{}}/img/rancher/load-balancer-links.png) The Ingress address is the IP address in your cluster that the Ingress Controller allocates for your workload. You can reach your workload by browsing to this IP address. Use `kubectl` command below to see the Ingress address assigned by the controller: @@ -115,24 +115,24 @@ kubectl get ingress Rancher v2.x Ingress functionality supports the HTTPS protocol, but if you want to use it, you need to use a valid SSL/TLS certificate. While configuring Ingress rules, use the **SSL/TLS Certificates** section to configure a certificate. -- We recommend [uploading a certificate]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/certificates/) from a known certificate authority (you'll have to do this before configuring Ingress). Then, while configuring your load balancer, use the **Choose a certificate** option and select the uploaded certificate that you want to use. -- If you have configured [NGINX default certificate]({{< baseurl >}}/rke/latest/en/config-options/add-ons/ingress-controllers/#configuring-an-nginx-default-certificate), you can select **Use default ingress controller certificate**. +- We recommend [uploading a certificate]({{}}/rancher/v2.x/en/k8s-in-rancher/certificates/) from a known certificate authority (you'll have to do this before configuring Ingress). Then, while configuring your load balancer, use the **Choose a certificate** option and select the uploaded certificate that you want to use. +- If you have configured [NGINX default certificate]({{}}/rke/latest/en/config-options/add-ons/ingress-controllers/#configuring-an-nginx-default-certificate), you can select **Use default ingress controller certificate**.
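Whichever option you choose, the certificate corresponds to the standard `tls` section of the underlying Kubernetes Ingress. A hedged sketch, assuming the uploaded certificate is stored in a secret named `myapp-tls` (a placeholder name):

```
# Hypothetical sketch: Ingress TLS referencing an uploaded certificate secret.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls       # placeholder; secret created from the uploaded certificate
  rules:
    - host: myapp.example.com
      http:
        paths:
          - backend:
              serviceName: myapp
              servicePort: 80
```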
Load Balancer Configuration: SSL/TLS Certificate Section
-![SSL/TLS Certificates Section]({{< baseurl >}}/img/rancher/load-balancer-ssl-certs.png) +![SSL/TLS Certificates Section]({{}}/img/rancher/load-balancer-ssl-certs.png) ### TCP Load Balancing Options #### Layer-4 Load Balancer -For the TCP protocol, Rancher v2.x supports configuring a Layer 4 load balancer using the cloud provider in which your Kubernetes cluster is deployed. Once this load balancer appliance is configured for your cluster, when you choose the option of a `Layer-4 Load Balancer` for port-mapping during workload deployment, Rancher automatically creates a corresponding load balancer service. This service will call the corresponding cloud provider and configure the load balancer appliance to route requests to the appropriate pods. See [Cloud Providers]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) for information on how to configure LoadBalancer services for your cloud provider. +For the TCP protocol, Rancher v2.x supports configuring a Layer 4 load balancer using the cloud provider in which your Kubernetes cluster is deployed. Once this load balancer appliance is configured for your cluster, when you choose the option of a `Layer-4 Load Balancer` for port-mapping during workload deployment, Rancher automatically creates a corresponding load balancer service. This service will call the corresponding cloud provider and configure the load balancer appliance to route requests to the appropriate pods. See [Cloud Providers]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) for information on how to configure LoadBalancer services for your cloud provider. For example, if we create a deployment named `myapp` and specify a Layer 4 load balancer in the **Port Mapping** section, Rancher will automatically add an entry to the **Load Balancer** tab named `myapp-loadbalancer`.
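A plain Kubernetes equivalent of such an entry is a Service of type `LoadBalancer`; the exact object Rancher generates may differ, and the selector and ports below are placeholders:

```
# Hypothetical sketch: a LoadBalancer Service in front of the "myapp" deployment.
# The cloud provider provisions the external load balancer appliance for it.
apiVersion: v1
kind: Service
metadata:
  name: myapp-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: myapp          # placeholder; must match the labels on the myapp pods
  ports:
    - port: 80
      targetPort: 80
```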
Workload Deployment: Layer 4 Load Balancer Creation
-![Deploy Layer-4 Load Balancer]({{< baseurl >}}/img/rancher/deploy-workload-load-balancer.png) +![Deploy Layer-4 Load Balancer]({{}}/img/rancher/deploy-workload-load-balancer.png) Once configuration of the load balancer succeeds, the Rancher UI provides a link to your workload's public endpoint. @@ -144,13 +144,13 @@ However, there is a workaround to use NGINX's TCP balancing by creating a Kubern To configure NGINX to expose your services via TCP, you can add the ConfigMap `tcp-services` that should exist in the `ingress-nginx` namespace. This namespace also contains the NGINX Ingress Controller pods. -![Layer-4 Load Balancer: ConfigMap Workaround]({{< baseurl >}}/img/rancher/layer-4-lb-config-map.png) +![Layer-4 Load Balancer: ConfigMap Workaround]({{}}/img/rancher/layer-4-lb-config-map.png) The key in the ConfigMap entry should be the TCP port that you want to expose for public access: `:`. As shown above, two workloads are listed in the `Default` namespace. For example, the first entry in the ConfigMap above instructs NGINX to expose the `myapp` workload (the one in the `default` namespace that's listening on private port 80) over external port `6790`. Adding these entries to the ConfigMap automatically updates the NGINX pods to configure these workloads for TCP balancing. The workloads exposed should be available at `:`. If they are not accessible, you might have to expose the TCP port explicitly using a NodePort service. ## Rancher v2.x Load Balancing Limitations -Cattle provided feature-rich load balancer support that is [well documented]({{< baseurl >}}/rancher/v1.6/en/cattle/adding-load-balancers/#load-balancers). Some of these features do not have equivalents in Rancher v2.x. This is the list of such features: +Cattle provided feature-rich load balancer support that is [well documented]({{}}/rancher/v1.6/en/cattle/adding-load-balancers/#load-balancers). Some of these features do not have equivalents in Rancher v2.x. This is the list of such features: - No support for SNI in current NGINX Ingress Controller. - TCP load balancing requires a load balancer appliance enabled by cloud provider within the cluster. There is no Ingress support for TCP on Kubernetes. diff --git a/content/rancher/v2.x/en/v1.6-migration/monitor-apps/_index.md b/content/rancher/v2.x/en/v1.6-migration/monitor-apps/_index.md index c9ea17668c4..b1a2f1cc110 100644 --- a/content/rancher/v2.x/en/v1.6-migration/monitor-apps/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/monitor-apps/_index.md @@ -13,7 +13,7 @@ For example, for the image below, we would configure liveness probes for the `we
Resolve health_check for the web and webLB Workloads
-![Resolve health_check]({{< baseurl >}}/img/rancher/resolve-health-checks.png) +![Resolve health_check]({{}}/img/rancher/resolve-health-checks.png) ## In This Document @@ -33,7 +33,7 @@ The health check microservice features two types of health checks, which have a - **TCP health checks**: - These health checks check if a TCP connection opens at the specified port for the monitored service. For full details, see the [Rancher v1.6 documentation]({{< baseurl >}}/rancher/v1.6/en/cattle/health-checks/). + These health checks check if a TCP connection opens at the specified port for the monitored service. For full details, see the [Rancher v1.6 documentation]({{}}/rancher/v1.6/en/cattle/health-checks/). - **HTTP health checks**: @@ -73,7 +73,7 @@ The following diagram displays kubelets running probes on containers they are mo ## Configuring Probes in Rancher v2.x -The [migration-tool CLI]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/run-migration-tool/) cannot parse health checks from Compose files to Kubernetes manifest. Therefore, if want you to add health checks to your Rancher v2.x workloads, you'll have to add them manually. +The [migration-tool CLI]({{}}/rancher/v2.x/en/v1.6-migration/run-migration-tool/) cannot parse health checks from Compose files to Kubernetes manifests. Therefore, if you want to add health checks to your Rancher v2.x workloads, you'll have to add them manually. Using the Rancher v2.x UI, you can add TCP or HTTP health checks to Kubernetes workloads. By default, Rancher asks you to configure a readiness check for your workloads and applies a liveness check using the same configuration. Optionally, you can define a separate liveness check. @@ -83,7 +83,7 @@ Configure probes by using the **Health Check** section while editing deployments
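In Kubernetes terms, the readiness and liveness checks configured in this section become `readinessProbe` and `livenessProbe` fields on the container. The following is a minimal sketch with placeholder image, port, and timing values, not the exact spec Rancher generates:

```
# Hypothetical sketch: a TCP readiness probe and an HTTP liveness probe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          readinessProbe:           # marks the pod Unready on failure
            tcpSocket:
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 2
          livenessProbe:            # restarts the container on failure
            httpGet:
              path: /healthz
              port: 80
            timeoutSeconds: 2
            failureThreshold: 3
```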
Edit Deployment: Health Check Section
-![Health Check Section]({{< baseurl >}}/img/rancher/health-check-section.png) +![Health Check Section]({{}}/img/rancher/health-check-section.png) ### Configuring Checks @@ -95,7 +95,7 @@ While you create a workload using Rancher v2.x, we recommend configuring a check TCP checks monitor your deployment's health by attempting to open a connection to the pod over a specified port. If the probe can open the port, it's considered healthy. Failure to open it is considered unhealthy, which notifies Kubernetes that it should kill the pod and then replace it according to its [restart policy](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy). (this applies to Liveness probes, for Readiness probes, it will mark the pod as Unready). -You can configure the probe along with values for specifying its behavior by selecting the **TCP connection opens successfully** option in the **Health Check** section. For more information, see [Deploying Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). For help setting probe timeout and threshold values, see [Health Check Parameter Mappings](#health-check-parameter-mappings). +You can configure the probe along with values for specifying its behavior by selecting the **TCP connection opens successfully** option in the **Health Check** section. For more information, see [Deploying Workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). For help setting probe timeout and threshold values, see [Health Check Parameter Mappings](#health-check-parameter-mappings). ![TCP Check]({{}}/img/rancher/readiness-check-tcp.png) @@ -133,7 +133,7 @@ When you configure a readiness check using Rancher v2.x, the `readinessProbe` di HTTP checks monitor your deployment's health by sending an HTTP GET request to a specific URL path that you define. If the pod responds with a message range of `200`-`400`, the health check is considered successful. If the pod replies with any other value, the check is considered unsuccessful, so Kubernetes kills and replaces the pod according to its [restart policy](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy). (this applies to Liveness probes, for Readiness probes, it will mark the pod as Unready). -You can configure the probe along with values for specifying its behavior by selecting the **HTTP returns successful status** or **HTTPS returns successful status**. For more information, see [Deploying Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). For help setting probe timeout and threshold values, see [Health Check Parameter Mappings](#healthcheck-parameter-mappings). +You can configure the probe along with values for specifying its behavior by selecting the **HTTP returns successful status** or **HTTPS returns successful status**. For more information, see [Deploying Workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). For help setting probe timeout and threshold values, see [Health Check Parameter Mappings](#healthcheck-parameter-mappings). ![HTTP Check]({{}}/img/rancher/readiness-check-http.png) @@ -153,7 +153,7 @@ While configuring a readiness check for either the TCP or HTTP protocol, you can Rancher v2.x, like v1.6, lets you perform health checks using the TCP and HTTP protocols. However, Rancher v2.x also lets you check the health of a pod by running a command inside of it. 
If the container exits with a code of `0` after running the command, the pod is considered healthy. -You can configure a liveness or readiness check that executes a command that you specify by selecting the `Command run inside the container exits with status 0` option from **Health Checks** while [deploying a workload]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). +You can configure a liveness or readiness check that executes a command that you specify by selecting the `Command run inside the container exits with status 0` option from **Health Checks** while [deploying a workload]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). ![Healthcheck Execute Command]({{}}/img/rancher/healthcheck-cmd-exec.png) @@ -171,4 +171,4 @@ Rancher v1.6 Compose Parameter | Rancher v2.x Kubernetes Parameter `initializing_timeout` | `initialDelaySeconds` `strategy` | `restartPolicy` -### [Next: Schedule Your Services]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/) +### [Next: Schedule Your Services]({{}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/) diff --git a/content/rancher/v2.x/en/v1.6-migration/run-migration-tool/_index.md b/content/rancher/v2.x/en/v1.6-migration/run-migration-tool/_index.md index f1d02645957..ebdebd5b9bd 100644 --- a/content/rancher/v2.x/en/v1.6-migration/run-migration-tool/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/run-migration-tool/_index.md @@ -50,7 +50,7 @@ After you download migration-tools CLI, rename it and make it executable. Next, use the migration-tools CLI to export all stacks in all of the Cattle environments into Compose files. Then, for stacks that you want to migrate to Rancher v2.x, convert the Compose files into Kubernetes manifest. ->**Prerequisite:** Create an [Account API Key]({{< baseurl >}}/rancher/v1.6/en/api/v2-beta/api-keys/#account-api-keys) to authenticate with Rancher v1.6 when using the migration-tools CLI. +>**Prerequisite:** Create an [Account API Key]({{}}/rancher/v1.6/en/api/v2-beta/api-keys/#account-api-keys) to authenticate with Rancher v1.6 when using the migration-tools CLI. 1. Export the Docker Compose files for your Cattle environments and stacks from Rancher v1.6. @@ -62,7 +62,7 @@ Next, use the migration-tools CLI to export all stacks in all of the Cattle envi **Step Result:** migration-tools exports Compose files (`docker-compose.yml` and `rancher-compose.yml`) for each stack in the `--export-dir` directory. If you omitted this option, Compose files are output to your current directory. - A unique directory is created for each environment and stack. For example, if we export each [environment/stack]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/#migration-example-files) from Rancher v1.6, the following directory structure is created: + A unique directory is created for each environment and stack. For example, if we export each [environment/stack]({{}}/rancher/v2.x/en/v1.6-migration/#migration-example-files) from Rancher v1.6, the following directory structure is created: ``` export/ # migration-tools --export-dir @@ -85,7 +85,7 @@ Next, use the migration-tools CLI to export all stacks in all of the Cattle envi >**Note:** If you omit the `--docker-file` and `--rancher-file` options from your command, migration-tools uses the current working directory to find Compose files. 
->**Want full usage and options for the migration-tools CLI?** See the [Migration Tools CLI Reference]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/run-migration-tool/migration-tools-ref/). +>**Want full usage and options for the migration-tools CLI?** See the [Migration Tools CLI Reference]({{}}/rancher/v2.x/en/v1.6-migration/run-migration-tool/migration-tools-ref/). ### migration-tools CLI Output @@ -104,7 +104,7 @@ When a you export a service from Rancher v1.6 that exposes public ports, migrati #### Migration Example File Output -If we parse the two example files from [Migration Example Files]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/#migration-example-files), `docker-compose.yml` and `rancher-compose.yml`, the following files are output: +If we parse the two example files from [Migration Example Files]({{}}/rancher/v2.x/en/v1.6-migration/#migration-example-files), `docker-compose.yml` and `rancher-compose.yml`, the following files are output: File | Description -----|------------ @@ -244,13 +244,13 @@ You can deploy the Kubernetes manifests created by migration-tools by importing
Deploy Services: Import Kubernetes Manifest
-![Deploy Services]({{< baseurl >}}/img/rancher/deploy-service.gif) +![Deploy Services]({{}}/img/rancher/deploy-service.gif) {{% /tab %}} {{% tab "Rancher CLI" %}} ->**Prerequisite:** [Install Rancher CLI]({{< baseurl >}}/rancher/v2.x/en/cli/) for Rancher v2.x. +>**Prerequisite:** [Install Rancher CLI]({{}}/rancher/v2.x/en/cli/) for Rancher v2.x. Use the following Rancher CLI commands to deploy your application using Rancher v2.x. For each Kubernetes manifest output by migration-tools CLI, enter one of the commands below to import it into Rancher v2.x. @@ -267,7 +267,7 @@ Following importation, you can view your v1.6 services in the v2.x UI as Kuberne
Imported Services
-![Imported Services]({{< baseurl >}}/img/rancher/imported-workloads.png) +![Imported Services]({{}}/img/rancher/imported-workloads.png) ## What Now? @@ -275,15 +275,15 @@ Although the migration-tool CLI parses your Rancher v1.6 Compose files to Kubern
Edit Migrated Services
-![Edit Migrated Workload]({{< baseurl >}}/img/rancher/edit-migration-workload.gif) +![Edit Migrated Workload]({{}}/img/rancher/edit-migration-workload.gif) As mentioned in [Migration Tools CLI Output](#migration-tools-cli-output), the `output.txt` files generated during parsing list the manual steps you must take for each deployment. Review the upcoming topics for more information on manually editing your Kubernetes specs. -Open your `output.txt` file and take a look at its contents. When you parsed your Compose files into Kubernetes manifests, migration-tools CLI output a manifest for each workload that it creates for Kubernetes. For example, our when our [Migration Example Files]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/#migration-example-files) are parsed into Kubernetes manifests, `output.txt` lists each resultant parsed [Kubernetes manifest file](#migration-example-file-output) (i.e., workloads). Each workload features a list of action items to restore operations for the workload in v2.x. +Open your `output.txt` file and take a look at its contents. When you parsed your Compose files into Kubernetes manifests, migration-tools CLI output a manifest for each workload that it creates for Kubernetes. For example, when our [Migration Example Files]({{}}/rancher/v2.x/en/v1.6-migration/#migration-example-files) are parsed into Kubernetes manifests, `output.txt` lists each resultant parsed [Kubernetes manifest file](#migration-example-file-output) (i.e., workloads). Each workload features a list of action items to restore operations for the workload in v2.x.
Output.txt Example
-![output.txt]({{< baseurl >}}/img/rancher/output-dot-text.png) +![output.txt]({{}}/img/rancher/output-dot-text.png) The following table lists possible directives that may appear in `output.txt`, what they mean, and links on how to resolve them. @@ -296,16 +296,16 @@ Directive | Instructions [scale][5] | In v1.6, scale refers to the number of container replicas running on a single node. In v2.x, this feature is replaced by replica sets. start_on_create | No Kubernetes equivalent. No action is required from you. -[1]:{{< baseurl >}}/rancher/v2.x/en/v1.6-migration/monitor-apps/#configuring-probes-in-rancher-v2-x -[2]:{{< baseurl >}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/#scheduling-using-labels -[3]:{{< baseurl >}}/rancher/v2.x/en/v1.6-migration/discover-services -[4]:{{< baseurl >}}/rancher/v2.x/en/v1.6-migration/expose-services -[5]:{{< baseurl >}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/#scheduling-pods-to-a-specific-node +[1]:{{}}/rancher/v2.x/en/v1.6-migration/monitor-apps/#configuring-probes-in-rancher-v2-x +[2]:{{}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/#scheduling-using-labels +[3]:{{}}/rancher/v2.x/en/v1.6-migration/discover-services +[4]:{{}}/rancher/v2.x/en/v1.6-migration/expose-services +[5]:{{}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/#scheduling-pods-to-a-specific-node -[7]:{{< baseurl >}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/#scheduling-using-labels -[8]:{{< baseurl >}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/#scheduling-global-services -[9]:{{< baseurl >}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/#label-affinity-antiaffinity +[7]:{{}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/#scheduling-using-labels +[8]:{{}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/#scheduling-global-services +[9]:{{}}/rancher/v2.x/en/v1.6-migration/schedule-workloads/#label-affinity-antiaffinity -### [Next: Expose Your Services]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/expose-services/) +### [Next: Expose Your Services]({{}}/rancher/v2.x/en/v1.6-migration/expose-services/) diff --git a/content/rancher/v2.x/en/v1.6-migration/schedule-workloads/_index.md b/content/rancher/v2.x/en/v1.6-migration/schedule-workloads/_index.md index 5d070f1638f..e78fa280b0c 100644 --- a/content/rancher/v2.x/en/v1.6-migration/schedule-workloads/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/schedule-workloads/_index.md @@ -13,7 +13,7 @@ You can schedule your migrated v1.6 services while editing a deployment. Schedul
Editing Workloads: Workload Type and Node Scheduling Sections
-![Workload Type and Node Scheduling Sections]({{< baseurl >}}/img/rancher/migrate-schedule-workloads.png) +![Workload Type and Node Scheduling Sections]({{}}/img/rancher/migrate-schedule-workloads.png) ## In This Document @@ -39,7 +39,7 @@ Rancher v2.x retains _all_ methods available in v1.6 for scheduling your service In v1.6, you would schedule a service to a host while adding a service to a Stack. In Rancher v2.x., the equivalent action is to schedule a workload for deployment. The following composite image shows a comparison of the UI used for scheduling in Rancher v2.x versus v1.6. -![Node Scheduling: Rancher v2.x vs v1.6]({{< baseurl >}}/img/rancher/node-scheduling.png) +![Node Scheduling: Rancher v2.x vs v1.6]({{}}/img/rancher/node-scheduling.png) ## Node Scheduling Options @@ -47,7 +47,7 @@ Rancher offers a variety of options when scheduling nodes to host workload pods You can choose a scheduling option as you deploy a workload. The term _workload_ is synonymous with adding a service to a Stack in Rancher v1.6). You can deploy a workload by using the context menu to browse to a cluster project (` > > Workloads`). -The sections that follow provide information on using each scheduling options, as well as any notable changes from Rancher v1.6. For full instructions on deploying a workload in Rancher v2.x beyond just scheduling options, see [Deploying Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). +The sections that follow provide information on using each scheduling options, as well as any notable changes from Rancher v1.6. For full instructions on deploying a workload in Rancher v2.x beyond just scheduling options, see [Deploying Workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). Option | v1.6 Feature | v2.x Feature -------|------|------ @@ -64,13 +64,13 @@ Option | v1.6 Feature | v2.x Feature In v1.6, you could control the number of container replicas deployed for a service. You can schedule pods the same way in v2.x, but you'll have to set the scale manually while editing a workload. -![Resolve Scale]({{< baseurl >}}/img/rancher/resolve-scale.png) +![Resolve Scale]({{}}/img/rancher/resolve-scale.png) During migration, you can resolve `scale` entries in `output.txt` by setting a value for the **Workload Type** option **Scalable deployment** depicted below.
Scalable Deployment Option
-![Workload Scale]({{< baseurl >}}/img/rancher/workload-type-option.png) +![Workload Scale]({{}}/img/rancher/workload-type-option.png) ### Scheduling Pods to a Specific Node @@ -81,7 +81,7 @@ As you deploy a workload, use the **Node Scheduling** section to choose a node t
Rancher v2.x: Workload Deployment
-![Workload Tab and Group by Node Icon]({{< baseurl >}}/img/rancher/schedule-specific-node.png) +![Workload Tab and Group by Node Icon]({{}}/img/rancher/schedule-specific-node.png) Rancher schedules pods to the node you select if 1) there are compute resources available on the node and 2) you've configured port mapping to use the HostPort option and there are no port conflicts. @@ -89,7 +89,7 @@ If you expose the workload using a NodePort that conflicts with another workload After the workload is created, you can confirm that the pods are scheduled to your chosen node. From the project view, click **Resources > Workloads.** (In versions prior to v2.3.0, click the **Workloads** tab.) Click the **Group by Node** icon to sort your workloads by node. Note that both Nginx pods are scheduled to the same node. -![Pods Scheduled to Same Node]({{< baseurl >}}/img/rancher/scheduled-nodes.png) +![Pods Scheduled to Same Node]({{}}/img/rancher/scheduled-nodes.png) ). A _DaemonSet_ functions exactly like a Rancher v1.6 global service. The Kubernetes scheduler deploys a pod on each node of the cluster, and as new nodes are added, the scheduler will start new pods on them provided they match the scheduling requirements of the workload. Additionally, in v2.x, you can also limit a DaemonSet to be deployed to nodes that have a specific label. @@ -217,7 +217,7 @@ To create a daemonset while configuring a workload, choose **Run one pod on each
Workload Configuration: Choose run one pod on each node to configure daemonset
-![choose Run one pod on each node]({{< baseurl >}}/img/rancher/workload-type.png) +![choose Run one pod on each node]({{}}/img/rancher/workload-type.png) ### Scheduling Pods Using Resource Constraints @@ -240,8 +240,8 @@ To declare resource constraints, edit your migrated workloads, editing the **Sec
Scheduling: Resource Constraint Settings
-![Resource Constraint Settings]({{< baseurl >}}/img/rancher/resource-constraint-settings.png) +![Resource Constraint Settings]({{}}/img/rancher/resource-constraint-settings.png) You can find more detail about these specs and how to use them in the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container). -### [Next: Service Discovery]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/discover-services/) +### [Next: Service Discovery]({{}}/rancher/v2.x/en/v1.6-migration/discover-services/) diff --git a/content/rke/latest/en/cert-mgmt/_index.md index 5d202d6bbdc..21f9f53011e 100644 --- a/content/rke/latest/en/cert-mgmt/_index.md +++ b/content/rke/latest/en/cert-mgmt/_index.md @@ -12,9 +12,9 @@ Certificates are an important part of Kubernetes clusters and are used for all K ## Generating Certificate Signing Requests (CSRs) and Keys -If you want to create and sign the certificates by a real Certificate Authority (CA), you can use RKE to [generate a set of Certificate Signing Requests (CSRs) and keys]({{< baseurl >}}/rke/latest/en/installation/certs/#generating-certificate-signing-requests-csrs-and-keys). +If you want to create and sign the certificates by a real Certificate Authority (CA), you can use RKE to [generate a set of Certificate Signing Requests (CSRs) and keys]({{}}/rke/latest/en/installation/certs/#generating-certificate-signing-requests-csrs-and-keys). -You can use the CSRs and keys to sign the certificates by a real CA. After the certificates are signed, these custom certificates can be used by RKE to as [custom certificates]({{< baseurl >}}/rke/latest/en/installation/certs/) for the Kubernetes cluster. +You can use the CSRs and keys to sign the certificates by a real CA. After the certificates are signed, these custom certificates can be used by RKE as [custom certificates]({{}}/rke/latest/en/installation/certs/) for the Kubernetes cluster. ## Certificate Rotation diff --git a/content/rke/latest/en/config-options/_index.md index ecf29f2a412..abbf6e2209a 100644 --- a/content/rke/latest/en/config-options/_index.md +++ b/content/rke/latest/en/config-options/_index.md @@ -6,35 +6,35 @@ weight: 200 When setting up your `cluster.yml` for RKE, there are a lot of different options that can be configured to control the behavior of how RKE launches Kubernetes. -There are several options that can be configured in cluster configuration option. There are several [example yamls]({{< baseurl >}}/rke/latest/en/example-yamls/) that contain all the options. +There are several options that can be configured in the cluster configuration file. There are several [example yamls]({{}}/rke/latest/en/example-yamls/) that contain all the options. 
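As a smaller point of reference, a minimal `cluster.yml` might look like the sketch below; the addresses, user, and key path are placeholders, and only a handful of the options covered in this section are shown:

```
# Hypothetical minimal cluster.yml sketch (placeholder values).
nodes:
  - address: 192.0.2.10
    user: ubuntu
    role: [controlplane, etcd, worker]
    ssh_key_path: ~/.ssh/id_rsa     # node-level key; see Cluster Level SSH Key Path
  - address: 192.0.2.11
    user: ubuntu
    role: [worker]

cluster_name: example-cluster
```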
### Configuring Nodes -* [Nodes]({{< baseurl >}}/rke/latest/en/config-options/nodes/) +* [Nodes]({{}}/rke/latest/en/config-options/nodes/) * [Ignoring unsupported Docker versions](#supported-docker-versions) -* [Private Registries]({{< baseurl >}}/rke/latest/en/config-options/private-registries/) +* [Private Registries]({{}}/rke/latest/en/config-options/private-registries/) * [Cluster Level SSH Key Path](#cluster-level-ssh-key-path) * [SSH Agent](#ssh-agent) -* [Bastion Host]({{< baseurl >}}/rke/latest/en/config-options/bastion-host/) +* [Bastion Host]({{}}/rke/latest/en/config-options/bastion-host/) ### Configuring Kubernetes Cluster * [Cluster Name](#cluster-name) * [Kubernetes Version](#kubernetes-version) * [Prefix Path](#prefix-path) -* [System Images]({{< baseurl >}}/rke/latest/en/config-options/system-images/) -* [Services]({{< baseurl >}}/rke/latest/en/config-options/services/) -* [Extra Args and Binds and Environment Variables]({{< baseurl >}}/rke/latest/en/config-options/services/services-extras/) -* [External Etcd]({{< baseurl >}}/rke/latest/en/config-options/services/external-etcd/) -* [Authentication]({{< baseurl >}}/rke/latest/en/config-options/authentication/) -* [Authorization]({{< baseurl >}}/rke/latest/en/config-options/authorization/) +* [System Images]({{}}/rke/latest/en/config-options/system-images/) +* [Services]({{}}/rke/latest/en/config-options/services/) +* [Extra Args and Binds and Environment Variables]({{}}/rke/latest/en/config-options/services/services-extras/) +* [External Etcd]({{}}/rke/latest/en/config-options/services/external-etcd/) +* [Authentication]({{}}/rke/latest/en/config-options/authentication/) +* [Authorization]({{}}/rke/latest/en/config-options/authorization/) * [Rate Limiting]({{}}/rke/latest/en/config-options/rate-limiting/) -* [Cloud Providers]({{< baseurl >}}/rke/latest/en/config-options/cloud-providers/) +* [Cloud Providers]({{}}/rke/latest/en/config-options/cloud-providers/) * [Audit Log]({{}}/rke/latest/en/config-options/audit-log) -* [Add-ons]({{< baseurl >}}/rke/latest/en/config-options/add-ons/) - * [Network Plug-ins]({{< baseurl >}}/rke/latest/en/config-options/add-ons/network-plugins/) - * [DNS providers]({{< baseurl >}}/rke/latest/en/config-options/add-ons/dns/) - * [Ingress Controllers]({{< baseurl >}}/rke/latest/en/config-options/add-ons/ingress-controllers/) - * [Metrics Server]({{< baseurl >}}/rke/latest/en/config-options/add-ons/metrics-server/) - * [User-Defined Add-ons]({{< baseurl >}}/rke/latest/en/config-options/add-ons/user-defined-add-ons/) +* [Add-ons]({{}}/rke/latest/en/config-options/add-ons/) + * [Network Plug-ins]({{}}/rke/latest/en/config-options/add-ons/network-plugins/) + * [DNS providers]({{}}/rke/latest/en/config-options/add-ons/dns/) + * [Ingress Controllers]({{}}/rke/latest/en/config-options/add-ons/ingress-controllers/) + * [Metrics Server]({{}}/rke/latest/en/config-options/add-ons/metrics-server/) + * [User-Defined Add-ons]({{}}/rke/latest/en/config-options/add-ons/user-defined-add-ons/) * [Add-ons Job Timeout](#add-ons-job-timeout) @@ -79,7 +79,7 @@ prefix_path: /opt/custom_path ### Cluster Level SSH Key Path -RKE connects to host(s) using `ssh`. Typically, each node will have an independent path for each ssh key, i.e. `ssh_key_path`, in the `nodes` section, but if you have a SSH key that is able to access **all** hosts in your cluster configuration file, you can set the path to that ssh key at the top level. 
Otherwise, you would set the ssh key path in the [nodes]({{< baseurl >}}/rke/latest/en/config-options/nodes/). +RKE connects to host(s) using `ssh`. Typically, each node will have an independent path for each ssh key, i.e. `ssh_key_path`, in the `nodes` section, but if you have a SSH key that is able to access **all** hosts in your cluster configuration file, you can set the path to that ssh key at the top level. Otherwise, you would set the ssh key path in the [nodes]({{}}/rke/latest/en/config-options/nodes/). If ssh key paths are defined at the cluster level and at the node level, the node-level key will take precedence. @@ -109,4 +109,4 @@ $ echo $SSH_AUTH_SOCK ### Add-ons Job Timeout -You can define [add-ons]({{< baseurl >}}/rke/latest/en/config-options/add-ons/) to be deployed after the Kubernetes cluster comes up, which uses Kubernetes [jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). RKE will stop attempting to retrieve the job status after the timeout, which is in seconds. The default timeout value is `30` seconds. +You can define [add-ons]({{}}/rke/latest/en/config-options/add-ons/) to be deployed after the Kubernetes cluster comes up, which uses Kubernetes [jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). RKE will stop attempting to retrieve the job status after the timeout, which is in seconds. The default timeout value is `30` seconds. diff --git a/content/rke/latest/en/config-options/add-ons/_index.md b/content/rke/latest/en/config-options/add-ons/_index.md index a665230b268..f2cb7765e3b 100644 --- a/content/rke/latest/en/config-options/add-ons/_index.md +++ b/content/rke/latest/en/config-options/add-ons/_index.md @@ -5,12 +5,12 @@ weight: 260 RKE supports configuring pluggable add-ons in the cluster YML. Add-ons are used to deploy several cluster components including: -* [Network plug-ins]({{< baseurl >}}/rke/latest/en/config-options/add-ons/network-plugins/) -* [Ingress controller]({{< baseurl >}}/rke/latest/en/config-options/add-ons/ingress-controllers/) -* [DNS provider]({{< baseurl >}}/rke/latest/en/config-options/add-ons/dns/) -* [Metrics Server]({{< baseurl >}}/rke/latest/en/config-options/add-ons/metrics-server/) +* [Network plug-ins]({{}}/rke/latest/en/config-options/add-ons/network-plugins/) +* [Ingress controller]({{}}/rke/latest/en/config-options/add-ons/ingress-controllers/) +* [DNS provider]({{}}/rke/latest/en/config-options/add-ons/dns/) +* [Metrics Server]({{}}/rke/latest/en/config-options/add-ons/metrics-server/) -These add-ons require images that can be found under the [`system_images` directive]({{< baseurl >}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with each add-on, but these can be overridden by changing the image tag in `system_images`. +These add-ons require images that can be found under the [`system_images` directive]({{}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with each add-on, but these can be overridden by changing the image tag in `system_images`. There are a few things worth noting: @@ -25,7 +25,7 @@ As of version v0.1.7, add-ons are split into two categories: - **Critical add-ons:** If these add-ons fail to deploy for any reason, RKE will error out. - **Non-critical add-ons:** If these add-ons fail to deploy, RKE will only log a warning and continue deploying any other add-ons. 
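To illustrate the cluster-level SSH key path and the node-level precedence described above, here is a sketch of the relevant `cluster.yml` pieces; addresses, users and key paths are placeholders.

```yaml
# Sketch: one cluster-level key for most hosts, overridden for a single node.
ssh_key_path: ~/.ssh/cluster_key      # used for any node that does not set its own key
nodes:
  - address: 10.0.0.1
    user: ubuntu
    role: [controlplane, etcd]
  - address: 10.0.0.2
    user: ubuntu
    role: [worker]
    ssh_key_path: ~/.ssh/worker_key   # node-level path takes precedence over the cluster-level one
```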
-Currently, only the [network plug-in]({{< baseurl >}}/rke/latest/en/config-options/add-ons/network-plugins/) is considered critical. KubeDNS, [ingress controllers]({{< baseurl >}}/rke/latest/en/config-options/add-ons/ingress-controllers/) and [user-defined add-ons]({{< baseurl >}}/rke/latest/en/config-options/add-ons/user-defined-add-ons/) are considered non-critical. +Currently, only the [network plug-in]({{}}/rke/latest/en/config-options/add-ons/network-plugins/) is considered critical. KubeDNS, [ingress controllers]({{}}/rke/latest/en/config-options/add-ons/ingress-controllers/) and [user-defined add-ons]({{}}/rke/latest/en/config-options/add-ons/user-defined-add-ons/) are considered non-critical. ## Add-on deployment jobs diff --git a/content/rke/latest/en/config-options/add-ons/dns/_index.md b/content/rke/latest/en/config-options/add-ons/dns/_index.md index a00aa2e5a12..2e63a5c25be 100644 --- a/content/rke/latest/en/config-options/add-ons/dns/_index.md +++ b/content/rke/latest/en/config-options/add-ons/dns/_index.md @@ -26,7 +26,7 @@ CoreDNS can only be used on Kubernetes v1.12.0 and higher. RKE will deploy CoreDNS as a Deployment with the default replica count of 1. The pod consists of 1 container: `coredns`. RKE will also deploy coredns-autoscaler as a Deployment, which will scale the coredns Deployment by using the number of cores and nodes. Please see [Linear Mode](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler#linear-mode) for more information about this logic. -The images used for CoreDNS are under the [`system_images` directive]({{< baseurl >}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with CoreDNS, but these can be overridden by changing the image tag in `system_images`. +The images used for CoreDNS are under the [`system_images` directive]({{}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with CoreDNS, but these can be overridden by changing the image tag in `system_images`. ## Scheduling CoreDNS @@ -66,7 +66,7 @@ dns: RKE will deploy kube-dns as a Deployment with the default replica count of 1. The pod consists of 3 containers: `kubedns`, `dnsmasq` and `sidecar`. RKE will also deploy kube-dns-autoscaler as a Deployment, which will scale the kube-dns Deployment by using the number of cores and nodes. Please see [Linear Mode](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler#linear-mode) for more information about this logic. -The images used for kube-dns are under the [`system_images` directive]({{< baseurl >}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with kube-dns, but these can be overridden by changing the image tag in `system_images`. +The images used for kube-dns are under the [`system_images` directive]({{}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with kube-dns, but these can be overridden by changing the image tag in `system_images`. 
## Scheduling kube-dns @@ -116,3 +116,36 @@ You can disable the default DNS provider by specifying `none` to the dns `provi dns: provider: none ``` + +# NodeLocal DNS + +_Available as of v1.1.0_ + +> **Note:** The option to enable NodeLocal DNS is available for: +> +> * Kubernetes v1.15.11 and up +> * Kubernetes v1.16.8 and up +> * Kubernetes v1.17.4 and up + +NodeLocal DNS is an additional component that can be deployed on each node to improve DNS performance. It is not a replacement for the `provider` parameter; you will still need one of the available DNS providers configured. See [Using NodeLocal DNSCache in Kubernetes clusters](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/) for more information on how NodeLocal DNS works. + +Enable NodeLocal DNS by configuring an IP address. + +## Configuring NodeLocal DNS + +The `ip_address` parameter sets the link-local IP address on each host that NodeLocal DNS will listen on. Make sure this IP address is not already configured on the host. + +```yaml +dns: + provider: coredns + nodelocal: + ip_address: "169.254.20.10" +``` + +> **Note:** When enabling NodeLocal DNS on an existing cluster, pods that are currently running will not be modified; the updated `/etc/resolv.conf` configuration will take effect only for pods started after enabling NodeLocal DNS. + +## Removing NodeLocal DNS + +To remove NodeLocal DNS from the cluster, remove the `ip_address` value. + +> **Warning:** When removing NodeLocal DNS, a disruption to DNS can be expected. The updated `/etc/resolv.conf` configuration will take effect only for pods that are started after removing NodeLocal DNS. In general, pods using the default `dnsPolicy: ClusterFirst` will need to be re-deployed. diff --git a/content/rke/latest/en/config-options/add-ons/ingress-controllers/_index.md index a7da4af0cd6..4e32fb33858 100644 --- a/content/rke/latest/en/config-options/add-ons/ingress-controllers/_index.md +++ b/content/rke/latest/en/config-options/add-ons/ingress-controllers/_index.md @@ -10,7 +10,7 @@ By default, RKE deploys the NGINX ingress controller on all schedulable nodes. RKE will deploy the ingress controller as a DaemonSet with `hostnetwork: true`, so ports `80`, and `443` will be opened on each node where the controller is deployed. -The images used for ingress controller is under the [`system_images` directive]({{< baseurl >}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with the ingress controller, but these can be overridden by changing the image tag in `system_images`. +The images used for the ingress controller are under the [`system_images` directive]({{}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with the ingress controller, but these can be overridden by changing the image tag in `system_images`. ## Scheduling Ingress Controllers diff --git a/content/rke/latest/en/config-options/add-ons/metrics-server/_index.md index 88775ac5577..61f0d303601 100644 --- a/content/rke/latest/en/config-options/add-ons/metrics-server/_index.md +++ b/content/rke/latest/en/config-options/add-ons/metrics-server/_index.md @@ -7,7 +7,7 @@ By default, RKE deploys [Metrics Server](https://github.com/kubernetes-incubator RKE will deploy Metrics Server as a Deployment. 
-The image used for Metrics Server is under the [`system_images` directive]({{< baseurl >}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there is a default image associated with the Metrics Server, but these can be overridden by changing the image tag in `system_images`. +The image used for Metrics Server is under the [`system_images` directive]({{}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there is a default image associated with the Metrics Server, but these can be overridden by changing the image tag in `system_images`. ## Disabling the Metrics Server diff --git a/content/rke/latest/en/config-options/add-ons/network-plugins/_index.md b/content/rke/latest/en/config-options/add-ons/network-plugins/_index.md index cb26c78fe57..7da2af08643 100644 --- a/content/rke/latest/en/config-options/add-ons/network-plugins/_index.md +++ b/content/rke/latest/en/config-options/add-ons/network-plugins/_index.md @@ -20,7 +20,7 @@ network: plugin: flannel ``` -The images used for network plug-ins are under the [`system_images` directive]({{< baseurl >}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with each network plug-in, but these can be overridden by changing the image tag in `system_images`. +The images used for network plug-ins are under the [`system_images` directive]({{}}/rke/latest/en/config-options/system-images/). For each Kubernetes version, there are default images associated with each network plug-in, but these can be overridden by changing the image tag in `system_images`. # Disabling Deployment of a Network Plug-in diff --git a/content/rke/latest/en/config-options/add-ons/user-defined-add-ons/_index.md b/content/rke/latest/en/config-options/add-ons/user-defined-add-ons/_index.md index 3f2dd072f91..72808d38936 100644 --- a/content/rke/latest/en/config-options/add-ons/user-defined-add-ons/_index.md +++ b/content/rke/latest/en/config-options/add-ons/user-defined-add-ons/_index.md @@ -3,7 +3,7 @@ title: User-Defined Add-Ons weight: 263 --- -Besides the [network plug-in]({{< baseurl >}}/rke/latest/en/config-options/add-ons/network-plugins) and [ingress controllers]({{< baseurl >}}/rke/latest/en/config-options/add-ons/ingress-controllers/), you can define any add-on that you want deployed after the Kubernetes cluster is deployed. +Besides the [network plug-in]({{}}/rke/latest/en/config-options/add-ons/network-plugins) and [ingress controllers]({{}}/rke/latest/en/config-options/add-ons/ingress-controllers/), you can define any add-on that you want deployed after the Kubernetes cluster is deployed. There are two ways that you can specify an add-on. diff --git a/content/rke/latest/en/config-options/bastion-host/_index.md b/content/rke/latest/en/config-options/bastion-host/_index.md index 3b6848759c6..d2710e8c42d 100644 --- a/content/rke/latest/en/config-options/bastion-host/_index.md +++ b/content/rke/latest/en/config-options/bastion-host/_index.md @@ -3,7 +3,7 @@ title: Bastion/Jump Host Configuration weight: 220 --- -Since RKE uses `ssh` to connect to [nodes]({{< baseurl >}}/rke/latest/en/config-options/nodes/), you can configure the `cluster.yml` so RKE will use a bastion host. Keep in mind that the [port requirements]({{< baseurl >}}/rke/latest/en/os/#ports) for the RKE node move to the configured bastion host. Our private SSH key(s) only needs to reside on the host running RKE. You do not need to copy your private SSH key(s) to the bastion host. 
+Since RKE uses `ssh` to connect to [nodes]({{}}/rke/latest/en/config-options/nodes/), you can configure the `cluster.yml` so RKE will use a bastion host. Keep in mind that the [port requirements]({{}}/rke/latest/en/os/#ports) for the RKE node move to the configured bastion host. Our private SSH key(s) only needs to reside on the host running RKE. You do not need to copy your private SSH key(s) to the bastion host. ```yaml bastion_host: diff --git a/content/rke/latest/en/config-options/cloud-providers/_index.md b/content/rke/latest/en/config-options/cloud-providers/_index.md index 27881c437e2..45501bcf784 100644 --- a/content/rke/latest/en/config-options/cloud-providers/_index.md +++ b/content/rke/latest/en/config-options/cloud-providers/_index.md @@ -6,9 +6,9 @@ weight: 250 RKE supports the ability to set your specific [cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/) for your Kubernetes cluster. There are specific cloud configurations for these cloud providers. To enable a cloud provider its name as well as any required configuration options must be provided under the `cloud_provider` directive in the cluster YML. -* [AWS]({{< baseurl >}}/rke/latest/en/config-options/cloud-providers/aws) -* [Azure]({{< baseurl >}}/rke/latest/en/config-options/cloud-providers/azure) -* [OpenStack]({{< baseurl >}}/rke/latest/en/config-options/cloud-providers/openstack) -* [vSphere]({{< baseurl >}}/rke/latest/en/config-options/cloud-providers/vsphere) +* [AWS]({{}}/rke/latest/en/config-options/cloud-providers/aws) +* [Azure]({{}}/rke/latest/en/config-options/cloud-providers/azure) +* [OpenStack]({{}}/rke/latest/en/config-options/cloud-providers/openstack) +* [vSphere]({{}}/rke/latest/en/config-options/cloud-providers/vsphere) -Outside of this list, RKE also supports the ability to handle any [custom cloud provider]({{< baseurl >}}/rke/latest/en/config-options/cloud-providers/custom). +Outside of this list, RKE also supports the ability to handle any [custom cloud provider]({{}}/rke/latest/en/config-options/cloud-providers/custom). diff --git a/content/rke/latest/en/config-options/cloud-providers/vsphere/troubleshooting/_index.md b/content/rke/latest/en/config-options/cloud-providers/vsphere/troubleshooting/_index.md index 6d2cffca67f..a63f81c36ba 100644 --- a/content/rke/latest/en/config-options/cloud-providers/vsphere/troubleshooting/_index.md +++ b/content/rke/latest/en/config-options/cloud-providers/vsphere/troubleshooting/_index.md @@ -8,11 +8,11 @@ If you are experiencing issues while provisioning a cluster with enabled vSphere - controller-manager (Manages volumes in vCenter) - kubelet: (Mounts vSphere volumes to pods) -If your cluster is not configured with external [Cluster Logging]({{< baseurl >}}/rancher/v2.x/en/tools/logging/), you will need to SSH into nodes to get the logs of the `kube-controller-manager` (running on one of the control plane nodes) and the `kubelet` (pertaining to the node where the stateful pod has been scheduled). +If your cluster is not configured with external [Cluster Logging]({{}}/rancher/v2.x//en/cluster-admin/tools//logging/), you will need to SSH into nodes to get the logs of the `kube-controller-manager` (running on one of the control plane nodes) and the `kubelet` (pertaining to the node where the stateful pod has been scheduled). The easiest way to create a SSH session with a node is the Rancher CLI tool. -1. [Configure the Rancher CLI]({{< baseurl >}}/rancher/v2.x/en/cli/) for your cluster. +1. 
[Configure the Rancher CLI]({{}}/rancher/v2.x/en/cli/) for your cluster. 2. Run the following command to get a shell to the corresponding nodes: ```sh diff --git a/content/rke/latest/en/config-options/nodes/_index.md b/content/rke/latest/en/config-options/nodes/_index.md index 75321c4c6b9..e15b7e98f21 100644 --- a/content/rke/latest/en/config-options/nodes/_index.md +++ b/content/rke/latest/en/config-options/nodes/_index.md @@ -116,7 +116,7 @@ The `internal_address` provides the ability to have nodes with multiple addresse The `hostname_override` is used to be able to provide a friendly name for RKE to use when registering the node in Kubernetes. This hostname doesn't need to be a routable address, but it must be a valid [Kubernetes resource name](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names). If the `hostname_override` isn't set, then the `address` directive is used when registering the node in Kubernetes. -> **Note:** When [cloud providers]({{< baseurl >}}/rke/latest/en/config-options/cloud-providers/) are configured, you may need to override the hostname in order to use the cloud provider correctly. There is an exception for the [AWS cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws), where the `hostname_override` field will be explicitly ignored. +> **Note:** When [cloud providers]({{}}/rke/latest/en/config-options/cloud-providers/) are configured, you may need to override the hostname in order to use the cloud provider correctly. There is an exception for the [AWS cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws), where the `hostname_override` field will be explicitly ignored. ### SSH Port @@ -130,7 +130,7 @@ For each node, you specify the `user` to be used when connecting to this node. T For each node, you specify the path, i.e. `ssh_key_path`, for the SSH private key to be used when connecting to this node. The default key path for each node is `~/.ssh/id_rsa`. -> **Note:** If you have a private key that can be used across all nodes, you can set the [SSH key path at the cluster level]({{< baseurl >}}/rke/latest/en/config-options/#cluster-level-ssh-key-path). The SSH key path set in each node will always take precedence. +> **Note:** If you have a private key that can be used across all nodes, you can set the [SSH key path at the cluster level]({{}}/rke/latest/en/config-options/#cluster-level-ssh-key-path). The SSH key path set in each node will always take precedence. ### SSH Key @@ -150,7 +150,7 @@ If the Docker socket is different than the default, you can set the `docker_sock ### Labels -You have the ability to add an arbitrary map of labels for each node. It can be used when using the [ingress controller's]({{< baseurl >}}/rke/latest/en/config-options/add-ons/ingress-controllers/) `node_selector` option. +You have the ability to add an arbitrary map of labels for each node. It can be used when using the [ingress controller's]({{}}/rke/latest/en/config-options/add-ons/ingress-controllers/) `node_selector` option. 
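To show how node labels pair with the ingress controller's `node_selector` option mentioned above, here is a sketch; the label key/value and the node address are arbitrary examples.

```yaml
# Sketch: label a worker node and pin the ingress controller to nodes carrying that label.
nodes:
  - address: 10.0.0.5
    user: ubuntu
    role: [worker]
    labels:
      app: ingress
ingress:
  provider: nginx
  node_selector:
    app: ingress
```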
### Taints diff --git a/content/rke/latest/en/config-options/private-registries/_index.md index 5a5c1a4d18e..2f448920312 100644 --- a/content/rke/latest/en/config-options/private-registries/_index.md +++ b/content/rke/latest/en/config-options/private-registries/_index.md @@ -19,7 +19,7 @@ private_registries: ### Default Registry -As of v0.1.10, RKE supports specifying a default registry from the list of private registries to be used with all [system images]({{< baseurl >}}/rke/latest/en/config-options/system-images/) . In this example .RKE will use `registry.com` as the default registry for all system images, e.g. `rancher/rke-tools:v0.1.14` will become `registry.com/rancher/rke-tools:v0.1.14`. +As of v0.1.10, RKE supports specifying a default registry from the list of private registries to be used with all [system images]({{}}/rke/latest/en/config-options/system-images/). In the example below, RKE will use `registry.com` as the default registry for all system images, e.g. `rancher/rke-tools:v0.1.14` will become `registry.com/rancher/rke-tools:v0.1.14`. ```yaml private_registries: @@ -31,9 +31,9 @@ private_registries: ### Air-gapped Setups -By default, all system images are being pulled from DockerHub. If you are on a system that does not have access to DockerHub, you will need to create a private registry that is populated with all the required [system images]({{< baseurl >}}/rke/latest/en/config-options/system-images/). +By default, all system images are pulled from DockerHub. If you are on a system that does not have access to DockerHub, you will need to create a private registry that is populated with all the required [system images]({{}}/rke/latest/en/config-options/system-images/). -As of v0.1.10, you have to configure your private registry credentials, but you can specify this registry as a default registry so that all [system images]({{< baseurl >}}/rke/latest/en/config-options/system-images/) are pulled from the designated private registry. You can use the command `rke config --system-images` to get the list of default system images to populate your private registry. +As of v0.1.10, you have to configure your private registry credentials, but you can specify this registry as a default registry so that all [system images]({{}}/rke/latest/en/config-options/system-images/) are pulled from the designated private registry. You can use the command `rke config --system-images` to get the list of default system images to populate your private registry. -Prior to v0.1.10, you had to configure your private registry credentials **and** update the names of all the [system images]({{< baseurl >}}/rke/latest/en/config-options/system-images/) in the `cluster.yml` so that the image names would have the private registry URL appended before each image name. +Prior to v0.1.10, you had to configure your private registry credentials **and** update the names of all the [system images]({{}}/rke/latest/en/config-options/system-images/) in the `cluster.yml` so that the image names would have the private registry URL prepended to each image name. 
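A sketch of the default registry configuration described above is shown below; the URL and credentials are placeholders, and the `is_default` flag is what marks the registry as the default for system images (v0.1.10+).

```yaml
# Sketch: use registry.com as the default registry for all system images.
private_registries:
  - url: registry.com
    user: Username
    password: password
    is_default: true
```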
diff --git a/content/rke/latest/en/config-options/services/_index.md b/content/rke/latest/en/config-options/services/_index.md index cfd88cc39a9..b1c7a4d4c1e 100644 --- a/content/rke/latest/en/config-options/services/_index.md +++ b/content/rke/latest/en/config-options/services/_index.md @@ -6,7 +6,7 @@ weight: 230 To deploy Kubernetes, RKE deploys several core components or services in Docker containers on the nodes. Based on the roles of the node, the containers deployed may be different. -**All services support additional [custom arguments, Docker mount binds and extra environment variables]({{< baseurl >}}/rke/latest/en/config-options/services/services-extras/).** +**All services support additional [custom arguments, Docker mount binds and extra environment variables]({{}}/rke/latest/en/config-options/services/services-extras/).** | Component | Services key name in cluster.yml | |-------------------------|----------------------------------| @@ -23,13 +23,13 @@ Kubernetes uses [etcd](https://etcd.io/) as a store for cluster state and data. RKE supports running etcd in a single node mode or in HA cluster mode. It also supports adding and removing etcd nodes to the cluster. -You can enable etcd to [take recurring snapshots]({{< baseurl >}}/rke/latest/en/etcd-snapshots/#recurring-snapshots). These snapshots can be used to [restore etcd]({{< baseurl >}}/rke/latest/en/etcd-snapshots/#etcd-disaster-recovery). +You can enable etcd to [take recurring snapshots]({{}}/rke/latest/en/etcd-snapshots/#recurring-snapshots). These snapshots can be used to [restore etcd]({{}}/rke/latest/en/etcd-snapshots/#etcd-disaster-recovery). -By default, RKE will deploy a new etcd service, but you can also run Kubernetes with an [external etcd service]({{< baseurl >}}/rke/latest/en/config-options/services/external-etcd/). +By default, RKE will deploy a new etcd service, but you can also run Kubernetes with an [external etcd service]({{}}/rke/latest/en/config-options/services/external-etcd/). ## Kubernetes API Server -> **Note for Rancher 2 users** If you are configuring Cluster Options using a [Config File]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) when creating [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), the names of services should contain underscores only: `kube_api`. This only applies to Rancher v2.0.5 and v2.0.6. +> **Note for Rancher 2 users** If you are configuring Cluster Options using a [Config File]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) when creating [Rancher Launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), the names of services should contain underscores only: `kube_api`. This only applies to Rancher v2.0.5 and v2.0.6. The [Kubernetes API](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/) REST service, which handles requests and data for all Kubernetes objects and provide shared state for all the other Kubernetes components. @@ -58,10 +58,10 @@ RKE supports the following options for the `kube-api` service : - **Pod Security Policy** (`pod_security_policy`) - An option to enable the [Kubernetes Pod Security Policy](https://kubernetes.io/docs/concepts/policy/pod-security-policy/). By default, we do not enable pod security policies as it is set to `false`. > **Note:** If you set `pod_security_policy` value to `true`, RKE will configure an open policy to allow any pods to work on the cluster. 
You will need to configure your own policies to fully utilize PSP. - **Always Pull Images** (`always_pull_images`) - Enable `AlwaysPullImages` Admission controller plugin. Enabling `AlwaysPullImages` is a security best practice. It forces Kubernetes to validate the image and pull credentials with the remote image registry. Local image layer cache will still be used, but it does add a small bit of overhead when launching containers to pull and compare image hashes. _Note: Available as of v0.2.0_ -- **Secrets Encryption Config** (`secrets_encryption_config`) - Manage Kubernetes at-rest data encryption. Documented [here]({{< baseurl >}}//rke/latest/en/config-options/secrets-encryption) +- **Secrets Encryption Config** (`secrets_encryption_config`) - Manage Kubernetes at-rest data encryption. Documented [here]({{}}//rke/latest/en/config-options/secrets-encryption) ## Kubernetes Controller Manager -> **Note for Rancher 2 users** If you are configuring Cluster Options using a [Config File]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) when creating [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), the names of services should contain underscores only: `kube_controller`. This only applies to Rancher v2.0.5 and v2.0.6. +> **Note for Rancher 2 users** If you are configuring Cluster Options using a [Config File]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) when creating [Rancher Launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), the names of services should contain underscores only: `kube_controller`. This only applies to Rancher v2.0.5 and v2.0.6. The [Kubernetes Controller Manager](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/) service is the component responsible for running Kubernetes main control loops. The controller manager monitors the cluster desired state through the Kubernetes API server and makes the necessary changes to the current state to reach the desired state. diff --git a/content/rke/latest/en/config-options/services/external-etcd/_index.md b/content/rke/latest/en/config-options/services/external-etcd/_index.md index 173fa826972..8ee04bb7797 100644 --- a/content/rke/latest/en/config-options/services/external-etcd/_index.md +++ b/content/rke/latest/en/config-options/services/external-etcd/_index.md @@ -5,7 +5,7 @@ weight: 232 By default, RKE will launch etcd servers, but RKE also supports being able to use an external etcd. RKE only supports connecting to a TLS enabled etcd setup. -> **Note:** RKE will not accept having external etcd servers in conjunction with [nodes]({{< baseurl >}}/rke/latest/en/config-options/nodes/) with the `etcd` role. +> **Note:** RKE will not accept having external etcd servers in conjunction with [nodes]({{}}/rke/latest/en/config-options/nodes/) with the `etcd` role. ```yaml services: diff --git a/content/rke/latest/en/config-options/system-images/_index.md b/content/rke/latest/en/config-options/system-images/_index.md index ae16387c7cc..041a99a186e 100644 --- a/content/rke/latest/en/config-options/system-images/_index.md +++ b/content/rke/latest/en/config-options/system-images/_index.md @@ -75,4 +75,4 @@ system_images: ### Air-gapped Setups -If you have an air-gapped setup and cannot access `docker.io`, you will need to set up your [private registry]({{< baseurl >}}/rke/latest/en/config-options/private-registries/) in your cluster configuration file. 
After you set up private registry, you will need to update these images to pull from your private registry. +If you have an air-gapped setup and cannot access `docker.io`, you will need to set up your [private registry]({{}}/rke/latest/en/config-options/private-registries/) in your cluster configuration file. After you set up private registry, you will need to update these images to pull from your private registry. diff --git a/content/rke/latest/en/etcd-snapshots/_index.md b/content/rke/latest/en/etcd-snapshots/_index.md index d973feb3d2f..735fb8bab96 100644 --- a/content/rke/latest/en/etcd-snapshots/_index.md +++ b/content/rke/latest/en/etcd-snapshots/_index.md @@ -13,7 +13,7 @@ _Available as of v0.2.0_ RKE can upload your snapshots to a S3 compatible backend. -**Note:** As of RKE v0.2.0, the `pki.bundle.tar.gz` file is no longer required because of a change in how the [Kubernetes cluster state is stored]({{< baseurl >}}/rke/latest/en/installation/#kubernetes-cluster-state). +**Note:** As of RKE v0.2.0, the `pki.bundle.tar.gz` file is no longer required because of a change in how the [Kubernetes cluster state is stored]({{}}/rke/latest/en/installation/#kubernetes-cluster-state). # Backing Up a Cluster diff --git a/content/rke/latest/en/etcd-snapshots/one-time-snapshots/_index.md b/content/rke/latest/en/etcd-snapshots/one-time-snapshots/_index.md index b98f7e4ed42..400aee3b3e3 100644 --- a/content/rke/latest/en/etcd-snapshots/one-time-snapshots/_index.md +++ b/content/rke/latest/en/etcd-snapshots/one-time-snapshots/_index.md @@ -54,8 +54,8 @@ $ rke etcd snapshot-save \ | `--bucket-name` value | Specify s3 bucket name | * | | `--folder` value | Specify folder inside bucket where backup will be stored. This is optional. _Available as of v0.3.0_ | * | | `--region` value | Specify the s3 bucket location (optional) | * | -| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{< baseurl >}}/rke/latest/en/config-options/#ssh-agent) | | -| `--ignore-docker-version` | [Disable Docker version check]({{< baseurl >}}/rke/latest/en/config-options/#supported-docker-versions) | +| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{}}/rke/latest/en/config-options/#ssh-agent) | | +| `--ignore-docker-version` | [Disable Docker version check]({{}}/rke/latest/en/config-options/#supported-docker-versions) | The `--access-key` and `--secret-key` options are not required if the `etcd` nodes are AWS EC2 instances that have been configured with a suitable IAM instance profile. 
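Putting the options above together, a one-time snapshot saved to S3 might look like the sketch below; the bucket, folder, region and credentials are placeholders, and the `--s3` flag (which tells RKE to upload the snapshot) is assumed from the RKE CLI rather than shown in the table above.

```
$ rke etcd snapshot-save \
    --config cluster.yml \
    --name snapshot-before-upgrade \
    --s3 \
    --bucket-name rke-backups \
    --folder prod-cluster \
    --region us-east-1 \
    --access-key S3_ACCESS_KEY \
    --secret-key S3_SECRET_KEY
```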
@@ -116,8 +116,8 @@ $ rke etcd snapshot-save --config cluster.yml --name snapshot-name | --- | --- | | `--name` value | Specify snapshot name | | `--config` value | Specify an alternate cluster YAML file (default: `cluster.yml`) [$RKE_CONFIG] | -| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{< baseurl >}}/rke/latest/en/config-options/#ssh-agent) | -| `--ignore-docker-version` | [Disable Docker version check]({{< baseurl >}}/rke/latest/en/config-options/#supported-docker-versions) | +| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{}}/rke/latest/en/config-options/#ssh-agent) | +| `--ignore-docker-version` | [Disable Docker version check]({{}}/rke/latest/en/config-options/#supported-docker-versions) | {{% /tab %}} {{% /tabs %}} diff --git a/content/rke/latest/en/etcd-snapshots/restoring-from-backup/_index.md b/content/rke/latest/en/etcd-snapshots/restoring-from-backup/_index.md index a4e0ce38419..3f26ea9ee47 100644 --- a/content/rke/latest/en/etcd-snapshots/restoring-from-backup/_index.md +++ b/content/rke/latest/en/etcd-snapshots/restoring-from-backup/_index.md @@ -33,7 +33,7 @@ $ rke etcd snapshot-restore --config cluster.yml --name mysnapshot The snapshot is assumed to be located in `/opt/rke/etcd-snapshots`. -**Note:** The `pki.bundle.tar.gz` file is not needed because RKE v0.2.0 changed how the [Kubernetes cluster state is stored]({{< baseurl >}}/rke/latest/en/installation/#kubernetes-cluster-state). +**Note:** The `pki.bundle.tar.gz` file is not needed because RKE v0.2.0 changed how the [Kubernetes cluster state is stored]({{}}/rke/latest/en/installation/#kubernetes-cluster-state). ### Example of Restoring from a Snapshot in S3 @@ -67,8 +67,8 @@ $ rke etcd snapshot-restore \ | `--bucket-name` value | Specify s3 bucket name | *| | `--folder` value | Specify folder inside bucket where backup will be stored. This is optional. This is optional. _Available as of v0.3.0_ | *| | `--region` value | Specify the s3 bucket location (optional) | *| -| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{< baseurl >}}/rke/latest/en/config-options/#ssh-agent) | | -| `--ignore-docker-version` | [Disable Docker version check]({{< baseurl >}}/rke/latest/en/config-options/#supported-docker-versions) | +| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{}}/rke/latest/en/config-options/#ssh-agent) | | +| `--ignore-docker-version` | [Disable Docker version check]({{}}/rke/latest/en/config-options/#supported-docker-versions) | {{% /tab %}} {{% tab "RKE prior to v0.2.0"%}} @@ -109,8 +109,8 @@ The `pki.bundle.tar.gz` file is also expected to be in the same location. 
| --- | --- | | `--name` value | Specify snapshot name | | `--config` value | Specify an alternate cluster YAML file (default: `cluster.yml`) [$RKE_CONFIG] | -| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{< baseurl >}}/rke/latest/en/config-options/#ssh-agent) | -| `--ignore-docker-version` | [Disable Docker version check]({{< baseurl >}}/rke/latest/en/config-options/#supported-docker-versions) | +| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{}}/rke/latest/en/config-options/#ssh-agent) | +| `--ignore-docker-version` | [Disable Docker version check]({{}}/rke/latest/en/config-options/#supported-docker-versions) | {{% /tab %}} {{% /tabs %}} diff --git a/content/rke/latest/en/example-yamls/_index.md b/content/rke/latest/en/example-yamls/_index.md index 9b155eecca8..9fe11e634f8 100644 --- a/content/rke/latest/en/example-yamls/_index.md +++ b/content/rke/latest/en/example-yamls/_index.md @@ -5,9 +5,9 @@ aliases: - /rke/latest/en/config-options/example-yamls/ --- -There are lots of different [configuration options]({{< baseurl >}}/rke/latest/en/config-options/) that can be set in the cluster configuration file for RKE. Here are some examples of files: +There are lots of different [configuration options]({{}}/rke/latest/en/config-options/) that can be set in the cluster configuration file for RKE. Here are some examples of files: -> **Note for Rancher 2 users** If you are configuring Cluster Options using a [Config File]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) when creating [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), the names of services should contain underscores only: `kube_api` and `kube_controller`. This only applies to Rancher v2.0.5 and v2.0.6. +> **Note for Rancher 2 users** If you are configuring Cluster Options using a [Config File]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) when creating [Rancher Launched Kubernetes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), the names of services should contain underscores only: `kube_api` and `kube_controller`. This only applies to Rancher v2.0.5 and v2.0.6. ## Minimal `cluster.yml` example diff --git a/content/rke/latest/en/installation/_index.md b/content/rke/latest/en/installation/_index.md index 3bba060b4bc..8ff1de0eaa0 100644 --- a/content/rke/latest/en/installation/_index.md +++ b/content/rke/latest/en/installation/_index.md @@ -7,8 +7,8 @@ weight: 50 RKE is a fast, versatile Kubernetes installer that you can use to install Kubernetes on your Linux hosts. You can get started in a couple of quick and easy steps: 1. [Download the RKE Binary](#download-the-rke-binary) - 1. [Alternative RKE macOS Install - Homebrew](#alternative-rke-macos-install---homebrew) - 1. [Alternative RKE macOS Install - MacPorts](#alternative-rke-macos-install---macports) + 1. [Alternative RKE macOS Install - Homebrew](#alternative-rke-macos-x-install-homebrew) + 1. [Alternative RKE macOS Install - MacPorts](#alternative-rke-macos-install-macports) 1. [Prepare the Nodes for the Kubernetes Cluster](#prepare-the-nodes-for-the-kubernetes-cluster) 1. [Creating the Cluster Configuration File](#creating-the-cluster-configuration-file) 1. [Deploying Kubernetes with RKE](#deploying-kubernetes-with-rke) @@ -93,20 +93,20 @@ $ port upgrade rke The Kubernetes cluster components are launched using Docker on a Linux distro. 
You can use any Linux you want, as long as you can install Docker on it. -Review the [OS requirements]({{< baseurl >}}/rke/latest/en/installation/os/) and configure each node appropriately. +Review the [OS requirements]({{}}/rke/latest/en/installation/os/) and configure each node appropriately. ## Creating the Cluster Configuration File -RKE uses a cluster configuration file, referred to as `cluster.yml` to determine what nodes will be in the cluster and how to deploy Kubernetes. There are [many configuration options]({{< baseurl >}}/rke/latest/en/config-options/) that can be set in the `cluster.yml`. In our example, we will be assuming the minimum of one [node]({{< baseurl >}}/rke/latest/en/config-options/nodes) for your Kubernetes cluster. +RKE uses a cluster configuration file, referred to as `cluster.yml` to determine what nodes will be in the cluster and how to deploy Kubernetes. There are [many configuration options]({{}}/rke/latest/en/config-options/) that can be set in the `cluster.yml`. In our example, we will be assuming the minimum of one [node]({{}}/rke/latest/en/config-options/nodes) for your Kubernetes cluster. There are two easy ways to create a `cluster.yml`: -- Using our [minimal `cluster.yml`]({{< baseurl >}}/rke/latest/en/example-yamls/#minimal-cluster-yml-example) and updating it based on the node that you will be using. +- Using our [minimal `cluster.yml`]({{}}/rke/latest/en/example-yamls/#minimal-cluster-yml-example) and updating it based on the node that you will be using. - Using `rke config` to query for all the information needed. ### Using `rke config` -Run `rke config` to create a new `cluster.yml` in the current directory. This command will prompt you for all the information needed to build a cluster. See [cluster configuration options]({{< baseurl >}}/rke/latest/en/config-options/) for details on the various options. +Run `rke config` to create a new `cluster.yml` in the current directory. This command will prompt you for all the information needed to build a cluster. See [cluster configuration options]({{}}/rke/latest/en/config-options/) for details on the various options. ``` rke config --name cluster.yml @@ -136,7 +136,7 @@ To create an HA cluster, specify more than one host with role `controlplane`. _Available as of v0.2.0_ -By default, Kubernetes clusters require certificates and RKE auto-generates the certificates for all cluster components. You can also use [custom certificates]({{< baseurl >}}/rke/latest/en/installation/certs/). After the Kubernetes cluster is deployed, you can [manage these auto-generated certificates]({{< baseurl >}}/rke/latest/en/cert-mgmt/#certificate-rotation). +By default, Kubernetes clusters require certificates and RKE auto-generates the certificates for all cluster components. You can also use [custom certificates]({{}}/rke/latest/en/installation/certs/). After the Kubernetes cluster is deployed, you can [manage these auto-generated certificates]({{}}/rke/latest/en/cert-mgmt/#certificate-rotation). ## Deploying Kubernetes with RKE @@ -165,9 +165,11 @@ The last line should read `Finished building Kubernetes cluster successfully` to Save a copy of the following files in a secure location: - `cluster.yml`: The RKE cluster configuration file. -- `kube_config_cluster.yml`: The [Kubeconfig file]({{< baseurl >}}/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster. 
+- `kube_config_cluster.yml`: The [Kubeconfig file]({{}}/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster. - `cluster.rkestate`: The [Kubernetes Cluster State file](#kubernetes-cluster-state), this file contains credentials for full access to the cluster.
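A quick way to confirm that the saved kubeconfig works is to point `kubectl` at it; this is a sketch and assumes `kubectl` is installed on the machine running RKE.

```
$ kubectl --kubeconfig kube_config_cluster.yml get nodes
```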

_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._ +> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file. + ### Kubernetes Cluster State The Kubernetes cluster state, which consists of the cluster configuration file `cluster.yml` and components certificates in Kubernetes cluster, is saved by RKE, but depending on your RKE version, the cluster state is saved differently. @@ -178,9 +180,9 @@ Prior to v0.2.0, RKE saved the Kubernetes cluster state as a secret. When updati ## Interacting with your Kubernetes cluster -After your cluster is up and running, you can start using the [generated kubeconfig file]({{< baseurl >}}/rke/latest/en/kubeconfig) to start interacting with your Kubernetes cluster using `kubectl`. +After your cluster is up and running, you can start using the [generated kubeconfig file]({{}}/rke/latest/en/kubeconfig) to start interacting with your Kubernetes cluster using `kubectl`. After installation, there are several maintenance items that might arise: -* [Certificate Management]({{< baseurl >}}/rke/latest/en/cert-mgmt/) -* [Adding and Removing Nodes in the cluster]({{< baseurl >}}/rke/latest/en/managing-clusters) +* [Certificate Management]({{}}/rke/latest/en/cert-mgmt/) +* [Adding and Removing Nodes in the cluster]({{}}/rke/latest/en/managing-clusters) diff --git a/content/rke/latest/en/installation/certs/_index.md b/content/rke/latest/en/installation/certs/_index.md index 19e5a04a0e9..1907a0a68eb 100644 --- a/content/rke/latest/en/installation/certs/_index.md +++ b/content/rke/latest/en/installation/certs/_index.md @@ -7,7 +7,7 @@ _Available as of v0.2.0_ By default, Kubernetes clusters require certificates and RKE auto-generates the certificates for all the Kubernetes services. RKE can also use custom certificates for these Kubernetes services. -When [deploying Kubernetes with RKE]({{< baseurl >}}/rke/latest/en/installation/#deploying-kubernetes-with-rke), there are two additional options that can be used with `rke up` so that RKE uses custom certificates. +When [deploying Kubernetes with RKE]({{}}/rke/latest/en/installation/#deploying-kubernetes-with-rke), there are two additional options that can be used with `rke up` so that RKE uses custom certificates. | Option | Description | | --- | --- | @@ -45,7 +45,7 @@ The following certificates must exist in the certificate directory. If you want to create and sign the certificates by a real Certificate Authority (CA), you can use RKE to generate a set of Certificate Signing Requests (CSRs) and keys. Using the `rke cert generate-csr` command, you can generate the CSRs and keys. -1. Set up your `cluster.yml` with the [node information]({{< baseurl >}}/rke/latest/en/config-options/nodes/). +1. Set up your `cluster.yml` with the [node information]({{}}/rke/latest/en/config-options/nodes/). 2. Run `rke cert generate-csr` to generate certificates for the node(s) in the `cluster.yml`. By default, the CSRs and keys will be saved in `./cluster_certs`. To have them saved in a different directory, use `--cert-dir` to define what directory to have them saved in. 
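Following the two steps above, the CSR generation might look like the sketch below; `--config` is assumed to behave as it does for other RKE commands, and `./cluster_certs` is simply the default output directory mentioned above.

```
$ rke cert generate-csr --config cluster.yml --cert-dir ./cluster_certs
```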
diff --git a/content/rke/latest/en/managing-clusters/_index.md index 4a9ff3ddb8e..5cb87f3a6d4 100644 --- a/content/rke/latest/en/managing-clusters/_index.md +++ b/content/rke/latest/en/managing-clusters/_index.md @@ -8,7 +8,7 @@ aliases: ### Adding/Removing Nodes -RKE supports adding/removing [nodes]({{< baseurl >}}/rke/latest/en/config-options/nodes/) for worker and controlplane hosts. +RKE supports adding/removing [nodes]({{}}/rke/latest/en/config-options/nodes/) for worker and controlplane hosts. In order to add additional nodes, you update the original `cluster.yml` file with any additional nodes and specify their role in the Kubernetes cluster. @@ -20,11 +20,13 @@ After you've made changes to add/remove nodes, run `rke up` with the updated `cl You can add/remove only worker nodes, by running `rke up --update-only`. This will ignore everything else in the `cluster.yml` except for any worker nodes. +> **Note:** When using `--update-only`, other actions that do not specifically relate to nodes may be deployed or updated; for example, [addons]({{< baseurl >}}/rke/latest/en/config-options/add-ons). + ### Removing Kubernetes Components from Nodes In order to remove the Kubernetes components from nodes, you use the `rke remove` command. -> **Warning:** This command is irreversible and will destroy the Kubernetes cluster, including etcd snapshots on S3. If there is a disaster and your cluster is inaccessible, refer to the process for [restoring your cluster from a snapshot]({{< baseurl >}}/rke/latest/en/etcd-snapshots/#etcd-disaster-recovery). +> **Warning:** This command is irreversible and will destroy the Kubernetes cluster, including etcd snapshots on S3. If there is a disaster and your cluster is inaccessible, refer to the process for [restoring your cluster from a snapshot]({{}}/rke/latest/en/etcd-snapshots/#etcd-disaster-recovery). The `rke remove` command does the following to each node in the `cluster.yml`: diff --git a/content/rke/latest/en/os/_index.md index d9da146c135..9c09b13e0b8 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -31,7 +31,7 @@ weight: 5 RKE runs on almost any Linux OS with Docker installed. Most of the development and testing of RKE occurred on Ubuntu 16.04. However, some OS's have restrictions and specific requirements. -- [SSH user]({{< baseurl >}}/rke/latest/en/config-options/nodes/#ssh-user) - The SSH user used for node access must be a member of the `docker` group on the node: +- [SSH user]({{}}/rke/latest/en/config-options/nodes/#ssh-user) - The SSH user used for node access must be a member of the `docker` group on the node: ``` usermod -aG docker @@ -100,7 +100,7 @@ net.bridge.bridge-nf-call-iptables=1 ### Red Hat Enterprise Linux (RHEL) / Oracle Enterprise Linux (OEL) / CentOS -If using Red Hat Enterprise Linux, Oracle Enterprise Linux or CentOS, you cannot use the `root` user as [SSH user]({{< baseurl >}}/rke/latest/en/config-options/nodes/#ssh-user) due to [Bugzilla 1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). Please follow the instructions below how to setup Docker correctly, based on the way you installed Docker on the node. +If using Red Hat Enterprise Linux, Oracle Enterprise Linux or CentOS, you cannot use the `root` user as [SSH user]({{}}/rke/latest/en/config-options/nodes/#ssh-user) due to [Bugzilla 1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). 
Please follow the instructions below on how to set up Docker correctly, based on the way you installed Docker on the node. #### Using upstream Docker If you are using upstream Docker, the package name is `docker-ce` or `docker-ee`. You can check the installed package by executing: diff --git a/content/rke/latest/en/troubleshooting/_index.md index fa39cdc4053..c05e95884df 100644 --- a/content/rke/latest/en/troubleshooting/_index.md +++ b/content/rke/latest/en/troubleshooting/_index.md @@ -3,5 +3,5 @@ title: Troubleshooting weight: 400 --- -* [SSH Connectivity Errors]({{< baseurl >}}/rke/latest/en/troubleshooting/ssh-connectivity-errors/) -* [Provisioning Errors]({{< baseurl >}}/rke/latest/en/troubleshooting/provisioning-errors/) +* [SSH Connectivity Errors]({{}}/rke/latest/en/troubleshooting/ssh-connectivity-errors/) +* [Provisioning Errors]({{}}/rke/latest/en/troubleshooting/provisioning-errors/) diff --git a/content/rke/latest/en/troubleshooting/provisioning-errors/_index.md index 71cabddb9cc..a9867b3271a 100644 --- a/content/rke/latest/en/troubleshooting/provisioning-errors/_index.md +++ b/content/rke/latest/en/troubleshooting/provisioning-errors/_index.md @@ -5,7 +5,7 @@ weight: 200 ### Failed to get job complete status -Most common reason for this error is that a node is having issues that block the deploy job from completing successfully. See [Get node conditions]({{< baseurl >}}/rancher/v2.x/en/troubleshooting/kubernetes-resources/#get-node-conditions) how to check node conditions. +The most common reason for this error is that a node is having issues that block the deploy job from completing successfully. See [Get node conditions]({{}}/rancher/v2.x/en/troubleshooting/kubernetes-resources/#get-node-conditions) for how to check node conditions. You can also retrieve the log from the job to see if it has an indication of the error, make sure you replace `rke-network-plugin-deploy-job` with the job name from the error: diff --git a/content/rke/latest/en/upgrades/_index.md index 5e47ee6ab09..a90c542623d 100644 --- a/content/rke/latest/en/upgrades/_index.md +++ b/content/rke/latest/en/upgrades/_index.md @@ -3,31 +3,40 @@ title: Upgrades weight: 100 --- -After RKE has deployed Kubernetes, you can upgrade the versions of the components in your Kubernetes cluster, the [definition of the Kubernetes services]({{< baseurl >}}/rke/latest/en/config-options/services/) or the [add-ons]({{< baseurl >}}/rke/latest/en/config-options/add-ons/). +After RKE has deployed Kubernetes, you can upgrade the versions of the components in your Kubernetes cluster, the [definition of the Kubernetes services]({{}}/rke/latest/en/config-options/services/) or the [add-ons]({{}}/rke/latest/en/config-options/add-ons/). The default Kubernetes version for each RKE version can be found in [the RKE release notes](https://github.com/rancher/rke/releases/). -You can also select a newer version of Kubernetes to install for your cluster. Downgrading Kubernetes is not supported. +You can also select a newer version of Kubernetes to install for your cluster. 
Each version of RKE has a specific [list of supported Kubernetes versions.](#listing-supported-kubernetes-versions) -In case the Kubernetes version is defined in the `kubernetes_version` directive and under the `system-images` directive are defined, the `system-images` configuration will take precedence over `kubernetes_version`. +In case the Kubernetes version is defined in the `kubernetes_version` directive and under the `system-images` directive, the `system-images` configuration will take precedence over the `kubernetes_version`. This page covers the following topics: +- [How upgrades work](#how-upgrades-work) - [Prerequisites](#prerequisites) - [Upgrading Kubernetes](#upgrading-kubernetes) +- [Configuring the upgrade strategy](#configuring-the-upgrade-strategy) +- [Maintaining availability for applications during upgrades](#maintaining-availability-for-applications-during-upgrades) - [Listing supported Kubernetes versions](#listing-supported-kubernetes-versions) - [Kubernetes version precedence](#kubernetes-version-precedence) - [Using an unsupported Kubernetes version](#using-an-unsupported-kubernetes-version) - [Mapping the Kubernetes version to services](#mapping-the-kubernetes-version-to-services) - [Service upgrades](#service-upgrades) -- [Add-ons upgrades](#add-ons-upgrades) +- [Upgrading Nodes Manually](#upgrading-nodes-manually) +- [Rolling Back the Kubernetes Version](#rolling-back-the-kubernetes-version) +- [Troubleshooting](#troubleshooting) + +### How Upgrades Work + +In [this section,]({{}}/rke/latest/en/upgrades/how-upgrades-work) you'll learn what happens when you edit or upgrade your RKE Kubernetes cluster. ### Prerequisites - Ensure that any `system_images` configuration is absent from the `cluster.yml`. The Kubernetes version should only be listed under the `system_images` directive if an [unsupported version](#using-an-unsupported-kubernetes-version) is being used. Refer to [Kubernetes version precedence](#kubernetes-version-precedence) for more information. -- Ensure that the correct files to manage [Kubernetes cluster state]({{< baseurl >}}/rke/latest/en/installation/#kubernetes-cluster-state) are present in the working directory. Refer to the tabs below for the required files, which differ based on the RKE version. +- Ensure that the correct files to manage [Kubernetes cluster state]({{}}/rke/latest/en/installation/#kubernetes-cluster-state) are present in the working directory. Refer to the tabs below for the required files, which differ based on the RKE version. {{% tabs %}} {{% tab "RKE v0.2.0+" %}} @@ -46,8 +55,6 @@ RKE saves the Kubernetes cluster state as a secret. When updating the state, RKE ### Upgrading Kubernetes -> **Note:** RKE does not support rolling back to previous versions. - To upgrade the Kubernetes version of an RKE-provisioned cluster, set the `kubernetes_version` string in the `cluster.yml` to the desired version from the [list of supported Kubernetes versions](#listing-supported-kubernetes-versions) for the specific version of RKE: ```yaml @@ -60,6 +67,18 @@ Then invoke `rke up`: $ rke up --config cluster.yml ``` +### Configuring the Upgrade Strategy + +As of v0.1.8, upgrades to add-ons are supported. [Add-ons]({{}}/rke/latest/en/config-options/add-ons/) can also be upgraded by changing any of the add-ons and running `rke up` again with the updated configuration file. + +As of v1.1.0, additional upgrade options became available to give you more granular control over the upgrade process. 
These options can be used to maintain availability of your applications during a cluster upgrade. + +For details on upgrade configuration options, refer to [Configuring the Upgrade Strategy.]({{}}/rke/latest/en/upgrades/configuring-strategy) + +### Maintaining Availability for Applications During Upgrades + +In [this section,]({{}}/rke/latest/en/upgrades/maintaining-availability/) you'll learn the requirements to prevent downtime for your applications when you upgrade the cluster using `rke up`. + ### Listing Supported Kubernetes Versions Please refer to the [release notes](https://github.com/rancher/rke/releases) of the RKE version that you are running, to find the list of supported Kubernetes versions as well as the default Kubernetes version. @@ -86,7 +105,7 @@ As of v0.2.0, if a version is defined in `kubernetes_version` and is not found i Prior to v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, the default version from the supported list is used. -If you want to use a different version from the supported list, please use the [system images]({{< baseurl >}}/rke/latest/en/config-options/system-images/) option. +If you want to use a different version from the supported list, please use the [system images]({{}}/rke/latest/en/config-options/system-images/) option. ### Mapping the Kubernetes Version to Services @@ -98,12 +117,40 @@ For RKE prior to v0.3.0, the service defaults are located [here](https://github. ### Service Upgrades -[Services]({{< baseurl >}}/rke/latest/en/config-options/services/) can be upgraded by changing any of the services arguments or `extra_args` and running `rke up` again with the updated configuration file. +[Services]({{}}/rke/latest/en/config-options/services/) can be upgraded by changing any of the services arguments or `extra_args` and running `rke up` again with the updated configuration file. > **Note:** The following arguments, `service_cluster_ip_range` or `cluster_cidr`, cannot be changed as any changes to these arguments will result in a broken cluster. Currently, network pods are not automatically upgraded. -### Add-Ons Upgrades +### Upgrading Nodes Manually -As of v0.1.8, upgrades to add-ons are supported. +_Available as of v1.1.0_ -[Add-ons]({{< baseurl >}}/rke/latest/en/config-options/add-ons/) can also be upgraded by changing any of the add-ons and running `rke up` again with the updated configuration file. +You can manually update each type of node separately. As a best practice, upgrade the etcd nodes first, followed by controlplane and then worker nodes. + +### Rolling Back the Kubernetes Version + +_Available as of v1.1.0_ + +A cluster can be restored back to a snapshot that uses a previous Kubernetes version. + +### Troubleshooting + +_Applies to v1.1.0+_ + +If a node doesn't come up after an upgrade, the `rke up` command errors out. + +No upgrade will proceed if the number of unavailable nodes exceeds the configured maximum. + +If an upgrade stops, you may need to fix an unavailable node or remove it from the cluster before the upgrade can continue. + +A failed node could be in many different states: + +- Powered off +- Unavailable +- User drains a node while upgrade is in process, so there are no kubelets on the node +- The upgrade itself failed + +Some expected failure scenarios include the following: + +- If the maximum unavailable number of nodes is reached during an upgrade, the RKE CLI will error out and exit the CLI with a failure code. 
+- If some nodes fail to upgrade, but the number of failed nodes doesn't reach the maximum unavailable number of nodes, the RKE CLI logs the nodes that were unable to upgrade and continues to upgrade the add-ons. After the add-ons are upgraded, RKE will error out and exit the CLI with a failure code regardless of add-on upgrade status. \ No newline at end of file diff --git a/content/rke/latest/en/upgrades/configuring-strategy/_index.md b/content/rke/latest/en/upgrades/configuring-strategy/_index.md new file mode 100644 index 00000000000..e9e8ce188c5 --- /dev/null +++ b/content/rke/latest/en/upgrades/configuring-strategy/_index.md @@ -0,0 +1,171 @@ +--- +title: Configuring the Upgrade Strategy +weight: 2 +--- + +In this section, you'll learn how to configure the maximum number of unavailable controlplane and worker nodes, how to drain nodes before upgrading them, and how to configure the replicas for addons such as Ingress. + +- [Maximum Unavailable Nodes](#maximum-unavailable-nodes) +- [Draining Nodes](#draining-nodes) +- [Replicas for Ingress and Networking Addons](#replicas-for-ingress-and-networking-addons) +- [Replicas for DNS and Monitoring Addons](#replicas-for-dns-and-monitoring-addons) +- [Example cluster.yml](#example-cluster-yml) + +### Maximum Unavailable Nodes + +The maximum number of unavailable controlplane and worker nodes can be configured in the `cluster.yml` before upgrading the cluster: + +- **max_unavailable_controlplane:** The maximum number of controlplane nodes that can fail without causing the cluster upgrade to fail. By default, `max_unavailable_controlplane` is defined as one node. +- **max_unavailable_worker:** The maximum number of worker nodes that can fail without causing the cluster upgrade to fail. By default, `max_unavailable_worker` is defined as 10 percent of all worker nodes.* + +/* This number can be configured as a percentage or as an integer. When defined as a percentage, the batch size is rounded down to the nearest node, with a minimum of one node per batch. + +An example configuration of the cluster upgrade strategy is shown below: + +```yaml +upgrade_strategy: + max_unavailable_worker: 10% + max_unavailable_controlplane: 1 +``` + +### Draining Nodes + +By default, nodes are cordoned first before upgrading. Each node should always be cordoned before starting its upgrade so that new pods will not be scheduled to it, and traffic will not reach the node. In addition to cordoning each node, RKE can also be configured to drain each node before starting its upgrade. Draining a node will evict all the pods running on the computing resource. + +For information on draining and how to safely drain a node, refer to the [Kubernetes documentation.](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) + +If the `drain` directive is set to `true` in the `cluster.yml`, worker nodes will be drained before they are upgraded. 
The default value is false:
+
+```yaml
+upgrade_strategy:
+  max_unavailable_worker: 10%
+  max_unavailable_controlplane: 1
+  drain: false
+  node_drain_input:
+    force: false
+    ignore_daemonsets: true
+    delete_local_data: false
+    grace_period: -1 # grace period specified for each pod spec will be used
+    timeout: 60
+```
+
+### Replicas for Ingress and Networking Addons
+
+The Ingress and network addons are launched as Kubernetes [daemonsets.](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) If no value is given for the [update strategy,](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy) Kubernetes sets the update strategy to `rollingUpdate` by default, with `maxUnavailable` set to 1.
+
+An example configuration of the Ingress and network addons is shown below:
+
+```yaml
+ingress:
+  provider: nginx
+  update_strategy:
+    strategy: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 5
+network:
+  plugin: canal
+  update_strategy:
+    strategy: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 6
+```
+
+### Replicas for DNS and Monitoring Addons
+
+The DNS and monitoring addons are launched as Kubernetes [deployments.](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) These addons include `coredns`, `kubedns`, and `metrics-server`, the monitoring deployment.
+
+If no value is configured for their [update strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy) in the `cluster.yml`, Kubernetes sets the update strategy to `rollingUpdate` by default, with `maxUnavailable` set to 25% and `maxSurge` set to 25%.
+
+The DNS addons use `cluster-proportional-autoscaler`, which is an [open-source container image](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler) that watches the number of schedulable nodes and cores in the cluster and resizes the number of replicas for the required resource. This functionality is useful for applications that need to be autoscaled with the number of nodes in the cluster. For the DNS addon, the fields needed for the `cluster-proportional-autoscaler` are made configurable.
+
+The following table shows the default values for these fields:
+
+Field Name | Default Value
+-----------|--------------
+coresPerReplica | 128
+nodesPerReplica | 4
+min | 1
+preventSinglePointFailure | true
+
+The `cluster-proportional-autoscaler` uses this formula to calculate the number of replicas:
+
+```plain
+replicas = max( ceil( cores * 1/coresPerReplica ) , ceil( nodes * 1/nodesPerReplica ) )
+replicas = min(replicas, max)
+replicas = max(replicas, min)
+```
+
+An example configuration of the DNS and monitoring addons is shown below:
+
+```yaml
+dns:
+  provider: coredns
+  update_strategy:
+    strategy: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 20%
+      maxSurge: 15%
+  linear_autoscaler_params:
+    cores_per_replica: 0.34
+    nodes_per_replica: 4
+    prevent_single_point_failure: true
+    min: 2
+    max: 3
+monitoring:
+  provider: metrics-server
+  update_strategy:
+    strategy: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 8
+```
+
+### Example cluster.yml
+
+```yaml
+# If you intend to deploy Kubernetes in an air-gapped environment,
+# please consult the documentation on how to configure custom RKE images.
+nodes:
+# At least three etcd nodes, two controlplane nodes, and two worker nodes,
+# nodes skipped for brevity
+upgrade_strategy:
+  max_unavailable_worker: 10%
+  max_unavailable_controlplane: 1
+  drain: false
+  node_drain_input:
+    force: false
+    ignore_daemonsets: true
+    delete_local_data: false
+    grace_period: -1 # grace period specified for each pod spec will be used
+    timeout: 60
+ingress:
+  provider: nginx
+  update_strategy: # Available in v2.4
+    strategy: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 5
+network:
+  plugin: canal
+  update_strategy: # Available in v2.4
+    strategy: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 6
+dns:
+  provider: coredns
+  update_strategy: # Available in v2.4
+    strategy: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 20%
+      maxSurge: 15%
+  linear_autoscaler_params:
+    cores_per_replica: 0.34
+    nodes_per_replica: 4
+    prevent_single_point_failure: true
+    min: 2
+    max: 3
+monitoring:
+  provider: metrics-server
+  update_strategy: # Available in v2.4
+    strategy: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 8
+```
diff --git a/content/rke/latest/en/upgrades/how-upgrades-work/_index.md b/content/rke/latest/en/upgrades/how-upgrades-work/_index.md
new file mode 100644
index 00000000000..fbd7b5e729c
--- /dev/null
+++ b/content/rke/latest/en/upgrades/how-upgrades-work/_index.md
@@ -0,0 +1,90 @@
+---
+title: How Upgrades Work
+weight: 1
+---
+
+In this section, you'll learn what happens when you edit or upgrade your RKE Kubernetes cluster. The sections below describe how each type of node is upgraded by default when a cluster is upgraded using `rke up`.
+
+{{% tabs %}}
+{{% tab "RKE v1.1.0+" %}}
+
+The following features are new in RKE v1.1.0:
+
+- The ability to upgrade or edit a cluster without downtime for your applications.
+- The ability to manually upgrade nodes of a certain role without upgrading others.
+- The ability to restore a Kubernetes cluster to an older Kubernetes version by restoring it to a snapshot that includes the older Kubernetes version. This capability allows you to safely upgrade one type of node at a time, because if an upgrade cannot be completed by all nodes in the cluster, you can downgrade the Kubernetes version of the nodes that were already upgraded.
+
+When a cluster is upgraded with `rke up`, using the default options, the following process is used:
+
+1. The etcd plane gets updated, one node at a time.
+1. Controlplane nodes get updated, one node at a time. This includes the controlplane components and worker plane components of the controlplane nodes.
+1. Worker plane components of etcd nodes get updated, one node at a time.
+1. Worker nodes get updated in batches of a configurable size. The default configuration for the maximum number of unavailable nodes is ten percent, rounded down to the nearest node, with a minimum batch size of one node.
+1. [Addons]({{}}/rke/latest/en/config-options/add-ons/) get upgraded one by one.
+
+The following sections break down in more detail what happens when etcd nodes, controlplane nodes, worker nodes, and addons are upgraded. This information is intended to help you understand the update strategy for the cluster, and may be useful when troubleshooting problems with upgrading the cluster.
+
+### Upgrades of etcd Nodes
+
+A cluster upgrade begins by upgrading the etcd nodes one at a time.
+
+If an etcd node fails at any time, the upgrade will fail and no more nodes will be upgraded.
The cluster will be stuck in an updating state and not move forward to upgrading controlplane or worker nodes. + +### Upgrades of Controlplane Nodes + +Controlplane nodes are upgraded one at a time by default. The maximum number of unavailable controlplane nodes can also be configured, so that they can be upgraded in batches. + +As long as the maximum unavailable number or percentage of controlplane nodes has not been reached, Rancher will continue to upgrade other controlplane nodes, then the worker nodes. + +If any controlplane nodes were unable to be upgraded, the upgrade will not proceed to the worker nodes. + +### Upgrades of Worker Nodes + +By default, worker nodes are upgraded in batches. The size of the batch is determined by the maximum number of unavailable worker nodes, configured as the `max_unavailable_worker` directive in the `cluster.yml`. + +By default, the `max_unavailable_worker` nodes is defined as 10 percent of all worker nodes. This number can be configured as a percentage or as an integer. When defined as a percentage, the batch size is rounded down to the nearest node, with a minimum of one node. + +For example, if you have 11 worker nodes and `max_unavailable_worker` is 25%, two nodes will be upgraded at once because 25% of 11 is 2.75. If you have two worker nodes and `max_unavailable_worker` is 1%, the worker nodes will be upgraded one at a time because the minimum batch size is one. + +When each node in a batch returns to a Ready state, the next batch of nodes begins to upgrade. If `kubelet` and `kube-proxy` have started, the node is Ready. As long as the `max_unavailable_worker` number of nodes have not failed, Rancher will continue to upgrade other worker nodes. + +RKE scans the cluster before starting the upgrade to find the powered down or unreachable hosts. The upgrade will stop if that number matches or exceeds the maximum number of unavailable nodes. + +RKE will cordon each node before upgrading it, and uncordon the node afterward. RKE can also be configured to [drain](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) nodes before upgrading them. + +RKE will handle all worker node upgrades before upgrading any add-ons. As long as the maximum number of unavailable worker nodes is not reached, RKE will attempt to upgrade the [addons.](#upgrades-of-addons) For example, if a cluster has two worker nodes and one worker node fails, but the maximum unavailable worker nodes is greater than one, the addons will still be upgraded. + +### Upgrades of Addons + +The availability of your applications partly depends on the availability of [RKE addons.]({{}}/rke/latest/en/config-options/add-ons/) Addons are used to deploy several cluster components, including network plug-ins, the Ingress controller, DNS provider, and metrics server. + +Because RKE addons are necessary for allowing traffic into the cluster, they will need to be updated in batches to maintain availability. You will need to configure the maximum number of unavailable replicas for each addon in the `cluster.yml` to ensure that your cluster will retain enough available replicas during an upgrade. 
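As a rough sketch, each addon's update strategy is capped per addon in the `cluster.yml`; the snippet below uses illustrative values only (1 for a daemonset-based addon and 25% for a deployment-based addon, matching the defaults described on this page):

```yaml
ingress:
  provider: nginx
  update_strategy:
    strategy: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # upgrade one Ingress controller pod at a time
dns:
  provider: coredns
  update_strategy:
    strategy: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # deployment-based addons also allow maxSurge
      maxSurge: 25%
```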
+ +For more information on configuring the number of replicas for each addon, refer to [this section.](#replicas-for-rke-addons) + +For an example showing how to configure the addons, refer to the [example cluster.yml.]({{}}/rke/latest/en/upgrades/configuring-strategy/#example-cluster-yml) + +{{% /tab %}} +{{% tab "RKE prior to v1.1.0" %}} + +When a cluster is upgraded with `rke up`, using the default options, the following process is used: + +- etcd nodes get updated first, one at a time. +- Controlplane nodes get updated second, one at a time. +- Worker nodes and addons get updated third, in batches of 50 or the total number of worker nodes, whichever is lower. +- Addons get upgraded one by one. + +### Upgrades of Controlplane and etcd Nodes + +Controlplane and etcd nodes would be upgraded in batches of 50 nodes or the total number of controlplane nodes, whichever is lower. + +If a node fails at any time, the upgrade will stop upgrading any other nodes and fail. + +### Upgrades of Worker Nodes + +Worker nodes are upgraded simultaneously, in batches of either 50 or the total number of worker nodes, whichever is lower. If a worker node fails at any time, the upgrade stops. + +When a worker node is upgraded, it restarts several Docker processes, including the `kubelet` and `kube-proxy`. When `kube-proxy` comes up, it flushes `iptables`. When this happens, pods on this node can’t be accessed, resulting in downtime for the applications. + +{{% /tab %}} +{{% /tabs %}} diff --git a/content/rke/latest/en/upgrades/maintaining-availability/_index.md b/content/rke/latest/en/upgrades/maintaining-availability/_index.md new file mode 100644 index 00000000000..03cc98b7517 --- /dev/null +++ b/content/rke/latest/en/upgrades/maintaining-availability/_index.md @@ -0,0 +1,43 @@ +--- +title: Maintaining Availability for Applications During Upgrades +weight: 1 +--- +_Available as of v1.1.0_ + +In this section, you'll learn the requirements to prevent downtime for your applications when you upgrade the cluster using `rke up`. + +An upgrade without downtime is one in which your workloads are available on at least a single node, and all critical addon services, such as Ingress and DNS, are available during the upgrade. + +The way that clusters are upgraded changed in RKE v1.1.0. For details, refer to [How Upgrades Work.]({{}}/rke/latest/en/upgrades/how-upgrades-work) + +This availability is achieved by upgrading worker nodes in batches of a configurable size, and ensuring that your workloads run on a number of nodes that exceeds that maximum number of unavailable worker nodes. + +To avoid downtime for your applications during an upgrade, you will need to configure your workloads to continue running despite the rolling upgrade of worker nodes. There are also requirements for the cluster architecture and Kubernetes target version. + +1. [Kubernetes Version Requirement](#1-kubernetes-version-requirement) +2. [Cluster Requirements](#2-cluster-requirements) +3. [Workload Requirements](#3-workload-requirements) + +### 1. Kubernetes Version Requirement + +When upgrading to a newer Kubernetes version, the upgrade must be from a minor release to the next minor version, or to within the same patch release series. + +### 2. Cluster Requirements + +The following must be true of the cluster that will be upgraded: + +1. The cluster has three or more etcd nodes. +1. The cluster has two or more controlplane nodes. +1. The cluster has two or more worker nodes. +1. 
The Ingress, DNS, and other addons are schedulable to a number of nodes that exceeds the maximum number of unavailable worker nodes, also called the batch size. By default, the maximum number of unavailable worker nodes is 10 percent of worker nodes, rounded down to the nearest node, with a minimum batch size of one node.
+
+### 3. Workload Requirements
+
+The following must be true of the cluster's applications:
+
+1. The application and Ingress are deployed across a number of nodes exceeding the maximum number of unavailable worker nodes, also called the batch size. By default, the maximum number of unavailable worker nodes is 10 percent of worker nodes, rounded down to the nearest node, with a minimum batch size of one node.
+1. The applications must make use of liveness and readiness probes.
+
+For information on how to use node selectors to assign pods to nodes, refer to the [official Kubernetes documentation.](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/)
+
+For information on configuring the number of replicas for each addon, refer to [this section.]({{}}/rke/latest/en/upgrades/configuring-strategy/)
\ No newline at end of file
diff --git a/layouts/partials/seo.html b/layouts/partials/seo.html
new file mode 100644
index 00000000000..e1ae8ae8a14
--- /dev/null
+++ b/layouts/partials/seo.html
@@ -0,0 +1,135 @@
+
+  {{ with .Params.metaTitle }}
+    {{ . }}
+  {{ else }}
+    {{ if eq .Section "tags" }}
+      {{ .Title }} Blog Posts by Rancher
+    {{ else }}
+      Rancher Docs: {{ .Title }}
+    {{ end }}
+  {{ end }}
+
+
+{{- .Scratch.Set "permalink" .Permalink -}}
+{{- if (and .Pages (not .IsHome)) -}}
+  {{/*
+    Hugo doesn't generate permalinks for lists with the page number in them,
+    which makes all the pages of a list look like the same page to a search
+    engine, which is bad.
+ */}} + + {{- $by := .Params.pageBy | default .Site.Params.pageBy | default "default" -}} + {{- $limit := .Site.Params.pageLimit | default 10 -}} + + {{- if (eq .Site.Params.pageBy "newest") -}} + {{- $paginator := .Paginate .Pages.ByDate.Reverse $limit -}} + {{- .Scratch.Set "paginator" $paginator -}} + {{- else if (eq .Site.Params.pageBy "title") -}} + {{- $paginator := .Paginate .Pages.ByTitle $limit -}} + {{- .Scratch.Set "paginator" $paginator -}} + {{- else -}} + {{- $paginator := .Paginate $limit -}} + {{- .Scratch.Set "paginator" $paginator -}} + {{- end -}} + + {{- $paginator := .Scratch.Get "paginator" -}} + {{- if (gt $paginator.PageNumber 1) -}} + {{ .Scratch.Set "permalink" ($paginator.URL | absURL) }} + {{- end -}} + + {{ with $paginator.Prev -}} + + {{- end }} + {{ with $paginator.Next -}} + + {{- end }} +{{- end -}} + + {{ $permalink := .Scratch.Get "permalink" }} + {{ if .Params.canonical }} + + {{ end }} + + {{ if .RSSLink -}} + + {{- end }} + + {{ if eq .Section "tags" }} + + {{ else }} + + {{ end }} + + + + + + + + + + + + + + + + + {{ range .Params.categories }}{{ end }} + {{ if isset .Params "date" }}{{ end }} + +{{- if .IsHome -}} + +{{- else if .IsPage -}} + +{{ end }} diff --git a/nginx.conf b/nginx.conf index cbe1b9139c8..564352c0877 100644 --- a/nginx.conf +++ b/nginx.conf @@ -117,6 +117,49 @@ map $request_uri $redirect_uri { ~^/docs/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/?$ /docs/rancher/v2.x/en/upgrades/upgrades/ha/; ~^/docs/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/?$ /docs/rancher/v2.x/en/upgrades/upgrades/single-node/; ~^/docs/rke/latest/en/installation/os/?$ /docs/rke/latest/en/os/; + + ~^/docs/rancher/v2.x/en/k8s-in-rancher/nodes/?$ /docs/rancher/v2.x/en/cluster-admin/nodes/; + ~^/docs/rancher/v2.x/en/installation/air-gap-high-availability/install-rancher/?$ /docs/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/; + ~^/docs/rancher/v2.x/en/installation/k8s-install-server-install/?$ /docs/rancher/v2.x/en/installation/options/helm2/rke-add-on/layer-4-lb/; + ~^/docs/rancher/v1.0/en/infrastructure/hosts/?$ /docs/rancher/v1.0/en/rancher-ui/infrastructure/hosts/; + ~^/docs/rancher/v2.x/en/cluster-admin/cluster-access/kubeconfig/?$ /docs/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/; + ~^/docs/os/v1.0/en/configuration/custom-console/?$ /docs/os/latest/en/configuration/switching-consoles/; + ~^/docs/os/latest/en/configuration/switching-consoles/?$ /docs/os/v1.x/en/configuration/switching-consoles/; + ~^/docs/os/v1.1/en/configuration/custom-console/?$ /docs/os/v1.1/en/configuration/switching-consoles/; + ~^/docs/os/v1.1/en/system-services/built-in-system-services/?$ /docs/os/v1.1/en/boot-process/built-in-system-services/; + ~^/docs/os/v1.2/en/configuration/custom-console/?$ /docs/os/v1.2/en/configuration/switching-consoles/; + ~^/docs/os/v1.2/en/system-services/built-in-system-services/?$ /docs/os/v1.2/en/boot-process/built-in-system-services/; + ~^/docs/rancher/v2.x/en/removing-rancher/?$ /docs/rancher/v2.x/en/faq/removing-rancher/; + ~^/docs/rancher/v2.x/en/installation/ha/?$ /docs/rancher/v2.x/en/installation/k8s-install/; + ~^/docs/rancher/v2.x/en/installation/ha/helm-rancher/?$ /docs/rancher/v2.x/en/installation/k8s-install/helm-rancher/; + ~^/docs/rancher/v2.x/en/installation/other-installation-methods/single-node/?$ /docs/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/; + ~^/docs/rancher/v2.x/en/installation/air-gap/install-rancher/?$ 
/docs/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/; + ~^/docs/rancher/v1.0/en/api/v1/access-control/?$ /docs/rancher/v1.0/en/api/v1/api-keys/; + ~^/docs/os/latest/en/storage/additional-mounts/?$ /docs/os/v1.x/en/storage/additional-mounts/; + ~^/docs/os/v1.0/en/configuration/custom-rancheros-iso/?$ /docs/os/latest/custom-builds/custom-rancheros-iso/; + ~^/docs/os/v1.0/en/configuration/custom-kernels/?$ /docs/os/latest/custom-builds/custom-kernels/; + ~^/docs/rancher/v1.0/en/environments/?$ /docs/rancher/v1.0/en/configuration/environments/; + ~^/docs/os/v1.1/en/configuration/custom-kernels/?$ /docs/os/v1.1/en/custom-builds/custom-kernels/; + ~^/docs/os/v1.0/en/system-services/built-in-system-services/?$ /docs/os/latest/boot-process/built-in-system-services/; + ~^/docs/os/latest/custom-builds/custom-rancheros-iso/?$ /docs/os/v1.x/en/custom-builds/custom-rancheros-iso/; + ~^/docs/os/v1.0/en/system-services/?$ /docs/os/latest/en/system-services/adding-system-services/; + ~^/docs/os/v1.0/en/configuration/additional-mounts/?$ /docs/os/latest/en/storage/additional-mounts/; + ~^/docs/os/latest/custom-builds/custom-kernels/?$ /docs/os/v1.x/en/custom-builds/custom-kernels/; + ~^/docs/os/v1.1/en/system-services/?$ /docs/os/v1.1/en/system-services/adding-system-services/; + ~^/docs/os/v1.1/en/configuration/additional-mounts/?$ /docs/os/v1.1/en/storage/additional-mounts/; + ~^/docs/os/latest/boot-process/built-in-system-services/?$ /docs/os/v1.x/en/boot-process/built-in-system-services/; + ~^/docs/os/latest/en/system-services/adding-system-services/?$ /docs/os/v1.x/en/system-services/adding-system-services/; + ~^/docs/rancher/v1.0/en/cattle/rancher-compose/?$ /docs/rancher/v1.0/en/rancher-compose/; + ~^/docs/os/v1.1/en/configuration/custom-docker/?$ /docs/os/v1.1/en/configuration/switching-docker-versions/; + ~^/docs/os/v1.2/en/configuration/custom-kernels/?$ /docs/os/v1.x/en/custom-builds/custom-kernels/; + ~^/docs/os/v1.2/en/configuration/custom-rancheros-iso/?$ /docs/os/v1.x/en/custom-builds/custom-rancheros-iso/; + ~^/docs/os/v1.2/en/system-services/?$ /docs/os/v1.2/en/system-services/adding-system-services/; + ~^/docs/os/v1.2/en/configuration/additional-mounts/?$ /docs/os/v1.2/en/storage/additional-mounts/; + ~^/docs/rancher/v2.x/en/backups/rollbacks/?$ /docs/rancher/v2.x/en/upgrades/; + ~^/docs/rancher/v2.x/en/admin-settings/feature-flags/enable-not-default-storage-drivers/?$ /docs/rancher/v2.x/en/installation/options/feature-flags/enable-not-default-storage-drivers/; + ~^/docs/rancher/v2.x/en/installation/server-tags/?$ /docs/rancher/v2.x/en/installation/options/server-tags/; + ~^/rancher/v2.x/en/admin-settings/feature-flags/istio-virtual-service-ui/?$ /docs/rancher/v2.x/en/installation/options/feature-flags/istio-virtual-service-ui/; + ~^/docs/os/v1.1/en/configuration/custom-rancheros-iso/?$ /docs/os/v1.1/en/custom-builds/custom-rancheros-iso/; } server { diff --git a/scripts/converters/Dockerfile b/scripts/converters/Dockerfile index 4c907fe4bbc..1a30b8cf809 100644 --- a/scripts/converters/Dockerfile +++ b/scripts/converters/Dockerfile @@ -8,14 +8,15 @@ RUN apt-get autoclean RUN pip3 install WeasyPrint -COPY fonts/ /usr/share/fonts/truetype/ - WORKDIR /doc_tools +COPY fonts/ fonts/ COPY css css/ COPY images images/ COPY templates templates/ COPY headers headers/ COPY scripts scripts/ +RUN ls -la fonts + ENTRYPOINT ["scripts/entrypoint.sh"] diff --git a/scripts/converters/css/style-portrait.css b/scripts/converters/css/style-portrait.css index 
c07b2789ef1..e6bcd2303ef 100644 --- a/scripts/converters/css/style-portrait.css +++ b/scripts/converters/css/style-portrait.css @@ -2,11 +2,18 @@ Theme Name: Linux Academy Study Guide Template 08-14-2019 */ -@font-face {font-family: Poppins;src: url(./fonts/Poppins/Poppins-Regular.ttf);} -@font-face {font-family: Roboto;src: url(./fonts/Roboto/Roboto-Regular.ttf);} -@font-face {font-family: PoppinsExtraLight; src: url(./fonts/Poppins/Poppins-ExtraLight.ttf);} +/* +#@font-face {font-family: Poppins;src: url(fonts/Poppins/Poppins-Regular.ttf);} +@font-face {font-family: Poppins;src: url('https://fonts.googleapis.com/css?family=Poppins&display=swap');} +@font-face {font-family: Roboto;src: url(fonts/truetype/Roboto/Roboto-Regular.ttf);} +@font-face {font-family: PoppinsExtraLight; src: url(fonts/truetype/Poppins/Poppins-ExtraLight.ttf);} +*/ + /* This lighter one is only used as H1, and in the table of contents */ +font-family: 'Poppins', sans-serif; +font-family: 'Roboto', sans-serif; + @page :first { size: portrait; @@ -14,7 +21,7 @@ Theme Name: Linux Academy Study Guide Template 08-14-2019 border-left-style: none; background:none; background: url("../images/rancher-logo-stacked-color.png") no-repeat left; - background-size: 10cm; + background-size: 50cm; background-position: top 1cm left; margin-top:1cm; margin-bottom:1cm; @@ -23,7 +30,7 @@ Theme Name: Linux Academy Study Guide Template 08-14-2019 @top-left { background: #000; color:#fff; - content: "v2.3.4"; + content: "v2.3.5"; height: 1cm; text-align: center; width: 5cm; diff --git a/scripts/converters/templates/default.html b/scripts/converters/templates/default.html index f895d65d169..edfcb20d972 100644 --- a/scripts/converters/templates/default.html +++ b/scripts/converters/templates/default.html @@ -1,6 +1,9 @@ + diff --git a/src/img/rancher/open-rancher-app.png b/src/img/rancher/open-rancher-app.png new file mode 100644 index 00000000000..2817d0efe20 Binary files /dev/null and b/src/img/rancher/open-rancher-app.png differ diff --git a/src/img/rancher/search-app-registrations.png b/src/img/rancher/search-app-registrations.png new file mode 100644 index 00000000000..4ab244da885 Binary files /dev/null and b/src/img/rancher/search-app-registrations.png differ diff --git a/static/img/rancher/k3s-server-storage.svg b/static/img/rancher/k3s-server-storage.svg new file mode 100644 index 00000000000..45fe9f58ac7 --- /dev/null +++ b/static/img/rancher/k3s-server-storage.svg @@ -0,0 +1,3 @@ + + +
[k3s-server-storage.svg — diagram text: server nodes in a K3s cluster sit behind a load balancer and store the cluster data in an external datastore]
\ No newline at end of file diff --git a/static/img/rancher/new-app-registration-1.png b/static/img/rancher/new-app-registration-1.png new file mode 100644 index 00000000000..8fed06426f1 Binary files /dev/null and b/static/img/rancher/new-app-registration-1.png differ diff --git a/static/img/rancher/new-app-registration-2.png b/static/img/rancher/new-app-registration-2.png new file mode 100644 index 00000000000..0b33711a383 Binary files /dev/null and b/static/img/rancher/new-app-registration-2.png differ diff --git a/static/img/rancher/rke-server-storage.svg b/static/img/rancher/rke-server-storage.svg new file mode 100644 index 00000000000..f5529ef35c7 --- /dev/null +++ b/static/img/rancher/rke-server-storage.svg @@ -0,0 +1,3 @@ + + +
[rke-server-storage.svg — diagram text: an RKE cluster of nodes with the controlplane, etcd, and worker roles behind a load balancer; etcd on these nodes holds the cluster data]
\ No newline at end of file diff --git a/static/img/rancher/select-client-secret.png b/static/img/rancher/select-client-secret.png new file mode 100644 index 00000000000..5533bc42d8d Binary files /dev/null and b/static/img/rancher/select-client-secret.png differ diff --git a/static/img/rancher/select-required-permissions-1.png b/static/img/rancher/select-required-permissions-1.png new file mode 100644 index 00000000000..d18c06ef1c2 Binary files /dev/null and b/static/img/rancher/select-required-permissions-1.png differ diff --git a/static/img/rancher/select-required-permissions-2.png b/static/img/rancher/select-required-permissions-2.png new file mode 100644 index 00000000000..d6e3459cfa5 Binary files /dev/null and b/static/img/rancher/select-required-permissions-2.png differ diff --git a/static/img/rancher/shibboleth-with-openldap-groups.svg b/static/img/rancher/shibboleth-with-openldap-groups.svg new file mode 100644 index 00000000000..7f3694c842e --- /dev/null +++ b/static/img/rancher/shibboleth-with-openldap-groups.svg @@ -0,0 +1,3 @@ + + +
[shibboleth-with-openldap-groups.svg — diagram text, two flows between Rancher, Shibboleth, and OpenLDAP. First-time login to Rancher: an existing OpenLDAP group member enters a username and password, Rancher redirects the user to Shibboleth, Shibboleth validates the credentials and provides user details from OpenLDAP, and a SAML assertion is returned to Rancher with user attributes, including groups; the user can then access resources that the group has permissions for. Adding OpenLDAP group permissions to Rancher resources: a Rancher admin or user with sufficient privileges searches for groups, the groups are provided to Rancher, and the admin selects a group to grant permissions on a resource such as a cluster, project, or namespace.]
\ No newline at end of file diff --git a/yarn.lock b/yarn.lock index 9f22028add4..e0a4ab425b0 100644 --- a/yarn.lock +++ b/yarn.lock @@ -658,9 +658,9 @@ acorn-walk@^6.0.1: integrity sha512-OtUw6JUTgxA2QoqqmrmQ7F2NYqiBPi/L2jqHyFtllhOUvXYQXf0Z1CYUinIfyT4bTCGmrA7gX9FvHA81uzCoVw== acorn@^5.5.3: - version "5.7.3" - resolved "https://registry.yarnpkg.com/acorn/-/acorn-5.7.3.tgz#67aa231bf8812974b85235a96771eb6bd07ea279" - integrity sha512-T/zvzYRfbVojPWahDsE5evJdHb3oJoQfFbsrKM7w5Zcs++Tr257tia3BmMP8XYVjp1S9RZXQMh7gao96BlqZOw== + version "5.7.4" + resolved "https://registry.yarnpkg.com/acorn/-/acorn-5.7.4.tgz#3e8d8a9947d0599a1796d10225d7432f4a4acf5e" + integrity sha512-1D++VG7BhrtvQpNbBzovKNc1FLGGEE/oGe7b9xJm/RFHMBeUaUGpluV9RLjZa47YFdPcDAenEYuq9pQPcMdLJg== acorn@^6.0.1: version "6.1.0"