Merge branch 'master' into rke-macports

This commit is contained in:
Catherine Luse
2020-04-14 14:45:39 -07:00
committed by GitHub
370 changed files with 7439 additions and 3997 deletions
+6 -6
@@ -69,7 +69,7 @@
</div>
<div class="buttons-container">
-<a href="{{< baseurl >}}/rancher/v2.x/en/v1.6-migration/">
+<a href="{{<baseurl>}}/rancher/v2.x/en/v1.6-migration/">
<button class="button text">
<span>Read More</span>
</button>
@@ -110,7 +110,7 @@
<p class="description-label">Rancher manages all of your Kubernetes clusters everywhere, unifies them under centralized RBAC, monitors them and lets you easily deploy and manage workloads through an intuitive user interface.</p>
<div class="buttons-container">
-<a href="{{< baseurl >}}/rancher/v2.x/en/">
+<a href="{{<baseurl>}}/rancher/v2.x/en/">
<button class="button text">
<span>Read the docs</span>
</button>
@@ -164,7 +164,7 @@
<p class="description-label">RancherOS is the lightest, easiest way to run Docker in production. Engineered from the ground up for security and speed, it runs all system services and user workloads within Docker containers.</p>
<div class="buttons-container">
-<a href="{{< baseurl >}}/os/v1.x/en/">
+<a href="{{<baseurl>}}/os/v1.x/en/">
<button class="button text">
<span>Read the docs</span>
</button>
@@ -191,7 +191,7 @@
<p class="description-label">Rancher Kubernetes Engine (RKE) is an extremely simple, lightning fast Kubernetes installer that works everywhere.</p>
<div class="buttons-container">
-<a href="{{< baseurl >}}/rke/v0.1.x/en/">
+<a href="{{<baseurl>}}/rke/v0.1.x/en/">
<button class="button text">
<span>Read the docs</span>
</button>
@@ -215,10 +215,10 @@
<hr/>
-<p class="description-label">Lightweight Kubernetes. Easy to install, half the memory, all in a binary less than 40mb.</p>
+<p class="description-label">Lightweight Kubernetes. Easy to install, half the memory, all in a binary less than 50mb.</p>
<div class="buttons-container">
-<a href="{{< baseurl >}}/k3s/latest/en/">
+<a href="{{<baseurl>}}/k3s/latest/en/">
<button class="button text">
<span>Read the docs</span>
</button>
+2 -2
@@ -4,7 +4,7 @@ shortTitle: K3s
name: "menu"
---
-Lightweight Kubernetes. Easy to install, half the memory, all in a binary less than 50mb.
+Lightweight Kubernetes. Easy to install, half the memory, all in a binary of less than 50mb.
Great for:
@@ -12,7 +12,7 @@ Great for:
* IoT
* CI
* ARM
-* Situations where a PhD in k8s clusterology is infeasible
+* Situations where a PhD in K8s clusterology is infeasible
# What is K3s?
+66
@@ -10,11 +10,14 @@ This section contains advanced information describing the different ways you can
- [Auto-deploying manifests](#auto-deploying-manifests)
- [Using Docker as the container runtime](#using-docker-as-the-container-runtime)
- [Secrets Encryption Config (Experimental)](#secrets-encryption-config-experimental)
- [Running K3s with RootlessKit (Experimental)](#running-k3s-with-rootlesskit-experimental)
- [Node labels and taints](#node-labels-and-taints)
- [Starting the server with the installation script](#starting-the-server-with-the-installation-script)
- [Additional preparation for Alpine Linux setup](#additional-preparation-for-alpine-linux-setup)
- [Running K3d (K3s in Docker) and docker-compose](#running-k3d-k3s-in-docker-and-docker-compose)
- [Enabling legacy iptables on Raspbian Buster](#enabling-legacy-iptables-on-raspbian-buster)
- [Experimental SELinux Support](#experimental-selinux-support)
# Auto-Deploying Manifests
@@ -30,6 +33,45 @@ K3s will generate config.toml for containerd in `/var/lib/rancher/k3s/agent/etc/
The `config.toml.tmpl` file is treated as a Go template, and the `config.Node` structure is passed to it. The following is an example of how to use the structure to customize the configuration file: https://github.com/rancher/k3s/blob/master/pkg/agent/templates/templates.go#L16-L32
# Secrets Encryption Config (Experimental)
As of v1.17.4+k3s1, K3s added the experimental ability to enable secrets encryption at rest by passing the `--secrets-encryption` flag on a server. This flag will automatically do the following:
- Generate an AES-CBC key
- Generate an encryption config file with the generated key
```
{
"kind": "EncryptionConfiguration",
"apiVersion": "apiserver.config.k8s.io/v1",
"resources": [
{
"resources": [
"secrets"
],
"providers": [
{
"aescbc": {
"keys": [
{
"name": "aescbckey",
"secret": "xxxxxxxxxxxxxxxxxxx"
}
]
}
},
{
"identity": {}
}
]
}
]
}
```
- Pass the config to the kube-apiserver as `--encryption-provider-config`
Once enabled, any newly created secret will be encrypted with this key. Note that if you disable encryption, any encrypted secrets will not be readable until you enable encryption again.
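K3s generates the key for you, but for illustration, the `secret` field in the config above is just a base64-encoded 32-byte key. A sketch of producing a value of the same shape (not part of the K3s tooling):

```shell
# Generate 32 random bytes and base64-encode them; this matches the shape
# of the value K3s places in the "secret" field of the encryption config.
AES_KEY=$(head -c 32 /dev/urandom | base64)
echo "$AES_KEY"
```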
# Running K3s with RootlessKit (Experimental)
> **Warning:** This feature is experimental.
@@ -162,3 +204,27 @@ Alternatively the `docker run` command can also be used:
-e K3S_TOKEN=${NODE_TOKEN} \
--privileged rancher/k3s:vX.Y.Z
# Enabling legacy iptables on Raspbian Buster
Raspbian Buster defaults to using `nftables` instead of `iptables`. **K3S** networking features require `iptables` and do not work with `nftables`. Follow the steps below to configure **Buster** to use legacy `iptables`:
```
sudo iptables -F
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo reboot
```
# Experimental SELinux Support
As of release v1.17.4+k3s1, experimental support for SELinux has been added to K3s's embedded containerd. If you are installing K3s on a system where SELinux is enabled by default (such as CentOS), you must ensure the proper SELinux policies have been installed. The [install script]({{<baseurl>}}/k3s/latest/en/installation/install-options/#installation-script-options) will fail if they are not. The necessary policies can be installed with the following commands:
```
yum install -y container-selinux selinux-policy-base
rpm -i https://rpm.rancher.io/k3s-selinux-0.1.1-rc1.el7.noarch.rpm
```
To force the install script to log a warning rather than fail, you can set the following environment variable: `INSTALL_K3S_SELINUX_WARN=true`.
You can turn off SELinux enforcement in the embedded containerd by launching K3s with the `--disable-selinux` flag.
Note that support for SELinux in containerd is still under development. Progress can be tracked in [this pull request](https://github.com/containerd/cri/pull/1246).
+2 -2
@@ -33,7 +33,7 @@ Single server clusters can meet a variety of use cases, but for environments whe
* An **external datastore** (as opposed to the embedded SQLite datastore used in single-server setups)
<figcaption>K3s Architecture with a High-availability Server</figcaption>
-![Architecture]({{< baseurl >}}/img/rancher/k3s-architecture-ha-server.png)
+![Architecture]({{<baseurl>}}/img/rancher/k3s-architecture-ha-server.png)
### Fixed Registration Address for Agent Nodes
@@ -41,7 +41,7 @@ In the high-availability server configuration, each node must also register with
After registration, the agent nodes establish a connection directly to one of the server nodes.
-![k3s HA]({{< baseurl >}}/img/k3s/k3s-production-setup.svg)
+![k3s HA]({{<baseurl>}}/img/k3s/k3s-production-setup.svg)
# How Agent Node Registration Works
+5 -6
@@ -3,16 +3,15 @@ title: "Installation"
weight: 20
---
-This section contains instructions for installing K3s in various environments. Please ensure you have met the [Node Requirements]({{< baseurl >}}/k3s/latest/en/installation/node-requirements/) before you begin installing K3s.
+This section contains instructions for installing K3s in various environments. Please ensure you have met the [Installation Requirements]({{< baseurl >}}/k3s/latest/en/installation/installation-requirements/) before you begin installing K3s.
-[Installation and Configuration Options]({{< baseurl >}}/k3s/latest/en/installation/install-options/) provides guidance on the options available to you when installing K3s.
+[Installation and Configuration Options]({{<baseurl>}}/k3s/latest/en/installation/install-options/) provides guidance on the options available to you when installing K3s.
-[High Availability with an External DB]({{<baseurl>}}/k3s/latest/en/installation/ha/) details how to set up an HA K3s cluster backed by an external datastore such as MySQL, PostgreSQL, or etcd.
+[High Availability with an External DB]({{< baseurl >}}/k3s/latest/en/installation/ha/) details how to set up an HA K3s cluster backed by an external datastore such as MySQL, PostgreSQL, or etcd.
-[High Availability with Embedded DB (Experimental)]({{<baseurl>}}/k3s/latest/en/installation/ha-embedded/) details how to set up an HA K3s cluster that leverages a built-in distributed database.
+[High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded/) details how to set up an HA K3s cluster that leverages a built-in distributed database.
-[Air-Gap Installation]({{< baseurl >}}/k3s/latest/en/installation/airgap/) details how to set up K3s in environments that do not have direct access to the Internet.
+[Air-Gap Installation]({{<baseurl>}}/k3s/latest/en/installation/airgap/) details how to set up K3s in environments that do not have direct access to the Internet.
### Uninstalling
@@ -3,77 +3,115 @@ title: "Air-Gap Install"
weight: 60
---
-In this guide, we are assuming you have created your nodes in your air-gap environment and have a secure Docker private registry on your bastion server.
+You can install K3s in an air-gapped environment using two different methods: you can either deploy a private registry and mirror docker.io, or you can manually deploy the images, which is suitable for small clusters.
-# Installation Outline
+# Private Registry Method
1. [Prepare Images Directory](#prepare-images-directory)
2. [Create Registry YAML](#create-registry-yaml)
3. [Install K3s](#install-k3s)
This document assumes you have already created your nodes in your air-gap environment and have a secure Docker private registry on your bastion host.
If you have not yet set up a private Docker registry, refer to the official documentation [here](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry).
### Prepare Images Directory
### Create the Registry YAML
Follow the [Private Registry Configuration]({{< baseurl >}}/k3s/latest/en/installation/private-registry) guide to create and configure the registry.yaml file.
Once you have completed this, you may now go to the [Install K3s](#install-k3s) section below.
# Manually Deploy Images Method
We are assuming you have created your nodes in your air-gap environment.
This method requires you to manually deploy the necessary images to each node and is appropriate for edge deployments where running a private registry is not practical.
### Prepare the Images Directory and K3s Binary
Obtain the images tar file for your architecture from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be running.
-Place the tar file in the `images` directory before starting K3s on each node, for example:
+Place the tar file in the `images` directory, for example:
```sh
sudo mkdir -p /var/lib/rancher/k3s/agent/images/
sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/
```
### Create Registry YAML
Create the registries.yaml file at `/etc/rancher/k3s/registries.yaml`. This will tell K3s the necessary details to connect to your private registry.
The registries.yaml file should look like this before plugging in the necessary information:
Place the k3s binary at /usr/local/bin/k3s and ensure it is executable.
```
---
mirrors:
customreg:
endpoint:
- "https://ip-to-server:5000"
configs:
customreg:
auth:
username: xxxxxx # this is the registry username
password: xxxxxx # this is the registry password
tls:
cert_file: <path to the cert file used in the registry>
key_file: <path to the key file used in the registry>
ca_file: <path to the ca file used in the registry>
```
Follow the steps in the next section to install K3s.
Note: at this time, only secure registries (SSL with a custom CA) are supported with K3s.
-# Install K3s
+### Install K3s
Only after you have completed either the [Private Registry Method](#private-registry-method) or the [Manually Deploy Images Method](#manually-deploy-images-method) above should you install K3s.
-Obtain the K3s binary from the [releases](https://github.com/rancher/k3s/releases) page, matching the same version used to get the airgap images tar.
-Also obtain the K3s install script at https://get.k3s.io
+Obtain the K3s binary from the [releases](https://github.com/rancher/k3s/releases) page, matching the same version used to get the airgap images.
+Obtain the K3s install script at https://get.k3s.io
-Place the binary in `/usr/local/bin` on each node.
-Place the install script anywhere on each node, name it `install.sh`.
+Place the binary in `/usr/local/bin` on each node and ensure it is executable.
+Place the install script anywhere on each node, and name it `install.sh`.
Install K3s on each server:
### Install Options
You can install K3s on one or more servers as described below.
{{% tabs %}}
{{% tab "Single Server Configuration" %}}
To install K3s on a single server, simply do the following on the server node.
```
INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh
```
-Install K3s on each agent:
+Then, to optionally add agents, do the following on each agent node. Take care to replace `myserver` with the IP or valid DNS name of the server, and replace `mynodetoken` with the node token from the server, typically found at `/var/lib/rancher/k3s/server/node-token`.
```
INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken ./install.sh
```
-Note, take care to ensure you replace `myserver` with the IP or valid DNS of the server and replace `mynodetoken` with the node-token from the server.
-The node-token is on the server at `/var/lib/rancher/k3s/server/node-token`
{{% /tab %}}
{{% tab "High Availability Configuration" %}}
Reference the [High Availability with an External DB]({{< baseurl >}}/k3s/latest/en/installation/ha) or [High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded) guides. You will be tweaking install commands so you specify `INSTALL_K3S_SKIP_DOWNLOAD=true` and run your install script locally instead of via curl. You will also utilize `INSTALL_K3S_EXEC='args'` to supply any arguments to k3s.
For example, step two of the High Availability with an External DB guide mentions the following:
```
curl -sfL https://get.k3s.io | sh -s - server \
--datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name"
```
Instead, you would modify such examples like below:
```
INSTALL_K3S_SKIP_DOWNLOAD=true INSTALL_K3S_EXEC='server --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name"' ./install.sh
```
{{% /tab %}}
{{% /tabs %}}
>**Note:** K3s additionally provides a `--resolv-conf` flag for kubelets, which may help with configuring DNS in air-gap networks.
# Upgrading
### Install Script Method
Upgrading an air-gap environment can be accomplished in the following manner:
1. Download the new air-gap images (tar file) from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be upgrading to. Place the tar in the `/var/lib/rancher/k3s/agent/images/` directory on each node. Delete the old tar file.
2. Copy and replace the old K3s binary in `/usr/local/bin` on each node. Copy over the install script at https://get.k3s.io (as it is possible it has changed since the last release). Run the script again just as you had done in the past with the same environment variables.
3. Restart the K3s service (if not restarted automatically by installer).
### Automated Upgrades Method
As of v1.17.4+k3s1, K3s supports [automated upgrades]({{< baseurl >}}/k3s/latest/en/upgrades/automated/). To enable this in air-gapped environments, you must ensure the required images are available in your private registry.
You will need the version of rancher/k3s-upgrade that corresponds to the version of K3s you intend to upgrade to. Note, the image tag replaces the `+` in the K3s release with a `-` because Docker images do not support `+`.
You will also need the versions of system-upgrade-controller and kubectl that are specified in the system-upgrade-controller manifest YAML that you will deploy. Check for the latest release of the system-upgrade-controller [here](https://github.com/rancher/system-upgrade-controller/releases/latest) and download the system-upgrade-controller.yaml to determine the versions you need to push to your private registry. For example, in release v0.4.0 of the system-upgrade-controller, these images are specified in the manifest YAML:
```
rancher/system-upgrade-controller:v0.4.0
rancher/kubectl:v0.17.0
```
Once you have added the necessary rancher/k3s-upgrade, rancher/system-upgrade-controller, and rancher/kubectl images to your private registry, follow the [automated upgrades]({{< baseurl >}}/k3s/latest/en/upgrades/automated/) guide.
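As a quick sketch of the tag mangling described above (the `+` in a K3s release becomes a `-` in the image tag, since Docker tags do not allow `+`):

```shell
# Map a K3s release version to the corresponding k3s-upgrade image tag.
K3S_VERSION="v1.17.4+k3s1"
UPGRADE_TAG=$(echo "$K3S_VERSION" | tr '+' '-')
echo "rancher/k3s-upgrade:${UPGRADE_TAG}"
# → rancher/k3s-upgrade:v1.17.4-k3s1
```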
@@ -14,6 +14,7 @@ K3s supports the following datastore options:
* Embedded [SQLite](https://www.sqlite.org/index.html)
* [PostgreSQL](https://www.postgresql.org/) (certified against versions 10.7 and 11.5)
* [MySQL](https://www.mysql.com/) (certified against version 5.7)
* [MariaDB](https://mariadb.org/) (certified against version 10.3.20)
* [etcd](https://etcd.io/) (certified against version 3.3.15)
* Embedded [DQLite](https://dqlite.io/) for High Availability (experimental)
@@ -50,9 +51,9 @@ If you only supply `postgres://` as the endpoint, K3s will attempt to do the fo
{{% /tab %}}
-{{% tab "MySQL" %}}
+{{% tab "MySQL / MariaDB" %}}
-In its most common form, the `datastore-endpoint` parameter for MySQL has the following format:
+In its most common form, the `datastore-endpoint` parameter for MySQL and MariaDB has the following format:
`mysql://username:password@tcp(hostname:3306)/database-name`
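For illustration, the endpoint string above can be assembled from its parts like so (hypothetical credentials and hostname; substitute your own database details):

```shell
# Hypothetical values; replace with your own database details.
DB_USER="k3s"
DB_PASS="changeme"
DB_HOST="mysql.example.com"
DB_NAME="kubernetes"

# Matches the format: mysql://username:password@tcp(hostname:3306)/database-name
DATASTORE_ENDPOINT="mysql://${DB_USER}:${DB_PASS}@tcp(${DB_HOST}:3306)/${DB_NAME}"
echo "$DATASTORE_ENDPOINT"
```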
@@ -94,4 +95,4 @@ k3s server
```
### Embedded DQLite for HA (Experimental)
-K3s's use of DQLite is similar to its use of SQLite. It is simple to set up and manage. As such, there is no external configuration or additional steps to take in order to use this option. Please see [High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded/) for instructions on how to run with this option.
+K3s's use of DQLite is similar to its use of SQLite. It is simple to set up and manage. As such, there is no external configuration or additional steps to take in order to use this option. Please see [High Availability with Embedded DB (Experimental)]({{<baseurl>}}/k3s/latest/en/installation/ha-embedded/) for instructions on how to run with this option.
@@ -3,7 +3,7 @@ title: High Availability with an External DB
weight: 30
---
->**Note:** Official support for installing Rancher on a Kubernetes cluster was introduced in our v1.0.0 release.
+> **Note:** Official support for installing Rancher on a Kubernetes cluster was introduced in our v1.0.0 release.
This section describes how to install a high-availability K3s cluster with an external database.
@@ -28,10 +28,10 @@ Setting up an HA cluster requires the following steps:
4. [Join agent nodes](#4-optional-join-agent-nodes)
### 1. Create an External Datastore
-You will first need to create an external datastore for the cluster. See the [Cluster Datastore Options]({{< baseurl >}}/k3s/latest/en/installation/datastore/) documentation for more details.
+You will first need to create an external datastore for the cluster. See the [Cluster Datastore Options]({{<baseurl>}}/k3s/latest/en/installation/datastore/) documentation for more details.
### 2. Launch Server Nodes
-K3s requires two or more server nodes for this HA configuration. See the [Node Requirements]({{< baseurl >}}/k3s/latest/en/installation/node-requirements/) guide for minimum machine requirements.
+K3s requires two or more server nodes for this HA configuration. See the [Installation Requirements]({{<baseurl>}}/k3s/latest/en/installation/installation-requirements/) guide for minimum machine requirements.
When running the `k3s server` command on these nodes, you must set the `datastore-endpoint` parameter so that K3s knows how to connect to the external datastore.
@@ -50,22 +50,24 @@ To configure TLS certificates when launching server nodes, refer to the [datasto
By default, server nodes will be schedulable and thus your workloads can get launched on them. If you wish to have a dedicated control plane where no user workloads will run, you can use taints. The <span style='white-space: nowrap'>`node-taint`</span> parameter will allow you to configure nodes with taints, for example <span style='white-space: nowrap'>`--node-taint k3s-controlplane=true:NoExecute`</span>.
Once you've launched the `k3s server` process on all server nodes, ensure that the cluster has come up properly with `k3s kubectl get nodes`. You should see your server nodes in the Ready state.
### 3. Configure the Fixed Registration Address
Agent nodes need a URL to register against. This can be the IP or hostname of any of the server nodes, but in many cases those may change over time. For example, if you are running your cluster in a cloud that supports scaling groups, you may scale the server node group up and down over time, causing nodes to be created and destroyed and thus having different IPs from the initial set of server nodes. Therefore, you should have a stable endpoint in front of the server nodes that will not change over time. This endpoint can be set up using any number of approaches, such as:
* A layer-4 (TCP) load balancer
* Round-robin DNS
* Virtual or elastic IP addresses
-This endpoint can also be used for accessing the Kubernetes API. So you can, for example, modify your [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file to point to it instead of a specific node.
+This endpoint can also be used for accessing the Kubernetes API. So you can, for example, modify your [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file to point to it instead of a specific node. To avoid certificate errors in such a configuration, you should install the server with the `--tls-san YOUR_IP_OR_HOSTNAME_HERE` option. This option adds an additional hostname or IP as a Subject Alternative Name in the TLS cert, and it can be specified multiple times if you would like to access via both the IP and the hostname.
### 4. Optional: Join Agent Nodes
Because K3s server nodes are schedulable by default, the minimum number of nodes for an HA K3s server cluster is two server nodes and zero agent nodes. To add nodes designated to run your apps and services, join agent nodes to your cluster.
Joining agent nodes in an HA cluster is the same as joining agent nodes in a single server cluster. You just need to specify the URL the agent should register to and the token it should use.
```
K3S_TOKEN=SECRET k3s agent --server https://fixed-registration-address:6443
```
@@ -5,16 +5,18 @@ weight: 20
This page focuses on the options that can be used when you set up K3s for the first time:
-- [Installation script options](#installation-script-options)
-- [Installing K3s from the binary](#installing-k3s-from-the-binary)
+- [Options for installation with script](#options-for-installation-with-script)
+- [Options for installation from binary](#options-for-installation-from-binary)
+- [Registration options for the K3s server](#registration-options-for-the-k3s-server)
+- [Registration options for the K3s agent](#registration-options-for-the-k3s-agent)
For more advanced options, refer to [this page.]({{<baseurl>}}/k3s/latest/en/advanced)
# Installation Script Options
> Throughout the K3s documentation, you will see some options that can be passed in as both command flags and environment variables. For help with passing in options, refer to [How to Use Flags and Environment Variables.]({{<baseurl>}}/k3s/latest/en/installation/install-options/how-to-flags)
-As mentioned in the [Quick-Start Guide]({{< baseurl >}}/k3s/latest/en/quick-start/), you can use the installation script available at https://get.k3s.io to install K3s as a service on systemd and openrc based systems.
+### Options for Installation with Script
+As mentioned in the [Quick-Start Guide]({{<baseurl>}}/k3s/latest/en/quick-start/), you can use the installation script available at https://get.k3s.io to install K3s as a service on systemd and openrc based systems.
The simplest form of this command is as follows:
```sh
@@ -23,58 +25,25 @@ curl -sfL https://get.k3s.io | sh -
When using this method to install K3s, the following environment variables can be used to configure the installation:
- `INSTALL_K3S_SKIP_DOWNLOAD`
If set to true will not download K3s hash or binary.
- `INSTALL_K3S_SYMLINK`
If set to 'skip' will not create symlinks, 'force' will overwrite, default will symlink if command does not exist in path.
- `INSTALL_K3S_SKIP_START`
If set to true will not start K3s service.
- `INSTALL_K3S_VERSION`
Version of K3s to download from github. Will attempt to download the latest version if not specified.
- `INSTALL_K3S_BIN_DIR`
Directory to install K3s binary, links, and uninstall script to, or use `/usr/local/bin` as the default.
- `INSTALL_K3S_BIN_DIR_READ_ONLY`
If set to true will not write files to `INSTALL_K3S_BIN_DIR`, forces setting `INSTALL_K3S_SKIP_DOWNLOAD=true`.
- `INSTALL_K3S_SYSTEMD_DIR`
Directory to install systemd service and environment files to, or use `/etc/systemd/system` as the default.
- `INSTALL_K3S_EXEC`
Command with flags to use for launching K3s in the service. If the command is not specified, it will default to "agent" if `K3S_URL` is set or "server" if it is not set.
The final systemd command resolves to a combination of this environment variable and script args. To illustrate this, the following commands result in the same behavior of registering a server without flannel:
```sh
curl ... | INSTALL_K3S_EXEC="--no-flannel" sh -s -
curl ... | INSTALL_K3S_EXEC="server --no-flannel" sh -s -
curl ... | INSTALL_K3S_EXEC="server" sh -s - --no-flannel
curl ... | sh -s - server --no-flannel
curl ... | sh -s - --no-flannel
```
- `INSTALL_K3S_NAME`
Name of systemd service to create, will default from the K3s exec command if not specified. If specified the name will be prefixed with 'k3s-'.
- `INSTALL_K3S_TYPE`
Type of systemd service to create, will default from the K3s exec command if not specified.
| Environment Variable | Description |
|-----------------------------|---------------------------------------------|
| `INSTALL_K3S_SKIP_DOWNLOAD` | If set to true will not download K3s hash or binary. |
| `INSTALL_K3S_SYMLINK` | By default will create symlinks for the kubectl, crictl, and ctr binaries if the commands do not already exist in path. If set to 'skip' will not create symlinks and 'force' will overwrite. |
| `INSTALL_K3S_SKIP_START` | If set to true will not start K3s service. |
| `INSTALL_K3S_VERSION` | Version of K3s to download from GitHub. Will attempt to download the latest version if not specified. |
| `INSTALL_K3S_BIN_DIR` | Directory to install K3s binary, links, and uninstall script to, or use `/usr/local/bin` as the default. |
| `INSTALL_K3S_BIN_DIR_READ_ONLY` | If set to true will not write files to `INSTALL_K3S_BIN_DIR`, forces setting `INSTALL_K3S_SKIP_DOWNLOAD=true`. |
| `INSTALL_K3S_SYSTEMD_DIR` | Directory to install systemd service and environment files to, or use `/etc/systemd/system` as the default. |
| `INSTALL_K3S_EXEC` | Command with flags to use for launching K3s in the service. If the command is not specified and `K3S_URL` is set, it will default to "agent". If `K3S_URL` is not set, it will default to "server". For help, refer to [this example.]({{<baseurl>}}/k3s/latest/en/installation/install-options/how-to-flags/#example-b-install-k3s-exec) |
| `INSTALL_K3S_NAME` | Name of systemd service to create. Will default to 'k3s' if running K3s as a server and 'k3s-agent' if running K3s as an agent. If specified, the name will be prefixed with 'k3s-'. |
| `INSTALL_K3S_TYPE` | Type of systemd service to create. Will default from the K3s exec command if not specified. |
-Environment variables which begin with `K3S_` will be preserved for the systemd and openrc services to use. Setting `K3S_URL` without explicitly setting an exec command will default the command to "agent". When running the agent `K3S_TOKEN` must also be set.
+Environment variables which begin with `K3S_` will be preserved for the systemd and openrc services to use.
+Setting `K3S_URL` without explicitly setting an exec command will default the command to "agent".
+When running the agent `K3S_TOKEN` must also be set.
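The agent requirement above can be sketched as a pre-flight check (hypothetical values; the install script performs its own validation):

```shell
# Hypothetical values; replace with your server URL and node token.
K3S_URL="https://myserver:6443"
K3S_TOKEN="mynodetoken"

# K3S_URL implies the "agent" command, and an agent cannot join without a token.
if [ -n "$K3S_URL" ] && [ -z "$K3S_TOKEN" ]; then
  echo "error: K3S_TOKEN must be set when K3S_URL is set" >&2
  exit 1
fi
echo "ok: agent environment looks complete"
```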
# Installing K3s from the Binary
@@ -89,120 +58,13 @@ Command | Description
<span class='nowrap'>`k3s ctr`</span> | Run an embedded [ctr](https://github.com/projectatomic/containerd/blob/master/docs/cli.md). This is a CLI for containerd, the container daemon used by K3s. Useful for debugging.
<span class='nowrap'>`k3s help`</span> | Shows a list of commands or help for one command
-The `k3s server` and `k3s agent` commands have additional configuration options that can be viewed with <span class='nowrap'>`k3s server --help`</span> or <span class='nowrap'>`k3s agent --help`</span>. For convenience, that help text is presented here:
+The `k3s server` and `k3s agent` commands have additional configuration options that can be viewed with <span class='nowrap'>`k3s server --help`</span> or <span class='nowrap'>`k3s agent --help`</span>.
### Registration Options for the K3s Server

For details on configuring the K3s server, refer to the [server configuration reference.]({{<baseurl>}}/k3s/latest/en/installation/install-options/server-config)
### Registration Options for the K3s Agent

For details on configuring the K3s agent, refer to the [agent configuration reference.]({{<baseurl>}}/k3s/latest/en/installation/install-options/agent-config)
### Node Labels and Taints for Agents
K3s agents can be configured with the options `--node-label` and `--node-taint`, which add labels and taints to the kubelet. These options only apply labels and taints at registration time, so they can only be set once and cannot be changed after that by running K3s commands again.
Below is an example showing how to add labels and a taint:
```bash
--node-label foo=bar \
--node-label hello=world \
--node-taint key1=value1:NoExecute
```
If you want to change node labels and taints after node registration you should use `kubectl`. Refer to the official Kubernetes documentation for details on how to add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) and [node labels.](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node)
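As a hypothetical sketch (the node name `k3s-node-1` and the label and taint values below are placeholders), post-registration changes with `kubectl` look like:

```bash
# Add or change a label on an already-registered node:
kubectl label nodes k3s-node-1 hello=world --overwrite

# Add a taint, then remove it (the trailing "-" deletes the taint):
kubectl taint nodes k3s-node-1 key1=value1:NoExecute
kubectl taint nodes k3s-node-1 key1=value1:NoExecute-
```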
---
title: K3s Agent Configuration Reference
weight: 2
---
In this section, you'll learn how to configure the K3s agent.
> Throughout the K3s documentation, you will see some options that can be passed in as both command flags and environment variables. For help with passing in options, refer to [How to Use Flags and Environment Variables.]({{<baseurl>}}/k3s/latest/en/installation/install-options/how-to-flags)
- [Logging](#logging)
- [Cluster Options](#cluster-options)
- [Data](#data)
- [Node](#node)
- [Runtime](#runtime)
- [Networking](#networking)
- [Customized Flags](#customized-flags)
- [Experimental](#experimental)
- [Deprecated](#deprecated)
- [Node Labels and Taints for Agents](#node-labels-and-taints-for-agents)
- [K3s Agent CLI Help](#k3s-agent-cli-help)
### Logging
| Flag | Default | Description |
|------|---------|-------------|
| `-v` value | 0 | Number for the log level verbosity |
| `--vmodule` value | N/A | Comma-separated list of pattern=N settings for file-filtered logging |
| `--log value, -l` value | N/A | Log to file |
| `--alsologtostderr` | N/A | Log to standard error as well as file (if set) |
### Cluster Options
| Flag | Environment Variable | Description |
|------|----------------------|-------------|
| `--token value, -t` value | `K3S_TOKEN` | Token to use for authentication |
| `--token-file` value | `K3S_TOKEN_FILE` | Token file to use for authentication |
| `--server value, -s` value | `K3S_URL` | Server to connect to |
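For example, the same agent registration can be expressed with flags or with the corresponding environment variables. The server address and token below are placeholders:

```bash
# Using flags:
k3s agent --server https://my-k3s-server:6443 --token mynodetoken

# Equivalent, using environment variables:
K3S_URL=https://my-k3s-server:6443 K3S_TOKEN=mynodetoken k3s agent
```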
### Data
| Flag | Default | Description |
|------|---------|-------------|
| `--data-dir value, -d` value | "/var/lib/rancher/k3s" | Folder to hold state |
### Node
| Flag | Environment Variable | Description |
|------|----------------------|-------------|
| `--node-name` value | `K3S_NODE_NAME` | Node name |
| `--with-node-id` | N/A | Append id to node name |
| `--node-label` value | N/A | Registering and starting kubelet with set of labels |
| `--node-taint` value | N/A | Registering kubelet with set of taints |
### Runtime
| Flag | Default | Description |
|------|---------|-------------|
| `--docker` | N/A | Use docker instead of containerd |
| `--container-runtime-endpoint` value | N/A | Disable embedded containerd and use alternative CRI implementation |
| `--pause-image` value | "docker.io/rancher/pause:3.1" | Customized pause image for containerd or Docker sandbox |
| `--private-registry` value | "/etc/rancher/k3s/registries.yaml" | Private registry configuration file |
### Networking
| Flag | Environment Variable | Description |
|------|----------------------|-------------|
| `--node-ip value, -i` value | N/A | IP address to advertise for node |
| `--node-external-ip` value | N/A | External IP address to advertise for node |
| `--resolv-conf` value | `K3S_RESOLV_CONF` | Kubelet resolv.conf file |
| `--flannel-iface` value | N/A | Override default flannel interface |
| `--flannel-conf` value | N/A | Override default flannel config file |
### Customized Flags
| Flag | Description |
|------|--------------|
| `--kubelet-arg` value | Customized flag for kubelet process |
| `--kube-proxy-arg` value | Customized flag for kube-proxy process |
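These options pass flags straight through to the underlying processes. As an illustrative sketch (the server address, token, and the specific kubelet flag are assumptions, not K3s defaults):

```bash
# Raise the kubelet's pod limit on this agent; repeat --kubelet-arg to pass more flags:
k3s agent --server https://my-k3s-server:6443 --token mynodetoken \
  --kubelet-arg "max-pods=200"
```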
### Experimental
| Flag | Description |
|------|--------------|
| `--rootless` | Run rootless |
### Deprecated
| Flag | Environment Variable | Description |
|------|----------------------|-------------|
| `--no-flannel` | N/A | Use `--flannel-backend=none` |
| `--cluster-secret` value | `K3S_CLUSTER_SECRET` | Use `--token` |
### Node Labels and Taints for Agents
K3s agents can be configured with the options `--node-label` and `--node-taint`, which add labels and taints to the kubelet. These options only apply labels and taints at registration time, so they can only be set once and cannot be changed after that by running K3s commands again.
Below is an example showing how to add labels and a taint:
```bash
--node-label foo=bar \
--node-label hello=world \
--node-taint key1=value1:NoExecute
```
If you want to change node labels and taints after node registration you should use `kubectl`. Refer to the official Kubernetes documentation for details on how to add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) and [node labels.](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node)
### K3s Agent CLI Help
> If an option appears in brackets below, for example `[$K3S_URL]`, it means that the option can be passed in as an environment variable of that name.
```bash
NAME:
k3s agent - Run node agent
USAGE:
k3s agent [OPTIONS]
OPTIONS:
-v value (logging) Number for the log level verbosity (default: 0)
--vmodule value (logging) Comma-separated list of pattern=N settings for file-filtered logging
--log value, -l value (logging) Log to file
--alsologtostderr (logging) Log to standard error as well as file (if set)
--token value, -t value (cluster) Token to use for authentication [$K3S_TOKEN]
--token-file value (cluster) Token file to use for authentication [$K3S_TOKEN_FILE]
--server value, -s value (cluster) Server to connect to [$K3S_URL]
--data-dir value, -d value (agent/data) Folder to hold state (default: "/var/lib/rancher/k3s")
--node-name value (agent/node) Node name [$K3S_NODE_NAME]
--with-node-id (agent/node) Append id to node name
--node-label value (agent/node) Registering and starting kubelet with set of labels
--node-taint value (agent/node) Registering kubelet with set of taints
--docker (agent/runtime) Use docker instead of containerd
--container-runtime-endpoint value (agent/runtime) Disable embedded containerd and use alternative CRI implementation
--pause-image value (agent/runtime) Customized pause image for containerd or docker sandbox (default: "docker.io/rancher/pause:3.1")
--private-registry value (agent/runtime) Private registry configuration file (default: "/etc/rancher/k3s/registries.yaml")
--node-ip value, -i value (agent/networking) IP address to advertise for node
--node-external-ip value (agent/networking) External IP address to advertise for node
--resolv-conf value (agent/networking) Kubelet resolv.conf file [$K3S_RESOLV_CONF]
--flannel-iface value (agent/networking) Override default flannel interface
--flannel-conf value (agent/networking) Override default flannel config file
--kubelet-arg value (agent/flags) Customized flag for kubelet process
--kube-proxy-arg value (agent/flags) Customized flag for kube-proxy process
--rootless (experimental) Run rootless
--no-flannel (deprecated) use --flannel-backend=none
--cluster-secret value (deprecated) use --token [$K3S_CLUSTER_SECRET]
```
---
title: How to Use Flags and Environment Variables
weight: 3
---
Throughout the K3s documentation, you will see some options that can be passed in as both command flags and environment variables. The below examples show how these options can be passed in both ways.
### Example A: K3S_KUBECONFIG_MODE
The option to allow writing to the kubeconfig file is useful for allowing a K3s cluster to be imported into Rancher. Below are two ways to pass in the option.
Using the flag `--write-kubeconfig-mode 644`:
```bash
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
```
Using the environment variable `K3S_KUBECONFIG_MODE`:
```bash
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -
```
### Example B: INSTALL_K3S_EXEC
The environment variable `INSTALL_K3S_EXEC` sets the command and flags that the installation script passes to K3s. If a server or agent command is not specified in it, K3s will default to "agent" if `K3S_URL` is set, or "server" if it is not set.
The final systemd command resolves to a combination of this environment variable and script args. To illustrate this, the following commands result in the same behavior of registering a server without flannel:
```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--no-flannel" sh -s -
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --no-flannel" sh -s -
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - --no-flannel
curl -sfL https://get.k3s.io | sh -s - server --no-flannel
curl -sfL https://get.k3s.io | sh -s - --no-flannel
```
---
title: K3s Server Configuration Reference
weight: 1
---
In this section, you'll learn how to configure the K3s server.
> Throughout the K3s documentation, you will see some options that can be passed in as both command flags and environment variables. For help with passing in options, refer to [How to Use Flags and Environment Variables.]({{<baseurl>}}/k3s/latest/en/installation/install-options/how-to-flags)
- [Commonly Used Options](#commonly-used-options)
- [Database](#database)
- [Cluster Options](#cluster-options)
- [Client Options](#client-options)
- [Agent Options](#agent-options)
- [Agent Nodes](#agent-nodes)
- [Agent Runtime](#agent-runtime)
- [Agent Networking](#agent-networking)
- [Advanced Options](#advanced-options)
- [Logging](#logging)
- [Listeners](#listeners)
- [Data](#data)
- [Networking](#networking)
- [Customized Flags](#customized-flags)
- [Storage Class](#storage-class)
- [Kubernetes Components](#kubernetes-components)
- [Customized Flags for Kubernetes Processes](#customized-flags-for-kubernetes-processes)
- [Experimental Options](#experimental-options)
- [Deprecated Options](#deprecated-options)
- [K3s Server CLI Help](#k3s-server-cli-help)
# Commonly Used Options
### Database
| Flag | Environment Variable | Description |
|------|----------------------|-------------|
| `--datastore-endpoint` value | `K3S_DATASTORE_ENDPOINT` | Specify etcd, MySQL, Postgres, or SQLite (default) data source name |
| `--datastore-cafile` value | `K3S_DATASTORE_CAFILE` | TLS Certificate Authority file used to secure datastore backend communication |
| `--datastore-certfile` value | `K3S_DATASTORE_CERTFILE` | TLS certification file used to secure datastore backend communication |
| `--datastore-keyfile` value | `K3S_DATASTORE_KEYFILE` | TLS key file used to secure datastore backend communication |
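For instance, pointing the server at an external datastore uses a data source name in the endpoint. The hosts and credentials below are placeholders, and the exact DSN formats are an assumption; check the K3s datastore documentation for your backend:

```bash
# MySQL-backed datastore:
k3s server --datastore-endpoint="mysql://user:pass@tcp(db-host:3306)/k3s"

# PostgreSQL, with a CA file to verify the backend's TLS certificate:
k3s server \
  --datastore-endpoint="postgres://user:pass@db-host:5432/k3s" \
  --datastore-cafile /path/to/ca.pem
```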
### Cluster Options
| Flag | Environment Variable | Description |
|------|----------------------|-------------|
| `--token value, -t` value | `K3S_TOKEN` | Shared secret used to join a server or agent to a cluster |
| `--token-file` value | `K3S_TOKEN_FILE` | File containing the cluster-secret/token |
### Client Options
| Flag | Environment Variable | Description |
|------|----------------------|-------------|
| `--write-kubeconfig value, -o` value | `K3S_KUBECONFIG_OUTPUT` | Write kubeconfig for admin client to this file |
| `--write-kubeconfig-mode` value | `K3S_KUBECONFIG_MODE` | Write kubeconfig with this [mode.](https://en.wikipedia.org/wiki/Chmod) The option to allow writing to the kubeconfig file is useful for allowing a K3s cluster to be imported into Rancher. An example value is 644. |
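For example, to make the generated kubeconfig world-readable (mode 644), either form works:

```bash
# As a flag:
k3s server --write-kubeconfig-mode 644

# As an environment variable:
K3S_KUBECONFIG_MODE="644" k3s server
```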
# Agent Options
K3s agent options are available as server options because the server has the agent process embedded within.
### Agent Nodes
| Flag | Environment Variable | Description |
|------|----------------------|-------------|
| `--node-name` value | `K3S_NODE_NAME` | Node name |
| `--with-node-id` | N/A | Append id to node name |
| `--node-label` value | N/A | Registering and starting kubelet with set of labels |
| `--node-taint` value | N/A | Registering kubelet with set of taints |
### Agent Runtime
| Flag | Default | Description |
|------|---------|-------------|
| `--docker` | N/A | Use docker instead of containerd |
| `--container-runtime-endpoint` value | N/A | Disable embedded containerd and use alternative CRI implementation |
| `--pause-image` value | "docker.io/rancher/pause:3.1" | Customized pause image for containerd or Docker sandbox |
| `--private-registry` value | "/etc/rancher/k3s/registries.yaml" | Private registry configuration file |
### Agent Networking
| Flag | Environment Variable | Description |
|------|----------------------|-------------|
| `--node-ip value, -i` value | N/A | IP address to advertise for node |
| `--node-external-ip` value | N/A | External IP address to advertise for node |
| `--resolv-conf` value | `K3S_RESOLV_CONF` | Kubelet resolv.conf file |
| `--flannel-iface` value | N/A | Override default flannel interface |
| `--flannel-conf` value | N/A | Override default flannel config file |
# Advanced Options
### Logging
| Flag | Default | Description |
|------|---------|-------------|
| `-v` value | 0 | Number for the log level verbosity |
| `--vmodule` value | N/A | Comma-separated list of pattern=N settings for file-filtered logging |
| `--log value, -l` value | N/A | Log to file |
| `--alsologtostderr` | N/A | Log to standard error as well as file (if set) |
### Listeners
| Flag | Default | Description |
|------|---------|-------------|
| `--bind-address` value | 0.0.0.0 | k3s bind address |
| `--https-listen-port` value | 6443 | HTTPS listen port |
| `--advertise-address` value | node-external-ip/node-ip | IP address that apiserver uses to advertise to members of the cluster |
| `--advertise-port` value | 0 | Port that apiserver uses to advertise to members of the cluster (default: listen-port) |
| `--tls-san` value | N/A | Add additional hostname or IP as a Subject Alternative Name in the TLS cert |
### Data
| Flag | Default | Description |
|------|---------|-------------|
| `--data-dir value, -d` value | `/var/lib/rancher/k3s` or `${HOME}/.rancher/k3s` if not root | Folder to hold state |
### Networking
| Flag | Default | Description |
|------|---------|-------------|
| `--cluster-cidr` value | "10.42.0.0/16" | Network CIDR to use for pod IPs |
| `--service-cidr` value | "10.43.0.0/16" | Network CIDR to use for services IPs |
| `--cluster-dns` value | "10.43.0.10" | Cluster IP for coredns service. Should be in your service-cidr range |
| `--cluster-domain` value | "cluster.local" | Cluster Domain |
| `--flannel-backend` value | "vxlan" | One of 'none', 'vxlan', 'ipsec', 'host-gw', or 'wireguard' |
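For example, switching the flannel backend is a single server flag:

```bash
# Use the wireguard flannel backend instead of the default vxlan:
k3s server --flannel-backend=wireguard
```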
### Customized Flags
| Flag | Description |
|------|--------------|
| `--kube-apiserver-arg` value | Customized flag for kube-apiserver process |
| `--kube-scheduler-arg` value | Customized flag for kube-scheduler process |
| `--kube-controller-manager-arg` value | Customized flag for kube-controller-manager process |
| `--kube-cloud-controller-manager-arg` value | Customized flag for kube-cloud-controller-manager process |
### Storage Class
| Flag | Description |
|------|--------------|
| `--default-local-storage-path` value | Default local storage path for local provisioner storage class |
### Kubernetes Components
| Flag | Description |
|------|--------------|
| `--disable` value | Do not deploy packaged components and delete any deployed components (valid items: coredns, servicelb, traefik, local-storage, metrics-server) |
| `--disable-scheduler` | Disable Kubernetes default scheduler |
| `--disable-cloud-controller` | Disable k3s default cloud controller manager |
| `--disable-network-policy` | Disable k3s default network policy controller |
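For example, to run a server without the packaged Traefik ingress controller and service load balancer, repeat `--disable` once per component:

```bash
k3s server --disable traefik --disable servicelb
```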
### Customized Flags for Kubernetes Processes
| Flag | Description |
|------|--------------|
| `--kubelet-arg` value | Customized flag for kubelet process |
| `--kube-proxy-arg` value | Customized flag for kube-proxy process |
### Experimental Options
| Flag | Environment Variable | Description |
|------|----------------------|-------------|
| `--rootless` | N/A | Run rootless |
| `--agent-token` value | `K3S_AGENT_TOKEN` | Shared secret used to join agents to the cluster, but not servers |
| `--agent-token-file` value | `K3S_AGENT_TOKEN_FILE` | File containing the agent secret |
| `--server value, -s` value | `K3S_URL` | Server to connect to, used to join a cluster |
| `--cluster-init` | `K3S_CLUSTER_INIT` | Initialize new cluster master |
| `--cluster-reset` | `K3S_CLUSTER_RESET` | Forget all peers and become a single cluster new cluster master |
| `--secrets-encryption` | N/A | Enable Secret encryption at rest |
### Deprecated Options
| Flag | Environment Variable | Description |
|------|----------------------|-------------|
| `--no-flannel` | N/A | Use `--flannel-backend=none` |
| `--no-deploy` value | N/A | Do not deploy packaged components (valid items: coredns, servicelb, traefik, local-storage, metrics-server) |
| `--cluster-secret` value | `K3S_CLUSTER_SECRET` | Use `--token` |
# K3s Server CLI Help
> If an option appears in brackets below, for example `[$K3S_TOKEN]`, it means that the option can be passed in as an environment variable of that name.
```bash
NAME:
k3s server - Run management server
USAGE:
k3s server [OPTIONS]
OPTIONS:
-v value (logging) Number for the log level verbosity (default: 0)
--vmodule value (logging) Comma-separated list of pattern=N settings for file-filtered logging
--log value, -l value (logging) Log to file
--alsologtostderr (logging) Log to standard error as well as file (if set)
--bind-address value (listener) k3s bind address (default: 0.0.0.0)
--https-listen-port value (listener) HTTPS listen port (default: 6443)
--advertise-address value (listener) IP address that apiserver uses to advertise to members of the cluster (default: node-external-ip/node-ip)
--advertise-port value (listener) Port that apiserver uses to advertise to members of the cluster (default: listen-port) (default: 0)
--tls-san value (listener) Add additional hostname or IP as a Subject Alternative Name in the TLS cert
--data-dir value, -d value (data) Folder to hold state default /var/lib/rancher/k3s or ${HOME}/.rancher/k3s if not root
--cluster-cidr value (networking) Network CIDR to use for pod IPs (default: "10.42.0.0/16")
--service-cidr value (networking) Network CIDR to use for services IPs (default: "10.43.0.0/16")
--cluster-dns value (networking) Cluster IP for coredns service. Should be in your service-cidr range (default: 10.43.0.10)
--cluster-domain value (networking) Cluster Domain (default: "cluster.local")
--flannel-backend value (networking) One of 'none', 'vxlan', 'ipsec', 'host-gw', or 'wireguard' (default: "vxlan")
--token value, -t value (cluster) Shared secret used to join a server or agent to a cluster [$K3S_TOKEN]
--token-file value (cluster) File containing the cluster-secret/token [$K3S_TOKEN_FILE]
--write-kubeconfig value, -o value (client) Write kubeconfig for admin client to this file [$K3S_KUBECONFIG_OUTPUT]
--write-kubeconfig-mode value (client) Write kubeconfig with this mode [$K3S_KUBECONFIG_MODE]
--kube-apiserver-arg value (flags) Customized flag for kube-apiserver process
--kube-scheduler-arg value (flags) Customized flag for kube-scheduler process
--kube-controller-manager-arg value (flags) Customized flag for kube-controller-manager process
--kube-cloud-controller-manager-arg value (flags) Customized flag for kube-cloud-controller-manager process
--datastore-endpoint value (db) Specify etcd, Mysql, Postgres, or Sqlite (default) data source name [$K3S_DATASTORE_ENDPOINT]
--datastore-cafile value (db) TLS Certificate Authority file used to secure datastore backend communication [$K3S_DATASTORE_CAFILE]
--datastore-certfile value (db) TLS certification file used to secure datastore backend communication [$K3S_DATASTORE_CERTFILE]
--datastore-keyfile value (db) TLS key file used to secure datastore backend communication [$K3S_DATASTORE_KEYFILE]
--default-local-storage-path value (storage) Default local storage path for local provisioner storage class
--disable value (components) Do not deploy packaged components and delete any deployed components (valid items: coredns, servicelb, traefik, local-storage, metrics-server)
--disable-scheduler (components) Disable Kubernetes default scheduler
--disable-cloud-controller (components) Disable k3s default cloud controller manager
--disable-network-policy (components) Disable k3s default network policy controller
--node-name value (agent/node) Node name [$K3S_NODE_NAME]
--with-node-id (agent/node) Append id to node name
--node-label value (agent/node) Registering and starting kubelet with set of labels
--node-taint value (agent/node) Registering kubelet with set of taints
--docker (agent/runtime) Use docker instead of containerd
--container-runtime-endpoint value (agent/runtime) Disable embedded containerd and use alternative CRI implementation
--pause-image value (agent/runtime) Customized pause image for containerd or docker sandbox (default: "docker.io/rancher/pause:3.1")
--private-registry value (agent/runtime) Private registry configuration file (default: "/etc/rancher/k3s/registries.yaml")
--node-ip value, -i value (agent/networking) IP address to advertise for node
--node-external-ip value (agent/networking) External IP address to advertise for node
--resolv-conf value (agent/networking) Kubelet resolv.conf file [$K3S_RESOLV_CONF]
--flannel-iface value (agent/networking) Override default flannel interface
--flannel-conf value (agent/networking) Override default flannel config file
--kubelet-arg value (agent/flags) Customized flag for kubelet process
--kube-proxy-arg value (agent/flags) Customized flag for kube-proxy process
--rootless (experimental) Run rootless
--agent-token value (experimental/cluster) Shared secret used to join agents to the cluster, but not servers [$K3S_AGENT_TOKEN]
--agent-token-file value (experimental/cluster) File containing the agent secret [$K3S_AGENT_TOKEN_FILE]
--server value, -s value (experimental/cluster) Server to connect to, used to join a cluster [$K3S_URL]
--cluster-init (experimental/cluster) Initialize new cluster master [$K3S_CLUSTER_INIT]
--cluster-reset (experimental/cluster) Forget all peers and become a single cluster new cluster master [$K3S_CLUSTER_RESET]
--secrets-encryption (experimental) Enable Secret encryption at rest
--no-flannel (deprecated) use --flannel-backend=none
--no-deploy value (deprecated) Do not deploy packaged components (valid items: coredns, servicelb, traefik, local-storage, metrics-server)
--cluster-secret value (deprecated) use --token [$K3S_CLUSTER_SECRET]
```
---
title: Installation Requirements
weight: 1
aliases:
- /k3s/latest/en/installation/node-requirements/
---
K3s is very lightweight, but has some minimum requirements as outlined below.
## Prerequisites
* Two nodes cannot have the same hostname. If all your nodes have the same hostname, use the `--with-node-id` option to append a random suffix for each node, or otherwise devise a unique name to pass with `--node-name` or `$K3S_NODE_NAME` for each node you add to the cluster.
## Operating Systems
K3s should run on just about any flavor of Linux. However, K3s is tested on the following operating systems:
* Ubuntu 16.04 (amd64)
* Ubuntu 18.04 (amd64)
* Raspbian Buster (armhf)
> * If you are using **Raspbian Buster**, follow [these steps]({{<baseurl>}}/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster) to switch to legacy iptables.
> * If you are using **Alpine Linux**, follow [these steps]({{<baseurl>}}/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup) for additional setup.
## Hardware
## Networking
The K3s server needs port 6443 to be accessible by the nodes.

The nodes need to be able to reach other nodes over UDP port 8472 when Flannel VXLAN is used. The node should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel. However, if you do not use Flannel and provide your own custom CNI, then port 8472 is not needed by K3s.

If you wish to utilize the metrics server, you will need to open port 10250 on each node.

> **Important:** The VXLAN port on nodes should not be exposed to the world as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disables access to port 8472.
<figcaption>Inbound Rules for K3s Server Nodes</figcaption>
| Protocol | Port | Source | Description
|-----|-----|----------------|---|
| TCP | 6443 | K3s server nodes | Kubernetes API
| UDP | 8472 | K3s server and agent nodes | Required only for Flannel VXLAN
| TCP | 10250 | K3s server and agent nodes | kubelet
Typically all outbound traffic is allowed.
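The inbound rules above can be sketched as host firewall entries. A minimal `iptables` sketch, assuming a node subnet of `10.10.10.0/24` (replace with your own; this is illustrative, not a complete firewall policy — on a cloud provider you would normally express the same rules as security-group entries instead):

```shell
# Allow Flannel VXLAN only from other cluster nodes; drop it from anywhere else.
iptables -A INPUT -p udp --dport 8472 -s 10.10.10.0/24 -j ACCEPT
iptables -A INPUT -p udp --dport 8472 -j DROP

# Kubernetes API, reachable by the nodes.
iptables -A INPUT -p tcp --dport 6443 -j ACCEPT

# kubelet, only needed if you use the metrics server.
iptables -A INPUT -p tcp --dport 10250 -s 10.10.10.0/24 -j ACCEPT
```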
## Large Clusters
Hardware requirements are based on the size of your K3s cluster. For production and large clusters, we recommend using a high-availability setup with an external database. The following options are recommended for the external database in production:
- MySQL
- PostgreSQL
- etcd
@@ -65,6 +81,17 @@ The cluster performance depends on database performance. To ensure optimal speed
### Network
You should consider increasing the subnet size for the cluster CIDR so that you don't run out of IPs for the pods. You can do that by passing the `--cluster-cidr` option to K3s server upon starting.
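For example, to start the server with a larger pod CIDR (the `10.42.0.0/16` value here is illustrative — pick a range that does not collide with your node or service networks):

```shell
k3s server --cluster-cidr=10.42.0.0/16
```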
### Database
K3s supports several databases, including MySQL, PostgreSQL, MariaDB, and etcd. The following is a sizing guide for the database resources you need to run large clusters:
| Deployment Size | Nodes | VCPUS | RAM |
|:---------------:|:---------:|:-----:|:-----:|
| Small | Up to 10 | 1 | 2 GB |
| Medium | Up to 100 | 2 | 8 GB |
| Large | Up to 250 | 4 | 16 GB |
| X-Large | Up to 500 | 8 | 32 GB |
| XX-Large | 500+ | 16 | 64 GB |
@@ -3,7 +3,7 @@ title: "Network Options"
weight: 25
---
> **Note:** Please reference the [Networking]({{< baseurl >}}/k3s/latest/en/networking) page for information about CoreDNS, Traefik, and the Service LB.
> **Note:** Please reference the [Networking]({{<baseurl>}}/k3s/latest/en/networking) page for information about CoreDNS, Traefik, and the Service LB.
By default, K3s will run with flannel as the CNI, using VXLAN as the default backend. To change the CNI, refer to the section on configuring a [custom CNI](#custom-cni). To change the flannel backend, refer to the flannel options section.
@@ -25,7 +25,7 @@ Mirrors is a directive that defines the names and endpoints of the private regis
```
mirrors:
"mycustomreg.com:5000":
docker.io:
endpoint:
- "https://mycustomreg.com:5000"
```
@@ -59,7 +59,7 @@ Below are examples showing how you may configure `/etc/rancher/k3s/registries.ya
```
mirrors:
"mycustomreg.com:5000":
docker.io:
endpoint:
- "https://mycustomreg.com:5000"
configs:
@@ -78,7 +78,7 @@ configs:
```
mirrors:
"mycustomreg.com:5000":
docker.io:
endpoint:
- "https://mycustomreg.com:5000"
configs:
@@ -101,7 +101,7 @@ Below are examples showing how you may configure `/etc/rancher/k3s/registries.ya
```
mirrors:
"mycustomreg.com:5000":
docker.io:
endpoint:
- "http://mycustomreg.com:5000"
configs:
@@ -116,7 +116,7 @@ configs:
```
mirrors:
"mycustomreg.com:5000":
docker.io:
endpoint:
- "http://mycustomreg.com:5000"
```
@@ -127,3 +127,18 @@ mirrors:
> In case of no TLS communication, you need to specify `http://` for the endpoints; otherwise they will default to https.
In order for the registry changes to take effect, you need to restart K3s on each node.
# Adding Images to the Private Registry
First, obtain the `k3s-images.txt` file from GitHub for the release you are working with.
Pull the K3s images listed in the `k3s-images.txt` file from docker.io.
Example: `docker pull docker.io/rancher/coredns-coredns:1.6.3`
Then, retag the images to point at the private registry.
Example: `docker tag docker.io/rancher/coredns-coredns:1.6.3 mycustomreg:5000/coredns-coredns:1.6.3`
Last, push the images to the private registry.
Example: `docker push mycustomreg:5000/coredns-coredns:1.6.3`
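The pull, tag, and push steps above can be scripted. A minimal sketch, assuming the registry name `mycustomreg.com:5000` and a local copy of `k3s-images.txt` (both placeholders for your own values):

```shell
#!/bin/sh
REGISTRY="mycustomreg.com:5000"

# Rewrite a source image reference to its private-registry equivalent, e.g.
# docker.io/rancher/coredns-coredns:1.6.3 -> mycustomreg.com:5000/rancher/coredns-coredns:1.6.3
retag() {
  echo "${REGISTRY}/${1#docker.io/}"
}

# Uncomment to mirror every image listed in the release's k3s-images.txt:
# while read -r image; do
#   docker pull "$image"
#   docker tag "$image" "$(retag "$image")"
#   docker push "$(retag "$image")"
# done < k3s-images.txt
```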
@@ -3,11 +3,11 @@ title: "Networking"
weight: 35
---
>**Note:** CNI options are covered in detail on the [Installation Network Options]({{< baseurl >}}/k3s/latest/en/installation/network-options/) page. Please reference that page for details on Flannel and the various flannel backend options or how to set up your own CNI.
>**Note:** CNI options are covered in detail on the [Installation Network Options]({{<baseurl>}}/k3s/latest/en/installation/network-options/) page. Please reference that page for details on Flannel and the various flannel backend options or how to set up your own CNI.
Open Ports
----------
Please reference the [Node Requirements]({{< baseurl >}}/k3s/latest/en/installation/node-requirements/#networking) page for port information.
Please reference the [Installation Requirements]({{<baseurl>}}/k3s/latest/en/installation/installation-requirements/#networking) page for port information.
CoreDNS
-------
@@ -21,7 +21,7 @@ Traefik Ingress Controller
[Traefik](https://traefik.io/) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It simplifies networking complexity while designing, deploying, and running applications.
Traefik is deployed by default when starting the server. For more information see [Auto Deploying Manifests]({{< baseurl >}}/k3s/latest/en/advanced/#auto-deploying-manifests). The default config file is found in `/var/lib/rancher/k3s/server/manifests/traefik.yaml` and any changes made to this file will automatically be deployed to Kubernetes in a manner similar to `kubectl apply`.
Traefik is deployed by default when starting the server. For more information see [Auto Deploying Manifests]({{<baseurl>}}/k3s/latest/en/advanced/#auto-deploying-manifests). The default config file is found in `/var/lib/rancher/k3s/server/manifests/traefik.yaml` and any changes made to this file will automatically be deployed to Kubernetes in a manner similar to `kubectl apply`.
The Traefik ingress controller will use ports 80, 443, and 8080 on the host (i.e. these will not be usable for HostPort or NodePort).
@@ -34,4 +34,4 @@ Service Load Balancer
K3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available, the load balancer will stay in Pending.
To disable the embedded load balancer, run the server with the `--no-deploy servicelb` option. This is necessary if you wish to run a different load balancer, such as MetalLB.
@@ -3,42 +3,8 @@ title: "Upgrades"
weight: 25
---
You can upgrade K3s by using the installation script, or by manually installing the binary of the desired version.
This section describes how to upgrade your K3s cluster.
>**Note:** When upgrading, upgrade server nodes first one at a time, then any worker nodes.
[Upgrade basics]({{< baseurl >}}/k3s/latest/en/upgrades/basic/) describes several techniques for upgrading your cluster manually. It can also be used as a basis for upgrading through third-party Infrastructure-as-Code tools like [Terraform](https://www.terraform.io/).
### Upgrade K3s Using the Installation Script
To upgrade K3s from an older version you can re-run the installation script using the same flags, for example:
```sh
curl -sfL https://get.k3s.io | sh -
```
If you want to upgrade to a specific version you can run the following command:
```sh
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z-rc1 sh -
```
### Manually Upgrade K3s Using the Binary
Or to manually upgrade K3s:
1. Download the desired version of K3s from [releases](https://github.com/rancher/k3s/releases/latest)
2. Install to an appropriate location (normally `/usr/local/bin/k3s`)
3. Stop the old version
4. Start the new version
### Restarting K3s
Restarting K3s is supported by the installation script for systemd and openrc.
To restart manually for systemd use:
```sh
sudo systemctl restart k3s
```
To restart manually for openrc use:
```sh
sudo service k3s restart
```
[Automated upgrades]({{< baseurl >}}/k3s/latest/en/upgrades/automated/) describes how to perform Kubernetes-native automated upgrades using Rancher's [system-upgrade-controller](https://github.com/rancher/system-upgrade-controller).
@@ -0,0 +1,115 @@
---
title: "Automated Upgrades"
weight: 20
---
>**Note:** This feature is available as of [v1.17.4+k3s1](https://github.com/rancher/k3s/releases/tag/v1.17.4%2Bk3s1)
### Overview
You can manage K3s cluster upgrades using Rancher's system-upgrade-controller. This is a Kubernetes-native approach to cluster upgrades. It leverages a [custom resource definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources), the `plan`, and a [controller](https://kubernetes.io/docs/concepts/architecture/controller/) that schedules upgrades based on the configured plans.
A plan defines upgrade policies and requirements. This documentation will provide plans with defaults appropriate for upgrading a K3s cluster. For more advanced plan configuration options, please review the [CRD](https://github.com/rancher/system-upgrade-controller/blob/master/pkg/apis/upgrade.cattle.io/v1/types.go).
The controller schedules upgrades by monitoring plans and selecting nodes to run upgrade [jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) on. A plan defines which nodes should be upgraded through a [label selector](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). When a job has run to completion successfully, the controller will label the node on which it ran accordingly.
>**Note:** The upgrade job that is launched must be highly privileged. It is configured with the following:
>
- Host `IPC`, `NET`, and `PID` namespaces
- The `CAP_SYS_BOOT` capability
- Host root mounted at `/host` with read and write permissions
For more details on the design and architecture of the system-upgrade-controller or its integration with K3s, see the following Git repositories:
- [system-upgrade-controller](https://github.com/rancher/system-upgrade-controller)
- [k3s-upgrade](https://github.com/rancher/k3s-upgrade)
To automate upgrades in this manner you must:
1. Install the system-upgrade-controller into your cluster
1. Configure plans
### Install the system-upgrade-controller
The system-upgrade-controller can be installed as a deployment into your cluster. The deployment requires a service-account, clusterRoleBinding, and a configmap. To install these components, run the following command:
```
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/download/v0.4.0/system-upgrade-controller.yaml
```
The controller can be configured and customized via the previously mentioned configmap, but the controller must be redeployed for the changes to be applied.
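For example, after editing the configmap you can force a redeploy with `kubectl` (the namespace and deployment name here assume the default `system-upgrade-controller.yaml` manifest):

```shell
kubectl -n system-upgrade rollout restart deployment/system-upgrade-controller
```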
### Configure plans
It is recommended that you minimally create two plans: a plan for upgrading server (master) nodes and a plan for upgrading agent (worker) nodes. As needed, you can create additional plans to control the rollout of the upgrade across nodes. The following two example plans will upgrade your cluster to K3s v1.17.4+k3s1. Once the plans are created, the controller will pick them up and begin to upgrade your cluster.
```
# Server plan
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
name: server-plan
namespace: system-upgrade
spec:
concurrency: 1
cordon: true
nodeSelector:
matchExpressions:
- key: node-role.kubernetes.io/master
operator: In
values:
- "true"
serviceAccountName: system-upgrade
upgrade:
image: rancher/k3s-upgrade
version: v1.17.4+k3s1
---
# Agent plan
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
name: agent-plan
namespace: system-upgrade
spec:
concurrency: 1
cordon: true
nodeSelector:
matchExpressions:
- key: node-role.kubernetes.io/master
operator: DoesNotExist
prepare:
args:
- prepare
- server-plan
image: rancher/k3s-upgrade:v1.17.4-k3s1
serviceAccountName: system-upgrade
upgrade:
image: rancher/k3s-upgrade
version: v1.17.4+k3s1
```
There are a few important things to call out regarding these plans:
First, the plans must be created in the same namespace where the controller was deployed.
Second, the `concurrency` field indicates how many nodes can be upgraded at the same time.
Third, the server-plan targets server nodes by specifying a label selector that selects nodes with the `node-role.kubernetes.io/master` label. The agent-plan targets agent nodes by specifying a label selector that selects nodes without that label.
Fourth, the `prepare` step in the agent-plan will cause upgrade jobs for that plan to wait for the server-plan to complete before they execute.
Fifth, both plans have the `version` field set to v1.17.4+k3s1. Alternatively, you can omit the `version` field and set the `channel` field to a URL that resolves to a release of K3s. This will cause the controller to monitor that URL and upgrade the cluster any time it resolves to a new release. This is designed specifically to work with the [latest release functionality of GitHub](https://help.github.com/en/github/administering-a-repository/linking-to-releases). Thus, you can configure your plans with the following channel to ensure your cluster is always automatically upgraded to the latest release of K3s:
```
apiVersion: upgrade.cattle.io/v1
kind: Plan
...
spec:
...
channel: https://github.com/rancher/k3s/releases/latest
```
As stated, the upgrade will begin as soon as the controller detects that a plan was created. Updating a plan will cause the controller to re-evaluate the plan and determine if another upgrade is needed.
You can monitor the progress of an upgrade by viewing the plan and jobs via kubectl:
```
kubectl -n system-upgrade get plans -o yaml
kubectl -n system-upgrade get jobs -o yaml
```
@@ -0,0 +1,59 @@
---
title: "Upgrade Basics"
weight: 10
---
You can upgrade K3s by using the installation script, or by manually installing the binary of the desired version.
>**Note:** When upgrading, upgrade server nodes first one at a time, then any worker nodes.
### Upgrade K3s Using the Installation Script
To upgrade K3s from an older version you can re-run the installation script using the same flags, for example:
```sh
curl -sfL https://get.k3s.io | sh -
```
If you want to upgrade to a specific version you can run the following command:
```sh
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z-rc1 sh -
```
### Manually Upgrade K3s Using the Binary
Or to manually upgrade K3s:
1. Download the desired version of the K3s binary from [releases](https://github.com/rancher/k3s/releases)
2. Copy the downloaded binary to `/usr/local/bin/k3s` (or your desired location)
3. Stop the old k3s binary
4. Launch the new k3s binary
### Restarting K3s
Restarting K3s is supported by the installation script for systemd and OpenRC.
**systemd**
To restart servers manually:
```sh
sudo systemctl restart k3s
```
To restart agents manually:
```sh
sudo systemctl restart k3s-agent
```
**OpenRC**
To restart servers manually:
```sh
sudo service k3s restart
```
To restart agents manually:
```sh
sudo service k3s-agent restart
```
@@ -25,11 +25,11 @@ VMWare | 1GB | 1280MB (rancheros.iso) <br> 2048MB (ran
GCE | 1GB | 1280MB
AWS | 1GB | 1.7GB
You can adjust memory requirements by custom building RancherOS, please refer to [reduce-memory-requirements]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/#reduce-memory-requirements)
You can adjust memory requirements by custom building RancherOS, please refer to [reduce-memory-requirements]({{<baseurl>}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/#reduce-memory-requirements)
### How RancherOS Works
Everything in RancherOS is a Docker container. We accomplish this by launching two instances of Docker. One is what we call **System Docker** and is the first process on the system. All other system services, like `ntpd`, `syslog`, and `console`, are running in Docker containers. System Docker replaces traditional init systems like `systemd` and is used to launch [additional system services](installation/system-services/adding-system-services/).
Everything in RancherOS is a Docker container. We accomplish this by launching two instances of Docker. One is what we call **System Docker** and is the first process on the system. All other system services, like `ntpd`, `syslog`, and `console`, are running in Docker containers. System Docker replaces traditional init systems like `systemd` and is used to launch [additional system services](installation/system-services/).
System Docker runs a special container called **Docker**, which is another Docker daemon responsible for managing all of the user's containers. Any containers that you launch as a user from the console will run inside this Docker. This creates isolation from the System Docker containers and ensures that normal user commands don't impact system services.
@@ -1,6 +1,6 @@
---
title: About
weight: 4
title: Additional Resources
weight: 200
---
## Developing
@@ -59,7 +59,7 @@ All of repositories are located within our main GitHub [page](https://github.com
[RancherOS Repo](https://github.com/rancher/os): This repo contains the bulk of the RancherOS code.
[RancherOS Services Repo](https://github.com/rancher/os-services): This repo is where any [system-services]({{< baseurl >}}/os/v1.x/en//installation/system-services/adding-system-services/) can be contributed.
[RancherOS Services Repo](https://github.com/rancher/os-services): This repo is where any [system-services]({{< baseurl >}}/os/v1.x/en/system-services/) can be contributed.
[RancherOS Images Repo](https://github.com/rancher/os-images): This repo is for the corresponding service images.
@@ -7,7 +7,7 @@ RancherOS can be used to launch [Rancher](/rancher/) and be used as the OS to ad
### Launching Agents using Cloud-Config
You can easily add hosts into Rancher by using [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) to launch the rancher/agent container.
You can easily add hosts into Rancher by using [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) to launch the rancher/agent container.
After Rancher is launched and host registration has been saved, you will be able to use the custom option to add RancherOS nodes.
@@ -37,7 +37,7 @@ rancher:
```
<br>
> **Note:** You can not name the service `rancher-agent` as this will not allow the rancher/agent container to be launched correctly. Please read more about why [you can't name your container as `rancher-agent`]({{< baseurl >}}/rancher/v1.6/en/faqs/agents/#adding-in-name-rancher-agent).
> **Note:** You can not name the service `rancher-agent` as this will not allow the rancher/agent container to be launched correctly. Please read more about why [you can't name your container as `rancher-agent`]({{<baseurl>}}/rancher/v1.6/en/faqs/agents/#adding-in-name-rancher-agent).
### Adding in Host Labels
@@ -1,6 +1,8 @@
---
title: Configuration
weight: 120
aliases:
- /os/v1.x/en/installation/configuration
---
There are two ways that RancherOS can be configured.
@@ -34,7 +36,7 @@ In our example above, we have our `#cloud-config` line to indicate it's a cloud-
### Manually Changing Configuration
To update RancherOS configuration after booting, the `ros config set <key> <value>` command can be used.
For more complicated settings, like the [sysctl settings]({{< baseurl >}}/os/v1.x/en/installation/configuration/sysctl/), you can also create a small YAML file and then run `sudo ros config merge -i <your yaml file>`.
For more complicated settings, like the [sysctl settings]({{< baseurl >}}/os/v1.x/en/configuration/sysctl/), you can also create a small YAML file and then run `sudo ros config merge -i <your yaml file>`.
#### Getting Values
@@ -1,6 +1,8 @@
---
title: Kernel boot parameters
weight: 133
aliases:
- /os/v1.x/en/installation/configuration/adding-kernel-parameters
---
RancherOS parses the Linux kernel boot cmdline to add any keys it understands to its configuration. This allows you to modify what cloud-init sources it will use on boot, to enable `rancher.debug` logging, or to almost any other configuration setting.
@@ -27,7 +29,7 @@ $ sudo system-docker run --rm -it -v /:/host alpine vi /host/boot/global.cfg
### During installation
If you want to set the extra kernel parameters when you are [Installing RancherOS to Disk]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/install-to-disk/) please use the `--append` parameter.
If you want to set the extra kernel parameters when you are [Installing RancherOS to Disk]({{< baseurl >}}/os/v1.x/en/installation/server/install-to-disk/) please use the `--append` parameter.
```bash
$ sudo ros install -d /dev/sda --append "rancheros.autologin=tty1"
@@ -1,6 +1,8 @@
---
title: Air Gap Configuration
weight: 138
aliases:
- /os/v1.x/en/installation/configuration/airgap-configuration
---
In the air gap environment, the Docker registry, RancherOS repositories URL, and the RancherOS upgrade URL should be configured to ensure the OS can pull images, update OS services, and upgrade the OS.
@@ -10,10 +12,10 @@ In the air gap environment, the Docker registry, RancherOS repositories URL, and
You should use a private Docker registry so that `user-docker` and `system-docker` can pull images.
1. Add the private Docker registry domain to the [images prefix]({{< baseurl >}}/os/v1.x/en/installation/configuration/images-prefix/).
2. Set the private registry certificates for `user-docker`. For details, refer to [Certificates for Private Registries]({{< baseurl >}}/os/v1.x/en/installation/configuration/private-registries/#certificates-for-private-registries)
1. Add the private Docker registry domain to the [images prefix]({{< baseurl >}}/os/v1.x/en/configuration/images-prefix/).
2. Set the private registry certificates for `user-docker`. For details, refer to [Certificates for Private Registries]({{< baseurl >}}/os/v1.x/en/configuration/private-registries/#certificates-for-private-registries)
3. Set the private registry certificates for `system-docker`. There are two ways to set the certificates:
- To set the private registry certificates before RancherOS starts, you can run a script included with RancherOS. For details, refer to [Set Custom Certs in ISO]({{< baseurl >}}/os/v1.x/en/installation/configuration/airgap-configuration/#set-custom-certs-in-iso).
- To set the private registry certificates before RancherOS starts, you can run a script included with RancherOS. For details, refer to [Set Custom Certs in ISO]({{< baseurl >}}/os/v1.x/en/configuration/airgap-configuration/#set-custom-certs-in-iso).
- To set the private registry certificates after RancherOS starts, append your private registry certs to the `/etc/ssl/certs/ca-certificates.crt.rancher` file. Then reboot to make the certs fully take effect.
4. The images used by RancherOS should be pushed to your private registry.
@@ -84,7 +86,11 @@ $ sudo ros config set rancher.upgrade.url https://foo.bar.com/os/releases.yml
Here is a total cloud-config example for using RancherOS in an air gap environment.
For `system-docker`, see [Configuring Private Docker Registry]({{< baseurl >}}/os/v1.x/en/installation/configuration/airgap-configuration/#configuring-private-docker-registry).
For `system-docker`, see [Configuring Private Docker Registry]({{< baseurl >}}/os/v1.x/en/configuration/airgap-configuration/#configuring-private-docker-registry).
```yaml
#cloud-config
@@ -1,11 +1,13 @@
---
title: Date and time zone
weight: 121
aliases:
- /os/v1.x/en/installation/configuration/date-and-timezone
---
The default console keeps time in the Coordinated Universal Time (UTC) zone and synchronizes clocks with the Network Time Protocol (NTP). The Network Time Protocol daemon (ntpd) is an operating system program that maintains the system time in synchronization with time servers using NTP.
RancherOS can run ntpd in the System Docker container. You can update its configurations by updating `/etc/ntp.conf`. For an example of how to update a file such as `/etc/ntp.conf` within a container, refer to [this page.]({{< baseurl >}}/os/v1.x/en/installation/configuration/write-files/#writing-files-in-specific-system-services)
RancherOS can run ntpd in the System Docker container. You can update its configurations by updating `/etc/ntp.conf`. For an example of how to update a file such as `/etc/ntp.conf` within a container, refer to [this page.]({{< baseurl >}}/os/v1.x/en/configuration/write-files/#writing-files-in-specific-system-services)
The default console cannot support changing the time zone because including `tzdata` (time zone data) will increase the ISO size. However, you can change the time zone in the container by passing a flag to specify the time zone when you run the container:
@@ -1,6 +1,8 @@
---
title: Disabling Access to RancherOS
weight: 136
aliases:
- /os/v1.x/en/installation/configuration/disable-access-to-system
---
_Available as of v1.5_
@@ -1,9 +1,11 @@
---
title: Configuring Docker or System Docker
weight: 126
aliases:
- /os/v1.x/en/installation/configuration/docker
---
In RancherOS, you can configure System Docker and Docker daemons by using [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config).
In RancherOS, you can configure System Docker and Docker daemons by using [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config).
### Configuring Docker
@@ -61,7 +63,7 @@ Key | Value | Default | Description
---|---|---| ---
`extra_args` | List of Strings | `[]` | Arbitrary daemon arguments, appended to the generated command
`environment` | List of Strings | `[]` |
`tls` | Boolean | `false` | When [setting up TLS]({{< baseurl >}}/os/v1.x/en/installation/configuration/setting-up-docker-tls/), this key needs to be set to true.
`tls` | Boolean | `false` | When [setting up TLS]({{< baseurl >}}/os/v1.x/en/configuration/setting-up-docker-tls/), this key needs to be set to true.
`tls_args` | List of Strings (used only if `tls: true`) | `[]` |
`server_key` | String (used only if `tls: true`)| `""` | PEM encoded server TLS key.
`server_cert` | String (used only if `tls: true`) | `""` | PEM encoded server TLS certificate.
@@ -120,7 +122,7 @@ $ ros config set rancher.system_docker.bip 172.19.0.0/16
_Available as of v1.4.x_
The default path of system-docker logs is `/var/log/system-docker.log`. If you want to write the system-docker logs to a separate partition,
e.g. [RANCHER_OEM partition]({{< baseurl >}}/os/v1.x/en/about/custom-partition-layout/#use-rancher-oem-partition), you can try `rancher.defaults.system_docker_logs`:
e.g. [RANCHER_OEM partition]({{<baseurl>}}/os/v1.x/en/about/custom-partition-layout/#use-rancher-oem-partition), you can try `rancher.defaults.system_docker_logs`:
```
#cloud-config
@@ -1,9 +1,11 @@
---
title: Setting the Hostname
weight: 124
aliases:
- /os/v1.x/en/installation/configuration/hostname
---
You can set the hostname of the host using [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). The example below shows how to configure it.
You can set the hostname of the host using [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). The example below shows how to configure it.
```yaml
#cloud-config
@@ -1,6 +1,8 @@
---
title: Images prefix
weight: 121
aliases:
- /os/v1.x/en/installation/configuration/images-prefix
---
_Available as of v1.3_
@@ -1,6 +1,8 @@
---
title: Installing Kernel Modules that require Kernel Headers
weight: 135
aliases:
- /os/v1.x/en/installation/configuration/kernel-modules-kernel-headers
---
To compile any kernel modules, you will need to download the kernel headers. The kernel headers are available in the form of a system service. Since the kernel headers are a system service, they need to be enabled using the `ros service` command.
@@ -1,6 +1,8 @@
---
title: Loading Kernel Modules
weight: 134
aliases:
- /os/v1.x/en/installation/configuration/loading-kernel-modules
---
Since RancherOS v0.8, we build our own kernels using an unmodified kernel.org LTS kernel.
@@ -1,9 +1,11 @@
---
title: Private Registries
weight: 128
aliases:
- /os/v1.x/en/installation/configuration/private-registries
---
When launching services through a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config), it is sometimes necessary to pull a private image from DockerHub or from a private registry. Authentication for these can be embedded in your cloud-config.
When launching services through a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config), it is sometimes necessary to pull a private image from DockerHub or from a private registry. Authentication for these can be embedded in your cloud-config.
For example, to add authentication for DockerHub:
@@ -61,7 +63,7 @@ write_files:
### Certificates for Private Registries
Certificates can be stored in the standard locations (i.e. `/etc/docker/certs.d`) following the [Docker documentation](https://docs.docker.com/registry/insecure). By using the `write_files` directive of the [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config), the certificates can be written directly into `/etc/docker/certs.d`.
Certificates can be stored in the standard locations (i.e. `/etc/docker/certs.d`) following the [Docker documentation](https://docs.docker.com/registry/insecure). By using the `write_files` directive of the [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config), the certificates can be written directly into `/etc/docker/certs.d`.
```yaml
#cloud-config
@@ -1,6 +1,8 @@
---
title: Resizing a Device Partition
weight: 131
aliases:
- /os/v1.x/en/installation/configuration/resizing-device-partition
---
The `resize_device` cloud config option can be used to automatically extend the first partition (assuming it's `ext4`) to fill the size of its device.
@@ -1,6 +1,8 @@
---
title: Running Commands
weight: 123
aliases:
- /os/v1.x/en/installation/configuration/running-commands
---
You can automate running commands on boot using the `runcmd` cloud-config directive. Commands can be specified as either a list or a string. In the latter case, the command is executed with `sh`.
@@ -31,4 +33,4 @@ write_files:
docker run -d nginx
```
Running Docker commands in this manner is useful when pieces of the `docker run` command are dynamically generated. For services whose configuration is static, [adding a system service]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/) is recommended.
Running Docker commands in this manner is useful when pieces of the `docker run` command are dynamically generated. For services whose configuration is static, [adding a system service]({{< baseurl >}}/os/v1.x/en/system-services/) is recommended.
@@ -1,6 +1,8 @@
---
title: Setting up Docker TLS
weight: 127
aliases:
- /os/v1.x/en/installation/configuration/setting-up-docker-tls
---
`ros tls generate` is used to generate both the client and server TLS certificates for Docker.
@@ -1,9 +1,11 @@
---
title: SSH Settings
weight: 121
aliases:
- /os/v1.x/en/installation/configuration/ssh-keys
---
RancherOS supports adding SSH keys through the [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file. Within the cloud-config file, you simply add the ssh keys within the `ssh_authorized_keys` key.
RancherOS supports adding SSH keys through the [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file. Within the cloud-config file, you simply add the ssh keys within the `ssh_authorized_keys` key.
```yaml
#cloud-config
@@ -1,15 +1,27 @@
---
title: Switching Consoles
weight: 125
aliases:
- /os/v1.x/en/installation/configuration/switching-consoles
---
When [booting from the ISO]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/), RancherOS starts with the default console, which is based on busybox.
When [booting from the ISO]({{< baseurl >}}/os/v1.x/en/installation/workstation/boot-from-iso/), RancherOS starts with the default console, which is based on busybox.

You can select which console you want RancherOS to start with using the [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config).

### Enabling Consoles using Cloud-Config

When launching RancherOS with a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file, you can select which console you want to use.
Currently, the available consoles are:
@@ -102,7 +114,7 @@ All consoles except the default (busybox) console are persistent. Persistent con
<br>
> **Note:** When using a persistent console, [rolling back]({{< baseurl >}}/os/v1.x/en/upgrading/#rolling-back-an-upgrade) from the current version's console is not supported. For example, rolling back to v0.4.5 when using a v0.5.0 persistent console is not supported.
> **Note:** When using a persistent console, [rolling back]({{<baseurl>}}/os/v1.x/en/upgrading/#rolling-back-an-upgrade) from the current version's console is not supported. For example, rolling back to v0.4.5 when using a v0.5.0 persistent console is not supported.
### Enabling Consoles
@@ -1,9 +1,15 @@
---
title: Switching Docker Versions
weight: 129
aliases:
- /os/v1.x/en/installation/configuration/switching-docker-versions
---
The version of User Docker used in RancherOS can be configured using a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file or by using the `ros engine` command.
The version of User Docker used in RancherOS can be configured using a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file or by using the `ros engine` command.
> **Note:** There are known issues in Docker when switching between versions. For production systems, we recommend setting the Docker engine only once [using a cloud-config](#setting-the-docker-engine-using-cloud-config).
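Setting the engine once via cloud-config can be sketched like this (the engine version string is an example; it must match a service available in the os-services index):

```yaml
#cloud-config
rancher:
  docker:
    # Pin User Docker at first boot instead of switching versions later.
    engine: docker-17.03.2-ce
```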
@@ -83,7 +89,11 @@ FROM scratch
COPY engine /engine
```
Once the image is built, a [system service]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/) configuration file must be created. An [example file](https://github.com/rancher/os-services/blob/master/d/docker-18.06.3-ce.yml) can be found in the rancher/os-services repo. Change the `image` field to point to the Docker engine image you've built.
Once the image is built, a [system service]({{< baseurl >}}/os/v1.x/en/system-services/) configuration file must be created. An [example file](https://github.com/rancher/os-services/blob/master/d/docker-18.06.3-ce.yml) can be found in the rancher/os-services repo. Change the `image` field to point to the Docker engine image you've built.
All of the previously mentioned methods of switching Docker engines are now available. For example, if your service file is located at `https://myservicefile` then the following cloud-config file could be used to use your custom Docker engine.
@@ -1,6 +1,8 @@
---
title: Sysctl Settings
weight: 132
aliases:
- /os/v1.x/en/installation/configuration/sysctl
---
The `rancher.sysctl` cloud-config key can be used to control sysctl parameters. This works in a manner similar to `/etc/sysctl.conf` for other Linux distros.
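A short sketch of the key (the parameter values shown are illustrative):

```yaml
#cloud-config
rancher:
  sysctl:
    # Equivalent to `vm.max_map_count = 262144` in /etc/sysctl.conf
    vm.max_map_count: 262144
    net.ipv4.conf.default.rp_filter: 1
```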
@@ -1,11 +1,13 @@
---
title: Users
weight: 130
aliases:
- /os/v1.x/en/installation/configuration/users
---
Currently, we don't support adding other users besides `rancher`.
You _can_ add users in the console container, but these users will only exist as long as the console container exists. It only makes sense to add users in a [persistent console]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-console/#console-persistence).
You _can_ add users in the console container, but these users will only exist as long as the console container exists. It only makes sense to add users in a [persistent console]({{<baseurl>}}/os/v1.x/en/installation/custom-builds/custom-console/#console-persistence).
If you want the console user to be able to ssh into RancherOS, you need to add them
to the `docker` group.
@@ -1,6 +1,8 @@
---
title: Writing Files
weight: 122
aliases:
- /os/v1.x/en/installation/configuration/write-files
---
You can automate writing files to disk using the `write_files` cloud-config directive.
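A sketch of the directive (the path and content are illustrative):

```yaml
#cloud-config
write_files:
- path: /etc/rc.local
  permissions: "0755"
  owner: root
  content: |
    #!/bin/bash
    echo "Ran on boot" > /home/rancher/boot-marker
```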
@@ -1,4 +1,34 @@
---
title: Installation
weight: 2
title: Installing and Running RancherOS
weight: 100
aliases:
- /os/v1.x/en/installation/running-rancheros
---
RancherOS runs on virtualization platforms, cloud providers and bare metal servers. We also support running a local VM on your laptop.
To start running RancherOS as quickly as possible, follow our [Quick Start Guide]({{< baseurl >}}/os/v1.x/en/quick-start-guide/).
# Platforms
Refer to the resources below for more information on installing RancherOS on your platform.
### Workstation
- [Docker Machine]({{< baseurl >}}/os/v1.x/en/installation/workstation/docker-machine)
- [Boot from ISO]({{< baseurl >}}/os/v1.x/en/installation/workstation/boot-from-iso)
### Cloud
- [Amazon EC2]({{< baseurl >}}/os/v1.x/en/installation/cloud/aws)
- [Google Compute Engine]({{< baseurl >}}/os/v1.x/en/installation/cloud/gce)
- [DigitalOcean]({{< baseurl >}}/os/v1.x/en/installation/cloud/do)
- [Azure]({{< baseurl >}}/os/v1.x/en/installation/cloud/azure)
- [OpenStack]({{< baseurl >}}/os/v1.x/en/installation/cloud/openstack)
- [VMware ESXi]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi)
- [Aliyun]({{< baseurl >}}/os/v1.x/en/installation/cloud/aliyun)
### Bare Metal & Virtual Servers
- [PXE]({{< baseurl >}}/os/v1.x/en/installation/server/pxe)
- [Install to Hard Disk]({{< baseurl >}}/os/v1.x/en/installation/server/install-to-disk)
- [Raspberry Pi]({{< baseurl >}}/os/v1.x/en/installation/server/raspberry-pi)
@@ -11,13 +11,13 @@ Prior to launching RancherOS EC2 instances, the [ECS Container Instance IAM Role
### Launching an instance with ECS
RancherOS makes it easy to join your ECS cluster. The ECS agent is a [system service]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/) that is enabled in the ECS enabled AMI. There may be other RancherOS AMIs that don't have the ECS agent enabled by default, but it can easily be added in the user data on any RancherOS AMI.
RancherOS makes it easy to join your ECS cluster. The ECS agent is a [system service]({{< baseurl >}}/os/v1.x/en/system-services/) that is enabled in the ECS enabled AMI. There may be other RancherOS AMIs that don't have the ECS agent enabled by default, but it can easily be added in the user data on any RancherOS AMI.
When launching the RancherOS AMI, you'll need to specify the **IAM Role** and **Advanced Details** -> **User Data** in the **Configure Instance Details** step.
For the **IAM Role**, you'll need to be sure to select the ECS Container Instance IAM role.
For the **User Data**, you'll need to pass in the [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file.
For the **User Data**, you'll need to pass in the [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file.
```yaml
#cloud-config
@@ -37,7 +37,7 @@ rancher:
By default, the ECS agent will use the `latest` tag for the `amazon-ecs-agent` image. In v0.5.0, we introduced the ability to select which version of the `amazon-ecs-agent` image to use.
To select the version, you can update your [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file.
To select the version, you can update your [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file.
```yaml
#cloud-config
@@ -3,17 +3,17 @@ title: Built-in System Services
weight: 150
---
To launch RancherOS, we have built-in system services. They are defined in the [Docker Compose](https://docs.docker.com/compose/compose-file/) format, and can be found in the default system config file, `/usr/share/ros/os-config.yml`. You can [add your own system services]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/) or override services in the cloud-config.
To launch RancherOS, we have built-in system services. They are defined in the [Docker Compose](https://docs.docker.com/compose/compose-file/) format, and can be found in the default system config file, `/usr/share/ros/os-config.yml`. You can [add your own system services]({{< baseurl >}}/os/v1.x/en/system-services/) or override services in the cloud-config.
### preload-user-images
Read more about [image preloading]({{< baseurl >}}/os/v1.x/en/installation/boot-process/image-preloading/).
Read more about [image preloading]({{<baseurl>}}/os/v1.x/en/installation/boot-process/image-preloading/).
### network
During this service, networking is set up, e.g. hostname, interfaces, and DNS.
It is configured by `hostname` and `rancher.network` settings in [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config).
It is configured by `hostname` and `rancher.network` settings in [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config).
### ntp
@@ -24,13 +24,13 @@ Runs `ntpd` in a System Docker container.
This service provides the RancherOS user interface by running `sshd` and `getty`. It completes the RancherOS configuration on start up:
1. If the `rancher.password=<password>` kernel parameter exists, it sets `<password>` as the password for the `rancher` user.
2. If there are no host SSH keys, it generates host SSH keys and saves them under `rancher.ssh.keys` in [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config).
2. If there are no host SSH keys, it generates host SSH keys and saves them under `rancher.ssh.keys` in [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config).
3. Runs `cloud-init -execute`, which does the following:
* Updates `.ssh/authorized_keys` in `/home/rancher` and `/home/docker` from [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/ssh-keys/) and metadata.
* Writes files specified by the `write_files` [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/write-files/) setting.
* Resizes the device specified by the `rancher.resize_device` [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/resizing-device-partition/) setting.
* Mounts devices specified in the `mounts` [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/storage/additional-mounts/) setting.
* Sets sysctl parameters specified in the `rancher.sysctl` [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/sysctl/) setting.
* Updates `.ssh/authorized_keys` in `/home/rancher` and `/home/docker` from [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/ssh-keys/) and metadata.
* Writes files specified by the `write_files` [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/write-files/) setting.
* Resizes the device specified by the `rancher.resize_device` [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/resizing-device-partition/) setting.
* Mounts devices specified in the `mounts` [cloud-config]({{< baseurl >}}/os/v1.x/en/storage/additional-mounts/) setting.
* Sets sysctl parameters specified in the `rancher.sysctl` [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/sysctl/) setting.
4. If the user-data is a script starting with `#!`, it is saved at `/var/lib/rancher/conf/cloud-config-script` during cloud-init and then executed. Any errors are ignored.
5. Runs `/opt/rancher/bin/start.sh` if it exists and is executable. Any errors are ignored.
6. Runs `/etc/rc.local` if it exists and is executable. Any errors are ignored.
@@ -7,7 +7,7 @@ Userdata and metadata can be fetched from a cloud provider, VM runtime, or manag
### Userdata
Userdata is a file given by users when launching RancherOS hosts. It is stored in different locations depending on its format. If the userdata is a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file, indicated by beginning with `#cloud-config` and being in YAML format, it is stored in `/var/lib/rancher/conf/cloud-config.d/boot.yml`. If the userdata is a script, indicated by beginning with `#!`, it is stored in `/var/lib/rancher/conf/cloud-config-script`.
Userdata is a file given by users when launching RancherOS hosts. It is stored in different locations depending on its format. If the userdata is a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file, indicated by beginning with `#cloud-config` and being in YAML format, it is stored in `/var/lib/rancher/conf/cloud-config.d/boot.yml`. If the userdata is a script, indicated by beginning with `#!`, it is stored in `/var/lib/rancher/conf/cloud-config-script`.
### Metadata
@@ -15,7 +15,7 @@ Although the specifics vary based on provider, a metadata file will typically co
## Configuration Load Order
[Cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config/) is read by system services when they need to get configuration. Each additional file overwrites and extends the previous configuration file.
[Cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config/) is read by system services when they need to get configuration. Each additional file overwrites and extends the previous configuration file.
1. `/usr/share/ros/os-config.yml` - This is the system default configuration, which should **not** be modified by users.
2. `/usr/share/ros/oem/oem-config.yml` - This will typically exist by OEM, which should **not** be modified by users.
@@ -1,6 +1,8 @@
---
title: Aliyun
weight: 111
aliases:
- /os/v1.x/en/installation/running-rancheros/cloud/aliyun
---
# Adding the RancherOS Image into Aliyun
@@ -13,7 +15,7 @@ RancherOS is available as an image in Aliyun, and can be easily run in Elastic C
Example:
![RancherOS on Aliyun 1]({{< baseurl >}}/img/os/RancherOS_aliyun1.jpg)
![RancherOS on Aliyun 1]({{<baseurl>}}/img/os/RancherOS_aliyun1.jpg)
## Options
@@ -29,6 +31,6 @@ After the image is uploaded, we can use the `Aliyun Console` to start a new inst
Since the image is private, we need to use the `Custom Images`.
![RancherOS on Aliyun 2]({{< baseurl >}}/img/os/RancherOS_aliyun2.jpg)
![RancherOS on Aliyun 2]({{<baseurl>}}/img/os/RancherOS_aliyun2.jpg)
After the instance is successfully started, we can login with the `rancher` user via SSH.
@@ -1,6 +1,8 @@
---
title: Amazon EC2
weight: 105
aliases:
- /os/v1.x/en/installation/running-rancheros/cloud/aws
---
RancherOS is available as an Amazon Web Services AMI, and can be easily run on EC2. You can launch RancherOS either using the AWS Command Line Interface (CLI) or using the AWS console.
@@ -28,7 +30,11 @@ Lets walk through how to import and create a RancherOS on EC2 machine using t
{{< img "/img/os/Rancher_aws1.png" "RancherOS on AWS 1">}}
2. Select the **Community AMIs** on the sidebar and search for **RancherOS**. Pick the latest version and click **Select**.
{{< img "/img/os/Rancher_aws2.png" "RancherOS on AWS 2">}}
3. Go through the steps of creating the instance type through the AWS console. If you want to pass in a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file during boot of RancherOS, you'd pass in the file as **User data** by expanding the **Advanced Details** in **Step 3: Configure Instance Details**. You can pass in the data as text or as a file.
3. Go through the steps of creating the instance type through the AWS console. If you want to pass in a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file during boot of RancherOS, you'd pass in the file as **User data** by expanding the **Advanced Details** in **Step 3: Configure Instance Details**. You can pass in the data as text or as a file.
{{< img "/img/os/Rancher_aws6.png" "RancherOS on AWS 6">}}
After going through all the steps, you finally click on **Launch**, and either create a new key pair or choose an existing key pair to be used with the EC2 instance. If you have created a new key pair, download the key pair. If you have chosen an existing key pair, make sure you have the key pair accessible. Click on **Launch Instances**.
{{< img "/img/os/Rancher_aws3.png" "RancherOS on AWS 3">}}
@@ -1,6 +1,8 @@
---
title: Azure
weight: 110
aliases:
- /os/v1.x/en/installation/running-rancheros/cloud/azure
---
RancherOS has been published in Azure Marketplace, you can get it from [here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/rancher.rancheros).
@@ -1,6 +1,8 @@
---
title: Digital Ocean
weight: 107
aliases:
- /os/v1.x/en/installation/running-rancheros/cloud/do
---
RancherOS is available in the Digital Ocean portal. It is listed among the container distributions, so you can find it easily.
@@ -15,7 +17,7 @@ To start a RancherOS Droplet on Digital Ocean:
1. Click **Create Droplet.**
1. Click the **Container distributions** tab.
1. Click **RancherOS.**
1. Choose a plan. Make sure your Droplet has the [minimum hardware requirements for RancherOS]({{< baseurl >}}os/v1.x/en/overview/#hardware-requirements).
1. Choose a plan. Make sure your Droplet has the [minimum hardware requirements for RancherOS]({{<baseurl>}}/os/v1.x/en/overview/#hardware-requirements).
1. Choose any options for backups, block storage, and datacenter region.
1. Optional: In the **Select additional options** section, you can check the **User data** box and enter a `cloud-config` file in the text box that appears. The `cloud-config` file is used to provide a script to be run on the first boot. An example is below.
1. Choose an SSH key that you have access to, or generate a new SSH key.
@@ -1,9 +1,11 @@
---
title: Google Compute Engine (GCE)
weight: 106
aliases:
- /os/v1.x/en/installation/running-rancheros/cloud/gce
---
> **Note:** Due to the maximum transmission unit (MTU) of [1460 bytes on GCE](https://cloud.google.com/compute/docs/troubleshooting#packetfragmentation), you will need to configure your [network interfaces]({{< baseurl >}}/os/v1.x/en/installation/networking/interfaces/) and both the [Docker and System Docker]({{< baseurl >}}/os/v1.x/en/installation/configuration/docker/) to use an MTU of 1460 bytes, or you will encounter networking-related errors.
> **Note:** Due to the maximum transmission unit (MTU) of [1460 bytes on GCE](https://cloud.google.com/compute/docs/troubleshooting#packetfragmentation), you will need to configure your [network interfaces]({{< baseurl >}}/os/v1.x/en/networking/interfaces/) and both the [Docker and System Docker]({{< baseurl >}}/os/v1.x/en/configuration/docker/) to use an MTU of 1460 bytes, or you will encounter networking-related errors.
### Adding the RancherOS Image into GCE
@@ -26,7 +28,7 @@ $ gcloud compute instances create --project <PROJECT_ID> --zone <ZONE_TO_CREATE_
### Using a Cloud Config File with GCE
If you want to pass in your own cloud config file that will be processed by [cloud init]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config), you can pass it as metadata upon creation of the instance during the `gcloud compute` command. The file will need to be stored locally before running the command. The key of the metadata will be `user-data` and the value is the location of the file. If any SSH keys are added in the cloud config file, it will also be added to the **rancher** user.
If you want to pass in your own cloud config file that will be processed by [cloud init]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config), you can pass it as metadata upon creation of the instance during the `gcloud compute` command. The file will need to be stored locally before running the command. The key of the metadata will be `user-data` and the value is the location of the file. If any SSH keys are added in the cloud config file, it will also be added to the **rancher** user.
```
$ gcloud compute instances create --project <PROJECT_ID> --zone <ZONE_TO_CREATE_INSTANCE> <INSTANCE_NAME> --image <PRIVATE_IMAGE_NAME> --metadata-from-file user-data=/Directory/of/Cloud_Config.yml
@@ -74,11 +76,11 @@ Updated [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE_OF
After the image is uploaded, it's easy to use the console to create new instances. You will **not** be able to upload your own cloud config file when creating instances through the console. You can add it after the instance is created using `gcloud compute` commands and resetting the instance.
1. Make sure you are in the project that the image was created in.
![RancherOS on GCE 4]({{< baseurl >}}/img/os/Rancher_gce4.png)
![RancherOS on GCE 4]({{<baseurl>}}/img/os/Rancher_gce4.png)
2. In the navigation bar, click on the **VM instances**, which is located at Compute -> Compute Engine -> Metadata. Click on **Create instance**.
![RancherOS on GCE 5]({{< baseurl >}}/img/os/Rancher_gce5.png)
![RancherOS on GCE 5]({{<baseurl>}}/img/os/Rancher_gce5.png)
2. Fill out the information for your instance. In the **Image** dropdown, your private image will be listed among the public images provided by Google. Select the private image for RancherOS. Click **Create**.
![RancherOS on GCE 6]({{< baseurl >}}/img/os/Rancher_gce6.png)
![RancherOS on GCE 6]({{<baseurl>}}/img/os/Rancher_gce6.png)
3. Your instance is being created and will be up and running shortly!
#### Adding SSH keys
@@ -89,7 +91,7 @@ In order to SSH into the GCE instance, you will need to have SSH keys set up in
In your project, click on **Metadata**, which is located within Compute -> Compute Engine -> Metadata. Click on **SSH Keys**.
![RancherOS on GCE 7]({{< baseurl >}}/img/os/Rancher_gce7.png)
![RancherOS on GCE 7]({{<baseurl>}}/img/os/Rancher_gce7.png)
Add the SSH keys that you want to have access to any instances within your project.
@@ -99,11 +101,11 @@ Note: If you do this after any RancherOS instance is created, you will need to r
After your instance is created, click on the instance name. Scroll down to the **SSH Keys** section and click on **Add SSH key**. This key will only be applicable to the instance.
![RancherOS on GCE 8]({{< baseurl >}}/img/os/Rancher_gce8.png)
![RancherOS on GCE 8]({{<baseurl>}}/img/os/Rancher_gce8.png)
After the SSH keys have been added, you'll need to reset the machine, by clicking **Reset**.
![RancherOS on GCE 9]({{< baseurl >}}/img/os/Rancher_gce9.png)
![RancherOS on GCE 9]({{<baseurl>}}/img/os/Rancher_gce9.png)
After a little bit, you will be able to SSH into the box using the **rancher** user.
@@ -1,8 +1,10 @@
---
title: OpenStack
weight: 109
aliases:
- /os/v1.x/en/installation/running-rancheros/cloud/openstack
---
As of v0.5.0, RancherOS releases include an OpenStack image that can be found on our [releases page](https://github.com/rancher/os/releases). The image format is [QCOW3](https://wiki.qemu.org/Features/Qcow3#Fully_QCOW2_backwards-compatible_feature_set), which is backward-compatible with QCOW2.
When launching an instance using the image, you must enable **Advanced Options** -> **Configuration Drive** in order to use a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file.
When launching an instance using the image, you must enable **Advanced Options** -> **Configuration Drive** in order to use a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file.
@@ -1,6 +1,8 @@
---
title: VMware ESXi
weight: 108
aliases:
- /os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi
---
As of v1.1.0, RancherOS automatically detects that it is running on VMware ESXi, adds the `open-vm-tools` service to be downloaded and started, and uses `guestinfo` keys to set the cloud-init data.
@@ -3,13 +3,23 @@ title: Custom Console
weight: 180
---
When [booting from the ISO]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/), RancherOS starts with the default console, which is based on busybox.
When [booting from the ISO]({{< baseurl >}}/os/v1.x/en/installation/workstation/boot-from-iso/), RancherOS starts with the default console, which is based on busybox.

You can select which console you want RancherOS to start with using the [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config).

### Enabling Consoles using Cloud-Config

When launching RancherOS with a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config) file, you can select which console you want to use.
Currently, the available consoles are:
@@ -102,7 +112,7 @@ All consoles except the default (busybox) console are persistent. Persistent con
<br>
> **Note:** When using a persistent console, [rolling back]({{< baseurl >}}/os/v1.x/en/upgrading/#rolling-back-an-upgrade) from the current version's console is not supported. For example, rolling back to v0.4.5 when using a v0.5.0 persistent console is not supported.
> **Note:** When using a persistent console, [rolling back]({{<baseurl>}}/os/v1.x/en/upgrading/#rolling-back-an-upgrade) from the current version's console is not supported. For example, rolling back to v0.4.5 when using a v0.5.0 persistent console is not supported.
### Enabling Consoles
@@ -59,7 +59,7 @@ Your kernel should be packaged and published as a set of files of the following
### Building a RancherOS release using the Packaged kernel files.
By default, RancherOS ships with the kernel provided by the [os-kernel repository](https://github.com/rancher/os-kernel). Swapping out the default kernel can be done by [building your own custom RancherOS ISO]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/).
By default, RancherOS ships with the kernel provided by the [os-kernel repository](https://github.com/rancher/os-kernel). Swapping out the default kernel can be done by [building your own custom RancherOS ISO]({{<baseurl>}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/).
Create a clone of the main [RancherOS repository](https://github.com/rancher/os) to your local machine with a `git clone`.
@@ -75,6 +75,6 @@ ARG KERNEL_VERSION_amd64=4.14.63-rancher
ARG KERNEL_URL_amd64=https://link/xxxx
```
After you've replaced the URL with your custom kernel, you can follow the steps in [building your own custom RancherOS ISO]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/).
After you've replaced the URL with your custom kernel, you can follow the steps in [building your own custom RancherOS ISO]({{<baseurl>}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/).
> **Note:** `KERNEL_URL` settings should point to a Linux kernel, compiled and packaged in a specific way. You can fork [os-kernel repository](https://github.com/rancher/os-kernel) to package your own kernel.
@@ -11,7 +11,7 @@ Create a clone of the main [RancherOS repository](https://github.com/rancher/os)
$ git clone https://github.com/rancher/os.git
```
In the root of the repository, the "General Configuration" section of `Dockerfile.dapper` can be updated to use [custom kernels]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-kernels).
In the root of the repository, the "General Configuration" section of `Dockerfile.dapper` can be updated to use [custom kernels]({{<baseurl>}}/os/v1.x/en/installation/custom-builds/custom-kernels).
After you've saved your edits, run `make` in the root directory. After the build has completed, a `./dist/artifacts` directory will be created with the custom built RancherOS release files.
Build Requirements: `bash`, `make`, `docker` (Docker version >= 1.10.3)
@@ -29,7 +29,7 @@ If you need a compressed ISO, you can run this command:
$ make release
```
The `rancheros.iso` is ready to be used to [boot RancherOS from ISO]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/) or [launch RancherOS using Docker Machine]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/docker-machine).
The `rancheros.iso` is ready to be used to [boot RancherOS from ISO]({{< baseurl >}}/os/v1.x/en/installation/workstation/boot-from-iso/) or [launch RancherOS using Docker Machine]({{< baseurl >}}/os/v1.x/en/installation/workstation/docker-machine).
## Creating a GCE Image Archive
@@ -50,7 +50,7 @@ RANCHEROS_VERSION=v1.4.0 make build-gce
#### Reduce Memory Requirements
With changes to the kernel and built Docker, RancherOS booting requires more memory. For details, please refer to the [memory requirements]({{< baseurl >}}/os/v1.x/en/#hardware-requirements).
With changes to the kernel and built Docker, RancherOS booting requires more memory. For details, please refer to the [memory requirements]({{<baseurl>}}/os/v1.x/en/#hardware-requirements).
By customizing the ISO, you can reduce the memory usage on boot. The easiest way is to downgrade the built-in Docker version, because Docker takes up a lot of space.
This can effectively reduce the memory required to decompress the `initrd` on boot. Using docker 17.03 is a good choice:
@@ -3,37 +3,37 @@ title: Running RancherOS
weight: 100
---
RancherOS runs on virtualization platforms, cloud providers and bare metal servers. We also support running a local VM on your laptop. To start running RancherOS as quickly as possible, follow our [Quick Start Guide]({{< baseurl >}}/os/v1.x/en/quick-start-guide/).
RancherOS runs on virtualization platforms, cloud providers and bare metal servers. We also support running a local VM on your laptop. To start running RancherOS as quickly as possible, follow our [Quick Start Guide]({{<baseurl>}}/os/v1.x/en/quick-start-guide/).
### Platforms
#### Workstation
[Docker Machine]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/docker-machine)
[Docker Machine]({{<baseurl>}}/os/v1.x/en/installation/running-rancheros/workstation/docker-machine)
[Boot from ISO]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso)
[Boot from ISO]({{<baseurl>}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso)
#### Cloud
[Amazon EC2]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/aws)
[Amazon EC2]({{<baseurl>}}/os/v1.x/en/installation/running-rancheros/cloud/aws)
[Google Compute Engine]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/gce)
[Google Compute Engine]({{<baseurl>}}/os/v1.x/en/installation/running-rancheros/cloud/gce)
[DigitalOcean]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/do)
[DigitalOcean]({{<baseurl>}}/os/v1.x/en/installation/running-rancheros/cloud/do)
[Azure]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/azure)
[Azure]({{<baseurl>}}/os/v1.x/en/installation/running-rancheros/cloud/azure)
[OpenStack]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/openstack)
[OpenStack]({{<baseurl>}}/os/v1.x/en/installation/running-rancheros/cloud/openstack)
[VMware ESXi]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi)
[VMware ESXi]({{<baseurl>}}/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi)
[Aliyun]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/aliyun)
[Aliyun]({{<baseurl>}}/os/v1.x/en/installation/running-rancheros/cloud/aliyun)
#### Bare Metal & Virtual Servers
[PXE]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/pxe)
[PXE]({{<baseurl>}}/os/v1.x/en/installation/running-rancheros/server/pxe)
[Install to Hard Disk]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/install-to-disk)
[Install to Hard Disk]({{<baseurl>}}/os/v1.x/en/installation/running-rancheros/server/install-to-disk)
[Raspberry Pi]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/raspberry-pi)
[Raspberry Pi]({{<baseurl>}}/os/v1.x/en/installation/running-rancheros/server/raspberry-pi)
@@ -1,9 +1,11 @@
---
title: Installing to Disk
weight: 111
aliases:
- /os/v1.x/en/installation/running-rancheros/server/install-to-disk
---
RancherOS comes with a simple installer that will install RancherOS on a given target disk. To install RancherOS on a new disk, you can use the `ros install` command. Before installing, you'll need to have already [booted RancherOS from ISO]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso). Please be sure to pick the `rancheros.iso` from our release [page](https://github.com/rancher/os/releases).
RancherOS comes with a simple installer that will install RancherOS on a given target disk. To install RancherOS on a new disk, you can use the `ros install` command. Before installing, you'll need to have already [booted RancherOS from ISO]({{< baseurl >}}/os/v1.x/en/installation/workstation//boot-from-iso). Please be sure to pick the `rancheros.iso` from our release [page](https://github.com/rancher/os/releases).
### Using `ros install` to Install RancherOS
@@ -11,7 +13,7 @@ The `ros install` command orchestrates the installation from the `rancher/os` co
#### Cloud-Config
The easiest way to log in is to pass a `cloud-config.yml` file containing your public SSH keys. To learn more about what's supported in our cloud-config, please read our [documentation]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config).
The easiest way to log in is to pass a `cloud-config.yml` file containing your public SSH keys. To learn more about what's supported in our cloud-config, please read our [documentation]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config).
The `ros install` command will process your `cloud-config.yml` file specified with the `-c` flag. This file will also be placed onto the disk and installed to `/var/lib/rancher/conf/`. It will be evaluated on every boot.
@@ -61,7 +63,7 @@ Status: Downloaded newer image for rancher/os:v0.5.0
Continue with reboot [y/N]:
```
After installing RancherOS to disk, you will no longer be automatically logged in as the `rancher` user. You'll need to have added in SSH keys within your [cloud-config file]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config).
After installing RancherOS to disk, you will no longer be automatically logged in as the `rancher` user. You'll need to have added in SSH keys within your [cloud-config file]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config).
#### Installing a Different Version
@@ -1,6 +1,8 @@
---
title: iPXE
weight: 112
aliases:
- /os/v1.x/en/installation/running-rancheros/server/pxe
---
```
@@ -63,11 +65,11 @@ Valid cloud-init datasources for RancherOS.
| cmdline | Kernel command line: `cloud-config-url=http://link/user_data` |
| configdrive | /media/config-2 |
| url | URL address |
| vmware| Set `guestinfo` cloud-init or interface data as per [VMware ESXi]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi) |
| vmware| Set `guestinfo` cloud-init or interface data as per [VMware ESXi]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi) |
| * | This will add ["configdrive", "vmware", "ec2", "digitalocean", "packet", "gce"] into the list of datasources to try |
The vmware datasource was added as of v1.1.
### Cloud-Config
When booting via iPXE, RancherOS can be configured using a [cloud-config file]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config).
When booting via iPXE, RancherOS can be configured using a [cloud-config file]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config).
@@ -1,11 +1,13 @@
---
title: Raspberry Pi
weight: 113
aliases:
- /os/v1.x/en/installation/running-rancheros/server/raspberry-pi
---
As of v0.5.0, RancherOS releases include a Raspberry Pi image that can be found on our [releases page](https://github.com/rancher/os/releases). The official Raspberry Pi documentation contains instructions on how to [install operating system images](https://www.raspberrypi.org/documentation/installation/installing-images/).
When installing, there is no ability to pass in a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). You will need to boot up, change the configuration and then reboot to apply those changes.
When installing, there is no ability to pass in a [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). You will need to boot up, change the configuration and then reboot to apply those changes.
Currently, only Raspberry Pi 3 is tested and known to work.
@@ -1,6 +1,8 @@
---
title: Booting from ISO
weight: 102
aliases:
- /os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso
---
The RancherOS ISO file can be used to create a fresh RancherOS install on KVM, VMware, VirtualBox, Hyper-V, Proxmox VE, or bare metal servers. You can download the `rancheros.iso` file from our [releases page](https://github.com/rancher/os/releases/).
@@ -13,8 +15,8 @@ VMware | [rancheros-vmware.iso](https://releases.rancher.com/os/latest/vmwar
Hyper-V | [rancheros-hyperv.iso](https://releases.rancher.com/os/latest/hyperv/rancheros.iso)
Proxmox VE | [rancheros-proxmoxve.iso](https://releases.rancher.com/os/latest/proxmoxve/rancheros.iso)
You must boot with enough memory which you can refer to [here]({{< baseurl >}}/os/v1.x/en/overview/#hardware-requirements). If you boot with the ISO, you will automatically be logged in as the `rancher` user. Only the ISO is set to use autologin by default. If you run from a cloud or install to disk, SSH keys or a password of your choice is expected to be used.
You must boot with enough memory which you can refer to [here]({{<baseurl>}}/os/v1.x/en/overview/#hardware-requirements). If you boot with the ISO, you will automatically be logged in as the `rancher` user. Only the ISO is set to use autologin by default. If you run from a cloud or install to disk, SSH keys or a password of your choice is expected to be used.
### Install to Disk
After you boot RancherOS from ISO, you can follow the instructions [here]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/install-to-disk/) to install RancherOS to a hard disk.
After you boot RancherOS from ISO, you can follow the instructions [here]({{< baseurl >}}/os/v1.x/en/installation/server/install-to-disk/) to install RancherOS to a hard disk.
@@ -1,10 +1,12 @@
---
title: Using Docker Machine
weight: 101
aliases:
- /os/v1.x/en/installation/running-rancheros/workstation/docker-machine
---
Before we get started, you'll need to make sure that you have docker machine installed. Download it directly from the docker machine [releases](https://github.com/docker/machine/releases).
You also need to know the [memory requirements]({{< baseurl >}}/os/v1.x/en/#hardware-requirements).
You also need to know the [memory requirements]({{<baseurl>}}/os/v1.x/en/#hardware-requirements).
> **Note:** If you create a RancherOS instance using Docker Machine, you will not be able to upgrade your version of RancherOS.
@@ -116,7 +118,7 @@ Logging into RancherOS follows the standard Docker Machine commands. To login in
$ docker-machine ssh <MACHINE-NAME>
```
You'll be logged into RancherOS and can start exploring the OS, This will log you into the RancherOS VM. You'll then be able to explore the OS by [adding system services]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/), [customizing the configuration]({{< baseurl >}}/os/v1.x/en/installation/configuration/), and launching containers.
You'll be logged into RancherOS and can start exploring the OS, This will log you into the RancherOS VM. You'll then be able to explore the OS by [adding system services]({{< baseurl >}}/os/v1.x/en/system-services/), [customizing the configuration]({{< baseurl >}}/os/v1.x/en/configuration/), and launching containers.
If you want to exit out of RancherOS, you can exit by pressing `Ctrl+D`.
@@ -1,6 +1,8 @@
---
title: Configuring DNS
weight: 171
aliases:
- /os/v1.x/en/installation/networking/dns
---
If you wanted to configure the DNS through the cloud config file, you'll need to place DNS configurations within the `rancher` key.
@@ -1,6 +1,8 @@
---
title: Configuring Network Interfaces
weight: 170
aliases:
- /os/v1.x/en/installation/networking/interfaces
---
Using `ros config`, you can configure specific interfaces. Wildcard globbing is supported so `eth*` will match `eth1` and `eth2`. The available options you can configure are `address`, `gateway`, `mtu`, and `dhcp`.
@@ -1,6 +1,8 @@
---
title: Configuring Proxy Settings
weight: 172
aliases:
- /os/v1.x/en/installation/networking/proxy-settings
---
HTTP proxy settings can be set directly under the `network` key. This will automatically configure proxy settings for both Docker and System Docker.
+3 -3
View File
@@ -25,11 +25,11 @@ VMWare | 1GB | 1280MB (rancheros.iso) <br> 2048MB (ran
GCE | 1GB | 1280MB
AWS | 1GB | 1.7GB
You can adjust memory requirements by custom building RancherOS, please refer to [reduce-memory-requirements]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/#reduce-memory-requirements)
You can adjust memory requirements by custom building RancherOS, please refer to [reduce-memory-requirements]({{<baseurl>}}/os/v1.x/en/installation/custom-builds/custom-rancheros-iso/#reduce-memory-requirements)
### How RancherOS Works
Everything in RancherOS is a Docker container. We accomplish this by launching two instances of Docker. One is what we call **System Docker** and is the first process on the system. All other system services, like `ntpd`, `syslog`, and `console`, are running in Docker containers. System Docker replaces traditional init systems like `systemd` and is used to launch [additional system services]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/).
Everything in RancherOS is a Docker container. We accomplish this by launching two instances of Docker. One is what we call **System Docker** and is the first process on the system. All other system services, like `ntpd`, `syslog`, and `console`, are running in Docker containers. System Docker replaces traditional init systems like `systemd` and is used to launch [additional system services]({{< baseurl >}}/os/v1.x/en/system-services/).
System Docker runs a special container called **Docker**, which is another Docker daemon responsible for managing all of the users containers. Any containers that you launch as a user from the console will run inside this Docker. This creates isolation from the System Docker containers and ensures that normal user commands dont impact system services.
@@ -39,7 +39,7 @@ System Docker runs a special container called **Docker**, which is another Docke
### Running RancherOS
To get started with RancherOS, head over to our [Quick Start Guide]({{< baseurl >}}/os/v1.x/en/quick-start-guide/).
To get started with RancherOS, head over to our [Quick Start Guide]({{<baseurl>}}/os/v1.x/en/quick-start-guide/).
### Latest Release
@@ -3,7 +3,7 @@ title: Quick Start
weight: 1
---
If you have a specific RanchersOS machine requirements, please check out our [guides on running RancherOS]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/). With the rest of this guide, we'll start up a RancherOS using [Docker machine]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/docker-machine/) and show you some of what RancherOS can do.
If you have a specific RanchersOS machine requirements, please check out our [guides on running RancherOS]({{< baseurl >}}/os/v1.x/en/installation/platform/). With the rest of this guide, we'll start up a RancherOS using [Docker machine]({{< baseurl >}}/os/v1.x/en/installation/workstation//docker-machine/) and show you some of what RancherOS can do.
### Launching RancherOS using Docker Machine
@@ -120,7 +120,7 @@ $ sudo ros config get rancher.network.dns.nameservers
```
When using the native Busybox console, any changes to the console will be lost after reboots, only changes to `/home` or `/opt` will be persistent. You can use the `ros console switch` command to switch to a [persistent console]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-console/#console-persistence) and replace the native Busybox console. For example, to switch to the Ubuntu console:
When using the native Busybox console, any changes to the console will be lost after reboots, only changes to `/home` or `/opt` will be persistent. You can use the `ros console switch` command to switch to a [persistent console]({{<baseurl>}}/os/v1.x/en/installation/custom-builds/custom-console/#console-persistence) and replace the native Busybox console. For example, to switch to the Ubuntu console:
```
$ sudo ros console switch ubuntu
@@ -1,9 +1,15 @@
---
title: Additional Mounts
weight: 161
aliases:
- /os/v1.x/en/installation/storage/additional-mounts
---
Additional mounts can be specified as part of your [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config). These mounts are applied within the console container. Here's a simple example that mounts `/dev/vdb` to `/mnt/s`.
<<<<<<< HEAD:content/os/v1.x/en/installation/storage/additional-mounts/_index.md
Additional mounts can be specified as part of your [cloud-config]({{<baseurl>}}/os/v1.x/en/installation/configuration/#cloud-config). These mounts are applied within the console container. Here's a simple example that mounts `/dev/vdb` to `/mnt/s`.
=======
Additional mounts can be specified as part of your [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config). These mounts are applied within the console container. Here's a simple example that mounts `/dev/vdb` to `/mnt/s`.
>>>>>>> Reorganize RancherOS docs:content/os/v1.x/en/storage/additional-mounts/_index.md
```yaml
#cloud-config
@@ -1,6 +1,8 @@
---
title: Persistent State Partition
weight: 160
aliases:
- /os/v1.x/en/installation/storage/state-partition
---
RancherOS will store its state in a single partition specified by the `dev` field. The field can be a device such as `/dev/sda1` or a logical name such `LABEL=state` or `UUID=123124`. The default value is `LABEL=RANCHER_STATE`. The file system type of that partition can be set to `auto` or a specific file system type such as `ext4`.
@@ -13,7 +15,7 @@ rancher:
dev: LABEL=RANCHER_STATE
```
For other labels such as `RANCHER_BOOT` and `RANCHER_OEM` and `RANCHER_SWAP`, please refer to [Custom partition layout]({{< baseurl >}}/os/v1.x/en/about/custom-partition-layout/).
For other labels such as `RANCHER_BOOT` and `RANCHER_OEM` and `RANCHER_SWAP`, please refer to [Custom partition layout]({{<baseurl>}}/os/v1.x/en/about/custom-partition-layout/).
### Autoformat
@@ -1,6 +1,8 @@
---
title: Using ZFS
weight: 162
aliases:
- /os/v1.x/en/installation/storage/using-zfs
---
#### Installing the ZFS service
@@ -19,7 +21,7 @@ $ sudo ros service logs --follow zfs
$ lsmod | grep zfs
```
> *Note:* if you switch consoles, you may need to re-run `ros up zfs`.
> *Note:* if you switch consoles, you may need to re-run `sudo ros service up zfs`.
#### Creating ZFS pools
@@ -1,6 +1,8 @@
---
title: System Services
weight: 140
aliases:
- /os/v1.x/en/installation/system-services/adding-system-services
---
A system service is a container that can be run in either System Docker or Docker. Rancher provides services that are already available in RancherOS by adding them to the [os-services repo](https://github.com/rancher/os-services). Anything in the `index.yml` file from the repository for the tagged release will be an available system service when using the `ros service list` command.
@@ -1,9 +1,11 @@
---
title: Custom System Services
weight: 141
aliases:
- /os/v1.x/en/installation/system-services/custom-system-services
---
You can also create your own system service in [Docker Compose](https://docs.docker.com/compose/) format. After creating your own custom service, you can launch it in RancherOS in a couple of methods. The service could be directly added to the [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config), or a `docker-compose.yml` file could be saved at a http(s) url location or in a specific directory of RancherOS.
You can also create your own system service in [Docker Compose](https://docs.docker.com/compose/) format. After creating your own custom service, you can launch it in RancherOS in a couple of methods. The service could be directly added to the [cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/#cloud-config), or a `docker-compose.yml` file could be saved at a http(s) url location or in a specific directory of RancherOS.
### Launching Services through Cloud-Config
@@ -1,6 +1,8 @@
---
title: Environment
weight: 143
aliases:
- /os/v1.x/en/installation/system-services/environment
---
The [environment key](https://docs.docker.com/compose/compose-file/#environment) can be used to customize system services. When a value is not assigned, RancherOS looks up the value from the `rancher.environment` key.
@@ -1,6 +1,8 @@
---
title: System Docker Volumes
weight: 142
aliases:
- /os/v1.x/en/installation/system-services/system-docker-volumes
---
A few services are containers in `created` state. Their purpose is to provide volumes for other services.
+3 -3
View File
@@ -9,7 +9,7 @@ Since RancherOS is a kernel and initrd, the upgrade process is downloading a new
Before upgrading to any version, please review the release notes on our [releases page](https://github.com/rancher/os/releases) in GitHub to review any updates in the release.
> **Note:** If you are using [`docker-machine`]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/workstation/docker-machine/) then you will not be able to upgrade your RancherOS version. You need to delete and re-create the machine.
> **Note:** If you are using [`docker-machine`]({{< baseurl >}}/os/v1.x/en/installation/workstation//docker-machine/) then you will not be able to upgrade your RancherOS version. You need to delete and re-create the machine.
### Version Control
@@ -64,7 +64,7 @@ $ sudo ros -v
ros version v0.5.0
```
> **Note:** If you are booting from ISO and have not installed to disk, your upgrade will not be saved. You can view our guide to [installing to disk]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/install-to-disk/).
> **Note:** If you are booting from ISO and have not installed to disk, your upgrade will not be saved. You can view our guide to [installing to disk]({{< baseurl >}}/os/v1.x/en/installation/server/install-to-disk/).
#### Upgrading to a Specific Version
@@ -114,7 +114,7 @@ ros version 0.4.4
<br>
> **Note:** If you are using a [persistent console]({{< baseurl >}}/os/v1.x/en/installation/custom-builds/custom-console/#console-persistence) and in the current version's console, rolling back is not supported. For example, rolling back to v0.4.5 when using a v0.5.0 persistent console is not supported.
> **Note:** If you are using a [persistent console]({{<baseurl>}}/os/v1.x/en/installation/custom-builds/custom-console/#console-persistence) and in the current version's console, rolling back is not supported. For example, rolling back to v0.4.5 when using a v0.5.0 persistent console is not supported.
### Staging an Upgrade
+6 -5
View File
@@ -8,13 +8,14 @@ insertOneSix: true
weight: 1
ctaBanner: intro-k8s-rancher-online-training
---
Rancher was originally built to work with multiple orchestrators, and it included its own orchestrator called Cattle. With the rise of Kubernetes in the marketplace, Rancher 2.x exclusively deploys and manages Kubernetes clusters running anywhere, on any provider.
# What's New?
Rancher can provision Kubernetes from a hosted provider, provision compute nodes and then install Kubernetes onto them, or import existing Kubernetes clusters running anywhere.
Rancher was originally built to work with multiple orchestrators, and it included its own orchestrator called Cattle. With the rise of Kubernetes in the marketplace, Rancher now exclusively deploys and manages multiple Kubernetes clusters running anywhere, on any provider. It can provision Kubernetes from a hosted provider, provision compute nodes and then install Kubernetes onto them, or inherit existing Kubernetes clusters running anywhere.
One Rancher server installation can manage thousands of Kubernetes clusters and thousands of nodes from the same user interface.
One Rancher server installation can manage hundreds of Kubernetes clusters from the same interface.
Rancher adds significant value on top of Kubernetes, first by centralizing authentication and role-based access control (RBAC) for all of the clusters, giving global admins the ability to control cluster access from one location.
Rancher adds significant value on top of Kubernetes, first by centralizing role-based access control (RBAC) for all of the clusters and giving global admins the ability to control cluster access from one location. It then enables detailed monitoring and alerting for clusters and their resources, ships logs to external providers, and integrates directly with Helm via the Application Catalog. If you have an external CI/CD system, you can plug it into Rancher, but if you don't, Rancher even includes a pipeline engine to help you automatically deploy and upgrade workloads.
It then enables detailed monitoring and alerting for clusters and their resources, ships logs to external providers, and integrates directly with Helm via the Application Catalog. If you have an external CI/CD system, you can plug it into Rancher, but if you don't, Rancher even includes a pipeline engine to help you automatically deploy and upgrade workloads.
Rancher is a _complete_ container management platform for Kubernetes, giving you the tools to successfully run Kubernetes anywhere.
Rancher is a _complete_ container management platform for Kubernetes, giving you the tools to successfully run Kubernetes anywhere.
@@ -9,7 +9,7 @@ aliases:
- /rancher/v2.x/en/admin-settings/log-in/
---
After installation, the [system administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) should configure Rancher to configure authentication, authorization, security, default settings, security policies, drivers and global DNS entries.
After installation, the [system administrator]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) should configure Rancher to configure authentication, authorization, security, default settings, security policies, drivers and global DNS entries.
## First Log In
@@ -21,7 +21,7 @@ After you log into Rancher for the first time, Rancher will prompt you for a **R
One of the key features that Rancher adds to Kubernetes is centralized user authentication. This feature allows to set up local users and/or connect to an external authentication provider. By connecting to an external authentication provider, you can leverage that provider's user and groups.
For more information how authentication works and how to configure each provider, see [Authentication]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/).
For more information how authentication works and how to configure each provider, see [Authentication]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/).
## Authorization
@@ -33,13 +33,13 @@ For more information how authorization works and how to customize roles, see [Ro
_Pod Security Policies_ (or PSPs) are objects that control security-sensitive aspects of pod specification, e.g. root privileges. If a pod does not meet the conditions specified in the PSP, Kubernetes will not allow it to start, and Rancher will display an error message.
For more information how to create and use PSPs, see [Pod Security Policies]({{< baseurl >}}/rancher/v2.x/en/admin-settings/pod-security-policies/).
For more information how to create and use PSPs, see [Pod Security Policies]({{<baseurl>}}/rancher/v2.x/en/admin-settings/pod-security-policies/).
## Provisioning Drivers
Drivers in Rancher allow you to manage which providers can be used to provision [hosted Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) or [nodes in an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) to allow Rancher to deploy and manage Kubernetes.
Drivers in Rancher allow you to manage which providers can be used to provision [hosted Kubernetes clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) or [nodes in an infrastructure provider]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) to allow Rancher to deploy and manage Kubernetes.
For more information, see [Provisioning Drivers]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/).
For more information, see [Provisioning Drivers]({{<baseurl>}}/rancher/v2.x/en/admin-settings/drivers/).
## Adding Kubernetes Versions into Rancher
@@ -47,9 +47,9 @@ _Available as of v2.3.0_
With this feature, you can upgrade to the latest version of Kubernetes as soon as it is released, without upgrading Rancher. This feature allows you to easily upgrade Kubernetes patch versions (i.e. `v1.15.X`), but not intended to upgrade Kubernetes minor versions (i.e. `v1.X.0`) as Kubernetes tends to deprecate or add APIs between minor versions.
The information that Rancher uses to provision [RKE clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) is now located in the Rancher Kubernetes Metadata. For details on metadata configuration and how to change the Kubernetes version used for provisioning RKE clusters, see [Rancher Kubernetes Metadata.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/k8s-metadata/)
The information that Rancher uses to provision [RKE clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) is now located in the Rancher Kubernetes Metadata. For details on metadata configuration and how to change the Kubernetes version used for provisioning RKE clusters, see [Rancher Kubernetes Metadata.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/k8s-metadata/)
Rancher Kubernetes Metadata contains Kubernetes version information which Rancher uses to provision [RKE clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/).
Rancher Kubernetes Metadata contains Kubernetes version information which Rancher uses to provision [RKE clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/).
For more information on how metadata works and how to configure metadata config, see [Rancher Kubernetes Metadata]({{<baseurl>}}/rancher/v2.x/en/admin-settings/k8s-metadata/).
@@ -7,11 +7,11 @@ aliases:
If your organization uses Microsoft Active Directory as central user repository, you can configure Rancher to communicate with an Active Directory server to authenticate users. This allows Rancher admins to control access to clusters and projects based on users and groups managed externally in the Active Directory, while allowing end-users to authenticate with their AD credentials when logging in to the Rancher UI.
Rancher uses LDAP to communicate with the Active Directory server. The authentication flow for Active Directory is therefore the same as for the [OpenLDAP authentication]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/openldap) integration.
Rancher uses LDAP to communicate with the Active Directory server. The authentication flow for Active Directory is therefore the same as for the [OpenLDAP authentication]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/openldap) integration.
> **Note:**
>
> Before you start, please familiarise yourself with the concepts of [External Authentication Configuration and Principal Users]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
> Before you start, please familiarise yourself with the concepts of [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
## Prerequisites
@@ -196,4 +196,4 @@ In the same way, we can observe that the value in the **memberOf** attribute in
## Annex: Troubleshooting
If you are experiencing issues while testing the connection to the Active Directory server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpointing the problem cause. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{< baseurl >}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation.
If you are experiencing issues while testing the connection to the Active Directory server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{<baseurl>}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation.
- [1. Register Rancher with Azure](#1-register-rancher-with-azure)
- [2. Create a new client secret](#2-create-a-new-client-secret)
- [3. Set Required Permissions for Rancher](#3-set-required-permissions-for-rancher)
- [4. Copy Azure Application Data](#4-copy-azure-application-data)
- [5. Configure Azure AD in Rancher](#5-configure-azure-ad-in-rancher)
<!-- /TOC -->
Before enabling Azure AD within Rancher, you must register Rancher with Azure.
1. Use search to open the **App registrations** service.
![Open App Registrations]({{<baseurl>}}/img/rancher/search-app-registrations.png)
1. Click **New registration** and complete the **Create** form.
![New App Registration]({{<baseurl>}}/img/rancher/new-app-registration.png)
1. Enter a **Name** (something like `Rancher`).
1. From **Supported account types**, select "Accounts in this organizational directory only (AzureADTest only - Single tenant)". This corresponds to the legacy app registration options.
1. In the **Redirect URI** section, make sure **Web** is selected from the dropdown and enter the URL of your Rancher Server in the text box next to the dropdown. This Rancher server URL should be appended with the verification path: `<MY_RANCHER_URL>/verify-auth-azure`.
1. Click **Register**.

>**Tip:** You can find your personalized Azure reply URL in Rancher on the Azure AD Authentication page (Global View > Security Authentication > Azure AD).

>**Note:** It can take up to five minutes for this change to take effect, so don't be alarmed if you can't authenticate immediately after Azure AD configuration.

### 2. Create a new client secret

From the Azure portal, create a client secret. Rancher will use this key to authenticate with Azure AD.
1. Use search to open the **App registrations** service. Then open the entry for Rancher that you created in the last procedure.
![Open Rancher Registration]({{<baseurl>}}/img/rancher/open-rancher-app.png)
**Step Result:** A new blade opens for Rancher.
1. From the navigation pane on the left, click **Certificates and Secrets**.

1. Click **New client secret**.

    ![Create new client secret]({{<baseurl>}}/img/rancher/select-client-secret.png)

1. Enter a **Description** (something like `Rancher`).

1. Select a **Duration** for the key. This drop-down sets the expiration date for the key. Shorter durations are more secure, but require you to create a new key after expiration.

1. Click **Add** (you don't need to enter a value—it will automatically populate after you save).
<a id="secret"></a>
1. Copy the key value and save it to an [empty text file](#tip).
Next, set API permissions for Rancher within Azure.
1. From the navigation pane on the left, select **API permissions**.

    ![Open Required Permissions]({{<baseurl>}}/img/rancher/select-required-permissions.png)

1. Click **Add a permission**.

1. From the **Azure Active Directory Graph**, select the following **Delegated Permissions**:
<br/>
<br/>
- **Access the directory as the signed-in user**
- **Read all users' basic profiles**
- **Sign in and read user profile**
1. Click **Add permissions**.
1. From **API permissions**, click **Grant admin consent**. Then click **Yes**.
>**Note:** You must be signed in as an Azure administrator to successfully save your permission settings.
To use Azure AD with Rancher you must whitelist Rancher with Azure.
1. From the **Setting** blade, select **Reply URLs**.
![Azure: Enter Reply URL]({{<baseurl>}}/img/rancher/enter-azure-reply-url.png)
1. From the **Reply URLs** blade, enter the URL of your Rancher Server, appended with the verification path: `<MY_RANCHER_URL>/verify-auth-azure`.
As your final step in Azure, copy the data that you'll use to configure Rancher.
1. Use search to open the **Azure Active Directory** service.
![Open Azure Active Directory]({{<baseurl>}}/img/rancher/search-azure-ad.png)
1. From the left navigation pane, open **Overview**.
2. Copy the **Directory ID** and paste it into your [text file](#tip).
1. Use search to open **App registrations**.
![Open App Registrations]({{<baseurl>}}/img/rancher/search-app-registrations.png)
1. Find the entry you created for Rancher.
1. From **App registrations**, click **Endpoints**.
![Click Endpoints]({{<baseurl>}}/img/rancher/click-endpoints.png)
2. Copy the following endpoints to your clipboard and paste them into your [text file](#tip) (these values will be your Rancher endpoint values).
>**Note:** Copy the v1 version of the endpoints.
### 5. Configure Azure AD in Rancher
From the Rancher UI, enter information about your AD instance hosted in Azure to complete configuration.
If your organization uses FreeIPA for user authentication, you can configure Rancher to allow your users to log in using their FreeIPA credentials.
>
>- You must have a [FreeIPA Server](https://www.freeipa.org/) configured.
>- Create a service account in FreeIPA with `read-only` access. Rancher uses this account to verify group membership when a user makes a request using an API key.
>- Read [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
1. Sign into Rancher using a local user assigned the `administrator` role (i.e., the _local principal_).
In environments using GitHub, you can configure Rancher to allow sign on using GitHub credentials.
>**Prerequisites:** Read [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
1. Sign into Rancher using a local user assigned the `administrator` role (i.e., the _local principal_).
If your organization uses Keycloak Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials. Configure your Keycloak client with the following settings:
Setting | Value
------- | -----
`Sign Documents` | `ON` <sup>1</sup>
`Sign Assertions` | `ON` <sup>1</sup>
All other `ON/OFF` Settings | `OFF`
`Client ID` | `https://yourRancherHostURL/v1-saml/keycloak/saml/metadata`<sup>2</sup>
`Client Name` | <CLIENT_NAME> (e.g. `rancher`)
`Client Protocol` | `SAML`
`Valid Redirect URI` | `https://yourRancherHostURL/v1-saml/keycloak/saml/acs`
><sup>1</sup>: Optionally, you can enable either one or both of these settings.
><sup>2</sup>: Rancher SAML metadata won't be generated until a SAML provider is configured and saved.
- Export a `metadata.xml` file from your Keycloak client:
From the `Installation` tab, choose the `SAML Metadata IDPSSODescriptor` format option and download your file.
## Annex: Troubleshooting
If you are experiencing issues while testing the connection to the Keycloak server, first double-check the configuration options of your SAML client. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{<baseurl>}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation.
### You are not redirected to Keycloak
You are correctly redirected to your IdP login page and you are able to enter your credentials, but you are not logged in to Rancher afterwards.
* Check the Rancher debug log.
* If the log displays `ERROR: either the Response or Assertion must be signed`, make sure either `Sign Documents` or `Sign Assertions` is set to `ON` in your Keycloak client.
### HTTP 502 when trying to access /v1-saml/keycloak/saml/metadata
This is usually due to the metadata not being created until a SAML provider is configured.
Try configuring and saving Keycloak as your SAML provider and then accessing the metadata.
### Keycloak Error: "We're sorry, failed to process response"
* Check your Keycloak log.
If your organization uses Microsoft Active Directory Federation Services (AD FS), you can configure Rancher to allow your users to log in using their AD FS credentials.
Setting up Microsoft AD FS with Rancher Server requires configuring AD FS on your Active Directory server, and configuring Rancher to utilize your AD FS server. The following pages serve as guides for setting up Microsoft AD FS authentication on your Rancher installation.
- [1 — Configuring Microsoft AD FS for Rancher]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup)
- [2 — Configuring Rancher for Microsoft AD FS]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup)
{{< saml_caveats >}}
### [Next: Configuring Microsoft AD FS for Rancher]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup)
**Result:** You've added Rancher as a relying trust party. Now you can configure Rancher to leverage AD.
### [Next: Configuring Rancher for Microsoft AD FS]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/)
_Available as of v2.0.7_
After you complete [Configuring Microsoft AD FS for Rancher]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/), enter your AD FS information into Rancher to allow AD FS users to authenticate with Rancher.
>**Important Notes For Configuring Your AD FS Server:**
>
_Available as of v2.0.5_
If your organization uses LDAP for user authentication, you can configure Rancher to communicate with an OpenLDAP server to authenticate users. This allows Rancher admins to control access to clusters and projects based on users and groups managed externally in the organization's central user repository, while allowing end-users to authenticate with their LDAP credentials when logging in to the Rancher UI.
## Prerequisites
Rancher must be configured with a LDAP bind account (aka service account) to search and retrieve LDAP entries pertaining to users and groups that should have access to Rancher.
>
> If the certificate used by the OpenLDAP server is self-signed or not from a recognized certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) at hand in PEM format. You will have to paste in this certificate during the configuration so that Rancher is able to validate the certificate chain.
## Configure OpenLDAP in Rancher
Configure the settings for the OpenLDAP server, groups and users. For help filling out each field, refer to the [configuration reference.](../openldap-config)
> Before you proceed with the configuration, please familiarize yourself with the concepts of [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
1. Log into the Rancher UI using the initial local `admin` account.
2. From the **Global** view, navigate to **Security** > **Authentication**.
3. Select **OpenLDAP**. The **Configure an OpenLDAP server** form will be displayed.
### Test Authentication
Once you have completed the configuration, proceed by testing the connection to the OpenLDAP server. Authentication with OpenLDAP will be enabled implicitly if the test is successful.
## Annex: Troubleshooting
If you are experiencing issues while testing the connection to the OpenLDAP server, first double-check the credentials entered for the service account as well as the search base configuration. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{<baseurl>}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation.
---
title: OpenLDAP Configuration Reference
weight: 2
---
This section is intended to be used as a reference when setting up an OpenLDAP authentication provider in Rancher.
For further details on configuring OpenLDAP, refer to the [official documentation.](https://www.openldap.org/doc/)
> Before you proceed with the configuration, please familiarize yourself with the concepts of [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
- [Background: OpenLDAP Authentication Flow](#background-openldap-authentication-flow)
- [OpenLDAP server configuration](#openldap-server-configuration)
- [User/group schema configuration](#user-group-schema-configuration)
- [User schema configuration](#user-schema-configuration)
- [Group schema configuration](#group-schema-configuration)
## Background: OpenLDAP Authentication Flow
1. When a user attempts to log in with their LDAP credentials, Rancher creates an initial bind to the LDAP server using a service account with permissions to search the directory and read user/group attributes.
2. Rancher then searches the directory for the user by using a search filter based on the provided username and configured attribute mappings.
3. Once the user has been found, they are authenticated with another LDAP bind request using the user's DN and provided password.
4. Once authentication succeeds, Rancher resolves the group memberships both from the membership attribute in the user's object and by performing a group search based on the configured user mapping attribute.
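The four steps above can be sketched in Python against a toy in-memory "directory". This is a simulation of the flow, not Rancher's implementation; all DNs, attribute names, and credentials below are invented for the example.

```python
# Toy directory: DN -> attributes. All names here are illustrative.
DIRECTORY = {
    "cn=svc-rancher,dc=acme,dc=com": {"userPassword": "svc-secret"},
    "uid=jdoe,ou=people,dc=acme,dc=com": {
        "objectClass": "inetOrgPerson",
        "uid": "jdoe",
        "userPassword": "hunter2",
        "memberOf": ["cn=devs,ou=groups,dc=acme,dc=com"],
    },
    "cn=devs,ou=groups,dc=acme,dc=com": {
        "objectClass": "groupOfNames",
        "member": ["uid=jdoe,ou=people,dc=acme,dc=com"],
    },
}

def bind(dn, password):
    # An LDAP simple bind is an authenticated connection attempt as a DN.
    entry = DIRECTORY.get(dn)
    return entry is not None and entry.get("userPassword") == password

def authenticate(username, password):
    # 1. Initial bind with the service account.
    assert bind("cn=svc-rancher,dc=acme,dc=com", "svc-secret")
    # 2. Search for the user by the configured login attribute (here: uid).
    matches = [dn for dn, e in DIRECTORY.items() if e.get("uid") == username]
    if not matches:
        return None
    user_dn = matches[0]
    # 3. Re-bind as the found user with the supplied password.
    if not bind(user_dn, password):
        return None
    # 4. Resolve groups from the user's memberOf attribute AND by a
    #    group search on the group member attribute.
    groups = set(DIRECTORY[user_dn].get("memberOf", []))
    groups |= {dn for dn, e in DIRECTORY.items()
               if user_dn in e.get("member", [])}
    return {"dn": user_dn, "groups": sorted(groups)}
```

For example, `authenticate("jdoe", "hunter2")` succeeds and returns the user's DN with the resolved groups, while a wrong password or unknown username returns `None`.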
# OpenLDAP Server Configuration
You will need to enter the address, port, and protocol to connect to your OpenLDAP server. `389` is the standard port for insecure traffic, `636` for TLS traffic.
> **Using TLS?**
>
> If the certificate used by the OpenLDAP server is self-signed or not from a recognized certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) at hand in PEM format. You will have to paste in this certificate during the configuration so that Rancher is able to validate the certificate chain.
If you are in doubt about the correct values to enter in the user/group Search Base configuration fields, consult your LDAP administrator or refer to the section [Identify Search Base and Schema using ldapsearch]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/ad/#annex-identify-search-base-and-schema-using-ldapsearch) in the Active Directory authentication documentation.
<figcaption>OpenLDAP Server Parameters</figcaption>
| Parameter | Description |
|:--|:--|
| Hostname | Specify the hostname or IP address of the OpenLDAP server |
| Port | Specify the port at which the OpenLDAP server is listening for connections. Unencrypted LDAP normally uses the standard port of 389, while LDAPS uses port 636.|
| TLS | Check this box to enable LDAP over SSL/TLS (commonly known as LDAPS). You will also need to paste in the CA certificate if the server uses a self-signed/enterprise-signed certificate. |
| Server Connection Timeout | The duration in number of seconds that Rancher waits before considering the server unreachable. |
| Service Account Distinguished Name | Enter the Distinguished Name (DN) of the user that should be used to bind, search and retrieve LDAP entries. (see [Prerequisites](#prerequisites)). |
| Service Account Password | The password for the service account. |
| User Search Base | Enter the Distinguished Name of the node in your directory tree from which to start searching for user objects. All users must be descendents of this base DN. For example: "ou=people,dc=acme,dc=com".|
| Group Search Base | If your groups live under a different node than the one configured under `User Search Base` you will need to provide the Distinguished Name here. Otherwise leave this field empty. For example: "ou=groups,dc=acme,dc=com".|
# User/Group Schema Configuration
If your OpenLDAP directory deviates from the standard OpenLDAP schema, you must complete the **Customize Schema** section to match it.
Note that the attribute mappings configured in this section are used by Rancher to construct search filters and resolve group membership. It is therefore always recommended to verify that the configuration here matches the schema used in your OpenLDAP.
If you are unfamiliar with the user/group schema used in the OpenLDAP server, consult your LDAP administrator or refer to the section [Identify Search Base and Schema using ldapsearch]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/ad/#annex-identify-search-base-and-schema-using-ldapsearch) in the Active Directory authentication documentation.
### User Schema Configuration
The table below details the parameters for the user schema configuration.
<figcaption>User Schema Configuration Parameters</figcaption>
| Parameter | Description |
|:--|:--|
| Object Class | The name of the object class used for user objects in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) |
| Username Attribute | The user attribute whose value is suitable as a display name. |
| Login Attribute | The attribute whose value matches the username part of credentials entered by your users when logging in to Rancher. This is typically `uid`. |
| User Member Attribute | The user attribute containing the Distinguished Name of groups a user is member of. Usually this is one of `memberOf` or `isMemberOf`. |
| Search Attribute | When a user enters text to add users or groups in the UI, Rancher queries the LDAP server and attempts to match users by the attributes provided in this setting. Multiple attributes can be specified by separating them with the pipe ("\|") symbol. |
| User Enabled Attribute | If the schema of your OpenLDAP server supports a user attribute whose value can be evaluated to determine if the account is disabled or locked, enter the name of that attribute. The default OpenLDAP schema does not support this and the field should usually be left empty. |
| Disabled Status Bitmask | This is the value for a disabled/locked user account. The parameter is ignored if `User Enabled Attribute` is empty. |
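To make the `Object Class`, `Login Attribute`, and pipe-separated `Search Attribute` settings concrete, the sketch below shows how an LDAP client typically assembles search filters from them. The helper names are illustrative and the exact filters Rancher generates may differ; the point is that only the bare class name is configured, while the `(&(objectClass=...))` wrapper is added when the filter is built.

```python
def escape(value: str) -> str:
    # Escape LDAP filter metacharacters (RFC 4515); backslash must go first.
    for ch, repl in (("\\", r"\5c"), ("*", r"\2a"), ("(", r"\28"),
                     (")", r"\29"), ("\0", r"\00")):
        value = value.replace(ch, repl)
    return value

def login_filter(object_class: str, login_attr: str, username: str) -> str:
    # Match one user by the configured login attribute.
    return f"(&(objectClass={object_class})({login_attr}={escape(username)}))"

def search_filter(object_class: str, search_attrs: str, text: str) -> str:
    # search_attrs is pipe-separated, e.g. "uid|cn|mail": OR over each one.
    ors = "".join(f"({a}={escape(text)}*)" for a in search_attrs.split("|"))
    return f"(&(objectClass={object_class})(|{ors}))"
```

For example, `login_filter("inetOrgPerson", "uid", "jdoe")` yields `(&(objectClass=inetOrgPerson)(uid=jdoe))`, and `search_filter("inetOrgPerson", "uid|cn", "jo")` ORs a prefix match over both attributes.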
### Group Schema Configuration
The table below details the parameters for the group schema configuration.
<figcaption>Group Schema Configuration Parameters</figcaption>
| Parameter | Description |
|:--|:--|
| Object Class | The name of the object class used for group entries in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) |
| Name Attribute | The group attribute whose value is suitable for a display name. |
| Group Member User Attribute | The name of the **user attribute** whose format matches the group members in the `Group Member Mapping Attribute`. |
| Group Member Mapping Attribute | The name of the group attribute containing the members of a group. |
| Search Attribute | Attribute used to construct search filters when adding groups to clusters or projects in the UI. See description of user schema `Search Attribute`. |
| Group DN Attribute | The name of the group attribute whose format matches the values in the user's group membership attribute. See `User Member Attribute`. |
| Nested Group Membership | This setting defines whether Rancher should resolve nested group memberships. Use only if your organization makes use of these nested memberships (i.e., you have groups that contain other groups as members). This option is disabled if you are using Shibboleth. |
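Nested membership turns group resolution into a graph traversal rather than a single attribute lookup, which is one reason to leave it off unless your directory actually uses groups-of-groups. A cycle-safe, breadth-first sketch over a toy directory (the DNs and the `member` attribute are illustrative, not Rancher's implementation):

```python
def resolve_groups(user_dn, directory, member_attr="member", nested=True):
    """Collect the groups containing user_dn; with nested=True, also walk
    groups that contain those groups, breadth-first and cycle-safe.
    `directory` maps DN -> attribute dict."""
    direct = {dn for dn, e in directory.items()
              if user_dn in e.get(member_attr, [])}
    if not nested:
        return direct
    seen, queue = set(direct), list(direct)
    while queue:
        group_dn = queue.pop(0)
        # Any group that lists this group as a member is also a membership.
        for dn, e in directory.items():
            if group_dn in e.get(member_attr, []) and dn not in seen:
                seen.add(dn)
                queue.append(dn)
    return seen

DIR = {
    "cn=devs,ou=groups,dc=acme,dc=com": {
        "member": ["uid=jdoe,ou=people,dc=acme,dc=com"]},
    "cn=eng,ou=groups,dc=acme,dc=com": {
        "member": ["cn=devs,ou=groups,dc=acme,dc=com"]},
}
```

With `nested=True` the user in `cn=devs` is also resolved into `cn=eng`; with `nested=False` only the direct membership is returned.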
---
title: Configuring Shibboleth (SAML)
weight: 1210
---
_Available as of v2.4.0_
If your organization uses Shibboleth Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in to Rancher using their Shibboleth credentials.
In this configuration, when Rancher users log in, they will be redirected to the Shibboleth IdP to enter their credentials. After authentication, they will be redirected back to the Rancher UI.
If you also configure OpenLDAP as the back end to Shibboleth, it will return a SAML assertion to Rancher with user attributes that include groups. Then the authenticated user will be able to access resources in Rancher that their groups have permissions for.
> The instructions in this section assume that you understand how Rancher, Shibboleth, and OpenLDAP work together. For a more detailed explanation of how it works, refer to [this page.](./about)
This section covers the following topics:
- [Setting up Shibboleth in Rancher](#setting-up-shibboleth-in-rancher)
- [Shibboleth Prerequisites](#shibboleth-prerequisites)
- [Configure Shibboleth in Rancher](#configure-shibboleth-in-rancher)
- [SAML Provider Caveats](#saml-provider-caveats)
- [Setting up OpenLDAP in Rancher](#setting-up-openldap-in-rancher)
- [OpenLDAP Prerequisites](#openldap-prerequisites)
- [Configure OpenLDAP in Rancher](#configure-openldap-in-rancher)
- [Troubleshooting](#troubleshooting)
# Setting up Shibboleth in Rancher
### Shibboleth Prerequisites
>
>- You must have a Shibboleth IdP Server configured.
>- The following Rancher Service Provider URLs are needed for configuration:
>    - Metadata URL: `https://<rancher-server>/v1-saml/shibboleth/saml/metadata`
>    - Assertion Consumer Service (ACS) URL: `https://<rancher-server>/v1-saml/shibboleth/saml/acs`
>- Export a `metadata.xml` file from your IdP Server. For more information, see the [Shibboleth documentation.](https://wiki.shibboleth.net/confluence/display/SP3/Home)
### Configure Shibboleth in Rancher
If your organization uses Shibboleth for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.
1. From the **Global** view, select **Security > Authentication** from the main menu.
1. Select **Shibboleth**.
1. Complete the **Configure Shibboleth Account** form. Shibboleth IdP lets you specify which data store you want to use. You can either add a database or use an existing LDAP server. For example, if you select your Active Directory (AD) server, the examples below describe how you can map AD attributes to fields within Rancher.
1. **Display Name Field**: Enter the AD attribute that contains the display name of users (example: `displayName`).
1. **User Name Field**: Enter the AD attribute that contains the user name/given name (example: `givenName`).
1. **UID Field**: Enter an AD attribute that is unique to every user (example: `sAMAccountName`, `distinguishedName`).
1. **Groups Field**: Make entries for managing group memberships (example: `memberOf`).
1. **Rancher API Host**: Enter the URL for your Rancher Server.
1. **Private Key** and **Certificate**: This is a key and certificate pair used to establish a secure connection between Rancher and your IdP.
You can generate one using an openssl command. For example:
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
```
1. **IDP-metadata**: The `metadata.xml` file that you exported from your IdP server.
1. After you complete the **Configure Shibboleth Account** form, click **Authenticate with Shibboleth**, which is at the bottom of the page.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Shibboleth IdP to validate your Rancher Shibboleth configuration.
>**Note:** You may have to disable your popup blocker to see the IdP login page.
**Result:** Rancher is configured to work with Shibboleth. Your users can now sign into Rancher using their Shibboleth logins.
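The key/certificate pair from the openssl example above can be sanity-checked before uploading it. The commands below are a sketch that regenerates the pair and inspects the result; the file names and CN are placeholders:

```shell
# Generate a self-signed key/certificate pair as in the example above.
# "myservice.example.com" is a placeholder; use your own hostname.
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert \
  -days 365 -nodes -subj "/CN=myservice.example.com"

# Confirm the certificate parses and shows the expected subject and expiry.
openssl x509 -in myservice.cert -noout -subject -enddate

# Confirm the private key is a well-formed RSA key.
openssl rsa -in myservice.key -check -noout
```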
### SAML Provider Caveats
If you configure Shibboleth without OpenLDAP, the following caveats apply because the SAML protocol does not support searching or looking up users or groups.
- There is no validation on users or groups when assigning permissions to them in Rancher.
- When adding users, the exact user IDs (i.e., the UID field) must be entered correctly. As you type the user ID, there is no search for other user IDs that may match.
- When adding groups, you must select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user.
- The group drop-down shows only the groups that you are a member of. You will not be able to add groups that you are not a member of.
To enable searching for groups when assigning permissions in Rancher, you will need to configure a back end for the SAML provider that supports groups, such as OpenLDAP.
# Setting up OpenLDAP in Rancher
If you also configure OpenLDAP as the back end to Shibboleth, it will return a SAML assertion to Rancher with user attributes that include groups. Then authenticated users will be able to access resources in Rancher that their groups have permissions for.
### OpenLDAP Prerequisites
Rancher must be configured with an LDAP bind account (also known as a service account) to search and retrieve LDAP entries for the users and groups that should have access. Do not use an administrator or personal account for this purpose; instead, create a dedicated account in OpenLDAP with read-only access to users and groups under the configured search base (see below).
> **Using TLS?**
>
> If the certificate used by the OpenLDAP server is self-signed or not from a recognized certificate authority, make sure you have the CA certificate (concatenated with any intermediate certificates) in PEM format at hand. You will have to paste this certificate in during configuration so that Rancher can validate the certificate chain.
### Configure OpenLDAP in Rancher
Configure the settings for the OpenLDAP server, groups and users. For help filling out each field, refer to the [configuration reference.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/openldap/openldap-config) Note that nested group membership is not available for Shibboleth.
> Before you proceed with the configuration, please familiarize yourself with the concepts of [External Authentication Configuration and Principal Users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/#external-authentication-configuration-and-principal-users).
1. Log into the Rancher UI using the initial local `admin` account.
2. From the **Global** view, navigate to **Security** > **Authentication**.
3. Select **OpenLDAP**. The **Configure an OpenLDAP server** form will be displayed.
# Troubleshooting
If you are experiencing issues while testing the connection to the OpenLDAP server, first double-check the credentials entered for the service account, as well as the search base configuration. You may also inspect the Rancher logs to help pinpoint the cause of the problem. Debug logs may contain more detailed information about the error. Please refer to [How can I enable debug logging]({{<baseurl>}}/rancher/v2.x/en/faq/technical/#how-can-i-enable-debug-logging) in this documentation.
---
title: Group Permissions with Shibboleth and OpenLDAP
weight: 1
---
_Available as of Rancher v2.4_
This page provides background information and context for Rancher users who intend to set up the Shibboleth authentication provider in Rancher.
Because Shibboleth is a SAML provider, it does not support searching for groups. While a Shibboleth integration can validate user credentials, it can't be used to assign permissions to groups in Rancher without additional configuration.
One solution to this problem is to configure an OpenLDAP identity provider. With an OpenLDAP back end for Shibboleth, you will be able to search for groups in Rancher and assign them to resources such as clusters, projects, or namespaces from the Rancher UI.
### Terminology
- **Shibboleth** is a single sign-on system for computer networks and the Internet. It allows people to sign in to various systems using just one identity. It validates user credentials but does not, on its own, handle group memberships.
- **SAML:** Security Assertion Markup Language, an open standard for exchanging authentication and authorization data between an identity provider and a service provider.
- **OpenLDAP:** a free, open-source implementation of the Lightweight Directory Access Protocol (LDAP). It is used to manage an organization's computers and users. OpenLDAP is useful for Rancher users because it supports groups. In Rancher, it is possible to assign permissions to groups so that they can access resources such as clusters, projects, or namespaces, as long as the groups already exist in the identity provider.
- **IdP or IDP:** An identity provider. OpenLDAP is an example of an identity provider.
### Adding OpenLDAP Group Permissions to Rancher Resources
The diagram below illustrates how members of an OpenLDAP group can access resources in Rancher that the group has permissions for.
For example, a cluster owner could add an OpenLDAP group to a cluster so that its members have permission to view most cluster-level resources and create new projects. The OpenLDAP group members will then have access to the cluster as soon as they log in to Rancher.
In this scenario, OpenLDAP allows the cluster owner to search for groups when assigning permissions. Without OpenLDAP, the functionality to search for groups would not be supported.
When a member of the OpenLDAP group logs in to Rancher, she is redirected to Shibboleth and enters her username and password.
Shibboleth validates her credentials, and retrieves user attributes from OpenLDAP, including groups. Then Shibboleth sends a SAML assertion to Rancher including the user attributes. Rancher uses the group data so that she can access all of the resources and permissions that her groups have permissions for.
![Adding OpenLDAP Group Permissions to Rancher Resources]({{<baseurl>}}/img/rancher/shibboleth-with-openldap-groups.svg)
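For context, the group data described above travels inside the SAML assertion as an attribute statement. The hypothetical fragment below sketches what that looks like; the attribute name (`memberOf`) and the group DN depend entirely on your Shibboleth and OpenLDAP configuration:

```xml
<!-- Illustrative fragment of a SAML assertion carrying group attributes.
     Attribute names and DNs are placeholders, not values Rancher requires. -->
<saml:AttributeStatement>
  <saml:Attribute Name="memberOf">
    <saml:AttributeValue>cn=devops,ou=groups,dc=example,dc=com</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
```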
Rancher relies on users and groups to determine who is allowed to log in to Rancher and which resources they can access. When you configure an external authentication provider, users from that provider will be able to log in to your Rancher server. When a user logs in, the authentication provider will supply your Rancher server with a list of groups to which the user belongs.
Access to clusters, projects, multi-cluster apps, and global DNS providers and entries can be controlled by adding either individual users or groups to these resources. When you add a group to a resource, all users who are members of that group in the authentication provider, will be able to access the resource with the permissions that you've specified for the group. For more information on roles and permissions, see [Role Based Access Control]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/).
## Managing Members
When adding a user or group to a resource, you can search for users or groups by beginning to type their name. The Rancher server will query the authentication provider to find users and groups that match what you've entered. Searching is limited to the authentication provider that you are currently logged in with. For example, if you've enabled GitHub authentication but are logged in using a [local]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/local/) user account, you will not be able to search for GitHub users or groups.
All users, whether they are local users or from an authentication provider, can be viewed and managed. From the **Global** view, click on **Users**.
weight: 1140
---
Drivers in Rancher allow you to manage which providers can be used to deploy [hosted Kubernetes clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) or [nodes in an infrastructure provider]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) to allow Rancher to deploy and manage Kubernetes.
### Rancher Drivers
_Available as of v2.2.0_
Cluster drivers are used to provision [hosted Kubernetes clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/), such as GKE, EKS, and AKS. Which cluster drivers are displayed when creating a cluster depends on each driver's status: only `active` cluster drivers are displayed as options for creating hosted Kubernetes clusters. By default, Rancher is packaged with several existing cluster drivers, but you can also create custom cluster drivers to add to Rancher.
By default, Rancher has activated several hosted Kubernetes cloud providers including:
* [Amazon EKS]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/)
* [Google GKE]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/)
* [Azure AKS]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/aks/)
There are several other hosted Kubernetes cloud providers that are disabled by default, but are packaged in Rancher:
* [Alibaba ACK]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack/)
* [Huawei CCE]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce/)
* [Tencent]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke/)
## Node Drivers
Rancher supports several major cloud providers. By default, these node drivers are active and available for deployment:
* [Amazon EC2]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/)
* [Azure]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/)
* [Digital Ocean]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/)
* [vSphere]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/)
_Available as of v2.2.0_
Cluster drivers are used to create clusters in a [hosted Kubernetes provider]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/), such as Google GKE. Which cluster drivers are displayed when creating a cluster depends on each driver's status: only `active` cluster drivers are displayed as options for creating clusters. By default, Rancher is packaged with several existing cloud provider cluster drivers, but you can also add custom cluster drivers to Rancher.
If there are specific cluster drivers that you do not want to show your users, you may deactivate those cluster drivers within Rancher and they will not appear as an option for cluster creation.
>**Prerequisites:** To create, edit, or delete cluster drivers, you need _one_ of the following permissions:
>
>- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/)
>- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Cluster Drivers]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned.
## Activating/Deactivating Cluster Drivers
>**Prerequisites:** To create, edit, or delete drivers, you need _one_ of the following permissions:
>
>- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/)
>- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Node Drivers]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned.
## Activating/Deactivating Node Drivers
The RKE metadata config controls how often Rancher syncs metadata and where it downloads data from. You can configure the metadata from the settings in the Rancher UI, or through the Rancher API at the endpoint `v3/settings/rke-metadata-config`.
The way that the metadata is configured depends on the Rancher version.
{{% tabs %}}
{{% tab "Rancher v2.4+" %}}
To edit the metadata config in Rancher,
1. Go to the **Global** view and click the **Settings** tab.
1. Go to the **rke-metadata-config** section. Click the **&#8942;** and click **Edit.**
1. You can optionally fill in the following parameters:
- `refresh-interval-minutes`: This is the interval, in minutes, at which Rancher syncs the metadata. To disable the periodic refresh, set `refresh-interval-minutes` to 0.
- `url`: This is the HTTP path that Rancher fetches data from. The path must be a direct path to a JSON file. For example, the default URL for Rancher v2.4 is `https://releases.rancher.com/kontainer-driver-metadata/release-v2.4/data.json`.
If you don't have an air gap setup, you don't need to specify the URL where Rancher gets the metadata, because the default setting is to pull from [Rancher's metadata Git repository.](https://github.com/rancher/kontainer-driver-metadata/blob/dev-v2.5/data/data.json)
However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL to point to the new location of the JSON file.
{{% /tab %}}
{{% tab "Rancher v2.3" %}}
To edit the metadata config in Rancher,
1. Go to the **Global** view and click the **Settings** tab.
1. Go to the **rke-metadata-config** section. Click the **&#8942;** and click **Edit.**
1. You can optionally fill in the following parameters:
- `refresh-interval-minutes`: This is the interval, in minutes, at which Rancher syncs the metadata. To disable the periodic refresh, set `refresh-interval-minutes` to 0.
If you don't have an air gap setup, you don't need to specify the URL or Git branch where Rancher gets the metadata, because the default setting is to pull from [Rancher's metadata Git repository.](https://github.com/rancher/kontainer-driver-metadata.git)
However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL and Git branch in the `rke-metadata-config` settings to point to the new location of the repository.
{{% /tab %}}
{{% /tabs %}}
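Putting the two parameters together, an administrator editing the setting (in the UI or via the `v3/settings/rke-metadata-config` API endpoint) would supply values along these lines. This is a sketch of the fields, not an exact API schema, and the mirror URL is a placeholder for your own hosted copy of the JSON file:

```json
{
  "refresh-interval-minutes": "0",
  "url": "https://mirror.example.com/kontainer-driver-metadata/data.json"
}
```

Setting `refresh-interval-minutes` to `0` disables the periodic refresh, which is the recommended configuration for air gap setups described below.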
### Air Gap Setups
If you have an air gap setup, you might not be able to get the automatic periodic refresh of the Kubernetes metadata from Rancher's Git repository. In that case, you should disable the periodic refresh to prevent your logs from showing errors. Optionally, you can configure your metadata settings so that Rancher can sync with a local copy of the RKE metadata.
To sync Rancher with a local mirror of the RKE metadata, an administrator would configure the `rke-metadata-config` settings to point to the mirror. For details, refer to [Configuring the Metadata Synchronization.](#configuring-the-metadata-synchronization)
After new Kubernetes versions are loaded into the Rancher setup, additional steps are required before they can be used to launch clusters. Rancher needs access to updated system images. While the metadata settings can only be changed by administrators, any user can download the Rancher system images and prepare a private Docker registry for them.
_Pod Security Policies_ (or PSPs) are objects that control security-sensitive aspects of pod specification (like root privileges). If a pod does not meet the conditions specified in the PSP, Kubernetes will not allow it to start, and Rancher will display an error message of `Pod <NAME> is forbidden: unable to validate...`.
> **Note:** Assigning Pod Security Policies is only available for clusters that are [launched using RKE.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/)
- You can assign PSPs at the cluster or project level.
- PSPs work through inheritance.
You can add a Pod Security Policy (PSPs hereafter) in the following contexts:
- [When creating a cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/pod-security-policies/)
- [When editing an existing cluster]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/editing-clusters/)
- [When creating a project]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#creating-a-project/)
- [When editing an existing project]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/editing-projects/)
> **Note:** We recommend adding PSPs during cluster and project creation instead of adding them to an existing cluster or project.
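To make the kind of restriction a PSP enforces concrete, here is a deliberately minimal sketch of a restrictive PodSecurityPolicy manifest. It is not the exact manifest behind Rancher's built-in `restricted` policy; field choices are illustrative:

```yaml
# Minimal, illustrative PodSecurityPolicy: blocks privileged pods and
# containers running as root, and limits the allowed volume types.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example-restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```

A pod that requests `privileged: true` or runs as root would be rejected by this policy with the `Pod <NAME> is forbidden: unable to validate...` error described above.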
- /rancher/v2.x/en/concepts/global-configuration/users-permissions-roles/
---
Within Rancher, each person authenticates as a _user_, which is a login that grants you access to Rancher. As mentioned in [Authentication]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/), users can either be local or external.
After you configure external authentication, the users that display on the **Users** page changes.
Once the user logs in to Rancher, their _authorization_, or their access rights within the system, is determined by _global permissions_, and _cluster and project roles_.
- [Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/):
Define user authorization outside the scope of any particular cluster.
- [Cluster and Project Roles]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/):
Define user authorization inside the specific cluster or project where they are assigned the role.
To assign any custom role to an existing cluster member,
1. Go to the member you want to give the role to. Click the **&#8942; > View in API.**
1. In the **roleTemplateId** field, go to the drop-down menu and choose the role you want to assign to the member. Click **Show Request** and **Send Request.**
**Result:** The member has the assigned role.
There are two methods for changing default cluster/project roles:
- **Assign Custom Roles**: Create a [custom role]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles) for either your [cluster](#custom-cluster-roles) or [project](#custom-project-roles), and then set the custom role as default.
- **Assign Individual Roles**: Configure multiple [cluster](#cluster-role-reference)/[project](#project-role-reference) roles as default for assignment to the creating user.
>**Note:**
>
>- Although you can [lock]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/) a default role, the system still assigns the role to users who create a cluster/project.
>- Only users that create clusters/projects inherit their roles. Users added to the cluster/project membership afterward must be explicitly assigned their roles.
### Configuring Default Roles for Cluster and Project Creators
1. From the **Global** view, select **Security > Roles** from the main menu. Select either the **Cluster** or **Project** tab.
1. Find the custom or individual role that you want to use as default. Then edit the role by selecting **&#8942; > Edit**.
1. Enable the role as default.
{{% accordion id="cluster" label="For Clusters" %}}
- [Prerequisites](#prerequisites)
- [Creating a custom role for a cluster or project](#creating-a-custom-role-for-a-cluster-or-project)
- [Creating a custom global role](#creating-a-custom-global-role)
- [Deleting a custom global role](#deleting-a-custom-global-role)
- [Assigning a custom global role to a group](#assigning-a-custom-global-role-to-a-group)
To complete the tasks on this page, one of the following permissions is required:
- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/).
- [Custom Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#custom-global-permissions) with the [Manage Roles]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) role assigned.
## Creating A Custom Role for a Cluster or Project
1. **Name** the role.
1. Choose whether to set the role to a status of [locked]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/).
1. Choose whether to set the role to a status of [locked]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/).
> **Note:** Locked roles cannot be assigned to users.
@@ -93,9 +92,11 @@ The steps to add custom roles differ depending on the version of Rancher.
{{% /tab %}}
{{% /tabs %}}
## Creating a Custom Global Role that Copies Rules from an Existing Role
## Creating a Custom Global Role
_Available as of v2.4.0-alpha1_
_Available as of v2.4.0_
### Creating a Custom Global Role that Copies Rules from an Existing Role
If you have a group of individuals who need the same level of access in Rancher, it can save time to create a custom global role that copies all of the rules from an existing role, such as the administrator role, into a new role. This allows you to only configure the variations between the existing role and the new role.
@@ -104,15 +105,13 @@ The custom global role can then be assigned to a user or group so that the custo
To create a custom global role based on an existing role,
1. Go to the **Global** view and click **Security > Roles.**
1. On the **Global** tab, go to the role that the custom global role will be based on. Click **Ellipsis (…) > Clone.**
1. On the **Global** tab, go to the role that the custom global role will be based on. Click **&#8942; (…) > Clone.**
1. Enter a name for the role.
1. Optional: To assign the custom role default for new users, go to the **New User Default** section and click **Yes: Default role for new users.**
1. In the **Grant Resources** section, select the Kubernetes resource operations that will be enabled for users with the custom role.
1. Click **Save.**
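The clone step above can be sketched as copying the rules of an existing `GlobalRole` into a new one, after which only the variations need to be edited. This is a minimal illustration, not Rancher's implementation; the field names (`displayName`, `newUserDefault`, `rules`) are assumptions based on the `management.cattle.io/v3` GlobalRole object in Rancher v2.4:

```python
import copy

def clone_global_role(existing_role, new_name, display_name):
    """Copy the rules of an existing GlobalRole into a new role.

    Field names are assumptions based on the management.cattle.io/v3
    GlobalRole object in Rancher v2.4; verify against your cluster.
    """
    return {
        "apiVersion": "management.cattle.io/v3",
        "kind": "GlobalRole",
        "metadata": {"name": new_name},
        "displayName": display_name,
        "newUserDefault": False,  # "Yes: Default role for new users" sets this
        # Deep-copy so later edits to the clone don't mutate the source role.
        "rules": copy.deepcopy(existing_role.get("rules", [])),
    }

admin = {"rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}]}
clone = clone_global_role(admin, "restricted-admin", "Restricted Admin")
```

The deep copy matters: trimming permissions from the clone afterwards should not alter the role it was based on.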
## Creating a Custom Global Role that Does Not Copy Rules from Another Role
_Available as of v2.4.0-alpha1_
### Creating a Custom Global Role that Does Not Copy Rules from Another Role
Custom global roles don't have to be based on existing roles. To create a custom global role by choosing the specific Kubernetes resource operations that should be allowed for the role, follow these steps:
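For automation, the choices made in these UI steps (name, new-user default, and the granted resource operations) correspond to fields on a `GlobalRole` object. A minimal sketch, assuming the `management.cattle.io/v3` field names from Rancher v2.4 (`displayName`, `newUserDefault`, `rules`); verify them against your cluster before applying anything:

```python
def make_global_role(name, display_name, verbs, resources, default=False):
    """Build a GlobalRole manifest granting `verbs` on `resources`.

    Field names are assumptions based on the management.cattle.io/v3
    GlobalRole object in Rancher v2.4.
    """
    return {
        "apiVersion": "management.cattle.io/v3",
        "kind": "GlobalRole",
        "metadata": {"name": name},
        "displayName": display_name,
        "newUserDefault": default,  # "Yes: Default role for new users"
        "rules": [
            {
                "apiGroups": ["management.cattle.io"],
                "resources": resources,
                "verbs": verbs,
            }
        ],
    }

role = make_global_role(
    "view-clusters", "View Clusters", ["get", "list", "watch"], ["clusters"]
)
```

Each entry in **Grant Resources** maps to one rule: a set of verbs applied to a set of resources.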
@@ -125,7 +124,7 @@ Custom global roles don't have to be based on existing roles. To create a custom
## Deleting a Custom Global Role
_Available as of v2.4.0-alpha1_
_Available as of v2.4.0_
When deleting a custom global role, all global role bindings with this custom role are deleted.
@@ -136,12 +135,12 @@ Custom global roles can be deleted, but built-in roles cannot be deleted.
To delete a custom global role,
1. Go to the **Global** view and click **Security > Roles.**
2. On the **Global** tab, go to the custom global role that should be deleted and click **Ellipsis (…) > Delete.**
2. On the **Global** tab, go to the custom global role that should be deleted and click **&#8942; (…) > Delete.**
3. Click **Delete.**
## Assigning a Custom Global Role to a Group
_Available as of v2.4.0-alpha1_
_Available as of v2.4.0_
If you have a group of individuals who need the same level of access in Rancher, it can save time to create a custom global role. When the role is assigned to a group, the users in the group have the appropriate level of access the first time they sign into Rancher.
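Assigning a global role to a group can be sketched as a `GlobalRoleBinding` tying a role name to a group principal. The field names and the principal-ID format below are assumptions (Rancher v2.4, `management.cattle.io/v3`, with a hypothetical Active Directory principal); the actual IDs depend on your external authentication provider:

```python
def bind_global_role_to_group(binding_name, role_name, group_principal):
    """Build a GlobalRoleBinding assigning a global role to a group.

    Field names and the principal-ID format are assumptions based on
    the management.cattle.io/v3 API in Rancher v2.4.
    """
    return {
        "apiVersion": "management.cattle.io/v3",
        "kind": "GlobalRoleBinding",
        "metadata": {"name": binding_name},
        "globalRoleName": role_name,
        "groupPrincipalName": group_principal,
    }

binding = bind_global_role_to_group(
    "devs-view-clusters",
    "view-clusters",
    # Hypothetical principal ID; yours comes from the auth provider.
    "activedirectory_group://cn=devs,dc=example,dc=com",
)
```

One binding per group is enough; every member of the group picks up the role on first sign-in.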
@@ -164,4 +163,4 @@ To assign a custom global role to a group, follow these steps:
1. Optional: In the **Global Permissions** or **Built-in** sections, select any additional permissions that the group should have.
1. Click **Create.**
**Result:** The custom global role will take effect when the users in the group log into Rancher.
**Result:** The custom global role will take effect when the users in the group log into Rancher.
@@ -43,7 +43,7 @@ To see the default permissions for new users, go to the **Global** view and clic
Permissions can be assigned to an individual user with [these steps.](#configuring-global-permissions-for-existing-individual-users)
As of Rancher v2.4.0-alpha1, you can [assign a role to everyone in the group at the same time](#configuring-global-permissions-for-groups) if the external authentication provider supports groups.
As of Rancher v2.4.0, you can [assign a role to everyone in the group at the same time](#configuring-global-permissions-for-groups) if the external authentication provider supports groups.
# Custom Global Permissions
@@ -102,7 +102,7 @@ To change the default global permissions that are assigned to external users upo
1. From the **Global** view, select **Security > Roles** from the main menu. Make sure the **Global** tab is selected.
1. Find the permissions set that you want to add or remove as a default. Then edit the permission by selecting **Ellipsis > Edit**.
1. Find the permissions set that you want to add or remove as a default. Then edit the permission by selecting **&#8942; > Edit**.
1. If you want to add the permission as a default, select **Yes: Default role for new users** and then click **Save**.
@@ -116,7 +116,7 @@ To configure permission for a user,
1. Go to the **Users** tab.
1. On this page, go to the user whose access level you want to change and click **Ellipsis (...) > Edit.**
1. On this page, go to the user whose access level you want to change and click **&#8942; > Edit.**
1. In the **Global Permissions** section, click **Custom.**
@@ -128,7 +128,7 @@ To configure permission for a user,
### Configuring Global Permissions for Groups
_Available as of v2.4.0-alpha1_
_Available as of v2.4.0_
If you have a group of individuals who need the same level of access in Rancher, it can save time to assign permissions to the entire group at once, so that the users in the group have the appropriate level of access the first time they sign into Rancher.
@@ -27,11 +27,11 @@ If you want to prevent a role from being assigned to users, you can set it to a
You can lock roles in two contexts:
- When you're [adding a custom role]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/).
- When you're [adding a custom role]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/).
- When you're editing an existing role (see below).
1. From the **Global** view, select **Security** > **Roles**.
2. From the role that you want to lock (or unlock), select **Vertical Ellipsis (...)** > **Edit**.
2. From the role that you want to lock (or unlock), select **&#8942;** > **Edit**.
3. From the **Locked** option, choose the **Yes** or **No** radio button. Then click **Save**.