Merge branch 'master' into staging
@@ -5,6 +5,7 @@ title = "Rancher Labs"
theme = "rancher-website-theme"
themesDir = "node_modules"
pluralizeListTitles = false
timeout = 30000

enableRobotsTXT = true
pygmentsCodeFences = true
@@ -1,6 +1,6 @@
---
title: "K3s - 5 less than K8s"
shortTitle: K3s
date: 2019-02-05T09:52:46-07:00
name: "menu"
---
@@ -18,18 +18,14 @@ Great for:

What is this?
---

K3s is a fully compliant Kubernetes distribution with the following enhancements:

* An embedded SQLite database has replaced etcd as the default datastore. External datastores such as PostgreSQL, MySQL, and etcd are also supported.
* Simple but powerful "batteries-included" features have been added, such as: a local storage provider, a service load balancer, a Helm controller, and the Traefik ingress controller.
* Operation of all Kubernetes control plane components is encapsulated in a single binary and process. This allows K3s to automate and manage complex cluster operations like distributing certificates.
* In-tree cloud providers and storage plugins have been removed.
* External dependencies have been minimized (just a modern kernel and cgroup mounts needed). K3s packages the required dependencies, including:
  * containerd
  * Flannel
  * CoreDNS
  * CNI
  * Host utilities (iptables, socat, etc.)
@@ -0,0 +1,106 @@
---
title: "Advanced Options"
weight: 40
aliases:
  - /k3s/latest/en/running/
---

This section contains advanced information describing the different ways you can run and manage K3s.

Starting the Server
------------------

The installation script will auto-detect whether your OS uses systemd or openrc and start the service accordingly.
When running with openrc, logs will be created at `/var/log/k3s.log`; with systemd, logs go to `/var/log/syslog` and can be viewed using `journalctl -u k3s`. An example of installing and auto-starting with the install script:

```bash
curl -sfL https://get.k3s.io | sh -
```

When running the server manually, you should see output similar to the following:

```
$ k3s server
INFO[2019-01-22T15:16:19.908493986-07:00] Starting k3s dev
INFO[2019-01-22T15:16:19.908934479-07:00] Running kube-apiserver --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false
INFO[2019-01-22T15:16:20.196880841-07:00] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false
Flag --port has been deprecated, see --secure-port instead.
INFO[2019-01-22T15:16:20.273441984-07:00] Listening on :6443
INFO[2019-01-22T15:16:20.278383446-07:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
INFO[2019-01-22T15:16:20.474454524-07:00] Node token is available at /var/lib/rancher/k3s/server/node-token
INFO[2019-01-22T15:16:20.474471391-07:00] To join node to cluster: k3s agent -s https://10.20.0.3:6443 -t ${NODE_TOKEN}
INFO[2019-01-22T15:16:20.541027133-07:00] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml
INFO[2019-01-22T15:16:20.541049100-07:00] Run: k3s kubectl
```

The output will likely be much longer, as the agent creates a lot of logs. By default the server will register itself as a node (that is, run the agent).
Alpine Linux
------------

To prepare Alpine Linux, go through the following steps:

```bash
echo "cgroup /sys/fs/cgroup cgroup defaults 0 0" >> /etc/fstab

cat >> /etc/cgconfig.conf <<EOF
mount {
  cpuacct = /cgroup/cpuacct;
  memory = /cgroup/memory;
  devices = /cgroup/devices;
  freezer = /cgroup/freezer;
  net_cls = /cgroup/net_cls;
  blkio = /cgroup/blkio;
  cpuset = /cgroup/cpuset;
  cpu = /cgroup/cpu;
}
EOF
```

Then update **/etc/update-extlinux.conf** by adding:

```
default_kernel_opts="... cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
```

Then update the config and reboot:

```bash
update-extlinux
reboot
```

After rebooting:

- download **k3s** to **/usr/local/bin/k3s**
- create an openrc file in **/etc/init.d** (a sketch follows below)
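
For that last step, a hypothetical minimal openrc service file sketch is shown below; the paths and options are assumptions, not the official init script, so adapt them to your setup:

```bash
#!/sbin/openrc-run
# Hypothetical /etc/init.d/k3s sketch for Alpine (assumed values throughout).

name="k3s"
command="/usr/local/bin/k3s"
command_args="server"
command_background=true
pidfile="/run/k3s.pid"
output_log="/var/log/k3s.log"
error_log="/var/log/k3s.log"

depend() {
    need net
}
```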

Running in Docker (and docker-compose)
--------------------------------------

[k3d](https://github.com/rancher/k3d) is a utility designed to easily run K3s in Docker. It can be installed via the [brew](https://brew.sh/) utility on macOS.

`rancher/k3s` images are also available to run the K3s server and agent from Docker. A `docker-compose.yml` in the root of the K3s repo serves as an example of how to run K3s from Docker. To run it from the repo root:

```bash
docker-compose up --scale node=3

# kubeconfig is written to the current dir
kubectl --kubeconfig kubeconfig.yaml get node

NAME           STATUS   ROLES    AGE   VERSION
497278a2d6a2   Ready    <none>   11s   v1.13.2-k3s2
d54c8b17c055   Ready    <none>   11s   v1.13.2-k3s2
db7a5a5a5bdd   Ready    <none>   12s   v1.13.2-k3s2
```

To run the agent only in Docker, use `docker-compose up node`. Alternatively, the `docker run` command can also be used:

```bash
sudo docker run \
  -d --tmpfs /run \
  --tmpfs /var/run \
  -e K3S_URL=${SERVER_URL} \
  -e K3S_TOKEN=${NODE_TOKEN} \
  --privileged rancher/k3s:vX.Y.Z
```
@@ -1,47 +0,0 @@
---
title: "Building from Source"
weight: 10
---

This section provides information on building K3s from source.

See the [release](https://github.com/rancher/k3s/releases/latest) page for pre-built releases.

Cloning this repo will be much faster if you do a shallow clone:

```sh
git clone --depth 1 https://github.com/rancher/k3s.git
```

This repo includes all of the Kubernetes history, so `--depth 1` avoids most of that.

To build the full release binary, run `make`; this will create `./dist/artifacts/k3s`.

Optionally, to build the binaries without running linting or building Docker images:

```sh
./scripts/download && ./scripts/build && ./scripts/package-cli
```

For development, you just need Go 1.12 and a sane `GOPATH`. To compile the binaries, run:

```bash
go build -o k3s
go build -o kubectl ./cmd/kubectl
go build -o hyperkube ./vendor/k8s.io/kubernetes/cmd/hyperkube
```

This will create the main executable, but it does not include dependencies like containerd, CNI, etc. To run a server and agent with all the dependencies for development, run the following helper scripts:

```bash
# Server
./scripts/dev-server.sh

# Agent
./scripts/dev-agent.sh
```

Kubernetes Source
-----------------

The source code for Kubernetes is in `vendor/`, and the location from which it is copied is in `./vendor.conf`. Go to the referenced repo/tag and you'll find all the patches applied to upstream Kubernetes.
@@ -1,9 +1,9 @@
---
title: "Configuration Info"
weight: 50
---

This section contains information on using K3s with various configurations.
Auto-Deploying Manifests
------------------------

@@ -12,7 +12,7 @@ Auto-Deploying Manifests

Any file found in `/var/lib/rancher/k3s/server/manifests` will automatically be deployed to Kubernetes in a manner similar to `kubectl apply`.

It is also possible to deploy Helm charts. K3s supports a CRD controller for installing charts. A YAML file specification can look as follows (example taken from `/var/lib/rancher/k3s/server/manifests/traefik.yaml`):

```yaml
apiVersion: helm.cattle.io/v1
@@ -27,7 +27,7 @@ spec:
    ssl.enabled: "true"
```
Keep in mind that `namespace` in your HelmChart resource metadata section should always be `kube-system`, because the K3s deploy controller is configured to watch this namespace for new HelmChart resources. If you want to specify the namespace for the actual Helm release, you can do that using the `targetNamespace` key in the spec section:

```yaml
apiVersion: helm.cattle.io/v1
```

@@ -53,51 +53,68 @@ spec:
Also note that besides `set` you can use `valuesContent` in the spec section, and it is fine to use both of them.

K3s versions `<= v0.5.0` used `k3s.cattle.io` for the API group of HelmCharts; this has been changed to `helm.cattle.io` in later versions.
Using the Helm CRD
------------------

You can deploy a third-party Helm chart using an example like this:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nginx
  namespace: kube-system
spec:
  chart: nginx
  repo: https://charts.bitnami.com/bitnami
  targetNamespace: default
```

You can install a specific version of a Helm chart using an example like this:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nginx-ingress
  namespace: kube-system
spec:
  chart: stable/nginx-ingress
  version: 1.24.4
  targetNamespace: default
```

Accessing Cluster from Outside
------------------------------

Copy `/etc/rancher/k3s/k3s.yaml` to your machine located outside the cluster as `~/.kube/config`. Then replace "localhost" with the IP or name of your K3s server. `kubectl` can now manage your K3s cluster.
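
For example, a minimal sketch of this, assuming SSH access to the server as root and `192.0.2.10` as a placeholder server address:

```bash
SERVER=192.0.2.10                                      # assumed server IP
scp root@${SERVER}:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i "s/localhost/${SERVER}/" ~/.kube/config         # point kubectl at the server
kubectl get nodes
```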

Open Ports / Network Security
-----------------------------

The server needs port 6443 to be accessible by the nodes. The nodes need to be able to reach other nodes over UDP port 8472, and they also need to be able to reach the server on UDP port 8472. This is used for Flannel VXLAN. If you don't use Flannel and provide your own custom CNI, then port 8472 is not needed by K3s. The nodes should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel.

IMPORTANT: The VXLAN port on nodes should not be exposed to the world, as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disables access to port 8472.
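
For example, one way to restrict the VXLAN port on a node, assuming the `ufw` firewall is in use and `10.0.0.0/24` is the cluster subnet (both assumptions):

```bash
# ufw evaluates rules in order, so allow the cluster subnet first,
# then deny the VXLAN port from everywhere else.
sudo ufw allow from 10.0.0.0/24 to any port 8472 proto udp
sudo ufw deny 8472/udp
```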
"localhost" with the IP or name of your K3s server. `kubectl` can now manage your K3s cluster.
|
||||
|
||||
Node Registration
-----------------

Agents will register with the server using the node cluster secret along with a randomly generated password for the node, stored at `/etc/rancher/node/password`. The server will store the passwords for individual nodes at `/var/lib/rancher/k3s/server/cred/node-passwd`, and any subsequent attempts must use the same password. If the `/etc/rancher/node` directory of an agent is removed, the password file should be recreated for the agent, or the entry removed from the server. A unique node ID can be appended to the hostname by launching K3s servers or agents with the `--with-node-id` flag.
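
For example, a hypothetical agent launch that appends a unique node ID (the server URL and token are placeholders):

```bash
k3s agent --server https://myserver:6443 --token ${NODE_TOKEN} --with-node-id
```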

Containerd and Docker
---------------------

K3s includes and defaults to containerd. If you want to use Docker instead of containerd, you simply need to run the agent with the `--docker` flag.

K3s will generate a config.toml for containerd in `/var/lib/rancher/k3s/agent/etc/containerd/config.toml`. For advanced customization of this file, you can create another file called `config.toml.tmpl` in the same directory, and it will be used instead.

The `config.toml.tmpl` file will be treated as a Go template, and the `config.Node` structure is passed to the template. See https://github.com/rancher/k3s/blob/master/pkg/agent/templates/templates.go#L16-L32 for an example of how to use the structure to customize the configuration file.
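
As an illustration, a hypothetical `config.toml.tmpl` sketch that uses only static TOML to add a registry mirror; the registry hostname is a placeholder, and any Go template expressions over the `config.Node` structure (see the link above) are omitted for simplicity:

```toml
# Hypothetical config.toml.tmpl sketch: plain TOML adding a mirror for
# docker.io pulls; registry.example.com is an assumed hostname.
[plugins.cri.registry.mirrors]
  [plugins.cri.registry.mirrors."docker.io"]
    endpoint = ["https://registry.example.com:5000"]
```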
Rootless (Experimental)
-----------------------

_**WARNING**:_ Experimental feature

Initial rootless support has been added, but there are a series of significant usability issues surrounding it. We are releasing the initial support for those interested in rootless, and hopefully some people can help to improve it.
@@ -110,9 +127,9 @@ In short, latest Ubuntu is your best bet for this to work.

* **Ports**

  When running rootless, a new network namespace is created. This means that the K3s instance runs with networking fairly detached from the host. The only way to access services run in K3s from the host is to set up port forwards to the K3s network namespace. We have a controller that will automatically bind port 6443 and service ports below 1024 to the host with an offset of 10000.

  That means service port 80 will become 10080 on the host, but 8080 will stay 8080 without any offset.
@@ -120,7 +137,7 @@ In short, latest Ubuntu is your best bet for this to work.

* **Daemon lifecycle**

  Once you kill K3s and then start a new instance of K3s, it will create a new network namespace, but it doesn't kill the old pods. So you are left with a fairly broken setup. This is the main issue at the moment: how to deal with the network namespace.

  The issue is tracked in https://github.com/rootless-containers/rootlesskit/issues/65
@@ -133,155 +150,16 @@ In short, latest Ubuntu is your best bet for this to work.

Just add the `--rootless` flag to either server or agent. So run `k3s server --rootless`, then look for the message `Wrote kubeconfig [SOME PATH]` to find the kubeconfig needed to access your cluster. Be careful: if you use `-o` to write the kubeconfig to a different directory, it will probably not work. This is because the K3s instance is running in a different mount namespace.

Node Labels and Taints
----------------------

K3s agents can be configured with the options `--node-label` and `--node-taint`, which add labels and taints to the kubelet. The two options only add labels and/or taints at registration time, so they can only be added once and not changed after that by running K3s again. If you want to change node labels and taints after node registration, you should use `kubectl`. Below is an example showing how to add labels and a taint:

```
--node-label foo=bar \
--node-label hello=world \
--node-taint key1=value1:NoExecute
```

Flannel
-------

Flannel is included by default; if you don't want Flannel, run the agent with the `--no-flannel` option.

In this setup you will still be required to install your own CNI driver. More info [here](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network).

CoreDNS
-------

CoreDNS is deployed when the agent starts; to disable it, run the server with the `--no-deploy coredns` option.

If you don't install CoreDNS, you will need to install a cluster DNS provider yourself.

Traefik
-------

Traefik is deployed by default when starting the server; to disable it, start the server with the `--no-deploy traefik` option.
Service Load Balancer
---------------------

K3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a node in the cluster with port 80 free. If no port is available, the load balancer will stay in Pending.

To disable the embedded load balancer, run the server with the `--no-deploy servicelb` option. This is necessary if you wish to run a different load balancer, such as MetalLB.
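
For illustration, a minimal Service of type `LoadBalancer` that the embedded load balancer would serve; the names and selector are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb          # illustrative name
spec:
  type: LoadBalancer       # handled by the embedded service load balancer
  selector:
    app: nginx             # assumes pods labeled app=nginx exist
  ports:
    - port: 80             # claims host port 80 on a node
      targetPort: 80
```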

Metrics Server
--------------

To add functionality for commands such as `k3s kubectl top nodes`, metrics-server must be installed. To install it, see the instructions located at https://github.com/kubernetes-incubator/metrics-server/.

**NOTE**: By default the image used in `metrics-server-deployment.yaml` is valid only for **amd64** devices; this should be edited as appropriate for your architecture. As of this writing, metrics-server provides the following images relevant to K3s: `amd64:v0.3.3`, `arm64:v0.3.2`, and `arm:v0.3.2`. Further information on the images provided through gcr.io can be found at https://console.cloud.google.com/gcr/images/google-containers/GLOBAL.
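
Once metrics-server is installed, node and pod metrics can be queried through the bundled kubectl:

```bash
k3s kubectl top nodes
k3s kubectl top pods --all-namespaces
```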

Storage Backends
----------------

As of version 0.6.0, K3s supports various storage backends, including SQLite (the default), MySQL, Postgres, and etcd. This enhancement depends on the following arguments, which can be passed to `k3s server`:

* `--storage-backend` _value_

  Specify storage type etcd3 or kvsql [$`K3S_STORAGE_BACKEND`]

* `--storage-endpoint` _value_

  Specify etcd, MySQL, Postgres, or SQLite (default) data source name [$`K3S_STORAGE_ENDPOINT`]

* `--storage-cafile` _value_

  SSL Certificate Authority file used to secure storage backend communication [$`K3S_STORAGE_CAFILE`]

* `--storage-certfile` _value_

  SSL certificate file used to secure storage backend communication [$`K3S_STORAGE_CERTFILE`]

* `--storage-keyfile` _value_

  SSL key file used to secure storage backend communication [$`K3S_STORAGE_KEYFILE`]

### MySQL

To use K3s with a MySQL storage backend, you can specify the following for an insecure connection:

```
--storage-endpoint="mysql://"
```

By default the server will attempt to connect to MySQL using the socket at `/var/run/mysqld/mysqld.sock`, using the root user and no password. K3s will also create a database with the name `kubernetes` if a database is not specified in the DSN.

To override the method of connection, user/pass, and database name, you can provide a custom DSN, for example:

```
--storage-endpoint="mysql://k3suser:k3spass@tcp(192.168.1.100:3306)/k3stest"
```

This command will attempt to connect to MySQL on host `192.168.1.100` on port `3306` with username `k3suser` and password `k3spass`, and K3s will automatically create a new database named `k3stest` if it doesn't exist. For more information about the MySQL driver data source name, please refer to https://github.com/go-sql-driver/mysql#dsn-data-source-name

To connect to MySQL securely, you can use the following example:

```
--storage-endpoint="mysql://k3suser:k3spass@tcp(192.168.1.100:3306)/k3stest" \
--storage-cafile ca.crt \
--storage-certfile mysql.crt \
--storage-keyfile mysql.key
```

The above command will use these certificates to generate the TLS config to communicate with MySQL securely.

### Postgres

A connection to Postgres can be established using the following command:

```
--storage-endpoint="postgres://"
```

By default the server will attempt to connect to Postgres on localhost using the `postgres` user and the password `postgres`. K3s will also create a database with the name `kubernetes` if a database is not specified in the DSN.

To override the method of connection, user/pass, and database name, you can provide a custom DSN, for example:

```
--storage-endpoint="postgres://k3suser:k3spass@192.168.1.100:5432/k3stest"
```

This command will attempt to connect to Postgres on host `192.168.1.100` on port `5432` with username `k3suser` and password `k3spass`, and K3s will automatically create a new database named `k3stest` if it doesn't exist. For more information about the Postgres driver data source name, please refer to https://godoc.org/github.com/lib/pq

To connect to Postgres securely, you can use the following example:

```
--storage-endpoint="postgres://k3suser:k3spass@192.168.1.100:5432/k3stest" \
--storage-certfile postgres.crt \
--storage-keyfile postgres.key \
--storage-cafile ca.crt
```

The above command will use these certificates to generate the TLS config to communicate with Postgres securely.

### etcd

A connection to etcd3 can be established using the following command:

```
--storage-backend=etcd3 \
--storage-endpoint="https://127.0.0.1:2379"
```

The above command will attempt to connect to etcd on localhost on port `2379` without client certificates. You can connect securely to etcd using the following command:

```
--storage-backend=etcd3 \
--storage-endpoint="https://127.0.0.1:2379" \
--storage-cafile ca.crt \
--storage-certfile etcd.crt \
--storage-keyfile etcd.key
```

The above command will use these certificates to generate the TLS config to communicate with etcd securely.
@@ -0,0 +1,22 @@
---
title: FAQ
weight: 60
---

The FAQ is updated periodically and designed to answer the questions our users most frequently ask about K3s.

**Is K3s a suitable replacement for k8s?**

K3s is capable of nearly everything k8s can do. It is just a more lightweight version. See the [main]({{<baseurl>}}/k3s/latest/en/) docs page for more details.

**How can I use my own Ingress instead of Traefik?**

Simply start the K3s server with `--no-deploy=traefik` and deploy your ingress.

**Does K3s support Windows?**

At this time K3s does not natively support Windows; however, we are open to the idea in the future.

**How can I build from source?**

Please reference the K3s [BUILDING.md](https://github.com/rancher/k3s/blob/master/BUILDING.md) for instructions.
@@ -1,349 +1,19 @@
---
title: "Installation"
weight: 20
---

This section contains instructions for installing K3s in various environments. Please ensure you have met the [Node Requirements]({{< baseurl >}}/k3s/latest/en/installation/node-requirements/) before you begin installing K3s.

[Installation and Configuration Options]({{< baseurl >}}/k3s/latest/en/installation/install-options/) provides guidance on the options available to you when installing K3s.

[High Availability with an External DB]({{< baseurl >}}/k3s/latest/en/installation/ha/) details how to set up an HA K3s cluster backed by an external datastore such as MySQL, PostgreSQL, or etcd.

[High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded/) details how to set up an HA K3s cluster that leverages a built-in distributed database.

[Air-Gap Installation]({{< baseurl >}}/k3s/latest/en/installation/airgap/) details how to set up K3s in environments that do not have direct access to the Internet.

Install Script
--------------

The install script will attempt to download the latest release. To specify a particular version for download, use the `INSTALL_K3S_VERSION` environment variable, for example:

```sh
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z-rc1 sh -
```

To install just the server without an agent, add an `INSTALL_K3S_EXEC` environment variable to the command:

```sh
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable-agent" sh -
```

The installer can also be run without performing downloads by setting `INSTALL_K3S_SKIP_DOWNLOAD=true`, for example:

```sh
curl -sfL https://github.com/rancher/k3s/releases/download/vX.Y.Z/k3s -o /usr/local/bin/k3s
chmod 0755 /usr/local/bin/k3s

curl -sfL https://get.k3s.io -o install-k3s.sh
chmod 0755 install-k3s.sh

export INSTALL_K3S_SKIP_DOWNLOAD=true
./install-k3s.sh
```
The full help text for the install script environment variables is as follows:

- `K3S_*`

  Environment variables which begin with `K3S_` will be preserved for the systemd service to use. Setting `K3S_URL` without explicitly setting a systemd exec command will default the command to "agent", and we enforce that `K3S_TOKEN` or `K3S_CLUSTER_SECRET` is also set.

- `INSTALL_K3S_SKIP_DOWNLOAD`

  If set to true, will not download the k3s hash or binary.

- `INSTALL_K3S_SYMLINK`

  If set to 'skip', will not create symlinks; 'force' will overwrite; by default it will symlink if the command does not exist in the path.

- `INSTALL_K3S_VERSION`

  Version of k3s to download from GitHub. Will attempt to download the latest version if not specified.

- `INSTALL_K3S_BIN_DIR`

  Directory to install the k3s binary, links, and uninstall script to; uses `/usr/local/bin` as the default.

- `INSTALL_K3S_SYSTEMD_DIR`

  Directory to install systemd service and environment files to; uses `/etc/systemd/system` as the default.

- `INSTALL_K3S_EXEC` or script arguments

  Command with flags to use for launching k3s in the systemd service. If the command is not specified, it will default to "agent" if `K3S_URL` is set, or "server" if not. The final systemd command resolves to a combination of EXEC and script args ($@).

  The following commands result in the same behavior:
  ```sh
  curl ... | INSTALL_K3S_EXEC="--disable-agent" sh -s -
  curl ... | INSTALL_K3S_EXEC="server --disable-agent" sh -s -
  curl ... | INSTALL_K3S_EXEC="server" sh -s - --disable-agent
  curl ... | sh -s - server --disable-agent
  curl ... | sh -s - --disable-agent
  ```

- `INSTALL_K3S_NAME`

  Name of the systemd service to create; will default from the k3s exec command if not specified. If specified, the name will be prefixed with 'k3s-'.

- `INSTALL_K3S_TYPE`

  Type of systemd service to create; will default from the k3s exec command if not specified.
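
For example, a hypothetical invocation combining several of these variables; the version and directory values are placeholders:

```sh
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_VERSION=vX.Y.Z INSTALL_K3S_BIN_DIR=/opt/k3s/bin \
  sh -s - server --disable-agent
```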
Server Options
--------------

The following information on server options is also available through `k3s server --help`:

* `--bind-address` _value_

  k3s bind address (default: localhost)

* `--https-listen-port` _value_

  HTTPS listen port (default: 6443)

* `--http-listen-port` _value_

  HTTP listen port (for /healthz, HTTPS redirect, and port for TLS terminating LB) (default: 0)

* `--data-dir` _value_, `-d` _value_

  Folder to hold state (default: /var/lib/rancher/k3s, or ${HOME}/.rancher/k3s if not root)

* `--disable-agent`

  Do not run a local agent and register a local kubelet

* `--log` _value_, `-l` _value_

  Log to file

* `--cluster-cidr` _value_

  Network CIDR to use for pod IPs (default: "10.42.0.0/16")

* `--cluster-secret` _value_

  Shared secret used to bootstrap a cluster [$`K3S_CLUSTER_SECRET`]

* `--service-cidr` _value_

  Network CIDR to use for service IPs (default: "10.43.0.0/16")

* `--cluster-dns` _value_

  Cluster IP for the CoreDNS service; should be in your service CIDR range

* `--cluster-domain` _value_

  Cluster domain (default: "cluster.local")

* `--no-deploy` _value_

  Do not deploy packaged components (valid items: coredns, servicelb, traefik)

* `--write-kubeconfig` _value_, `-o` _value_

  Write kubeconfig for the admin client to this file [$`K3S_KUBECONFIG_OUTPUT`]

* `--write-kubeconfig-mode` _value_

  Write kubeconfig with this mode [$`K3S_KUBECONFIG_MODE`]

* `--tls-san` _value_

  Add additional hostname or IP as a Subject Alternative Name in the TLS cert

* `--kube-apiserver-arg` _value_

  Customized flag for the kube-apiserver process

* `--kube-scheduler-arg` _value_

  Customized flag for the kube-scheduler process

* `--kube-controller-arg` _value_

  Customized flag for the kube-controller-manager process

* `--rootless`

  (experimental) Run rootless

* `--storage-backend` _value_

  Specify storage type etcd3 or kvsql [$`K3S_STORAGE_BACKEND`]

* `--storage-endpoint` _value_

  Specify etcd, MySQL, Postgres, or SQLite (default) data source name [$`K3S_STORAGE_ENDPOINT`]

* `--storage-cafile` _value_

  SSL Certificate Authority file used to secure storage backend communication [$`K3S_STORAGE_CAFILE`]

* `--storage-certfile` _value_

  SSL certificate file used to secure storage backend communication [$`K3S_STORAGE_CERTFILE`]

* `--storage-keyfile` _value_

  SSL key file used to secure storage backend communication [$`K3S_STORAGE_KEYFILE`]

* `--node-ip` _value_, `-i` _value_

  (agent) IP address to advertise for the node

* `--node-name` _value_

  (agent) Node name [$`K3S_NODE_NAME`]

* `--docker`

  (agent) Use Docker instead of containerd

* `--no-flannel`

  (agent) Disable embedded Flannel

* `--flannel-iface` _value_

  (agent) Override the default Flannel interface

* `--container-runtime-endpoint` _value_

  (agent) Disable embedded containerd and use an alternative CRI implementation

* `--pause-image` _value_

  (agent) Customized pause image for the containerd sandbox

* `--resolv-conf` _value_

  (agent) Kubelet resolv.conf file [$`K3S_RESOLV_CONF`]

* `--kubelet-arg` _value_

  (agent) Customized flag for the kubelet process

* `--kube-proxy-arg` _value_

  (agent) Customized flag for the kube-proxy process

* `--node-label` _value_

  (agent) Register the kubelet with a set of labels

* `--node-taint` _value_

  (agent) Register the kubelet with a set of taints
Agent Options
-------------

The following information on agent options is also available through `k3s agent --help`:

* `--token` _value_, `-t` _value_

  Token to use for authentication [$`K3S_TOKEN`]

* `--token-file` _value_

  Token file to use for authentication [$`K3S_TOKEN_FILE`]

* `--server` _value_, `-s` _value_

  Server to connect to [$`K3S_URL`]

* `--data-dir` _value_, `-d` _value_

  Folder to hold state (default: "/var/lib/rancher/k3s")

* `--cluster-secret` _value_

  Shared secret used to bootstrap a cluster [$`K3S_CLUSTER_SECRET`]

* `--rootless`

  (experimental) Run rootless

* `--docker`

  (agent) Use Docker instead of containerd

* `--no-flannel`

  (agent) Disable embedded Flannel

* `--flannel-iface` _value_

  (agent) Override the default Flannel interface

* `--node-name` _value_

  (agent) Node name [$`K3S_NODE_NAME`]

* `--node-ip` _value_, `-i` _value_

  (agent) IP address to advertise for the node

* `--container-runtime-endpoint` _value_

  (agent) Disable embedded containerd and use an alternative CRI implementation

* `--pause-image` _value_

  (agent) Customized pause image for the containerd sandbox

* `--resolv-conf` _value_

  (agent) Kubelet resolv.conf file [$`K3S_RESOLV_CONF`]

* `--kubelet-arg` _value_

  (agent) Customized flag for the kubelet process

* `--kube-proxy-arg` _value_

  (agent) Customized flag for the kube-proxy process

* `--node-label` _value_

  (agent) Register the kubelet with a set of labels

* `--node-taint` _value_

  (agent) Register the kubelet with a set of taints
Customizing components
----------------------

As of v0.3.0, any of the following processes can be customized with extra flags:

* `--kube-apiserver-arg` _value_

  (server) [kube-apiserver options](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/)

* `--kube-controller-arg` _value_

  (server) [kube-controller-manager options](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/)

* `--kube-scheduler-arg` _value_

  (server) [kube-scheduler options](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/)

* `--kubelet-arg` _value_

  (agent) [kubelet options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/)

* `--kube-proxy-arg` _value_

  (agent) [kube-proxy options](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/)

Extra arguments can be added by passing these flags to the server or agent. For example, to add the arguments `-v=9` and `log-file=/tmp/kubeapi.log` to the kube-apiserver, you should add the following options to `k3s server`:

```
--kube-apiserver-arg v=9 --kube-apiserver-arg log-file=/tmp/kubeapi.log
```

### Uninstalling

If you installed K3s with the help of the `install.sh` script, an uninstall script is generated during installation, which will be created on your node at `/usr/local/bin/k3s-uninstall.sh` (or as `k3s-agent-uninstall.sh`).
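
For example, to remove a server node installed with the defaults:

```sh
/usr/local/bin/k3s-uninstall.sh
```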
@@ -0,0 +1,77 @@
---
title: "Air-Gap Install"
weight: 60
---

In this guide, we are assuming you have created your nodes in your air-gap environment and have a secure Docker private registry on your bastion server.

Installation Outline
--------------------
1. Prepare Images Directory
2. Create Registry YAML
3. Install K3s

### Prepare Images Directory
Obtain the images tar file for your architecture from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be running.

Place the tar file in the `images` directory before starting K3s on each node, for example:

```sh
sudo mkdir -p /var/lib/rancher/k3s/agent/images/
sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/
```
### Create Registry YAML
Create the registries.yaml file at `/etc/rancher/k3s/registries.yaml`. This will tell K3s the necessary details to connect to your private registry.

The registries.yaml file should look like this before plugging in the necessary information:

```yaml
---
mirrors:
  customreg:
    endpoint:
      - "https://ip-to-server:5000"
configs:
  customreg:
    auth:
      username: xxxxxx # this is the registry username
      password: xxxxxx # this is the registry password
    tls:
      cert_file: <path to the cert file used in the registry>
      key_file: <path to the key file used in the registry>
      ca_file: <path to the ca file used in the registry>
```

Note: at this time, only secure registries are supported with K3s (SSL with a custom CA).
### Install K3s

Obtain the K3s binary from the [releases](https://github.com/rancher/k3s/releases) page, matching the same version used to get the air-gap images tar.
Also obtain the K3s install script at https://get.k3s.io

Place the binary in `/usr/local/bin` on each node.
Place the install script anywhere on each node, and name it `install.sh`.

Install K3s on each node. The example below shows how to do this for a server or an agent (worker):

```
# K3s Server
INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh

# K3s Agent
INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken ./install.sh
```

Note: take care to ensure you replace `myserver` with the IP or valid DNS name of the server, and replace `mynodetoken` with the node-token from the server.
The node-token is on the server at `/var/lib/rancher/k3s/server/node-token`.

>**Note:** K3s additionally provides a `--resolv-conf` flag for kubelets, which may help with configuring DNS in air-gap networks.
### Upgrading

Upgrading an air-gap environment can be accomplished in the following manner (a shell sketch of these steps follows the list):

1. Download the new air-gap images (tar file) from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be upgrading to. Place the tar in the `/var/lib/rancher/k3s/agent/images/` directory on each node. Delete the old tar file.
2. Copy and replace the old K3s binary in `/usr/local/bin` on each node. Copy over the install script at https://get.k3s.io (as it is possible it has changed since the last release). Run the script again just as you had done in the past, with the same environment variables.
3. Restart the K3s service (if not restarted automatically by the installer).
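
A sketch of these steps as shell commands, assuming a server node with systemd; the version and architecture values are placeholders you must substitute:

```sh
# 1. Replace the air-gap images tar ($ARCH is a placeholder).
sudo rm /var/lib/rancher/k3s/agent/images/k3s-airgap-images-*.tar
sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/

# 2. Replace the binary and re-run the installer with the same env vars.
sudo cp ./k3s /usr/local/bin/k3s
INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh

# 3. Restart the service if the installer did not already do so.
sudo systemctl restart k3s
```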
@@ -0,0 +1,97 @@
---
title: "Cluster Datastore Options"
weight: 50
---

The ability to run Kubernetes using a datastore other than etcd sets K3s apart from other Kubernetes distributions. This feature provides flexibility to Kubernetes operators. The available datastore options allow you to select a datastore that best fits your use case. For example:

* If your team doesn't have expertise in operating etcd, you can choose an enterprise-grade SQL database like MySQL or PostgreSQL
* If you need to run a simple, short-lived cluster in your CI/CD environment, you can use the embedded SQLite database
* If you wish to deploy Kubernetes on the edge and require a highly available solution but can't afford the operational overhead of managing a database at the edge, you can use K3s's embedded HA datastore built on top of DQLite (currently experimental)

K3s supports the following datastore options:

* Embedded [SQLite](https://www.sqlite.org/index.html)
* [PostgreSQL](https://www.postgresql.org/) (certified against versions 10.7 and 11.5)
* [MySQL](https://www.mysql.com/) (certified against version 5.7)
* [etcd](https://etcd.io/) (certified against version 3.3.15)
* Embedded [DQLite](https://dqlite.io/) for High Availability (experimental)
### External Datastore Configuration Parameters
If you wish to use an external datastore such as PostgreSQL, MySQL, or etcd, you must set the `datastore-endpoint` parameter so that K3s knows how to connect to it. You may also specify parameters to configure the authentication and encryption of the connection. The below table summarizes these parameters, which can be passed as either CLI flags or environment variables.

CLI Flag | Environment Variable | Description
------------|-------------|------------------
<span style="white-space: nowrap">`--datastore-endpoint`</span> | `K3S_DATASTORE_ENDPOINT` | Specify a PostgreSQL, MySQL, or etcd connection string. This is a string used to describe the connection to the datastore. The structure of this string is specific to each backend and is detailed below.
<span style="white-space: nowrap">`--datastore-cafile`</span> | `K3S_DATASTORE_CAFILE` | TLS Certificate Authority (CA) file used to help secure communication with the datastore. If your datastore serves requests over TLS using a certificate signed by a custom certificate authority, you can specify that CA using this parameter so that the K3s client can properly verify the certificate.
<span style="white-space: nowrap">`--datastore-certfile`</span> | `K3S_DATASTORE_CERTFILE` | TLS certificate file used for client certificate based authentication to your datastore. To use this feature, your datastore must be configured to support client certificate based authentication. If you specify this parameter, you must also specify the `datastore-keyfile` parameter.
<span style="white-space: nowrap">`--datastore-keyfile`</span> | `K3S_DATASTORE_KEYFILE` | TLS key file used for client certificate based authentication to your datastore. See the previous `datastore-certfile` parameter for more details.

As a best practice, we recommend setting these parameters as environment variables rather than command line arguments so that your database credentials or other sensitive information aren't exposed as part of the process info.
### Datastore Endpoint Format and Functionality
As mentioned, the format of the value passed to the `datastore-endpoint` parameter is dependent upon the datastore backend. The following details this format and functionality for each supported external datastore.

{{% tabs %}}
{{% tab "PostgreSQL" %}}

In its most common form, the `datastore-endpoint` parameter for PostgreSQL has the following format:

`postgres://username:password@hostname:port/database-name`

More advanced configuration parameters are available. For more information on these, please see https://godoc.org/github.com/lib/pq.

If you specify a database name and it does not exist, the server will attempt to create it.

If you only supply `postgres://` as the endpoint, K3s will attempt to do the following:

* Connect to localhost using `postgres` as the username and password
* Create a database named `kubernetes`

{{% /tab %}}
{{% tab "MySQL" %}}
|
||||
|
||||
In its most common form, the `datastore-endpoint` parameter for MySQL has the following format:
|
||||
|
||||
`mysql://username:password@tcp(hostname:3306)/database-name`
|
||||
|
||||
More advanced configuration parameters are available. For more information on these, please see https://github.com/go-sql-driver/mysql#dsn-data-source-name
|
||||
|
||||
Note that due to a [known issue](https://github.com/rancher/k3s/issues/1093) in K3s, you cannot set the `tls` parameter. TLS communication is supported, but you cannot, for example, set this parameter to "skip-verify" to cause K3s to skip certificate verification.
|
||||
|
||||
If you specify a database name and it does not exist, the server will attempt to create it.
|
||||
|
||||
If you only supply `mysql://` as the endpoint, K3s will attempt to do the following:
|
||||
|
||||
* Connect to the MySQL socket at `/var/run/mysqld/mysqld.sock` using the `root` user and no password
|
||||
* Create a database with the name `kubernetes`
|
||||
|
||||
|
||||
{{% /tab %}}
|
||||
{{% tab "etcd" %}}
|
||||
|
||||
In its most common form, the `datastore-endpoint` parameter for etcd has the following format:
|
||||
|
||||
`https://etcd-host-1:2379,https://etcd-host-2:2379,https://etcd-host-3:2379`
|
||||
|
||||
The above assumes a typical three node etcd cluster. The parameter can accept one more comma separated etcd URLs.
|
||||
|
||||
{{% /tab %}}
|
||||
{{% /tabs %}}
|
||||

<br/>Based on the above, the following example command could be used to launch a server instance that connects to a PostgreSQL database named k3s-db:
```
K3S_DATASTORE_ENDPOINT='postgres://username:password@hostname:5432/k3s-db' k3s server
```

And the following example could be used to connect to a MySQL database using client certificate authentication:
```
K3S_DATASTORE_ENDPOINT='mysql://username:password@tcp(hostname:3306)/k3s-db' \
K3S_DATASTORE_CERTFILE='/path/to/client.crt' \
K3S_DATASTORE_KEYFILE='/path/to/client.key' \
k3s server
```
### Embedded DQLite for HA (Experimental)
K3s's use of DQLite is similar to its use of SQLite: it is simple to set up and manage. As such, there is no external configuration or additional steps to take in order to use this option. Please see [High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded/) for instructions on how to run with this option.
@@ -0,0 +1,22 @@
|
||||
---
|
||||
title: "High Availability with Embedded DB (Experimental)"
|
||||
weight: 40
|
||||
---
|
||||
|
||||
As of v1.0.0, K3s is previewing support for running a highly available control plane without the need for an external database. This means there is no need to manage an external etcd or SQL datastore in order to run a reliable production-grade setup. While this feature is currently experimental, we expect it to be the primary architecture for running HA K3s clusters in the future.
|
||||
|
||||
This architecture is achieved by embedding a dqlite database within the K3s server process. DQLite is short for "distributed SQLite." According to https://dqlite.io, it is "*a fast, embedded, persistent SQL database with Raft consensus that is perfect for fault-tolerant IoT and Edge devices.*" This makes it a natural fit for K3s.
|
||||
|
||||
To run K3s in this mode, you must have an odd number of server nodes. We recommend starting with three nodes.
|
||||
|
||||
To get started, first launch a server node with the `cluster-init` flag to enable clustering and a token that will be used as a shared secret to join additional servers to the cluster.
|
||||
```
|
||||
K3S_TOKEN=SECRET k3s server --cluster-init
|
||||
```
|
||||
|
||||
After launching the first server, join the second and third servers to the cluster using the shared secret:
|
||||
```
|
||||
K3S_TOKEN=SECRET k3s server --server https://<ip or hostname of server1>:6443
|
||||
```
|
||||
|
||||
Now you have a highly available control plane. Joining additional worker nodes to the cluster follows the same procedure as a single server cluster.
|
||||
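For example, a worker could be joined using the same shared secret (a sketch; substitute the address of any server node or a fixed registration address):

```
K3S_TOKEN=SECRET k3s agent --server https://<ip or hostname of a server>:6443
```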
@@ -0,0 +1,57 @@
|
||||
---
|
||||
title: "High Availability with an External DB"
|
||||
weight: 30
|
||||
---
|
||||
|
||||
>**Note:** Official support for High-Availability (HA) was introduced in our v1.0.0 release.
|
||||
|
||||
Single server clusters can meet a variety of use cases, but for environments where uptime of the Kubernetes control plane is critical, you can run K3s in an HA configuration. An HA K3s cluster consists of:
|
||||
|
||||
* Two or more **server nodes** that will serve the Kubernetes API and run other control plane services
|
||||
* An **external datastore** (as opposed to the embedded SQLite datastore used in single server setups)
|
||||
* A **fixed registration address** placed in front of the server nodes to allow worker nodes to register with the cluster
|
||||
|
||||
The following diagram illustrates the above configuration:
|
||||

|
||||
|
||||
In this architecture a server node is defined as a machine (bare-metal or virtual) running the `k3s server` command. A worker node is defined as a machine running the `k3s agent` command.
|
||||
|
||||
Workers register through the fixed registration address, but after registration they establish a connection directly to one of the server nodes. This is a websocket connection initiated by the `k3s agent` process, and it is maintained by a client-side load balancer running as part of the agent process.
|
||||
|
||||
Installation Outline
|
||||
--------------------
|
||||
Setting up an HA cluster requires the following steps:
|
||||
|
||||
1. Create an external datastore
|
||||
2. Launch server nodes
|
||||
3. Configure fixed registration address
|
||||
4. Join worker nodes
|
||||
|
||||
### Create an External Datastore
|
||||
You will first need to create an external datastore for the cluster. See the [Cluster Datastore Options]({{< baseurl >}}/k3s/latest/en/installation/datastore/) documentation for more details.
|
||||
|
||||
### Launch Server Nodes
|
||||
K3s requires two or more server nodes for this HA configuration. See the [Node Requirements]({{< baseurl >}}/k3s/latest/en/installation/node-requirements/) guide for minimum machine requirements.
|
||||
|
||||
When running the `k3s server` command on these nodes, you must set the `datastore-endpoint` parameter so that K3s knows how to connect to the external datastore. Please see the [datastore configuration guide]({{< baseurl >}}/k3s/latest/en/installation/datastore/#external-datastore-configuration-parameters) for information on configuring this parameter.
|
||||
|
||||
> **Note:** The same installation options available to single-server installs are also available for HA installs. For more details, see the [Installation and Configuration Options]({{< baseurl >}}/k3s/latest/en/installation/install-options/) documentation.
|
||||
|
||||
By default, server nodes will be schedulable and thus your workloads can be launched on them. If you wish to have a dedicated control plane where no user workloads will run, you can use taints. The <span style='white-space: nowrap'>`node-taint`</span> parameter will allow you to configure nodes with taints, for example <span style='white-space: nowrap'>`--node-taint k3s-controlplane=true:NoExecute`</span>.
|
||||
|
||||
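For example, a server node could be launched with both a datastore endpoint and a control-plane taint like this (a sketch; the MySQL DSN is a placeholder):

```
k3s server \
  --datastore-endpoint='mysql://username:password@tcp(hostname:3306)/k3s-db' \
  --node-taint k3s-controlplane=true:NoExecute
```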
Once you've launched the `k3s server` process on all server nodes, you can ensure that the cluster has come up properly by checking that the nodes are in the Ready state with `k3s kubectl get nodes`.
|
||||
|
||||
### Configure the Fixed Registration Address
|
||||
Worker nodes need a URL to register against. This can be the IP or hostname of any of the server nodes, but in many cases those may change over time. For example, if you are running your cluster in a cloud that supports scaling groups, you may scale the server node group up and down over time, causing nodes to be created and destroyed and thus having different IPs from the initial set of server nodes. Therefore, you should have a stable endpoint in front of the server nodes that will not change over time. This endpoint can be set up using any number of approaches, such as:
|
||||
|
||||
* A layer-4 (TCP) load balancer
|
||||
* Round-robin DNS
|
||||
* A virtual or elastic IP address
|
||||
|
||||
This endpoint can also be used for accessing the Kubernetes API. So you can, for example, modify your kubeconfig file to point to it instead of a specific node.
|
||||
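For example, an existing kubeconfig could be pointed at the stable endpoint like this (a sketch; `fixed-registration-address` is a placeholder and the cluster name `default` is an assumption based on a default K3s kubeconfig):

```
kubectl config set-cluster default \
  --server=https://fixed-registration-address:6443 \
  --kubeconfig ~/.kube/config
```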
|
||||
### Join Worker Nodes
|
||||
Joining worker nodes in an HA cluster is the same as joining worker nodes in a single server cluster. You just need to specify the URL the agent should register to and the token it should use.
|
||||
```
|
||||
K3S_TOKEN=SECRET k3s agent --server https://fixed-registration-address:6443
|
||||
```
|
||||
@@ -0,0 +1,183 @@
|
||||
---
|
||||
title: "Installation and Configuration Options"
|
||||
weight: 20
|
||||
---
|
||||
|
||||
### Installation script options
|
||||
|
||||
As mentioned in the [Quick-Start Guide]({{< baseurl >}}/k3s/latest/en/quick-start/), you can use the installation script available at https://get.k3s.io to install K3s as a service on systemd and openrc based systems.
|
||||
|
||||
The simplest form of this command is as follows:
|
||||
```sh
|
||||
curl -sfL https://get.k3s.io | sh -
|
||||
```
|
||||
|
||||
When using this method to install K3s, the following environment variables can be used to configure the installation:
|
||||
|
||||
- `INSTALL_K3S_SKIP_DOWNLOAD`
|
||||
|
||||
If set to true will not download K3s hash or binary.
|
||||
|
||||
- `INSTALL_K3S_SYMLINK`
|
||||
|
||||
If set to 'skip' will not create symlinks, 'force' will overwrite, default will symlink if command does not exist in path.
|
||||
|
||||
- `INSTALL_K3S_SKIP_START`
|
||||
|
||||
If set to true will not start K3s service.
|
||||
|
||||
- `INSTALL_K3S_VERSION`
|
||||
|
||||
Version of K3s to download from github. Will attempt to download the latest version if not specified.
|
||||
|
||||
- `INSTALL_K3S_BIN_DIR`
|
||||
|
||||
Directory to install K3s binary, links, and uninstall script to, or use `/usr/local/bin` as the default.
|
||||
|
||||
- `INSTALL_K3S_BIN_DIR_READ_ONLY`
|
||||
|
||||
If set to true will not write files to `INSTALL_K3S_BIN_DIR`, forces setting INSTALL_K3S_SKIP_DOWNLOAD=true.
|
||||
|
||||
- `INSTALL_K3S_SYSTEMD_DIR`
|
||||
|
||||
Directory to install systemd service and environment files to, or use `/etc/systemd/system` as the default.
|
||||
|
||||
- `INSTALL_K3S_EXEC`
|
||||
|
||||
Command with flags to use for launching K3s in the service. If the command is not specified, it will default to "agent" if `K3S_URL` is set or "server" if it is not set. The final systemd command resolves to a combination of this environment variable and script args. To illustrate this, the following commands result in the same behavior:
|
||||
```sh
|
||||
curl ... | INSTALL_K3S_EXEC="--no-flannel" sh -s -
|
||||
curl ... | INSTALL_K3S_EXEC="server --no-flannel" sh -s -
|
||||
curl ... | INSTALL_K3S_EXEC="server" sh -s - --no-flannel
|
||||
curl ... | sh -s - server --no-flannel
|
||||
curl ... | sh -s - --no-flannel
|
||||
```
|
||||
|
||||
- `INSTALL_K3S_NAME`
|
||||
|
||||
Name of systemd service to create, will default from the K3s exec command if not specified. If specified the name will be prefixed with 'k3s-'.
|
||||
|
||||
- `INSTALL_K3S_TYPE`
|
||||
|
||||
Type of systemd service to create, will default from the K3s exec command if not specified.
|
||||
|
||||
|
||||
Environment variables which begin with `K3S_` will be preserved for the systemd and openrc services to use. Setting `K3S_URL` without explicitly setting an exec command will default the command to "agent". When running the agent `K3S_TOKEN` must also be set.
|
||||
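Putting these together, a specific version could be installed directly as an agent in one step (a sketch; the version, URL, and token values are placeholders):

```sh
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
```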
|
||||
|
||||
### Beyond the Installation Script
|
||||
As stated, the installation script is primarily concerned with configuring K3s to run as a service. If you choose to not use the script, you can run K3s simply by downloading the binary from our [release page](https://github.com/rancher/k3s/releases/latest), placing it on your path, and executing it. The K3s binary supports the following commands:
|
||||
|
||||
Command | Description
|
||||
--------|------------------
|
||||
<span class='nowrap'>`k3s server`</span> | Run the K3s management server, which will also launch Kubernetes control plane components such as the API server, controller-manager, and scheduler.
|
||||
<span class='nowrap'>`k3s agent`</span> | Run the K3s node agent. This will cause K3s to run as a worker node, launching the Kubernetes node services `kubelet` and `kube-proxy`.
|
||||
<span class='nowrap'>`k3s kubectl`</span> | Run an embedded [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) CLI. If the `KUBECONFIG` environment variable is not set, this will automatically attempt to use the config file that is created at `/etc/rancher/k3s/k3s.yaml` when launching a K3s server node.
|
||||
<span class='nowrap'>`k3s crictl`</span> | Run an embedded [crictl](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md). This is a CLI for interacting with Kubernetes's container runtime interface (CRI). Useful for debugging.
|
||||
<span class='nowrap'>`k3s ctr`</span> | Run an embedded [ctr](https://github.com/projectatomic/containerd/blob/master/docs/cli.md). This is a CLI for containerd, the container daemon used by K3s. Useful for debugging.
|
||||
<span class='nowrap'>`k3s help`</span> | Shows a list of commands or help for one command
|
||||
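For instance, the embedded tooling can be exercised like this (a sketch run on a server node):

```sh
k3s kubectl get nodes   # query the cluster with the embedded kubectl
k3s crictl ps           # list running containers via the CRI
```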
|
||||
The `k3s server` and `k3s agent` commands have additional configuration options that can be viewed with <span class='nowrap'>`k3s server --help`</span> or <span class='nowrap'>`k3s agent --help`</span>. For convenience, that help text is presented here:
|
||||
|
||||
### `k3s server`
|
||||
```
|
||||
NAME:
|
||||
k3s server - Run management server
|
||||
|
||||
USAGE:
|
||||
k3s server [OPTIONS]
|
||||
|
||||
OPTIONS:
|
||||
-v value (logging) Number for the log level verbosity (default: 0)
|
||||
--vmodule value (logging) Comma-separated list of pattern=N settings for file-filtered logging
|
||||
--log value, -l value (logging) Log to file
|
||||
--alsologtostderr (logging) Log to standard error as well as file (if set)
|
||||
--bind-address value (listener) k3s bind address (default: 0.0.0.0)
|
||||
--https-listen-port value (listener) HTTPS listen port (default: 6443)
|
||||
--advertise-address value (listener) IP address that apiserver uses to advertise to members of the cluster (default: node-external-ip/node-ip)
|
||||
--advertise-port value (listener) Port that apiserver uses to advertise to members of the cluster (default: listen-port) (default: 0)
|
||||
--tls-san value (listener) Add additional hostname or IP as a Subject Alternative Name in the TLS cert
|
||||
--data-dir value, -d value (data) Folder to hold state default /var/lib/rancher/k3s or ${HOME}/.rancher/k3s if not root
|
||||
--cluster-cidr value (networking) Network CIDR to use for pod IPs (default: "10.42.0.0/16")
|
||||
--service-cidr value (networking) Network CIDR to use for services IPs (default: "10.43.0.0/16")
|
||||
--cluster-dns value (networking) Cluster IP for coredns service. Should be in your service-cidr range (default: 10.43.0.10)
|
||||
--cluster-domain value (networking) Cluster Domain (default: "cluster.local")
|
||||
--flannel-backend value (networking) One of 'none', 'vxlan', 'ipsec', or 'flannel' (default: "vxlan")
|
||||
--token value, -t value (cluster) Shared secret used to join a server or agent to a cluster [$K3S_TOKEN]
|
||||
--token-file value (cluster) File containing the cluster-secret/token [$K3S_TOKEN_FILE]
|
||||
--write-kubeconfig value, -o value (client) Write kubeconfig for admin client to this file [$K3S_KUBECONFIG_OUTPUT]
|
||||
--write-kubeconfig-mode value (client) Write kubeconfig with this mode [$K3S_KUBECONFIG_MODE]
|
||||
--kube-apiserver-arg value (flags) Customized flag for kube-apiserver process
|
||||
--kube-scheduler-arg value (flags) Customized flag for kube-scheduler process
|
||||
--kube-controller-manager-arg value (flags) Customized flag for kube-controller-manager process
|
||||
--kube-cloud-controller-manager-arg value (flags) Customized flag for kube-cloud-controller-manager process
|
||||
--datastore-endpoint value (db) Specify etcd, Mysql, Postgres, or Sqlite (default) data source name [$K3S_DATASTORE_ENDPOINT]
|
||||
--datastore-cafile value (db) TLS Certificate Authority file used to secure datastore backend communication [$K3S_DATASTORE_CAFILE]
|
||||
--datastore-certfile value (db) TLS certification file used to secure datastore backend communication [$K3S_DATASTORE_CERTFILE]
|
||||
--datastore-keyfile value (db) TLS key file used to secure datastore backend communication [$K3S_DATASTORE_KEYFILE]
|
||||
--default-local-storage-path value (storage) Default local storage path for local provisioner storage class
|
||||
--no-deploy value (components) Do not deploy packaged components (valid items: coredns, servicelb, traefik, local-storage, metrics-server)
|
||||
--disable-scheduler (components) Disable Kubernetes default scheduler
|
||||
--disable-cloud-controller (components) Disable k3s default cloud controller manager
|
||||
--disable-network-policy (components) Disable k3s default network policy controller
|
||||
--node-name value (agent/node) Node name [$K3S_NODE_NAME]
|
||||
--with-node-id (agent/node) Append id to node name
|
||||
--node-label value (agent/node) Registering kubelet with set of labels
|
||||
--node-taint value (agent/node) Registering kubelet with set of taints
|
||||
--docker (agent/runtime) Use docker instead of containerd
|
||||
--container-runtime-endpoint value (agent/runtime) Disable embedded containerd and use alternative CRI implementation
|
||||
--pause-image value (agent/runtime) Customized pause image for containerd sandbox
|
||||
--private-registry value (agent/runtime) Private registry configuration file (default: "/etc/rancher/k3s/registries.yaml")
|
||||
--node-ip value, -i value (agent/networking) IP address to advertise for node
|
||||
--node-external-ip value (agent/networking) External IP address to advertise for node
|
||||
--resolv-conf value (agent/networking) Kubelet resolv.conf file [$K3S_RESOLV_CONF]
|
||||
--flannel-iface value (agent/networking) Override default flannel interface
|
||||
--flannel-conf value (agent/networking) Override default flannel config file
|
||||
--kubelet-arg value (agent/flags) Customized flag for kubelet process
|
||||
--kube-proxy-arg value (agent/flags) Customized flag for kube-proxy process
|
||||
--rootless (experimental) Run rootless
|
||||
--agent-token value (experimental/cluster) Shared secret used to join agents to the cluster, but not servers [$K3S_AGENT_TOKEN]
|
||||
--agent-token-file value (experimental/cluster) File containing the agent secret [$K3S_AGENT_TOKEN_FILE]
|
||||
--server value, -s value (experimental/cluster) Server to connect to, used to join a cluster [$K3S_URL]
|
||||
--cluster-init (experimental/cluster) Initialize new cluster master [$K3S_CLUSTER_INIT]
|
||||
--cluster-reset (experimental/cluster) Forget all peers and become a single cluster new cluster master [$K3S_CLUSTER_RESET]
|
||||
--no-flannel (deprecated) use --flannel-backend=none
|
||||
--cluster-secret value (deprecated) use --token [$K3S_CLUSTER_SECRET]
|
||||
```
|
||||
|
||||
### `k3s agent`
|
||||
```
|
||||
NAME:
|
||||
k3s agent - Run node agent
|
||||
|
||||
USAGE:
|
||||
k3s agent [OPTIONS]
|
||||
|
||||
OPTIONS:
|
||||
-v value (logging) Number for the log level verbosity (default: 0)
|
||||
--vmodule value (logging) Comma-separated list of pattern=N settings for file-filtered logging
|
||||
--log value, -l value (logging) Log to file
|
||||
--alsologtostderr (logging) Log to standard error as well as file (if set)
|
||||
--token value, -t value (cluster) Token to use for authentication [$K3S_TOKEN]
|
||||
--token-file value (cluster) Token file to use for authentication [$K3S_TOKEN_FILE]
|
||||
--server value, -s value (cluster) Server to connect to [$K3S_URL]
|
||||
--data-dir value, -d value (agent/data) Folder to hold state (default: "/var/lib/rancher/k3s")
|
||||
--node-name value (agent/node) Node name [$K3S_NODE_NAME]
|
||||
--with-node-id (agent/node) Append id to node name
|
||||
--node-label value (agent/node) Registering kubelet with set of labels
|
||||
--node-taint value (agent/node) Registering kubelet with set of taints
|
||||
--docker (agent/runtime) Use docker instead of containerd
|
||||
--container-runtime-endpoint value (agent/runtime) Disable embedded containerd and use alternative CRI implementation
|
||||
--pause-image value (agent/runtime) Customized pause image for containerd sandbox
|
||||
--private-registry value (agent/runtime) Private registry configuration file (default: "/etc/rancher/k3s/registries.yaml")
|
||||
--node-ip value, -i value (agent/networking) IP address to advertise for node
|
||||
--node-external-ip value (agent/networking) External IP address to advertise for node
|
||||
--resolv-conf value (agent/networking) Kubelet resolv.conf file [$K3S_RESOLV_CONF]
|
||||
--flannel-iface value (agent/networking) Override default flannel interface
|
||||
--flannel-conf value (agent/networking) Override default flannel config file
|
||||
--kubelet-arg value (agent/flags) Customized flag for kubelet process
|
||||
--kube-proxy-arg value (agent/flags) Customized flag for kube-proxy process
|
||||
--rootless (experimental) Run rootless
|
||||
--no-flannel (deprecated) use --flannel-backend=none
|
||||
--cluster-secret value (deprecated) use --token [$K3S_CLUSTER_SECRET]
|
||||
```
|
||||
@@ -0,0 +1,38 @@
|
||||
---
|
||||
title: Node Requirements
|
||||
weight: 1
|
||||
---
|
||||
|
||||
K3s is very lightweight, but has some minimum requirements as outlined below.
|
||||
|
||||
Whether you're configuring a K3s cluster to run in a single-node or high-availability (HA) setup, each node running K3s should meet the following minimum requirements. You may need more resources to fit your needs.
|
||||
|
||||
## Prerequisites
|
||||
* Two nodes cannot have the same hostname. If all your nodes have the same hostname, pass `--node-name` or set `$K3S_NODE_NAME` with a unique name for each node you add to the cluster.
|
||||
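For example, a unique name could be assigned when joining a node (a sketch; `node-2` is a placeholder):

```sh
K3S_NODE_NAME=node-2 k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
```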
|
||||
## Operating Systems
|
||||
|
||||
K3s should run on just about any flavor of Linux. However, K3s is tested on the following operating systems and their subsequent non-major releases.
|
||||
|
||||
* Ubuntu 16.04 (amd64)
|
||||
* Ubuntu 18.04 (amd64)
|
||||
* Raspbian Buster (armhf)
|
||||
|
||||
## Hardware
|
||||
|
||||
Hardware requirements scale based on the size of your deployments. Minimum recommendations are outlined here.
|
||||
|
||||
* RAM: 512MB Minimum
|
||||
* CPU: 1 Minimum
|
||||
|
||||
#### Disks
|
||||
|
||||
K3s performance depends on the performance of the database. To ensure optimal speed, we recommend using an SSD when possible. Disk performance will vary on ARM devices utilizing an SD card or eMMC.
|
||||
|
||||
## Networking
|
||||
|
||||
The K3s server needs port 6443 to be accessible by the nodes. The nodes need to be able to reach other nodes over UDP port 8472 (Flannel VXLAN). If you do not use flannel and provide your own custom CNI, then port 8472 is not needed by K3s. The node should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel.
|
||||
|
||||
IMPORTANT: The VXLAN port on nodes should not be exposed to the world as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disables access to port 8472.
|
||||
|
||||
If you wish to utilize the metrics server, you will need to open port 10250 on each node.
|
||||
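As an illustration, rules along these lines could be applied with `ufw` (a sketch assuming a 10.0.0.0/8 internal network; adjust the CIDR and tooling to your environment):

```sh
sudo ufw allow 6443/tcp                                     # Kubernetes API server
sudo ufw allow from 10.0.0.0/8 to any port 8472 proto udp   # flannel VXLAN, internal traffic only
sudo ufw allow from 10.0.0.0/8 to any port 10250 proto tcp  # metrics server
```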
@@ -0,0 +1,13 @@
|
||||
---
|
||||
title: Known Issues
|
||||
weight: 70
|
||||
---
|
||||
The Known Issues are updated periodically and are designed to inform you about issues that may not be immediately addressed in the next upcoming release.
|
||||
|
||||
**Snap Docker**
|
||||
|
||||
If you plan to use K3s with Docker, note that Docker installed via a snap package is not recommended, as it has been known to cause issues running K3s.
|
||||
|
||||
**Iptables**
|
||||
|
||||
If you are running iptables in nftables mode instead of legacy mode, you might encounter issues. We recommend using a newer iptables (such as 1.6.1+) to avoid them.
|
||||
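To check which iptables version and mode are in use (a sketch; the output format varies by distribution and iptables release):

```sh
iptables --version   # e.g. "iptables v1.8.4 (nf_tables)" or "... (legacy)"
```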
@@ -0,0 +1,43 @@
|
||||
---
|
||||
title: "Networking"
|
||||
weight: 35
|
||||
---
|
||||
|
||||
Open Ports
|
||||
----------
|
||||
Please reference the [Node Requirements]({{< baseurl >}}/k3s/latest/en/installation/node-requirements/#networking) page for port information.
|
||||
|
||||
Flannel
|
||||
-------
|
||||
|
||||
Flannel is included by default. If you don't want flannel, run each agent with the `--no-flannel` option.
|
||||
|
||||
In this setup you will still need to install your own CNI driver. More info is available [here](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network)
|
||||
|
||||
CoreDNS
|
||||
-------
|
||||
|
||||
CoreDNS is deployed when the agent starts. To disable it, run each server with the `--no-deploy coredns` option.
|
||||
|
||||
If you don't install CoreDNS you will need to install a cluster DNS provider yourself.
|
||||
|
||||
Traefik Ingress Controller
|
||||
--------------------------
|
||||
|
||||
Traefik is deployed by default when starting the server. For more information see [Auto Deploying Manifests]({{< baseurl >}}/k3s/latest/en/configuration/#auto-deploying-manifests). The default config file is found in `/var/lib/rancher/k3s/server/manifests/traefik.yaml` and any changes made to this file will automatically be deployed to Kubernetes in a manner similar to `kubectl apply`.
|
||||
|
||||
The Traefik ingress controller will use ports 80, 443, and 8080 on the host (i.e. these will not be usable for HostPort or NodePort).
|
||||
|
||||
You can tweak Traefik to meet your needs by setting options in the `traefik.yaml` file.
|
||||
Reference the official [Traefik for Helm Configuration Parameters](https://github.com/helm/charts/tree/master/stable/traefik#configuration) readme for more information.
|
||||
|
||||
To disable it, start each server with the `--no-deploy traefik` option.
|
||||
|
||||
Service Load Balancer
|
||||
---------------------
|
||||
|
||||
K3s includes a basic service load balancer that uses available host ports. If you try to create
|
||||
a load balancer that listens on port 80, for example, it will try to find a free host in the cluster
|
||||
for port 80. If no port is available, the load balancer will stay in Pending.
|
||||
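For instance, a `LoadBalancer` service created like this would be bound to port 80 on a node (a sketch; the `my-lb` and `app: my-app` names are placeholders):

```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: my-app
EOF
```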
|
||||
To disable the embedded load balancer, run the server with the `--no-deploy servicelb` option. This is necessary if you wish to run a different load balancer, such as MetalLB.
|
||||
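For example (a sketch):

```sh
k3s server --no-deploy servicelb
```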
@@ -1,44 +1,30 @@
|
||||
---
|
||||
title: "Quick-Start"
|
||||
weight: 1
|
||||
title: "Quick-Start Guide"
|
||||
weight: 10
|
||||
---
|
||||
|
||||
>**Note:** This guide will help you quickly launch a cluster with default options. The [installation section](../installation) covers in greater detail how K3s can be set up.
|
||||
|
||||
> New to Kubernetes? The official Kubernetes docs already have some great tutorials outlining the basics [here](https://kubernetes.io/docs/tutorials/kubernetes-basics/).
|
||||
|
||||
Install Script
|
||||
--------------
|
||||
K3s provides an installation script that is a convenient way to install it as a service on systemd or openrc based systems. This script is available at https://get.k3s.io. To install K3s using this method, just run:
|
||||
```bash
|
||||
curl -sfL https://get.k3s.io | sh -
|
||||
```
|
||||
|
||||
After running this installation:
|
||||
|
||||
* The K3s service will be configured to automatically restart after node reboots or if the process crashes or is killed
|
||||
* Additional utilities will be installed, including `kubectl`, `crictl`, `ctr`, `k3s-killall.sh`, and `k3s-uninstall.sh`
|
||||
* A kubeconfig file will be written to `/etc/rancher/k3s/k3s.yaml` and the kubectl installed by K3s will automatically use it
|
||||
|
||||
To install on worker nodes and add them to the cluster, run the installation script with the `K3S_URL` and `K3S_TOKEN` environment variables. Here is an example showing how to join a worker node:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
|
||||
```
|
||||
Setting the `K3S_URL` parameter causes K3s to run in worker mode. The K3s agent will register with the K3s server listening at the supplied URL. The value to use for `K3S_TOKEN` is stored at `/var/lib/rancher/k3s/server/node-token` on your server node.
|
||||
|
||||
|
||||
Manual Download
|
||||
---------------
|
||||
1. Download `k3s` from latest [release](https://github.com/rancher/k3s/releases/latest), x86_64, armhf, and arm64 are supported.
|
||||
2. Run server.
|
||||
|
||||
```bash
|
||||
sudo k3s server &
|
||||
# Kubeconfig is written to /etc/rancher/k3s/k3s.yaml
|
||||
sudo k3s kubectl get nodes
|
||||
|
||||
# On a different node run the below. NODE_TOKEN comes from
|
||||
# /var/lib/rancher/k3s/server/node-token on your server
|
||||
sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
|
||||
```
|
||||
Note: Each machine must have a unique hostname. If your machines do not have unique hostnames, pass the `K3S_NODE_NAME` environment variable and provide a value with a valid and unique hostname for each node.
|
||||
|
||||
@@ -1,253 +0,0 @@
|
||||
---
|
||||
title: "Running K3S"
|
||||
weight: 3
|
||||
---
|
||||
|
||||
This section contains information for running k3s in various environments.
|
||||
|
||||
Starting the Server
|
||||
------------------
|
||||
|
||||
The installation script will auto-detect if your OS is using systemd or openrc and start the service.
|
||||
When running with openrc, logs will be created at `/var/log/k3s.log`; with systemd, logs go to `/var/log/syslog` and can be viewed using `journalctl -u k3s`. An example of installing and auto-starting with the install script:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get.k3s.io | sh -
|
||||
```
|
||||
|
||||
When running the server manually you should get an output similar to:
|
||||
|
||||
```
|
||||
$ k3s server
|
||||
INFO[2019-01-22T15:16:19.908493986-07:00] Starting k3s dev
|
||||
INFO[2019-01-22T15:16:19.908934479-07:00] Running kube-apiserver --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key
|
||||
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
|
||||
INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false
|
||||
INFO[2019-01-22T15:16:20.196880841-07:00] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false
|
||||
Flag --port has been deprecated, see --secure-port instead.
|
||||
INFO[2019-01-22T15:16:20.273441984-07:00] Listening on :6443
|
||||
INFO[2019-01-22T15:16:20.278383446-07:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
|
||||
INFO[2019-01-22T15:16:20.474454524-07:00] Node token is available at /var/lib/rancher/k3s/server/node-token
|
||||
INFO[2019-01-22T15:16:20.474471391-07:00] To join node to cluster: k3s agent -s https://10.20.0.3:6443 -t ${NODE_TOKEN}
|
||||
INFO[2019-01-22T15:16:20.541027133-07:00] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml
|
||||
INFO[2019-01-22T15:16:20.541049100-07:00] Run: k3s kubectl
|
||||
```
|
||||
|
||||
The output will likely be much longer as the agent will create a lot of logs. By default the server
|
||||
will register itself as a node (run the agent).
|
||||
|
||||
It is common and almost required these days that the control plane be part of the cluster.
|
||||
To disable the agent when running the server, use the `--disable-agent` flag; the agent can then be run as a separate process.
|
||||
|
||||
Joining Nodes
|
||||
-------------
|
||||
|
||||
When the server starts it creates a file `/var/lib/rancher/k3s/server/node-token`.
|
||||
Using the contents of that file as `K3S_TOKEN` and setting `K3S_URL` allows the node
|
||||
to join as an agent using the install script:
|
||||
|
||||
```sh
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=XXX sh -
```
|
||||
|
||||
When using the install script, openrc logs will be created at `/var/log/k3s-agent.log`; with systemd, logs go to `/var/log/syslog` and can be viewed using `journalctl -u k3s-agent`.
|
||||
|
||||
Or running k3s manually with the token as `NODE_TOKEN`:
|
||||
|
||||
```sh
k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
```
|
||||
|
||||
SystemD
|
||||
-------
|
||||
|
||||
If you are using systemd here is a sample unit `k3s.service`:
|
||||
|
||||
```ini
|
||||
[Unit]
|
||||
Description=Lightweight Kubernetes
|
||||
Documentation=https://k3s.io
|
||||
After=network-online.target
|
||||
|
||||
[Service]
|
||||
Type=notify
|
||||
EnvironmentFile=/etc/systemd/system/k3s.service.env
|
||||
ExecStart=/usr/local/bin/k3s server
|
||||
KillMode=process
|
||||
Delegate=yes
|
||||
LimitNOFILE=infinity
|
||||
LimitNPROC=infinity
|
||||
LimitCORE=infinity
|
||||
TasksMax=infinity
|
||||
TimeoutStartSec=0
|
||||
Restart=always
|
||||
RestartSec=5s
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
```
|
||||
|
||||
OpenRC
|
||||
------
|
||||
|
||||
And an example openrc `/etc/init.d/k3s`:
|
||||
|
||||
```bash
|
||||
#!/sbin/openrc-run
|
||||
|
||||
depend() {
|
||||
after net-online
|
||||
need net
|
||||
}
|
||||
|
||||
start_pre() {
|
||||
rm -f /tmp/k3s.*
|
||||
}
|
||||
|
||||
supervisor=supervise-daemon
|
||||
name="k3s"
|
||||
command="/usr/local/bin/k3s"
|
||||
command_args="server >>/var/log/k3s.log 2>&1"
|
||||
|
||||
pidfile="/var/run/k3s.pid"
|
||||
respawn_delay=5
|
||||
|
||||
set -o allexport
|
||||
if [ -f /etc/environment ]; then source /etc/environment; fi
|
||||
if [ -f /etc/rancher/k3s/k3s.env ]; then source /etc/rancher/k3s/k3s.env; fi
|
||||
set +o allexport
|
||||
```
|
||||
|
||||
Alpine Linux
|
||||
------------
|
||||
|
||||
In order to prepare Alpine Linux, you have to go through the following steps:
|
||||
|
||||
```bash
|
||||
echo "cgroup /sys/fs/cgroup cgroup defaults 0 0" >> /etc/fstab
|
||||
|
||||
cat >> /etc/cgconfig.conf <<EOF
|
||||
mount {
|
||||
cpuacct = /cgroup/cpuacct;
|
||||
memory = /cgroup/memory;
|
||||
devices = /cgroup/devices;
|
||||
freezer = /cgroup/freezer;
|
||||
net_cls = /cgroup/net_cls;
|
||||
blkio = /cgroup/blkio;
|
||||
cpuset = /cgroup/cpuset;
|
||||
cpu = /cgroup/cpu;
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
Then update **/etc/update-extlinux.conf** by adding:
|
||||
|
||||
```
|
||||
default_kernel_opts="... cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
|
||||
```
|
||||
|
||||
Then update the config and reboot:
|
||||
|
||||
```bash
|
||||
update-extlinux
|
||||
reboot
|
||||
```
|
||||
|
||||
After rebooting:
|
||||
|
||||
- download **k3s** to **/usr/local/bin/k3s**
|
||||
- create an openrc file in **/etc/init.d**
|
||||
|
||||
Running in Docker (and docker-compose)
|
||||
-----------------
|
||||
|
||||
[k3d](https://github.com/rancher/k3d) is a utility designed to easily run k3s in Docker. It can be installed via the [brew](https://brew.sh/) utility for macOS.
|
||||
|
||||
`rancher/k3s` images are also available to run k3s server and agent from Docker. A `docker-compose.yml` is in the root of the k3s repo that
|
||||
serves as an example of how to run k3s from Docker. To run `docker-compose` from this repo, run:
|
||||
|
||||
```sh
docker-compose up --scale node=3
# kubeconfig is written to current dir
kubectl --kubeconfig kubeconfig.yaml get node

NAME           STATUS   ROLES    AGE   VERSION
497278a2d6a2   Ready    <none>   11s   v1.13.2-k3s2
d54c8b17c055   Ready    <none>   11s   v1.13.2-k3s2
db7a5a5a5bdd   Ready    <none>   12s   v1.13.2-k3s2
```
|
||||
|
||||
To run the agent only in Docker, use `docker-compose up node`. Alternatively, the `docker run` command can also be used:
|
||||
|
||||
```sh
sudo docker run \
  -d --tmpfs /run \
  --tmpfs /var/run \
  -e K3S_URL=${SERVER_URL} \
  -e K3S_TOKEN=${NODE_TOKEN} \
  --privileged rancher/k3s:vX.Y.Z
```
|
||||
|
||||
Air-Gap Support
|
||||
---------------
|
||||
|
||||
k3s supports pre-loading of containerd images by placing them in the `images` directory for the agent before starting, for example:
|
||||
```sh
|
||||
sudo mkdir -p /var/lib/rancher/k3s/agent/images/
|
||||
sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/
|
||||
```
|
||||
Images needed for a base install are provided through the releases page; additional images can be created with the `docker save` command.
|
||||
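For example, an extra image could be packaged and staged like this (a sketch; the image name and tag are placeholders):

```sh
docker save -o extra-images.tar rancher/k3s:vX.Y.Z
sudo cp extra-images.tar /var/lib/rancher/k3s/agent/images/
```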
|
||||
Offline Helm charts are served from the `/var/lib/rancher/k3s/server/static` directory, and Helm chart manifests may reference the static files with a `%{KUBERNETES_API}%` templated variable. For example, the default traefik manifest chart installs from `https://%{KUBERNETES_API}%/static/charts/traefik-X.Y.Z.tgz`.
|
||||
|
||||
If networking is completely disabled, k3s may not be able to start (i.e. ethernet unplugged or Wi-Fi disconnected), in which case it may be necessary to add a default route. For example:
|
||||
```sh
|
||||
sudo ip -c address add 192.168.123.123/24 dev eno1
|
||||
sudo ip route add default via 192.168.123.1
|
||||
```
|
||||
|
||||
k3s additionally provides a `--resolv-conf` flag for kubelets, which may help with configuring DNS in air-gap networks.
|
||||
|
||||
Upgrades
|
||||
--------
|
||||
|
||||
To upgrade k3s from an older version you can re-run the installation script using the same flags, for example:
|
||||
|
||||
```sh
|
||||
curl -sfL https://get.k3s.io | sh -
|
||||
```
|
||||
|
||||
If you want to upgrade to a specific version, you can run the following command:
|
||||
|
||||
```sh
|
||||
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z-rc1 sh -
|
||||
```
|
||||
|
||||
Or to manually upgrade k3s:
|
||||
|
||||
1. Download the desired version of k3s from [releases](https://github.com/rancher/k3s/releases/latest)
|
||||
2. Install to an appropriate location (normally `/usr/local/bin/k3s`)
|
||||
3. Stop the old version
|
||||
4. Start the new version
|
||||
|
||||
Restarting k3s is supported by the installation script for systemd and openrc.
|
||||
To restart manually for systemd use:
|
||||
```sh
|
||||
sudo systemctl restart k3s
|
||||
```
|
||||
|
||||
To restart manually for openrc use:
|
||||
```sh
|
||||
sudo service k3s restart
|
||||
```
|
||||
|
||||
Upgrading an air-gap environment can be accomplished in the following manner:
|
||||
|
||||
1. Download air-gap images and install if changed
|
||||
2. Install new k3s binary (from installer or manual download)
|
||||
3. Restart k3s (if not restarted automatically by installer)
|
||||
|
||||
Uninstalling
|
||||
------------
|
||||
|
||||
If you installed k3s with the help of the `install.sh` script, an uninstall script is generated during installation and created on your server node at `/usr/local/bin/k3s-uninstall.sh` (or as `k3s-agent-uninstall.sh`).
|
||||
|
||||
Hyperkube
|
||||
---------
|
||||
|
||||
k3s is bundled in a nice wrapper to remove the majority of the headache of running k8s. If
|
||||
you don't want that wrapper and just want a smaller k8s distro, the releases include
|
||||
the `hyperkube` binary you can use. It's then up to you to know how to use `hyperkube`. If
|
||||
you want individual binaries you will need to compile them yourself from source.
|
||||
@@ -0,0 +1,122 @@
|
||||
---
|
||||
title: "Volumes and Storage"
|
||||
weight: 30
|
||||
---
|
||||
|
||||
When deploying an application that needs to retain data, you’ll need to create persistent storage. Persistent storage allows you to store application data external from the pod running your application. This storage practice allows you to maintain application data, even if the application’s pod fails.
|
||||
|
||||
# Local Storage Provider
|
||||
K3s comes with Rancher's Local Path Provisioner, which enables creating persistent volume claims out of the box using local storage on the respective node. Below we cover a simple example. For more information please reference the official documentation [here](https://github.com/rancher/local-path-provisioner/blob/master/README.md#usage).
|
||||
|
||||
Create a hostPath backed persistent volume claim and a pod to utilize it:
|
||||
|
||||
### pvc.yaml
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: local-path-pvc
|
||||
namespace: default
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
storageClassName: local-path
|
||||
resources:
|
||||
requests:
|
||||
storage: 2Gi
|
||||
```
|
||||
|
||||
### pod.yaml
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: volume-test
|
||||
namespace: default
|
||||
spec:
|
||||
containers:
|
||||
- name: volume-test
|
||||
image: nginx:stable-alpine
|
||||
imagePullPolicy: IfNotPresent
|
||||
volumeMounts:
|
||||
- name: volv
|
||||
mountPath: /data
|
||||
ports:
|
||||
- containerPort: 80
|
||||
volumes:
|
||||
- name: volv
|
||||
persistentVolumeClaim:
|
||||
claimName: local-path-pvc
|
||||
```
|
||||
|
||||
Apply the yaml with `kubectl create -f pvc.yaml` and `kubectl create -f pod.yaml`.
|
||||
|
||||
Confirm the PV and PVC are created with `kubectl get pv` and `kubectl get pvc`. The status should be Bound for each.
|
||||
|
||||
# Longhorn
|
||||
|
||||
[comment]: <> (pending change - longhorn may support arm64 and armhf in the future.)
|
||||
|
||||
> **Note:** At this time Longhorn only supports amd64.
|
||||
|
||||
K3s supports [Longhorn](https://github.com/longhorn/longhorn). Below we cover a simple example. For more information please reference the official documentation [here](https://github.com/longhorn/longhorn/blob/master/README.md).
|
||||
|
||||
Apply the longhorn.yaml to install Longhorn.
|
||||
|
||||
```
|
||||
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
|
||||
```
|
||||
|
||||
Longhorn will be installed in the namespace `longhorn-system`.
|
||||
|
||||
Before we create a PVC, we will create a storage class for Longhorn with this yaml.
|
||||
|
||||
```
|
||||
kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/examples/storageclass.yaml
|
||||
```
|
||||
|
||||
Now, apply the following yaml to create the PVC and pod with `kubectl create -f pvc.yaml` and `kubectl create -f pod.yaml`
|
||||
|
||||
### pvc.yaml
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: longhorn-volv-pvc
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
storageClassName: longhorn
|
||||
resources:
|
||||
requests:
|
||||
storage: 2Gi
|
||||
```
|
||||
|
||||
### pod.yaml
|
||||
|
||||
```
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: volume-test
|
||||
namespace: default
|
||||
spec:
|
||||
containers:
|
||||
- name: volume-test
|
||||
image: nginx:stable-alpine
|
||||
imagePullPolicy: IfNotPresent
|
||||
volumeMounts:
|
||||
- name: volv
|
||||
mountPath: /data
|
||||
ports:
|
||||
- containerPort: 80
|
||||
volumes:
|
||||
- name: volv
|
||||
persistentVolumeClaim:
|
||||
claimName: longhorn-volv-pvc
|
||||
```
|
||||
|
||||
Confirm the PV and PVC are created with `kubectl get pv` and `kubectl get pvc`. The status should be Bound for each.
|
||||
@@ -0,0 +1,36 @@
|
||||
---
|
||||
title: "Upgrades"
|
||||
weight: 25
|
||||
---
|
||||
|
||||
>**Note:** When upgrading, upgrade server nodes first one at a time then any worker nodes.
|
||||
|
||||
To upgrade K3s from an older version you can re-run the installation script using the same flags, for example:
|
||||
|
||||
```sh
|
||||
curl -sfL https://get.k3s.io | sh -
|
||||
```
|
||||
|
||||
If you want to upgrade to a specific version, you can run the following command:
|
||||
|
||||
```sh
|
||||
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z-rc1 sh -
|
||||
```
|
||||
|
||||
Or to manually upgrade K3s:
|
||||
|
||||
1. Download the desired version of K3s from [releases](https://github.com/rancher/k3s/releases/latest)
|
||||
2. Install to an appropriate location (normally `/usr/local/bin/k3s`)
|
||||
3. Stop the old version
|
||||
4. Start the new version
|
||||
|
||||
Restarting K3s is supported by the installation script for systemd and openrc.
|
||||
To restart manually for systemd use:
|
||||
```sh
|
||||
sudo systemctl restart k3s
|
||||
```
|
||||
|
||||
To restart manually for openrc use:
|
||||
```sh
|
||||
sudo service k3s restart
|
||||
```
|
||||
@@ -35,7 +35,7 @@ System Docker runs a special container called **Docker**, which is another Docke
|
||||
|
||||
We created this separation not only for the security benefits, but also to make sure that commands like `docker rm -f $(docker ps -qa)` don't delete the entire OS.
|
||||
|
||||

|
||||
{{< img "/img/os/rancheroshowitworks.png" "How it works">}}
|
||||
|
||||
### Running RancherOS
|
||||
|
||||
|
||||
@@ -33,7 +33,7 @@ weight: 303
|
||||
| [CVE-2017-5715](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5715) | Systems with microprocessors utilizing speculative execution and indirect branch prediction may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis | 6 Feb 2018 | [RancherOS v1.1.4](https://github.com/rancher/os/releases/tag/v1.1.4) using Linux v4.9.78 with the Retpoline support |
|
||||
| [CVE-2017-5753](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5753) | Systems with microprocessors utilizing speculative execution and branch prediction may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis. | 31 May 2018 | [RancherOS v1.4.0](https://github.com/rancher/os/releases/tag/v1.4.0) using Linux v4.14.32 |
|
||||
| [CVE-2018-8897](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-8897) | A statement in the System Programming Guide of the Intel 64 and IA-32 Architectures Software Developer's Manual (SDM) was mishandled in the development of some or all operating-system kernels, resulting in unexpected behavior for #DB exceptions that are deferred by MOV SS or POP SS, as demonstrated by (for example) privilege escalation in Windows, macOS, some Xen configurations, or FreeBSD, or a Linux kernel crash. | 31 May 2018 | [RancherOS v1.4.0](https://github.com/rancher/os/releases/tag/v1.4.0) using Linux v4.14.32 |
|
||||
| [L1 Terminal Fault](https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html) | L1 Terminal Fault is a hardware vulnerability which allows unprivileged speculative access to data which is available in the Level 1 Data Cache when the page table entry controlling the virtual address, which is used for the access, has the Present bit cleared or other reserved bits set. | 19 Sep 2018 | [RancherOS v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) using Linux v4.14.67 |
|
||||
| [CVE-2018-3620](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3620) | L1 Terminal Fault is a hardware vulnerability which allows unprivileged speculative access to data which is available in the Level 1 Data Cache when the page table entry controlling the virtual address, which is used for the access, has the Present bit cleared or other reserved bits set. | 19 Sep 2018 | [RancherOS v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) using Linux v4.14.67 |
|
||||
| [CVE-2018-3639](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3639) | Systems with microprocessors utilizing speculative execution and speculative execution of memory reads before the addresses of all prior memory writes are known may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis, aka Speculative Store Bypass (SSB), Variant 4. | 19 Sep 2018 | [RancherOS v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) using Linux v4.14.67 |
|
||||
| [CVE-2018-17182](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-17182) | The vmacache_flush_all function in mm/vmacache.c mishandles sequence number overflows. An attacker can trigger a use-after-free (and possibly gain privileges) via certain thread creation, map, unmap, invalidation, and dereference operations. | 18 Oct 2018 | [RancherOS v1.4.2](https://github.com/rancher/os/releases/tag/v1.4.2) using Linux v4.14.73 |
|
||||
| [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736) | runc through 1.0-rc6, as used in Docker before 18.09.2 and other products, allows attackers to overwrite the host runc binary (and consequently obtain host root access) by leveraging the ability to execute a command as root within one of these types of containers: (1) a new container with an attacker-controlled image, or (2) an existing container, to which the attacker previously had write access, that can be attached with docker exec. This occurs because of file-descriptor mishandling, related to /proc/self/exe. | 12 Feb 2019 | [RancherOS v1.5.1](https://github.com/rancher/os/releases/tag/v1.5.1) |
|
||||
|
||||
@@ -0,0 +1,22 @@
|
||||
---
|
||||
title: Date and time zone
|
||||
weight: 121
|
||||
---
|
||||
|
||||
The default console keeps time in the Coordinated Universal Time (UTC) zone and synchronizes clocks with the Network Time Protocol (NTP). The Network Time Protocol daemon (ntpd) is an operating system program that maintains the system time in synchronization with time servers using NTP.
|
||||
|
||||
RancherOS can run ntpd in the System Docker container. You can update its configurations by updating `/etc/ntp.conf`. For an example of how to update a file such as `/etc/ntp.conf` within a container, refer to [this page.]({{< baseurl >}}/os/v1.x/en/installation/configuration/write-files/#writing-files-in-specific-system-services)
|
||||
|
||||
The default console cannot support changing the time zone because including `tzdata` (time zone data) would increase the ISO size. However, you can change the time zone in the container by passing a flag to specify the time zone when you run the container:
|
||||
|
||||
```
|
||||
$ docker run -e TZ=Europe/Amsterdam debian:jessie date
|
||||
Tue Aug 20 09:28:19 CEST 2019
|
||||
```
|
||||
|
||||
You may need to install `tzdata` in some images:
|
||||
|
||||
```
|
||||
$ docker run -e TZ=Asia/Shanghai -e DEBIAN_FRONTEND=noninteractive -it --rm ubuntu /bin/bash -c "apt-get update && apt-get install -yq tzdata && date"
|
||||
Thu Aug 29 08:13:02 CST 2019
|
||||
```
|
||||
@@ -64,7 +64,7 @@ $ USER_DOCKER_VERSION=17.03.2 make release
|
||||
|
||||
_Available as of v1.5.0_
|
||||
|
||||
When building RancherOS, you have the ability to automatically start in a supported console instead of booting into the default console and switching to your desired one.
|
||||
|
||||
Here is an example of building RancherOS and having the `alpine` console enabled:
|
||||
|
||||
|
||||
@@ -25,17 +25,17 @@ Let’s walk through how to import and create a RancherOS on EC2 machine using t
|
||||
|
||||
|
||||
1. First login to your AWS console, and go to the EC2 dashboard, click on **Launch Instance**:
|
||||

|
||||
{{< img "/img/os/Rancher_aws1.png" "RancherOS on AWS 1">}}
|
||||
2. Select the **Community AMIs** on the sidebar and search for **RancherOS**. Pick the latest version and click **Select**.
|
||||

|
||||
{{< img "/img/os/Rancher_aws2.png" "RancherOS on AWS 2">}}
|
||||
3. Go through the steps of creating the instance type through the AWS console. If you want to pass in a [cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/#cloud-config) file during boot of RancherOS, you'd pass in the file as **User data** by expanding the **Advanced Details** in **Step 3: Configure Instance Details**. You can pass in the data as text or as a file.
|
||||

|
||||
{{< img "/img/os/Rancher_aws6.png" "RancherOS on AWS 6">}}
|
||||
After going through all the steps, you finally click on **Launch**, and either create a new key pair or choose an existing key pair to be used with the EC2 instance. If you have created a new key pair, download the key pair. If you have chosen an existing key pair, make sure you have the key pair accessible. Click on **Launch Instances**.
|
||||

|
||||
{{< img "/img/os/Rancher_aws3.png" "RancherOS on AWS 3">}}
|
||||
4. Your instance will be launching and you can click on **View Instances** to see its status.
|
||||

|
||||
{{< img "/img/os/Rancher_aws4.png" "RancherOS on AWS 4">}}
|
||||
Your instance is now running!
|
||||

|
||||
{{< img "/img/os/Rancher_aws5.png" "RancherOS on AWS 5">}}
|
||||
|
||||
## Logging into RancherOS
|
||||
|
||||
|
||||
@@ -3,7 +3,7 @@ title: Environment
|
||||
weight: 143
|
||||
---
|
||||
|
||||
The [environment key](https://docs.docker.com/compose/compose-file/#environment) can be used to customize system services. When a value is not assigned, RancherOS looks up the value from the `rancher.environment` key.
|
||||
|
||||
In the example below, `ETCD_DISCOVERY` will be set to `https://discovery.etcd.io/d1cd18f5ee1c1e2223aed6a1734719f7` for the `etcd` service.
|
||||
|
||||
|
||||
@@ -35,7 +35,7 @@ System Docker runs a special container called **Docker**, which is another Docke
|
||||
|
||||
We created this separation not only for the security benefits, but also to make sure that commands like `docker rm -f $(docker ps -qa)` don't delete the entire OS.
|
||||
|
||||

|
||||
{{< img "/img/os/rancheroshowitworks.png" "How it works">}}
|
||||
|
||||
### Running RancherOS
|
||||
|
||||
|
||||
@@ -92,7 +92,7 @@ $ sudo system-docker run -d --net=host --name busydash husseingalal/busydash
|
||||
```
|
||||
In the command, we used `--net=host` to tell System Docker not to containerize the container's networking, and use the host’s networking instead. After running the container, you can see the monitoring server by accessing `http://<IP_OF_MACHINE>`.
|
||||
|
||||

|
||||
{{< img "/img/os/Rancher_busydash.png" "System Docker Container">}}
|
||||
|
||||
To make the container survive during the reboots, you can create the `/opt/rancher/bin/start.sh` script, and add the Docker start line to launch the Docker at each startup.
|
||||
|
||||
|
||||
@@ -42,7 +42,7 @@ Drivers in Rancher allow you to manage which providers can be used to provision
|
||||
|
||||
For more information, see [Provisioning Drivers]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/).
|
||||
|
||||
## Adding Kubernetes Versions into Rancher
|
||||
|
||||
_Available as of v2.3.0_
|
||||
|
||||
|
||||
@@ -74,11 +74,12 @@ The table below details the parameters for the user schema section configuration
|
||||
|
||||
| Parameter | Description |
|
||||
|:--|:--|
|
||||
| Object Class | The name of the object class used for user objects in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) |
|
||||
| Username Attribute | The user attribute whose value is suitable as a display name. |
|
||||
| Login Attribute | The attribute whose value matches the username part of credentials entered by your users when logging in to Rancher. If your users authenticate with their UPN (e.g. "jdoe@acme.com") as username then this field must normally be set to `userPrincipalName`. Otherwise for the old, NetBIOS-style logon names (e.g. "jdoe") it's usually `sAMAccountName`. |
|
||||
| User Member Attribute | The attribute containing the groups that a user is a member of. |
|
||||
| Search Attribute | When a user enters text to add users or groups in the UI, Rancher queries the AD server and attempts to match users by the attributes provided in this setting. Multiple attributes can be specified by separating them with the pipe ("\|") symbol. To match UPN usernames (e.g. jdoe@acme.com) you should usually set the value of this field to `userPrincipalName`. |
|
||||
| Search Filter | This filter gets applied to the list of users that is searched when Rancher attempts to add users to a site access list or tries to add members to clusters or projects. For example, a user search filter could be <code>(|(memberOf=CN=group1,CN=Users,DC=testad,DC=rancher,DC=io)(memberOf=CN=group2,CN=Users,DC=testad,DC=rancher,DC=io))</code>. Note: If the search filter does not use [valid AD search syntax,](https://docs.microsoft.com/en-us/windows/win32/adsi/search-filter-syntax) the list of users will be empty. |
|
||||
| User Enabled Attribute | The attribute containing an integer value representing a bitwise enumeration of user account flags. Rancher uses this to determine if a user account is disabled. You should normally leave this set to the AD standard `userAccountControl`. |
|
||||
| Disabled Status Bitmask | This is the value of the `User Enabled Attribute` designating a disabled user account. You should normally leave this set to the default value of "2" as specified in the Microsoft Active Directory schema (see [here](https://docs.microsoft.com/en-us/windows/desktop/adschema/a-useraccountcontrol#remarks)). |
|
||||
|
||||
@@ -92,11 +93,12 @@ The table below details the parameters for the group schema configuration.
|
||||
|
||||
| Parameter | Description |
|
||||
|:--|:--|
|
||||
| Object Class | The name of the object class used for group objects in your domain. If defined, only specify the name of the object class - *don't* include it in an LDAP wrapper such as &(objectClass=xxxx) |
|
||||
| Name Attribute | The group attribute whose value is suitable for a display name. |
|
||||
| Group Member User Attribute | The name of the **user attribute** whose format matches the group members in the `Group Member Mapping Attribute`. |
|
||||
| Group Member Mapping Attribute | The name of the group attribute containing the members of a group. |
|
||||
| Search Attribute | Attribute used to construct search filters when adding groups to clusters or projects. See description of user schema `Search Attribute`. |
|
||||
| Search Filter | This filter gets applied to the list of groups that is searched when Rancher attempts to add groups to a site access list or tries to add groups to clusters or projects. For example, a group search filter could be <code>(|(cn=group1)(cn=group2))</code>. Note: If the search filter does not use [valid AD search syntax,](https://docs.microsoft.com/en-us/windows/win32/adsi/search-filter-syntax) the list of groups will be empty. |
|
||||
| Group DN Attribute | The name of the group attribute whose format matches the values in the user attribute describing a the user's memberships. See `User Member Attribute`. |
|
||||
| Nested Group Membership | This settings defines whether Rancher should resolve nested group memberships. Use only if your organisation makes use of these nested memberships (ie. you have groups that contain other groups as members). |
|
||||
|
||||
@@ -146,7 +148,7 @@ $ ldapsearch -x -D "acme\jdoe" -w "secret" -p 389 \

This command performs an LDAP search with the search base set to the domain root (`-b "dc=acme,dc=com"`) and a filter targeting the user account (`sAMAccountName=jdoe`), returning the attributes for that user.

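Written out in full, the truncated command above might look like the following sketch; the hostname `ad.acme.com` and the bind credentials are placeholders:

```
$ ldapsearch -x -D "acme\jdoe" -w "secret" -p 389 \
    -h ad.acme.com -b "dc=acme,dc=com" \
    -s sub "(sAMAccountName=jdoe)"
```

The attributes returned for the user resemble the following screenshot:
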

|
||||
{{< img "/img/rancher/ldapsearch-user.png" "LDAP User">}}
|
||||
|
||||
Since in this case the user's DN is `CN=John Doe,CN=Users,DC=acme,DC=com` [5], we should configure the **User Search Base** with the parent node DN `CN=Users,DC=acme,DC=com`.
|
||||
|
||||
@@ -179,7 +181,7 @@ $ ldapsearch -x -D "acme\jdoe" -w "secret" -p 389 \

This command shows the attributes used for group objects.

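A full invocation might look like the sketch below; as before, the hostname and bind credentials are placeholders, and the group name `group1` is illustrative:

```
$ ldapsearch -x -D "acme\jdoe" -w "secret" -p 389 \
    -h ad.acme.com -b "dc=acme,dc=com" \
    -s sub "(&(objectClass=group)(cn=group1))"
```

The output resembles the following screenshot:
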

|
||||
{{< img "/img/rancher/ldapsearch-group.png" "LDAP Group">}}
|
||||
|
||||
Again, this allows us to determine the correct values to enter in the group schema configuration:
|
||||
|
||||
|
||||
@@ -9,6 +9,8 @@ _Available as of v2.0.3_

If you have an instance of Active Directory (AD) hosted in Azure, you can configure Rancher to allow your users to log in using their AD accounts. Configuration of Azure AD external authentication requires you to make configurations in both Azure and Rancher.

>**Note:** Azure AD integration only supports Service Provider initiated logins.

>**Prerequisite:** Have an instance of Azure AD configured.

>**Note:** Most of this procedure takes place from the [Microsoft Azure Portal](https://portal.azure.com/).

@@ -94,6 +94,9 @@ Using the Unique ID of the service account key, register it as an OAuth client u

1. Sign into Rancher using a local user assigned the [administrator]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions) role. This user is also called the local principal.
1. From the **Global** view, click **Security > Authentication** from the main menu.
1. Click **Google.** The instructions in the UI cover the steps to set up authentication with Google OAuth.
1. Admin Email: Provide the email of an administrator account from your G Suite setup. In order to perform user and group lookups, the Google APIs require an administrator's email in conjunction with the service account key.
1. Domain: Provide the domain on which you have configured G Suite. Provide the exact domain, not an alias.
1. Nested Group Membership: Check this box to enable nested group memberships. Rancher admins can disable this at any time after configuring auth.
- **Step One** is about adding Rancher as an authorized domain, which we already covered in [this section.](#1-adding-rancher-as-an-authorized-domain)
- For **Step Two,** provide the OAuth credentials JSON that you downloaded after completing [this section.](#2-creating-oauth2-credentials-for-the-rancher-server) You can upload the file or paste the contents into the **OAuth Credentials** field.
- For **Step Three,** provide the service account credentials JSON that you downloaded at the end of [this section.](#3-creating-service-account-credentials) The credentials will only work if you successfully [registered the service account key](#4-register-the-service-account-key-as-an-oauth-client) as an OAuth client in your G Suite account.

@@ -9,57 +9,57 @@ Before configuring Rancher to support AD FS users, you must add Rancher as a [re

1. Open the **AD FS Management** console. Select **Add Relying Party Trust...** from the **Actions** menu and click **Start**.

   {{< img "/img/rancher/adfs/adfs-overview.png" "">}}

1. Select **Enter data about the relying party manually** as the option for obtaining data about the relying party.

   {{< img "/img/rancher/adfs/adfs-add-rpt-2.png" "">}}

1. Enter your desired **Display name** for your Relying Party Trust. For example, `Rancher`.

   {{< img "/img/rancher/adfs/adfs-add-rpt-3.png" "">}}

1. Select **AD FS profile** as the configuration profile for your relying party trust.

   {{< img "/img/rancher/adfs/adfs-add-rpt-4.png" "">}}

1. Leave the **optional token encryption certificate** empty, as Rancher AD FS will not be using one.

   {{< img "/img/rancher/adfs/adfs-add-rpt-5.png" "">}}

1. Select **Enable support for the SAML 2.0 WebSSO protocol** and enter `https://<rancher-server>/v1-saml/adfs/saml/acs` for the service URL.

   {{< img "/img/rancher/adfs/adfs-add-rpt-6.png" "">}}

1. Add `https://<rancher-server>/v1-saml/adfs/saml/metadata` as the **Relying party trust identifier**.

   {{< img "/img/rancher/adfs/adfs-add-rpt-7.png" "">}}

1. This tutorial will not cover multi-factor authentication; please refer to the [Microsoft documentation](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/configure-additional-authentication-methods-for-ad-fs) if you would like to configure multi-factor authentication.

   {{< img "/img/rancher/adfs/adfs-add-rpt-8.png" "">}}

1. From **Choose Issuance Authorization Rules**, you may select either of the options available according to your use case. However, for the purposes of this guide, select **Permit all users to access this relying party**.

   {{< img "/img/rancher/adfs/adfs-add-rpt-9.png" "">}}

1. After reviewing your settings, select **Next** to add the relying party trust.

   {{< img "/img/rancher/adfs/adfs-add-rpt-10.png" "">}}

1. Select **Open the Edit Claim Rules...** and click **Close**.

   {{< img "/img/rancher/adfs/adfs-add-rpt-11.png" "">}}

1. On the **Issuance Transform Rules** tab, click **Add Rule...**.

   {{< img "/img/rancher/adfs/adfs-edit-cr.png" "">}}

1. Select **Send LDAP Attributes as Claims** as the **Claim rule template**.

   {{< img "/img/rancher/adfs/adfs-add-tcr-1.png" "">}}

1. Set the **Claim rule name** to your desired name (for example, `Rancher Attributes`) and select **Active Directory** as the **Attribute store**. Create the following mapping to reflect the table below:

@@ -70,7 +70,7 @@ Before configuring Rancher to support AD FS users, you must add Rancher as a [re

   | Token-Groups - Qualified by Long Domain Name | Group |
   | SAM-Account-Name | Name |

   {{< img "/img/rancher/adfs/adfs-add-tcr-2.png" "">}}

1. Download the `federationmetadata.xml` from your AD server at:

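AD FS publishes this file at a standard, well-known endpoint; as a sketch, substituting your AD FS server's hostname:

```
$ curl -O https://<AD-FS-SERVER>/federationmetadata/2007-06/federationmetadata.xml
```
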
@@ -7,6 +7,8 @@ _Available as of v2.2.0_

If your organization uses Okta Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.

>**Note:** Okta integration only supports Service Provider initiated logins.

## Prerequisites

In Okta, create a SAML Application with the settings below. See the [Okta documentation](https://developer.okta.com/standards/SAML/setting_up_a_saml_application_in_okta) for help.

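The Rancher-side service provider URLs follow the same `/v1-saml/<provider>/saml/...` pattern used for AD FS earlier in this document; the values below are assumptions on that basis, so verify them against what your Rancher UI displays:

```
Single sign on URL:          https://<rancher-server>/v1-saml/okta/saml/acs
Audience URI (SP Entity ID): https://<rancher-server>/v1-saml/okta/saml/metadata
```
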
@@ -75,7 +75,7 @@ The table below details the parameters for the user schema configuration.

| Parameter | Description |
|:--|:--|
| Object Class | The name of the object class used for user objects in your domain. If defined, specify only the name of the object class - *don't* wrap it in an LDAP filter such as &(objectClass=xxxx). |
| Username Attribute | The user attribute whose value is suitable as a display name. |
| Login Attribute | The attribute whose value matches the username part of credentials entered by your users when logging in to Rancher. This is typically `uid`. |
| User Member Attribute | The user attribute containing the Distinguished Name of groups a user is a member of. Usually this is one of `memberOf` or `isMemberOf`. |

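To confirm which object class and attributes your directory actually uses, query a known user directly; a minimal sketch, assuming a placeholder server URI, base DN, and the common `inetOrgPerson`/`uid` schema:

```
$ ldapsearch -x -H ldap://openldap.acme.com -b "dc=acme,dc=com" \
    "(&(objectClass=inetOrgPerson)(uid=jdoe))"
```
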
@@ -93,7 +93,7 @@ The table below details the parameters for the group schema configuration.

| Parameter | Description |
|:--|:--|
| Object Class | The name of the object class used for group entries in your domain. If defined, specify only the name of the object class - *don't* wrap it in an LDAP filter such as &(objectClass=xxxx). |
| Name Attribute | The group attribute whose value is suitable for a display name. |
| Group Member User Attribute | The name of the **user attribute** whose format matches the group members in the `Group Member Mapping Attribute`. |
| Group Member Mapping Attribute | The name of the group attribute containing the members of a group. |

@@ -25,10 +25,16 @@ For example, if you install Rancher, then set a feature flag to true with the Ra

The following is a list of the feature flags available in Rancher:

- `unsupported-storage-drivers`: This feature [allows unsupported storage drivers.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/feature-flags/enable-not-default-storage-drivers) In other words, it enables types for storage providers and provisioners that are not enabled by default.
- `istio-virtual-service-ui`: This feature enables a [UI to create, read, update, and delete Istio virtual services and destination rules]({{<baseurl>}}/rancher/v2.x/en/admin-settings/feature-flags/istio-virtual-service-ui), which are traffic management features of Istio.

The table below shows the availability and default value for each feature flag in Rancher:

Feature Flag Name | Default Value | Status | Available as of |
---|---|---|---
`unsupported-storage-drivers` | `false` | Experimental | v2.3.0
`istio-virtual-service-ui` | `false` | Experimental | v2.3.0
`istio-virtual-service-ui` | `true` | GA | v2.3.2

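As noted above, a feature flag can also be set through the Rancher API after installation. The endpoint shape below is an assumption based on Rancher's `v3` API conventions, a sketch rather than a verified call - confirm the exact resource in your server's API browser before relying on it:

```
# Assumed endpoint; browse https://<rancher-server>/v3/features to verify
$ curl -su "token-xxxxx:<secret>" -X PUT \
    -H 'Content-Type: application/json' \
    -d '{"value": true}' \
    "https://<rancher-server>/v3/features/istio-virtual-service-ui"
```
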
# Enabling Features when Starting Rancher

@@ -38,21 +44,21 @@ When you install Rancher, enable the feature you want with a feature flag. The c

{{% tabs %}}
{{% tab "HA Install" %}}
When installing Rancher with a Helm chart, use the `--features` option. In the example below, two features are enabled by passing the feature flag names in a comma-separated list:
```
helm install rancher-latest/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set 'extraEnv[0].name=CATTLE_FEATURES' \
  --set 'extraEnv[0].value=<FEATURE-FLAG-NAME-1>=true,<FEATURE-FLAG-NAME-2>=true' # extraEnv available as of v2.3.0
```

### Rendering the Helm Chart for Air Gap Installations

For an air gap installation of Rancher, you need to add a Helm chart repository and render a Helm template before installing Rancher with Helm. For details, refer to the [air gap installation documentation.]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap/install-rancher)

Here is an example of a command for passing in the feature flag names when rendering the Helm template. In the example below, two features are enabled by passing the feature flag names in a comma-separated list:
```
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
```

@@ -63,16 +69,16 @@ helm template ./rancher-<VERSION>.tgz --output-dir . \

```
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true \
  --set 'extraEnv[0].name=CATTLE_FEATURES' \
  --set 'extraEnv[0].value=<FEATURE-FLAG-NAME-1>=true,<FEATURE-FLAG-NAME-2>=true'
```

The `systemDefaultRegistry` option (available as of v2.2.0) sets a default private registry to be used in Rancher; `useBundledSystemChart=true` (available as of v2.3.0) uses the packaged Rancher system charts; the `CATTLE_FEATURES` extraEnv settings are available as of v2.3.0.

{{% /tab %}}
{{% tab "Single Node Install" %}}
When installing Rancher with Docker, use the `--features` option. In the example below, two features are enabled by passing the feature flag names in a comma-separated list:
```
docker run -d -p 80:80 -p 443:443 \
  --restart=unless-stopped \
  rancher/rancher:rancher-latest \
  --features=<FEATURE-FLAG-NAME-1>=true,<FEATURE-FLAG-NAME-2>=true # Available as of v2.3.0
```
{{% /tab %}}
{{% /tabs %}}

@@ -4,13 +4,16 @@ weight: 2
---
_Available as of v2.3.0_

This feature enables a UI that lets you create, read, update, and delete virtual services and destination rules, which are traffic management features of Istio.

> **Prerequisite:** Turning on this feature does not enable Istio. A cluster administrator needs to [enable Istio for the cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup) in order to use the feature.

To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/feature-flags)

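For example, following the Docker single-node pattern from the feature-flags page, the flag can be passed at startup (a sketch; the image tag is illustrative):

```
$ docker run -d -p 80:80 -p 443:443 \
    --restart=unless-stopped \
    rancher/rancher:rancher-latest \
    --features=istio-virtual-service-ui=true
```
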
Environment Variable Key | Default Value | Status | Available as of
---|---|---|---
`istio-virtual-service-ui` | `false` | Experimental | v2.3.0
`istio-virtual-service-ui` | `true` | GA | v2.3.2

# About this Feature

@@ -7,7 +7,7 @@ _Available as of v2.3.0_

The RKE metadata feature allows you to provision clusters with new versions of Kubernetes as soon as they are released, without upgrading Rancher. This feature is useful for taking advantage of patch versions of Kubernetes, for example, if you want to upgrade to Kubernetes v1.14.7 when your Rancher server originally supported v1.14.6.

> **Note:** The Kubernetes API can change between minor versions. Therefore, we don't support introducing minor Kubernetes versions, such as introducing v1.15 when Rancher currently supports v1.14. You would need to upgrade Rancher to add support for minor Kubernetes versions.

Rancher's Kubernetes metadata contains information specific to the Kubernetes version that Rancher uses to provision [RKE clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). Rancher syncs the data periodically and creates custom resource definitions (CRDs) for **system images,** **service options** and **addon templates.** Consequently, when a new Kubernetes version is compatible with the Rancher server version, the Kubernetes metadata makes the new version available to Rancher for provisioning clusters. The metadata gives you an overview of the information that the [Rancher Kubernetes Engine]({{<baseurl>}}/rke/latest/en/) (RKE) uses for deploying various Kubernetes versions.

@@ -27,13 +27,13 @@ Administrators might configure the RKE metadata settings to do the following:

- Change the metadata URL that Rancher uses to sync the metadata, which is useful for air gap setups if you need to sync Rancher locally instead of with GitHub
- Prevent Rancher from auto-syncing the metadata, which is one way to prevent new and unsupported Kubernetes versions from being available in Rancher

### Refresh Kubernetes Metadata

The option to refresh the Kubernetes metadata is available by default for administrators, or for any user who has the **Manage Cluster Drivers** [global role.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/)

To force Rancher to refresh the Kubernetes metadata, a manual refresh action is available under **Tools > Drivers > Refresh Kubernetes Metadata** in the upper right corner.

### Configuring the Metadata Synchronization

> Only administrators can change these settings.

@@ -53,7 +53,7 @@ If you don't have an air gap setup, you don't need to specify the URL or Git bra

However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL and Git branch in the `rke-metadata-config` settings to point to the new location of the repository.

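The `rke-metadata-config` setting holds a small JSON document. The keys below are illustrative assumptions based on the URL and Git branch fields mentioned above - check the actual setting in your Rancher installation for the exact schema:

```
{
  "refresh-interval-minutes": "1440",
  "url": "https://<your-git-mirror>/kontainer-driver-metadata.git",
  "branch": "<your-branch>"
}
```
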
### Air Gap Setups

Rancher relies on a periodic refresh of the `rke-metadata-config` to download new Kubernetes version metadata if it is supported with the current version of the Rancher server. For a table of compatible Kubernetes and Rancher versions, refer to the [service terms section.](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.2.8/)

@@ -27,7 +27,7 @@ If you want to prevent a role from being assigned to users, you can set it to a

You can lock roles in two contexts:

- When you're [adding a custom role]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles/).
- When you're editing an existing role (see below).

1. From the **Global** view, select **Security** > **Roles**.

@@ -22,7 +22,7 @@ You might want to require new clusters to use a template to ensure that any clus

To require new clusters to use an RKE template, administrators can turn on RKE template enforcement with the following steps:

1. From the **Global** view, click the **Settings** tab.
1. Go to the `cluster-template-enforcement` setting. Click the vertical **Ellipsis (...)** and click **Edit.**
1. Set the value to **True** and click **Save.**

**Result:** All clusters provisioned by Rancher must use a template, unless the creator is an administrator.

@@ -32,7 +32,7 @@ To require new clusters to use an RKE template, administrators can turn on RKE t

To allow new clusters to be created without an RKE template, administrators can turn off RKE template enforcement with the following steps:

1. From the **Global** view, click the **Settings** tab.
1. Go to the `cluster-template-enforcement` setting. Click the vertical **Ellipsis (...)** and click **Edit.**
1. Set the value to **False** and click **Save.**

**Result:** When clusters are provisioned by Rancher, they don't need to use a template.