License
=======
Copyright (c) 2014-2019 [Rancher Labs, Inc.](https://rancher.com)

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
---
title: "K3s - 5 less than K8s"
shortTitle: K3s
date: 2019-02-05T09:52:46-07:00
name: "menu"
---

Great for:

* ARM
* Situations where a PhD in k8s clusterology is infeasible

# What is K3s?

K3s is a fully compliant Kubernetes distribution with the following enhancements:

* An embedded SQLite database has replaced etcd as the default datastore. External datastores such as PostgreSQL, MySQL, and etcd are also supported.
* Simple but powerful "batteries-included" features have been added, such as: a local storage provider, a service load balancer, a Helm controller, and the Traefik ingress controller.
* Operation of all Kubernetes control plane components is encapsulated in a single binary and process. This allows K3s to automate and manage complex cluster operations like distributing certificates.
* In-tree cloud providers and storage plugins have been removed.
* External dependencies have been minimized (just a modern kernel and cgroup mounts needed). K3s packages the required dependencies, including:
    * containerd
    * Flannel
    * CoreDNS
    * CNI
    * Host utilities (iptables, socat, etc.)

---
title: "Advanced Options and Configuration"
weight: 45
aliases:
  - /k3s/latest/en/running/
  - /k3s/latest/en/configuration/
---

This section contains advanced information describing the different ways you can run and manage K3s:

- [Auto-deploying manifests](#auto-deploying-manifests)
- [Using Docker as the container runtime](#using-docker-as-the-container-runtime)
- [Running K3s with RootlessKit (Experimental)](#running-k3s-with-rootlesskit-experimental)
- [Node labels and taints](#node-labels-and-taints)
- [Starting the server with the installation script](#starting-the-server-with-the-installation-script)
- [Additional preparation for Alpine Linux setup](#additional-preparation-for-alpine-linux-setup)
- [Running K3d (K3s in Docker) and docker-compose](#running-k3d-k3s-in-docker-and-docker-compose)

# Auto-Deploying Manifests

Any file found in `/var/lib/rancher/k3s/server/manifests` will automatically be deployed to Kubernetes in a manner similar to `kubectl apply`.

For information about deploying Helm charts, refer to the section about [Helm.](../helm)
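
As a minimal sketch, a plain Kubernetes manifest dropped into that directory is applied automatically (the file name and the resource below are illustrative, not taken from the K3s docs):

```yaml
# /var/lib/rancher/k3s/server/manifests/example-ns.yaml (hypothetical file name)
apiVersion: v1
kind: Namespace
metadata:
  name: example
```

K3s applies this in the same way `kubectl apply -f` would.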

# Using Docker as the Container Runtime

K3s includes and defaults to [containerd,](https://containerd.io/) an industry-standard container runtime. If you want to use Docker instead of containerd, run the agent with the `--docker` flag.

K3s generates a containerd configuration at `/var/lib/rancher/k3s/agent/etc/containerd/config.toml`. For advanced customization of this file, you can create another file called `config.toml.tmpl` in the same directory, and it will be used instead.

`config.toml.tmpl` is treated as a Go template, and the `config.Node` structure is passed to the template. See https://github.com/rancher/k3s/blob/master/pkg/agent/templates/templates.go#L16-L32 for an example of how to use the structure to customize the configuration file.
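
As a sketch of what such an override might look like: since the file is rendered as a Go template, static TOML is also valid template input (the section and setting below are illustrative assumptions, not K3s defaults):

```toml
# /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
# A static file is a valid Go template; fields from the passed config.Node
# structure could be interpolated here with {{ }} actions if needed.
[plugins.cri]
  stream_server_address = "127.0.0.1"
```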

# Running K3s with RootlessKit (Experimental)

> **Warning:** This feature is experimental.

RootlessKit is a Linux-native "fake root" utility, made mainly for [running Docker and Kubernetes as an unprivileged user,](https://github.com/rootless-containers/usernetes) so as to protect the real root on the host from potential container-breakout attacks.

Initial rootless support has been added, but there are a number of significant usability issues surrounding it. We are releasing the initial support for those interested in rootless so that, hopefully, some people can help improve its usability. First, ensure you have a proper setup and support for user namespaces. Refer to the [requirements section](https://github.com/rootless-containers/rootlesskit#setup) in RootlessKit for instructions. In short, the latest Ubuntu is your best bet for this to work.

### Known Issues with RootlessKit

* **Ports**

    When running rootless, a new network namespace is created. This means that the K3s instance runs with networking fairly detached from the host. The only way to access services running in K3s from the host is to set up port forwards to the K3s network namespace. K3s has a controller that automatically binds port 6443 and service ports below 1024 to the host with an offset of 10000.

    That means service port 80 will become 10080 on the host, but 8080 will become 8080, without any offset.

    Currently, only `LoadBalancer` services are automatically bound.
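
The port-offset rule above can be sketched as a small shell helper (the helper name is hypothetical; the actual binding is done by a controller inside K3s):

```shell
# offset_port: map a rootless K3s service port to the host port it is bound to.
# Port 6443 and service ports below 1024 are offset by 10000; others are bound as-is.
offset_port() {
  port="$1"
  if [ "$port" -lt 1024 ] || [ "$port" -eq 6443 ]; then
    echo $((port + 10000))
  else
    echo "$port"
  fi
}

offset_port 80    # service port 80 -> host port 10080
offset_port 8080  # service port 8080 -> host port 8080
```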

* **Daemon lifecycle**

    If you kill K3s and then start a new instance, it will create a new network namespace, but it doesn't kill the old pods. So you are left with a fairly broken setup. This is the main issue at the moment: how to deal with the network namespace.

    The issue is tracked at https://github.com/rootless-containers/rootlesskit/issues/65

* **Cgroups**

    Cgroups are not supported.

### Running Servers and Agents with Rootless

Add the `--rootless` flag to either the server or the agent. For example, run `k3s server --rootless`, then look for the message `Wrote kubeconfig [SOME PATH]` to find the location of your kubeconfig file.

For more information about setting up the kubeconfig file, refer to the [section about cluster access.](../cluster-access)

> Be careful: if you use `-o` to write the kubeconfig to a different directory, it will probably not work. This is because the K3s instance is running in a different mount namespace.

# Node Labels and Taints

K3s agents can be configured with the options `--node-label` and `--node-taint`, which add a label and taint to the kubelet. These two options only add labels and/or taints [at registration time,]({{<baseurl>}}/k3s/latest/en/installation/install-options/#node-labels-and-taints-for-agents) so they can only be set once and cannot be changed afterwards by running K3s commands.

If you want to change node labels and taints after node registration, use `kubectl`. Refer to the official Kubernetes documentation for details on how to add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) and [node labels.](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node)
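
For example, labels and a taint can be applied at registration time with agent flags (an illustrative invocation against a running cluster, mirroring the options described above):

```bash
k3s agent \
  --node-label foo=bar \
  --node-label hello=world \
  --node-taint key1=value1:NoExecute
```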

# Starting the Server with the Installation Script

The installation script will auto-detect whether your OS is using systemd or openrc and start the service accordingly.

When running with openrc, logs will be created at `/var/log/k3s.log`.

When running with systemd, logs will be created in `/var/log/syslog` and can be viewed using `journalctl -u k3s`.

An example of installing and auto-starting with the install script:

```bash
curl -sfL https://get.k3s.io | sh -
```

When running the server manually, you should get output similar to the following:

```
$ k3s server
INFO[2019-01-22T15:16:19.908493986-07:00] Starting k3s dev
INFO[2019-01-22T15:16:19.908934479-07:00] Running kube-apiserver --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false
INFO[2019-01-22T15:16:20.196880841-07:00] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false
Flag --port has been deprecated, see --secure-port instead.
INFO[2019-01-22T15:16:20.273441984-07:00] Listening on :6443
INFO[2019-01-22T15:16:20.278383446-07:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
INFO[2019-01-22T15:16:20.474454524-07:00] Node token is available at /var/lib/rancher/k3s/server/node-token
INFO[2019-01-22T15:16:20.474471391-07:00] To join node to cluster: k3s agent -s https://10.20.0.3:6443 -t ${NODE_TOKEN}
INFO[2019-01-22T15:16:20.541027133-07:00] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml
INFO[2019-01-22T15:16:20.541049100-07:00] Run: k3s kubectl
```

The output will likely be much longer, as the agent creates a lot of logs. By default the server will register itself as a node (that is, it runs the agent).

# Additional Preparation for Alpine Linux Setup

To set up Alpine Linux, you have to go through the following preparation:

Update **/etc/update-extlinux.conf** by adding:

```
default_kernel_opts="... cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
```

Then update the config and reboot:

```bash
update-extlinux
reboot
```

# Running K3d (K3s in Docker) and docker-compose

[k3d](https://github.com/rancher/k3d) is a utility designed to easily run K3s in Docker.

It can be installed via the [brew](https://brew.sh/) utility on macOS:

```
brew install k3d
```

`rancher/k3s` images are also available to run the K3s server and agent from Docker.

A `docker-compose.yml` in the root of the K3s repo serves as an example of how to run K3s from Docker. To run it with `docker-compose`, run:

    docker-compose up --scale agent=3
    # kubeconfig is written to current dir

    kubectl --kubeconfig kubeconfig.yaml get node

    NAME           STATUS   ROLES    AGE   VERSION
    497278a2d6a2   Ready    <none>   11s   v1.13.2-k3s2
    d54c8b17c055   Ready    <none>   11s   v1.13.2-k3s2
    db7a5a5a5bdd   Ready    <none>   12s   v1.13.2-k3s2

To run the agent only in Docker, use `docker-compose up agent`.

Alternatively, the `docker run` command can also be used:

    sudo docker run \
      -d --tmpfs /run \
      --tmpfs /var/run \
      -e K3S_URL=${SERVER_URL} \
      -e K3S_TOKEN=${NODE_TOKEN} \
      --privileged rancher/k3s:vX.Y.Z

---
title: Architecture
weight: 1
---

This page describes the architecture of a high-availability K3s server cluster and how it differs from a single-node server cluster.

It also describes how agent nodes are registered with K3s servers.

A server node is defined as a machine (bare-metal or virtual) running the `k3s server` command. A worker node is defined as a machine running the `k3s agent` command.

This page covers the following topics:

- [Single-server setup with an embedded database](#single-server-setup-with-an-embedded-db)
- [High-availability K3s server with an external database](#high-availability-k3s-server-with-an-external-db)
- [Fixed registration address for agent nodes](#fixed-registration-address-for-agent-nodes)
- [How agent node registration works](#how-agent-node-registration-works)

# Single-server Setup with an Embedded DB

The following diagram shows an example of a cluster that has a single-node K3s server with an embedded SQLite database.

In this configuration, each agent node is registered to the same server node. A K3s user can manipulate Kubernetes resources by calling the K3s API on the server node.

<figcaption>K3s Architecture with a Single Server</figcaption>


# High-Availability K3s Server with an External DB

Single-server clusters can meet a variety of use cases, but for environments where uptime of the Kubernetes control plane is critical, you can run K3s in an HA configuration. An HA K3s cluster consists of:

* Two or more **server nodes** that will serve the Kubernetes API and run other control plane services
* An **external datastore** (as opposed to the embedded SQLite datastore used in single-server setups)

<figcaption>K3s Architecture with a High-availability Server</figcaption>


### Fixed Registration Address for Agent Nodes

In the high-availability server configuration, each node must also register with the Kubernetes API by using a fixed registration address, as shown in the diagram below.

After registration, the agent nodes establish a connection directly to one of the server nodes.



# How Agent Node Registration Works

Agent nodes are registered via a websocket connection initiated by the `k3s agent` process, and the connection is maintained by a client-side load balancer running as part of the agent process.

Agents register with the server using the node cluster secret along with a randomly generated password for the node, stored at `/etc/rancher/node/password`. The server stores the passwords for individual nodes at `/var/lib/rancher/k3s/server/cred/node-passwd`, and any subsequent attempts must use the same password.

If the `/etc/rancher/node` directory of an agent is removed, the password file should be recreated for the agent, or the entry removed from the server.

A unique node ID can be appended to the hostname by launching K3s servers or agents with the `--with-node-id` flag.

---
title: "Building from Source"
weight: 10
---

This section provides information on building k3s from source.

See the [release](https://github.com/rancher/k3s/releases/latest) page for pre-built releases.

Cloning this repo will be much faster if you do a shallow clone:

    git clone --depth 1 https://github.com/rancher/k3s.git

This repo includes all of Kubernetes history, so `--depth 1` will avoid most of that.

To build the full release binary, run `make`; that will create `./dist/artifacts/k3s`.

Optionally, to build the binaries without running linting or building docker images:

```sh
./scripts/build && ./scripts/package-cli
```

For development, you just need go 1.12 and a sane GOPATH. To compile the binaries, run:

```bash
go build -o k3s
go build -o kubectl ./cmd/kubectl
go build -o hyperkube ./vendor/k8s.io/kubernetes/cmd/hyperkube
```

This will create the main executable, but it does not include dependencies like containerd, CNI, etc. To run a server and agent with all the dependencies for development, run the following helper scripts:

```bash
# Server
./scripts/dev-server.sh

# Agent
./scripts/dev-agent.sh
```

Kubernetes Source
-----------------

The source code for Kubernetes is in `vendor/`, and the location from which it is copied is in `./vendor.conf`. Go to the referenced repo/tag and you'll find all the patches applied to upstream Kubernetes.

---
title: Cluster Access
weight: 21
---

The kubeconfig file is used to configure access to the Kubernetes cluster. It must be set up properly in order to access the Kubernetes API, such as with `kubectl`, or to install applications with Helm. You may set the kubeconfig either by exporting the `KUBECONFIG` environment variable or by specifying a flag for `kubectl` and `helm`. Refer to the examples below for details.

Leverage the KUBECONFIG environment variable:

```
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get pods --all-namespaces
helm ls --all-namespaces
```

Or specify the location of the kubeconfig file per command:

```
kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get pods --all-namespaces
helm --kubeconfig /etc/rancher/k3s/k3s.yaml ls --all-namespaces
```

### Accessing the Cluster from Outside with kubectl

Copy `/etc/rancher/k3s/k3s.yaml` to a machine located outside the cluster as `~/.kube/config`. Then replace "localhost" with the IP or name of your K3s server. `kubectl` can now manage your K3s cluster.
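
One way to do the copy-and-edit step, assuming SSH access to the server (the user and hostnames below are placeholders):

```bash
# Copy the kubeconfig from the K3s server, then point it at the server's address.
scp user@my-k3s-server:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/localhost/my-k3s-server/' ~/.kube/config
kubectl get nodes
```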

---
title: "Configuration Info"
weight: 4
---

This section contains information on using k3s with various configurations.

Auto-Deploying Manifests
------------------------

Any file found in `/var/lib/rancher/k3s/server/manifests` will automatically be deployed to Kubernetes in a manner similar to `kubectl apply`.

It is also possible to deploy Helm charts. k3s supports a CRD controller for installing charts. A YAML file specification can look as follows (example taken from `/var/lib/rancher/k3s/server/manifests/traefik.yaml`):

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: stable/traefik
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
```

Keep in mind that the `namespace` in your HelmChart resource metadata section should always be `kube-system`, because the k3s deploy controller is configured to watch this namespace for new HelmChart resources. If you want to specify the namespace for the actual helm release, you can do that using the `targetNamespace` key in the spec section:

```
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: grafana
  namespace: kube-system
spec:
  chart: stable/grafana
  targetNamespace: monitoring
  set:
    adminPassword: "NotVerySafePassword"
  valuesContent: |-
    image:
      tag: master
    env:
      GF_EXPLORE_ENABLED: true
    adminUser: admin
    sidecar:
      datasources:
        enabled: true
```

Also note that besides `set` you can use `valuesContent` in the spec section, and it's okay to use both of them.

k3s versions <= v0.5.0 used `k3s.cattle.io` as the API group of HelmCharts; this has been changed to `helm.cattle.io` for later versions.

Accessing Cluster from Outside
------------------------------

Copy `/etc/rancher/k3s/k3s.yaml` to a machine located outside the cluster as `~/.kube/config`. Then replace "localhost" with the IP or name of your k3s server. `kubectl` can now manage your k3s cluster.

Open Ports / Network Security
-----------------------------

The server needs port 6443 to be accessible by the nodes. The nodes need to be able to reach other nodes over UDP port 8472, which is used for flannel VXLAN. If you don't use flannel and provide your own custom CNI, then port 8472 is not needed by k3s. Nodes should not listen on any other port. k3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel.

**IMPORTANT:** The VXLAN port on nodes should not be exposed to the world, as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disables access to port 8472.
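
As one illustrative way to enforce this on a node with iptables (the source subnet is a placeholder; adapt the rule to your own firewall tooling and node network):

```bash
# Drop VXLAN (flannel) traffic on UDP 8472 unless it comes from the node network.
# 10.0.0.0/8 is a placeholder for your cluster's node subnet.
iptables -A INPUT -p udp --dport 8472 ! -s 10.0.0.0/8 -j DROP
```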

Node Registration
-----------------

Agents register with the server using the node cluster secret along with a randomly generated password for the node, stored at `/var/lib/rancher/k3s/agent/node-password.txt`. The server stores the passwords for individual nodes at `/var/lib/rancher/k3s/server/cred/node-passwd`, and any subsequent attempts must use the same password. If the data directory of an agent is removed, the password file should be recreated for the agent, or the entry removed from the server.

Containerd and Docker
---------------------

k3s includes and defaults to containerd. Why? Because it's just plain better. If you want to run with Docker, first stop and think, "Really? Do I really want more headache?" If still yes, then you just need to run the agent with the `--docker` flag.

k3s will generate a config.toml for containerd in `/var/lib/rancher/k3s/agent/etc/containerd/config.toml`. For advanced customization of this file, you can create another file called `config.toml.tmpl` in the same directory, and it will be used instead.

The `config.toml.tmpl` file is treated as a Go template, and the `config.Node` structure is passed to the template. See https://github.com/rancher/k3s/blob/master/pkg/agent/templates/templates.go#L16-L32 for an example of how to use the structure to customize the configuration file.

Rootless
--------

_**WARNING**:_ Some advanced magic, user beware

Initial rootless support has been added, but there are a number of significant usability issues surrounding it. We are releasing the initial support for those interested in rootless so that, hopefully, some people can help improve its usability. First, ensure you have a proper setup and support for user namespaces. Refer to the [requirements section](https://github.com/rootless-containers/rootlesskit#setup) in RootlessKit for instructions. In short, the latest Ubuntu is your best bet for this to work.

**Issues w/ Rootless**:

* **Ports**

    When running rootless, a new network namespace is created. This means that the k3s instance runs with networking fairly detached from the host. The only way to access services running in k3s from the host is to set up port forwards to the k3s network namespace. k3s has a controller that automatically binds port 6443 and service ports below 1024 to the host with an offset of 10000.

    That means service port 80 will become 10080 on the host, but 8080 will become 8080, without any offset.

    Currently, only `LoadBalancer` services are automatically bound.

* **Daemon lifecycle**

    If you kill k3s and then start a new instance, it will create a new network namespace, but it doesn't kill the old pods. So you are left with a fairly broken setup. This is the main issue at the moment: how to deal with the network namespace.

    The issue is tracked at https://github.com/rootless-containers/rootlesskit/issues/65

* **Cgroups**

    Cgroups are not supported.

**Running w/ Rootless**:

Just add the `--rootless` flag to either server or agent. So run `k3s server --rootless`, then look for the message `Wrote kubeconfig [SOME PATH]` for where the kubeconfig to access your cluster is. Be careful: if you use `-o` to write the kubeconfig to a different directory, it will probably not work. This is because the k3s instance is running in a different mount namespace.

Node Labels and Taints
----------------------

k3s agents can be configured with the options `--node-label` and `--node-taint`, which add a set of labels and taints to the kubelet. These two options only add labels/taints at registration time, so they can only be set once and not changed after that. An example of adding new labels and a taint:

```
--node-label foo=bar \
--node-label hello=world \
--node-taint key1=value1:NoExecute
```
|
||||
|
||||
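The `--node-taint` value uses the Kubernetes `key=value:effect` syntax. As a quick illustration of how such a string decomposes (pure shell parameter expansion, not k3s code):

```sh
taint="key1=value1:NoExecute"
key="${taint%%=*}"                       # everything before the first '='
value="${taint#*=}"; value="${value%%:*}" # between '=' and ':'
effect="${taint##*:}"                     # everything after the last ':'
echo "key=$key value=$value effect=$effect"
```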
Flannel
-------

Flannel is included by default. If you don't want flannel, run the agent with the `--no-flannel` option.

In this setup you will still be required to install your own CNI driver. More info [here](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network).

CoreDNS
-------

CoreDNS is deployed when the agent starts. To disable it, run the server with the `--no-deploy coredns` option.

If you don't install CoreDNS, you will need to install a cluster DNS provider yourself.

Traefik
-------

Traefik is deployed by default when starting the server; to disable it, start the server with the `--no-deploy traefik` option.

Service Load Balancer
---------------------

k3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a node in the cluster on which port 80 is free. If no such port is available, the load balancer will stay in `Pending`.

To disable the embedded load balancer, run the server with the `--no-deploy servicelb` option. This is necessary if you wish to run a different load balancer, such as MetalLB.

Metrics Server
--------------

To enable commands such as `k3s kubectl top nodes`, metrics-server must be installed. For installation instructions, see https://github.com/kubernetes-incubator/metrics-server/.

**NOTE:** By default the image used in `metrics-server-deployment.yaml` is valid only for **amd64** devices; edit it as appropriate for your architecture. As of this writing, metrics-server provides the following images relevant to k3s: `amd64:v0.3.3`, `arm64:v0.3.2`, and `arm:v0.3.2`. Further information on the images provided through gcr.io can be found at https://console.cloud.google.com/gcr/images/google-containers/GLOBAL.

Storage Backends
----------------

As of version 0.6.0, k3s supports various storage backends, including SQLite (the default), MySQL, Postgres, and etcd. This is controlled by the following arguments, which can be passed to `k3s server`:

* `--storage-backend` _value_

  Specify storage type, etcd3 or kvsql [$`K3S_STORAGE_BACKEND`]

* `--storage-endpoint` _value_

  Specify etcd, MySQL, Postgres, or SQLite (default) data source name [$`K3S_STORAGE_ENDPOINT`]

* `--storage-cafile` _value_

  SSL Certificate Authority file used to secure storage backend communication [$`K3S_STORAGE_CAFILE`]

* `--storage-certfile` _value_

  SSL certificate file used to secure storage backend communication [$`K3S_STORAGE_CERTFILE`]

* `--storage-keyfile` _value_

  SSL key file used to secure storage backend communication [$`K3S_STORAGE_KEYFILE`]

### MySQL

To use k3s with the MySQL storage backend, you can specify the following for an insecure connection:

```
--storage-endpoint="mysql://"
```

By default the server will attempt to connect to MySQL via the socket at `/var/run/mysqld/mysqld.sock` using the `root` user with no password. k3s will also create a database named `kubernetes` if no database is specified in the DSN.

To override the method of connection, user/pass, and database name, you can provide a custom DSN, for example:

```
--storage-endpoint="mysql://k3suser:k3spass@tcp(192.168.1.100:3306)/k3stest"
```

This command will attempt to connect to MySQL on host `192.168.1.100`, port `3306`, with username `k3suser` and password `k3spass`. k3s will automatically create a new database named `k3stest` if it doesn't exist. For more information about the MySQL driver data source name, please refer to https://github.com/go-sql-driver/mysql#dsn-data-source-name

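As a sketch, the DSN above can be assembled from its parts, which makes the structure of the format explicit (all values here are placeholders):

```sh
user=k3suser; pass=k3spass; host=192.168.1.100; port=3306; db=k3stest
dsn="mysql://${user}:${pass}@tcp(${host}:${port})/${db}"
echo "$dsn"
```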
To connect to MySQL securely, you can use the following example:

```
--storage-endpoint="mysql://k3suser:k3spass@tcp(192.168.1.100:3306)/k3stest" \
--storage-cafile ca.crt \
--storage-certfile mysql.crt \
--storage-keyfile mysql.key
```

The above command will use these certificates to generate the TLS config to communicate with MySQL securely.

### Postgres

Connection to Postgres can be established using the following command:

```
--storage-endpoint="postgres://"
```

By default the server will attempt to connect to Postgres on localhost using the `postgres` user with the password `postgres`. k3s will also create a database named `kubernetes` if no database is specified in the DSN.

To override the method of connection, user/pass, and database name, you can provide a custom DSN, for example:

```
--storage-endpoint="postgres://k3suser:k3spass@192.168.1.100:5432/k3stest"
```

This command will attempt to connect to Postgres on host `192.168.1.100`, port `5432`, with username `k3suser` and password `k3spass`. k3s will automatically create a new database named `k3stest` if it doesn't exist. For more information about the Postgres driver data source name, please refer to https://godoc.org/github.com/lib/pq

To connect to Postgres securely, you can use the following example:

```
--storage-endpoint="postgres://k3suser:k3spass@192.168.1.100:5432/k3stest" \
--storage-certfile postgres.crt \
--storage-keyfile postgres.key \
--storage-cafile ca.crt
```

The above command will use these certificates to generate the TLS config to communicate with Postgres securely.

### etcd

Connection to etcd3 can be established using the following command:

```
--storage-backend=etcd3 \
--storage-endpoint="https://127.0.0.1:2379"
```

The above command will attempt to connect to etcd on localhost at port `2379` without client certificates. You can connect securely to etcd using the following command:

```
--storage-backend=etcd3 \
--storage-endpoint="https://127.0.0.1:2379" \
--storage-cafile ca.crt \
--storage-certfile etcd.crt \
--storage-keyfile etcd.key
```

The above command will use these certificates to generate the TLS config to communicate with etcd securely.

---
title: FAQ
weight: 60
---

The FAQ is updated periodically and designed to answer the questions our users most frequently ask about K3s.

**Is K3s a suitable replacement for k8s?**

K3s is capable of nearly everything k8s can do. It is just a more lightweight version. See the [main]({{<baseurl>}}/k3s/latest/en/) docs page for more details.

**How can I use my own Ingress instead of Traefik?**

Simply start K3s server with `--no-deploy=traefik` and deploy your own ingress controller.

**Does K3s support Windows?**

At this time K3s does not natively support Windows; however, we are open to the idea in the future.

**How can I build from source?**

Please refer to the K3s [BUILDING.md](https://github.com/rancher/k3s/blob/master/BUILDING.md) for instructions.
---
title: Helm
weight: 42
---

K3s release _v1.17.0+k3s.1_ added support for Helm 3. You can access the Helm 3 documentation [here](https://helm.sh/docs/intro/quickstart/).

Helm is the package management tool of choice for Kubernetes. Helm charts provide templating syntax for Kubernetes YAML manifest documents. With Helm we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at https://helm.sh/.

K3s does not require any special configuration to start using Helm 3. Just be sure you have properly set up your kubeconfig as per the section about [cluster access](../cluster-access).

This section covers the following topics:

- [Upgrading Helm](#upgrading-helm)
- [Deploying manifests and Helm charts](#deploying-manifests-and-helm-charts)
- [Using the Helm CRD](#using-the-helm-crd)

### Upgrading Helm

If you were using Helm v2 in previous versions of K3s, you may upgrade to v1.17.0+k3s.1 or newer and Helm 2 will still function. If you wish to migrate to Helm 3, [this](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) blog post by Helm explains how to use a plugin to migrate successfully. Refer to the official Helm 3 documentation [here](https://helm.sh/docs/) for more information. K3s will handle either Helm v2 or Helm v3 as of v1.17.0+k3s.1. Just be sure you have properly set up your kubeconfig as per the examples in the section about [cluster access](../cluster-access).

Note that Helm 3 no longer requires Tiller or the `helm init` command. Refer to the official documentation for details.

### Deploying Manifests and Helm Charts

Any file found in `/var/lib/rancher/k3s/server/manifests` will automatically be deployed to Kubernetes in a manner similar to `kubectl apply`.

It is also possible to deploy Helm charts. K3s supports a CRD controller for installing charts. A YAML file specification can look as follows (example taken from `/var/lib/rancher/k3s/server/manifests/traefik.yaml`):

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: stable/traefik
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
```

|
||||
Keep in mind that `namespace` in your HelmChart resource metadata section should always be `kube-system`, because the K3s deploy controller is configured to watch this namespace for new HelmChart resources. If you want to specify the namespace for the actual Helm release, you can do that using `targetNamespace` key under the `spec` directive, as shown in the configuration example below.
|
||||
|
||||
> **Note:** In order for the Helm Controller to know which version of Helm to use to Auto-Deploy a helm app, please specify the `helmVersion` in the spec of your YAML file.
|
||||
|
||||
Also note that besides `set`, you can use `valuesContent` under the `spec` directive. And it's okay to use both of them:
|
||||
|
||||
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: grafana
  namespace: kube-system
spec:
  chart: stable/grafana
  targetNamespace: monitoring
  set:
    adminPassword: "NotVerySafePassword"
  valuesContent: |-
    image:
      tag: master
    env:
      GF_EXPLORE_ENABLED: true
    adminUser: admin
    sidecar:
      datasources:
        enabled: true
```

K3s versions `<= v0.5.0` used `k3s.cattle.io` for the API group of HelmCharts. This has been changed to `helm.cattle.io` for later versions.

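As a sketch of the auto-deploy flow described above, a HelmChart manifest can be written into the manifests directory with a heredoc. The chart name here is just an example, and the sketch writes to `/tmp` rather than the live `/var/lib/rancher/k3s/server/manifests` directory so it is harmless to run anywhere:

```sh
# On a real server you would write to /var/lib/rancher/k3s/server/manifests/ instead.
cat > /tmp/redis-helmchart.yaml <<'EOF'
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: redis
  namespace: kube-system
spec:
  chart: stable/redis
  targetNamespace: default
EOF
echo "wrote $(wc -l < /tmp/redis-helmchart.yaml) lines"
```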
### Using the Helm CRD

You can deploy a third-party Helm chart using an example like this:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nginx
  namespace: kube-system
spec:
  chart: nginx
  repo: https://charts.bitnami.com/bitnami
  targetNamespace: default
```

You can install a specific version of a Helm chart using an example like this:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nginx-ingress
  namespace: kube-system
spec:
  chart: stable/nginx-ingress
  version: 1.24.4
  targetNamespace: default
```

---
title: "Installation Options"
weight: 2
---

This section contains information on the flags and environment variables used when starting a K3s cluster. Please ensure you have met the [Node Requirements]({{< baseurl >}}/k3s/latest/en/installation/node-requirements/) before you begin installing K3s.

Related installation topics are covered elsewhere in this documentation:

- [High Availability with an External DB]({{< baseurl >}}/k3s/latest/en/installation/ha/) details how to set up an HA K3s cluster backed by an external datastore such as MySQL, PostgreSQL, or etcd.
- [High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded/) details how to set up an HA K3s cluster that leverages a built-in distributed database.
- [Air-Gap Installation]({{< baseurl >}}/k3s/latest/en/installation/airgap/) details how to set up K3s in environments that do not have direct access to the Internet.

Install Script
--------------

The install script will attempt to download the latest release. To specify a particular version for download, use the `INSTALL_K3S_VERSION` environment variable, for example:

```sh
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z-rc1 sh -
```

To install just the server without an agent, add the `INSTALL_K3S_EXEC` environment variable to the command:

```sh
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable-agent" sh -
```

The installer can also be run without performing downloads by setting `INSTALL_K3S_SKIP_DOWNLOAD=true`, for example:

```sh
curl -sfL https://github.com/rancher/k3s/releases/download/vX.Y.Z/k3s -o /usr/local/bin/k3s
chmod 0755 /usr/local/bin/k3s

curl -sfL https://get.k3s.io -o install-k3s.sh
chmod 0755 install-k3s.sh

export INSTALL_K3S_SKIP_DOWNLOAD=true
./install-k3s.sh
```

The full help text for the install script environment variables is as follows:

- `K3S_*`

  Environment variables which begin with `K3S_` will be preserved for the systemd service to use. Setting `K3S_URL` without explicitly setting a systemd exec command will default the command to "agent", and we enforce that `K3S_TOKEN` or `K3S_CLUSTER_SECRET` is also set.

- `INSTALL_K3S_SKIP_DOWNLOAD`

  If set to true, will not download the k3s hash or binary.

- `INSTALL_K3S_SYMLINK`

  If set to 'skip', will not create symlinks; 'force' will overwrite; by default a symlink is created if the command does not exist in the path.

- `INSTALL_K3S_VERSION`

  Version of k3s to download from GitHub. Will attempt to download the latest version if not specified.

- `INSTALL_K3S_BIN_DIR`

  Directory to install the k3s binary, links, and uninstall script to, or use /usr/local/bin as the default.

- `INSTALL_K3S_SYSTEMD_DIR`

  Directory to install systemd service and environment files to, or use /etc/systemd/system as the default.

- `INSTALL_K3S_EXEC` or script arguments

  Command with flags to use for launching k3s in the systemd service. If the command is not specified, it will default to "agent" if `K3S_URL` is set, or "server" if not. The final systemd command resolves to a combination of EXEC and script args ($@).

  The following commands result in the same behavior:
  ```sh
  curl ... | INSTALL_K3S_EXEC="--disable-agent" sh -s -
  curl ... | INSTALL_K3S_EXEC="server --disable-agent" sh -s -
  curl ... | INSTALL_K3S_EXEC="server" sh -s - --disable-agent
  curl ... | sh -s - server --disable-agent
  curl ... | sh -s - --disable-agent
  ```

- `INSTALL_K3S_NAME`

  Name of the systemd service to create; will default from the k3s exec command if not specified. If specified, the name will be prefixed with 'k3s-'.

- `INSTALL_K3S_TYPE`

  Type of systemd service to create; will default from the k3s exec command if not specified.

Server Options
--------------

The following information on server options is also available through `k3s server --help`:

* `--bind-address` _value_

  k3s bind address (default: localhost)

* `--https-listen-port` _value_

  HTTPS listen port (default: 6443)

* `--http-listen-port` _value_

  HTTP listen port (for /healthz, HTTPS redirect, and port for TLS terminating LB) (default: 0)

* `--data-dir` _value_, `-d` _value_

  Folder to hold state (default: /var/lib/rancher/k3s, or ${HOME}/.rancher/k3s if not root)

* `--disable-agent`

  Do not run a local agent and register a local kubelet

* `--log` _value_, `-l` _value_

  Log to file

* `--cluster-cidr` _value_

  Network CIDR to use for pod IPs (default: "10.42.0.0/16")

* `--cluster-secret` _value_

  Shared secret used to bootstrap a cluster [$`K3S_CLUSTER_SECRET`]

* `--service-cidr` _value_

  Network CIDR to use for service IPs (default: "10.43.0.0/16")

* `--cluster-dns` _value_

  Cluster IP for the CoreDNS service; should be in your service CIDR range

* `--cluster-domain` _value_

  Cluster domain (default: "cluster.local")

* `--no-deploy` _value_

  Do not deploy packaged components (valid items: coredns, servicelb, traefik)

* `--write-kubeconfig` _value_, `-o` _value_

  Write kubeconfig for the admin client to this file [$`K3S_KUBECONFIG_OUTPUT`]

* `--write-kubeconfig-mode` _value_

  Write kubeconfig with this mode [$`K3S_KUBECONFIG_MODE`]

* `--tls-san` _value_

  Add additional hostname or IP as a Subject Alternative Name in the TLS cert

* `--kube-apiserver-arg` _value_

  Customized flag for the kube-apiserver process

* `--kube-scheduler-arg` _value_

  Customized flag for the kube-scheduler process

* `--kube-controller-arg` _value_

  Customized flag for the kube-controller-manager process

* `--rootless`

  (experimental) Run rootless

* `--storage-backend` _value_

  Specify storage type, etcd3 or kvsql [$`K3S_STORAGE_BACKEND`]

* `--storage-endpoint` _value_

  Specify etcd, MySQL, Postgres, or SQLite (default) data source name [$`K3S_STORAGE_ENDPOINT`]

* `--storage-cafile` _value_

  SSL Certificate Authority file used to secure storage backend communication [$`K3S_STORAGE_CAFILE`]

* `--storage-certfile` _value_

  SSL certificate file used to secure storage backend communication [$`K3S_STORAGE_CERTFILE`]

* `--storage-keyfile` _value_

  SSL key file used to secure storage backend communication [$`K3S_STORAGE_KEYFILE`]

* `--node-ip` _value_, `-i` _value_

  (agent) IP address to advertise for the node

* `--node-name` _value_

  (agent) Node name [$`K3S_NODE_NAME`]

* `--docker`

  (agent) Use docker instead of containerd

* `--no-flannel`

  (agent) Disable embedded flannel

* `--flannel-iface` _value_

  (agent) Override the default flannel interface

* `--container-runtime-endpoint` _value_

  (agent) Disable embedded containerd and use an alternative CRI implementation

* `--pause-image` _value_

  (agent) Customized pause image for the containerd sandbox

* `--resolv-conf` _value_

  (agent) Kubelet resolv.conf file [$`K3S_RESOLV_CONF`]

* `--kubelet-arg` _value_

  (agent) Customized flag for the kubelet process

* `--kube-proxy-arg` _value_

  (agent) Customized flag for the kube-proxy process

* `--node-label` _value_

  (agent) Register kubelet with this set of labels

* `--node-taint` _value_

  (agent) Register kubelet with this set of taints

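Putting several of these flags together, a server invocation might look like the following. This is only an illustrative sketch; the hostname and paths are placeholders, not defaults:

```sh
k3s server \
  --write-kubeconfig /root/.kube/config \
  --write-kubeconfig-mode 644 \
  --no-deploy traefik \
  --tls-san k3s.example.com
```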
Agent Options
-------------

The following information on agent options is also available through `k3s agent --help`:

* `--token` _value_, `-t` _value_

  Token to use for authentication [$`K3S_TOKEN`]

* `--token-file` _value_

  Token file to use for authentication [$`K3S_TOKEN_FILE`]

* `--server` _value_, `-s` _value_

  Server to connect to [$`K3S_URL`]

* `--data-dir` _value_, `-d` _value_

  Folder to hold state (default: "/var/lib/rancher/k3s")

* `--cluster-secret` _value_

  Shared secret used to bootstrap a cluster [$`K3S_CLUSTER_SECRET`]

* `--rootless`

  (experimental) Run rootless

* `--docker`

  (agent) Use docker instead of containerd

* `--no-flannel`

  (agent) Disable embedded flannel

* `--flannel-iface` _value_

  (agent) Override the default flannel interface

* `--node-name` _value_

  (agent) Node name [$`K3S_NODE_NAME`]

* `--node-ip` _value_, `-i` _value_

  (agent) IP address to advertise for the node

* `--container-runtime-endpoint` _value_

  (agent) Disable embedded containerd and use an alternative CRI implementation

* `--pause-image` _value_

  (agent) Customized pause image for the containerd sandbox

* `--resolv-conf` _value_

  (agent) Kubelet resolv.conf file [$`K3S_RESOLV_CONF`]

* `--kubelet-arg` _value_

  (agent) Customized flag for the kubelet process

* `--kube-proxy-arg` _value_

  (agent) Customized flag for the kube-proxy process

* `--node-label` _value_

  (agent) Register kubelet with this set of labels

* `--node-taint` _value_

  (agent) Register kubelet with this set of taints

Customizing Components
----------------------

As of v0.3.0, any of the following processes can be customized with extra flags:

* `--kube-apiserver-arg` _value_

  (server) [kube-apiserver options](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/)

* `--kube-controller-arg` _value_

  (server) [kube-controller-manager options](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/)

* `--kube-scheduler-arg` _value_

  (server) [kube-scheduler options](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/)

* `--kubelet-arg` _value_

  (agent) [kubelet options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/)

* `--kube-proxy-arg` _value_

  (agent) [kube-proxy options](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/)

Extra arguments can be added by passing these flags to the server or agent. For example, to add the arguments `-v=9` and `log-file=/tmp/kubeapi.log` to the kube-apiserver, you would add the following options to `k3s server`:

```
--kube-apiserver-arg v=9 --kube-apiserver-arg log-file=/tmp/kubeapi.log
```

### Uninstalling

If you installed K3s with the help of the `install.sh` script, an uninstall script is generated during installation. The script is created on your node at `/usr/local/bin/k3s-uninstall.sh` (or as `k3s-agent-uninstall.sh` for agents).

---
title: "Air-Gap Install"
weight: 60
---

In this guide, we assume you have created your nodes in your air-gap environment and have a secure Docker private registry on your bastion server.

# Installation Outline

1. [Prepare Images Directory](#prepare-images-directory)
2. [Create Registry YAML](#create-registry-yaml)
3. [Install K3s](#install-k3s)

### Prepare Images Directory
Obtain the images tar file for your architecture from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be running.

Place the tar file in the `images` directory before starting K3s on each node, for example:

```sh
sudo mkdir -p /var/lib/rancher/k3s/agent/images/
sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/
```

### Create Registry YAML
Create the registries.yaml file at `/etc/rancher/k3s/registries.yaml`. This will tell K3s the details it needs to connect to your private registry.

The registries.yaml file should look like this before plugging in the necessary information:

```yaml
---
mirrors:
  customreg:
    endpoint:
      - "https://ip-to-server:5000"
configs:
  customreg:
    auth:
      username: xxxxxx # this is the registry username
      password: xxxxxx # this is the registry password
    tls:
      cert_file: <path to the cert file used in the registry>
      key_file: <path to the key file used in the registry>
      ca_file: <path to the ca file used in the registry>
```

Note that at this time only secure registries are supported with K3s (SSL with a custom CA).

### Install K3s

Obtain the K3s binary from the [releases](https://github.com/rancher/k3s/releases) page, matching the same version used to get the airgap images tar.
Also obtain the K3s install script at https://get.k3s.io

Place the binary in `/usr/local/bin` on each node.
Place the install script anywhere on each node, and name it `install.sh`.

Install K3s on each server:

```sh
INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh
```

Install K3s on each agent:

```sh
INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken ./install.sh
```

Take care to replace `myserver` with the IP or a valid DNS name of the server, and replace `mynodetoken` with the node token from the server.
The node token is on the server at `/var/lib/rancher/k3s/server/node-token`.

> **Note:** K3s additionally provides a `--resolv-conf` flag for kubelets, which may help with configuring DNS in air-gap networks.

# Upgrading

Upgrading an air-gap environment can be accomplished in the following manner:

1. Download the new air-gap images (tar file) from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be upgrading to. Place the tar in the `/var/lib/rancher/k3s/agent/images/` directory on each node. Delete the old tar file.
2. Copy and replace the old K3s binary in `/usr/local/bin` on each node. Copy over the install script at https://get.k3s.io (as it is possible it has changed since the last release). Run the script again just as you have in the past, with the same environment variables.
3. Restart the K3s service (if it is not restarted automatically by the installer).

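The steps above can be sketched as a per-node helper script. This is a hypothetical sketch, not an official upgrade script; the version and architecture values are placeholders, and it assumes the new tar and binary are in the current directory:

```sh
ARCH="amd64"   # placeholder; match your nodes' architecture

# Step 1: replace the old air-gap images tar with the new one.
sudo rm -f /var/lib/rancher/k3s/agent/images/k3s-airgap-images-*.tar
sudo cp "./k3s-airgap-images-${ARCH}.tar" /var/lib/rancher/k3s/agent/images/

# Step 2: replace the binary and re-run the (freshly downloaded) install script.
sudo cp ./k3s /usr/local/bin/k3s && sudo chmod 0755 /usr/local/bin/k3s
INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh

# Step 3: restart the service if the installer did not.
sudo systemctl restart k3s
```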
---
title: "Cluster Datastore Options"
weight: 50
---

The ability to run Kubernetes using a datastore other than etcd sets K3s apart from other Kubernetes distributions. This feature provides flexibility to Kubernetes operators. The available datastore options allow you to select a datastore that best fits your use case. For example:

* If your team doesn't have expertise in operating etcd, you can choose an enterprise-grade SQL database like MySQL or PostgreSQL
* If you need to run a simple, short-lived cluster in your CI/CD environment, you can use the embedded SQLite database
* If you wish to deploy Kubernetes at the edge and require a highly available solution but can't afford the operational overhead of managing a database at the edge, you can use K3s's embedded HA datastore built on top of DQLite (currently experimental)

K3s supports the following datastore options:

* Embedded [SQLite](https://www.sqlite.org/index.html)
* [PostgreSQL](https://www.postgresql.org/) (certified against versions 10.7 and 11.5)
* [MySQL](https://www.mysql.com/) (certified against version 5.7)
* [etcd](https://etcd.io/) (certified against version 3.3.15)
* Embedded [DQLite](https://dqlite.io/) for high availability (experimental)

### External Datastore Configuration Parameters
If you wish to use an external datastore such as PostgreSQL, MySQL, or etcd, you must set the `datastore-endpoint` parameter so that K3s knows how to connect to it. You may also specify parameters to configure the authentication and encryption of the connection. The table below summarizes these parameters, which can be passed as either CLI flags or environment variables.

CLI Flag | Environment Variable | Description
------------|-------------|------------------
<span style="white-space: nowrap">`--datastore-endpoint`</span> | `K3S_DATASTORE_ENDPOINT` | Specify a PostgreSQL, MySQL, or etcd connection string. This is a string used to describe the connection to the datastore. The structure of this string is specific to each backend and is detailed below.
<span style="white-space: nowrap">`--datastore-cafile`</span> | `K3S_DATASTORE_CAFILE` | TLS Certificate Authority (CA) file used to help secure communication with the datastore. If your datastore serves requests over TLS using a certificate signed by a custom certificate authority, you can specify that CA using this parameter so that the K3s client can properly verify the certificate.
<span style="white-space: nowrap">`--datastore-certfile`</span> | `K3S_DATASTORE_CERTFILE` | TLS certificate file used for client certificate based authentication to your datastore. To use this feature, your datastore must be configured to support client certificate based authentication. If you specify this parameter, you must also specify the `datastore-keyfile` parameter.
<span style="white-space: nowrap">`--datastore-keyfile`</span> | `K3S_DATASTORE_KEYFILE` | TLS key file used for client certificate based authentication to your datastore. See the previous `datastore-certfile` parameter for more details.

As a best practice, we recommend setting these parameters as environment variables rather than command line arguments so that your database credentials or other sensitive information aren't exposed as part of the process info.

### Datastore Endpoint Format and Functionality
|
||||
As mentioned, the format of the value passed to the `datastore-endpoint` parameter is dependent upon the datastore backend. The following details this format and functionality for each supported external datastore.
|
||||
|
||||
{{% tabs %}}
{{% tab "PostgreSQL" %}}

In its most common form, the `datastore-endpoint` parameter for PostgreSQL has the following format:

`postgres://username:password@hostname:port/database-name`

More advanced configuration parameters are available. For more information on these, please see https://godoc.org/github.com/lib/pq.
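For instance, one of those lib/pq parameters is `sslmode`. The sketch below assembles an endpoint that enforces certificate verification; the credentials and hostname are placeholders, not values this guide prescribes:

```shell
# Hypothetical credentials and host; sslmode=verify-full is a lib/pq connection parameter
DB_USER='k3s'
DB_PASS='changeme'
DB_HOST='db.example.com'
K3S_DATASTORE_ENDPOINT="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/k3s?sslmode=verify-full"
echo "$K3S_DATASTORE_ENDPOINT"
```

Per the best practice above, export the assembled value in the environment of the `k3s server` process rather than passing it on the command line.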
If you specify a database name and it does not exist, the server will attempt to create it.

If you only supply `postgres://` as the endpoint, K3s will attempt to do the following:

* Connect to localhost using `postgres` as the username and password
* Create a database named `kubernetes`

{{% /tab %}}
{{% tab "MySQL" %}}

In its most common form, the `datastore-endpoint` parameter for MySQL has the following format:

`mysql://username:password@tcp(hostname:3306)/database-name`

More advanced configuration parameters are available. For more information on these, please see https://github.com/go-sql-driver/mysql#dsn-data-source-name

Note that due to a [known issue](https://github.com/rancher/k3s/issues/1093) in K3s, you cannot set the `tls` parameter. TLS communication is supported, but you cannot, for example, set this parameter to "skip-verify" to cause K3s to skip certificate verification.

If you specify a database name and it does not exist, the server will attempt to create it.

If you only supply `mysql://` as the endpoint, K3s will attempt to do the following:

* Connect to the MySQL socket at `/var/run/mysqld/mysqld.sock` using the `root` user and no password
* Create a database with the name `kubernetes`

{{% /tab %}}
{{% tab "etcd" %}}

In its most common form, the `datastore-endpoint` parameter for etcd has the following format:

`https://etcd-host-1:2379,https://etcd-host-2:2379,https://etcd-host-3:2379`

The above assumes a typical three-node etcd cluster. The parameter can accept one or more comma-separated etcd URLs.
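Putting this together, launching a server against such a cluster might look like the following sketch. The hostnames and CA path are placeholders, and `K3S_DATASTORE_CAFILE` is only needed if your etcd serves TLS with a custom certificate authority:

```
K3S_DATASTORE_ENDPOINT='https://etcd-host-1:2379,https://etcd-host-2:2379,https://etcd-host-3:2379' \
K3S_DATASTORE_CAFILE='/path/to/etcd-ca.crt' \
k3s server
```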
{{% /tab %}}
{{% /tabs %}}
<br/>Based on the above, the following example command could be used to launch a server instance that connects to a PostgreSQL database named `k3s`:

```
K3S_DATASTORE_ENDPOINT='postgres://username:password@hostname:5432/k3s' k3s server
```

And the following example could be used to connect to a MySQL database using client certificate authentication:

```
K3S_DATASTORE_ENDPOINT='mysql://username:password@tcp(hostname:3306)/k3s' \
K3S_DATASTORE_CERTFILE='/path/to/client.crt' \
K3S_DATASTORE_KEYFILE='/path/to/client.key' \
k3s server
```
### Embedded DQLite for HA (Experimental)

K3s's use of DQLite is similar to its use of SQLite. It is simple to set up and manage. As such, there is no external configuration or additional steps to take in order to use this option. Please see [High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded/) for instructions on how to run with this option.
@@ -0,0 +1,22 @@
---
title: "High Availability with Embedded DB (Experimental)"
weight: 40
---
As of v1.0.0, K3s is previewing support for running a highly available control plane without the need for an external database. This means there is no need to manage an external etcd or SQL datastore in order to run a reliable production-grade setup. While this feature is currently experimental, we expect it to be the primary architecture for running HA K3s clusters in the future.

This architecture is achieved by embedding a DQLite database within the K3s server process. DQLite is short for "distributed SQLite." According to https://dqlite.io, it is "*a fast, embedded, persistent SQL database with Raft consensus that is perfect for fault-tolerant IoT and Edge devices.*" This makes it a natural fit for K3s.

To run K3s in this mode, you must have an odd number of server nodes. We recommend starting with three nodes.

To get started, first launch a server node with the `cluster-init` flag to enable clustering, and a token that will be used as a shared secret to join additional servers to the cluster.
```
K3S_TOKEN=SECRET k3s server --cluster-init
```

After launching the first server, join the second and third servers to the cluster using the shared secret:
```
K3S_TOKEN=SECRET k3s server --server https://<ip or hostname of server1>:6443
```

Now you have a highly available control plane. Joining additional worker nodes to the cluster follows the same procedure as for a single-server cluster.
@@ -0,0 +1,71 @@
---
title: High Availability with an External DB
weight: 30
---

>**Note:** Official support for installing Rancher on a Kubernetes cluster was introduced in our v1.0.0 release.
This section describes how to install a high-availability K3s cluster with an external database.

Single-server clusters can meet a variety of use cases, but for environments where uptime of the Kubernetes control plane is critical, you can run K3s in an HA configuration. An HA K3s cluster is comprised of:

* Two or more **server nodes** that will serve the Kubernetes API and run other control plane services
* Zero or more **agent nodes** that are designated to run your apps and services
* An **external datastore** (as opposed to the embedded SQLite datastore used in single-server setups)
* A **fixed registration address** that is placed in front of the server nodes to allow agent nodes to register with the cluster

For more details on how these components work together, refer to the [architecture section.]({{<baseurl>}}/k3s/latest/en/architecture/#high-availability-with-an-external-db)
Agents register through the fixed registration address, but after registration they establish a connection directly to one of the server nodes. This is a websocket connection initiated by the `k3s agent` process, and it is maintained by a client-side load balancer running as part of the agent process.

# Installation Outline

Setting up an HA cluster requires the following steps:

1. [Create an external datastore](#1-create-an-external-datastore)
2. [Launch server nodes](#2-launch-server-nodes)
3. [Configure the fixed registration address](#3-configure-the-fixed-registration-address)
4. [Join agent nodes](#4-optional-join-agent-nodes)
### 1. Create an External Datastore

You will first need to create an external datastore for the cluster. See the [Cluster Datastore Options]({{< baseurl >}}/k3s/latest/en/installation/datastore/) documentation for more details.
### 2. Launch Server Nodes

K3s requires two or more server nodes for this HA configuration. See the [Node Requirements]({{< baseurl >}}/k3s/latest/en/installation/node-requirements/) guide for minimum machine requirements.

When running the `k3s server` command on these nodes, you must set the `datastore-endpoint` parameter so that K3s knows how to connect to the external datastore.

For example, a command like the following could be used to install the K3s server with a MySQL database as the external datastore:

```
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/database-name"
```

The datastore endpoint format differs based on the database type. For details, refer to the section on [datastore endpoint formats.]({{<baseurl>}}/k3s/latest/en/installation/datastore/#datastore-endpoint-format-and-functionality)

To configure TLS certificates when launching server nodes, refer to the [datastore configuration guide.]({{<baseurl>}}/k3s/latest/en/installation/datastore/#external-datastore-configuration-parameters)

> **Note:** The same installation options available to single-server installs are also available for high-availability installs. For more details, see the [Installation and Configuration Options]({{<baseurl>}}/k3s/latest/en/installation/install-options/) documentation.

By default, server nodes will be schedulable and thus your workloads can get launched on them. If you wish to have a dedicated control plane where no user workloads will run, you can use taints. The <span style='white-space: nowrap'>`node-taint`</span> parameter will allow you to configure nodes with taints, for example <span style='white-space: nowrap'>`--node-taint k3s-controlplane=true:NoExecute`</span>.

Once you've launched the `k3s server` process on all server nodes, ensure that the cluster has come up properly with `k3s kubectl get nodes`. You should see your server nodes in the Ready state.
### 3. Configure the Fixed Registration Address

Agent nodes need a URL to register against. This can be the IP or hostname of any of the server nodes, but in many cases those may change over time. For example, if you are running your cluster in a cloud that supports scaling groups, you may scale the server node group up and down over time, causing nodes to be created and destroyed and thus to have different IPs from the initial set of server nodes. Therefore, you should have a stable endpoint in front of the server nodes that will not change over time. This endpoint can be set up using any number of approaches, such as:

* A layer-4 (TCP) load balancer
* Round-robin DNS
* Virtual or elastic IP addresses

This endpoint can also be used for accessing the Kubernetes API. So you can, for example, modify your [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file to point to it instead of a specific node.
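As an illustration, a layer-4 load balancer for the fixed registration address could be sketched with HAProxy as follows. The backend IPs, the names, and the choice of HAProxy itself are assumptions for the example, not a recommendation from this guide:

```
# Hypothetical HAProxy config: forward TCP 6443 to the K3s server nodes
frontend k3s-api
    bind *:6443
    mode tcp
    default_backend k3s-servers

backend k3s-servers
    mode tcp
    option tcp-check
    server server-1 10.0.0.1:6443 check
    server server-2 10.0.0.2:6443 check
```

Any equivalent TCP load balancer or DNS setup serves the same purpose; the key property is that the address stays stable as server nodes come and go.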
### 4. Optional: Join Agent Nodes

Because K3s server nodes are schedulable by default, the minimum number of nodes for an HA K3s server cluster is two server nodes and zero agent nodes. To add nodes designated to run your apps and services, join agent nodes to your cluster.

Joining agent nodes in an HA cluster is the same as joining agent nodes in a single-server cluster. You just need to specify the URL the agent should register to and the token it should use.
```
K3S_TOKEN=SECRET k3s agent --server https://fixed-registration-address:6443
```
@@ -0,0 +1,208 @@
---
title: "Installation Options"
weight: 20
---
This page focuses on the options that can be used when you set up K3s for the first time:

- [Installation script options](#installation-script-options)
- [Installing K3s from the binary](#installing-k3s-from-the-binary)
- [Registration options for the K3s server](#registration-options-for-the-k3s-server)
- [Registration options for the K3s agent](#registration-options-for-the-k3s-agent)

For more advanced options, refer to [this page.]({{<baseurl>}}/k3s/latest/en/advanced)
# Installation Script Options

As mentioned in the [Quick-Start Guide]({{< baseurl >}}/k3s/latest/en/quick-start/), you can use the installation script available at https://get.k3s.io to install K3s as a service on systemd and openrc based systems.

The simplest form of this command is as follows:
```sh
curl -sfL https://get.k3s.io | sh -
```

When using this method to install K3s, the following environment variables can be used to configure the installation:
- `INSTALL_K3S_SKIP_DOWNLOAD`

  If set to true, will not download the K3s hash or binary.

- `INSTALL_K3S_SYMLINK`

  If set to 'skip', will not create symlinks; 'force' will overwrite; by default, will symlink if the command does not exist in the path.

- `INSTALL_K3S_SKIP_START`

  If set to true, will not start the K3s service.

- `INSTALL_K3S_VERSION`

  Version of K3s to download from GitHub. Will attempt to download the latest version if not specified.

- `INSTALL_K3S_BIN_DIR`

  Directory to install the K3s binary, links, and uninstall script to; defaults to `/usr/local/bin`.

- `INSTALL_K3S_BIN_DIR_READ_ONLY`

  If set to true, will not write files to `INSTALL_K3S_BIN_DIR`; forces setting `INSTALL_K3S_SKIP_DOWNLOAD=true`.

- `INSTALL_K3S_SYSTEMD_DIR`

  Directory to install systemd service and environment files to; defaults to `/etc/systemd/system`.

- `INSTALL_K3S_EXEC`

  Command with flags to use for launching K3s in the service. If the command is not specified, it will default to "agent" if `K3S_URL` is set or "server" if it is not set.

  The final systemd command resolves to a combination of this environment variable and script args. To illustrate this, the following commands result in the same behavior of registering a server without flannel:
  ```sh
  curl ... | INSTALL_K3S_EXEC="--no-flannel" sh -s -
  curl ... | INSTALL_K3S_EXEC="server --no-flannel" sh -s -
  curl ... | INSTALL_K3S_EXEC="server" sh -s - --no-flannel
  curl ... | sh -s - server --no-flannel
  curl ... | sh -s - --no-flannel
  ```

- `INSTALL_K3S_NAME`

  Name of the systemd service to create; will default from the K3s exec command if not specified. If specified, the name will be prefixed with 'k3s-'.

- `INSTALL_K3S_TYPE`

  Type of systemd service to create; will default from the K3s exec command if not specified.

Environment variables which begin with `K3S_` will be preserved for the systemd and openrc services to use. Setting `K3S_URL` without explicitly setting an exec command will default the command to "agent". When running the agent, `K3S_TOKEN` must also be set.
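For example, because setting `K3S_URL` defaults the exec command to "agent", a worker node could be joined with a sketch like the following. The server URL and token are placeholders to replace with your own values:

```sh
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=SECRET sh -
```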
# Installing K3s from the Binary

As stated, the installation script is primarily concerned with configuring K3s to run as a service. If you choose not to use the script, you can run K3s simply by downloading the binary from our [release page](https://github.com/rancher/k3s/releases/latest), placing it on your path, and executing it. The K3s binary supports the following commands:

Command | Description
--------|------------------
<span class='nowrap'>`k3s server`</span> | Run the K3s management server, which will also launch Kubernetes control plane components such as the API server, controller-manager, and scheduler.
<span class='nowrap'>`k3s agent`</span> | Run the K3s node agent. This will cause K3s to run as a worker node, launching the Kubernetes node services `kubelet` and `kube-proxy`.
<span class='nowrap'>`k3s kubectl`</span> | Run an embedded [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) CLI. If the `KUBECONFIG` environment variable is not set, this will automatically attempt to use the config file that is created at `/etc/rancher/k3s/k3s.yaml` when launching a K3s server node.
<span class='nowrap'>`k3s crictl`</span> | Run an embedded [crictl](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md). This is a CLI for interacting with Kubernetes's container runtime interface (CRI). Useful for debugging.
<span class='nowrap'>`k3s ctr`</span> | Run an embedded [ctr](https://github.com/projectatomic/containerd/blob/master/docs/cli.md). This is a CLI for containerd, the container daemon used by K3s. Useful for debugging.
<span class='nowrap'>`k3s help`</span> | Show a list of commands or help for one command.

The `k3s server` and `k3s agent` commands have additional configuration options that can be viewed with <span class='nowrap'>`k3s server --help`</span> or <span class='nowrap'>`k3s agent --help`</span>. For convenience, that help text is presented here:
# Registration Options for the K3s Server
```
NAME:
   k3s server - Run management server

USAGE:
   k3s server [OPTIONS]

OPTIONS:
   -v value (logging) Number for the log level verbosity (default: 0)
   --vmodule value (logging) Comma-separated list of pattern=N settings for file-filtered logging
   --log value, -l value (logging) Log to file
   --alsologtostderr (logging) Log to standard error as well as file (if set)
   --bind-address value (listener) k3s bind address (default: 0.0.0.0)
   --https-listen-port value (listener) HTTPS listen port (default: 6443)
   --advertise-address value (listener) IP address that apiserver uses to advertise to members of the cluster (default: node-external-ip/node-ip)
   --advertise-port value (listener) Port that apiserver uses to advertise to members of the cluster (default: listen-port) (default: 0)
   --tls-san value (listener) Add additional hostname or IP as a Subject Alternative Name in the TLS cert
   --data-dir value, -d value (data) Folder to hold state default /var/lib/rancher/k3s or ${HOME}/.rancher/k3s if not root
   --cluster-cidr value (networking) Network CIDR to use for pod IPs (default: "10.42.0.0/16")
   --service-cidr value (networking) Network CIDR to use for services IPs (default: "10.43.0.0/16")
   --cluster-dns value (networking) Cluster IP for coredns service. Should be in your service-cidr range (default: 10.43.0.10)
   --cluster-domain value (networking) Cluster Domain (default: "cluster.local")
   --flannel-backend value (networking) One of 'none', 'vxlan', 'ipsec', or 'flannel' (default: "vxlan")
   --token value, -t value (cluster) Shared secret used to join a server or agent to a cluster [$K3S_TOKEN]
   --token-file value (cluster) File containing the cluster-secret/token [$K3S_TOKEN_FILE]
   --write-kubeconfig value, -o value (client) Write kubeconfig for admin client to this file [$K3S_KUBECONFIG_OUTPUT]
   --write-kubeconfig-mode value (client) Write kubeconfig with this mode [$K3S_KUBECONFIG_MODE]
   --kube-apiserver-arg value (flags) Customized flag for kube-apiserver process
   --kube-scheduler-arg value (flags) Customized flag for kube-scheduler process
   --kube-controller-manager-arg value (flags) Customized flag for kube-controller-manager process
   --kube-cloud-controller-manager-arg value (flags) Customized flag for kube-cloud-controller-manager process
   --datastore-endpoint value (db) Specify etcd, Mysql, Postgres, or Sqlite (default) data source name [$K3S_DATASTORE_ENDPOINT]
   --datastore-cafile value (db) TLS Certificate Authority file used to secure datastore backend communication [$K3S_DATASTORE_CAFILE]
   --datastore-certfile value (db) TLS certification file used to secure datastore backend communication [$K3S_DATASTORE_CERTFILE]
   --datastore-keyfile value (db) TLS key file used to secure datastore backend communication [$K3S_DATASTORE_KEYFILE]
   --default-local-storage-path value (storage) Default local storage path for local provisioner storage class
   --no-deploy value (components) Do not deploy packaged components (valid items: coredns, servicelb, traefik, local-storage, metrics-server)
   --disable-scheduler (components) Disable Kubernetes default scheduler
   --disable-cloud-controller (components) Disable k3s default cloud controller manager
   --disable-network-policy (components) Disable k3s default network policy controller
   --node-name value (agent/node) Node name [$K3S_NODE_NAME]
   --with-node-id (agent/node) Append id to node name
   --node-label value (agent/node) Registering kubelet with set of labels
   --node-taint value (agent/node) Registering kubelet with set of taints
   --docker (agent/runtime) Use docker instead of containerd
   --container-runtime-endpoint value (agent/runtime) Disable embedded containerd and use alternative CRI implementation
   --pause-image value (agent/runtime) Customized pause image for containerd sandbox
   --private-registry value (agent/runtime) Private registry configuration file (default: "/etc/rancher/k3s/registries.yaml")
   --node-ip value, -i value (agent/networking) IP address to advertise for node
   --node-external-ip value (agent/networking) External IP address to advertise for node
   --resolv-conf value (agent/networking) Kubelet resolv.conf file [$K3S_RESOLV_CONF]
   --flannel-iface value (agent/networking) Override default flannel interface
   --flannel-conf value (agent/networking) Override default flannel config file
   --kubelet-arg value (agent/flags) Customized flag for kubelet process
   --kube-proxy-arg value (agent/flags) Customized flag for kube-proxy process
   --rootless (experimental) Run rootless
   --agent-token value (experimental/cluster) Shared secret used to join agents to the cluster, but not servers [$K3S_AGENT_TOKEN]
   --agent-token-file value (experimental/cluster) File containing the agent secret [$K3S_AGENT_TOKEN_FILE]
   --server value, -s value (experimental/cluster) Server to connect to, used to join a cluster [$K3S_URL]
   --cluster-init (experimental/cluster) Initialize new cluster master [$K3S_CLUSTER_INIT]
   --cluster-reset (experimental/cluster) Forget all peers and become a single cluster new cluster master [$K3S_CLUSTER_RESET]
   --no-flannel (deprecated) use --flannel-backend=none
   --cluster-secret value (deprecated) use --token [$K3S_CLUSTER_SECRET]
```
# Registration Options for the K3s Agent
```
NAME:
   k3s agent - Run node agent

USAGE:
   k3s agent [OPTIONS]

OPTIONS:
   -v value (logging) Number for the log level verbosity (default: 0)
   --vmodule value (logging) Comma-separated list of pattern=N settings for file-filtered logging
   --log value, -l value (logging) Log to file
   --alsologtostderr (logging) Log to standard error as well as file (if set)
   --token value, -t value (cluster) Token to use for authentication [$K3S_TOKEN]
   --token-file value (cluster) Token file to use for authentication [$K3S_TOKEN_FILE]
   --server value, -s value (cluster) Server to connect to [$K3S_URL]
   --data-dir value, -d value (agent/data) Folder to hold state (default: "/var/lib/rancher/k3s")
   --node-name value (agent/node) Node name [$K3S_NODE_NAME]
   --with-node-id (agent/node) Append id to node name
   --node-label value (agent/node) Registering kubelet with set of labels
   --node-taint value (agent/node) Registering kubelet with set of taints
   --docker (agent/runtime) Use docker instead of containerd
   --container-runtime-endpoint value (agent/runtime) Disable embedded containerd and use alternative CRI implementation
   --pause-image value (agent/runtime) Customized pause image for containerd sandbox
   --private-registry value (agent/runtime) Private registry configuration file (default: "/etc/rancher/k3s/registries.yaml")
   --node-ip value, -i value (agent/networking) IP address to advertise for node
   --node-external-ip value (agent/networking) External IP address to advertise for node
   --resolv-conf value (agent/networking) Kubelet resolv.conf file [$K3S_RESOLV_CONF]
   --flannel-iface value (agent/networking) Override default flannel interface
   --flannel-conf value (agent/networking) Override default flannel config file
   --kubelet-arg value (agent/flags) Customized flag for kubelet process
   --kube-proxy-arg value (agent/flags) Customized flag for kube-proxy process
   --rootless (experimental) Run rootless
   --no-flannel (deprecated) use --flannel-backend=none
   --cluster-secret value (deprecated) use --token [$K3S_CLUSTER_SECRET]
```
### Node Labels and Taints for Agents

K3s agents can be configured with the options `--node-label` and `--node-taint`, which add a label and taint to the kubelet. The two options only add labels and/or taints at registration time, so they can only be set once and cannot be changed afterward by running K3s commands again.

Below is an example showing how to add labels and a taint:
```
--node-label foo=bar \
--node-label hello=world \
--node-taint key1=value1:NoExecute
```
If you want to change node labels and taints after node registration you should use `kubectl`. Refer to the official Kubernetes documentation for details on how to add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) and [node labels.](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node)
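For instance, the registration-time flags above correspond roughly to these post-registration `kubectl` commands, run against a live cluster (the node name is a placeholder):

```
k3s kubectl label node my-node foo=bar
k3s kubectl taint node my-node key1=value1:NoExecute
```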
@@ -0,0 +1,88 @@
---
title: "Kubernetes Dashboard"
weight: 60
---
This installation guide will help you deploy and configure the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) on K3s.

### Deploying the Kubernetes Dashboard

```bash
sudo k3s kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
```
### Dashboard RBAC Configuration

> **Important:** The `admin-user` created in this guide will have administrative privileges in the Dashboard.

Create the following resource manifest files:

`dashboard.admin-user.yml`
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
```

`dashboard.admin-user-role.yml`
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

Deploy the `admin-user` configuration:

```bash
sudo k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml
```
### Obtain the Bearer Token

```bash
sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token | grep ^token
```
### Local Access to the Dashboard

To access the Dashboard you must create a secure channel to your K3s cluster:

```bash
sudo k3s kubectl proxy
```

The Dashboard is now accessible at:

* http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
* `Sign In` with the `admin-user` Bearer Token
#### Advanced: Remote Access to the Dashboard

Please see [Using Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).

### Upgrading the Dashboard

The latest Dashboard releases are available from https://github.com/kubernetes/dashboard/releases/latest:
```bash
sudo k3s kubectl delete ns kubernetes-dashboard
sudo k3s kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/[...]
```

### Deleting the Dashboard and admin-user configuration

```bash
sudo k3s kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
sudo k3s kubectl delete -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml
```
@@ -0,0 +1,71 @@
---
title: "Network Options"
weight: 25
---
> **Note:** Please reference the [Networking]({{< baseurl >}}/k3s/latest/en/networking) page for information about CoreDNS, Traefik, and the Service LB.

By default, K3s will run with flannel as the CNI, using VXLAN as the default backend. To change the CNI, refer to the section on configuring a [custom CNI](#custom-cni). To change the flannel backend, refer to the flannel options section below.

### Flannel Options

The default backend for flannel is VXLAN. To enable encryption, pass the IPSec (Internet Protocol Security) or WireGuard options below.

If you wish to use WireGuard as your flannel backend, it may require additional kernel modules. Please see the [WireGuard Install Guide](https://www.wireguard.com/install/) for details. The WireGuard install steps will ensure the appropriate kernel modules are installed for your operating system. You must install WireGuard on every node, both servers and agents, before attempting to use the WireGuard flannel backend option.
CLI Flag and Value | Description
-------------------|------------
<span style="white-space: nowrap">`--flannel-backend=vxlan`</span> | (Default) Uses the VXLAN backend.
<span style="white-space: nowrap">`--flannel-backend=ipsec`</span> | Uses the IPSec backend, which encrypts network traffic.
<span style="white-space: nowrap">`--flannel-backend=host-gw`</span> | Uses the host-gw backend.
<span style="white-space: nowrap">`--flannel-backend=wireguard`</span> | Uses the WireGuard backend, which encrypts network traffic. May require additional kernel modules and configuration.
### Custom CNI

Run K3s with `--flannel-backend=none` and install your CNI of choice. IP forwarding should be enabled for Canal and Calico. Please reference the steps below.

{{% tabs %}}
{{% tab "Canal" %}}

Visit the [Project Calico Docs](https://docs.projectcalico.org/) website. Follow the steps to install Canal. Modify the Canal YAML so that IP forwarding is allowed in the `container_settings` section, for example:

```
"container_settings": {
    "allow_ip_forwarding": true
}
```

Apply the Canal YAML.

Ensure the settings were applied by running the following command on the host:

```
cat /etc/cni/net.d/10-canal.conflist
```

You should see that IP forwarding is set to true.

{{% /tab %}}
{{% tab "Calico" %}}

Follow the [Calico CNI Plugins Guide](https://docs.projectcalico.org/master/reference/cni-plugin/configuration). Modify the Calico YAML so that IP forwarding is allowed in the `container_settings` section, for example:

```
"container_settings": {
    "allow_ip_forwarding": true
}
```

Apply the Calico YAML.

Ensure the settings were applied by running the following command on the host:

```
cat /etc/cni/net.d/10-calico.conflist
```

You should see that IP forwarding is set to true.

{{% /tab %}}
{{% /tabs %}}
---
title: Node Requirements
weight: 1
---

K3s is very lightweight, but has some minimum requirements as outlined below.

Whether you're configuring a K3s cluster to run in a Docker or Kubernetes setup, each node running K3s should meet the following minimum requirements. You may need more resources to fit your needs.

## Prerequisites

* Two nodes cannot have the same hostname. If all your nodes have the same hostname, pass `--node-name` or set `$K3S_NODE_NAME` with a unique name for each node you add to the cluster.

## Operating Systems

K3s should run on just about any flavor of Linux. However, K3s is tested on the following operating systems and their subsequent non-major releases:

* Ubuntu 16.04 (amd64)
* Ubuntu 18.04 (amd64)
* Raspbian Buster (armhf)

> If you are using Alpine Linux, follow [these steps]({{<baseurl>}}/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup) for additional setup.

## Hardware

Hardware requirements scale based on the size of your deployments. Minimum recommendations are outlined here.

* RAM: 512MB minimum
* CPU: 1 minimum

#### Disks

K3s performance depends on the performance of the database. To ensure optimal speed, we recommend using an SSD when possible. Disk performance will vary on ARM devices utilizing an SD card or eMMC.

## Networking

The K3s server needs port 6443 to be accessible by the nodes. The nodes need to be able to reach other nodes over UDP port 8472 when using Flannel VXLAN. If you do not use Flannel and provide your own custom CNI, then port 8472 is not needed by K3s. The nodes should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel.

**Important:** The VXLAN port on nodes should not be exposed to the world as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disables access to port 8472.

If you wish to utilize the metrics server, you will need to open port 10250 on each node.
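As a sketch, the port list above translates into firewall rules like the following (assuming `ufw`; adjust for your firewall of choice). This is a dry run that only prints the commands so you can review them before running as root:

```shell
# Sketch: print the firewall rules implied by the required K3s ports (dry run).
# 8472/udp should additionally be restricted to your nodes' subnet, per the
# warning above about not exposing the VXLAN port to the world.
rules=""
for port in 6443/tcp 8472/udp 10250/tcp; do
  rules="${rules}ufw allow ${port}
"
done
printf '%s' "$rules"
```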
---
title: "Private Registry Configuration"
weight: 55
---
_Available as of v1.0.0_

Containerd can be configured to connect to private registries and use them to pull private images on the node.

Upon startup, K3s will check to see if a `registries.yaml` file exists at `/etc/rancher/k3s/` and instruct containerd to use any registries defined in the file. If you wish to use a private registry, then you will need to create this file as root on each node that will be using the registry.

Note that server nodes are schedulable by default. If you have not tainted the server nodes and will be running workloads on them, please ensure you also create the `registries.yaml` file on each server as well.

Containerd can be configured to connect to a private registry over TLS, as well as to registries that require authentication. The following section explains the `registries.yaml` file and gives different examples of using private registry configuration in K3s.

# Registries Configuration File

The file consists of two main sections:

- mirrors
- configs

### Mirrors

Mirrors is a directive that defines the names and endpoints of the private registries, for example:
```
mirrors:
  "mycustomreg.com:5000":
    endpoint:
      - "https://mycustomreg.com:5000"
```
Each mirror must have a name and set of endpoints. When pulling an image from a registry, containerd will try these endpoint URLs one by one, and use the first working one.

### Configs

The configs section defines the TLS and credential configuration for each mirror. For each mirror you can define `auth` and/or `tls`. The TLS part consists of:

Directive | Description
----------|------------
`cert_file` | The client certificate path that will be used to authenticate with the registry
`key_file` | The client key path that will be used to authenticate with the registry
`ca_file` | Defines the CA certificate path to be used to verify the registry's server cert file

The credentials consist of either username/password or an authentication token:

- username: user name of the private registry basic auth
- password: user password of the private registry basic auth
- auth: authentication token of the private registry basic auth

Below are basic examples of using private registries in different modes:
### With TLS

Below are examples showing how you may configure `/etc/rancher/k3s/registries.yaml` on each node when using TLS.

{{% tabs %}}
{{% tab "With Authentication" %}}

```
mirrors:
  "mycustomreg.com:5000":
    endpoint:
      - "https://mycustomreg.com:5000"
configs:
  "mycustomreg.com:5000":
    auth:
      username: xxxxxx # this is the registry username
      password: xxxxxx # this is the registry password
    tls:
      cert_file: # path to the cert file used to authenticate to the registry
      key_file:  # path to the key file used to authenticate to the registry
      ca_file:   # path to the ca file used to verify the registry's certificate
```

{{% /tab %}}
{{% tab "Without Authentication" %}}

```
mirrors:
  "mycustomreg.com:5000":
    endpoint:
      - "https://mycustomreg.com:5000"
configs:
  "mycustomreg.com:5000":
    tls:
      cert_file: # path to the cert file used to authenticate to the registry
      key_file:  # path to the key file used to authenticate to the registry
      ca_file:   # path to the ca file used to verify the registry's certificate
```

{{% /tab %}}
{{% /tabs %}}
### Without TLS

Below are examples showing how you may configure `/etc/rancher/k3s/registries.yaml` on each node when _not_ using TLS.

{{% tabs %}}
{{% tab "With Authentication" %}}

```
mirrors:
  "mycustomreg.com:5000":
    endpoint:
      - "http://mycustomreg.com:5000"
configs:
  "mycustomreg.com:5000":
    auth:
      username: xxxxxx # this is the registry username
      password: xxxxxx # this is the registry password
```

{{% /tab %}}
{{% tab "Without Authentication" %}}

```
mirrors:
  "mycustomreg.com:5000":
    endpoint:
      - "http://mycustomreg.com:5000"
```

{{% /tab %}}
{{% /tabs %}}

> When no TLS communication is used, you need to specify `http://` for the endpoints; otherwise they will default to `https://`.

In order for the registry changes to take effect, you need to restart K3s on each node.
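Putting the pieces together, a minimal sketch of the workflow on one node (the registry name is the example one from above; here the file is written to a temp dir so the snippet can run anywhere — on a real node use `/etc/rancher/k3s/` and run as root):

```shell
# Sketch: write a minimal registries.yaml, then restart K3s for it to take effect.
dir=$(mktemp -d)   # stand-in for /etc/rancher/k3s
cat > "$dir/registries.yaml" <<'EOF'
mirrors:
  "mycustomreg.com:5000":
    endpoint:
      - "https://mycustomreg.com:5000"
EOF
grep -q 'mycustomreg.com:5000' "$dir/registries.yaml" && echo "registries.yaml written"
# Then, on the node: sudo systemctl restart k3s   (or k3s-agent on agent nodes)
```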
---
title: Uninstalling K3s
weight: 61
---

If you installed K3s using the installation script, a script to uninstall K3s was generated during installation.

To uninstall K3s from a server node, run:

```
/usr/local/bin/k3s-uninstall.sh
```

To uninstall K3s from an agent node, run:

```
/usr/local/bin/k3s-agent-uninstall.sh
```
---
title: Known Issues
weight: 70
---
The Known Issues are updated periodically and designed to inform you about any issues that may not be immediately addressed in the next upcoming release.

**Snap Docker**

If you plan to use K3s with Docker, installing Docker via a snap package is not recommended, as it has been known to cause issues running K3s.

**Iptables**

If you are running iptables in nftables mode instead of legacy mode, you might encounter issues. We recommend utilizing a newer iptables (such as 1.6.1+) to avoid them.

**RootlessKit**

Running K3s with RootlessKit is experimental and has several [known issues.]({{<baseurl>}}/k3s/latest/en/advanced/#known-issues-with-rootlesskit)
---
title: "Networking"
weight: 35
---

>**Note:** CNI options are covered in detail on the [Installation Network Options]({{< baseurl >}}/k3s/latest/en/installation/network-options/) page. Please reference that page for details on Flannel and the various Flannel backend options, or on how to set up your own CNI.

Open Ports
----------
Please reference the [Node Requirements]({{< baseurl >}}/k3s/latest/en/installation/node-requirements/#networking) page for port information.

CoreDNS
-------

CoreDNS is deployed on start of the agent. To disable it, run each server with the `--no-deploy coredns` option.

If you don't install CoreDNS, you will need to install a cluster DNS provider yourself.

Traefik Ingress Controller
--------------------------

[Traefik](https://traefik.io/) is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It simplifies networking complexity while designing, deploying, and running applications.

Traefik is deployed by default when starting the server. For more information see [Auto Deploying Manifests]({{< baseurl >}}/k3s/latest/en/advanced/#auto-deploying-manifests). The default config file is found in `/var/lib/rancher/k3s/server/manifests/traefik.yaml`, and any changes made to this file will automatically be deployed to Kubernetes in a manner similar to `kubectl apply`.

The Traefik ingress controller will use ports 80, 443, and 8080 on the host (i.e. these will not be usable for HostPort or NodePort).

You can tweak Traefik to meet your needs by setting options in the `traefik.yaml` file. Refer to the official [Traefik for Helm Configuration Parameters](https://github.com/helm/charts/tree/master/stable/traefik#configuration) readme for more information.

To disable it, start each server with the `--no-deploy traefik` option.

Service Load Balancer
---------------------

K3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a host in the cluster on which port 80 is free. If no such port is available, the load balancer will stay in Pending.

To disable the embedded load balancer, run the server with the `--no-deploy servicelb` option. This is necessary if you wish to run a different load balancer, such as MetalLB.
---
title: "Quick-Start Guide"
weight: 10
---

This guide will help you quickly launch a cluster with default options. The [installation section](../installation) covers in greater detail how K3s can be set up.

For information on how K3s components work together, refer to the [architecture section.]({{<baseurl>}}/k3s/latest/en/architecture/#high-availability-with-an-external-db)

> New to Kubernetes? The official Kubernetes docs already have some great tutorials outlining the basics [here](https://kubernetes.io/docs/tutorials/kubernetes-basics/).

Install Script
--------------
K3s provides an installation script that is a convenient way to install it as a service on systemd or openrc based systems. This script is available at https://get.k3s.io. To install K3s using this method, just run:
```bash
curl -sfL https://get.k3s.io | sh -
```

After running this installation:

* The K3s service will be configured to automatically restart after node reboots or if the process crashes or is killed
* Additional utilities will be installed, including `kubectl`, `crictl`, `ctr`, `k3s-killall.sh`, and `k3s-uninstall.sh`
* A [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file will be written to `/etc/rancher/k3s/k3s.yaml` and the kubectl installed by K3s will automatically use it

To install on worker nodes and add them to the cluster, run the installation script with the `K3S_URL` and `K3S_TOKEN` environment variables. Here is an example showing how to join a worker node:

```bash
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
```
Setting the `K3S_URL` parameter causes K3s to run in worker mode. The K3s agent will register with the K3s server listening at the supplied URL. The value to use for `K3S_TOKEN` is stored at `/var/lib/rancher/k3s/server/node-token` on your server node.
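The join step can be scripted on the server. A minimal sketch (the token file path is the real one; here it is simulated with a temp file and a placeholder token value so the snippet can run anywhere):

```shell
# Sketch: compose the agent join command from the server's node-token.
token_file=$(mktemp)                     # stand-in for /var/lib/rancher/k3s/server/node-token
echo "K10abc::node:xyz" > "$token_file"  # placeholder token value
K3S_TOKEN=$(cat "$token_file")
join_cmd="curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=${K3S_TOKEN} sh -"
echo "$join_cmd"
```

Run the printed command on each worker node to join it to the cluster.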
Manual Download
---------------
1. Download `k3s` from the latest [release](https://github.com/rancher/k3s/releases/latest); x86_64, armhf, and arm64 are supported.
2. Run the server:

```bash
sudo k3s server &
# Kubeconfig is written to /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get nodes

# On a different node, run the below. NODE_TOKEN comes from
# /var/lib/rancher/k3s/server/node-token on your server
sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
```
Note: Each machine must have a unique hostname. If your machines do not have unique hostnames, pass the `K3S_NODE_NAME` environment variable and provide a value with a valid and unique hostname for each node.
---
title: "Running K3S"
weight: 3
---

This section contains information for running K3s in various environments.

Starting the Server
-------------------

The installation script will auto-detect if your OS is using systemd or openrc and start the service.
When running with openrc, logs will be created at `/var/log/k3s.log`; with systemd, logs go to `/var/log/syslog` and can be viewed using `journalctl -u k3s`. An example of installing and auto-starting with the install script:

```bash
curl -sfL https://get.k3s.io | sh -
```

When running the server manually you should get output similar to the following:
```
$ k3s server
INFO[2019-01-22T15:16:19.908493986-07:00] Starting k3s dev
INFO[2019-01-22T15:16:19.908934479-07:00] Running kube-apiserver --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
INFO[2019-01-22T15:16:20.196766005-07:00] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 0 --secure-port 0 --leader-elect=false
INFO[2019-01-22T15:16:20.196880841-07:00] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 0 --secure-port 0 --leader-elect=false
Flag --port has been deprecated, see --secure-port instead.
INFO[2019-01-22T15:16:20.273441984-07:00] Listening on :6443
INFO[2019-01-22T15:16:20.278383446-07:00] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
INFO[2019-01-22T15:16:20.474454524-07:00] Node token is available at /var/lib/rancher/k3s/server/node-token
INFO[2019-01-22T15:16:20.474471391-07:00] To join node to cluster: k3s agent -s https://10.20.0.3:6443 -t ${NODE_TOKEN}
INFO[2019-01-22T15:16:20.541027133-07:00] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml
INFO[2019-01-22T15:16:20.541049100-07:00] Run: k3s kubectl
```
|
||||
The output will likely be much longer as the agent will create a lot of logs. By default the server
|
||||
will register itself as a node (run the agent).
|
||||
|
||||
It is common and almost required these days that the control plane be part of the cluster.
|
||||
To disable the agent when running the server use the `--disable-agent` flag, the agent can then be run as a separate process.
|
||||
|
||||
Joining Nodes
|
||||
-------------
|
||||
|
||||
When the server starts it creates a file `/var/lib/rancher/k3s/server/node-token`.
|
||||
Using the contents of that file as `K3S_TOKEN` and setting `K3S_URL` allows the node
|
||||
to join as an agent using the install script:
|
||||
|
||||
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=XXX sh -
|
||||
|
||||
When using the install script openrc logs will be created at `/var/log/k3s-agent.log`, or with systemd in `/var/log/syslog` and viewed using `journalctl -u k3s-agent`.
|
||||
|
||||
Or running k3s manually with the token as `NODE_TOKEN`:
|
||||
|
||||
k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
|
||||
|
||||
SystemD
-------

If you are using systemd, here is a sample unit file `k3s.service`:

```ini
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/systemd/system/k3s.service.env
ExecStart=/usr/local/bin/k3s server
KillMode=process
Delegate=yes
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target
```
OpenRC
------

And an example openrc service file `/etc/init.d/k3s`:

```bash
#!/sbin/openrc-run

depend() {
    after net-online
    need net
}

start_pre() {
    rm -f /tmp/k3s.*
}

supervisor=supervise-daemon
name="k3s"
command="/usr/local/bin/k3s"
command_args="server >>/var/log/k3s.log 2>&1"

pidfile="/var/run/k3s.pid"
respawn_delay=5

set -o allexport
if [ -f /etc/environment ]; then source /etc/environment; fi
if [ -f /etc/rancher/k3s/k3s.env ]; then source /etc/rancher/k3s/k3s.env; fi
set +o allexport
```
Alpine Linux
------------

In order to pre-setup Alpine Linux, you have to go through the following steps:

```bash
echo "cgroup /sys/fs/cgroup cgroup defaults 0 0" >> /etc/fstab

cat >> /etc/cgconfig.conf <<EOF
mount {
  cpuacct = /cgroup/cpuacct;
  memory = /cgroup/memory;
  devices = /cgroup/devices;
  freezer = /cgroup/freezer;
  net_cls = /cgroup/net_cls;
  blkio = /cgroup/blkio;
  cpuset = /cgroup/cpuset;
  cpu = /cgroup/cpu;
}
EOF
```

Then update **/etc/update-extlinux.conf** by adding:

```
default_kernel_opts="... cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory"
```

Then update the config and reboot:

```bash
update-extlinux
reboot
```

After rebooting:

- download **k3s** to **/usr/local/bin/k3s**
- create an openrc file in **/etc/init.d**
Running in Docker (and docker-compose)
--------------------------------------

[k3d](https://github.com/rancher/k3d) is a utility designed to easily run k3s in Docker. It can be installed via the [brew](https://brew.sh/) utility on macOS.

`rancher/k3s` images are also available to run the k3s server and agent from Docker. A `docker-compose.yml` in the root of the k3s repo serves as an example of how to run k3s from Docker. To run with `docker-compose` from this repo, run:

    docker-compose up --scale node=3
    # kubeconfig is written to current dir
    kubectl --kubeconfig kubeconfig.yaml get node

    NAME           STATUS   ROLES    AGE   VERSION
    497278a2d6a2   Ready    <none>   11s   v1.13.2-k3s2
    d54c8b17c055   Ready    <none>   11s   v1.13.2-k3s2
    db7a5a5a5bdd   Ready    <none>   12s   v1.13.2-k3s2

To run the agent only in Docker, use `docker-compose up node`. Alternatively, the `docker run` command can also be used:

    sudo docker run \
      -d --tmpfs /run \
      --tmpfs /var/run \
      -e K3S_URL=${SERVER_URL} \
      -e K3S_TOKEN=${NODE_TOKEN} \
      --privileged rancher/k3s:vX.Y.Z
Air-Gap Support
---------------

k3s supports pre-loading of containerd images by placing them in the `images` directory for the agent before starting, for example:
```sh
sudo mkdir -p /var/lib/rancher/k3s/agent/images/
sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/
```
Images needed for a base install are provided through the releases page; additional images can be created with the `docker save` command.

Offline Helm charts are served from the `/var/lib/rancher/k3s/server/static` directory, and Helm chart manifests may reference the static files with a `%{KUBERNETES_API}%` templated variable. For example, the default traefik manifest chart installs from `https://%{KUBERNETES_API}%/static/charts/traefik-X.Y.Z.tgz`.

If networking is completely disabled, k3s may not be able to start (e.g. ethernet unplugged or wifi disconnected), in which case it may be necessary to add a default route. For example:
```sh
sudo ip -c address add 192.168.123.123/24 dev eno1
sudo ip route add default via 192.168.123.1
```

k3s additionally provides a `--resolv-conf` flag for kubelets, which may help with configuring DNS in air-gap networks.
Upgrades
--------

To upgrade k3s from an older version you can re-run the installation script using the same flags, for example:

```sh
curl -sfL https://get.k3s.io | sh -
```

If you want to upgrade to a specific version, you can run the following command:

```sh
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z-rc1 sh -
```

Or to manually upgrade k3s:

1. Download the desired version of k3s from [releases](https://github.com/rancher/k3s/releases/latest)
2. Install to an appropriate location (normally `/usr/local/bin/k3s`)
3. Stop the old version
4. Start the new version

Restarting k3s is supported by the installation script for systemd and openrc.
To restart manually with systemd, use:
```sh
sudo systemctl restart k3s
```

To restart manually with openrc, use:
```sh
sudo service k3s restart
```

Upgrading an air-gap environment can be accomplished in the following manner:

1. Download air-gap images and install if changed
2. Install the new k3s binary (from the installer or manual download)
3. Restart k3s (if not restarted automatically by the installer)
Uninstalling
------------

If you installed k3s with the help of the `install.sh` script, an uninstall script is generated during installation, which will be created on your server node at `/usr/local/bin/k3s-uninstall.sh` (or as `k3s-agent-uninstall.sh` on agent nodes).

Hyperkube
---------

k3s is bundled in a nice wrapper to remove the majority of the headache of running k8s. If you don't want that wrapper and just want a smaller k8s distro, the releases include the `hyperkube` binary you can use. It's then up to you to know how to use `hyperkube`. If you want individual binaries you will need to compile them yourself from source.
---
title: "Volumes and Storage"
weight: 30
---

When deploying an application that needs to retain data, you’ll need to create persistent storage. Persistent storage allows you to store application data external to the pod running your application. This storage practice allows you to maintain application data even if the application’s pod fails.

A persistent volume (PV) is a piece of storage in the Kubernetes cluster, while a persistent volume claim (PVC) is a request for storage. For details on how PVs and PVCs work, refer to the official Kubernetes documentation on [storage.](https://kubernetes.io/docs/concepts/storage/volumes/)

This page describes how to set up persistent storage with a local storage provider, or with [Longhorn.](#setting-up-longhorn)

# Setting up the Local Storage Provider
K3s comes with Rancher's Local Path Provisioner, which enables the ability to create persistent volume claims out of the box using local storage on the respective node. Below we cover a simple example. For more information please reference the official documentation [here](https://github.com/rancher/local-path-provisioner/blob/master/README.md#usage).

Create a hostPath backed persistent volume claim and a pod to utilize it:

### pvc.yaml
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
```
### pod.yaml

```
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: local-path-pvc
```
Apply the yaml:

```
kubectl create -f pvc.yaml
kubectl create -f pod.yaml
```

Confirm the PV and PVC are created:

```
kubectl get pv
kubectl get pvc
```

The status should be Bound for each.
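If you want to script that check, a minimal sketch (the `is_bound` helper and the placeholder `phase` value are illustrative; in a real cluster you would feed it the output of the `kubectl` command shown in the comment):

```shell
# Sketch: check whether a claim's phase string is "Bound".
is_bound() {
  [ "$1" = "Bound" ]
}

# placeholder; in a cluster, use:
#   phase=$(kubectl get pvc local-path-pvc -o jsonpath='{.status.phase}')
phase="Bound"

if is_bound "$phase"; then
  echo "PVC is bound"
else
  echo "PVC not ready yet"
fi
```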
# Setting up Longhorn

[comment]: <> (pending change - longhorn may support arm64 and armhf in the future.)

> **Note:** At this time Longhorn only supports amd64.

K3s supports [Longhorn](https://github.com/longhorn/longhorn). Longhorn is an open-source distributed block storage system for Kubernetes.

Below we cover a simple example. For more information, refer to the official documentation [here](https://github.com/longhorn/longhorn/blob/master/README.md).

Apply the longhorn.yaml to install Longhorn:

```
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
```

Longhorn will be installed in the namespace `longhorn-system`.

Before we create a PVC, we will create a storage class for Longhorn with this yaml:

```
kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/examples/storageclass.yaml
```

Apply the yaml to create the PVC and pod:

```
kubectl create -f pvc.yaml
kubectl create -f pod.yaml
```
### pvc.yaml

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-volv-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
```
### pod.yaml

```
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-volv-pvc
```

Confirm the PV and PVC are created:

```
kubectl get pv
kubectl get pvc
```

The status should be Bound for each.
---
|
||||
title: "Upgrades"
|
||||
weight: 25
|
||||
---
|
||||
|
||||
You can upgrade K3s by using the installation script, or by manually installing the binary of the desired version.
|
||||
|
||||
>**Note:** When upgrading, upgrade server nodes first one at a time, then any worker nodes.
|
||||
|
||||
### Upgrade K3s Using the Installation Script
|
||||
|
||||
To upgrade K3s from an older version you can re-run the installation script using the same flags, for example:
|
||||
|
||||
```sh
|
||||
curl -sfL https://get.k3s.io | sh -
|
||||
```
|
||||
|
||||
If you want to upgrade to specific version you can run the following command:
|
||||
|
||||
```sh
|
||||
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=vX.Y.Z-rc1 sh -
|
||||
```
|
||||
|
||||
### Manually Upgrade K3s Using the Binary
|
||||
|
||||
Or to manually upgrade K3s:
|
||||
|
||||
1. Download the desired version of K3s from [releases](https://github.com/rancher/k3s/releases/latest)
|
||||
2. Install to an appropriate location (normally `/usr/local/bin/k3s`)
|
||||
3. Stop the old version
|
||||
4. Start the new version
|
### Restarting K3s

Restarting K3s is supported by the installation script for systemd and openrc.

To restart manually for systemd use:

```sh
sudo systemctl restart k3s
```

To restart manually for openrc use:

```sh
sudo service k3s restart
```
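As an illustrative helper (an assumption, not part of the install script), the choice between the two restart commands can be made by probing for the init system:

```shell
# Sketch: print the appropriate restart command for this host.
# Probing for systemctl/rc-service is a heuristic, not official K3s behavior.
k3s_restart_cmd() {
  if command -v systemctl >/dev/null 2>&1; then
    echo "sudo systemctl restart k3s"
  elif command -v rc-service >/dev/null 2>&1; then
    echo "sudo service k3s restart"
  else
    echo "unknown init system" >&2
    return 1
  fi
}

k3s_restart_cmd || true
```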
We created this separation not only for the security benefits, but also to make sure that commands like `docker rm -f $(docker ps -qa)` don't delete the entire OS.

{{< img "/img/os/rancheroshowitworks.png" "How it works">}}

### Running RancherOS
<p style="padding: 8px">Please submit possible security issues by emailing <a href="mailto:security@rancher.com">security@rancher.com</a></p>
</td>
<td width="30%" style="border: none;">
<h4>Announcements</h4>
<p style="padding: 8px">Subscribe to the <a href="https://forums.rancher.com/c/announcements">Rancher announcements forum</a> for release updates.</p>
</td>
</tr>
| [CVE-2017-5715](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5715) | Systems with microprocessors utilizing speculative execution and indirect branch prediction may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis | 6 Feb 2018 | [RancherOS v1.1.4](https://github.com/rancher/os/releases/tag/v1.1.4) using Linux v4.9.78 with the Retpoline support |
| [CVE-2017-5753](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5753) | Systems with microprocessors utilizing speculative execution and branch prediction may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis. | 31 May 2018 | [RancherOS v1.4.0](https://github.com/rancher/os/releases/tag/v1.4.0) using Linux v4.14.32 |
| [CVE-2018-8897](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-8897) | A statement in the System Programming Guide of the Intel 64 and IA-32 Architectures Software Developer's Manual (SDM) was mishandled in the development of some or all operating-system kernels, resulting in unexpected behavior for #DB exceptions that are deferred by MOV SS or POP SS, as demonstrated by (for example) privilege escalation in Windows, macOS, some Xen configurations, or FreeBSD, or a Linux kernel crash. | 31 May 2018 | [RancherOS v1.4.0](https://github.com/rancher/os/releases/tag/v1.4.0) using Linux v4.14.32 |
| [L1 Terminal Fault](https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html) | L1 Terminal Fault is a hardware vulnerability which allows unprivileged speculative access to data which is available in the Level 1 Data Cache when the page table entry controlling the virtual address, which is used for the access, has the Present bit cleared or other reserved bits set. | 19 Sep 2018 | [RancherOS v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) using Linux v4.14.67 |
| [CVE-2018-3620](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3620) | L1 Terminal Fault is a hardware vulnerability which allows unprivileged speculative access to data which is available in the Level 1 Data Cache when the page table entry controlling the virtual address, which is used for the access, has the Present bit cleared or other reserved bits set. | 19 Sep 2018 | [RancherOS v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) using Linux v4.14.67 |
| [CVE-2018-3639](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3639) | Systems with microprocessors utilizing speculative execution and speculative execution of memory reads before the addresses of all prior memory writes are known may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis, aka Speculative Store Bypass (SSB), Variant 4. | 19 Sep 2018 | [RancherOS v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) using Linux v4.14.67 |
| [CVE-2018-17182](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-17182) | The vmacache_flush_all function in mm/vmacache.c mishandles sequence number overflows. An attacker can trigger a use-after-free (and possibly gain privileges) via certain thread creation, map, unmap, invalidation, and dereference operations. | 18 Oct 2018 | [RancherOS v1.4.2](https://github.com/rancher/os/releases/tag/v1.4.2) using Linux v4.14.73 |
| [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736) | runc through 1.0-rc6, as used in Docker before 18.09.2 and other products, allows attackers to overwrite the host runc binary (and consequently obtain host root access) by leveraging the ability to execute a command as root within one of these types of containers: (1) a new container with an attacker-controlled image, or (2) an existing container, to which the attacker previously had write access, that can be attached with docker exec. This occurs because of file-descriptor mishandling, related to /proc/self/exe. | 12 Feb 2019 | [RancherOS v1.5.1](https://github.com/rancher/os/releases/tag/v1.5.1) |
### Amazon ECS enabled AMIs

Latest Release: [v1.5.4](https://github.com/rancher/os/releases/tag/v1.5.4)

Region | Type | AMI
---|--- | ---
eu-north-1 | HVM - ECS enabled | [ami-0c46c1da6468aa948](https://eu-north-1.console.aws.amazon.com/ec2/home?region=eu-north-1#launchInstanceWizard:ami=ami-0c46c1da6468aa948)
ap-south-1 | HVM - ECS enabled | [ami-097e5fa868c46e925](https://ap-south-1.console.aws.amazon.com/ec2/home?region=ap-south-1#launchInstanceWizard:ami=ami-097e5fa868c46e925)
eu-west-3 | HVM - ECS enabled | [ami-016e7d630d7f608e4](https://eu-west-3.console.aws.amazon.com/ec2/home?region=eu-west-3#launchInstanceWizard:ami=ami-016e7d630d7f608e4)
eu-west-2 | HVM - ECS enabled | [ami-00aacd261ab72302e](https://eu-west-2.console.aws.amazon.com/ec2/home?region=eu-west-2#launchInstanceWizard:ami=ami-00aacd261ab72302e)
eu-west-1 | HVM - ECS enabled | [ami-0812b3f8aec8d2d81](https://eu-west-1.console.aws.amazon.com/ec2/home?region=eu-west-1#launchInstanceWizard:ami=ami-0812b3f8aec8d2d81)
ap-northeast-2 | HVM - ECS enabled | [ami-0d9d77df6579e618a](https://ap-northeast-2.console.aws.amazon.com/ec2/home?region=ap-northeast-2#launchInstanceWizard:ami=ami-0d9d77df6579e618a)
ap-northeast-1 | HVM - ECS enabled | [ami-09e957ac11ef430a3](https://ap-northeast-1.console.aws.amazon.com/ec2/home?region=ap-northeast-1#launchInstanceWizard:ami=ami-09e957ac11ef430a3)
sa-east-1 | HVM - ECS enabled | [ami-09c22f3ce89280ed4](https://sa-east-1.console.aws.amazon.com/ec2/home?region=sa-east-1#launchInstanceWizard:ami=ami-09c22f3ce89280ed4)
ca-central-1 | HVM - ECS enabled | [ami-016ac80225e649cf9](https://ca-central-1.console.aws.amazon.com/ec2/home?region=ca-central-1#launchInstanceWizard:ami=ami-016ac80225e649cf9)
ap-southeast-1 | HVM - ECS enabled | [ami-06cdfc80bdbd6f419](https://ap-southeast-1.console.aws.amazon.com/ec2/home?region=ap-southeast-1#launchInstanceWizard:ami=ami-06cdfc80bdbd6f419)
ap-southeast-2 | HVM - ECS enabled | [ami-0335f7bb1c51c0a74](https://ap-southeast-2.console.aws.amazon.com/ec2/home?region=ap-southeast-2#launchInstanceWizard:ami=ami-0335f7bb1c51c0a74)
eu-central-1 | HVM - ECS enabled | [ami-0af71ec7ee8b729be](https://eu-central-1.console.aws.amazon.com/ec2/home?region=eu-central-1#launchInstanceWizard:ami=ami-0af71ec7ee8b729be)
us-east-1 | HVM - ECS enabled | [ami-07209d7ec9e7545b4](https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#launchInstanceWizard:ami=ami-07209d7ec9e7545b4)
us-east-2 | HVM - ECS enabled | [ami-046358fe356dd0e35](https://us-east-2.console.aws.amazon.com/ec2/home?region=us-east-2#launchInstanceWizard:ami=ami-046358fe356dd0e35)
us-west-1 | HVM - ECS enabled | [ami-031bcb65b47cb0a77](https://us-west-1.console.aws.amazon.com/ec2/home?region=us-west-1#launchInstanceWizard:ami=ami-031bcb65b47cb0a77)
us-west-2 | HVM - ECS enabled | [ami-0d92d296ecb13ea45](https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#launchInstanceWizard:ami=ami-0d92d296ecb13ea45)
cn-north-1 | HVM - ECS enabled | [ami-04f1668aaf990acf6](https://cn-north-1.console.amazonaws.cn/ec2/home?region=cn-north-1#launchInstanceWizard:ami=ami-04f1668aaf990acf6)
cn-northwest-1 | HVM - ECS enabled | [ami-0771f259ffce58280](https://cn-northwest-1.console.amazonaws.cn/ec2/home?region=cn-northwest-1#launchInstanceWizard:ami=ami-0771f259ffce58280)
#### Recovery console

`rancher.recovery=true` will start a single-user `root` bash session early in the boot process, with no network or persistent filesystem mounted. This can be used to fix disk problems, or to debug your system.

#### Enable/Disable sshd

#### Autologin console

`rancher.autologin=<tty...>` will automatically log in on the specified console - common values are `tty1`, `ttyS0` and `ttyAMA0` - depending on your platform.

#### Enable/Disable hypervisor service auto-enable
---
title: Date and time zone
weight: 121
---

The default console keeps time in the Coordinated Universal Time (UTC) zone and synchronizes clocks with the Network Time Protocol (NTP). The Network Time Protocol daemon (ntpd) is an operating system program that maintains the system time in synchronization with time servers using NTP.

RancherOS runs ntpd in a System Docker container. You can update its configuration by updating `/etc/ntp.conf`. For an example of how to update a file such as `/etc/ntp.conf` within a container, refer to [this page.]({{< baseurl >}}/os/v1.x/en/installation/configuration/write-files/#writing-files-in-specific-system-services)

The default console cannot support changing the time zone, because including `tzdata` (time zone data) would increase the ISO size. However, you can change the time zone in a container by passing a flag that specifies the time zone when you run the container:

```
$ docker run -e TZ=Europe/Amsterdam debian:jessie date
Tue Aug 20 09:28:19 CEST 2019
```

You may need to install `tzdata` in some images:

```
$ docker run -e TZ=Asia/Shanghai -e DEBIAN_FRONTEND=noninteractive -it --rm ubuntu /bin/bash -c "apt-get update && apt-get install -yq tzdata && date"
Thu Aug 29 08:13:02 CST 2019
```
_Available as of v1.4.x_

The docker0 bridge can be configured with Docker args; the change takes effect after a reboot.

```
$ ros config set rancher.docker.bip 192.168.0.0/16
```
### Configuring System Docker
_Available as of v1.4.x_

The docker-sys bridge can be configured with System Docker args; the change takes effect after a reboot.

```
$ ros config set rancher.system_docker.bip 172.19.0.0/16
```
_Available as of v1.4.x_

The default path of system-docker logs is `/var/log/system-docker.log`. If you want to write the system-docker logs to a separate partition, e.g. the [RANCHER_OEM partition]({{< baseurl >}}/os/v1.x/en/about/custom-partition-layout/#use-rancher-oem-partition), you can try `rancher.defaults.system_docker_logs`:

```
#cloud-config
```
_Available as of v1.5.0_

When RancherOS is booted, you start with a User Docker service that is running in System Docker. With v1.5.0, RancherOS has the ability to create additional User Docker services that can run at the same time.

#### Terminology

Throughout the rest of this documentation, we use the following simplified terms when describing Docker.
| Terminology | Definition |
|-----------------------|--------------------------------------------------|
#### Pre-Requisites

User Docker must be set to Docker 17.12.1 or earlier. If it's a later Docker version, it will produce errors when creating a user-defined network in System Docker.

```
$ ros engine switch docker-17.12.1-ce
```
You will need to create a user-defined network, which will be used when creating the Other User Docker.

```
$ system-docker network create --subnet=172.20.0.0/16 dind
```
In order to create another User Docker, you will use `ros engine create`.

```
$ ros engine create otheruserdockername --network=dind --fixed-ip=172.20.0.2
```

After the Other User Docker service is created, users can query this service like other services.

```
$ ros service list
...
disabled volume-nfs
enabled otheruserdockername
```
You can use `ros service up` to start the Other User Docker service.

```
$ ros service up otheruserdockername
```

After the Other User Docker service is running, you can interact with it just like the built-in User Docker; you just append `-<SERVICE_NAME>` to `docker`.

```
$ docker-otheruserdockername ps -a
```
#### SSH into the Other User Docker container

When creating the Other User Docker, you can set an external SSH port so you can SSH into the Other User Docker container in System Docker. By using `--ssh-port` and adding SSH keys with `--authorized-keys`, you can set up this optional SSH port.

```
$ ros engine create --help
...
```

When using `--authorized-keys`, you will need to put the key file in one of the following locations:

```
...
/home/
```
RancherOS will generate a random password for each Other User Docker container, which can be viewed in the container logs. If you do not set any SSH keys, the password can be used.

```
$ system-docker logs otheruserdockername
...
password: xCrw6fEG
======================================
```
In System Docker, you can SSH into any Other User Docker Container using `ssh`.

```
$ system-docker ps
...
$ ssh root@<OTHERUSERDOCKER_CONTAINER_IP>
```
#### Removing any Other User Docker Service

We recommend using `ros engine rm` to remove any Other User Docker service.

```
$ ros engine rm otheruserdockername
```
```
runcmd:
...
- echo "test" > /home/rancher/test2
```

Commands specified using `runcmd` will be executed within the context of the `console` container.

### Running Docker commands
```
FROM scratch
COPY engine /engine
```

Once the image is built, a [system service]({{< baseurl >}}/os/v1.x/en/installation/system-services/adding-system-services/) configuration file must be created. An [example file](https://github.com/rancher/os-services/blob/master/d/docker-18.06.3-ce.yml) can be found in the rancher/os-services repo. Change the `image` field to point to the Docker engine image you've built.

All of the previously mentioned methods of switching Docker engines are now available. For example, if your service file is located at `https://myservicefile`, then the following cloud-config file could be used to use your custom Docker engine.
_Available as of v1.5.0_

When building RancherOS, you have the ability to automatically start in a supported console instead of booting into the default console and switching to your desired one.

Here is an example of building RancherOS and having the `alpine` console enabled: