From fb301936a97b8cd8f80317f7fcfbe2496f3008bc Mon Sep 17 00:00:00 2001 From: Akihiro Suda Date: Thu, 6 May 2021 17:41:12 +0900 Subject: [PATCH 01/12] k3s: update instruction for rootless mode Signed-off-by: Akihiro Suda --- content/k3s/latest/en/advanced/_index.md | 55 +++++++++++++------- content/k3s/latest/en/known-issues/_index.md | 4 +- 2 files changed, 38 insertions(+), 21 deletions(-) diff --git a/content/k3s/latest/en/advanced/_index.md b/content/k3s/latest/en/advanced/_index.md index a557e491fc4..9c83909a734 100644 --- a/content/k3s/latest/en/advanced/_index.md +++ b/content/k3s/latest/en/advanced/_index.md @@ -13,7 +13,7 @@ This section contains advanced information describing the different ways you can - [Using Docker as the container runtime](#using-docker-as-the-container-runtime) - [Configuring containerd](#configuring-containerd) - [Secrets Encryption Config (Experimental)](#secrets-encryption-config-experimental) -- [Running K3s with RootlessKit (Experimental)](#running-k3s-with-rootlesskit-experimental) +- [Running K3s with Rootless mode (Experimental)](#running-k3s-with-rootless-mode-experimental) - [Node labels and taints](#node-labels-and-taints) - [Starting the server with the installation script](#starting-the-server-with-the-installation-script) - [Additional preparation for Alpine Linux setup](#additional-preparation-for-alpine-linux-setup) @@ -163,18 +163,15 @@ As of v1.17.4+k3s1, K3s added the experimental feature of enabling secrets encry Once enabled any created secret will be encrypted with this key. Note that if you disable encryption then any encrypted secrets will not be readable until you enable encryption again. -# Running K3s with RootlessKit (Experimental) +# Running K3s with Rootless mode (Experimental) > **Warning:** This feature is experimental. 
-RootlessKit is a kind of Linux-native "fake root" utility, made for mainly [running Docker and Kubernetes as an unprivileged user,](https://github.com/rootless-containers/usernetes) so as to protect the real root on the host from potential container-breakout attacks. +Rootless mode allows running the entire k3s as an unprivileged user, so as to protect the real root on the host from potential container-breakout attacks. -Initial rootless support has been added but there are a series of significant usability issues surrounding it. +See also https://rootlesscontaine.rs/ to learn about Rootless mode. -We are releasing the initial support for those interested in rootless and hopefully some people can help to improve the usability. First, ensure you have a proper setup and support for user namespaces. Refer to the [requirements section](https://github.com/rootless-containers/rootlesskit#setup) in RootlessKit for instructions. -In short, latest Ubuntu is your best bet for this to work. - -### Known Issues with RootlessKit +### Known Issues with Rootless mode * **Ports** @@ -184,24 +181,44 @@ In short, latest Ubuntu is your best bet for this to work. Currently, only `LoadBalancer` services are automatically bound. -* **Daemon lifecycle** - - Once you kill K3s and then start a new instance of K3s it will create a new network namespace, but it doesn't kill the old pods. So you are left - with a fairly broken setup. This is the main issue at the moment, how to deal with the network namespace. - - The issue is tracked in https://github.com/rootless-containers/rootlesskit/issues/65 - * **Cgroups** - Cgroups are not supported. + Cgroup v1 is not supported. v2 is supported. + +* **Multi-node cluster** + + Multi-node installation is untested and undocumented. ### Running Servers and Agents with Rootless +* Enable cgroup v2 delegation, see https://rootlesscontaine.rs/getting-started/common/cgroup2/ .
+ This step is optional, but highly recommended for enabling CPU and memory resource limitation. -Just add `--rootless` flag to either server or agent. So run `k3s server --rootless` and then look for the message `Wrote kubeconfig [SOME PATH]` for where your kubeconfig file is. +* Download `k3s-rootless.service` from [`https://github.com/k3s-io/k3s/blob//k3s-rootless.service`](https://github.com/k3s-io/k3s/blob/master/k3s-rootless.service). + Make sure to use the same version of `k3s-rootless.service` and `k3s`. -For more information about setting up the kubeconfig file, refer to the [section about cluster access.](../cluster-access) +* Install `k3s-rootless.service` to `~/.config/systemd/user/k3s-rootless.service`. + Installing this file as a system-wide service (`/etc/systemd/...`) is not supported. + Depending on the path of the `k3s` binary, you might need to modify the `ExecStart=/usr/local/bin/k3s ...` line of the file. -> Be careful, if you use `-o` to write the kubeconfig to a different directory it will probably not work. This is because the K3s instance in running in a different mount namespace. +* Run `systemctl --user daemon-reload` + +* Run `systemctl --user enable --now k3s-rootless` + +* Run `KUBECONFIG=~/.kube/k3s.yaml kubectl get pods -A`, and make sure the pods are running. > **Note:** Don't try to run `k3s server --rootless` on a terminal, as it doesn't enable cgroup v2 delegation. > If you really need to try it on a terminal, prepend `systemd-run --user -p Delegate=yes --tty` to create a systemd scope.
+> +> i.e., +> ```console +> $ systemd-run --user -p Delegate=yes --tty k3s server --rootless +> ``` + +### Troubleshooting + +* Run `systemctl --user status k3s-rootless` to check the daemon status +* Run `journalctl --user -f -u k3s-rootless` to see the daemon log +* See also https://rootlesscontaine.rs/ # Node Labels and Taints diff --git a/content/k3s/latest/en/known-issues/_index.md b/content/k3s/latest/en/known-issues/_index.md index 8107e8a7451..d12fafa2a5c 100644 --- a/content/k3s/latest/en/known-issues/_index.md +++ b/content/k3s/latest/en/known-issues/_index.md @@ -12,6 +12,6 @@ If you plan to use K3s with docker, Docker installed via a snap package is not r If you are running iptables in nftables mode instead of legacy you might encounter issues. We recommend utilizing newer iptables (such as 1.6.1+) to avoid issues. -**RootlessKit** +**Rootless Mode** -Running K3s with RootlessKit is experimental and has several [known issues.]({{}}/k3s/latest/en/advanced/#known-issues-with-rootlesskit) +Running K3s with Rootless mode is experimental and has several [known issues.]({{}}/k3s/latest/en/advanced/#known-issues-with-rootless-mode) From 84fe04c54157d613d6cbc390f991c904929b5c59 Mon Sep 17 00:00:00 2001 From: Brad Davidson Date: Wed, 26 May 2021 16:04:58 -0700 Subject: [PATCH 02/12] Update _index.md --- .../install-rancher-on-k8s/_index.md | 28 ++++++++++++------- 1 file changed, 18 insertions(+), 10 deletions(-) diff --git a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md index 7d4887312eb..2b72cd8e8e7 100644 --- a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md @@ -172,18 +172,20 @@ The exact command to install Rancher differs depending on the certificate config {{% tab "Rancher-generated Certificates" %}} -The default is for Rancher to generate a CA and uses 
`cert-manager` to issue the certificate for access to the Rancher server interface. +The default is for Rancher to generate a self-signed CA and use `cert-manager` to issue the certificate for access to the Rancher server interface. Because `rancher` is the default option for `ingress.tls.source`, we are not specifying `ingress.tls.source` when running the `helm install` command. -- Set the `hostname` to the DNS name you pointed at your load balancer. +- Set `hostname` to the DNS record that resolves to your load balancer. +- Set `replicas` to the number of replicas to use for the Rancher Deployment. This defaults to 3; if you have fewer than 3 nodes in your cluster, you should reduce it accordingly. +- To install a specific Rancher version, use the `--version` flag, for example: `--version 2.3.6`. - If you are installing an alpha version, Helm requires adding the `--devel` option to the command. -- To install a specific Rancher version, use the `--version` flag, example: `--version 2.3.6` ``` helm install rancher rancher-/rancher \ --namespace cattle-system \ - --set hostname=rancher.my.org + --set hostname=rancher.my.org \ + --set replicas=3 ``` Wait for Rancher to be rolled out: @@ -201,15 +203,18 @@ This option uses `cert-manager` to automatically request and renew [Let's Encryp In the following command, -- `hostname` is set to the public DNS record, -- `ingress.tls.source` is set to `letsEncrypt` -- `letsEncrypt.email` is set to the email address used for communication about your certificate (for example, expiry notices) +- Set `hostname` to the public DNS record that resolves to your load balancer. +- Set `replicas` to the number of replicas to use for the Rancher Deployment. This defaults to 3; if you have fewer than 3 nodes in your cluster, you should reduce it accordingly. +- Set `ingress.tls.source` to `letsEncrypt`. +- Set `letsEncrypt.email` to the email address used for communication about your certificate (for example, expiry notices).
+- To install a specific Rancher version, use the `--version` flag, for example: `--version 2.3.6`. - If you are installing an alpha version, Helm requires adding the `--devel` option to the command. ``` helm install rancher rancher-/rancher \ --namespace cattle-system \ --set hostname=rancher.my.org \ + --set replicas=3 \ --set ingress.tls.source=letsEncrypt \ --set letsEncrypt.email=me@example.org ``` @@ -226,20 +231,23 @@ deployment "rancher" successfully rolled out {{% tab "Certificates from Files" %}} In this option, Kubernetes secrets are created from your own certificates for Rancher to use. -When you run this command, the `hostname` option must match the `Common Name` or a `Subject Alternative Names` entry in the server certificate or the Ingress controller will fail to configure correctly. +When you run this command, the `hostname` option must match the `Common Name` or a `Subject Alternative Names` entry in the server certificate, or the Ingress controller will fail to configure correctly. Although an entry in the `Subject Alternative Names` is technically required, having a matching `Common Name` maximizes compatibility with older browsers and applications. > If you want to check if your certificates are correct, see [How do I check Common Name and Subject Alternative Names in my server certificate?]({{}}/rancher/v2.5/en/faq/technical/#how-do-i-check-common-name-and-subject-alternative-names-in-my-server-certificate) -- Set the `hostname`. +- Set `hostname` as appropriate for your certificate, as described above. +- Set `replicas` to the number of replicas to use for the Rancher Deployment. This defaults to 3; if you have fewer than 3 nodes in your cluster, you should reduce it accordingly. - Set `ingress.tls.source` to `secret`. +- To install a specific Rancher version, use the `--version` flag, for example: `--version 2.3.6`. - If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
``` helm install rancher rancher-/rancher \ --namespace cattle-system \ --set hostname=rancher.my.org \ + --set replicas=3 \ --set ingress.tls.source=secret ``` @@ -263,7 +271,7 @@ The Rancher chart configuration has many options for customizing the installatio - [Private Docker Image Registry]({{}}/rancher/v2.5/en/installation/install-rancher-on-k8s/chart-options/#private-registry-and-air-gap-installs) - [TLS Termination on an External Load Balancer]({{}}/rancher/v2.5/en/installation/install-rancher-on-k8s/chart-options/#external-tls-termination) -See the [Chart Options]({{}}/rancher/v2.5/en/installation/resources/chart-options/) for the full list of options. +See the [Chart Options]({{}}/rancher/v2.5/en/installation/install-rancher-on-k8s/chart-options/) for the full list of options. ### 6. Verify that the Rancher Server is Successfully Deployed From 8ae6e7a448bbc2db47d796f5d5ec9a5badb5ea09 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Wed, 26 May 2021 21:10:22 -0700 Subject: [PATCH 03/12] Say all node roles are required for RKE clusters in more places #3126 --- .../en/cluster-provisioning/rke-clusters/custom-nodes/_index.md | 2 ++ .../rke-clusters/node-pools/azure/_index.md | 2 ++ .../rke-clusters/node-pools/digital-ocean/_index.md | 2 ++ .../cluster-provisioning/rke-clusters/node-pools/ec2/_index.md | 2 ++ .../rke-clusters/node-pools/vsphere/_index.md | 1 + .../node-pools/vsphere/provisioning-vsphere-clusters/_index.md | 2 ++ .../rke-clusters/windows-clusters/_index.md | 2 ++ 7 files changed, 13 insertions(+) diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md index 419860882dd..9f1f7fabd91 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md @@ -42,6 +42,8 @@ Provision the host according to the 
[installation requirements]({{}}/ra ### 2. Create the Custom Cluster +Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present. + 1. From the **Clusters** page, click **Add Cluster**. 2. Choose **Custom**. diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/azure/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/azure/_index.md index 99a61942c22..7442cee0954 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/azure/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/azure/_index.md @@ -66,6 +66,8 @@ Creating a [node template]({{}}/rancher/v2.5/en/cluster-provisioning/rk Use Rancher to create a Kubernetes cluster in Azure. +Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present. + 1. From the **Clusters** page, click **Add Cluster**. 1. Choose **Azure**. 1. Enter a **Cluster Name**. diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/_index.md index 61b810178d5..ed7581ab8c3 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/_index.md @@ -37,6 +37,8 @@ Creating a [node template]({{}}/rancher/v2.5/en/cluster-provisioning/rk ### 3. Create a cluster with node pools using the node template +Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present. + 1. From the **Clusters** page, click **Add Cluster**. 1. Choose **DigitalOcean**. 1. Enter a **Cluster Name**. 
diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md index 8515bcf6a05..78d54a58b6e 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md @@ -51,6 +51,8 @@ Creating a [node template]({{}}/rancher/v2.5/en/cluster-provisioning/rk Add one or more node pools to your cluster. For more information about node pools, see [this section.]({{}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools) +Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present. + 1. From the **Clusters** page, click **Add Cluster**. 1. Choose **Amazon EC2**. 1. Enter a **Cluster Name**. diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/vsphere/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/vsphere/_index.md index 695503f45af..f9672b462df 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/vsphere/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/vsphere/_index.md @@ -38,6 +38,7 @@ For the fields to be populated, your setup needs to fulfill the [prerequisites.] ### More Supported Operating Systems You can provision VMs with any operating system that supports `cloud-init`. Only YAML format is supported for the [cloud config.](https://cloudinit.readthedocs.io/en/latest/topics/examples.html) + ### Video Walkthrough of v2.3.3 Node Template Features In this YouTube video, we demonstrate how to set up a node template with the new features designed to help you bring cloud operations to on-premises clusters. 
diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md index 03a33fbc749..aa86d8dfaf1 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md @@ -77,6 +77,8 @@ Creating a [node template]({{}}/rancher/v2.5/en/cluster-provisioning/rk Use Rancher to create a Kubernetes cluster in vSphere. +Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present. + 1. Navigate to **Clusters** in the **Global** view. 1. Click **Add Cluster** and select the **vSphere** infrastructure provider. 1. Enter a **Cluster Name.** diff --git a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md index d4cab37b33d..874e42467fc 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md @@ -89,6 +89,8 @@ The Kubernetes cluster management nodes (`etcd` and `controlplane`) must be run The `worker` nodes, which is where your workloads will be deployed on, will typically be Windows nodes, but there must be at least one `worker` node that is run on Linux in order to run the Rancher cluster agent, DNS, metrics server, and Ingress related containers. +Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present. 
+ We recommend the minimum three-node architecture listed in the table below, but you can always add additional Linux and Windows workers to scale up your cluster for redundancy: From 7d08497a00cc1c6152608e6b6a741093979c71a5 Mon Sep 17 00:00:00 2001 From: dkeightley <20566450+dkeightley@users.noreply.github.com> Date: Fri, 28 May 2021 10:46:48 +1200 Subject: [PATCH 04/12] Update repo/tag to reflect latest image location and version --- .../en/backups/v2.5/configuration/storage-config/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.x/en/backups/v2.5/configuration/storage-config/_index.md b/content/rancher/v2.x/en/backups/v2.5/configuration/storage-config/_index.md index 3f08a943c93..be6622c5b90 100644 --- a/content/rancher/v2.x/en/backups/v2.5/configuration/storage-config/_index.md +++ b/content/rancher/v2.x/en/backups/v2.5/configuration/storage-config/_index.md @@ -63,8 +63,8 @@ For more information about `values.yaml` files and configuring Helm charts durin ```yaml image: - repository: rancher/rancher-backup - tag: v0.0.1-rc10 + repository: rancher/backup-restore-operator + tag: v1.0.3 ## Default s3 bucket for storing all backup files created by the rancher-backup operator s3: From c6891889768b0e3a68c381d6ed4e587b017b346a Mon Sep 17 00:00:00 2001 From: Daishan Date: Fri, 28 May 2021 12:51:16 -0700 Subject: [PATCH 05/12] Add clusterRole and binding to use restricted psp --- .../rancher-2.5/1.6-hardening-2.5/_index.md | 30 +++++++++++++++++++ 1 file changed, 30 insertions(+) diff --git a/content/rancher/v2.x/en/security/rancher-2.5/1.6-hardening-2.5/_index.md b/content/rancher/v2.x/en/security/rancher-2.5/1.6-hardening-2.5/_index.md index c49eb6a1d40..82970d87e17 100644 --- a/content/rancher/v2.x/en/security/rancher-2.5/1.6-hardening-2.5/_index.md +++ b/content/rancher/v2.x/en/security/rancher-2.5/1.6-hardening-2.5/_index.md @@ -287,6 +287,36 @@ addons: | - configMap - projected --- + apiVersion: 
rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: psp:restricted + rules: + - apiGroups: + - extensions + resourceNames: + - restricted + resources: + - podsecuritypolicies + verbs: + - use + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: psp:restricted + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp:restricted + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:authenticated + --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: From 4581ce5dcfa7998aafa112279ed4823b33dff417 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Mon, 31 May 2021 18:22:21 +0000 Subject: [PATCH 06/12] Add clusterRole and binding to use restricted psp to versioned docs --- .../rancher-2.5/1.6-hardening-2.5/_index.md | 30 +++++++++++++++++++ 1 file changed, 30 insertions(+) diff --git a/content/rancher/v2.5/en/security/rancher-2.5/1.6-hardening-2.5/_index.md b/content/rancher/v2.5/en/security/rancher-2.5/1.6-hardening-2.5/_index.md index 59588fa422c..b504be806e5 100644 --- a/content/rancher/v2.5/en/security/rancher-2.5/1.6-hardening-2.5/_index.md +++ b/content/rancher/v2.5/en/security/rancher-2.5/1.6-hardening-2.5/_index.md @@ -286,6 +286,36 @@ addons: | - configMap - projected --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: psp:restricted + rules: + - apiGroups: + - extensions + resourceNames: + - restricted + resources: + - podsecuritypolicies + verbs: + - use + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: psp:restricted + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp:restricted + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: 
system:authenticated + --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: From ba592d39030ff57984aec15f9f60f5a68a872257 Mon Sep 17 00:00:00 2001 From: Billy Tat Date: Tue, 1 Jun 2021 08:50:21 -0700 Subject: [PATCH 07/12] Fix Helm value: no_proxy to noProxy --- .../behind-proxy/install-rancher/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.5/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md b/content/rancher/v2.5/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md index 93d0a036753..add0d1c7a7f 100644 --- a/content/rancher/v2.5/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md +++ b/content/rancher/v2.5/en/installation/other-installation-methods/behind-proxy/install-rancher/_index.md @@ -34,7 +34,7 @@ helm upgrade --install cert-manager jetstack/cert-manager \ --namespace cert-manager --version v0.15.2 \ --set http_proxy=http://${proxy_host} \ --set https_proxy=http://${proxy_host} \ - --set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local + --set noProxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local ``` Now you should wait until cert-manager is finished starting up: @@ -65,7 +65,7 @@ helm upgrade --install rancher rancher-latest/rancher \ --namespace cattle-system \ --set hostname=rancher.example.com \ --set proxy=http://${proxy_host} - --set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local + --set noProxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local ``` After waiting for the deployment to finish: From f08696a95153a9dcf565d169beeaf69073856fa0 Mon Sep 17 00:00:00 2001 From: Akihiro Suda Date: Wed, 2 Jun 2021 14:05:56 +0900 Subject: [PATCH 08/12] k3s: fix markdown rendering 
Signed-off-by: Akihiro Suda --- content/k3s/latest/en/advanced/_index.md | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/content/k3s/latest/en/advanced/_index.md b/content/k3s/latest/en/advanced/_index.md index 9c83909a734..a066e71f759 100644 --- a/content/k3s/latest/en/advanced/_index.md +++ b/content/k3s/latest/en/advanced/_index.md @@ -209,10 +209,7 @@ See also https://rootlesscontaine.rs/ to learn about Rootless mode. > **Note:** Don't try to run `k3s server --rootless` on a terminal, as it doesn't enable cgroup v2 delegation. > If you really need to try it on a terminal, prepend `systemd-run --user -p Delegate=yes --tty` to create a systemd scope. > -> i.e., -> ```console -> $ systemd-run --user -p Delegate=yes --tty k3s server --rootless -> ``` +> i.e., `systemd-run --user -p Delegate=yes --tty k3s server --rootless` ### Troubleshooting From e0e8a77d4d28044e51ee9ba4f78ae868c6fa38a2 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Wed, 2 Jun 2021 03:54:56 -0700 Subject: [PATCH 09/12] Make versioning notice appear on every page of v2.x docs --- layouts/_default/list.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/layouts/_default/list.html b/layouts/_default/list.html index 0cbafa759ff..ac7639d1cfa 100644 --- a/layouts/_default/list.html +++ b/layouts/_default/list.html @@ -25,7 +25,7 @@ {{ $product := index $path 1 }} {{ $version := index $path 2 }} {{ $productVersion := printf "%s/%s" $product $version}} - {{ if eq $productVersion "rancher/v2.x" }} + {{ if in .Dir "rancher/v2.x" }}
We are transitioning to versioned documentation. The v2.x docs will no longer be maintained. For Rancher v2.5 docs, go here. For Rancher v2.0-v2.4 docs, go here.
From de95120806be072be5a824b7e11d03ce0c7c8384 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Mon, 7 Jun 2021 08:52:28 -0700 Subject: [PATCH 10/12] Make versioning structure less confusing --- content/rancher/v2.0-v2.4/_index.md | 2 +- content/rancher/v2.5/_index.md | 2 +- content/rancher/v2.5/en/_index.md | 4 ++-- content/rancher/v2.x/_index.md | 2 +- content/rancher/v2.x/en/_index.md | 6 +++--- 5 files changed, 8 insertions(+), 8 deletions(-) diff --git a/content/rancher/v2.0-v2.4/_index.md b/content/rancher/v2.0-v2.4/_index.md index ff2500ddd4d..25b54e1aff6 100644 --- a/content/rancher/v2.0-v2.4/_index.md +++ b/content/rancher/v2.0-v2.4/_index.md @@ -1,5 +1,5 @@ --- title: v2.0-v2.4.x -weight: 2 +weight: 3 showBreadcrumb: false --- diff --git a/content/rancher/v2.5/_index.md b/content/rancher/v2.5/_index.md index 50324f5568b..512930c7c8d 100644 --- a/content/rancher/v2.5/_index.md +++ b/content/rancher/v2.5/_index.md @@ -1,5 +1,5 @@ --- -title: v2.5.x +title: Rancher v2.5.7+ weight: 1 showBreadcrumb: false --- diff --git a/content/rancher/v2.5/en/_index.md b/content/rancher/v2.5/en/_index.md index 90163fef59c..6e97f69f89a 100644 --- a/content/rancher/v2.5/en/_index.md +++ b/content/rancher/v2.5/en/_index.md @@ -1,6 +1,6 @@ --- -title: "Rancher 2.5" -shortTitle: "Rancher 2.5 (Latest)" +title: "Rancher v2.5.7+ (Latest)" +shortTitle: "Rancher v2.5.7+ (Latest)" description: "Rancher adds significant value on top of Kubernetes: managing hundreds of clusters from one interface, centralizing RBAC, enabling monitoring and alerting. Read more." metaTitle: "Rancher 2.x Docs: What is New?" metaDescription: "Rancher 2 adds significant value on top of Kubernetes: managing hundreds of clusters from one interface, centralizing RBAC, enabling monitoring and alerting. Read more." 
diff --git a/content/rancher/v2.x/_index.md b/content/rancher/v2.x/_index.md index 3a7cda1c1aa..462704d4cd6 100644 --- a/content/rancher/v2.x/_index.md +++ b/content/rancher/v2.x/_index.md @@ -1,5 +1,5 @@ --- title: v2.x -weight: 4 +weight: 2 showBreadcrumb: false --- diff --git a/content/rancher/v2.x/en/_index.md b/content/rancher/v2.x/en/_index.md index 1e324546d82..66454c146b1 100644 --- a/content/rancher/v2.x/en/_index.md +++ b/content/rancher/v2.x/en/_index.md @@ -1,11 +1,11 @@ --- -title: "Rancher 2.0-2.5.6 (Formerly 2.x)" -shortTitle: "Rancher 2.5.6 (Archive)" +title: "Pre-Versioned Docs from 2.0-2.5.6 (Formerly 2.x)" +shortTitle: "Rancher 2.0-2.5.6" description: "Rancher adds significant value on top of Kubernetes: managing hundreds of clusters from one interface, centralizing RBAC, enabling monitoring and alerting. Read more." metaTitle: "Rancher 2.x Docs: What is New?" metaDescription: "Rancher 2 adds significant value on top of Kubernetes: managing hundreds of clusters from one interface, centralizing RBAC, enabling monitoring and alerting. Read more." insertOneSix: false -weight: 1 +weight: 2 ctaBanner: 0 --- From aa0fed9da0076ce0c82594fc9d71587e4ddfed45 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Thu, 10 Jun 2021 12:51:57 -0700 Subject: [PATCH 11/12] Use stronger language to say pipelines and multi-cluster apps are deprecated in 2.5+ --- layouts/_default/list.html | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/layouts/_default/list.html b/layouts/_default/list.html index ac7639d1cfa..3536040c57a 100644 --- a/layouts/_default/list.html +++ b/layouts/_default/list.html @@ -30,6 +30,26 @@ We are transitioning to versioned documentation. The v2.x docs will no longer be maintained. For Rancher v2.5 docs, go here. For Rancher v2.0-v2.4 docs, go here. {{end}} + {{ if in .Dir "/rancher/v2.5/en/pipelines/" }} +
+ As of Rancher v2.5, Git-based deployment pipelines are now recommended to be handled with Rancher Continuous Delivery powered by Fleet, available in Cluster Explorer. +
+ {{end}} + {{ if in .Dir "/rancher/v2.x/en/pipelines/" }} +
+ As of Rancher v2.5, Git-based deployment pipelines are now recommended to be handled with Rancher Continuous Delivery powered by Fleet, available in Cluster Explorer. +
+ {{end}} + {{ if in .Dir "/rancher/v2.5/en/deploy-across-clusters/multi-cluster-apps/" }} +
+ As of Rancher v2.5, we now recommend using Fleet for deploying apps across clusters. +
+ {{end}} + {{ if in .Dir "/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps/" }} +
+ As of Rancher v2.5, we now recommend using Fleet for deploying apps across clusters. +
+ {{end}}
here. For Rancher v2.0-v2.4 docs, go here.
{{end}} - {{ if in .Dir "/rancher/v2.5/en/pipelines/" }} + {{ if in .Dir "/rancher/v2.5/en/pipelines" }}
As of Rancher v2.5, Git-based deployment pipelines are now recommended to be handled with Rancher Continuous Delivery powered by Fleet, available in Cluster Explorer.
{{end}} - {{ if in .Dir "/rancher/v2.x/en/pipelines/" }} + {{ if in .Dir "/rancher/v2.x/en/pipelines" }}
As of Rancher v2.5, Git-based deployment pipelines are now recommended to be handled with Rancher Continuous Delivery powered by Fleet, available in Cluster Explorer.
{{end}} - {{ if in .Dir "/rancher/v2.5/en/deploy-across-clusters/multi-cluster-apps/" }} + {{ if in .Dir "/rancher/v2.5/en/deploy-across-clusters/multi-cluster-apps" }}
As of Rancher v2.5, we now recommend using Fleet for deploying apps across clusters.
{{end}} - {{ if in .Dir "/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps/" }} + {{ if in .Dir "/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps" }}
As of Rancher v2.5, we now recommend using Fleet for deploying apps across clusters.