From 1433d8108a7aa8b831624f4e2a30597348777883 Mon Sep 17 00:00:00 2001 From: Sebastiaan van Steenis Date: Wed, 31 Oct 2018 12:59:06 +0100 Subject: [PATCH 01/15] Improve HA install steps --- .../rancher/v2.x/en/faq/technical/_index.md | 18 ++++ .../rancher/v2.x/en/installation/ha/_index.md | 4 +- .../en/installation/ha/helm-rancher/_index.md | 97 ++++++++++++++----- .../ha/helm-rancher/chart-options/_index.md | 2 +- .../ha/helm-rancher/tls-secrets/_index.md | 10 +- 5 files changed, 99 insertions(+), 32 deletions(-) diff --git a/content/rancher/v2.x/en/faq/technical/_index.md b/content/rancher/v2.x/en/faq/technical/_index.md index 2ead99f4b86..2a118cc0105 100644 --- a/content/rancher/v2.x/en/faq/technical/_index.md +++ b/content/rancher/v2.x/en/faq/technical/_index.md @@ -123,3 +123,21 @@ When the node is removed from the cluster, and the node is cleaned, you can read ### How can I add additional arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? You can add additional arguments/binds/environment variables via the [Config File]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables]({{< baseurl >}}/rke/v0.1.x/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls]({{< baseurl >}}/rke/v0.1.x/en/example-yamls/). + +### How do I check `Common Name` and `Subject Alternative Names` in my server certificate? + +Although technically an entry in `Subject Alternative Names` is required, having the hostname in both `Common Name` and as entry in `Subject Alternative Names` gives you maximum compatibility with older browser/applications. + +Check `Common Name`: + +``` +openssl x509 -noout -subject -in cert.pem +subject= /CN=rancher.my.org +``` + +Check `Subject Alternative Names`: + +``` +openssl x509 -noout -in cert.pem -text | grep DNS + DNS:rancher.my.org +``` diff --git a/content/rancher/v2.x/en/installation/ha/_index.md b/content/rancher/v2.x/en/installation/ha/_index.md index 24fe7a7b010..1c040ed93a4 100644 --- a/content/rancher/v2.x/en/installation/ha/_index.md +++ b/content/rancher/v2.x/en/installation/ha/_index.md @@ -11,13 +11,13 @@ This procedure walks you through setting up a 3-node cluster with RKE and instal ## Recommended Architecture -* DNS for Rancher should resolve to a layer 4 load balancer +* DNS for Rancher should resolve to a Layer 4 load balancer (TCP) * The Load Balancer should forward port TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster. * The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443. * The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment. -HA Rancher install with layer 4 load balancer, depicting SSL termination at ingress controllers ![Rancher HA]({{< baseurl >}}/img/rancher/ha/rancher2ha.svg) +HA Rancher install with Layer 4 load balancer (TCP), depicting SSL termination at ingress controllers ## Required Tools diff --git a/content/rancher/v2.x/en/installation/ha/helm-rancher/_index.md b/content/rancher/v2.x/en/installation/ha/helm-rancher/_index.md index e813341cf31..cb0b97ca981 100644 --- a/content/rancher/v2.x/en/installation/ha/helm-rancher/_index.md +++ b/content/rancher/v2.x/en/installation/ha/helm-rancher/_index.md @@ -11,17 +11,31 @@ Rancher installation is managed using the Helm package manager for Kubernetes. 
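Before adding the repository, you may want to confirm that the `helm` client can reach Tiller in the cluster; this check is optional, and the version output below is only an illustrative example:

```
helm version
Client: &version.Version{SemVer:"v2.11.0"}
Server: &version.Version{SemVer:"v2.11.0"}
```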
Use `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories).

-Replace `` with the Helm chart repository that you want to use (i.e. `latest` or `stable`).
+Replace both occurrences of `` with the Helm chart repository that you want to use (i.e. `latest` or `stable`).

```
helm repo add rancher- https://releases.rancher.com/server-charts/
```

-### Install cert-manager
+### Choose your SSL Configuration

-> **Note:** cert-manager is only required for Rancher generated and LetsEncrypt issued certificates. You may skip this step if you are bringing your own certificates or using the `ingress.tls.source=secret` option.
+Rancher Server is designed to be secure by default and requires SSL/TLS configuration.

-Rancher relies on [cert-manager](https://github.com/kubernetes/charts/tree/master/stable/cert-manager) from the official Kubernetes Helm chart repository to issue self-signed or LetsEncrypt certificates.
+There are three recommended options for the source of the certificate.
+
+> **Note:** If you want to terminate SSL/TLS externally, see [TLS termination on an External Load Balancer]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#external-tls-termination).
+
+| Configuration | Chart option | Description | Requires cert-manager |
+|-----|-----|-----|-----|
+| [Rancher Generated Certificates](#rancher-generated-certificates) | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed)<br/>
This is the **default** | [yes](#optional-install-cert-manager) | +| [Let’s Encrypt](#let-s-encrypt) | `ingress.tls.source=letsEncrypt` | Use [Let's Encrypt](https://letsencrypt.org/) to issue a certificate | [yes](#optional-install-cert-manager) | +| [Certificates from Files](#certificates-from-files) | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s) | no | + +### Optional: Install cert-manager + +> **Note:** cert-manager is only required for certificates issued by Rancher's generated CA (`ingress.tls.source=rancher`) and Let's Encrypt issued certificates (`ingress.tls.source=letsEncrypt`). You should skip this step if you are using your own certificate files (option `ingress.tls.source=secret`) or if you use [TLS termination on an External Load Balancer]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#external-tls-termination). + +Rancher relies on [cert-manager](https://github.com/kubernetes/charts/tree/master/stable/cert-manager) from the official Kubernetes Helm chart repository to issue certificates from Rancher's own generated CA or to request Let's Encrypt certificates. Install `cert-manager` from Kubernetes Helm chart repository. @@ -31,21 +45,21 @@ helm install stable/cert-manager \ --namespace kube-system ``` -### Choose your SSL Configuration +Wait for `cert-manager` to be rolled out: -Rancher server is designed to be secure by default and requires SSL/TLS configuration. - -There are three options for the source of the certificate. - -1. `rancher` - (Default) Use Rancher generated CA/Certificates. -2. `letsEncrypt` - Use [LetsEncrypt](https://letsencrypt.org/) to issue a cert. -3. `secret` - Configure a Kubernetes Secret with your certificate files. +``` +kubectl -n kube-system rollout status deploy/cert-manager +Waiting for deployment "cert-manager" rollout to finish: 0 of 1 updated replicas are available... +deployment "cert-manager" successfully rolled out +```
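Before moving on, you can also verify that the cert-manager pod itself is running; the pod name and age in this output are illustrative and will differ in your cluster:

```
kubectl -n kube-system get pods -l app=cert-manager
NAME                            READY     STATUS    RESTARTS   AGE
cert-manager-6464494858-wqvst   1/1       Running   0          1m
```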
-#### (Default) Rancher Generated Certificates +#### Rancher Generated Certificates -The default is for Rancher to generate a CA and use the `cert-manager` to issue the certificate for access to the Rancher server interface. +> **Note:** You need to have [cert-manager](#optional-install-cert-manager) installed before proceeding. + +The default is for Rancher to generate a CA and uses `cert-manager` to issue the certificate for access to the Rancher server interface. Because `rancher` is the default option for `ingress.tls.source`, we are not specifying `ingress.tls.source` when running the `helm install` command. - Replace `` with the repository that you configured in [Add the Helm Chart Repository](#add-the-helm-chart-repository) (i.e. `latest` or `stable`). - Set the `hostname` to the DNS name you pointed at your load balancer. @@ -59,12 +73,22 @@ helm install rancher-/rancher \ --set hostname=rancher.my.org ``` -#### LetsEncrypt +Wait for Rancher to be rolled out: -Use [LetsEncrypt](https://letsencrypt.org/)'s free service to issue trusted SSL certs. This configuration uses http validation so the Load Balancer must have a Public DNS record and be accessible from the internet. +``` +kubectl -n cattle-system rollout status deploy/rancher +Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available... +deployment "rancher" successfully rolled out +``` + +#### Let's Encrypt + +> **Note:** You need to have [cert-manager](#optional-install-cert-manager) installed before proceeding. + +This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate as Let's Encrypt is a trusted CA. This configuration uses HTTP validation (`HTTP-01`) so the load balancer must have a public DNS record and be accessible from the internet. - Replace `` with the repository that you configured in [Add the Helm Chart Repository](#add-the-helm-chart-repository) (i.e. `latest` or `stable`). -- Set `hostname`, `ingress.tls.source=letsEncrypt` and LetsEncrypt options. +- Set `hostname` to the public DNS record, set `ingress.tls.source` to `letsEncrypt` and `letsEncrypt.email` to the email address used for communication about your certificate (for example, expiry notices) >**Using Air Gap?** [Set the `rancherImage` option]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#install-rancher-using-private-registry) in your command, pointing toward your private registry. @@ -77,16 +101,23 @@ helm install rancher-/rancher \ --set letsEncrypt.email=me@example.org ``` -#### Certificates from Files (Kubernetes Secret) +Wait for Rancher to be rolled out: + +``` +kubectl -n cattle-system rollout status deploy/rancher +Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available... +deployment "rancher" successfully rolled out +``` + +#### Certificates from Files Create Kubernetes secrets from your own certificates for Rancher to use. -> **Note:** The common name for the cert will need to match the `hostname` option or the ingress controller will fail to provision the site for Rancher. +> **Note:** The `Common Name` or a `Subject Alternative Names` entry in the server certificate must match the `hostname` option, or the ingress controller will fail to configure correctly. 
Although an entry in the `Subject Alternative Names` is technically required, having a matching `Common Name` maximizes compatibility with older browsers/applications. If you want to check if your certificates are correct, see [How do I check Common Name and Subject Alternative Names in my server certificate?]({{< baseurl >}}/rancher/v2.x/en/faq/technical/#how-do-i-check-common-name-and-subject-alternative-names-in-my-server-certificate) - Replace `` with the repository that you configured in [Add the Helm Chart Repository](#add-the-helm-chart-repository) (i.e. `latest` or `stable`). -- Set `hostname` and `ingress.tls.source=secret`. - -> **Note:** If you are using a Private CA signed cert, add `--set privateCA=true` +- Set `hostname` and set `ingress.tls.source` to `secret`. +- If you are using a Private CA signed certificate , add `--set privateCA=true` to the command shown below. ``` helm install rancher-/rancher \ @@ -96,7 +127,25 @@ helm install rancher-/rancher \ --set ingress.tls.source=secret ``` -Now that Rancher is running, see [Adding TLS Secrets]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them. +Now that Rancher is deployed, see [Adding TLS Secrets]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them. + +After adding the secrets, check if Rancher was rolled out successfully: + +``` +kubectl -n cattle-system rollout status deploy/rancher +Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available... +deployment "rancher" successfully rolled out +``` + +If you see the following error: `error: deployment "rancher" exceeded its progress deadline`, you can check the status of the deployment by running the following command: + +``` +kubectl -n cattle-system get deploy rancher +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +rancher 3 3 3 3 3m +``` + +It should show the same count for `DESIRED` and `AVAILABLE`. ### Advanced Configurations @@ -116,4 +165,4 @@ Make sure you save the `--set` options you used. You will need to use the same o That's it you should have a functional Rancher server. Point a browser at the hostname you picked and you should be greeted by the colorful login page. -Doesn't Work? Take a look at the [Troubleshooting]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/troubleshooting/) Page +Doesn't work? Take a look at the [Troubleshooting]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/troubleshooting/) Page diff --git a/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md b/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md index d9c3eb50cb1..8a74005cfdd 100644 --- a/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md +++ b/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md @@ -86,7 +86,7 @@ We recommend configuring your load balancer as a Layer 4 balancer, forwarding pl You may terminate the SSL/TLS on a L7 load balancer external to the Rancher cluster (ingress). Use the `--set tls=external` option and point your load balancer at port http 80 on all of the Rancher cluster nodes. This will expose the Rancher interface on http port 80. Be aware that clients that are allowed to connect directly to the Rancher cluster will not be encrypted. 
If you choose to do this we recommend that you restrict direct access at the network level to just your load balancer. -> **Note:** If you are using a Private CA signed cert, add `--set privateCA=true` and see [Adding TLS Secrets - Private CA Signed - Additional Steps]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/#private-ca-signed---additional-steps) to add the CA cert for Rancher. +> **Note:** If you are using a Private CA signed certificate, add `--set privateCA=true` and see [Adding TLS Secrets - Using a Private CA Signed Certificate]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/#using-a-private-ca-signed-certificate) to add the CA cert for Rancher. Your load balancer must support long lived websocket connections and will need to insert proxy headers so Rancher can route links correctly. diff --git a/content/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/_index.md b/content/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/_index.md index 796305b067e..8b1c8b11f3a 100644 --- a/content/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/_index.md +++ b/content/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/_index.md @@ -5,7 +5,7 @@ weight: 276 Kubernetes will create all the objects and services for Rancher, but it will not become available until we populate the `tls-rancher-ingress` secret in the `cattle-system` namespace with the certificate and key. -Combine the server certificate followed by the intermediate cert chain your CA provided into a file named `tls.crt`. Copy your key into a file name `tls.key`. +Combine the server certificate followed by any intermediate certificate(s) needed into a file named `tls.crt`. Copy your certificate key into a file named `tls.key`. Use `kubectl` with the `tls` secret type to create the secrets. @@ -15,13 +15,13 @@ kubectl -n cattle-system create secret tls tls-rancher-ingress \ --key=tls.key ``` -### Private CA Signed - Additional Steps +### Using a Private CA Signed Certificate -If you are using a private CA, Rancher will need to have a copy of the CA cert to include when generating agent configs. +If you are using a private CA, Rancher requires a copy of the CA certificate which is used by the Rancher Agent to validate the connection to the server. -Copy the CA cert into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace. +Copy the CA certificate into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace. ->**Important:** Make sure the file is called `cacerts.pem` as Rancher uses that filename to configure the CA cert. +>**Important:** Make sure the file is called `cacerts.pem` as Rancher uses that filename to configure the CA certificate. 
``` kubectl -n cattle-system create secret generic tls-ca \ From 6352d7a259dc27d4396779560bd43ed70399f291 Mon Sep 17 00:00:00 2001 From: Sebastiaan van Steenis Date: Mon, 5 Nov 2018 23:02:43 +0100 Subject: [PATCH 02/15] Add external LB examples to new HA locations --- .../installation/ha/create-nodes-lb/_index.md | 1 + .../ha/create-nodes-lb/nginx/_index.md | 75 +++++++++++++++++++ .../ha/helm-rancher/chart-options/_index.md | 47 ++++++++++++ .../single-node-install-external-lb/_index.md | 1 + 4 files changed, 124 insertions(+) create mode 100644 content/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/_index.md diff --git a/content/rancher/v2.x/en/installation/ha/create-nodes-lb/_index.md b/content/rancher/v2.x/en/installation/ha/create-nodes-lb/_index.md index 26bb172cf6a..c9d51f538d3 100644 --- a/content/rancher/v2.x/en/installation/ha/create-nodes-lb/_index.md +++ b/content/rancher/v2.x/en/installation/ha/create-nodes-lb/_index.md @@ -21,6 +21,7 @@ Configure a load balancer as a basic Layer 4 TCP forwarder. The exact configurat #### Examples +* [NGINX]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/) * [Amazon NLB]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/nlb/) ### [Next: Install Kubernetes with RKE]({{< baseurl >}}/rancher/v2.x/en/installation/ha/kubernetes-rke/) diff --git a/content/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/_index.md b/content/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/_index.md new file mode 100644 index 00000000000..5a9fb2eee41 --- /dev/null +++ b/content/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/_index.md @@ -0,0 +1,75 @@ +--- +title: NGINX +weight: 270 +--- +NGINX will be configured as Layer 4 Load Balancer (TCP). NGINX will forward connections to one of your Rancher nodes. + +>**Note:** +> In this configuration, the load balancer is positioned in front of your nodes. The load balancer can be any host that you have available that's capable of running NGINX. +> +> One caveat: do not use one of your Rancher nodes as the load balancer. + +## Install NGINX + +Start by installing NGINX on the node you want to use as a load balancer. NGINX has packages available for all known operating systems. The versions tested are `1.14` and `1.15`. For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/). + +The `stream` module is required, which is present when using the official NGINX packages. Please refer to your OS documentation how to install and enable the NGINX `stream` module on your operating system. + +## Create NGINX Configuration + +After installing NGINX, you need to update the NGINX configuration file, `nginx.conf`, with the IP addresses for your nodes. + +1. Copy and paste the code sample below into your favorite text editor. Save it as `nginx.conf`. + +2. From `nginx.conf`, replace `IP_NODE_1`, `IP_NODE_2`, and `IP_NODE_3` with the IPs of your [nodes]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/) + + >**Note:** See [NGINX Load Balancing - TCP and UDP Load Balancer](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/) for all configuration options. 
**Example NGINX config:**
    ```
    worker_processes 4;
    worker_rlimit_nofile 40000;

    events {
        worker_connections 8192;
    }

    http {
        server {
            listen 80;
            return 301 https://$host$request_uri;
        }
    }

    stream {
        upstream rancher_servers {
            least_conn;
            server IP_NODE_1:443 max_fails=3 fail_timeout=5s;
            server IP_NODE_2:443 max_fails=3 fail_timeout=5s;
            server IP_NODE_3:443 max_fails=3 fail_timeout=5s;
        }
        server {
            listen 443;
            proxy_pass rancher_servers;
        }
    }
    ```

3. Save `nginx.conf` to your load balancer at the following path: `/etc/nginx/nginx.conf`.

4. Load the updates to your NGINX configuration by running the following command:

    ```
    # nginx -s reload
    ```

## Option - Run NGINX as Docker container

Instead of installing NGINX as a package on the operating system, you can run it as a Docker container instead. Save the edited **Example NGINX config** as `/etc/nginx.conf` and run the following command to launch the NGINX container:

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /etc/nginx.conf:/etc/nginx/nginx.conf \
  nginx:1.14
```
diff --git a/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md b/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md
index d9c3eb50cb1..58b2d707618 100644
--- a/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md
+++ b/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md
@@ -106,3 +106,50 @@ Your load balancer must support long lived websocket connections and will need t
 #### Health Checks

 Rancher will respond `200` to health checks on the `/healthz` endpoint.


#### Example NGINX config

* Replace `IP_NODE_1`, `IP_NODE_2` and `IP_NODE_3` with the IP addresses of the nodes in your cluster.
* Replace both occurrences of `FQDN` with the DNS name for Rancher.
* Replace `/certs/fullchain.pem` and `/certs/privkey.pem` with the location of the server certificate and the server certificate key respectively.

```
upstream rancher {
    server IP_NODE_1:80;
    server IP_NODE_2:80;
    server IP_NODE_3:80;
}

map $http_upgrade $connection_upgrade {
    default Upgrade;
    ''      close;
}

server {
    listen 443 ssl http2;
    server_name FQDN;
    ssl_certificate /certs/fullchain.pem;
    ssl_certificate_key /certs/privkey.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://rancher;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        # This allows the execute shell window to remain open for up to 15 minutes. Without this parameter, the default is 1 minute and it will automatically close.
proxy_read_timeout 900s;
        proxy_buffering off;
    }
}

server {
    listen 80;
    server_name FQDN;
    return 301 https://$server_name$request_uri;
}
```
diff --git a/content/rancher/v2.x/en/installation/single-node/single-node-install-external-lb/_index.md b/content/rancher/v2.x/en/installation/single-node/single-node-install-external-lb/_index.md
index 45e501aeaa4..acc8ae97f56 100644
--- a/content/rancher/v2.x/en/installation/single-node/single-node-install-external-lb/_index.md
+++ b/content/rancher/v2.x/en/installation/single-node/single-node-install-external-lb/_index.md
@@ -132,6 +132,7 @@ server {
         proxy_set_header Connection $connection_upgrade;
         # This allows the ability for the execute shell window to remain open for up to 15 minutes. Without this parameter, the default is 1 minute and will automatically close.
         proxy_read_timeout 900s;
+        proxy_buffering off;
     }
 }

From 4ed71ec2fbac44d2abaa142da4aee9e02a323238 Mon Sep 17 00:00:00 2001
From: XiaoluHong
Date: Wed, 17 Oct 2018 19:26:07 +0800
Subject: [PATCH 03/15] Specify tiller image for chinese users

---
 .../v2.x/en/installation/ha/helm-init/_index.md | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/content/rancher/v2.x/en/installation/ha/helm-init/_index.md b/content/rancher/v2.x/en/installation/ha/helm-init/_index.md
index 02e45bb69d2..41c5b5205b4 100644
--- a/content/rancher/v2.x/en/installation/ha/helm-init/_index.md
+++ b/content/rancher/v2.x/en/installation/ha/helm-init/_index.md
@@ -24,9 +24,17 @@ kubectl create clusterrolebinding tiller \
   --serviceaccount=kube-system:tiller

helm init --service-account tiller

# For Chinese users:
# The available tiller image versions can be browsed at:
# https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085

helm init --service-account tiller \
--tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:<tiller-version>

```

-> **Note:** This `tiller` install has full cluster access, which should be acceptable if the cluster is dedicated to Rancher server. Check out the [helm docs](https://docs.helm.sh/using_helm/#role-based-access-control) for restricting `tiller` access to suit your security requirements.
+> **Note:** This `tiller` install has full cluster access, which should be acceptable if the cluster is dedicated to Rancher server. Check out the [helm docs](https://docs.helm.sh/using_helm/#role-based-access-control) for restricting `tiller` access to suit your security requirements.

### Test your Tiller installation

From 64033c068faf89401c9471a898e86bf691615a0e Mon Sep 17 00:00:00 2001
From: Mark Bishop
Date: Wed, 7 Nov 2018 16:09:20 -0700
Subject: [PATCH 04/15] made minor style/grammar edits

---
 .../ha/create-nodes-lb/nginx/_index.md | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/content/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/_index.md b/content/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/_index.md
index 5a9fb2eee41..95ef78a58bc 100644
--- a/content/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/_index.md
+++ b/content/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/_index.md
@@ -2,10 +2,10 @@
 title: NGINX
 weight: 270
 ---
-NGINX will be configured as Layer 4 Load Balancer (TCP). NGINX will forward connections to one of your Rancher nodes.
+NGINX will be configured as Layer 4 load balancer (TCP) that forwards connections to one of your Rancher nodes.

 >**Note:**
> In this configuration, the load balancer is positioned in front of your nodes.
The load balancer can be any host that you have available that's capable of running NGINX. +> In this configuration, the load balancer is positioned in front of your nodes. The load balancer can be any host capable of running NGINX. > > One caveat: do not use one of your Rancher nodes as the load balancer. @@ -13,7 +13,7 @@ NGINX will be configured as Layer 4 Load Balancer (TCP). NGINX will forward con Start by installing NGINX on the node you want to use as a load balancer. NGINX has packages available for all known operating systems. The versions tested are `1.14` and `1.15`. For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/). -The `stream` module is required, which is present when using the official NGINX packages. Please refer to your OS documentation how to install and enable the NGINX `stream` module on your operating system. +The `stream` module is required, which is present when using the official NGINX packages. Please refer to your OS documentation on how to install and enable the NGINX `stream` module on your operating system. ## Create NGINX Configuration @@ -21,11 +21,11 @@ After installing NGINX, you need to update the NGINX configuration file, `nginx. 1. Copy and paste the code sample below into your favorite text editor. Save it as `nginx.conf`. -2. From `nginx.conf`, replace `IP_NODE_1`, `IP_NODE_2`, and `IP_NODE_3` with the IPs of your [nodes]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/) +2. From `nginx.conf`, replace ``, ``, and `` with the IPs of your [nodes]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/). - >**Note:** See [NGINX Load Balancing - TCP and UDP Load Balancer](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/) for all configuration options. + >**Note:** See [NGINX Documentation: TCP and UDP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/) for all configuration options. - **Example NGINX config:** +
**Example NGINX config:**
``` worker_processes 4; worker_rlimit_nofile 40000; @@ -44,9 +44,9 @@ After installing NGINX, you need to update the NGINX configuration file, `nginx. stream { upstream rancher_servers { least_conn; - server IP_NODE_1:443 max_fails=3 fail_timeout=5s; - server IP_NODE_2:443 max_fails=3 fail_timeout=5s; - server IP_NODE_3:443 max_fails=3 fail_timeout=5s; + server :443 max_fails=3 fail_timeout=5s; + server :443 max_fails=3 fail_timeout=5s; + server :443 max_fails=3 fail_timeout=5s; } server { listen 443; From 61c3f2b16c5b20f982308b5054bef5beb604fe6c Mon Sep 17 00:00:00 2001 From: Sebastiaan van Steenis Date: Wed, 7 Nov 2018 22:03:03 +0100 Subject: [PATCH 05/15] Fix kubectl commands in technical FAQ for Helm HA --- .../rancher/v2.x/en/faq/technical/_index.md | 44 ++++++++++++++++--- 1 file changed, 39 insertions(+), 5 deletions(-) diff --git a/content/rancher/v2.x/en/faq/technical/_index.md b/content/rancher/v2.x/en/faq/technical/_index.md index 2a118cc0105..4087ee4fc6b 100644 --- a/content/rancher/v2.x/en/faq/technical/_index.md +++ b/content/rancher/v2.x/en/faq/technical/_index.md @@ -12,7 +12,15 @@ New password for default admin user (user-xxxxx): ``` -High Availability install: +High Availability install (Helm): +``` +$ KUBECONFIG=./kube_config_rancher-cluster.yml +$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- reset-password +New password for default admin user (user-xxxxx): + +``` + +High Availability install (RKE add-on): ``` $ KUBECONFIG=./kube_config_rancher-cluster.yml $ kubectl --kubeconfig $KUBECONFIG exec -n cattle-system $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name=="cattle-server") | .metadata.name') -- reset-password @@ -20,6 +28,7 @@ New password for default admin user (user-xxxxx): ``` + ### I deleted/deactivated the last admin, how can I fix it? Single node install: ``` @@ -29,7 +38,15 @@ New password for default admin user (user-xxxxx): ``` -High Availability install: +High Availability install (Helm): +``` +$ KUBECONFIG=./kube_config_rancher-cluster.yml +$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- ensure-default-admin +New password for default admin user (user-xxxxx): + +``` + +High Availability install (RKE add-on): ``` $ KUBECONFIG=./kube_config_rancher-cluster.yml $ kubectl --kubeconfig $KUBECONFIG exec -n cattle-system $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name=="cattle-server") | .metadata.name') -- ensure-default-admin @@ -37,7 +54,6 @@ New password for default admin user (user-xxxxx): ``` - ### How can I enable debug logging? 
* Single node install @@ -54,8 +70,27 @@ $ docker exec -ti loglevel --set info OK ``` +* High Availability install (Helm) + * Enable +``` +$ KUBECONFIG=./kube_config_rancher-cluster.yml +$ kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | awk '{ print $1 }' | xargs -I{} kubectl --kubeconfig $KUBECONFIG -n cattle-system exec {} -- loglevel --set debug +OK +OK +OK +$ kubectl --kubeconfig $KUBECONFIG -n cattle-system logs -l app=rancher +``` -* High Availability install + * Disable +``` +$ KUBECONFIG=./kube_config_rancher-cluster.yml +$ kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | awk '{ print $1 }' | xargs -I{} kubectl --kubeconfig $KUBECONFIG -n cattle-system exec {} -- loglevel --set info +OK +OK +OK +``` + +* High Availability install (RKE add-on) * Enable ``` $ KUBECONFIG=./kube_config_rancher-cluster.yml @@ -71,7 +106,6 @@ $ kubectl --kubeconfig $KUBECONFIG exec -n cattle-system $(kubectl --kubeconfig OK ``` - ### My ClusterIP does not respond to ping ClusterIP is a virtual IP, which will not respond to ping. Best way to test if the ClusterIP is configured correctly, is by using `curl` to access the IP and port to see if it responds. From 767493698c6fb139f39ca2f2a55adfcc7f57e467 Mon Sep 17 00:00:00 2001 From: Mark Bishop Date: Tue, 6 Nov 2018 18:44:16 -0700 Subject: [PATCH 06/15] cleaing up Raul's migration tool edits --- .../rancher/v2.x/en/v1.6-migration/_index.md | 120 ++++++++++++------ 1 file changed, 78 insertions(+), 42 deletions(-) diff --git a/content/rancher/v2.x/en/v1.6-migration/_index.md b/content/rancher/v2.x/en/v1.6-migration/_index.md index 0ae95ff40b3..6649ebd975e 100644 --- a/content/rancher/v2.x/en/v1.6-migration/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/_index.md @@ -3,7 +3,7 @@ title: Migrating from Rancher v1.6 Cattle to v2.x weight: 10000 --- -Rancher 2.0 has been rearchitected and rewritten with the goal of providing a complete management solution for Kubernetes and Docker. Due to these extensive changes, there is no direct upgrade path from 1.6.x to 2.x, but rather a migration of your 1.6 application workloads into the 2.0 Kubernetes equivalent. In 1.6, the most common orchestration used was Rancher's own engine called Cattle. The following blogs (that will be converted in an official guide) explain and educate our Cattle users on running workloads in a Kubernetes environment. +Rancher 2.0 has been rearchitected and rewritten with the goal of providing a complete management solution for Kubernetes and Docker. Due to these extensive changes, there is no direct upgrade path from 1.6.x to 2.x, but rather a migration of your 1.6 application workloads into the 2.0 Kubernetes equivalent. In 1.6, the most common orchestration used was Rancher's own engine called Cattle. The following blogs (that will be converted in an official guide) explain and educate our Cattle users on running workloads in a Kubernetes environment. If you are an existing Kubernetes user on Rancher 1.6, you only need to review the [Get Started](#1-get-started) section to prepare you on what to expect on a new 2.0 Rancher cluster. 
@@ -29,12 +29,12 @@ Because Rancher 1.6 defaulted to our Cattle container orchestrator, it primarily | **Rancher 1.6** | **Rancher 2.0** | | --- | --- | -| Container | Pod | +| Container | Pod | | Services | Workload | | Load Balancer | Ingress | -| Stack | Namespace | +| Stack | Namespace | | Environment | Project (Administration)/Cluster (Compute) -| Host | Node | +| Host | Node | | Catalog | Helm |
More detailed information on Kubernetes concepts can be found in the @@ -64,75 +64,71 @@ Blog Post: [Migrating from Rancher 1.6 to Rancher 2.0—A Short Checklist](https ## 2. Run Migration Tools -To help with migration from 1.6 to 2.0, Rancher has developed a migration tool. Running this tool will help you check if your Rancher 1.6 applications can be migrated to 2.0. If an application can't be migrated, the tool will help you identify what's lacking. +To help with migration from 1.6 to 2.0, Rancher has developed migration-tools. Running these tools helps you export Docker Compose files and check if your Rancher 1.6 applications can be migrated to 2.0. If an application can't be migrated, the tools help you identify what's lacking. -This tool will: +These tools: -- Accept Docker Compose config files (i.e., `docker-compose.yml` and `rancher-compose.yml`) that you've exported from your Rancher 1.6 Stacks. -- Output a list of constructs present in the Compose files that cannot be supported by Kubernetes in Rancher 2.0. These constructs require special handling or are parameters that cannot be converted to Kubernetes YAML, even using tools like Kompose. +- `export` Docker Compose files (i.e., `docker-compose.yml` and `rancher-compose.yml`) from your stacks running on `cattle` environments in your Rancher 1.6 system. For every stack, files are exported to the `//` folder. To export all environments, you'll need an admin [API key]({{< baseurl >}}/rancherv2.x/en/user-settings/api-keys). -### A. Download the Migration Tool +- `parse` Docker Compose files that you've exported from your Rancher 1.6 Stacks and output a list of constructs present in the Compose files that cannot be supported by Kubernetes in Rancher 2.0. These constructs require special handling or are parameters that cannot be converted to Kubernetes YAML. -The Migration Tool for your platform can be downloaded from its [GitHub releases page](https://github.com/rancher/migration-tools/releases). The tool is available for Linux, Mac, and Windows platforms. +### A. Download Migration-Tools + +Migration-tools for your platform can be downloaded from our [GitHub releases page](https://github.com/rancher/migration-tools/releases). The tools are available for Linux, Mac, and Windows platforms. -### B. Configure the Migration Tool +### B. Configure Migration-Tools -After the tool is downloaded, you need to make some configurations to run it. +After the tools are downloaded, you need to make some configurations to run them. -1. Modify the Migration Tool file to make it an executable. +1. Modify the migration-tools file to make it an executable. - 1. Open Terminal and change to the directory that contains the Migration Tool file. + 1. Open Terminal and change to the directory that contains the migration-tool file. - 1. Rename the Migration Tool file to `migration-tools` so that it no longer includes the platform name. + 1. Rename the file to `migration-tools` so that it no longer includes the platform name. 1. Enter the following command to make `migration-tools` an executable: - + ``` chmod +x migration-tools - ``` -1. Export the configuration for each Rancher 1.6 Stack that you want to migrate to 2.0. + ``` - 1. Log into Rancher 1.6 and select **Stacks > All**. - - 1. From the **All Stacks** page, select **Ellipsis (...) > Export Config** for each Stack that you want to migrate. - - 1. Extract the downloaded `compose.zip`. Move the folder contents (`docker-compose.yml` and `rancher-compose.yml`) into the same directory as `migration-tools`. 
+### C. Run Migration-Tools -### C. Run the Migration Tool +Next, use migration-tools to export your Cattle environments from Rancher 1.6 as Docker Compose files. Then, for environments that you want to migrate to Rancher 2.0, convert its Compose file into Kubernetes YAML. -To use the Migration Tool, run the command below while pointing to the compose files exported from each stack that you want to migrate. If you want to migrate multiple stacks, you'll have to re-run the command for each pair of compose files that you exported. +>**Want full usage and options for migration-tools?** See the [Migration Tools Reference](#migration-tools-reference) below. -#### Usage +1. Export the Docker Compose files for your Cattle environments from Rancher 1.6. -You can run the Migration Tool by entering the following command, replacing each placeholder with the absolute path to your Stack's compose files. + From Terminal, execute the following command, replacing each placeholder with your values. -``` -migration-tools --docker-file --rancher-file -``` + ``` + migration-tools export --url --access-key --secret-key --export-dir + ``` -#### Options + **Step Result:** migration-tools exports Compose files for each of your Cattle environments in the `--export-dir` directory. If you omitted this option, Compose files are output to your current directory. -When using the Migration Tool, you can specify the paths to your Docker and Rancher compose files, regardless of where they are on your file system. -| Option | Description | -| ---------------------- | -------------------------------------------------------------------------------------- | -| `--docker-file ` | The absolute path to an exported Docker compose file (default value: `docker-compose.yml`)1. | -| `--rancher-file ` | The absolute path to an alternate Rancher compose file (default value: `rancher-compose.yml`)1. | -| `--help, -h` | Lists usage for the Migration Tool. | -| `--version, -v` | Lists the version of the Migration Tool in use. | +1. Convert the exported Compose files to Kubernetes YAML. + + Execute the following command, replacing each placeholder with the absolute path to your Stack's Compose files. If you want to migrate multiple stacks, you'll have to re-run the command for each pair of Compose files that you exported. + + ``` + migration-tools parse --docker-file --rancher-file + ``` + + >**Note:** If you omit the `--docker-file` and `--rancher-file` options from your command, migration-tools checks its home directory for Compose files. ->1 If you omit the `--docker-file` and `--rancher-file` options from your command, the migration tool will check its home directory for compose files. #### Output -After you run the migration tool, the following files output to the same directory that you ran the tool from. - +After you run the migration tools parse command, the following files are output to your target directory. | Output | Description | | --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `output.txt` | This file lists all constructs for each service in `docker-compose.yml` that requires special handling to be successfully migrated to Rancher 2.0. 
Each construct links to the relevant blog articles on how to implement it in Rancher 2.0 (these articles are also listed below). |
| Kubernetes YAML specs | Migration-tools internally invokes [Kompose](https://github.com/kubernetes/kompose) to generate Kubernetes YAML specs for each service you're migrating to 2.0. Each YAML spec file is named for the service you're migrating. |

## 3. Migrate Applications

@@ -178,3 +174,43 @@ Blog Post: [From Cattle to Kubernetes-How to Load Balance Your Services in Ranch

In Rancher 1.6, a Load Balancer was used to expose your applications from within the Rancher environment for external access. In Rancher 2.0, the concept is the same. There is a Load Balancer option to expose your services. In the language of Kubernetes, this function is more often referred to as an **Ingress**. In short, Load Balancer and Ingress play the same role.

### Migration-Tools Reference

Review this reference to find out what commands and options are available when using migration-tools.

#### Usage

```
migration-tools [global options] command [command options] [arguments...]
```

#### Global Options

Migration-tools includes a handful of options that can be used regardless of which command you are using. These options are not required to run the tool; rather, they're useful for troubleshooting.

| Global Option | Description |
| ----------------- | -------------------------------------------- |
| `--debug` | Enables debug logging. |
| `--log ` | Outputs logs to the path you enter. |
| `--help`, `-h` | Displays a list of all commands available. |
| `--version`, `-v` | Prints the version of migration-tools in use.|

#### Commands and Command Options

This section contains reference material for the commands and options available for the migration-tools used in [step 2](#2-run-migration-tools).

Command | Options | Required? | Description
--------|---------|-------------|-----
`export`| | N/A | Exports Compose files for every Stack running in a Cattle environment in Rancher v1.6.
 |`--url ` | ✓ | Rancher API endpoint URL (``).
 |`--access-key ` | ✓ | Rancher API access key. Using an admin [API key]({{< baseurl >}}/rancher/v2.x/en/user-settings/api-keys) exports stacks from all cattle environments (``).
 |`--secret-key ` | ✓ | Rancher [API secret key]({{< baseurl >}}/rancher/v2.x/en/user-settings/api-keys) (``).
 |`--export-dir ` | | Base directory that Compose files export to under sub-directories created for each environment/stack (default: `Export`).
 |`--all`, `--a` | | Export all stacks. Using this flag exports any stack in a state of inactive, stopped, or removing.
 |`--system`, `--s` | | Export system and infrastructure stacks.
`parse` | | N/A | Parses Docker Compose and Rancher Compose files to get Kubernetes manifests.
 |`--docker-file ` | | Parses the Docker Compose file to output a Kubernetes manifest (default: `docker-compose.yml`).
 |`--output-file ` | | Name of the file that lists checks and advice for conversion (default: `output.txt`).
 |`--rancher-file ` | | Parses the Rancher Compose file to output a Kubernetes manifest (default: `rancher-compose.yml`).
+ From 411abb7e24790bac30028479479ce12904674a48 Mon Sep 17 00:00:00 2001 From: Mark Bishop Date: Thu, 8 Nov 2018 13:50:41 -0700 Subject: [PATCH 07/15] adding back 5 min note --- .../rancher/v2.x/en/faq/technical/_index.md | 31 +++++++++++++++++++ 1 file changed, 31 insertions(+) diff --git a/content/rancher/v2.x/en/faq/technical/_index.md b/content/rancher/v2.x/en/faq/technical/_index.md index 2ead99f4b86..2264a8c55f4 100644 --- a/content/rancher/v2.x/en/faq/technical/_index.md +++ b/content/rancher/v2.x/en/faq/technical/_index.md @@ -123,3 +123,34 @@ When the node is removed from the cluster, and the node is cleaned, you can read ### How can I add additional arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? You can add additional arguments/binds/environment variables via the [Config File]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables]({{< baseurl >}}/rke/v0.1.x/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls]({{< baseurl >}}/rke/v0.1.x/en/example-yamls/). + +### How do I check `Common Name` and `Subject Alternative Names` in my server certificate? + +Although technically an entry in `Subject Alternative Names` is required, having the hostname in both `Common Name` and as entry in `Subject Alternative Names` gives you maximum compatibility with older browser/applications. + +Check `Common Name`: + +``` +openssl x509 -noout -subject -in cert.pem +subject= /CN=rancher.my.org +``` + +Check `Subject Alternative Names`: + +``` +openssl x509 -noout -in cert.pem -text | grep DNS + DNS:rancher.my.org +``` + +### Why does it take 5+ minutes for a pod to be rescheduled when a node has failed? + +This is due to a combination of the following default Kubernetes settings: + +* kubelet + * `node-status-update-frequency`: Specifies how often kubelet posts node status to master (default 10s) +* kube-controller-manager + * `node-monitor-period`: The period for syncing NodeStatus in NodeController (default 5s) + * `node-monitor-grace-period`: Amount of time which we allow running Node to be unresponsive before marking it unhealthy (default 40s) + * `pod-eviction-timeout`: The grace period for deleting pods on failed nodes (default 5m0s) + +See [Kubernetes: kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) and [Kubernetes: kube-controller-manager](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/) for more information on these settings. 
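If faster rescheduling matters for your workloads, these defaults can be tightened. In a Rancher Launched Kubernetes cluster, this would be done by passing extra arguments to the components through the cluster Config File described above; the snippet below is only a sketch, and the values shown are illustrative rather than recommendations:

```
# Excerpt of a cluster config file — tune these values with care
services:
  kubelet:
    extra_args:
      # Post node status to the master more frequently
      node-status-update-frequency: "5s"
  kube-controller:
    extra_args:
      # Detect an unresponsive node and evict its pods sooner
      node-monitor-period: "2s"
      node-monitor-grace-period: "20s"
      pod-eviction-timeout: "30s"
```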
From 9cdb5d209cf37a2ec95b0095e1f328d4b3684931 Mon Sep 17 00:00:00 2001 From: Sebastiaan van Steenis Date: Wed, 17 Oct 2018 15:27:55 +0200 Subject: [PATCH 08/15] Add FAQ on adding args/binds/envvars to k8s components --- .../about/custom-partition-layout/_index.md | 2 +- .../v1.x/en/about/recovery-console/_index.md | 3 + content/os/v1.x/en/about/security/_index.md | 1 + .../v1.x/en/installation/amazon-ecs/_index.md | 32 +- .../configuration/docker/_index.md | 14 +- .../workstation/boot-from-iso/_index.md | 15 - .../en/admin-settings/agent-options/_index.md | 51 ++ .../user-cluster-nodes/_index.md | 2 +- .../restorations/ha-restoration/_index.md | 24 +- .../v2.x/en/cluster-provisioning/_index.md | 9 +- .../cloning-clusters/_index.md | 155 +++++ .../custom-clusters/_index.md | 2 +- .../hosted-kubernetes-clusters/_index.md | 2 +- .../hosted-kubernetes-clusters/eks/_index.md | 125 +++- .../rke-clusters/_index.md | 8 +- .../rke-clusters/custom-nodes/_index.md | 5 +- .../rke-clusters/node-pools/_index.md | 2 +- content/rancher/v2.x/en/faq/_index.md | 2 +- .../rancher/v2.x/en/faq/technical/_index.md | 17 + .../install-rancher/_index.md | 14 +- .../rancher/v2.x/en/installation/ha/_index.md | 2 +- .../en/installation/ha/helm-init/_index.md | 36 +- .../ha/helm-init/troubleshooting/_index.md | 2 +- .../en/installation/ha/helm-rancher/_index.md | 44 +- .../ha/helm-rancher/chart-options/_index.md | 4 +- .../ha/helm-rancher/troubleshooting/_index.md | 2 +- .../installation/ha/kubernetes-rke/_index.md | 2 +- .../en/installation/requirements/_index.md | 2 +- .../en/installation/server-tags/_index.md | 80 ++- .../horitzontal-pod-autoscaler/_index.md | 623 ++++++++++-------- .../v2.x/en/k8s-in-rancher/kubectl/_index.md | 4 + .../v2.x/en/k8s-in-rancher/nodes/_index.md | 12 +- .../en/k8s-in-rancher/workloads/_index.md | 6 +- .../rancher/v2.x/en/tools/pipelines/_index.md | 4 +- .../tools/pipelines/configurations/_index.md | 10 +- .../tools/pipelines/docs-for-v2.0.x/_index.md | 2 +- .../rollbacks/ha-server-rollbacks/_index.md | 2 +- .../v2.x/en/upgrades/upgrades/_index.md | 6 +- .../ha-server-upgrade-helm-airgap/_index.md | 48 +- .../upgrades/ha-server-upgrade-helm/_index.md | 47 +- .../upgrades/ha-server-upgrade/_index.md | 55 -- .../migrating-from-rke-add-on/_index.md | 8 + .../upgrades/namespace-migration/_index.md | 157 +++++ .../single-node-air-gap-upgrade/_index.md | 12 + .../upgrades/single-node-upgrade/_index.md | 19 +- .../en/user-settings/node-templates/_index.md | 2 +- .../rancher/v2.x/en/v1.6-migration/_index.md | 100 ++- .../rke/v0.1.x/en/config-options/_index.md | 2 +- .../cloud-providers/vsphere/_index.md | 8 +- .../step_create-cluster_cluster-options.html | 2 +- .../step_create-cluster_member-roles.html | 2 +- src/img/rancher/horizontal-pod-autoscaler.jpg | Bin 0 -> 38147 bytes src/img/rancher/move-namespaces.png | Bin 0 -> 23180 bytes 53 files changed, 1208 insertions(+), 582 deletions(-) create mode 100644 content/rancher/v2.x/en/admin-settings/agent-options/_index.md create mode 100644 content/rancher/v2.x/en/cluster-provisioning/cloning-clusters/_index.md delete mode 100644 content/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade/_index.md create mode 100644 content/rancher/v2.x/en/upgrades/upgrades/namespace-migration/_index.md create mode 100644 src/img/rancher/horizontal-pod-autoscaler.jpg create mode 100644 src/img/rancher/move-namespaces.png diff --git a/content/os/v1.x/en/about/custom-partition-layout/_index.md b/content/os/v1.x/en/about/custom-partition-layout/_index.md index 
698400771f7..a1c43205ed8 100644 --- a/content/os/v1.x/en/about/custom-partition-layout/_index.md +++ b/content/os/v1.x/en/about/custom-partition-layout/_index.md @@ -49,7 +49,7 @@ $ reboot ### Use RANCHER_BOOT partition -When you only use the RRACHER_STATE partition, the bootloader will be installed in the `/boot` directory. +When you only use the RANCHER_STATE partition, the bootloader will be installed in the `/boot` directory. ``` $ system-docker run -it --rm -v /:/host alpine diff --git a/content/os/v1.x/en/about/recovery-console/_index.md b/content/os/v1.x/en/about/recovery-console/_index.md index f2c5f434405..ad5fea9bc34 100644 --- a/content/os/v1.x/en/about/recovery-console/_index.md +++ b/content/os/v1.x/en/about/recovery-console/_index.md @@ -72,6 +72,9 @@ You need add `rancher.autologin=tty1` to the end, then press ``. If all g We need to mount the root disk in the recovery console and delete some data: ``` +# If you couldn't see any disk devices created under `/dev/`, please try this command: +$ ros udev-settle + $ mkdir /mnt/root-disk $ mount /dev/sda1 /mnt/root-disk diff --git a/content/os/v1.x/en/about/security/_index.md b/content/os/v1.x/en/about/security/_index.md index a5fa3da3ae8..5ff2e307ffe 100644 --- a/content/os/v1.x/en/about/security/_index.md +++ b/content/os/v1.x/en/about/security/_index.md @@ -35,3 +35,4 @@ weight: 303 | [CVE-2018-8897](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-8897) | A statement in the System Programming Guide of the Intel 64 and IA-32 Architectures Software Developer's Manual (SDM) was mishandled in the development of some or all operating-system kernels, resulting in unexpected behavior for #DB exceptions that are deferred by MOV SS or POP SS, as demonstrated by (for example) privilege escalation in Windows, macOS, some Xen configurations, or FreeBSD, or a Linux kernel crash. | 31 May 2018 | [RancherOS v1.4.0](https://github.com/rancher/os/releases/tag/v1.4.0) using Linux v4.14.32 | | [L1 Terminal Fault](https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html) | L1 Terminal Fault is a hardware vulnerability which allows unprivileged speculative access to data which is available in the Level 1 Data Cache when the page table entry controlling the virtual address, which is used for the access, has the Present bit cleared or other reserved bits set. | 19 Sep 2018 | [RancherOS v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) using Linux v4.14.67 | | [CVE-2018-3639](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3639) | Systems with microprocessors utilizing speculative execution and speculative execution of memory reads before the addresses of all prior memory writes are known may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis, aka Speculative Store Bypass (SSB), Variant 4. | 19 Sep 2018 | [RancherOS v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) using Linux v4.14.67 | +| [CVE-2018-17182](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-17182) | The vmacache_flush_all function in mm/vmacache.c mishandles sequence number overflows. An attacker can trigger a use-after-free (and possibly gain privileges) via certain thread creation, map, unmap, invalidation, and dereference operations. 
| 18 Oct 2018 | [RancherOS v1.4.2](https://github.com/rancher/os/releases/tag/v1.4.2) using Linux v4.14.73 | diff --git a/content/os/v1.x/en/installation/amazon-ecs/_index.md b/content/os/v1.x/en/installation/amazon-ecs/_index.md index 9c61c81c10e..18f100eea18 100644 --- a/content/os/v1.x/en/installation/amazon-ecs/_index.md +++ b/content/os/v1.x/en/installation/amazon-ecs/_index.md @@ -58,22 +58,22 @@ rancher: ### Amazon ECS enabled AMIs -Latest Release: [v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) +Latest Release: [v1.4.2](https://github.com/rancher/os/releases/tag/v1.4.2) Region | Type | AMI ---|--- | --- -ap-south-1 | HVM - ECS enabled | [ami-0c095bd65873104ea](https://ap-south-1.console.aws.amazon.com/ec2/home?region=ap-south-1#launchInstanceWizard:ami=ami-0c095bd65873104ea) -eu-west-3 | HVM - ECS enabled | [ami-0a9420a7b9a46517b](https://eu-west-3.console.aws.amazon.com/ec2/home?region=eu-west-3#launchInstanceWizard:ami=ami-0a9420a7b9a46517b) -eu-west-2 | HVM - ECS enabled | [ami-09f7882ec876661f9](https://eu-west-2.console.aws.amazon.com/ec2/home?region=eu-west-2#launchInstanceWizard:ami=ami-09f7882ec876661f9) -eu-west-1 | HVM - ECS enabled | [ami-0dd35c5333b908688](https://eu-west-1.console.aws.amazon.com/ec2/home?region=eu-west-1#launchInstanceWizard:ami=ami-0dd35c5333b908688) -ap-northeast-2 | HVM - ECS enabled | [ami-0272129f9db7717d1](https://ap-northeast-2.console.aws.amazon.com/ec2/home?region=ap-northeast-2#launchInstanceWizard:ami=ami-0272129f9db7717d1) -ap-northeast-1 | HVM - ECS enabled | [ami-0cc3f7df2e7cac07a](https://ap-northeast-1.console.aws.amazon.com/ec2/home?region=ap-northeast-1#launchInstanceWizard:ami=ami-0cc3f7df2e7cac07a) -sa-east-1 | HVM - ECS enabled | [ami-0b8bc2a235e2ba0b8](https://sa-east-1.console.aws.amazon.com/ec2/home?region=sa-east-1#launchInstanceWizard:ami=ami-0b8bc2a235e2ba0b8) -ca-central-1 | HVM - ECS enabled | [ami-0834633a15bc44f0c](https://ca-central-1.console.aws.amazon.com/ec2/home?region=ca-central-1#launchInstanceWizard:ami=ami-0834633a15bc44f0c) -ap-southeast-1 | HVM - ECS enabled | [ami-076072ffb77b9e9c7](https://ap-southeast-1.console.aws.amazon.com/ec2/home?region=ap-southeast-1#launchInstanceWizard:ami=ami-076072ffb77b9e9c7) -ap-southeast-2 | HVM - ECS enabled | [ami-0b39a6595e83e016d](https://ap-southeast-2.console.aws.amazon.com/ec2/home?region=ap-southeast-2#launchInstanceWizard:ami=ami-0b39a6595e83e016d) -eu-central-1 | HVM - ECS enabled | [ami-0a8b8e376349bd511](https://eu-central-1.console.aws.amazon.com/ec2/home?region=eu-central-1#launchInstanceWizard:ami=ami-0a8b8e376349bd511) -us-east-1 | HVM - ECS enabled | [ami-0683608046ab95a13](https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#launchInstanceWizard:ami=ami-0683608046ab95a13) -us-east-2 | HVM - ECS enabled | [ami-0d6a98791e2f98a13](https://us-east-2.console.aws.amazon.com/ec2/home?region=us-east-2#launchInstanceWizard:ami=ami-0d6a98791e2f98a13) -us-west-1 | HVM - ECS enabled | [ami-0880d73d3ea92c89c](https://us-west-1.console.aws.amazon.com/ec2/home?region=us-west-1#launchInstanceWizard:ami=ami-0880d73d3ea92c89c) -us-west-2 | HVM - ECS enabled | [ami-0626403624bc30288](https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#launchInstanceWizard:ami=ami-0626403624bc30288) +ap-south-1 | HVM - ECS enabled | [ami-0721722dd0f0a6b54](https://ap-south-1.console.aws.amazon.com/ec2/home?region=ap-south-1#launchInstanceWizard:ami=ami-0721722dd0f0a6b54) +eu-west-3 | HVM - ECS enabled | 
[ami-017eb997502d38415](https://eu-west-3.console.aws.amazon.com/ec2/home?region=eu-west-3#launchInstanceWizard:ami=ami-017eb997502d38415) +eu-west-2 | HVM - ECS enabled | [ami-08772e5a96934e3e5](https://eu-west-2.console.aws.amazon.com/ec2/home?region=eu-west-2#launchInstanceWizard:ami=ami-08772e5a96934e3e5) +eu-west-1 | HVM - ECS enabled | [ami-089bd570fab84ab89](https://eu-west-1.console.aws.amazon.com/ec2/home?region=eu-west-1#launchInstanceWizard:ami=ami-089bd570fab84ab89) +ap-northeast-2 | HVM - ECS enabled | [ami-0420afe0617d4f723](https://ap-northeast-2.console.aws.amazon.com/ec2/home?region=ap-northeast-2#launchInstanceWizard:ami=ami-0420afe0617d4f723) +ap-northeast-1 | HVM - ECS enabled | [ami-05bee9d87b6af1f5c](https://ap-northeast-1.console.aws.amazon.com/ec2/home?region=ap-northeast-1#launchInstanceWizard:ami=ami-05bee9d87b6af1f5c) +sa-east-1 | HVM - ECS enabled | [ami-0bc2d9e3a0c98158c](https://sa-east-1.console.aws.amazon.com/ec2/home?region=sa-east-1#launchInstanceWizard:ami=ami-0bc2d9e3a0c98158c) +ca-central-1 | HVM - ECS enabled | [ami-0c09398512d4ba6b9](https://ca-central-1.console.aws.amazon.com/ec2/home?region=ca-central-1#launchInstanceWizard:ami=ami-0c09398512d4ba6b9) +ap-southeast-1 | HVM - ECS enabled | [ami-0ffa715a6bb9373de](https://ap-southeast-1.console.aws.amazon.com/ec2/home?region=ap-southeast-1#launchInstanceWizard:ami=ami-0ffa715a6bb9373de) +ap-southeast-2 | HVM - ECS enabled | [ami-03cb7478f257c6490](https://ap-southeast-2.console.aws.amazon.com/ec2/home?region=ap-southeast-2#launchInstanceWizard:ami=ami-03cb7478f257c6490) +eu-central-1 | HVM - ECS enabled | [ami-029b85c9d234c4f43](https://eu-central-1.console.aws.amazon.com/ec2/home?region=eu-central-1#launchInstanceWizard:ami=ami-029b85c9d234c4f43) +us-east-1 | HVM - ECS enabled | [ami-0f274b6c9410c73ed](https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#launchInstanceWizard:ami=ami-0f274b6c9410c73ed) +us-east-2 | HVM - ECS enabled | [ami-0cae94276614142ef](https://us-east-2.console.aws.amazon.com/ec2/home?region=us-east-2#launchInstanceWizard:ami=ami-0cae94276614142ef) +us-west-1 | HVM - ECS enabled | [ami-03f86e5bb88269702](https://us-west-1.console.aws.amazon.com/ec2/home?region=us-west-1#launchInstanceWizard:ami=ami-03f86e5bb88269702) +us-west-2 | HVM - ECS enabled | [ami-01bde5d57c4d043ad](https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#launchInstanceWizard:ami=ami-01bde5d57c4d043ad) diff --git a/content/os/v1.x/en/installation/configuration/docker/_index.md b/content/os/v1.x/en/installation/configuration/docker/_index.md index 33eddaefdba..1dbd9c3e21a 100644 --- a/content/os/v1.x/en/installation/configuration/docker/_index.md +++ b/content/os/v1.x/en/installation/configuration/docker/_index.md @@ -93,7 +93,7 @@ Key | Value | Default | Description `extra_args` | List of Strings | `[]` | Arbitrary daemon arguments, appended to the generated command `environment` | List of Strings (optional) | `[]` | -_Available as of v1.4_ +_Available as of v1.4.x_ The docker-sys bridge can be configured with system-docker args, it will take effect after reboot. @@ -101,6 +101,18 @@ The docker-sys bridge can be configured with system-docker args, it will take ef $ ros config set rancher.system_docker.bip 172.18.43.1/16 ``` +_Available as of v1.4.x_ + +The default path of system-docker logs is `/var/log/system-docker.log`. If you want to write the system-docker logs to a separate partition, +e.g. 
[RANCHER_OEM partition]({{< baseurl >}}/os/v1.x/en/about/custom-partition-layout/#use-rancher-oem-partition), you can set `rancher.defaults.system_docker_logs`:
+
+```
+#cloud-config
+rancher:
+  defaults:
+    system_docker_logs: /usr/share/ros/oem/system-docker.log
+```
+
### Using a pull through registry mirror

There are 3 Docker engines that can be configured to use the pull-through Docker Hub registry mirror cache:
diff --git a/content/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/_index.md b/content/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/_index.md
index 2dbe8773753..5df0f0d6fb6 100644
--- a/content/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/_index.md
+++ b/content/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/_index.md
@@ -10,18 +10,3 @@ You must boot with at least **1280MB** of memory. If you boot with the ISO, you
### Install to Disk

After you boot RancherOS from ISO, you can follow the instructions [here]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/install-to-disk/) to install RancherOS to a hard disk.
-
-### Persisting State
-
-If you are running from the ISO, RancherOS will be running from memory. All downloaded Docker images, for example, will be stored in a ramdisk and will be lost after the server is rebooted. You can
-create a file system with the label `RANCHER_STATE` to instruct RancherOS to use that partition to store state. Suppose you have a disk partition on the server called `/dev/sda`, the following command formats that partition and labels it `RANCHER_STATE`
-
-```
-$ sudo mkfs.ext4 -L RANCHER_STATE /dev/sda
-# Reboot afterwards in order for the changes to start being saved.
-$ sudo reboot
-```
-
-After you reboot, the server RancherOS will use `/dev/sda` as the state partition.
-
-> **Note:** If you are installing RancherOS to disk, you do not need to run this command.
diff --git a/content/rancher/v2.x/en/admin-settings/agent-options/_index.md b/content/rancher/v2.x/en/admin-settings/agent-options/_index.md
new file mode 100644
index 00000000000..8d33d443047
--- /dev/null
+++ b/content/rancher/v2.x/en/admin-settings/agent-options/_index.md
@@ -0,0 +1,51 @@
+---
+title: Rancher Agent Options
+weight: 1140
+---
+
+Rancher deploys an agent on each node to communicate with the node. This page describes the options that can be passed to the agent. To use these options, you will need to [Create a Cluster with Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/) and add the options to the generated `docker run` command when adding a node, as shown in the example below. 
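+
+As an illustration only, a registration command with a role, a node name override, and a label might look like the following sketch. The image tag, server URL, token, and checksum here are placeholders; use the values from the command that the Rancher UI generates for your cluster.
+
+```
+sudo docker run -d --privileged --restart=unless-stopped --net=host \
+  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
+  rancher/rancher-agent:<RANCHER_VERSION> \
+  --server https://rancher.my.org \
+  --token <TOKEN> \
+  --ca-checksum <CHECKSUM> \
+  --worker \
+  --node-name worker-01 \
+  --label environment=test
+```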
+
+## General options
+
+| Parameter | Environment variable | Description |
+| ---------- | -------------------- | ----------- |
+| `--server` | `CATTLE_SERVER` | The configured Rancher `server-url` setting which the agent connects to |
+| `--token` | `CATTLE_TOKEN` | Token that is needed to register the node in Rancher |
+| `--ca-checksum` | `CATTLE_CA_CHECKSUM` | The SHA256 checksum of the configured Rancher `cacerts` setting to validate |
+| `--node-name` | `CATTLE_NODE_NAME` | Override the hostname that is used to register the node (defaults to `hostname -s`) |
+| `--label` | `CATTLE_NODE_LABEL` | Add node labels to the node (`--label key=value`) |
+
+## Role options
+
+| Parameter | Environment variable | Description |
+| ---------- | -------------------- | ----------- |
+| `--all-roles` | `ALL=true` | Apply all roles (`etcd`,`controlplane`,`worker`) to the node |
+| `--etcd` | `ETCD=true` | Apply the role `etcd` to the node |
+| `--controlplane` | `CONTROL=true` | Apply the role `controlplane` to the node |
+| `--worker` | `WORKER=true` | Apply the role `worker` to the node |
+
+## IP address options
+
+| Parameter | Environment variable | Description |
+| ---------- | -------------------- | ----------- |
+| `--address` | `CATTLE_ADDRESS` | The IP address the node will be registered with (defaults to the IP used to reach `8.8.8.8`) |
+| `--internal-address` | `CATTLE_INTERNAL_ADDRESS` | The IP address used for inter-host communication on a private network |
+
+### Dynamic IP address options
+
+For automation purposes, a registration command has to be generic enough to be used for every node, so it cannot contain a node-specific IP address. For this, dynamic IP address options are available. They are used as the value of the existing [IP address options](#ip-address-options), and are supported for `--address` and `--internal-address`. See the example after the table below. 
+
+| Value | Example | Description |
+| ---------- | -------------------- | ----------- |
+| Interface name | `--address eth0` | The first configured IP address will be retrieved from the given interface |
+| `ipify` | `--address ipify` | Value retrieved from `https://api.ipify.org` will be used |
+| `awslocal` | `--address awslocal` | Value retrieved from `http://169.254.169.254/latest/meta-data/local-ipv4` will be used |
+| `awspublic` | `--address awspublic` | Value retrieved from `http://169.254.169.254/latest/meta-data/public-ipv4` will be used |
+| `doprivate` | `--address doprivate` | Value retrieved from `http://169.254.169.254/metadata/v1/interfaces/private/0/ipv4/address` will be used |
+| `dopublic` | `--address dopublic` | Value retrieved from `http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address` will be used |
+| `azprivate` | `--address azprivate` | Value retrieved from `http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/privateIpAddress?api-version=2017-08-01&format=text` will be used |
+| `azpublic` | `--address azpublic` | Value retrieved from `http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-08-01&format=text` will be used |
+| `gceinternal` | `--address gceinternal` | Value retrieved from `http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip` will be used |
+| `gceexternal` | `--address gceexternal` | Value retrieved from `http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip` will be used |
+| `packetlocal` | `--address packetlocal` | Value retrieved from `https://metadata.packet.net/2009-04-04/meta-data/local-ipv4` will be used |
+| `packetpublic` | `--address packetpublic` | Value retrieved from `https://metadata.packet.net/2009-04-04/meta-data/public-ipv4` will be used |
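+
+For example, a node running on an AWS EC2 instance could be registered with its public address for the cluster and its private address for inter-host communication. A minimal sketch, assuming the node runs in EC2, of the options appended to the generated `docker run` command:
+
+```
+--address awspublic --internal-address awslocal
+```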
diff --git a/content/rancher/v2.x/en/admin-settings/removing-rancher/user-cluster-nodes/_index.md b/content/rancher/v2.x/en/admin-settings/removing-rancher/user-cluster-nodes/_index.md
index 5431793e941..35594a6e7b8 100644
--- a/content/rancher/v2.x/en/admin-settings/removing-rancher/user-cluster-nodes/_index.md
+++ b/content/rancher/v2.x/en/admin-settings/removing-rancher/user-cluster-nodes/_index.md
@@ -14,7 +14,7 @@ When removing nodes from your Rancher-launched cluster (provided that they are i
When cleaning nodes provisioned using Rancher, the following components are deleted based on the type of cluster node you're removing. 
-| Removed Component | [IaaS Nodes][1] | [Custom Nodes][2] | [Hosted Cluster][3] | [Imported Nodes][4] |
+| Removed Component | [Nodes Hosted by Infrastructure Provider][1] | [Custom Nodes][2] | [Hosted Cluster][3] | [Imported Nodes][4] |
| ------------------------------------------------------------------------------ | --------------- | ----------------- | ------------------- | ------------------- |
| The Rancher deployment namespace (`cattle-system` by default) | ✓ | ✓ | ✓ | ✓ |
| `serviceAccount`, `clusterRoles`, and `clusterRoleBindings` labeled by Rancher | ✓ | ✓ | ✓ | ✓ |
diff --git a/content/rancher/v2.x/en/backups/restorations/ha-restoration/_index.md b/content/rancher/v2.x/en/backups/restorations/ha-restoration/_index.md
index 221d4d74ff9..0ae9fc9ad89 100644
--- a/content/rancher/v2.x/en/backups/restorations/ha-restoration/_index.md
+++ b/content/rancher/v2.x/en/backups/restorations/ha-restoration/_index.md
@@ -9,26 +9,16 @@ This procedure describes how to use RKE to restore a snapshot of the Rancher Kub

## Restore Outline

-1. [Preparation](#1-preparation)
+
-   - Install utilities and create new or clean existing nodes to prepare for restore.
-2. [Place Snapshot and PKI Bundle](#2-place-snapshot-and-pki-bundle)
-   - Pick a node and place snapshot `.db` and `pki.bundle.tar.gz` files.
-
-3. [Configure RKE](#3-configure-rke)
-   - Configure RKE `cluster.yml`. Remove `addons:` section and point configuration to the clean nodes.
-
-4. [Restore Database](#4-restore-database)
-   - Run RKE command to restore the `etcd` database to a single node.
-
-5. [Bring Up the Cluster](#5-bring-up-the-cluster)
-   - Run RKE commands to bring up cluster one a single node. Clean up old nodes. Verify and add additional nodes.
+- [1. Preparation](#1-preparation)
+- [2. Place Snapshot and PKI Bundle](#2-place-snapshot-and-pki-bundle)
+- [3. Configure RKE](#3-configure-rke)
+- [4. Restore Database](#4-restore-database)
+- [5. Bring Up the Cluster](#5-bring-up-the-cluster)
+
### 1. Preparation
diff --git a/content/rancher/v2.x/en/cluster-provisioning/_index.md b/content/rancher/v2.x/en/cluster-provisioning/_index.md
index 808214f97e8..318cf361898 100644
--- a/content/rancher/v2.x/en/cluster-provisioning/_index.md
+++ b/content/rancher/v2.x/en/cluster-provisioning/_index.md
@@ -51,8 +51,7 @@ Options include:

- [Hosted Kubernetes Cluster](#hosted-kubernetes-cluster)
- [Rancher Launched Kubernetes](#rancher-launched-kubernetes)
-
-  - [Node Pools](#node-pools)
+  - [Nodes Hosted by an Infrastructure Provider](#nodes-hosted-by-an-infrastructure-provider)
  - [Custom Nodes](#custom-nodes)
- [Import Existing Cluster](#import-existing-cluster)

@@ -73,11 +72,11 @@ Alternatively, you can use Rancher to create a cluster on your own nodes, using

[Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/)

-#### Node Pools
+#### Nodes Hosted by an Infrastructure Provider

-Using Rancher, you can create pools of nodes based on a [node template]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates). This template defines the parameters used to launch nodes in your cloud providers. The cloud providers available for creating a node template are decided based on the [node drivers]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-drivers) active in the Rancher UI. The benefit of using a node pool is that if a node loses connectivity with the cluster, Rancher automatically replaces it, thus maintaining the expected cluster configuration.
+Using Rancher, you can create pools of nodes based on a [node template]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates). This template defines the parameters used to launch nodes in your cloud providers. The cloud providers available for creating a node template are decided based on the [node drivers]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-drivers) active in the Rancher UI. The benefit of using nodes hosted by an infrastructure provider is that if a node loses connectivity with the cluster, Rancher automatically replaces it, thus maintaining the expected cluster configuration.

-[Node Pools]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/)
+[Nodes Hosted by an Infrastructure Provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/)

#### Custom Nodes
diff --git a/content/rancher/v2.x/en/cluster-provisioning/cloning-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/cloning-clusters/_index.md
new file mode 100644
index 00000000000..a437724d2ab
--- /dev/null
+++ b/content/rancher/v2.x/en/cluster-provisioning/cloning-clusters/_index.md
@@ -0,0 +1,155 @@
+---
+title: Cloning Clusters
+weight: 2400
+---
+
+If you have a cluster in Rancher that you want to use as a template for creating similar clusters, you can use Rancher CLI to clone the cluster's configuration, edit it, and then use it to quickly launch the cloned cluster.
+
+## Caveats
+
+- Only [cluster types]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning) that interact with cloud hosts over API can be cloned. Cloning of imported clusters and custom clusters is not supported.
+
+  | Cluster Type | Cloneable? 
| 
+  | -------------------------------- | ------------- |
+  | [Hosted Kubernetes Providers][1] | ✓ |
+  | [Nodes Hosted by Infrastructure Provider][2] | ✓ |
+  | [Custom Cluster][3] | |
+  | [Imported Cluster][4] | |
+- During the process of duplicating a cluster, you will edit a config file full of cluster settings. However, we recommend editing only values explicitly listed in this document, as cluster duplication is designed for simple cluster copying, _not_ wide scale configuration changes. Editing other values may invalidate the config file, which will lead to cluster deployment failure.
+
+[1]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/
+[2]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/
+[3]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/
+[4]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/
+
+## Prerequisites
+
+Download and install [Rancher CLI]({{< baseurl >}}/rancher/v2.x/en/cli). Remember to [create an API bearer token]({{< baseurl >}}/rancher/v2.x/en/user-settings/api-keys) if necessary.
+
+
+## 1. Export Cluster Config
+
+Begin by using Rancher CLI to export the configuration for the cluster that you want to clone.
+
+1. Open Terminal and change your directory to the location of the Rancher CLI binary, `rancher`.
+
+1. Enter the following command to list the clusters managed by Rancher.
+
+
+        ./rancher cluster ls
+
+
+1. Find the cluster that you want to clone, and copy either its resource `ID` or `NAME` to your clipboard. From this point on, we'll refer to the resource `ID` or `NAME` as `<RESOURCE_ID>`, which is used as a placeholder in the next step.
+
+1. Enter the following command to export the configuration for your cluster.
+
+
+        ./rancher clusters export <RESOURCE_ID>
+
+
+    **Step Result:** The YAML for a cloned cluster prints to Terminal.
+
+1. Copy the YAML to your clipboard and paste it in a new file. Save the file as `cluster-template.yml` (or any other name, as long as it has a `.yml` extension).
+
+## 2. Modify Cluster Config
+
+Use your favorite text editor to modify the cluster configuration in `cluster-template.yml` for your cloned cluster.
+
+1. Open `cluster-template.yml` (or whatever you named your config) in your favorite text editor.
+
+    >**Warning:** Only edit the cluster config values explicitly called out below. Many of the values listed in this file are used to provision your cloned cluster, and editing their values may break the provisioning process.
+
+
+1. As depicted in one of the examples below, at the `<CLUSTER_NAME>` placeholder, replace your original cluster's name with a unique name (`<UNIQUE_CLUSTER_NAME>`). If your cloned cluster has a duplicate name, the cluster will not provision successfully. 
+{{% accordion id="gke" label="GKE" %}}
+```yml
+Version: v3
+clusters:
+  <UNIQUE_CLUSTER_NAME>: # ENTER UNIQUE NAME
+    dockerRootDir: /var/lib/docker
+    enableNetworkPolicy: false
+    googleKubernetesEngineConfig:
+      credential: |-
+        {
+        "type": "service_account",
+        "project_id": "gke-cluster-221300",
+        "private_key_id": "1d210afae352bc298bde1b3e680ec0c8b22cdd61"
+```
+{{% /accordion %}}
+{{% accordion id="eks" label="EKS" %}}
+```yml
+Version: v3
+clusters:
+  <UNIQUE_CLUSTER_NAME>: # ENTER UNIQUE NAME
+    amazonElasticContainerServiceConfig:
+      accessKey: 00000000000000000000
+      associateWorkerNodePublicIp: true
+      instanceType: t2.medium
+      maximumNodes: 3
+      minimumNodes: 1
+      region: us-west-2
+      secretKey: 0000000000000000000000000000000000000000
+    dockerRootDir: /var/lib/docker
+    enableNetworkPolicy: false
+```
+{{% /accordion %}}
+{{% accordion id="aks" label="AKS" %}}
+```yml
+Version: v3
+clusters:
+  <UNIQUE_CLUSTER_NAME>: # ENTER UNIQUE NAME
+    azureKubernetesServiceConfig:
+      adminUsername: azureuser
+      agentPoolName: rancher
+      agentVmSize: Standard_D5_v2
+      clientId: 00000000-0000-0000-0000-000000000000
+      clientSecret: 00000000000000000000000000000000000000000000
+      count: 3
+      kubernetesVersion: 1.11.2
+      location: westus
+      osDiskSizeGb: 100
+      resourceGroup: docker-machine
+      sshPublicKeyContents: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJc2kDExgRaDLD
+```
+{{% /accordion %}}
+{{% accordion id="ec2" label="Nodes Hosted by Infrastructure Provider (EC2, Azure, or DigitalOcean)" %}}
+```yml
+Version: v3
+clusters:
+  <UNIQUE_CLUSTER_NAME>: # ENTER UNIQUE NAME
+    dockerRootDir: /var/lib/docker
+    enableNetworkPolicy: false
+    rancherKubernetesEngineConfig:
+      addonJobTimeout: 30
+      authentication:
+        strategy: x509
+      authorization: {}
+      bastionHost: {}
+      cloudProvider: {}
+      ignoreDockerVersion: true
+```
+{{% /accordion %}}
+
+1. **Nodes Hosted by Infrastructure Provider Only:** For each `nodePools` section, replace the original nodepool name with a unique name at the `<UNIQUE_NODEPOOL_NAME>` placeholder. If your cloned cluster has a duplicate nodepool name, the cluster will not provision successfully.
+
+    ```yml
+    nodePools:
+      <UNIQUE_NODEPOOL_NAME>:
+        clusterId: do
+        controlPlane: true
+        etcd: true
+        hostnamePrefix: mark-do
+        nodeTemplateId: do
+        quantity: 1
+        worker: true
+    ```
+
+1. When you're done, save and close the configuration.
+
+## 3. Launch Cloned Cluster
+
+Move `cluster-template.yml` into the same directory as the Rancher CLI binary. Then run this command:
+
+    ./rancher up --file cluster-template.yml
+
+**Result:** Your cloned cluster begins provisioning. Enter `./rancher cluster ls` to confirm. You can also log into the Rancher UI and open the **Global** view to watch your provisioning cluster's progress.
\ No newline at end of file
diff --git a/content/rancher/v2.x/en/cluster-provisioning/custom-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/custom-clusters/_index.md
index 0f6b7d9237e..84c184fc6fc 100644
--- a/content/rancher/v2.x/en/cluster-provisioning/custom-clusters/_index.md
+++ b/content/rancher/v2.x/en/cluster-provisioning/custom-clusters/_index.md
@@ -3,6 +3,6 @@ title: Custom Cluster
weight: 2210
---

-If you don't want to host your Kubernetes cluster in a [hosted kubernetes provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters) or provision nodes through Rancher, you can use the _custom cluster_ option to create a Kubernetes cluster in on-premise bare-metal servers, on-premise virtual machines, or in _any_ IaaS provider. 
+If you don't want to host your Kubernetes cluster in a [hosted kubernetes provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters) or provision nodes through Rancher, you can use the _custom cluster_ option to create a Kubernetes cluster on on-premise bare-metal servers, on-premise virtual machines, or on _any_ nodes hosted by an infrastructure provider.

In this scenario, you'll bring the nodes yourself, and then configure them to meet Rancher's [requirements]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#requirements). Then, use the [Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/) install option to setup your cluster.
diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md
index 5c47482aae3..69c6aeeb8fb 100644
--- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md
+++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md
@@ -5,7 +5,7 @@ weight: 2100

You can use Rancher to create clusters in a hosted Kubernetes provider, such as Google GKE.

-In this use case, Rancher sends a request to a hosted provider using the provider's API. The provider then provisions and hosts the cluster for you. When the cluster finishes building, you can manage it from the Rancher UI along with clusters you've provisioned that are hosted on-premise or in an IaaS, all from the same UI.
+In this use case, Rancher sends a request to a hosted provider using the provider's API. The provider then provisions and hosts the cluster for you. When the cluster finishes building, you can manage it from the Rancher UI along with clusters you've provisioned that are hosted on-premise or by an infrastructure provider, all from the same UI.

Rancher supports the following Kubernetes providers:
diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md
index f80f354049c..5534f6c5f0b 100644
--- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md
+++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md
@@ -7,56 +7,119 @@ aliases:
---
## Objectives
-1. [Create an account with appropriate permissions](#give-appropriate-permissions)
+
-   - Create (or give an existing) user appropriate permissions to create an EKS cluster.
+- [1. Give Appropriate Permissions](#1-give-appropriate-permissions)
+- [2. Create Access Key and Secret Key](#2-create-access-key-and-secret-key)
+- [3. Create the EKS Cluster](#3-create-the-eks-cluster)

-2. [Create an access key and secret key](#create-access-key-and-secret-key) - Create an access key and secret key to access Amazon Web Services (AWS) resources from Rancher.
+

-3. [Create the EKS Cluster](#create-the-eks-cluster)
-   - Using the AWS account, create your Amazon Elastic Container Service for Kubernetes (EKS) cluster in Rancher.
-
-## Give Appropriate Permissions
+## 1. Give Appropriate Permissions

Make sure that the account you will be using to create the EKS cluster has the appropriate permissions. Referring to the official [EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/IAM_policies.html) for details.

-## Create Access Key and Secret Key
+## 2. 
Create Access Key and Secret Key

-Use AWS to create an access key and client secret.
+Use AWS to create an access key and client secret for the IAM account used in [1. Give Appropriate Permissions](#1-give-appropriate-permissions).

-1. In the AWS Console, go to the **IAM** service.
+For instructions on how to create these keys, see the AWS documentation [Managing Access Keys: To create, modify, or delete a user's access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey).

-2. Select **Users**.
+## 3. Create the EKS Cluster

-3. Find the user you wish to create the EKS cluster with. Select the user.
-
-4. Click **Security Credentials**.
-
-5. Click **Create access key**.
-
-6. Record the **Access key ID** and **Secret access key**. You will need to use these in Rancher to create your EKS cluster.
-
-## Create the EKS Cluster
-
-Use {{< product >}} to set up and configure your Kubernetes cluster.
+Use Rancher to set up and configure your Kubernetes cluster.

1. From the **Clusters** page, click **Add Cluster**.

-2. Choose **Amazon EKS**.
+1. Choose **Amazon EKS**.

-3. Enter a **Cluster Name**.
+1. Enter a **Cluster Name**.

-4. {{< step_create-cluster_member-roles >}}
+1. {{< step_create-cluster_member-roles >}}

-5. Enter your **Access Key**.
+1. Configure **Account Access** for the EKS cluster. Complete each drop-down and field using the information obtained in [2. Create Access Key and Secret Key](#2-create-access-key-and-secret-key).

-6. Enter your **Secret Key**
+   | Setting | Description |
+   | ---------- | -------------------------------------------------------------------------------------------------------------------- |
+   | Region | From the drop-down choose the geographical region in which to build your cluster. |
+   | Access Key | Enter the access key that you created in [2. Create Access Key and Secret Key](#2-create-access-key-and-secret-key). |
+   | Secret Key | Enter the secret key that you created in [2. Create Access Key and Secret Key](#2-create-access-key-and-secret-key). |
+
+1. Click **Next: Select Service Role**. Then choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html).

-7. Click **Next: Authenticate & configure nodes**.
+   Service Role | Description
   -------------|---------------------------
   Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster.
   Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you've already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role).

-8. Specify any additional options (such as instance type or minimum and maximum number of nodes). Then click **Create**.
+1. Click **Next: Select VPC and Subnet**.
+
+1. Choose an option for **Public IP for Worker Nodes**. Your selection for this option determines what options are available for **VPC & Subnet**.
+
+   Option | Description
+   -------|------------
+   Yes | When your cluster nodes are provisioned, they're assigned both a private and a public IP address.
+   No: Private IPs only | When your cluster nodes are provisioned, they're assigned only a private IP address. 
If you choose this option, you must also choose a **VPC & Subnet** that allows your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane.
+
+1. Now choose a **VPC & Subnet**. Follow one of the sets of instructions below based on your selection from the previous step.
+
+   Amazon Documentation:
+
+   - [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
+   - [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
+
+   {{% accordion id="yes" label="Public IP for Worker Nodes—Yes" %}}
+If you choose to assign a public IP address to your cluster's worker nodes, you have the option of choosing between a VPC that's automatically generated by Rancher (i.e., **Standard: Rancher generated VPC and Subnet**), or a VPC that you've already created with AWS (i.e., **Custom: Choose from your existing VPC and Subnets**). Choose the option that best fits your use case.
+
+1. Choose a **VPC and Subnet** option.
+
+   Option | Description
+   -------|------------
+   Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC and Subnet.
+   Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html). If you choose this option, complete the remaining steps below.
+
+1. If you're using **Custom: Choose from your existing VPC and Subnets**:
+
+   (If you're using **Standard**, skip to [step 11](#select-instance-options))
+
+   1. Make sure **Custom: Choose from your existing VPC and Subnets** is selected.
+
+   1. From the drop-down that displays, choose a VPC.
+
+   1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays.
+
+   1. Click **Next: Select Security Group**.
+   {{% /accordion %}}
+   {{% accordion id="no" label="Public IP for Worker Nodes—No: Private IPs only" %}}
+If you chose this option, you must also choose a **VPC & Subnet** that allows your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane. Follow the steps below.
+
+>**Tip:** When using only private IP addresses, you can provide your nodes internet access by creating a VPC constructed with two subnets, a private set and a public set. The private set should have its route tables configured to point toward a NAT in the public set. For more information on routing traffic from private subnets, please see the [official AWS documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html).
+
+   1. From the drop-down that displays, choose a VPC.
+
+   1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays.
+
+   1. Click **Next: Select Security Group**.
+   {{% /accordion %}}
+
+1. Choose a **Security Group**. See the documentation below on how to create one.
+
+   Amazon Documentation:
+
+   - [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
+   - [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group)
+
+1. Click **Select Instance Options**, and then edit the node options available. 
+ + Option | Description + -------|------------ + Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning. + Custom AMI Override | If you want to use a custom [Amazon Machine Image](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html#creating-an-ami) (AMI), specify it here. + Minimum ASG Size | The minimum number of instances that your cluster will scale to during low traffic, as controlled by [Amazon Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html). + Maximum ASG Size | The maximum number of instances that your cluster will scale to during high traffic, as controlled by [Amazon Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html). + +1. Click **Create**. {{< result_create-cluster >}} + diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/_index.md index 049ce3d7e74..f64297f1e8e 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/_index.md @@ -7,17 +7,17 @@ If you don't want to use a hosted Kubernetes provider, you can have Rancher laun - Bare-metal servers - On-premise virtual machines -- IaaS-hosted virtual machines +- Virtual machines hosted by an infrastructure provider RKE launched clusters are separated into two categories: -- [Node Pools]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/): +- [Nodes Hosted by an Infrastructure Provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/): - Using Rancher, you can create pools of nodes based on a [node template]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates). This node template defines the parameters you want to use to launch nodes in your cloud providers. The available cloud providers to create a node template are decided based on active [node drivers]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-drivers). The benefit of using a node pool is that if a node loses connectivity with the cluster, Rancher will automatically create another node to join the cluster to ensure that the count of the node pool is as expected. + Using Rancher, you can create pools of nodes based on a [node template]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates). This node template defines the parameters you want to use to launch nodes in your cloud providers. The available cloud providers to create a node template are decided based on active [node drivers]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-drivers). The benefit of using a node hosted by an infrastructure provider is that if a node loses connectivity with the cluster, Rancher will automatically create another node to join the cluster to ensure that the count of the node pool is as expected. - [Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/): - For use cases where you want to provision bare-metal servers, on-premise virtual machines, or bring virtual machines that are already exist in a cloud provider. With this option, you will run a Rancher agent Docker container on the machine. 
+   For use cases where you want to provision bare-metal servers, on-premise virtual machines, or bring virtual machines that already exist in a cloud provider. With this option, you will run a Rancher agent Docker container on the machine.

>**Note:** If you want to reuse a node from a previous custom cluster, [clean the node]({{< baseurl >}}/rancher/v2.x/en/admin-settings/removing-rancher/rancher-cluster-nodes/) before using it in a cluster again. If you reuse a node that hasn't been cleaned, cluster provisioning may fail.
diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md
index f1802a42553..9912d70255d 100644
--- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md
+++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md
@@ -8,7 +8,7 @@ aliases:

## Custom Nodes

-Use Rancher to create a Kubernetes cluster on your on-premise bare metal servers. This option creates a cluster using a combination of [Docker Machine](https://docs.docker.com/machine/) and RKE, which is Rancher's own lightweight Kubernetes installer. In addition to bare metal servers, RKE can also create clusters on _any_ IaaS providers by integrating with node drivers.
+Use Rancher to create a Kubernetes cluster on your on-premise bare metal servers. This option creates a cluster using a combination of [Docker Machine](https://docs.docker.com/machine/) and RKE, which is Rancher's own lightweight Kubernetes installer. In addition to bare metal servers, RKE can also create clusters on _any_ infrastructure provider by integrating with node drivers.

To use this option you'll need access to servers you intend to use as your Kubernetes cluster. Provision each server according to Rancher [requirements](#requirements), which includes some hardware specifications and Docker. After you install Docker on each server, run the command provided in the Rancher UI to turn each server into a Kubernetes node.

@@ -72,8 +72,9 @@ Use {{< product >}} to clone your Linux host and configure them as Kubernetes no

>- Using Windows nodes as Kubernetes workers? See [Node Configuration]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/#node-configuration).
>- Bare-Metal Server Reminder: If you plan on dedicating bare-metal servers to each role, you must provision a bare-metal server for each role (i.e. provision multiple bare-metal servers).

-8. **Optional**: Add **Labels** to your cluster nodes to help schedule workloads later.
+8. **Optional**: Click **Show advanced options** to specify IP address(es) to use when registering the node, override the hostname of the node, or add labels to the node.

+   [Rancher Agent Options]({{< baseurl >}}/rancher/v2.x/en/admin-settings/agent-options/) 
[Kubernetes Documentation: Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) 9. Copy the command displayed on screen to your clipboard. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/_index.md index 9fbdb7e18a6..3ab3fabc935 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/_index.md @@ -1,5 +1,5 @@ --- -title: Nodes hosted in an Infrastructure Provider +title: Nodes Hosted in an Infrastructure Provider weight: 2205 aliases: - /rancher/v2.x/en/concepts/global-configuration/node-drivers/ diff --git a/content/rancher/v2.x/en/faq/_index.md b/content/rancher/v2.x/en/faq/_index.md index 534424ce53f..63c03de071a 100644 --- a/content/rancher/v2.x/en/faq/_index.md +++ b/content/rancher/v2.x/en/faq/_index.md @@ -13,7 +13,7 @@ See [Technical FAQ]({{< baseurl >}}/rancher/v2.x/en/faq/technical/), for frequen #### What does it mean when you say Rancher v2.0 is built on Kubernetes? -Rancher v2.0 is a complete container management platform built on 100% on Kubernetes leveraging its Custom Resource and Controller framework. All features are written as a CustomResourceDefinition (CRD) which extends the existing Kubernetes API and can leverage native features such as RBAC. +Rancher v2.0 is a complete container management platform built 100% on Kubernetes leveraging its Custom Resource and Controller framework. All features are written as a CustomResourceDefinition (CRD) which extends the existing Kubernetes API and can leverage native features such as RBAC. #### Do you plan to implement upstream Kubernetes, or continue to work on your own fork? diff --git a/content/rancher/v2.x/en/faq/technical/_index.md b/content/rancher/v2.x/en/faq/technical/_index.md index 86103435ecb..e82d4ac0ce2 100644 --- a/content/rancher/v2.x/en/faq/technical/_index.md +++ b/content/rancher/v2.x/en/faq/technical/_index.md @@ -119,3 +119,20 @@ A node is required to have a static IP configured (or a reserved IP via DHCP). I When the IP address of the node changed, Rancher lost connection to the node, so it will be unable to clean the node properly. See [Cleaning cluster nodes]({{< baseurl >}}/rancher/v2.x/en/faq/cleaning-cluster-nodes/) to clean the node. When the node is removed from the cluster, and the node is cleaned, you can readd the node to the cluster. + +### How can I add additional arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster? + +You can add additional arguments/binds/environment variables via the [Config File]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables]({{< baseurl >}}/rke/v0.1.x/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls]({{< baseurl >}}/rke/v0.1.x/en/example-yamls/). + +### Why does it take 5+ minutes for a pod to be rescheduled when a node has failed? 
+
+This is due to a combination of the following default Kubernetes settings:
+
+* kubelet
+  * `node-status-update-frequency`: Specifies how often kubelet posts node status to master (default 10s)
+* kube-controller-manager
+  * `node-monitor-period`: The period for syncing NodeStatus in NodeController (default 5s)
+  * `node-monitor-grace-period`: Amount of time which we allow running Node to be unresponsive before marking it unhealthy (default 40s)
+  * `pod-eviction-timeout`: The grace period for deleting pods on failed nodes (default 5m0s)
+
+See [Kubernetes: kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) and [Kubernetes: kube-controller-manager](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/) for more information on these settings.
diff --git a/content/rancher/v2.x/en/installation/air-gap-installation/install-rancher/_index.md b/content/rancher/v2.x/en/installation/air-gap-installation/install-rancher/_index.md
index 7b7490b3938..2adfc00ff8c 100644
--- a/content/rancher/v2.x/en/installation/air-gap-installation/install-rancher/_index.md
+++ b/content/rancher/v2.x/en/installation/air-gap-installation/install-rancher/_index.md
@@ -64,7 +64,7 @@ Instead of installing the `tiller` agent on the cluster, render the installs on

### Initialize Helm Locally

-Skip the [Initialize Helm (Install Tiller)]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/#helm-init) and initialize `helm` locally on a system that has internet access.
+Skip the [Initialize Helm (Install Tiller)]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/) and initialize `helm` locally on a system that has internet access.

```plain
helm init -c
@@ -80,9 +80,9 @@ Fetch and render the `helm` charts on a system that has internet access.

#### Cert-Manager

-If you are installing Rancher with Rancher Self-Signed certificates you will need to install 'cert-manager' on your cluster. If you are installing your own certificates you may skip this section.
+If you are installing Rancher with Rancher self-signed certificates, you will need to install 'cert-manager' on your cluster. If you are installing your own certificates, you may skip this section.

-Fetch the latest `stable/cert-manager` chart. This will pull down the chart and save it in the current directory as a `.tgz` file.
+Fetch the latest `cert-manager` chart from the [official Helm chart repository](https://github.com/helm/charts/tree/master/stable).

```plain
helm fetch stable/cert-manager
```
@@ -98,16 +98,16 @@ helm template ./cert-manager-<version>.tgz --output-dir . \

#### Rancher

-Install the Rancher chart repo.
+Add the Helm chart repository that contains charts to install Rancher. Replace `<CHART_REPO>` with the [repository that you're using]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories) (i.e. `latest` or `stable`).

```plain
-helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
+helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
```

-Fetch the latest `rancher-stable/rancher` chart. This will pull down the chart and save it in the current directory as a `.tgz` file.
+Fetch the latest Rancher chart. This will pull down the chart and save it in the current directory as a `.tgz` file. Replace `<CHART_REPO>` with the repo you're using (`latest` or `stable`).

```plain
-helm fetch rancher-stable/rancher
+helm fetch rancher-<CHART_REPO>/rancher
```

Render the template with the options you would use to install the chart. See [Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/) for details on the various options. Remember to set the `rancherImage` option to pull the image from your private registry. This will create a `rancher` directory with the Kubernetes manifest files.
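+As a sketch only (not the exact command from these docs), rendering the Rancher chart might look like the following, where `<version>` is the fetched chart version and the registry address is a placeholder for your private registry:
+
+```plain
+helm template ./rancher-<version>.tgz --output-dir . \
+--name rancher \
+--namespace cattle-system \
+--set hostname=rancher.my.org \
+--set rancherImage=registry.yourdomain.com:5000/rancher/rancher
+```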
diff --git a/content/rancher/v2.x/en/installation/ha/_index.md b/content/rancher/v2.x/en/installation/ha/_index.md
index 24fe7a7b010..77eacd64a7a 100644
--- a/content/rancher/v2.x/en/installation/ha/_index.md
+++ b/content/rancher/v2.x/en/installation/ha/_index.md
@@ -7,7 +7,7 @@ For production environments, we recommend installing Rancher in a high-availabil

This procedure walks you through setting up a 3-node cluster with RKE and installing the Rancher chart with the Helm package manager.

-> **Important:** For the best performance, we recommend this Kubernetes cluster to be dedicated only to run Rancher.
+> **Important:** For the best performance, we recommend this Kubernetes cluster to be dedicated only to run Rancher. After the Kubernetes cluster running Rancher is set up, you can [create or import clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads.

## Recommended Architecture
diff --git a/content/rancher/v2.x/en/installation/ha/helm-init/_index.md b/content/rancher/v2.x/en/installation/ha/helm-init/_index.md
index 14f3f7b86cd..ea51f9ddaea 100644
--- a/content/rancher/v2.x/en/installation/ha/helm-init/_index.md
+++ b/content/rancher/v2.x/en/installation/ha/helm-init/_index.md
@@ -3,17 +3,18 @@ title: 3 - Initialize Helm (Install tiller)
weight: 195
---

-Helm is the package management tool of choice for Kubernetes. Helm charts provide templating syntax for Kubernetes YAML manifest documents. With Helm we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at [https://helm.sh/](https://helm.sh/).
+
+Helm is the package management tool of choice for Kubernetes. Helm "charts" provide templating syntax for Kubernetes YAML manifest documents. With Helm we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at [https://helm.sh/](https://helm.sh/).
To be able to use Helm, the server-side component `tiller` needs to be installed on your cluster.

> **Note:** For systems without direct internet access see [Helm - Air Gap]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#helm) for install details.

-### Initialize Helm on the Cluster
+### Install Tiller on the Cluster

Helm installs the `tiller` service on your cluster to manage charts. Since RKE enables RBAC by default we will need to use `kubectl` to create a `serviceaccount` and `clusterrolebinding` so `tiller` has permission to deploy to the cluster.

* Create the `ServiceAccount` in the `kube-system` namespace.
-* Create the `ClusterRoleBinding` to give the `tiller` service account access to the cluster.
+* Create the `ClusterRoleBinding` to give the `tiller` account access to the cluster. 
+* Finally use `helm` to install the `tiller` service

```plain
kubectl -n kube-system create serviceaccount tiller
@@ -24,6 +25,14 @@ kubectl create clusterrolebinding tiller \

helm init --service-account tiller

+# Users in China: You will need to specify a tiller image in order to initialize tiller.
+# The list of tiller image tags is available here: https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085.
+# When initializing tiller, you'll need to pass in --tiller-image:
+
+helm init --service-account tiller \
+--tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:<tiller-version>
```

> **Note:** This `tiller` install has full cluster access, which should be acceptable if the cluster is dedicated to Rancher server. Check out the [helm docs](https://docs.helm.sh/using_helm/#role-based-access-control) for restricting `tiller` access to suit your security requirements.

+### Test your Tiller installation
+
+Run the following command to verify the installation of `tiller` on your cluster:
+
+```
+kubectl -n kube-system rollout status deploy/tiller-deploy
+Waiting for deployment "tiller-deploy" rollout to finish: 0 of 1 updated replicas are available...
+deployment "tiller-deploy" successfully rolled out
+```
+
+And run the following command to validate Helm can talk to the `tiller` service:
+
+```
+helm version
+Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
+Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
+```
+
### Issues or errors?

See the [Troubleshooting]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/troubleshooting/) page.
diff --git a/content/rancher/v2.x/en/installation/ha/helm-init/troubleshooting/_index.md b/content/rancher/v2.x/en/installation/ha/helm-init/troubleshooting/_index.md
index b050fb6b003..c73013b5cb8 100644
--- a/content/rancher/v2.x/en/installation/ha/helm-init/troubleshooting/_index.md
+++ b/content/rancher/v2.x/en/installation/ha/helm-init/troubleshooting/_index.md
@@ -20,4 +20,4 @@ helm version --server
Error: could not find tiller
```

-When you have confirmed that `tiller` has been removed, please follow the steps provided in [Initialize Helm on the cluster]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/#initialize-helm-on-the-cluster) to install `tiller` with the correct `ServiceAccount`.
+When you have confirmed that `tiller` has been removed, please follow the steps provided in [Initialize Helm (Install tiller)]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/) to install `tiller` with the correct `ServiceAccount`.
diff --git a/content/rancher/v2.x/en/installation/ha/helm-rancher/_index.md b/content/rancher/v2.x/en/installation/ha/helm-rancher/_index.md
index ab7f4654d31..e813341cf31 100644
--- a/content/rancher/v2.x/en/installation/ha/helm-rancher/_index.md
+++ b/content/rancher/v2.x/en/installation/ha/helm-rancher/_index.md
@@ -3,38 +3,27 @@ title: 4 - Install Rancher
weight: 200
---

-Rancher installation is now managed using the Helm package manager for Kubernetes. 
Use `helm` to install the prerequisite and Rancher charts.
+Rancher installation is managed using the Helm package manager for Kubernetes. Use `helm` to install the prerequisite charts and the Rancher chart.

> **Note:** For systems without direct internet access see [Installing Rancher - Air Gap]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/) for install details.

-### Add the Chart Repo
+### Add the Helm Chart Repository

-Use `helm repo add` to add the Rancher chart repository.
+Use the `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories).
+
+Replace `<CHART_REPO>` with the Helm chart repository that you want to use (i.e. `latest` or `stable`).

```
-helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
-```
-
-## Chart Versioning Notes
-
-Up until the initial helm chart release for v2.1.0, the helm chart version matched the Rancher version (i.e `appVersion`).
-
-Since there are times where the helm chart will require changes without any changes to the Rancher version, we have moved to a `yyyy.mm.` helm chart version.
-
-Run `helm search rancher` to view which Rancher version will be launched for the specific helm chart version.
-
-```
-NAME                     CHART VERSION   APP VERSION   DESCRIPTION
-rancher-stable/rancher   2018.10.1       v2.1.0        Install Rancher Server to manage Kubernetes clusters acro...
+helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
```

### Install cert-manager

-> **Note:** cert-manager is only required for Rancher generated and LetsEncrypt issued certificates. You may skip this step if you are bringing your own certificates and using the `ingress.tls.source=secret` option.
+> **Note:** cert-manager is only required for Rancher generated and LetsEncrypt issued certificates. You may skip this step if you are bringing your own certificates or using the `ingress.tls.source=secret` option.

-Rancher relies on [cert-manager](https://github.com/kubernetes/charts/tree/master/stable/cert-manager) from the Kubernetes Helm stable catalog to issue self-signed or LetsEncrypt certificates.
+Rancher relies on [cert-manager](https://github.com/kubernetes/charts/tree/master/stable/cert-manager) from the official Kubernetes Helm chart repository to issue self-signed or LetsEncrypt certificates.

-Install `cert-manager` from the Helm stable catalog.
+Install `cert-manager` from the Kubernetes Helm chart repository.

```
helm install stable/cert-manager \
@@ -58,12 +47,13 @@ There are three options for the source of the certificate.

The default is for Rancher to generate a CA and use the `cert-manager` to issue the certificate for access to the Rancher server interface.

-The only requirement is to set the `hostname` to the DNS name you pointed at your load balancer.
+- Replace `<CHART_REPO>` with the repository that you configured in [Add the Helm Chart Repository](#add-the-helm-chart-repository) (i.e. `latest` or `stable`).
+- Set the `hostname` to the DNS name you pointed at your load balancer.

>**Using Air Gap?** [Set the `rancherImage` option]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#install-rancher-using-private-registry) in your command, pointing toward your private registry. 
```
-helm install rancher-stable/rancher \
+helm install rancher-<CHART_REPO>/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org
@@ -73,12 +63,13 @@ helm install rancher-stable/rancher \

Use [LetsEncrypt](https://letsencrypt.org/)'s free service to issue trusted SSL certs. This configuration uses http validation so the Load Balancer must have a Public DNS record and be accessible from the internet.

-Set `hostname`, `ingress.tls.source=letsEncrypt` and LetsEncrypt options.
+- Replace `<CHART_REPO>` with the repository that you configured in [Add the Helm Chart Repository](#add-the-helm-chart-repository) (i.e. `latest` or `stable`).
+- Set `hostname`, `ingress.tls.source=letsEncrypt` and LetsEncrypt options.

>**Using Air Gap?** [Set the `rancherImage` option]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#install-rancher-using-private-registry) in your command, pointing toward your private registry.

```
-helm install rancher-stable/rancher \
+helm install rancher-<CHART_REPO>/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
@@ -92,12 +83,13 @@ Create Kubernetes secrets from your own certificates for Rancher to use.

> **Note:** The common name for the cert will need to match the `hostname` option or the ingress controller will fail to provision the site for Rancher.

-Set `hostname` and `ingress.tls.source=secret`.
+- Replace `<CHART_REPO>` with the repository that you configured in [Add the Helm Chart Repository](#add-the-helm-chart-repository) (i.e. `latest` or `stable`).
+- Set `hostname` and `ingress.tls.source=secret`.

> **Note:** If you are using a Private CA signed cert, add `--set privateCA=true`

```
-helm install rancher-stable/rancher \
+helm install rancher-<CHART_REPO>/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
diff --git a/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md b/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md
index c7a12773f3f..d9c3eb50cb1 100644
--- a/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md
+++ b/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md
@@ -29,7 +29,7 @@ weight: 276
| `debug` | false | `bool` - set debug flag on rancher server |
| `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials |
| `proxy` | "" | `string` - HTTP[S] proxy server for Rancher |
-| `noProxy` | "localhost,127.0.0.1" | `string` - comma separated list of hostnames or ip address not to use the proxy |
+| `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16" | `string` - comma separated list of hostnames or ip address not to use the proxy |
| `resources` | {} | `map` - rancher pod resource requests & limits |
| `rancherImage` | "rancher/rancher" | `string` - rancher image source |
| `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag |
@@ -59,7 +59,7 @@ Add your IP exceptions to the `noProxy` list. 
Make sure you add the Service clus ```plain --set proxy="http://:@:/" ---set noProxy="127.0.0.1,localhost,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16" +--set noProxy="127.0.0.0/8\,10.0.0.0/8\,172.16.0.0/12\,192.168.0.0/16" ``` ### Additional Trusted CAs diff --git a/content/rancher/v2.x/en/installation/ha/helm-rancher/troubleshooting/_index.md b/content/rancher/v2.x/en/installation/ha/helm-rancher/troubleshooting/_index.md index f841c0e8992..313ae5fb11b 100644 --- a/content/rancher/v2.x/en/installation/ha/helm-rancher/troubleshooting/_index.md +++ b/content/rancher/v2.x/en/installation/ha/helm-rancher/troubleshooting/_index.md @@ -62,7 +62,7 @@ pod/rancher-784d94f59b-vgqzh 1/1 Running 0 10m Use `kubectl` and the pod name to list the logs from the pod. ``` -kubectl -n cattle-namespace logs -f rancher-784d94f59b-vgqzh +kubectl -n cattle-system logs -f rancher-784d94f59b-vgqzh ``` ### Cert CN is "Kubernetes Ingress Controller Fake Certificate" diff --git a/content/rancher/v2.x/en/installation/ha/kubernetes-rke/_index.md b/content/rancher/v2.x/en/installation/ha/kubernetes-rke/_index.md index b1b30c8b728..851052f68c3 100644 --- a/content/rancher/v2.x/en/installation/ha/kubernetes-rke/_index.md +++ b/content/rancher/v2.x/en/installation/ha/kubernetes-rke/_index.md @@ -118,4 +118,4 @@ Save a copy of the `kube_config_rancher-cluster.yml` and `rancher-cluster.yml` f See the [Troubleshooting]({{< baseurl >}}/rancher/v2.x/en/installation/ha/kubernetes-rke/troubleshooting/) page. -### [Next: Initialize Helm]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/) +### [Next: Initialize Helm (Install tiller)]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/) diff --git a/content/rancher/v2.x/en/installation/requirements/_index.md b/content/rancher/v2.x/en/installation/requirements/_index.md index 90f914047d1..d3d4a50194f 100644 --- a/content/rancher/v2.x/en/installation/requirements/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/_index.md @@ -78,7 +78,7 @@ Each node used (either for the Single Node Install, High Availability (HA) Insta

Port requirements

-When deploying Rancher in an HA cluster, certain ports on your nodes must be open to allow communication with Rancher. The ports that must be open change according to the type of machines hosting your cluster nodes. For example, if your are deploying Rancher on nodes hosted by an IaaS, port `22` must be open for SSH. The following diagram depicts the ports that are opened for each [cluster type]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning).
+When deploying Rancher in an HA cluster, certain ports on your nodes must be open to allow communication with Rancher. The ports that must be open change according to the type of machines hosting your cluster nodes. For example, if you are deploying Rancher on nodes hosted by an infrastructure provider, port `22` must be open for SSH. The following diagram depicts the ports that are opened for each [cluster type]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning).

Cluster Type Port Requirements
![Basic Port Requirements]({{< baseurl >}}/img/rancher/port-communications.svg)
diff --git a/content/rancher/v2.x/en/installation/server-tags/_index.md b/content/rancher/v2.x/en/installation/server-tags/_index.md
index 2dde9a9e39b..589c3844a80 100644
--- a/content/rancher/v2.x/en/installation/server-tags/_index.md
+++ b/content/rancher/v2.x/en/installation/server-tags/_index.md
@@ -1,13 +1,81 @@
---
-title: Server Tags
+title: Choosing a Version of Rancher
weight: 230
---
-{{< product >}} Server is distributed as a Docker image, which have _tags_ attached to them. Tags are used to identify what version is included in the image. Rancher includes additional tags that point to a specific version. Remember that if you use the additional tags, you must explicitly pull a new version of that image tag. Otherwise it will use the cached image on the host.
-You can find Rancher images at [DockerHub](https://hub.docker.com/r/rancher/rancher/tags/).
+## Single Node Installs
-- `rancher/rancher:latest`: Our latest development release. These builds are validated through our CI automation framework. These releases are not recommended for production environments.
+When performing [single-node installs]({{< baseurl >}}/rancher/v2.x/en/installation/single-node), upgrades, or rollbacks, you can use _tags_ to install a specific version of Rancher.
-- `rancher/rancher:stable`: Our newest stable release. This tag is recommended for production.
+### Server Tags
-The `master` tag or any tag with a `-rc` or another suffix is meant for the {{< product >}} testing team to validate. You should not use these tags, as these builds are not officially supported.
+Rancher Server is distributed as a Docker image, which has tags attached to it. You can specify this tag when entering the command to deploy Rancher. Remember that if you use a tag without an explicit version (like `latest` or `stable`), you must explicitly pull a new version of that image tag (see the example after the table below). Otherwise, any image cached on the host will be used.
+
+| Tag | Description |
+| -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `rancher/rancher:latest` | Our latest development release. These builds are validated through our CI automation framework. These releases are not recommended for production environments. |
+| `rancher/rancher:stable` | Our newest stable release. This tag is recommended for production.
| +| `rancher/rancher:` | You can install specific versions of Rancher by using the tag from a previous release. See what's available at DockerHub. | + +
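Because `latest` and `stable` are floating tags, a host that already has one of them cached will keep running the older build until the tag is pulled again. A minimal sketch of refreshing a floating tag before redeploying (shown for `stable`; the same applies to `latest`):

```
# Explicitly pull the tag so the locally cached image is refreshed;
# otherwise the host reuses whatever image it already has.
docker pull rancher/rancher:stable

# Optional: confirm which image digest is now cached locally.
docker images --digests rancher/rancher
```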
+
+>**Note:** The `master` tag or any tag with `-rc` or another suffix is meant for the Rancher testing team to validate. You should not use these tags, as these builds are not officially supported.
+
+## High Availability Installs
+
+When installing, upgrading, or rolling back Rancher Server in a [high availability configuration]({{< baseurl >}}/rancher/v2.x/en/installation/ha/), Rancher Server is installed using a Helm chart on a Kubernetes cluster. Therefore, as you prepare to install or upgrade a high availability Rancher configuration, you must add a Helm chart repository that contains the charts for installing Rancher.
+
+### Helm Chart Repositories
+
+Rancher provides two different Helm chart repositories to choose from. We align our latest and stable Helm chart repositories with the Docker tags that are used for a single node installation. Therefore, the `rancher-latest` repository will contain charts for all the Rancher versions that have been tagged as `rancher/rancher:latest`. When a Rancher version has been promoted to the `rancher/rancher:stable` tag, it will get added to the `rancher-stable` repository.
+
+
+Type | Command to Add the Repo | Description of the Repo
+-----------|-----|-------------
+rancher-latest | `helm repo add rancher-latest https://releases.rancher.com/server-charts/latest` | Adds a repository of Helm charts for the latest versions of Rancher. We recommend using this repo for testing out new Rancher builds.
+rancher-stable | `helm repo add rancher-stable https://releases.rancher.com/server-charts/stable` | Adds a repository of Helm charts for older, stable versions of Rancher. We recommend using this repo for production environments.
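After adding one of the repositories above, it is worth verifying that it is configured and seeing which chart versions it serves. A minimal sketch, assuming Helm v2 syntax (the `-l` flag lists every chart version rather than only the newest):

```
# Confirm the repository was added.
helm repo list

# List every chart version the repository serves, along with
# the Rancher version (APP VERSION) each chart installs.
helm search rancher -l
```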
+
+Instructions on when to select these repos are available in [High Availability Install]({{< baseurl >}}/rancher/v2.x/en/installation/ha).
+
+> **Note:** The `rancher-latest` and `rancher-stable` Helm chart repositories were introduced after Rancher v2.1.0, so the `rancher-stable` repository contains some Rancher versions that were never marked as `rancher/rancher:stable`. The versions of Rancher that were tagged as `rancher/rancher:stable` prior to v2.1.0 are v2.0.4, v2.0.6, and v2.0.8. Post v2.1.0, all charts in the `rancher-stable` repository will correspond with any Rancher version tagged as `stable`.
+
+### Helm Chart Versions
+
+Up until the initial release of the Helm chart for Rancher v2.1.0, the version of the Helm chart matched the Rancher version (i.e. `appVersion`).
+
+Since there are times when the Helm chart will require changes without any changes to the Rancher version, we have moved to a versioning scheme using `yyyy.mm.` for the Helm charts.
+
+Run `helm search rancher` to view which Rancher version will be launched for your Helm chart version.
+
+```
+NAME CHART VERSION APP VERSION DESCRIPTION
+rancher-latest/rancher 2018.10.1 v2.1.0 Install Rancher Server to manage Kubernetes clusters acro...
+```
+
+### Switching to a different Helm Chart Repository
+
+After installing Rancher, if you want to change which Helm chart repository to install Rancher from, you will need to follow these steps.
+
+1. List the current Helm chart repositories.
+
+    ```
+    helm repo list
+
+    NAME URL
+    stable https://kubernetes-charts.storage.googleapis.com
+    rancher- https://releases.rancher.com/server-charts/
+    ```
+
+2. Remove the existing Helm chart repository that contains your charts to install Rancher, which will either be `rancher-stable` or `rancher-latest` depending on what you had initially added.
+
+    ```
+    helm repo remove rancher-
+    ```
+
+3. Add the Helm chart repository that you want to start installing Rancher from. Replace `` with the chart repository that you want to use (i.e. `latest` or `stable`).
+
+    ```
+    helm repo add rancher- https://releases.rancher.com/server-charts/
+    ```
+
+4. Continue to follow the steps to [upgrade Rancher]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/) from the new Helm chart repository.
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md
index 8926a33aeb5..8b6bd0d4c78 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md
@@ -5,6 +5,8 @@ weight: 2300

Using the Kubernetes [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) feature (HPA), you can configure your cluster to automatically scale the services it's running up or down.

+>**Note:** Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler.
+
### Why Use Horizontal Pod Autoscaler?

Using HPA, you can automatically scale the number of pods within a replication controller, deployment, or replica set up or down. HPA automatically scales the number of pods that are running for maximum efficiency.
Factors that affect the number of pods include:
@@ -20,11 +22,10 @@ HPA improves your services by:

### How HPA Works

-![HPA Schema]({{< baseurl >}}/img/rancher/horizontal-pod-autoscaler.svg)
+![HPA Schema]({{< baseurl >}}/img/rancher/horizontal-pod-autoscaler.jpg)

HPA is implemented as a control loop, with a period controlled by the `kube-controller-manager` flags below:
-
Flag | Default | Description |
---------|----------|----------|
`--horizontal-pod-autoscaler-sync-period` | `30s` | How often HPA audits resource/custom metrics in a deployment.
@@ -36,13 +37,13 @@ For full documentation on HPA, refer to the [Kubernetes Documentation](https://k

### Horizontal Pod Autoscaler API Objects

-HPA is an API resource in the Kubernetes `autoscaling` API group. The current stable version is `autoscaling/v1`, which only includes support for CPU autoscaling. To get additional support for scaling based on memory and custom metrics, use the beta version instead: `autoscaling/v2beta1`.
+HPA is an API resource in the Kubernetes `autoscaling` API group. The current stable version is `autoscaling/v1`, which only includes support for CPU autoscaling. To get additional support for scaling based on memory and custom metrics, use the beta version instead: `autoscaling/v2beta1`.

For more information about the HPA API object, see the [HPA GitHub Readme](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).

### kubectl Commands

-You can create, manage, and delete HPAs using kubectl:
+You can create, manage, and delete HPAs using kubectl:

- Creating HPA
@@ -98,113 +99,39 @@ Directive | Description
`targetAverageValue: 100Mi` | Indicates the deployment will scale pods up when the average running pod uses more than 100Mi of memory.
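For quick experiments, an HPA can also be created imperatively rather than from a manifest. A minimal sketch (the deployment name `hello-world` is a placeholder; `kubectl autoscale` creates an `autoscaling/v1` object, so it supports a CPU target only):

```
# Create a CPU-based HPA for an existing deployment.
kubectl autoscale deployment hello-world --min=1 --max=10 --cpu-percent=50

# Inspect the new HPA, then delete it when finished.
kubectl get hpa hello-world
kubectl delete hpa hello-world
```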
-### Installation - -Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements. - -#### Requirements - -Be sure that your Kubernetes cluster services are running with these flags at minimum: - -- kube-api: `requestheader-client-ca-file` -- kubelet: `read-only-port` at 10255 -- kube-controller: Optional, just needed if distinct values than default are required. - - - `horizontal-pod-autoscaler-downscale-delay: "5m0s"` - - `horizontal-pod-autoscaler-upscale-delay: "3m0s"` - - `horizontal-pod-autoscaler-sync-period: "30s"` - -For an RKE Kubernetes cluster definition, add this snippet in the `services` section. To add this snippet using the Rancher v2.0 UI, open the **Clusters** view and select **Ellipsis (...) > Edit** for the cluster in which you want to use HPA. Then, from **Cluster Options**, click **Edit as YAML**. Add the following snippet to the `services` section: - -``` -services: -... - kube-api: - extra_args: - requestheader-client-ca-file: "/etc/kubernetes/ssl/kube-ca.pem" - kube-controller: - extra_args: - horizontal-pod-autoscaler-downscale-delay: "5m0s" - horizontal-pod-autoscaler-upscale-delay: "1m0s" - horizontal-pod-autoscaler-sync-period: "30s" - kubelet: - extra_args: - read-only-port: 10255 -``` - -Once the Kubernetes cluster is configured and deployed, you can deploy metrics services. - ->**Note:** kubectl command samples in the sections that follow were tested in a cluster running Rancher v2.0.6 and Kubernetes v1.10.1. - #### Configuring HPA to Scale Using Resource Metrics -To create HPA resources based on resource metrics such as CPU and memory use, you need to deploy the `metrics-server` package in the `kube-system` namespace of your Kubernetes cluster. This deployment allows HPA to consume the `metrics.k8s.io` API. +Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler. Run the following commands to check if metrics are available in your installation: ->**Prerequisite:** You must be running kubectl 1.8 or later. +``` +$ kubectl top nodes +NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% +node-controlplane 196m 9% 1623Mi 42% +node-etcd 80m 4% 1090Mi 28% +node-worker 64m 3% 1146Mi 29% +$ kubectl -n kube-system top pods +NAME CPU(cores) MEMORY(bytes) +canal-pgldr 18m 46Mi +canal-vhkgr 20m 45Mi +canal-x5q5v 17m 37Mi +canal-xknnz 20m 37Mi +kube-dns-7588d5b5f5-298j2 0m 22Mi +kube-dns-autoscaler-5db9bbb766-t24hw 0m 5Mi +metrics-server-97bc649d5-jxrlt 0m 12Mi +$ kubectl -n kube-system logs -l k8s-app=metrics-server +I1002 12:55:32.172841 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:https://kubernetes.default.svc?kubeletHttps=true&kubeletPort=10250&useServiceAccount=true&insecure=true +I1002 12:55:32.172994 1 heapster.go:72] Metrics Server version v0.2.1 +I1002 12:55:32.173378 1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default.svc" and version +I1002 12:55:32.173401 1 configs.go:62] Using kubelet port 10250 +I1002 12:55:32.173946 1 heapster.go:128] Starting with Metric Sink +I1002 12:55:32.592703 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key) +I1002 12:55:32.925630 1 heapster.go:101] Starting Heapster API server... 
+[restful] 2018/10/02 12:55:32 log.go:33: [restful/swagger] listing is available at https:///swaggerapi +[restful] 2018/10/02 12:55:32 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/ +I1002 12:55:32.928597 1 serve.go:85] Serving securely on 0.0.0.0:443 +``` -1. Connect to your Kubernetes cluster using kubectl. - -1. Clone the GitHub `metrics-server` repo: - ``` - # git clone https://github.com/kubernetes-incubator/metrics-server - ``` - -1. Install the `metrics-server` package. - ``` - # kubectl create -f metrics-server/deploy/1.8+/ - ``` - -1. Check that `metrics-server` is running properly. Check the service pod and logs in the `kube-system` namespace. - - 1. Check the service pod for a status of `running`. Enter the following command: - ``` - # kubectl get pods -n kube-system - ``` - Then check for the status of `running`. - ``` - NAME READY STATUS RESTARTS AGE - ... - metrics-server-6fbfb84cdd-t2fk9 1/1 Running 0 8h - ... - ``` - 1. Check the service logs for service availability. Enter the following command: - ``` - # kubectl -n kube-system logs metrics-server-6fbfb84cdd-t2fk9 - ``` - Then review the log to confirm that that the `metrics-server` package is running. - {{% accordion id="metrics-server-run-check" label="Metrics Server Log Output" %}} - I0723 08:09:56.193136 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:'' - I0723 08:09:56.193574 1 heapster.go:72] Metrics Server version v0.2.1 - I0723 08:09:56.194480 1 configs.go:61] Using Kubernetes client with master "https://10.43.0.1:443" and version - I0723 08:09:56.194501 1 configs.go:62] Using kubelet port 10255 - I0723 08:09:56.198612 1 heapster.go:128] Starting with Metric Sink - I0723 08:09:56.780114 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key) - I0723 08:09:57.391518 1 heapster.go:101] Starting Heapster API server... - [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] listing is available at https:///swaggerapi - [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/ - I0723 08:09:57.394080 1 serve.go:85] Serving securely on 0.0.0.0:443 - {{% /accordion %}} - - -1. Check that the metrics api is accessible from kubectl. - - - If you are accessing the cluster directly, enter your Server URL in the kubectl config in the following format: `https://:6443`. - ``` - # kubectl get --raw /apis/metrics.k8s.io/v1beta1 - ``` - If the the API is working correctly, you should receive output similar to the output below. - ``` - {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]} - ``` - - - If you are accessing the cluster through Rancher, enter your Server URL in the kubectl config in the following format: `https:///k8s/clusters/`. Add the suffix `/k8s/clusters/` to API path. - ``` - # kubectl get --raw /k8s/clusters//apis/metrics.k8s.io/v1beta1 - ``` - If the the API is working correctly, you should receive output similar to the output below. 
- ``` - {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]} - ``` +If you have created your cluster in Rancher v2.0.6 or before, please refer to [Manual installation](#manual-installation) #### Configuring HPA to Scale Using Custom Metrics (Prometheus) @@ -293,210 +220,136 @@ For HPA to use custom metrics from Prometheus, package [k8s-prometheus-adapter]( {{% /accordion %}} -#### Assigning Additional Required Roles to Your HPA - -By default, HPA reads resource and custom metrics with the user `system:anonymous`. Assign `system:anonymous` the the `view-resource-metrics` and `view-custom-metrics` in the ClusterRole and ClusterRoleBindings manifests. These roles are used to access metrics. - -To do it, follow these steps: - -1. Configure kubectl to connect to your cluster. - -1. Copy the ClusterRole and ClusterRoleBinding manifest for the type of metrics you're using for your HPA. - {{% accordion id="cluster-role-resource-metrics" label="Resource Metrics: ApiGroups resource.metrics.k8s.io" %}} - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: view-resource-metrics - rules: - - apiGroups: - - metrics.k8s.io - resources: - - pods - - nodes - verbs: - - get - - list - - watch - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: view-resource-metrics - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: view-resource-metrics - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: User - name: system:anonymous - {{% /accordion %}} -{{% accordion id="cluster-role-custom-resources" label="Custom Metrics: ApiGroups custom.metrics.k8s.io" %}} - - ``` - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: view-custom-metrics - rules: - - apiGroups: - - custom.metrics.k8s.io - resources: - - "*" - verbs: - - get - - list - - watch - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: view-custom-metrics - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: view-custom-metrics - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: User - name: system:anonymous - ``` -{{% /accordion %}} -1. Create them in your cluster using one of the follow commands, depending on the metrics you're using. - ``` - # kubectl create -f - # kubectl create -f - ``` - ### Testing HPAs with a Service Deployment -For HPA to work correctly, service deployments should have resources request definitions for containers. Follow this hello-world example to test if HPA is working correctly. +For HPA to work correctly, service deployments should have resources request definitions for containers. Follow this hello-world example to test if HPA is working correctly. 1. Configure kubectl to connect to your Kubernetes cluster. 2. Copy the `hello-world` deployment manifest below. 
{{% accordion id="hello-world" label="Hello World Manifest" %}} - apiVersion: apps/v1beta2 - kind: Deployment - metadata: - labels: - app: hello-world - name: hello-world - namespace: default - spec: - replicas: 1 - selector: - matchLabels: - app: hello-world - strategy: - rollingUpdate: - maxSurge: 1 - maxUnavailable: 0 - type: RollingUpdate - template: - metadata: - labels: - app: hello-world - spec: - containers: - - image: rancher/hello-world - imagePullPolicy: Always - name: hello-world - resources: - requests: - cpu: 500m - memory: 64Mi - ports: - - containerPort: 80 - protocol: TCP - restartPolicy: Always - --- - apiVersion: v1 - kind: Service - metadata: - name: hello-world - namespace: default - spec: - ports: - - port: 80 - protocol: TCP - targetPort: 80 - selector: - app: hello-world +``` +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + labels: + app: hello-world + name: hello-world + namespace: default +spec: + replicas: 1 + selector: + matchLabels: + app: hello-world + strategy: + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + type: RollingUpdate + template: + metadata: + labels: + app: hello-world + spec: + containers: + - image: rancher/hello-world + imagePullPolicy: Always + name: hello-world + resources: + requests: + cpu: 500m + memory: 64Mi + ports: + - containerPort: 80 + protocol: TCP + restartPolicy: Always +--- +apiVersion: v1 +kind: Service +metadata: + name: hello-world + namespace: default +spec: + ports: + - port: 80 + protocol: TCP + targetPort: 80 + selector: + app: hello-world +``` {{% /accordion %}} - - 1. Deploy it to your cluster. ``` # kubectl create -f ``` -1. Copy one of the HPAs below based on the metric type you're using: - {{% accordion id="service-deployment-resource-metrics" label="Hello World HPA: Resource Metrics" %}} - apiVersion: autoscaling/v2beta1 - kind: HorizontalPodAutoscaler - metadata: - name: hello-world - namespace: default - spec: - scaleTargetRef: - apiVersion: extensions/v1beta1 - kind: Deployment - name: hello-world - minReplicas: 1 - maxReplicas: 10 - metrics: - - type: Resource - resource: - name: cpu - targetAverageUtilization: 50 - - type: Resource - resource: - name: memory - targetAverageValue: 1000Mi - {{% /accordion %}} - {{% accordion id="service-deployment-custom-metrics" label="Hello World HPA: Custom Metrics" %}} - apiVersion: autoscaling/v2beta1 - kind: HorizontalPodAutoscaler - metadata: - name: hello-world - namespace: default - spec: - scaleTargetRef: - apiVersion: extensions/v1beta1 - kind: Deployment - name: hello-world - minReplicas: 1 - maxReplicas: 10 - metrics: - - type: Resource - resource: - name: cpu - targetAverageUtilization: 50 - - type: Resource - resource: - name: memory - targetAverageValue: 100Mi - - type: Pods - pods: - metricName: cpu_system - targetAverageValue: 20m - {{% /accordion %}} +1. 
Copy one of the HPAs below based on the metric type you're using: +{{% accordion id="service-deployment-resource-metrics" label="Hello World HPA: Resource Metrics" %}} +``` +apiVersion: autoscaling/v2beta1 +kind: HorizontalPodAutoscaler +metadata: + name: hello-world + namespace: default +spec: + scaleTargetRef: + apiVersion: extensions/v1beta1 + kind: Deployment + name: hello-world + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: Resource + resource: + name: cpu + targetAverageUtilization: 50 + - type: Resource + resource: + name: memory + targetAverageValue: 1000Mi +``` +{{% /accordion %}} +{{% accordion id="service-deployment-custom-metrics" label="Hello World HPA: Custom Metrics" %}} +``` +apiVersion: autoscaling/v2beta1 +kind: HorizontalPodAutoscaler +metadata: + name: hello-world + namespace: default +spec: + scaleTargetRef: + apiVersion: extensions/v1beta1 + kind: Deployment + name: hello-world + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: Resource + resource: + name: cpu + targetAverageUtilization: 50 + - type: Resource + resource: + name: memory + targetAverageValue: 100Mi + - type: Pods + pods: + metricName: cpu_system + targetAverageValue: 20m +``` +{{% /accordion %}} 1. View the HPA info and description. Confirm that metric data is shown. {{% accordion id="hpa-info-resource-metrics" label="Resource Metrics" %}} -1. Enter the following command. +1. Enter the following commands. ``` # kubectl get hpa - ``` - You should receive the output that follows: - ``` NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hello-world Deployment/hello-world 1253376 / 100Mi, 0% / 50% 1 10 1 6m - # kubectl describe hpa + # kubectl describe hpa Name: hello-world Namespace: default Labels: @@ -552,7 +405,7 @@ For HPA to work correctly, service deployments should have resources request def 1. Test that pod autoscaling works as intended.
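The tests below refer to "your load testing tool". If you do not have one handy, a throwaway busybox pod that continuously requests the service is usually enough to push CPU past the target. A minimal sketch (the `hello-world` service and `default` namespace come from the manifest above; the pod name `load-gen` is arbitrary):

```
# Generate continuous HTTP load against the hello-world service.
kubectl run load-gen --image=busybox --restart=Never -- \
  /bin/sh -c "while true; do wget -q -O- http://hello-world.default.svc.cluster.local; done"
```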

**To Test Autoscaling Using Resource Metrics:**
{{% accordion id="observe-upscale-2-pods-cpu" label="Upscale to 2 Pods: CPU Usage Up to Target" %}}
-Use your load testing tool to to scale up to two pods based on CPU Usage.
+Use your load testing tool to scale up to two pods based on CPU usage.

1. View your HPA.
   ```
@@ -671,7 +524,7 @@ Use your load testing to to scale down to 1 pod when all metrics are below targe
   Normal SuccessfulRescale 1s horizontal-pod-autoscaler New size: 1; reason: All metrics below target
   ```
{{% /accordion %}}
-
+
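To observe the scale-down behavior described above, stop generating load and wait out the `horizontal-pod-autoscaler-downscale-delay`. A minimal sketch, assuming the `load-gen` pod from the earlier example:

```
# Remove the load generator so CPU usage falls below the target
# and the HPA can scale the deployment back down.
kubectl delete pod load-gen
```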
**To Test Autoscaling Using Custom Metrics:**
{{% accordion id="custom-observe-upscale-2-pods-cpu" label="Upscale to 2 Pods: CPU Usage Up to Target" %}}
Use your load testing tool to upscale two pods based on CPU usage.
@@ -855,6 +708,8 @@ Use your load testing tool to scale down to one pod when all metrics below targe
 ```
{{% /accordion %}}

+
+
### Conclusion

Horizontal Pod Autoscaling is a great way to automate the number of pods you have deployed for maximum efficiency. You can use it to accommodate deployment scale to real service load and to meet service level agreements.
@@ -863,4 +718,190 @@ By adjusting the `horizontal-pod-autoscaler-downscale-delay` and `horizontal-pod

We've demonstrated how to set up an HPA based on custom metrics provided by Prometheus. We used the `cpu_system` metric as an example, but you can use other metrics that monitor service performance, like `http_request_number`, `http_response_time`, etc.

->**Note:**To facilitate HPA use, we are working to integrate metric-server as an addon on RKE cluster deployments. This feature is included in RKE v0.1.9-rc2 for testing, but is not officially supported as of yet. It would be supported at rke v0.1.9. \ No newline at end of file
+
+### Manual Installation
+
+>**Note:** This is only applicable to clusters created in versions before Rancher v2.0.7.
+
+Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements.
+
+#### Requirements
+
+Be sure that your Kubernetes cluster services are running with these flags at minimum:
+
+- kube-api: `requestheader-client-ca-file`
+- kubelet: `read-only-port` at 10255
+- kube-controller: Optional, only needed if values other than the defaults are required.
+
+  - `horizontal-pod-autoscaler-downscale-delay: "5m0s"`
+  - `horizontal-pod-autoscaler-upscale-delay: "3m0s"`
+  - `horizontal-pod-autoscaler-sync-period: "30s"`
+
+For an RKE Kubernetes cluster definition, add this snippet in the `services` section. To add this snippet using the Rancher v2.0 UI, open the **Clusters** view and select **Ellipsis (...) > Edit** for the cluster in which you want to use HPA. Then, from **Cluster Options**, click **Edit as YAML**. Add the following snippet to the `services` section:
+
+```
+services:
+...
+  kube-api:
+    extra_args:
+      requestheader-client-ca-file: "/etc/kubernetes/ssl/kube-ca.pem"
+  kube-controller:
+    extra_args:
+      horizontal-pod-autoscaler-downscale-delay: "5m0s"
+      horizontal-pod-autoscaler-upscale-delay: "1m0s"
+      horizontal-pod-autoscaler-sync-period: "30s"
+  kubelet:
+    extra_args:
+      read-only-port: 10255
+```
+
+Once the Kubernetes cluster is configured and deployed, you can deploy metrics services.
+
+>**Note:** kubectl command samples in the sections that follow were tested in a cluster running Rancher v2.0.6 and Kubernetes v1.10.1.
+
+#### Configuring HPA to Scale Using Resource Metrics
+
+To create HPA resources based on resource metrics such as CPU and memory use, you need to deploy the `metrics-server` package in the `kube-system` namespace of your Kubernetes cluster. This deployment allows HPA to consume the `metrics.k8s.io` API.
+
+>**Prerequisite:** You must be running kubectl 1.8 or later.
+
+1. Connect to your Kubernetes cluster using kubectl.
+
+1. Clone the GitHub `metrics-server` repo:
+   ```
+   # git clone https://github.com/kubernetes-incubator/metrics-server
+   ```
+
+1. Install the `metrics-server` package.
+   ```
+   # kubectl create -f metrics-server/deploy/1.8+/
+   ```
+
+1. Check that `metrics-server` is running properly.
Check the service pod and logs in the `kube-system` namespace.
+
+    1. Check the service pod for a status of `running`. Enter the following command:
+       ```
+       # kubectl get pods -n kube-system
+       ```
+       Then check for the status of `running`.
+       ```
+       NAME READY STATUS RESTARTS AGE
+       ...
+       metrics-server-6fbfb84cdd-t2fk9 1/1 Running 0 8h
+       ...
+       ```
+    1. Check the service logs for service availability. Enter the following command:
+       ```
+       # kubectl -n kube-system logs metrics-server-6fbfb84cdd-t2fk9
+       ```
+       Then review the log to confirm that the `metrics-server` package is running.
+       {{% accordion id="metrics-server-run-check" label="Metrics Server Log Output" %}}
+       I0723 08:09:56.193136 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
+       I0723 08:09:56.193574 1 heapster.go:72] Metrics Server version v0.2.1
+       I0723 08:09:56.194480 1 configs.go:61] Using Kubernetes client with master "https://10.43.0.1:443" and version
+       I0723 08:09:56.194501 1 configs.go:62] Using kubelet port 10255
+       I0723 08:09:56.198612 1 heapster.go:128] Starting with Metric Sink
+       I0723 08:09:56.780114 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
+       I0723 08:09:57.391518 1 heapster.go:101] Starting Heapster API server...
+       [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
+       [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
+       I0723 08:09:57.394080 1 serve.go:85] Serving securely on 0.0.0.0:443
+       {{% /accordion %}}
+
+
+1. Check that the metrics API is accessible from kubectl.
+
+
+    - If you are accessing the cluster through Rancher, enter your Server URL in the kubectl config in the following format: `https:///k8s/clusters/`. Add the suffix `/k8s/clusters/` to the API path.
+      ```
+      # kubectl get --raw /k8s/clusters//apis/metrics.k8s.io/v1beta1
+      ```
+      If the API is working correctly, you should receive output similar to the output below.
+      ```
+      {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}
+      ```
+
+    - If you are accessing the cluster directly, enter your Server URL in the kubectl config in the following format: `https://:6443`.
+      ```
+      # kubectl get --raw /apis/metrics.k8s.io/v1beta1
+      ```
+      If the API is working correctly, you should receive output similar to the output below.
+      ```
+      {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}
+      ```
+
+#### Assigning Additional Required Roles to Your HPA
+
+By default, HPA reads resource and custom metrics with the user `system:anonymous`. Assign `system:anonymous` to `view-resource-metrics` and `view-custom-metrics` in the ClusterRole and ClusterRoleBindings manifests. These roles are used to access metrics.
+
+To do it, follow these steps:
+
+1. Configure kubectl to connect to your cluster.
+
+1. Copy the ClusterRole and ClusterRoleBinding manifest for the type of metrics you're using for your HPA.
+ {{% accordion id="cluster-role-resource-metrics" label="Resource Metrics: ApiGroups resource.metrics.k8s.io" %}} + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: view-resource-metrics + rules: + - apiGroups: + - metrics.k8s.io + resources: + - pods + - nodes + verbs: + - get + - list + - watch + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: view-resource-metrics + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: view-resource-metrics + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: User + name: system:anonymous + {{% /accordion %}} +{{% accordion id="cluster-role-custom-resources" label="Custom Metrics: ApiGroups custom.metrics.k8s.io" %}} + + ``` + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: view-custom-metrics + rules: + - apiGroups: + - custom.metrics.k8s.io + resources: + - "*" + verbs: + - get + - list + - watch + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: view-custom-metrics + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: view-custom-metrics + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: User + name: system:anonymous + ``` +{{% /accordion %}} +1. Create them in your cluster using one of the follow commands, depending on the metrics you're using. + ``` + # kubectl create -f + # kubectl create -f + ``` + diff --git a/content/rancher/v2.x/en/k8s-in-rancher/kubectl/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/kubectl/_index.md index edd00120150..9064c89c7df 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/kubectl/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/kubectl/_index.md @@ -9,6 +9,10 @@ You can access and manage your Kubernetes clusters using kubectl in two ways: - [Accessing Clusters with kubectl Shell](#accessing-clusters-with-kubectl-shell) - [Accessing Clusters with kubectl CLI and a kubeconfig File]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/kubeconfig/) +## Resources created using kubectl + +Rancher will discover and show resources created by `kubectl`. However, these resources might not have all the necessary annotations on discovery. If an operation (for instance, scaling the workload) is done to the resource using the Rancher UI/API, this may trigger recreation of the resources due to the missing annotations. This should only happen the first time an operation is done to the discovered resource. + ## Accessing Clusters with kubectl Shell You can access and manage your clusters by logging into Rancher and opening the kubectl shell. No further configuration necessary. diff --git a/content/rancher/v2.x/en/k8s-in-rancher/nodes/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/nodes/_index.md index 2dc2d9b142b..32a960e0831 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/nodes/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/nodes/_index.md @@ -15,7 +15,7 @@ To manage individual nodes, browse to the cluster that you want to manage and th The following table lists which node options are available for each [type of cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-options) in Rancher. Click the links in the **Option** column for more detailed information about each feature. 
-| Option | [Node Pool][1] | [Custom Node][2] | [Hosted Cluster][3] | [Imported Nodes][4] | Description |
+| Option | [Nodes Hosted by an Infrastructure Provider][1] | [Custom Node][2] | [Hosted Cluster][3] | [Imported Nodes][4] | Description |
| ------------------------------------------------ | ------------------------------------------------ | ---------------- | ------------------- | ------------------- | ------------------------------------------------------------------ |
| [Cordon](#cordoning-a-node) | ✓ | ✓ | ✓ | | Marks the node as unschedulable. |
| [Drain](#draining-a-node) | ✓ | ✓ | ✓ | | Marks the node as unschedulable _and_ evicts all pods. |
@@ -96,22 +96,22 @@ Select this option to view the node's [API endpoints]({{< baseurl >}}/rancher/v2

Use **Delete** to remove defective nodes from the cloud provider. When you delete a defective node, Rancher automatically replaces it with an identically provisioned node.

->**Tip:** If your cluster is hosted on IaaS nodes, and you want to scale your cluster down instead of deleting a defective node, [scale down](#scaling-nodes) rather than delete.
+>**Tip:** If your cluster is hosted by an infrastructure provider and you want to scale your cluster down instead of deleting a defective node, [scale down](#scaling-nodes) rather than delete.

## Scaling Nodes

-For nodes hosted by an IaaS, you can scale the number of nodes in each node pool by using the scale controls. This option isn't available for other cluster types.
+For nodes hosted by an infrastructure provider, you can scale the number of nodes in each node pool by using the scale controls. This option isn't available for other cluster types.

![Scaling Nodes]({{< baseurl >}}/img/rancher/iaas-scale-nodes.png)

-## Remoting into a Node Pool Node
+## Remoting into a Node Hosted by an Infrastructure Provider

-For [nodes hosted by an IaaS]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/), you have the option of downloading its SSH key so that you can connect to it remotely from your desktop.
+For [a node hosted by an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/), you have the option of downloading its SSH key so that you can connect to it remotely from your desktop.

-1. From the Node Pool cluster, select **Nodes** from the main menu.
+1. From the cluster hosted by an infrastructure provider, select **Nodes** from the main menu.

1. Find the node that you want to remote into. Select **Ellipsis (...) > Download Keys**.
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/workloads/_index.md
index 42ec6482f41..f5da74a4a81 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/workloads/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/workloads/_index.md
@@ -70,9 +70,9 @@ There are several types of services available in Rancher. The descriptions below

This section of the documentation contains instructions for deploying workloads and using workload options.
- - [Deploy Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/) - - [Upgrade Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/) - - [Rollback Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads/) +- [Deploy Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/) +- [Upgrade Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/) +- [Rollback Workloads]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads/) ## Related Links diff --git a/content/rancher/v2.x/en/tools/pipelines/_index.md b/content/rancher/v2.x/en/tools/pipelines/_index.md index c2598d90e9d..ef9e83664ae 100644 --- a/content/rancher/v2.x/en/tools/pipelines/_index.md +++ b/content/rancher/v2.x/en/tools/pipelines/_index.md @@ -8,7 +8,7 @@ aliases: >**Notes:** > >- Pipelines are new and improved for Rancher v2.1! Therefore, if you configured pipelines while using v2.0.x, you'll have to reconfigure them after upgrading to v2.1. ->- Still using v2.0.x? See the pipeline documentation for [previous versions](/rancher/v2.x/en/tools/pipelines/docs-for-v2.0.x). +>- Still using v2.0.x? See the pipeline documentation for [previous versions]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/docs-for-v2.0.x). A _pipeline_ is a software delivery process that is broken into different stages, allowing developers to deliver new software as quickly and efficiently as possible. Within Rancher, you can configure a pipeline for each of your Rancher projects. @@ -73,7 +73,7 @@ When you configure a pipeline in one of your projects, a namespace specifically Minio storage is used to store the logs for pipeline executions. - >**Note:** The managed Jenkins instance works statelessly, so don't worry about its data persistency. The Docker Registry and Minio instances use ephemeral volumes by default, which is fine for most use cases. If you want to make sure pipeline logs can survive node failures, you can configure persistent volumes for them, as described in [data persistency for pipeline components](/rancher/v2.x/en/tools/pipelines/configurations/#data-persistency-for-pipeline-components). + >**Note:** The managed Jenkins instance works statelessly, so don't worry about its data persistency. The Docker Registry and Minio instances use ephemeral volumes by default, which is fine for most use cases. If you want to make sure pipeline logs can survive node failures, you can configure persistent volumes for them, as described in [data persistency for pipeline components]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/configurations/#data-persistency-for-pipeline-components). ## Pipeline Triggers diff --git a/content/rancher/v2.x/en/tools/pipelines/configurations/_index.md b/content/rancher/v2.x/en/tools/pipelines/configurations/_index.md index a4f44a8ac40..5b0fe697b1d 100644 --- a/content/rancher/v2.x/en/tools/pipelines/configurations/_index.md +++ b/content/rancher/v2.x/en/tools/pipelines/configurations/_index.md @@ -71,7 +71,9 @@ Select your provider's tab below and follow the directions. 1. Enable the repository for which you want to run a pipeline. Then click **Done**. ->**Note:** If you use GitLab and your Rancher setup is in a local network, enable the **Allow requests to the local network from hooks and services** option in GitLab admin settings. +>**Note:** +> 1. 
Pipelines use the GitLab [v4 API](https://docs.gitlab.com/ee/api/v3_to_v4.html), and the supported GitLab version is 9.0+.
> 2. If you use GitLab 10.7+ and your Rancher setup is in a local network, enable the **Allow requests to the local network from hooks and services** option in GitLab admin settings.
{{% /tab %}}
{{% /tabs %}}
@@ -108,7 +110,7 @@ The first stage is preserved to be a cloning step that checks out source code fr
{{% /accordion %}}
{{% accordion id="run-script" label="Run Script" %}}

-The **Run Script** step executes arbitrary commands in the workspace inside a specified container. You can use it to build, test and do more, given whatever utilities the base image provides. For your convenience you can use variables to refer to metadata of a pipeline execution. Please go to [reference page](/rancher/v2.x/en/tools/pipelines/reference/#variable-substitution) for the list of available variables.
+The **Run Script** step executes arbitrary commands in the workspace inside a specified container. You can use it to build, test and do more, given whatever utilities the base image provides. For your convenience you can use variables to refer to metadata of a pipeline execution. Please go to the [Pipeline Variable Reference]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/reference/#variable-substitution) for the list of available variables.

{{% tabs %}}
@@ -225,7 +227,7 @@ stages:

Run your pipeline for the first time. From the **Pipeline** tab, find your pipeline and select **Ellipsis (...) > Run**.

-During this initial run, your pipeline is tested, and the following [pipeline components](/Users/markbishop/Documents/GitHub/docs/content/rancher/v2.x/en/tools/pipelines/#how-pipelines-work) are deployed to your project as workloads in a new namespace dedicated to the pipeline:
+During this initial run, your pipeline is tested, and the following [pipeline components]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/) are deployed to your project as workloads in a new namespace dedicated to the pipeline:

- `docker-registry`
- `jenkins`
@@ -235,7 +237,7 @@ This process takes several minutes. When it completes, you can view each pipelin

### 4. Configuring Persistent Data for Pipeline Components

-The internal [Docker registry]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/#reg) and the [Minio]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/#minio) wokrloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.
+The internal [Docker registry]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/#reg) and the [Minio]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/#minio) workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.

Complete both [A—Configuring Persistent Data for Docker Registry](#a—configuring-persistent-data-for-docker-registry) _and_ [B—Configuring Persistent Data for Minio](#b—configuring-persistent-data-for-minio).
diff --git a/content/rancher/v2.x/en/tools/pipelines/docs-for-v2.0.x/_index.md b/content/rancher/v2.x/en/tools/pipelines/docs-for-v2.0.x/_index.md
index 5c14bd17f61..adf02862bde 100644
--- a/content/rancher/v2.x/en/tools/pipelines/docs-for-v2.0.x/_index.md
+++ b/content/rancher/v2.x/en/tools/pipelines/docs-for-v2.0.x/_index.md
@@ -3,7 +3,7 @@ title: v2.0.x Pipeline Documentation
weight: 9000
---

->**Note:** This section describes the pipeline feature as implemented in Rancher v2.0.x. If you are using Rancher v2.1 or later, where pipelines have been significantly improved, please refer to the new documentation for [v2.1 or later](/rancher/v2.x/en/tools/pipelines).
+>**Note:** This section describes the pipeline feature as implemented in Rancher v2.0.x. If you are using Rancher v2.1 or later, where pipelines have been significantly improved, please refer to the new documentation for [v2.1 or later]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines).



diff --git a/content/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/_index.md b/content/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/_index.md
index 1a8cd596d01..04e1b27935d 100644
--- a/content/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/_index.md
+++ b/content/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/_index.md
@@ -12,4 +12,4 @@ To restore Rancher follow the procedure detailed here: [Restoring Backups — Hi

Restoring a snapshot of the Rancher Server cluster will revert Rancher to the version and state at the time of the snapshot.

-> **Note:** Managed cluster are authoritative for their state. This means restoring the rancher server will not revert workload deployments or changes made on managed clusters after the snapshot was taken.
+>**Note:** Managed clusters are authoritative for their state. This means restoring the Rancher server will not revert workload deployments or changes made on managed clusters after the snapshot was taken.
diff --git a/content/rancher/v2.x/en/upgrades/upgrades/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/_index.md
index 6c3db9da3d9..109c96c1bfa 100644
--- a/content/rancher/v2.x/en/upgrades/upgrades/_index.md
+++ b/content/rancher/v2.x/en/upgrades/upgrades/_index.md
@@ -15,12 +15,8 @@ This section contains information about how to upgrade your Rancher server to a
- [Upgrade an Air Gap HA Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm-airgap/)
- [Migrating from an RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/)

-### Upgrading an RKE Add-on Install
-
> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
->If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
-
-- [Upgrading a High Availability Install - RKE Add-On Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade/)
diff --git a/content/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm-airgap/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm-airgap/_index.md index 5095a92343c..3d7d0c93e50 100644 --- a/content/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm-airgap/_index.md +++ b/content/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm-airgap/_index.md @@ -24,18 +24,8 @@ The following instructions will guide you through upgrading a high-availability [Install or update](https://docs.helm.sh/using_helm/#installing-helm) Helm to the latest version. -## Chart Versioning Notes - -Up until the initial helm chart release for v2.1.0, the helm chart version matched the Rancher version (i.e `appVersion`). - -Since there are times where the helm chart will require changes without any changes to the Rancher version, we have moved to a `yyyy.mm.` helm chart version. - -Run `helm search rancher` to view which Rancher version will be launched for the specific helm chart version. - -``` -NAME CHART VERSION APP VERSION DESCRIPTION -rancher-stable/rancher 2018.10.1 v2.1.0 Install Rancher Server to manage Kubernetes clusters acro... -``` +- **Upgrades to v2.0.7+ only: check system namespace locations** + Starting in v2.0.7, Rancher introduced the `system` project, which is a project that's automatically created to store important namespaces that Kubernetes needs to operate. During upgrade to v2.0.7+, Rancher expects these namespaces to be unassigned from all projects. Before beginning upgrade, check your system namespaces to make sure that they're unassigned to [prevent cluster networking issues]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#preventing-cluster-networking-issues). ## Upgrade Rancher @@ -45,17 +35,31 @@ rancher-stable/rancher 2018.10.1 v2.1.0 Install Rancher Serve helm repo update ``` -1. Fetch the latest `rancher-stable/rancher` chart. +2. Get the [repository name that you installed Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories) with. - This will pull down the chart and save it in the current directory as a `.tgz` file. + ``` + helm repo list - ```plain - helm fetch rancher-stable/rancher + NAME URL + stable https://kubernetes-charts.storage.googleapis.com + rancher- https://releases.rancher.com/server-charts/ ``` -1. Render the upgrade template. + > **Note:** If you want to switch to a different Helm chart repository, please follow the [steps on how to switch repositories]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#switching-to-a-different-helm-chart-repository). If you switch repositories, make sure to list the repositories again before continuing onto Step 3 to ensure you have the correct one added. - Use the same `--set` values you used for the install. Remember to set the `--is-upgrade` flag for `helm`. This will create a `rancher` directory with the Kubernetes manifest files. + +3. Fetch the latest chart to install Rancher from the Helm chart repository. + + This command will pull down the latest chart and save it in the current directory as a `.tgz` file. Replace `` with the name of the repository name that was listed (i.e. `latest` or `stable`). + + + ```plain + helm fetch rancher-/rancher + ``` + +3. Render the upgrade template. + + Use the same `--set` values that you used for the install. Remember to set the `--is-upgrade` flag for `helm`. This will create a `rancher` directory with the Kubernetes manifest files. 
```plain
helm template ./rancher-.tgz --output-dir . --is-upgrade \
@@ -64,7 +68,7 @@ rancher-stable/rancher 2018.10.1 v2.1.0 Install Rancher Serve
--set rancherImage=/rancher/rancher
```

-1. Copy and apply the rendered manifests.
+5. Copy and apply the rendered manifests.

    Copy the files to a server with access to the Rancher server cluster and apply the rendered templates.

```plain
kubectl -n cattle-system apply -R -f ./rancher
```

+**Result:** Rancher is upgraded. Log back into Rancher to confirm that the upgrade succeeded.
+
+>**Having Network Issues Following Upgrade?**
+>
+> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking).
+
## Rolling Back

Should something go wrong, follow the [HA Rollback]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/) instructions to restore the snapshot you took before you performed the upgrade.
diff --git a/content/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/_index.md
index a9fcaf3eb97..0bcb9627e52 100644
--- a/content/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/_index.md
+++ b/content/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/_index.md
@@ -10,10 +10,7 @@ The following instructions will guide you through upgrading a high-availability
>* [Migrating from RKE Add-On Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on)
>
> As of release v2.0.8, Rancher supports installation and upgrade by Helm chart, although RKE installs/upgrades are still supported as well. If you want to change upgrade method from RKE Add-on to Helm chart, follow this procedure.
->
->* [High Availability (HA) Upgrade - RKE Add-On Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/ha-server-upgrade)
->
-> If you want to continue using RKE for upgrades, follow this procedure.
+

## Prerequisites

@@ -37,19 +34,8 @@ The following instructions will guide you through upgrading a high-availability
```
helm init --upgrade --service-account tiller
```
-
-## Chart Versioning Notes
-
-Up until the initial helm chart release for v2.1.0, the helm chart version matched the Rancher version (i.e `appVersion`).
-
-Since there are times where the helm chart will require changes without any changes to the Rancher version, we have moved to a `yyyy.mm.` helm chart version.
-
-Run `helm search rancher` to view which Rancher version will be launched for the specific helm chart version.
-
-```
-NAME CHART VERSION APP VERSION DESCRIPTION
-rancher-stable/rancher 2018.10.1 v2.1.0 Install Rancher Server to manage Kubernetes clusters acro...
-```
+- **Upgrades to v2.0.7+ only: check system namespace locations**
+  Starting in v2.0.7, Rancher introduced the `System` project, which is a project that's automatically created to store important namespaces that Kubernetes needs to operate. During upgrade to v2.0.7+, Rancher expects these namespaces to be unassigned from all projects. Before beginning upgrade, check your system namespaces to make sure that they're unassigned to [prevent cluster networking issues]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#preventing-cluster-networking-issues).

## Upgrade Rancher

@@ -61,7 +47,19 @@ rancher-stable/rancher 2018.10.1 v2.1.0 Install Rancher Serve
helm repo update
```

-2. Get the set values from current Rancher release.
+2. 
Get the [repository name that you installed Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories) with.
+
+    ```
+    helm repo list
+
+    NAME                  URL
+    stable                https://kubernetes-charts.storage.googleapis.com
+    rancher-<CHART_REPO>  https://releases.rancher.com/server-charts/<CHART_REPO>
+    ```
+
+    > **Note:** If you want to switch to a different Helm chart repository, please follow the [steps on how to switch repositories]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#switching-to-a-different-helm-chart-repository). If you switch repositories, make sure to list the repositories again before continuing on to Step 3 to ensure you have the correct one added.
+
+3. Get the set values from the current Rancher install.

    ```
    helm get values rancher

    hostname: rancher.my.org
    ```

    > **Note:** There may be more values that are listed with this command depending on which [SSL configuration option you selected]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/#choose-your-ssl-configuration) when installing Rancher.

-3. Take all values from the previous command and use `helm` with `--set` options to upgrade Rancher to the latest version.
+4. Upgrade Rancher to the latest version based on values from the previous steps.
+
+    - Replace `<CHART_REPO>` with the repository name that was listed (i.e. `latest` or `stable`).
+    - Take all the values from the previous step and append them to the command using `--set key=value` (see the fuller sketch below).

    ```
-    helm upgrade rancher rancher-stable/rancher --set hostname=rancher.my.org
+    helm upgrade rancher rancher-<CHART_REPO>/rancher --set hostname=rancher.my.org
    ```

-    > **Important:** For any values listed from Step 2, you must use `--set key=value` to apply the same values to the helm chart.
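+    As a fuller sketch, suppose the `helm get values rancher` output from Step 3 also listed `ingress.tls.source: secret` and `privateCA: true` (hypothetical values for an install that brings its own certificates signed by a private CA). Each of those values would be appended as its own `--set` flag:
+
+    ```
+    # the extra values below are illustrative; use the output of your own `helm get values rancher`
+    helm upgrade rancher rancher-<CHART_REPO>/rancher \
+      --set hostname=rancher.my.org \
+      --set ingress.tls.source=secret \
+      --set privateCA=true
+    ```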
+**Result:** Rancher is upgraded. Log back into Rancher to confirm that the upgrade succeeded.
+
+>**Having Network Issues Following Upgrade?**
+>
+> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking).
 
 ## Rolling Back
 
diff --git a/content/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade/_index.md
deleted file mode 100644
index c7686e44015..00000000000
--- a/content/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade/_index.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-title: HA Upgrade - RKE Add-on
-weight: 1040
-aliases:
-  - /rancher/v2.x/en/upgrades/ha-server-upgrade/
----
-
-> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
->
->Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
->
->If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
-
-This document is for upgrading Rancher HA installed with the RKE Add-On yaml. See these docs to migrate to or upgrade Rancher installed with the Helm chart.
-
-* [Migrating from a High Availability RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/)
-* [High Availability (HA) Upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/)
-
->**Prerequisites:**
-{{< requirements_rollback >}}
-
->- Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your workstation.
->- Confirm that the following path exists on your workstation: `~/.kube/`. If it doesn't, create it yourself.
->- Copy `kube_config_rancher-cluster.yml`, which is automatically generated after [Rancher Server installation]({{< baseurl >}}/rancher/v2.x/en/installation/ha-server-install#part-11-backup-kube-config-rancher-cluster-yml), to the `~/.kube/` directory.
-
-1. From your workstation, open **Terminal**.
-
-1. Change directory to the location of the RKE binary. Your `rancher-cluster.yml` file must reside in the same directory.
-
-   >**Want records of all transactions with the Rancher API?**
-   >
-   >Enable the [API Auditing]({{< baseurl >}}/rancher/v2.x/en/installation/api-auditing) feature by editing your RKE config file (`rancher-cluster.yml`). For more information, see how to enable it in [your RKE config file]({{< baseurl >}}/rancher/v2.x/en/installation/ha/rke-add-on/api-auditing/).
-
-1. Enter the following command. Replace `` with any name that you want to use for the snapshot (e.g. `upgrade.db`).
-
-   ```
-   rke etcd snapshot-save --name --config rancher-cluster.yml
-   ```
-
-   **Result:** RKE takes a snapshot of `etcd` running on each `etcd` node. The file is saved to `/opt/rke/etcd-snapshots`.
-
-1. Enter the following command:
-
-   ```
-kubectl --kubeconfig=kube_config_rancher-cluster.yml set image deployment/cattle cattle-server=rancher/rancher: -n cattle-system
-   ```
-   Replace `` with the version that you want to upgrade to. For a list of tags available, see the [Rancher Forum Announcements](https://forums.rancher.com/c/announcements).
-
-   **Step Result:** The upgrade begins. Rancher Server may be unavailable for a few minutes.
-
-1. Log into Rancher. Confirm that the upgrade succeeded by checking the version displayed in the bottom-left corner of the browser window.
-
-**Result:** Your Rancher Servers are upgraded.
-
->**Upgrade Issues?** You can restore your Rancher Server and data that was running prior to upgrade. For more information, see [Rolling Back—High Availability Installs]({{< baseurl >}}/rancher/v2.x/en/backups/rollbacks/ha-server-rollbacks).
diff --git a/content/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/_index.md
index 5bf1119c27a..1ee9b871021 100644
--- a/content/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/_index.md
+++ b/content/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/_index.md
@@ -1,8 +1,16 @@
 ---
 title: Migrating from an HA RKE Add-on Install
 weight: 1030
+aliases:
+  - /rancher/v2.x/en/upgrades/ha-server-upgrade/
+  - /rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade/
 ---
 
+> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
+>
+>If you are currently using the RKE add-on install method, please follow these directions to migrate to the Helm install.
+
+
 The following instructions will guide you through migrating from the RKE Add-on install to managing Rancher with the Helm package manager.
You will need to have [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) installed and the `kube_config_rancher-cluster.yml` credentials file generated by RKE.
diff --git a/content/rancher/v2.x/en/upgrades/upgrades/namespace-migration/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/namespace-migration/_index.md
new file mode 100644
index 00000000000..b51b4bd695a
--- /dev/null
+++ b/content/rancher/v2.x/en/upgrades/upgrades/namespace-migration/_index.md
@@ -0,0 +1,157 @@
+---
+title: Upgrading to v2.0.7+ — Namespace Migration
+weight:
+aliases:
+---
+>This section applies only to Rancher upgrades from v2.0.6 or earlier to v2.0.7 or later. Upgrades from v2.0.7 to later versions are unaffected.
+
+In Rancher v2.0.6 and prior, system namespaces crucial for Rancher and Kubernetes operations were not assigned to any Rancher project by default. Instead, these namespaces existed independently from all Rancher projects, but you could move these namespaces into any project without affecting cluster operations.
+
+These namespaces include:
+
+- `kube-system`
+- `kube-public`
+- `cattle-system`
+- `cattle-alerting`<sup>1</sup>
+- `cattle-logging`<sup>1</sup>
+- `cattle-pipeline`<sup>1</sup>
+- `ingress-nginx`
+
+><sup>1</sup> Only displays if this feature is enabled for the cluster.
+
+However, with the release of Rancher v2.0.7, the `System` project was introduced. This project, which is automatically created during the upgrade, is assigned the system namespaces above to hold these crucial components for safekeeping.
+
+During upgrades from Rancher v2.0.6 (or earlier) to Rancher v2.0.7+, all system namespaces are moved from their default location outside of all projects into the newly created `System` project. However, if you assigned any of your system namespaces to a project before upgrading, your cluster networking may encounter issues afterwards. This issue occurs because the system namespaces are not where the upgrade process expects them to be, so it cannot move them into the `System` project.
+
+- To prevent this issue from occurring before the upgrade, see [Preventing Cluster Networking Issues](#preventing-cluster-networking-issues).
+- To fix this issue following upgrade, see [Restoring Cluster Networking](#restoring-cluster-networking).
+
+## Preventing Cluster Networking Issues
+
+You can prevent cluster networking issues from occurring during your upgrade to v2.0.7+ by unassigning system namespaces from all of your Rancher projects. Complete this task if you've assigned any of a cluster's system namespaces to a Rancher project.
+
+1. Log into the Rancher UI prior to upgrade.
+
+1. From the context menu, open the **local** cluster (or any of your other clusters).
+
+1. From the main menu, select **Project/Namespaces**.
+
+1. Find and select the following namespaces. Click **Move** and then choose **None** to move them out of your projects. Click **Move** again.
+
+    >**Note:** Some or all of these namespaces may already be unassigned from all projects.
+
+    - `kube-system`
+    - `kube-public`
+    - `cattle-system`
+    - `cattle-alerting`<sup>1</sup>
+    - `cattle-logging`<sup>1</sup>
+    - `cattle-pipeline`<sup>1</sup>
+    - `ingress-nginx`
+
+    ><sup>1</sup> Only displays if this feature is enabled for the cluster.
+
Moving namespaces out of projects
+
+    ![Moving Namespaces]({{< baseurl >}}/img/rancher/move-namespaces.png)
+
+1. Repeat these steps for each cluster where you've assigned system namespaces to projects.
+
+**Result:** All system namespaces are moved out of Rancher projects. You can now safely begin the [upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades).
+
+## Restoring Cluster Networking
+
+Reset your clusters' network policies to restore connectivity.
+
+>**Prerequisites:**
+>
+>Download and setup [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
+
+{{% tabs %}}
+{{% tab "HA Install" %}}
+1. From **Terminal**, change to the directory that contains `kube_config_rancher-cluster.yml`, the kubeconfig file generated during Rancher install. This file is usually in the directory where you ran RKE during Rancher installation.
+
+1. Before repairing networking, run the following two commands to make sure that your nodes have a status of `Ready` and that your cluster components are `Healthy`.
+
+    ```
+    kubectl get nodes --kubeconfig kube_config_rancher-cluster.yml
+
+    NAME              STATUS    ROLES                      AGE   VERSION
+    165.227.114.63    Ready     controlplane,etcd,worker   11m   v1.10.1
+    165.227.116.167   Ready     controlplane,etcd,worker   11m   v1.10.1
+    165.227.127.226   Ready     controlplane,etcd,worker   11m   v1.10.1
+
+    kubectl get cs --kubeconfig kube_config_rancher-cluster.yml
+
+    NAME                 STATUS    MESSAGE              ERROR
+    scheduler            Healthy   ok
+    controller-manager   Healthy   ok
+    etcd-0               Healthy   {"health": "true"}
+    etcd-2               Healthy   {"health": "true"}
+    etcd-1               Healthy   {"health": "true"}
+    ```
+
+1. Check the `networkPolicy` for all clusters by running the following command.
+
+    ```
+    kubectl --kubeconfig kube_config_rancher-cluster.yml get cluster -o=custom-columns=ID:.metadata.name,NAME:.spec.displayName,NETWORKPOLICY:.spec.enableNetworkPolicy
+
+    ID        NAME     NETWORKPOLICY
+    c-59ptz   custom   <none>
+    local     local    <none>
+    ```
+
+1. Disable the `networkPolicy` for all clusters, still pointing toward your `kube_config_rancher-cluster.yml`.
+
+    ```
+    kubectl --kubeconfig kube_config_rancher-cluster.yml get cluster -o jsonpath='{range .items[*]}{@.metadata.name}{"\n"}{end}' | xargs -I {} kubectl --kubeconfig kube_config_rancher-cluster.yml patch cluster {} --type merge -p '{"spec": {"enableNetworkPolicy": false}}'
+    ```
+
+    >**Tip:** If you want to keep `networkPolicy` enabled for all created clusters, you can run the following command instead to disable `networkPolicy` for only the `local` cluster (i.e., your Rancher Server nodes):
+    >
+    >```
+    >kubectl --kubeconfig kube_config_rancher-cluster.yml patch cluster local --type merge -p '{"spec": {"enableNetworkPolicy": false}}'
+    >```
+
+1. Check the `networkPolicy` for all clusters again to make sure the policies have a status of `false`.
+
+    ```
+    kubectl --kubeconfig kube_config_rancher-cluster.yml get cluster -o=custom-columns=ID:.metadata.name,NAME:.spec.displayName,NETWORKPOLICY:.spec.enableNetworkPolicy
+
+    ID        NAME     NETWORKPOLICY
+    c-59ptz   custom   false
+    local     local    false
+    ```
+
+1. Now remove all network policies from the system namespaces. Run this command for each cluster, using the kubeconfig generated by RKE.
+
+    ```
+    for namespace in kube-system kube-public cattle-system cattle-alerting cattle-logging cattle-pipeline ingress-nginx; do
+        kubectl --kubeconfig kube_config_rancher-cluster.yml -n $namespace delete networkpolicy --all;
+    done
+    ```
+
+1. Wait a few minutes and then log into the Rancher UI.
+
+    - If you can access Rancher, you're done, so you can skip the rest of the steps.
+    - If you still can't access Rancher, complete the steps below.
+
+1. 
Force the Rancher pods to be recreated by entering the following command.
+
+    ```
+    kubectl --kubeconfig kube_config_rancher-cluster.yml delete pods -n cattle-system --all
+    ```
+
+1. Log into the Rancher UI and view your clusters. Created clusters will show errors from attempting to contact Rancher while it was unavailable. However, these errors should resolve automatically.
+
+{{% /tab %}}
+{{% tab "Rancher Launched Kubernetes" %}}
+
+If you can access Rancher, but one or more of the clusters that you launched using Rancher has no networking, you can repair them by removing all network policies from the system namespaces. You can run the command below in either of the following ways:
+
+- From the cluster's [embedded kubectl shell]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-shell). In this case, omit the `--kubeconfig` option.
+- By [downloading the cluster kubeconfig file]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-and-a-kubeconfig-file) and running the command from your workstation. In this case, point `--kubeconfig` at the downloaded file instead of `kube_config_rancher-cluster.yml`.
+
+    ```
+    for namespace in kube-system kube-public cattle-system cattle-alerting cattle-logging cattle-pipeline ingress-nginx; do
+        kubectl --kubeconfig kube_config_rancher-cluster.yml -n $namespace delete networkpolicy --all;
+    done
+    ```
+
+{{% /tab %}}
+{{% /tabs %}}
+
+
diff --git a/content/rancher/v2.x/en/upgrades/upgrades/single-node-air-gap-upgrade/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/single-node-air-gap-upgrade/_index.md
index 90033c3444a..726f9de0ad5 100644
--- a/content/rancher/v2.x/en/upgrades/upgrades/single-node-air-gap-upgrade/_index.md
+++ b/content/rancher/v2.x/en/upgrades/upgrades/single-node-air-gap-upgrade/_index.md
@@ -6,6 +6,9 @@ aliases:
 ---
 To upgrade an air gapped Rancher Server, update your private registry with the latest Docker images, and then run the upgrade command.
 
+## Prerequisites
+**Upgrades to v2.0.7+ only:** Starting in v2.0.7, Rancher introduced the `System` project, which is a project that's automatically created to store important namespaces that Kubernetes needs to operate. During upgrade to v2.0.7+, Rancher expects these namespaces to be unassigned from all projects. Before beginning upgrade, check your system namespaces to make sure that they're unassigned to [prevent cluster networking issues]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#preventing-cluster-networking-issues).
+
 ## Upgrading An Air Gapped Rancher Server
 
 1. Follow the directions in Air Gap Installation to [pull the Docker images]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/#release-files) required for the new version of Rancher.
@@ -16,3 +19,12 @@ To upgrade an air gapped Rancher Server, update your private registry with the l
 > While completing [Single Node Upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/single-node-upgrade/), prepend your private registry URL to the image when running the `docker run` command.
 >
 > Example: `<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:latest`
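+
+For instance, assuming a hypothetical private registry at `registry.yourdomain.com:5000` and the `rancher-data` data container used in the Single Node Upgrade steps, the full upgrade command might look like this sketch:
+
+```
+# registry host below is illustrative; substitute your own private registry
+docker run -d --volumes-from rancher-data --restart=unless-stopped \
+  -p 80:80 -p 443:443 \
+  registry.yourdomain.com:5000/rancher/rancher:latest
+```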
+**Result:** Rancher is upgraded. Log back into Rancher to confirm that the upgrade succeeded.
+
+>**Having Network Issues Following Upgrade?**
+>
+> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking).
+
+## Rolling Back
+If your upgrade does not complete successfully, you can roll Rancher Server and its data back to its last healthy state. For more information, see [Single Node Rollback]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/).
diff --git a/content/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/_index.md b/content/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/_index.md
index 6061a7a0dd8..87ae47154a5 100644
--- a/content/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/_index.md
+++ b/content/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/_index.md
@@ -27,11 +27,14 @@ Cross reference the image and reference table below to learn how to obtain this
 | `<RANCHER_CONTAINER_TAG>` | `v2.0.5` | The rancher/rancher image you pulled for initial install. |
 | `<RANCHER_CONTAINER_NAME>` | `festive_mestorf` | The name of your Rancher container. |
 | `<RANCHER_VERSION>` | `v2.0.5` | The version of Rancher that you're creating a backup for. |
-| `<DATE>` | `9-27-18` | The date that the data container or backup was created. |
+| `<DATE>` | `9-27-18` | The date that the data container or backup was created. |
 
 You can obtain `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>` by logging into your Rancher Server remotely and running `docker ps` to view the containers that are running. You can also view containers that are stopped with `docker ps -a`. Use these commands for help at any time while creating backups.
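+
+For example, hypothetical `docker ps` output like the following would give you a `<RANCHER_CONTAINER_TAG>` of `v2.0.5` (from the `IMAGE` column) and a `<RANCHER_CONTAINER_NAME>` of `festive_mestorf` (from the `NAMES` column); the container ID shown is illustrative:
+
+```
+docker ps
+
+CONTAINER ID   IMAGE                    COMMAND           CREATED      STATUS      NAMES
+31aa94998b75   rancher/rancher:v2.0.5   "entrypoint.sh"   5 days ago   Up 5 days   festive_mestorf
+```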
+## Prerequisites
+**Upgrades to v2.0.7+ only:** Starting in v2.0.7, Rancher introduced the `System` project, which is a project that's automatically created to store important namespaces that Kubernetes needs to operate. During upgrade to v2.0.7+, Rancher expects these namespaces to be unassigned from all projects. Before beginning upgrade, check your system namespaces to make sure that they're unassigned to [prevent cluster networking issues]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#preventing-cluster-networking-issues).
+
 ## Completing the Upgrade
 
 During upgrade, you create a copy of the data from your current Rancher container and a backup in case something goes wrong. Then you deploy the new version of Rancher in a new container using your existing data.
@@ -88,7 +91,7 @@ During upgrade, you create a copy of the data from your current Rancher containe
    docker run -d --volumes-from rancher-data --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest
    ```
 
-   >**Want records of all transactions with the Rancher API?**
+   >**Want records of all transactions with the Rancher API?**
    >
    >Enable the [API Auditing]({{< baseurl >}}/rancher/v2.x/en/installation/api-auditing) feature by adding the flags below into your upgrade command.
    >```
   -e AUDIT_LEVEL=1 \
   -e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
   -e AUDIT_LOG_MAXAGE=20 \
   -e AUDIT_LOG_MAXBACKUP=20 \
   -e AUDIT_LOG_MAXSIZE=100 \
   ```
-
+
   >**Note:** _Do not_ stop the upgrade after initiating it, even if the upgrade process seems longer than expected. Stopping the upgrade may result in database migration errors during future upgrades.
   >
>
@@ -112,6 +115,12 @@ During upgrade, you create a copy of the data from your current Rancher containe If you only stop the previous Rancher Server container (and don't remove it), the container may restart after the next server reboot. -**Result:** Rancher Server is upgraded to the latest version. +**Result:** Rancher is upgraded. Log back into Rancher to confirm that the upgrade succeeded. ->**Note:** If your upgrade does not complete successfully, you can roll Rancher Server and its data back to its last healthy state. For more information, see [Single Node Rollback]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/). +>**Having Network Issues Following Upgrade?** +> +> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking). + +## Rolling Back + +If your upgrade does not complete successfully, you can roll Rancher Server and its data back to its last healthy state. For more information, see [Single Node Rollback]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/). diff --git a/content/rancher/v2.x/en/user-settings/node-templates/_index.md b/content/rancher/v2.x/en/user-settings/node-templates/_index.md index fdc48f6e6b1..24d0d604602 100644 --- a/content/rancher/v2.x/en/user-settings/node-templates/_index.md +++ b/content/rancher/v2.x/en/user-settings/node-templates/_index.md @@ -3,7 +3,7 @@ title: Managing Node Templates weight: 7010 --- -When you provision a [node pool]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools) cluster, [node templates]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. You can create node templates in two contexts: +When you provision a cluster [hosted by an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools), [node templates]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node. You can create node templates in two contexts: - While [provisioning a node pool cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools). - At any time, from your [user settings](#creating-a-node-template-from-user-settings). diff --git a/content/rancher/v2.x/en/v1.6-migration/_index.md b/content/rancher/v2.x/en/v1.6-migration/_index.md index 81c7a28246f..0ae95ff40b3 100644 --- a/content/rancher/v2.x/en/v1.6-migration/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/_index.md @@ -46,14 +46,13 @@ More detailed information on Kubernetes concepts can be found in the - [1. Get Started](#1-get-started) -- [2. Migrate Applications](#2-migrate-applications) -- [3. Expose Your Services](#3-expose-your-services) -- [4. Monitor Your Applications](#4-monitor-your-applications) -- [5. Schedule Deployments](#5-schedule-deployments) -- [6. Service Discovery](#6-service-discovery) -- [7. Load Balancing](#7-load-balancing) - - +- [2. Run Migration Tools](#2-run-migration-tools) +- [3. Migrate Applications](#3-migrate-applications) +- [4. Expose Your Services](#4-expose-your-services) +- [5. 
Monitor Your Applications](#5-monitor-your-applications)
+- [6. Schedule Deployments](#6-schedule-deployments)
+- [7. Service Discovery](#7-service-discovery)
+- [8. Load Balancing](#8-load-balancing)
 
 
 
@@ -63,7 +62,79 @@ As a Rancher 1.6 user who's interested in moving to 2.0, how should you get star
 
 Blog Post: [Migrating from Rancher 1.6 to Rancher 2.0—A Short Checklist](https://rancher.com/blog/2018/2018-08-09-migrate-1dot6-setup-to-2dot0/)
 
-## 2. Migrate Applications
+## 2. Run Migration Tools
+
+To help with migration from 1.6 to 2.0, Rancher has developed a migration tool. Running this tool will help you check if your Rancher 1.6 applications can be migrated to 2.0. If an application can't be migrated, the tool will help you identify what's lacking.
+
+This tool will:
+
+- Accept Docker Compose config files (i.e., `docker-compose.yml` and `rancher-compose.yml`) that you've exported from your Rancher 1.6 Stacks.
+- Output a list of constructs present in the Compose files that cannot be supported by Kubernetes in Rancher 2.0. These constructs require special handling or are parameters that cannot be converted to Kubernetes YAML, even using tools like Kompose.
+
+### A. Download the Migration Tool
+
+The Migration Tool for your platform can be downloaded from its [GitHub releases page](https://github.com/rancher/migration-tools/releases). The tool is available for Linux, Mac, and Windows platforms.
+
+
+### B. Configure the Migration Tool
+
+After the tool is downloaded, you need to complete a few configuration steps before running it.
+
+1. Modify the Migration Tool file to make it an executable.
+
+    1. Open Terminal and change to the directory that contains the Migration Tool file.
+
+    1. Rename the Migration Tool file to `migration-tools` so that it no longer includes the platform name.
+
+    1. Enter the following command to make `migration-tools` an executable:
+
+        ```
+        chmod +x migration-tools
+        ```
+1. Export the configuration for each Rancher 1.6 Stack that you want to migrate to 2.0.
+
+    1. Log into Rancher 1.6 and select **Stacks > All**.
+
+    1. From the **All Stacks** page, select **Ellipsis (...) > Export Config** for each Stack that you want to migrate.
+
+    1. Extract the downloaded `compose.zip`. Move the folder contents (`docker-compose.yml` and `rancher-compose.yml`) into the same directory as `migration-tools`.
+
+### C. Run the Migration Tool
+
+To use the Migration Tool, run the command below while pointing to the compose files exported from each stack that you want to migrate. If you want to migrate multiple stacks, you'll have to re-run the command for each pair of compose files that you exported.
+
+#### Usage
+
+You can run the Migration Tool by entering the following command, replacing each placeholder with the absolute path to your Stack's compose files.
+
+```
+migration-tools --docker-file <DOCKER_COMPOSE_FILE> --rancher-file <RANCHER_COMPOSE_FILE>
+```
+
+#### Options
+
+When using the Migration Tool, you can specify the paths to your Docker and Rancher compose files, regardless of where they are on your file system. A worked invocation follows the table below.
+
+| Option | Description |
+| ---------------------- | -------------------------------------------------------------------------------------- |
+| `--docker-file <DOCKER_COMPOSE_FILE>` | The absolute path to an exported Docker compose file (default value: `docker-compose.yml`)<sup>1</sup>. |
+| `--rancher-file <RANCHER_COMPOSE_FILE>` | The absolute path to an alternate Rancher compose file (default value: `rancher-compose.yml`)<sup>1</sup>. |
+| `--help, -h` | Lists usage for the Migration Tool. |
+| `--version, -v` | Lists the version of the Migration Tool in use. |
+
+><sup>1</sup> If you omit the `--docker-file` and `--rancher-file` options from your command, the migration tool will check its home directory for compose files.
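+
+For instance, assuming a Stack's exported files live at the hypothetical path `/home/user/export/` and `migration-tools` is on your `PATH`, an invocation might look like:
+
+```
+# paths below are illustrative; use the locations of your own exported compose files
+migration-tools --docker-file /home/user/export/docker-compose.yml --rancher-file /home/user/export/rancher-compose.yml
+```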
+
+#### Output
+
+After you run the migration tool, the following files are output to the same directory from which you ran the tool.
+
+
+| Output | Description |
+| --------------------- | ------------- |
+| `output.txt` | This file lists all constructs for each service in `docker-compose.yml` that require special handling to be successfully migrated to Rancher 2.0. Each construct links to the relevant blog articles on how to implement it in Rancher 2.0 (these articles are also listed below). |
+| Kubernetes YAML specs | The Migration Tool internally invokes [Kompose](https://github.com/kubernetes/kompose) to generate Kubernetes YAML specs for each service you're migrating to 2.0. Each YAML spec file is named for the service you're migrating. |
+
+## 3. Migrate Applications
 
 In Rancher 1.6, you launch applications as _services_ and organize them under _stacks_ in an _environment_, which represents a compute and administrative boundary. Rancher 1.6 supports the Docker compose standard and provides import/export for application configurations using the following files: `docker-compose.yml` and `rancher-compose.yml`. In 2.0 the environment concept doesn't exist. Instead it's replaced by:
@@ -74,31 +145,31 @@ The following article explores how to map Cattle's stack and service design to K
 
 Blog Post: [A Journey from Cattle to Kubernetes!](https://rancher.com/blog/2018/2018-08-02-journey-from-cattle-to-k8s/)
 
-## 3. Expose Your Services
+## 4. Expose Your Services
 
 In Rancher 1.6, you could provide external access to your applications using port mapping. This article explores how to publicly expose your services in Rancher 2.0. It explores both UI and CLI methods to transition the port mapping functionality.
 
 Blog Post: [From Cattle to Kubernetes—How to Publicly Expose Your Services in Rancher 2.0](https://rancher.com/blog/2018/expose-and-monitor-workloads/)
 
-## 4. Monitor Your Applications
+## 5. Monitor Your Applications
 
 Rancher 1.6 provided TCP and HTTP healthchecks using its own healthcheck microservice. Rancher 2.0 uses native Kubernetes healthcheck support instead. This article overviews how to configure it in Rancher 2.0.
 
 Blog Post: [From Cattle to Kubernetes—Application Healthchecks in Rancher 2.0](https://rancher.com/blog/2018/2018-08-22-k8s-monitoring-and-healthchecks/)
 
-## 5. Schedule Deployments
+## 6. Schedule Deployments
 
 Scheduling application containers on available resources is a key container orchestration technique. The following blog reviews how to schedule containers in Rancher 2.0 for those familiar with 1.6 scheduling labels (such as affinity and anti-affinity). It also explores how to launch a global service in 2.0.
 
 Blog Post: [From Cattle to Kubernetes—Scheduling Workloads in Rancher 2.0](https://rancher.com/blog/2018/2018-08-29-scheduling-options-in-2-dot-0/)
 
-## 6. Service Discovery
+## 7. Service Discovery
 
 Rancher 1.6 provides service discovery within and across stacks using its own internal DNS microservice. It also supports pointing to external services and creating aliases.
Moving to Rancher 2.0, you can replicate this same service discovery behavior. The following blog reviews this topic and the solutions needed to achieve service discovery parity in Rancher 2.0.
 
 Blog Post: [From Cattle to Kubernetes—Service Discovery in Rancher 2.0](https://rancher.com/blog/2018/2018-09-04-service_discovery_2dot0/)
 
-## 7. Load Balancing
+## 8. Load Balancing
 
 How to achieve TCP/HTTP load balancing and configure hostname/path-based routing in Rancher 2.0.
 
@@ -106,3 +177,4 @@ Blog Post: [From Cattle to Kubernetes-How to Load Balance Your Services in Ranch
 
 In Rancher 1.6, a Load Balancer was used to expose your applications from within the Rancher environment for external access. In Rancher 2.0, the concept is the same. There is a Load Balancer option to expose your services. In the language of Kubernetes, this function is more often referred to as an **Ingress**. In short, Load Balancer and Ingress play the same role.
+
diff --git a/content/rke/v0.1.x/en/config-options/_index.md b/content/rke/v0.1.x/en/config-options/_index.md
index 8baf72e8d00..3830db72db5 100644
--- a/content/rke/v0.1.x/en/config-options/_index.md
+++ b/content/rke/v0.1.x/en/config-options/_index.md
@@ -44,7 +44,7 @@ cluster_name: mycluster
 
 ### Supported Docker Versions
 
-By default, RKE will check the installed Docker version on all hosts and fail with an error if the version is not supported by Kubernetes. The list of [supported Docker versions](https://github.com/rancher/rke/blob/master/docker/docker.go#L29) are set specifically for each Kubernetes version. To override this behavior, set this option to `true`.
+By default, RKE will check the installed Docker version on all hosts and fail with an error if the version is not supported by Kubernetes. The list of [supported Docker versions](https://github.com/rancher/rke/blob/master/docker/docker.go#L37-L41) is set specifically for each Kubernetes version. To override this behavior, set this option to `true`.
 
 The default value is `false`.
 
diff --git a/content/rke/v0.1.x/en/config-options/cloud-providers/vsphere/_index.md b/content/rke/v0.1.x/en/config-options/cloud-providers/vsphere/_index.md
index fdfdddcac7c..9a79826b2df 100644
--- a/content/rke/v0.1.x/en/config-options/cloud-providers/vsphere/_index.md
+++ b/content/rke/v0.1.x/en/config-options/cloud-providers/vsphere/_index.md
@@ -28,7 +28,7 @@ When provisioning clusters in Rancher using the [vSphere node driver]({{< baseur
 5. Assign **Member Roles** as required.
 6. Expand **Cluster Options** and configure as required.
 7. Set **Cloud Provider** option to `Custom`.
-    
+
     ![vsphere-node-driver-cloudprovider]({{< baseurl >}}/img/rancher/vsphere-node-driver-cloudprovider.png)
 
 8. Click on **Edit as YAML**
@@ -149,7 +149,7 @@ The following configuration options are available under the disk directive:
 
 ___
 
-### network 
+### network
 
 The following configuration options are available under the network directive:
 
@@ -184,7 +184,7 @@ cloud_provider:
       folder: k8s-dummy
       default-datastore: ds-1
       datacenter: eu-west-1
-      
+
 ```
 
 ## Annex
@@ -245,7 +245,7 @@ $ rancher ssh
 ```
 
 3. 
Inspect the logs of the controller-manager and kubelet containers looking for errors related to the vSphere cloud provider: - + ```sh $ docker logs --since 15m kube-controller-manager $ docker logs --since 15m kubelet diff --git a/layouts/shortcodes/step_create-cluster_cluster-options.html b/layouts/shortcodes/step_create-cluster_cluster-options.html index 5e15db10ed1..23084666e66 100644 --- a/layouts/shortcodes/step_create-cluster_cluster-options.html +++ b/layouts/shortcodes/step_create-cluster_cluster-options.html @@ -1 +1 @@ -

Use Cluster Options to choose the version of Kubernetes, what network provider will be used, if you want to enable Pod Security Policies and wether the nodes added to this cluster need to have a supported Docker version installed. +

Use Cluster Options to choose the version of Kubernetes, the network provider that will be used, whether you want to enable Pod Security Policies, and whether the nodes added to this cluster need to have a supported Docker version installed.

Use Member Roles to configure user authorization for the cluster.

    -
  • Click Add Members to add users that can access the cluster.
  • +
  • Click Add Member to add users that can access the cluster.
  • Use the Role drop-down to set permissions for each user.

diff --git a/src/img/rancher/horizontal-pod-autoscaler.jpg b/src/img/rancher/horizontal-pod-autoscaler.jpg new file mode 100644 index 0000000000000000000000000000000000000000..e39eae1bff88b600818c796117d9a29c949a30f4 GIT binary patch literal 38147 zcmeFZ1ymf%7B)J#%ituqd+^}y?(XhRaDuzLyL*7(5Zpbu1a}D#AS96Rh#fz<=gR-D z_tttnGkdDOt+jVm&CuP${lfh^09isrTm%3D0sw$KJOKC006_o*7&tgM7{mjCfPjF6 zf`^8BIFR7rVBmpBC@4TAAP^M;2NM7T3zP(8-RnOg~U*`f(W8I1iinPv$Y)>Xx6@%a%y;Vu()I4J2_z+c&Yz}(Bv zlpDH+25NpN{CHeML4Om4#@@%{en}Xu z_d^qmdHS7rMbo9{**63|oy9a++st6;hro|__(p{hYw?YJy>9@B_6t=v=h-0KTmo#F z%7nGd5Zmr=98gGisAy|3!-XH3fHCH_X-hh8e6DW@gvprt&b$UzgU>G#-x^@FGquYK zuI^ahaaAkjH`>nn!DeERisi;n_Y5ANeOK|#^4(fiz`Fj$0F(3+=CqM$(TQ&ehbrdyS#bmp;n`nbQO_qR$I>9RIY*N}+hfQwIDN)NdqGbbm zS&`Nhp@mPyVmVjKks!w8AM0=2lB8)NizN6Rhdl{v9jU6IKY;-ZJ6jdq9SeFtY%O07 zWTr_j4+}xO&ab@(0%=~j8?SD^|6%f)#zo>PF2F;kQB6({mqz&FlMX3{qS*s-*_k*azI>u=p(DWqnpaYMn`CH;4(YN5QX z<*)~FG6ud_W!QGS_vywr&TkUOROf zQt&L9^m&{YfP514fvmf}cRa|8`1r#E0J|tzDCMo{hxzX!AYFQ_2vp(%MT*t&tUISCvuTkGl-zmd}C3d=Y{gw=N7x zwK|-X@%dE;34-gD8^xQttqHG7G|udTUpar*+KScmne2BtX#XI??IcT?S62>W^mPz9 zYk@XHqVJQwI(yIt!IWplI@Ai?I-2x=- zhWVA@bP_*IJ1v*Hk#resMM**=b39^V|BEMMsbxFgAFcmQf4stZ1drAeKF#lU0w7vb zb(xRnFS;jx2mfscBvb7uiHqX~+xhn*=wpxBr`m04Xnzm?Z4Y}ks@bro+BaG5cOu`y znfp(|DPMAo$~ab$u+{3F2X^?4-`woI&Y#kWO9_kjKfBz^AtWsI$U{ME{*X@;>!?r| z&l7PrXGJACd*rOrL1YcP#&3C0+N05;IkCSpi2~s$GVWxnIReM`yYT+D1L-~%(5QePnZ59raYz0 zrS$YHjM~W6kydo=rNSirye{qm`W5v-MbDkACt1%x^x{AX3G8lU#{QeeBn+Rw4dHhT z_`?CM7aQ}PBP9Snl85i4F6xoWHV(`U)w_%T)C*oj8$ zfYjy#6s(qNvn}4d-Fg){eTqxB@fT#9!(#4Zxblke6(8WgNDx|iC3W>9b8xDx&?P;|c(1zD-H}YQL@;0Pvq@4@9D6y5+p#d;-@~ z5AaTSe5R7#0fq6panRmG2N!Jd@;S49g;lv13BD3#TM;I>1%>QtNuku9@$m417^ zm=rCFIP(qp!bFEjH4;-<2mIUX%*EiM)8wt0@{0Qsx>F@=Q$5w@WO=wi747$C=-uX& zN;~s&Wo^kNbbB-9zFbv|KrM~mHk~+~ED{@^@umFDB>Wo~(ko}L|M#2c-$?!ijD78& z{pkDODu2QLIb^zo=xIjqbD6*6f@GVqpXd1e9sB3_e>h~ezGrRb&jkiESzgAd%<*fg zLs%Q3e`}s=3IEdm9c&S1E4~ z#poWAQKoSi#vrpIx1t|O=k=4vV?MINn1cajO(!;R@ZB1@sIKn5?Agn5e?33@R7dF5a{=pvGV!}UZmz4_*V^wr zyp5T-ClGULU<<2C_G)1_e{TP`T#!hsrq`7vTj0Z3BsX2ClVEn-7O&z>%X}B{~P}# z1J55$5i%nK01%+y;ETJA zIS*$Kp&w2xfGWo6lB!aJgA?spgl0Sa99IV{(Goq&nf{5v?#ljE-9n@2Yha>d;Da4;APU`OX zN}R=3>R*Ze$sqDjQR2?*Y60aQz0|+uR1kcSE{tL&TbHGsL#eTmGxP0q-&>ukgAbn* zPj^0ks63c1Y;Rt9>z(w({mMml;c_@2k*3y=Bo7VqbU9?jkzmhMia zwl*%DYlY!&+O=Q5`$WAua`jI0b)lq(WAmF&G&)y@r`g#DI;XnTX>T&AG?z2S-yayf zuU2)sy9Y?TclhKvcX^=jX64l7l_X}ndu#FDN7dHkqus`0s^OwAw*otxoz^gK6=h~c z!Y~Jjd2v`r(yx&z&xwP-6I);HMuhvIZ4@Juz3Uf8LsiWD&EJm0POF7UY!@l2Qg))M zFfqgm!P>d3R)79SlV!GNW4%V^YD)w1Vm4~+ zN84I;>>T#<-VFDEUE>eLY6{opcX1s=-Z)fB$aArI>{EEPyAKIWIw*vHk&mS$m=tM4 zjxrQG0z;x*@>!a7WRQotl+cB|j~VPGwQ%F}?yQ$|H7`4Hu^Lz! 
zt>&T(oEof=m@H%*vb}23y9X@kcovTKjht3N4kMpr6kTlZe47&vBdkmf@Y7|q-~sN% zZfft}DktiPBW~^JHkxG0n~`BtTx5Uv9&4R6;s-y{Ga@|EYSF?d*MpgSuqNe8s|-Kz z;(++$j8``4;=Mz7qDG?M=ii-2xz^e34sa%5n$c^ed?RU@;` zn3+iZ6@wMp!^1SAPaQ2T8N4>Pm?<|*{pWqXiNq3!0s5az%}?L94o$%Wpp4EtcIY=2=iiEgp>;N zNJh3I9U;J8W-!BM1Twl9E~pLo%T&x5*qHFc%KrtULl9)YDdsE6Egj=T6o=&>$yD~-n|^Uz zsu|c*w)1+Thx!@qO9_R9_KRyW_H~w*O4NPLi{Mp3flQ)fxUb_JNU@pq3zLYmDeu}G z3A5W>8B2@1B41(56&6y;+yj)EXQ^9FNZofFEBc*jMj4wnaY0jxDbm7i$==Y(_mFE% zqsm8!co5teUA*;&#%eIgo_uSF#SPT}9onWUiJUp^XX2bN!4wtahDrYM8%Ha5}hzlcAb8O~l#Ho>~Bk|vWu#&?is@4&k<*eBpz-qc@ zQpvR4$Hx|Rgqc!5)%&9d&5PprV&^x!9j_EkZ%>cfo_l#0O}t;7cm4eFlXUjm)~fSQ zS0B@P*Icx>ZjY{5pLd+L4!oZ~w7Lg)t-3_Nt!A--4pXLFNJPXR5v^r{d4?o=W3xj# zt1M8Elp73RkkVdFST2y5AE2GPQ&wiKnD*WFJN0d$u_}T7BuXf1#`?;^%UDAe6wzMU zJ^{KypvaO;VR+I_xjLFGh$FpttKncygtGA(kX(V193`P=fRfcSBEYz|%uG`_#r?(O zVX|FThDBXQ3CXY))F|z|fNN5UREbvYl^0>>0foxXPK_$J#Tjd5=@PXRq?>2t@ie~= zXUw{f(L1~WVS|JY?OaZdI@UE*v{ba7Uxo^k;Ye&|Na*!L5VE$b<>(aVy@>+M+gX)} z3u9I{8Yr<0JQp3b&?8;`{-0UE$gw$IAe05RnkyvuzATlJcsvIgrz}4rj&1}aq4K_s z`{mRH*WCBarJ-=V7~-)?Z>SlaH5ffB%V{Q5R_}D0JQ@r(w_jonO0Wu7laQZ@C((Ub zp_LnxW)aV+!^k5@tB(`U3Op~cMb1!)ciPOKin6t56+d1Yx>@dJ?yL#ZR}m%gl4RYb zl(H1d-wMaokddmX@Kdy(jUi^FNw@hi;;B}f_@aFhA>ds`Jv*_%V$IX+zpeTOiP58h zWCfu}aTFF3v6)!n|D+ieV@aS(^u6dGF!4Wlb&J`>;gAHw5dQN!}W5FU8jydu*;bWHY`Q#X$P&?u3SsJ zp!u;nV(o8FU65&`>exgEpAd{e4PpSPvP2j#!%89jdL19doouT$dMAmb9zEd&%Z@*9^*R5WrW9wgbfwp|1SWD=8LRf20~)EG>9qPhwv1Qc}4EEz2ILo zJ9`hgf5LukFTlZG;q^-)rmAw~1*&F8AD1al?jQdcakbD6_)3mh(xcQe7*@gjFGX6h zw-|~i&BUhJA2Jn@SI6?JR}1%E(U3w+%a)rjYpX;8bnu`@YHR_pVi@+}G%Pf>Ij>|r> zW1wXb?vefA@QRyaKa?}AqI7fb`pV0#>dP41N{CW2Q!|T|(;4-J^5P6B7TFdq>TD4w zN7d!y5|S*@$reRIhS42G*=;r=90FWdW2Y12ja)27=#{6(^D|0Z?5nehJ~EgxXuU`B z(CldU>UlHwwmRjrv=IpcDUpOIm-xHq<4-?V?`m$#K%DPE9!z(8n>@&Ft?o4RlJC?bZQT^(3p3Jy)@h6&UIC>0(uA{Y@c z_Xqe_*?_^^ejNj&-zojdeXvBRdJsbw|$$58G3-Q1kQ7*3l!`DD*Y`>r51muS)#jX31)pZkgs&#PeRrRS(9lOb#8$ zTe=s6iUyIR4tDjN{hx&-WmPmw!Ms;MjWa{A6MS);+2~zE_m1QB9PY3l*s8wOpa9>H zKr;x>Cw=W}eDjCDR^6aWufFQlB}*AY#rP6p%2vR&jn`jgv=~xjE5;!bSNvsI**K5u zg&xMEz3W^&0QId5$rI>pbe&l|Y4w&+2TIMJUDTUeMVs%6uNwSnt^4y-DftNE49hKL(N z_oAHSyglh)9VkzK3gM@SQ`(^r=d>L)@S-_1a^@Jl-u1@PzJTwC-$ce3Vb*UxvUz{F$jr zLPFb{eEXYFPUvYVQ=oJPmDPr#DQr5MydA4{BCB0#-46_!Az%N&L#_}!93zGi&7gWr z9ifT9XkaQFBWm7@wZatJKjAaaU!L_27xamj#VeDQNh_t5vM5*-%>Jg!^O*0wo$W^; z?E{_Xg_rbNd3?#?h$0lTvmy@aA?=cNXrh)x+&1@GhD=oGD_625`2-SB;1Ak~)=ERD z_lQXIbbYPL<30bU)(Y_;_q;FN3RW?H_6ia*|JzxVH5bTFh|ll>MhZSkQNB+~NxE%% zy3b@Uc*X3IPj|3fz2>%-M^P#~i`7z!#H0X+jUA)g$$9udEfzMZ{q?$_-C^vibf zc>)9#BAG?qQTZ>3(k;;xE&`DLuelEzvb8P5ZQv?ev&Lh8MHG)Eg*=0#SWgP4Oxczj zlilhNTGjs%z$|3hkSv_B#YeCot?A~HPCo^lxt_9o5;GL6dX^)G8|#>(m5Bn2Lk5jH zx1^2X7*(B6B7`_h29zPG&tGwo-fR4-b!1NEOMhI=0mw%osx%gD+?t!yC5%US3Q7Y3 zf!Hv;Uzg!b#7zgT21~>pW5uyN(zY0^Ks>#!3Q9y#kJnK90Y1fgSu$_jzR{NMoyp^8 z{{%pwr^YS+)1z#9hK}I3yNrx(N`6FnOa-WbuNH7S*Gdawhs@kg=niaV^dsR-F zr__b|>{?EGRqSxYShxsZwGRjQ3o(1aZlQxj7>?7Sb0?f{f^|rtpb+=FPD0CifzdZ2 z0Q|!&dAqyB>w7?-D;e9fdw~1nuP$4p+!k!XmOZOPd|-|-9`Dzz2?u?oc{-)v5sq-F zW}DaJ^#ddtK%o8VZpxT-MX`|MTgZ9C)(Fsn&XDi58|!Oatakv&>67Gii0*Ry!Bv*{iUdAy6Z4Y)Gra(@Af;ZMu{E!C!wj!MTH38F8(v>t`LPuzpnR;<)|gsP%?x`I3F z!N!zUYzm*pU4FPeBidHYCUBT(u|Anr`|8>HY58JT#w;hBG4MoJ6*7-|$E_Pv=|~ka zU^el#5z1L-9DcWSv4+ftN z5tOYE22{lO^_jryGp3Chd0&tn1%wSK7596v{TTV8HlBuDBpKtJyFeWhVfS4zci_|> zN`N^7<1L*x)|l;clOHiQ2-&s;?fJckc$pPbH9cK_AU>fN(6p&gj3`FT#cN8JJ!1@RjlB0t@I-ui$ac&XeA;MW3{O>$gqCWV=`1_lNz9!jGEC;(ChVKf#Iy!lKkkWC@PJ4La4@we*YWoS+nsnE-u+)ia8{SP~EafgsphTGK(%G#}j&A3R%Zdz*Dc0E= zyTHAi9DPwp(EG$^rllgDvk1X&bA`W>GI0|d^%D_zCFyn3#S6K3`N&Q3b?w}eq9LJV 
zSR>J3(=P?A^$|}PeA7X?E_Dg6g~NUmmROkc7D7<4=2Ee=pKJu6$bw_My0uR^c+qvu&%&;v9;1zq?oNjHzKuV7YVns@YIF~vH#aZD=cj~t zNk_Or?KT+Pl+ZzlQ_L{!SAN`vU>Xrvl#_qjCWR9ICdRulp7@DX@6!YFJFggyPVR&s zbImIGE%{y0j8gfwNF2La8^SyLfj5ka&SCPFYcLMG@En?(I#fll`)U!^(2pXj3yr^~ zjB)@?TtdRcgw9&UBwG9=v434sI#G$FK)Ci{y{za<00TMYB`WD?K!Ni)nG(nckD^7o z=G3fZXlBxfuKhB=MezAZ+I6kuNg_rUl|tJ6NxVmUiIn2S<6V)yWzAQdsx-Ye%a>H) z90jgvPBFaur9S=34UJ5Weu}=FhQn0qIe|EPj~zEhSXyu8C9V06Va_W7N-^nA5w@eT z2*`5~3}@VlKUP+Qj9|`SLM365k|Q&ju7P1r7Fiy2fg|-W!Q!+ni-Xm!G)w5&a*1SzDaSSRsHQBjGX5%ISDBgqaVd7euRgR0@-?s&mA-R)MN+w3m4QO6c5?%cP z`M0Ev2?`oYn#z!C1#hgTr^XPT`sp{7nA*tG8kC%2o1q)zVT%`|LwAj%WP%GJk06r5 zWX$t$gHQ+=aVW&a0tv81ID&Dq(2_B}Ot~O=yxA+OT%|zLr{vT)pXYJ%@vLGt#-!PO zB3zw3hMW(?NivlDqw-e;ZAuC@Rl{XHnl+CRMaiv2S+i(PM2M*Gu#hB zTu|Qn(I9Ine+j@v7oc^GrDuopn!@{s`m%~ksjMMM{aojZRe#G+Hbu!`B4#o}IBkGz zA^ljP8eJeU!{0jkQz*!>_QQcCU8yHX2YW*mRkkA*2l{){M2g}CW6ImpG0i&Dyss&f zsgfYqsQT7YygrAX1$dJ%T)~#R$sJwlNXu&%qii}A>}nA%m`Yk~8ez;DvmQw#Ogq4M zbC5n!eRCP>J~8GZnki^DogWLAK%^7t5=xWf+Mu4VG2LFyOKUg@9WXJ8IW zj^8bPPTUw97aC-~=0sB-_M+XX-a%! zXEcq(D#<8RhONUijgBo^@Fsu8(H(#CDn?3(DWIPE)-9{`En)qE{Ge7&nmhX{0@>M8 zVHiXH>jt%!X$Bzh^uS&25S2%qU~8%Ozf$8TKa>00OA_0u_7 z(Fj?bSu2u6W4&W`H0=#P#1zYa_TZ;P$+(_16Nb{kTZHWRv2*mW$jm*UCKFRqi0K2) zGKtqNNXZNaI=utWUH2a9Gk?cOx*U~YuSUrWDvwv^8u;cYnp--O1@Ke#ZkU;7X9YLf zW+_2S?-M<>_Dd*2X3a^4hzI&uL|MHhRcGj;<6~{yWvQMoD4W9wIFS?%TSV6^mrfRp zY7=w7r7!iCa(! zl!SsBASSh^(__WUkh1J+)Vxx)l;?x_6|r0Ei_!dU(l`lp_SUZ;+feL5AH3Tn@2G6c z*HE9iUZ1PzsFrl+5>2h2pBOfHde^QyhK5q~o$4#iFm<`|F z1De{)MAr^bf*#x^BWfG8cOUN+9L%9B2nlHkk5uDBPe)4?7`((57wX0fofBKGLlPsF zU@b!@-q<8a{=E4Frbnp$ZA01%Ws+{Q7JWsl3)d`STBz%xg@Q+)z^x-B zLi1FW?g8mQ*MI{JZRh8(c;_&6(Eh3g?Jq9Q!!x*7ZRtwM>Vx#Ob4KCnv3;NU)6-3w zKrF#`w(_T7nlXW%K>nOnviF(8Qv96K5&hLwVErrc;<7-&3@*tC{Fu`v!7d39nGv82 z2BjHllmWwx_aihh!cE?D&ChWesS*Fw!(YWk1hZi2A3>KFui;sdca_be$T4qAP0?;T zrZoO?d3gAV`a5N&%4Zj~;d7p|Qb*cte=W3_ilhB-Ta2}~PfSYh=w-pkamgJ&$|tln z2KnIOS9y7{pQ>3`Mi-TZJp@+mwgZ5rqKDX~_kgrk@Z3CfBG0Y+$-ua1A96LrPST}C&iWCp3veyfYRImaK>N2X)rocC_VTzSN zT_&yS^ARz(-692nP;o9wc5tr4vRw3tiHygS36fDg=M?kAJbK4kOeTe_qU!l=j&uYo z60pim#Xcgr2-3)<+->L4e59RI;qGd;Za} z^zcM^Z$v&c#3_z*!O~&NyRr%g+c#YRgcSP@8(5_&YQ>9M;QT%~DU z=`<1{NK5XEF1y6HsZGpC91GN^54K3NEJ!5o7VLT*ja+Ni(yb{2o)DnB|KxMt!uD%gef*$yyGtldH>wV&_KZGIT+!K+jKfR@&DM_kYS zqzhOK*`&bPJl{fQLa-Vh>MN11H7(v|F^zl0Fe8AZ{^X2eVs7#E_xWmah%oK2zvGF5 zDA=N@9CK!>OCA@$^rlk-G&6525l$ByB~3u5kXVMUh>DctQ%rT*@NVyTRialk1G_e| zc~)o)nOc!j&}(0kn3R(JVq#nwS}SmL#LI1$!4Vj&Gvn!ZrXV-%_4QKRh+X0^;*M(s z^M2DVWLj2HWwA_Bmk{0cCeDLLG-s))`uII>7@eQ)xvTNNfRmUt93myM5C^laV|f&1 zsZmqXi;9pPA@92Aj#twVZmABLZF~T^A#KzWzbqz9u}ZqSTI~gc_BBv}zCp-&Owete*> zRdZaLY1x>Vpi|d{)^kv?RFklcJYq~jP@oAWzRnnR7}`N>x=1!kl9S|da=9^tM>~Dx51txC)GNO@s-zmI;~%XWpRRXegd4+%Z@a` z?I&++))$iuVJ{SBa&q0B}O8 zUMjp+1;M~xm$xp{IBYG}BM}Yn#Pp#Z4&#J(h7}j5?*Y}zO+QwbecnAl)9z{Rsynu5 z(5{Ow&sJ1#Q--458C?89sy;Cdi9Q}GIx0$lt^|(qdhk)x$jN5Y;u@osO*pd*|tVHGA4<4S9W%^JcaF@p6OzudF+c-IhdN)RFGL6ZoAj9EH&%%Z=2DA2VYi-0m-{+^oecCEkG})@NxdJeqFn0;?oXnhC0}rsq2n-+4(`Nj-|pG5u9& z1w*PyvN==PJ%ZAbos^V`UZ%EXp9`vn&A_f|h^jwZWi(_o%q4vOI&Kq(Y}%BF>2=c& z8+HZMO$nd2B0l92gzk=>n(jbm*b5~TB$#-vuLR}my2V}W5!f-WBh} zd1fEun06huJCHukDxVK{6~OcSf}q3}T+<}Nu#z2u0Sb0oV=7WLmV6-f$Wxi7{DjD}Jzl*?RcwNm zz(<}g`oi~sLHcF6(vWa2*^zNPwrU9L^4Ai!C2vFU-SDrtyP-5>X-g5zH3P&;&6O92 zcj#yc=R&qE#S^;tI?U|~j2tkb)pk#Hq@7OUaK!0aqjWVUBD&~b7MiM!8{tKfqmsUq z3y+9k+p-^3b;K5lXQEFO5@Fo(G$#qh%g;kz(&{LhQ>65djPO3aD^Q>w0s@2T)3k|* zWU--@3(ckbh6i})d(Pg*)782}3<~xUky3`66<8$A^1a5!SagT6W~BVs%SAd{t0|IN zb{8c~`y*Z*J>R+b5eq>Ll|wlz0qq&{%1AAd*Wv!b`lszsBPM6;&g-IO@ zRN|SA=BEi@uicDQ-y}}P2AibHouRlyDPkQaxs`fsw25FG$n>i2N1n&#KcT(34itQa 
zu{w3Zq%{%o7_zj~zkHNZ*R!SX@i>;XRHCY!g}OQY>>~^29_Cf!htn50S|~P1OrDYC z0enJ~FX^zhizlOOOKdx{C`6}AV~0hLkrS*PCGgmr33Q<2g55ADYg2;R_Um&4Y(u-W zde?1*2nMIkh|%0<110&hI~gF~mD%8Gno2eGnW3l4Lw76|BoOLy8PhUdufaG|GP#C{ z)T!WdvVBjO)dt07Tp6YE3cEdgVwR7ooP7B8E82BeKgGUrnq8&P zszOYDS)piyC}s(hSpDomS}I$#Tyv*D>0qD^5vwh-F}3*P#Mc_T>B|BCU~l}kfr=h| z-$}VjEHt*0aaab^KSq3f_FO(&X5x_PVDXcP7bkDYM6u}=_T};Ce4fng8PnHw|93gH}jqSwFw}6!i5{HXZVpOV<$44>y`U9AB?w1Av4KXZ-T|xm^VGe0ui2vFCCx zrjR%6a$9Ydq84(_MqU_kmcj>Nr}>W zrDtJLQ}RnxoWi1JB`4^(c@IPSdk9rX$f=i*Jl_tfb2RV_{hz4(-Z15g4Qef0x6NP* zUZUbZaQGW-WxBA=#;3zp!2^3*j{T}V)aP4HFn0A;$TxRKQLe)`#@<{uzH}Ioe)#yA z_7m8;@YMs6lce2KHG*63s@S)vHxh5oTfmz%SfM?=od>v#JsDr&J>^S=6X`;V9&g%< zs18!y>fcA%Fk2uz|1ghpo_IEV=ei22Mf(_>hnHi?1TCraiKggCJ)D)y39S!5#tez* z8Cp(Ds>wYdqT0*|mPV$_eTe`}Dl${kfM*bDfn5k73eHARgD43%Yy^94B6HI6#hxZ1 z+PMCFWlp|l8e%Wl4cIB3=-H*Znd~}&H3rPmw|l3JIgxVGcmUSei!{7rEN=4T)74b2?Atzm%i0nEYCp z?cp|(gA!5^aw0Ih0Iy`+^kHUb_cpk`R?0}k^l|n)CYaE5q`c?#4_7iJSvt0;mtBe2 zeKeq@d5rvf;gNm1Y`glwI?r+|bQgT@@PJ7m4x0{RNJ=)-F3Rblmoz2-V)ZVeR#4mq zO1-7(0Y{cg9|n4(k7!C2b9#`Dd=G!lB&}y8p&1}EPD68zD20zDv z7n0fZrHvyofZq=DWZbWbUIHBtY8HyEqh%ff$c~!1n+Yr;sqwr24iivXW)+=T8V006YP?@f(8A|hOtoJai*^O zvE>Mbg55xvKOqJ;5XF6utuu82LFy~Y1D;F_fV!YWaOEwn77KZchKk(Dy`Nn~05?_; zJ1n)QH#FSxZf*ZCq zcIuP8vP!u30MZ=erSs1q?mf17`Yr%}W<4TDGt!s;Y#D52F?L|85w?q8 zK!%VI<1phM_amJl`t4`ZpN`rhVkbw5HQ7jcAlZxUE_1VLZYe^{O-#|W8Cadskl>hd z4@U*&U|AN)nePF%{7s6e)FCywXT3dcQP{aOB?#fAQ&WfCF%_Ut9e!N=o^$wvoCB07 zStk5lGD5F`35-G|g)j&+$lQSu28-Y)o=GloK!RYqGXqe7zdrQgh#^^Zu@EPuFDK}P zAH~9BvuPe$pWILKFq>PjJn_Ou8U9y#U@68Uq(bYHh-P{0$#!pGQG_FW9rHHYD5C3! zVXk8N+H`%vH`v^}Auf?)k+qX}+w6rZ1-rm6Q7IB{Lpo23Gz{jgCs{f&L<6wx!5O94 z-F!glD33{fQv$A!+BRFuyr>4px>6yzMRI5ap|Lj*&SSw{>>=B^W*sLt!}Pc_^EhA< zjNaFEUGY{FSCmaW{0y%|tY%-(Zx8^MS~CsJBn$G|zt47bcOnf)A@j)32J(4GKKue``xH|;FM)z=3YRz)dVynscPPk=ua1a~cYq`ERPRn@ zf@PAuBf_-4*Ta-d=Fsdw37^Vajt%#DcewAdHRl%H;KP4MMY`d9cfIT>KZe0Zchm>A zoj40^bn%gbaVG0xf&^nEz#uRe?~>4+Xs!e^M92px3ECG&kRN)zn^pH?CSyh);@&3+ zW*@7L#b<(9TNoG{@yy{(f>68(saQ>y?~zMan0ezj&-!`pbsJAL4}d)z}qC$ zxf~*H{FoNfc@M}ZK2Z86=DvDFgLNZ{?BG9kG*$rtmjxY)CxeMcgh?`gKNeO2nsO)e zZ!}I|N{G=i4N_P>5XKNxU_=OSYELrMH9XFrXuP~>QV9a&^2a;_Y*&Da5Z_F!>o0PS z_cj4`c=ciq#@OG2z@t%)K)(6z?l>IifQKcVqLb%4X(u%$rJ`?!Q+K$7k(BZjua}db z3ByB!_Yg$bUj%$f9lsMAAF%KBUVC!n!`YUQkKbmmrc+3~+bf@};TkF!$`iM$GR?d>uVbXK$KlqB>s zq={J9H%k@CP9>%_k@o}Bg~$Tk;3AqtEjWmILZ4Yb7M?b=rb~Nt zdes|42}fl%+(!YRqky1zM!$0b!!TF~Z14=eoVlm^&tCP%atPDdF95awu6#Mq4lRTfPktY9L5 ziJ~`WfFlKDq(Zn~(;ciW2TTnW2x3%`o}va?mk=KW4JYn9p!cK{rUS%1oQSe7#<1Dx z7Ci99LQ5e$Rtvr~N-vyXod<=sDR`*xEm=1Zl>nYes8)hv-@Pi`GR^CqK{=p^3mbz0 zl$^mWz>H5svIcAU1-+9tP_2(RS-!Ou2yn-p{Ylsn9Q8lEFznF=GKrwpt!%kNTpFiD6l zXx+q#MhOf064K8+AC%R5Meaoe?BuhEL>?VusS3o-5iK!FJOW?>^yyo7k9 z$lCy97~ydm%Px64+w_=}PIy3#H8xe73)lCVxMI>m%!4(gy!A`U);rm0K=#AaSoSR^ z=X0wCb%NRv+#g;3Z8!W8313DAcChv@DF!8uqk~&iZK$@^S!#djVR1csQoj79E5vWS zI$^(c@*C)f#Bu52B&w%|X2$op)+fqwqySDrdS`I0lwM>E8%*DdGv0<>0NG70yo>Dk z+>YpUzof{(Gz+mW-RqtHa=NqRYhE^yuV2x=5noX$%8TD2S*qX~fvHP{;Gi)$YLW4kaoL z&rsi$QoW4slpQZ9o8txH<%la+JB6q4^P?x8%u%LS_94iK=SSMpy9XeXG_^bDTlw`E zrB>n3X+*(z#9-Nw?9IM#nLpo zzyb>_X~{uda*mRO70Hq_NS2&4D2RcKOAZ2(C1=SXAUO$0mLxf;2qH;DK;SJHzXgWKF7p>hirb8gnslMT)Yp3^KNr$;HzW~ zR5ii1{u5<|oINu3dAb6CQ`e2JFnW*hnrD6N3KkJ{1p%NCi1<;)JwUCi`OyWO4>`#} zVX`+ab9>iWPhJ~_LPbOc8i!@PTOYPT+2;YfSiGOb_xnC$*Q4U8fw0gYZ&&m^cRBG& z*c6bE;p;0@pF?F`Bw5i`hI~|a)R)bJlLlzF9ZluK-!7r=prwWDTt!S2nQH+k#|M-G z`e&N_E}G3Wb1nz=3LOFgsFCPC_|LsK$sgchq*~j{MADP}{pI z>|{*RV&TuHGQVK#4aexba0H~T=yY7XM~yj8fSk2b%bvxD8gLc#UL7b@fie!CA<*vt zhv2y@q<3~g<>R>d$>brb37i-`+$s?w@pQ(10~t-Sy(I*+07wwZHt3-rZ#$%~O})Yx 
zOE9$Nv;y0AT;6r5mqqWy%`sh5P@|H>6=F{GFn3LlsPnZWY#S~U1vrB$C*nmuh+b#n zt8C_1YO%Ql(D$YqkpIM4P0V1QDvJ`jLNjxPX6!=}+5nFtyJ`5Vo$5fQf!zJM4KSW0 z=pDqKJ>HVk9B?IrK2L?-{9y7)vW!OI?YTuXdBM7ZCuH~$#JWUL;S)4TLEh*ARt0Rd zGi-*@AC>(E@Y6#mB@*eomMjV2o37&xmrRjOU=c16zC>O5!4NEog%m>|dTt#db_BBp zEm*gOhPY<{tE8^hyJQ({-j8Ghe$@+|ejl58Zed2b?J^k2_ycnMnEk1LC!KrVGEwg1sz1zG1Qx?i(lG$GxqS*PhOzuqrBCsDap>QaI|ei&rbB-oyX&yCi*L}%nLTfQEz48- ztGHtTVu9J&ad^;w?d<9V+OPX5ZO`+3(TdtnZNCH84$hY!10HN6ZUN69N5wy@7ruGE zU_bh2B|m_pV?aHkzORV?*}Cl$;$6?`bwrFPK-7l)P0r(EKzRMT@MFMH)=$Ad0EC>)yO<8S@)b!Rc@?lIt-OAel}tSfx? zhD$%psUNoI`SH`s+2^zW_x*6*Q{R1Y9(Cf7?+M|Mv*Y(pIV=C^ne(VW={vK9+x^!N z-u-;g&v&B0(RRPUy&tN;Hj}nzPZV22xa%kKC$QNCDrdkU2Ow|aKVe-9ns;>$wgf62Jjc;-w5%T{>Z zf5}+b-{}9yz+(U(_$6AL-&cW0IUz{sXnAf6~9 z5ufHj!nyyl8Gn#5{s8{V?)HPM`v>rA@Ay|~TjBph#*9C#2oXh3yBhRchx=E>&ImfU zzJ-dDoXPbJ|8ELxKwCgipKl36;}6UPar#hl27sZ#lu#cHrT?B084UG)N>nBa_XI#= z)c#GqGeequL9``6FwzTkU6r{U5a8zqQnV-=_aRQz9PB zXTb*`Blg!pJ6X+edLx2(Fdv`G2#?As4KV?rtg^xB9a-A!jQ9SrPn#q&`twWB(J4`z?yS%X8bD=I!&JBfdV4>~F%78DnI4)Ostoh#MyqG>`_+f@5O?9Hsl@6Gmq0WuhQ zVVGHVcdJ&tj{&%mZ+PZ3SZt=B=}{P<^3teoQTp^{CSQ*QW#k~|=^Cun&P{CauwO}+ zP?0juT}7*02^3`_Z8zBD#m70z9zyi)p!x{#B_PqI3#nN~i4Z3b20)Rc`QXKs(5V_jz8>G-aZ965MMF@RfYF`qokC43|5QNa2U zb{_A5B`s-LJFD58zZx}(FBk2@Z+dc^7|X<}!xsC}tn9UImF{=I z1+nh)+uk9}Cm{+6?KDK4uD3b~QHTh|$(2?LMT97)K@soB4X7HP;ty*zD+7%R}rH7E+9XRQf&-MW5u@k?T`4%hG{RyEQm)|i-`hW6I=YNWxv?%PuC#7m7>;t zM~UeTEGK#^MVFaHHG8RD{X8;eGh`$5Jds+rnA%ZlUl$=Nl8EweGCfn4k;H|8{l}WG z?&(7&3#=O01MNmCMpFY8xXJwxI}p6{pg_03yG0;8_YtWk%7SAxB+Hh`j#_52m#0yx zNDnb-6}YYBC^T8d5z*NIr#VogshvKN^Ry=Hvs}^kC#!~s*HC7*$W?Z+%lY=;#tJXX zqgfx-K7i|%K3fRTdjaz?5(t=YEU2XHO~RsH9oi3d@noz4%`WX=i{D<3oz_)o?voo6 z!taxow^w&AgLIno1_!T~Crzg7-m>FIq3qCnK=j1lVqW{pUx2wN9nT~|x@VfN9L*X( z#B+_}Sk&(RAB9V#oZBGq4 ztz1m_?rwb3@9}Oq)n{ryWJXfw#qJId3v8{v>|BqrsS>S^xCmOeVgT?%1<|R_7MdbKG2li)Cmvja8Xx2*M)e)wsh#*jCV2WkcPDg1hEJw@D%sAl`Q&B`zxmPALNbDd zuF)|{ZRC>^N^KVdI)knr&I{YkrB$1tlfXESz&CY`4k80Vcu*1VZzty;L4h3)@zMko zWMmA~pQ|wO0aRRwMVK3SoF_{#(@Hm?BF%&I%8sdCKUZNQ0>j!B{$oH;h{mkx1Z6aY z?(^*f^Tr2<2fORDXQzF&x2won(uVa7&j`(&3c*;MN|X10B|%(`kuFQ8!fEkC`|GU5 z?Wf~KwXVvd&D!*z$@$KrdXZvW9z(*3-F5VsOjwHO@0R-^hI( z5*8U)6~y>t0-bHNE=>E$kMYdrZtEXSF3)EX1{+ax#)-zh>o*!jt!-lX|+-3#lv+xVCDUOmX@bvtm#xW9T4 zxALvf4U|LmL2D2h7*Y>GzCmzx!*n$9-pI2IJ8S%_SWG?!`?NZm>e6B9Y6=O9UWmOj zgQQi4X63#~+0W3&=hebkls1oKCY67T+PPWeGStJAg#k_B9Q;ZeClLu_f;P!UpcDJl ziYAuy$WJivP)kpG77W(Lm0Kn}QrDP?X-gT9<)RWl zhUja>dJ8*tF!N~|JVX)OtV&daR7qbz>nJcGvb){Fh-?&mWTW51%+NImk+Bj;Ww-D3 z8#Quw-sCbUU;kA3&Eo|P;lnxP2l1@s1cF1)_nNIG?SVW)j7JgkRM$lVr}6VsqKkOuCwd$M*yQO#a!clNz4H6wG3n}1(+DpTnEiMoMkK&9q26Nl{g3{4BiXiqoR3jN0FB8_R-~szYr=wv zS=NQs)5)Y+s)goC3_54eCZVcQNKd-rQetSn3(?(Z5KPH>Kwdk6j1Gz#b7!K^y;EJw z^(wYmCAq#4R#-udbGZS2>-$gkSpd=|FM6r_m?)tXpnOhdk*Z|Iu&6Ym+54!Y{#1@X~<<#+H^ezYzsg3i>=A*AyClv)l~zB8GLEW5Vq`gHLM8a%#ql=Vb`R`|dn@;<8u`ZFPJErbnTSBCz6p^hN9OQQ-Mez3 zlpV&%r(TMTMik-2AfyVW-mRei`t(;O)hh!-Xi;YLuKPm}W!E_)YF_%x-h1C<{Yr@t ze-H5}GzzHRjHKQ$8+b|n0q0q=Y z7D!;?&ehn6b7$$udd{s)rcH*##-_UJm$uJYvL5g#0pPVVV0mp)qDL83j6Ooznn~Ky zx9Rc{op4vc^n2k|%X%z;huGbjqRys5YZ0|-%Sb*rBB(NpnO?^TuMq80(^&#=Rz}g% zzw}9a`ruVz9QG%#GGJ<*)~E$qhN_s$5p#y<6;q~wR{8@@JGJ>WNGukaMMVf^A18_MW5T?ght_(|x&G<@0XlSiV zP-Fgt;i?a$$BhWp7^O7#*~69qhHRGr+aZFiB*y9IF}Wk&9K>EA5Hn79v$CCPjjFlO z7;L#i1aQyRF+opFH;Ko7Gds8o^-Elum55f|dvk*~830g}(9VWBg_^j$1Z3WX9;^3b zWEe&*dD$eF@2r&Y&UH*?jL94-6sc0qjX$7{^4dTivsg7~r@pHfX;U*)(XT#5S`cRfp$ch5The87ym{ z+ZFaUdZ$#gIG%=zm0>lJKi#NH00ld-jPxF zPAJGi!51N4RfA<@thz=cAfWd56;vXPO9`fa(= zgk=5*oLw!WD45i!=jbihLdpoia;am!alIELKB^R;wqX5Il-7HR=a+>=wI}C4 
[GIT binary patch data omitted]
literal 0
HcmV?d00001

diff --git a/src/img/rancher/move-namespaces.png b/src/img/rancher/move-namespaces.png
new file mode 100644
index 0000000000000000000000000000000000000000..9e6b7e9f42e9da37b4c38a3139d9436ce3aae8d8
GIT binary patch
literal 23180
[GIT binary patch data omitted]
literal 0
HcmV?d00001

From 9c3f39ca8b384e08f4a24d370ffffb7cb512de98 Mon Sep 17 00:00:00 2001
From: Mark Bishop
Date: Tue, 6 Nov 2018 18:44:16 -0700
Subject: [PATCH 09/15] cleaing up Raul's migration tool edits
---
 .../rancher/v2.x/en/v1.6-migration/_index.md | 120 ++++++++++++------
 1 file changed, 78 insertions(+), 42 deletions(-)

diff --git a/content/rancher/v2.x/en/v1.6-migration/_index.md b/content/rancher/v2.x/en/v1.6-migration/_index.md
index 0ae95ff40b3..6649ebd975e 100644
--- a/content/rancher/v2.x/en/v1.6-migration/_index.md
+++ b/content/rancher/v2.x/en/v1.6-migration/_index.md
@@ -3,7 +3,7 @@ title: Migrating from Rancher v1.6 Cattle to v2.x
 weight: 10000
 ---

-Rancher 2.0 has been rearchitected and rewritten with the goal of providing a complete management solution for Kubernetes and Docker. Due to these extensive changes, there is no direct upgrade path from 1.6.x to 2.x, but rather a migration of your 1.6 application workloads into the 2.0 Kubernetes equivalent. In 1.6, the most common orchestration used was Rancher's own engine called Cattle. The following blogs (that will be converted in an official guide) explain and educate our Cattle users on running workloads in a Kubernetes environment.
+Rancher 2.0 has been rearchitected and rewritten with the goal of providing a complete management solution for Kubernetes and Docker.
Due to these extensive changes, there is no direct upgrade path from 1.6.x to 2.x, but rather a migration of your 1.6 application workloads into the 2.0 Kubernetes equivalent. In 1.6, the most common orchestration used was Rancher's own engine called Cattle. The following blogs (that will be converted into an official guide) explain and educate our Cattle users on running workloads in a Kubernetes environment.

If you are an existing Kubernetes user on Rancher 1.6, you only need to review the [Get Started](#1-get-started) section to prepare you for what to expect in a new 2.0 Rancher cluster.

@@ -29,12 +29,12 @@ Because Rancher 1.6 defaulted to our Cattle container orchestrator, it primarily

| **Rancher 1.6** | **Rancher 2.0** |
| --- | --- |
-| Container | Pod | 
+| Container | Pod |
| Services | Workload |
| Load Balancer | Ingress |
-| Stack | Namespace | 
+| Stack | Namespace |
| Environment | Project (Administration)/Cluster (Compute)
-| Host | Node | 
+| Host | Node |
| Catalog | Helm |
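If you have access to a Rancher 2.0 cluster, a quick way to see this mapping in practice is a couple of `kubectl` queries. The namespace name below is only an example:

```
# 1.6 "stacks" surface as Kubernetes namespaces in 2.0
kubectl get namespaces

# 1.6 "containers" run as pods inside a namespace; "mystack" is a
# hypothetical namespace created for a migrated stack
kubectl get pods --namespace mystack
```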
More detailed information on Kubernetes concepts can be found in the
@@ -64,75 +64,71 @@ Blog Post: [Migrating from Rancher 1.6 to Rancher 2.0—A Short Checklist](https

## 2. Run Migration Tools

-To help with migration from 1.6 to 2.0, Rancher has developed a migration tool. Running this tool will help you check if your Rancher 1.6 applications can be migrated to 2.0. If an application can't be migrated, the tool will help you identify what's lacking.
+To help with migration from 1.6 to 2.0, Rancher has developed migration-tools. Running these tools helps you export Docker Compose files and check if your Rancher 1.6 applications can be migrated to 2.0. If an application can't be migrated, the tools help you identify what's lacking.

-This tool will:
+These tools:

-- Accept Docker Compose config files (i.e., `docker-compose.yml` and `rancher-compose.yml`) that you've exported from your Rancher 1.6 Stacks.
-- Output a list of constructs present in the Compose files that cannot be supported by Kubernetes in Rancher 2.0. These constructs require special handling or are parameters that cannot be converted to Kubernetes YAML, even using tools like Kompose.
+- `export` Docker Compose files (i.e., `docker-compose.yml` and `rancher-compose.yml`) from your stacks running on `cattle` environments in your Rancher 1.6 system. For every stack, files are exported to the `//` folder. To export all environments, you'll need an admin [API key]({{< baseurl >}}/rancher/v2.x/en/user-settings/api-keys).

-### A. Download the Migration Tool
+- `parse` Docker Compose files that you've exported from your Rancher 1.6 Stacks and output a list of constructs present in the Compose files that cannot be supported by Kubernetes in Rancher 2.0. These constructs require special handling or are parameters that cannot be converted to Kubernetes YAML.

-The Migration Tool for your platform can be downloaded from its [GitHub releases page](https://github.com/rancher/migration-tools/releases). The tool is available for Linux, Mac, and Windows platforms.
+### A. Download Migration-Tools
+
+Migration-tools for your platform can be downloaded from our [GitHub releases page](https://github.com/rancher/migration-tools/releases). The tools are available for Linux, Mac, and Windows platforms.

-### B. Configure the Migration Tool
+### B. Configure Migration-Tools

-After the tool is downloaded, you need to make some configurations to run it.
+After the tools are downloaded, you need to make some configurations to run them.

-1. Modify the Migration Tool file to make it an executable.
+1. Modify the migration-tools file to make it an executable.

- 1. Open Terminal and change to the directory that contains the Migration Tool file.
+ 1. Open Terminal and change to the directory that contains the migration-tool file.

- 1. Rename the Migration Tool file to `migration-tools` so that it no longer includes the platform name.
+ 1. Rename the file to `migration-tools` so that it no longer includes the platform name.

 1. Enter the following command to make `migration-tools` an executable:
-
 ```
 chmod +x migration-tools
- ```
-1. Export the configuration for each Rancher 1.6 Stack that you want to migrate to 2.0.
+ ```

- 1. Log into Rancher 1.6 and select **Stacks > All**.
-
- 1. From the **All Stacks** page, select **Ellipsis (...) > Export Config** for each Stack that you want to migrate.
-
- 1. Extract the downloaded `compose.zip`. Move the folder contents (`docker-compose.yml` and `rancher-compose.yml`) into the same directory as `migration-tools`.
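As a sketch of the download and configuration steps above, assuming a 64-bit Linux host and a release asset name that will vary by version and platform:

```
# Hypothetical asset name; check the GitHub releases page for the
# actual file for your platform, then rename it and make it executable
mv migration-tools_linux-amd64 migration-tools
chmod +x migration-tools

# Verify the binary runs
./migration-tools --version
```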
+### C. Run Migration-Tools -### C. Run the Migration Tool +Next, use migration-tools to export your Cattle environments from Rancher 1.6 as Docker Compose files. Then, for environments that you want to migrate to Rancher 2.0, convert its Compose file into Kubernetes YAML. -To use the Migration Tool, run the command below while pointing to the compose files exported from each stack that you want to migrate. If you want to migrate multiple stacks, you'll have to re-run the command for each pair of compose files that you exported. +>**Want full usage and options for migration-tools?** See the [Migration Tools Reference](#migration-tools-reference) below. -#### Usage +1. Export the Docker Compose files for your Cattle environments from Rancher 1.6. -You can run the Migration Tool by entering the following command, replacing each placeholder with the absolute path to your Stack's compose files. + From Terminal, execute the following command, replacing each placeholder with your values. -``` -migration-tools --docker-file --rancher-file -``` + ``` + migration-tools export --url --access-key --secret-key --export-dir + ``` -#### Options + **Step Result:** migration-tools exports Compose files for each of your Cattle environments in the `--export-dir` directory. If you omitted this option, Compose files are output to your current directory. -When using the Migration Tool, you can specify the paths to your Docker and Rancher compose files, regardless of where they are on your file system. -| Option | Description | -| ---------------------- | -------------------------------------------------------------------------------------- | -| `--docker-file ` | The absolute path to an exported Docker compose file (default value: `docker-compose.yml`)1. | -| `--rancher-file ` | The absolute path to an alternate Rancher compose file (default value: `rancher-compose.yml`)1. | -| `--help, -h` | Lists usage for the Migration Tool. | -| `--version, -v` | Lists the version of the Migration Tool in use. | +1. Convert the exported Compose files to Kubernetes YAML. + + Execute the following command, replacing each placeholder with the absolute path to your Stack's Compose files. If you want to migrate multiple stacks, you'll have to re-run the command for each pair of Compose files that you exported. + + ``` + migration-tools parse --docker-file --rancher-file + ``` + + >**Note:** If you omit the `--docker-file` and `--rancher-file` options from your command, migration-tools checks its home directory for Compose files. ->1 If you omit the `--docker-file` and `--rancher-file` options from your command, the migration tool will check its home directory for compose files. #### Output -After you run the migration tool, the following files output to the same directory that you ran the tool from. - +After you run the migration tools parse command, the following files are output to your target directory. | Output | Description | | --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `output.txt` | This file lists all constructs for each service in `docker-compose.yml` that requires special handling to be successfully migrated to Rancher 2.0. 
Each construct links to the relevant blog articles on how to implement it in Rancher 2.0 (these articles are also listed below). |
| Kubernetes YAML specs | Migration-tools internally invokes [Kompose](https://github.com/kubernetes/kompose) to generate Kubernetes YAML specs for each service you're migrating to 2.0. Each YAML spec file is named for the service you're migrating.

## 3. Migrate Applications
@@ -178,3 +174,43 @@ Blog Post: [From Cattle to Kubernetes-How to Load Balance Your Services in Ranch
 In Rancher 1.6, a Load Balancer was used to expose your applications from within the Rancher environment for external access.

 In Rancher 2.0, the concept is the same. There is a Load Balancer option to expose your services. In the language of Kubernetes, this function is more often referred to as an **Ingress**. In short, Load Balancer and Ingress play the same role.
+
+### Migration-Tools Reference
+
+Review this reference to find out what commands and options are available when using migration-tools.
+
+#### Usage
+
+```
+migration-tools [global options] command [command options] [arguments...]
+```
+
+#### Global Options
+
+Migration-tools includes a handful of options that can be used regardless of which commands you are using. These options are not required to run the tool. Rather, they're useful for troubleshooting.
+
+| Global Option | Description |
+| ----------------- | -------------------------------------------- |
+| `--debug` | Enables debug logging. |
+| `--log ` | Outputs logs to the path you enter. |
+| `--help`, `-h` | Displays a list of all commands available. |
+| `--version`, `-v` | Prints the version of migration-tools in use.|
+
+
+#### Commands and Command Options
+This section contains reference material for commands and options available for the migration-tools used in [step 2](#2-run-migration-tools).
+
+Command | Options | Required? | Description
+--------|---------|-------------|-----
+`export`| | N/A | Exports Compose files for every Stack running in a Cattle environment in Rancher v1.6.
+ |`--url ` | ✓ | Rancher API endpoint URL (``).
+ |`--access-key ` | ✓ | Rancher API access key. Using an admin [API key]({{< baseurl >}}/rancher/v2.x/en/user-settings/api-keys) exports stacks from all cattle environments (``).
+ |`--secret-key ` | ✓ | Rancher [API secret key]({{< baseurl >}}/rancher/v2.x/en/user-settings/api-keys) (``).
+ |`--export-dir ` | | Base directory that Compose files export to under sub-directories created for each environment/stack (default: `Export`).
+ |`--all`, `--a` | | Export all stacks. Using this flag exports any stack in a state of inactive, stopped, or removing.
+ |`--system`, `--s` | | Export system and infrastructure stacks.
+`parse` | | N/A | Parse Docker Compose and Rancher Compose files to get Kubernetes manifests.
+ |`--docker-file ` | | Parses Docker Compose file to output Kubernetes manifest (default: `docker-compose.yml`)
+ |`--output-file ` | | Name of file that outputs listing checks and advice for conversion (default: `output.txt`).
+ |`--rancher-file ` | | Parses Rancher Compose file to output Kubernetes manifest (default: `rancher-compose.yml`)
`help`, `h` | | N/A | Shows a list of options available for use with preceding command.
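Putting the two commands together, a minimal end-to-end run could look like the following. The URL, keys, environment name (`Default`), and stack name (`mystack`) are placeholders for your own values:

```
# Export Compose files for every Cattle stack under ./export
./migration-tools export \
  --url https://rancher.example.com \
  --access-key <RANCHER_ACCESS_KEY> \
  --secret-key <RANCHER_SECRET_KEY> \
  --export-dir ./export

# Convert one exported stack to Kubernetes YAML specs plus output.txt
./migration-tools parse \
  --docker-file ./export/Default/mystack/docker-compose.yml \
  --rancher-file ./export/Default/mystack/rancher-compose.yml
```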
+ From 5d3149aaef5949f9f9ce93a8a89498711c933cba Mon Sep 17 00:00:00 2001 From: Mark Bishop Date: Thu, 8 Nov 2018 13:50:41 -0700 Subject: [PATCH 10/15] adding back 5 min note --- .../rancher/v2.x/en/faq/technical/_index.md | 18 ++++++++++++++++++ .../en/installation/ha/helm-init/_index.md | 10 ---------- 2 files changed, 18 insertions(+), 10 deletions(-) diff --git a/content/rancher/v2.x/en/faq/technical/_index.md b/content/rancher/v2.x/en/faq/technical/_index.md index e82d4ac0ce2..2264a8c55f4 100644 --- a/content/rancher/v2.x/en/faq/technical/_index.md +++ b/content/rancher/v2.x/en/faq/technical/_index.md @@ -124,6 +124,24 @@ When the node is removed from the cluster, and the node is cleaned, you can read You can add additional arguments/binds/environment variables via the [Config File]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables]({{< baseurl >}}/rke/v0.1.x/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls]({{< baseurl >}}/rke/v0.1.x/en/example-yamls/). +### How do I check `Common Name` and `Subject Alternative Names` in my server certificate? + +Although technically an entry in `Subject Alternative Names` is required, having the hostname in both `Common Name` and as entry in `Subject Alternative Names` gives you maximum compatibility with older browser/applications. + +Check `Common Name`: + +``` +openssl x509 -noout -subject -in cert.pem +subject= /CN=rancher.my.org +``` + +Check `Subject Alternative Names`: + +``` +openssl x509 -noout -in cert.pem -text | grep DNS + DNS:rancher.my.org +``` + ### Why does it take 5+ minutes for a pod to be rescheduled when a node has failed? This is due to a combination of the following default Kubernetes settings: diff --git a/content/rancher/v2.x/en/installation/ha/helm-init/_index.md b/content/rancher/v2.x/en/installation/ha/helm-init/_index.md index ea51f9ddaea..c0877ab8203 100644 --- a/content/rancher/v2.x/en/installation/ha/helm-init/_index.md +++ b/content/rancher/v2.x/en/installation/ha/helm-init/_index.md @@ -25,22 +25,12 @@ kubectl create clusterrolebinding tiller \ helm init --service-account tiller -<<<<<<< HEAD # Users in China: You will need to specify a specific tiller-image in order to initialize tiller. # The list of tiller image tags are available here: https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085. # When initializing tiller, you'll need to pass in --tiller-image helm init --service-account tiller | --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller: -======= -# For chinese users -# The latest version of tiller images queries addresses: -# https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085 - -helm init --service-account tiller \ - --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller: - ->>>>>>> Specify tiller image for chinese users ``` > **Note:** This`tiller`install has full cluster access, which should be acceptable if the cluster is dedicated to Rancher server. Check out the [helm docs](https://docs.helm.sh/using_helm/#role-based-access-control) for restricting `tiller` access to suit your security requirements. 
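As one example of tightening that access, `tiller` can be bound to a single namespace instead of the whole cluster. The namespace and binding names here are illustrative, and the Helm RBAC documentation linked above remains the authoritative reference:

```
# Create a dedicated namespace and service account for tiller
kubectl create namespace tiller-scoped
kubectl create serviceaccount tiller --namespace tiller-scoped

# Grant admin rights inside that namespace only
kubectl create rolebinding tiller-binding \
  --clusterrole=admin \
  --serviceaccount=tiller-scoped:tiller \
  --namespace tiller-scoped

# Install tiller into the restricted namespace
helm init --service-account tiller --tiller-namespace tiller-scoped
```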
From 78c1f4e812dfcf8fc5e87804723a5107f31a11f7 Mon Sep 17 00:00:00 2001 From: Mark Bishop Date: Thu, 8 Nov 2018 18:47:36 -0700 Subject: [PATCH 11/15] edits per Denise --- .../rancher/v2.x/en/v1.6-migration/_index.md | 73 ++++--------------- .../migration-tools-ref/_index.md | 47 ++++++++++++ 2 files changed, 63 insertions(+), 57 deletions(-) create mode 100644 content/rancher/v2.x/en/v1.6-migration/migration-tools-ref/_index.md diff --git a/content/rancher/v2.x/en/v1.6-migration/_index.md b/content/rancher/v2.x/en/v1.6-migration/_index.md index 6649ebd975e..9ad773c4f4a 100644 --- a/content/rancher/v2.x/en/v1.6-migration/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/_index.md @@ -46,7 +46,7 @@ More detailed information on Kubernetes concepts can be found in the - [1. Get Started](#1-get-started) -- [2. Run Migration Tools](#2-run-migration-tools) +- [2. Run Migration-Tools CLI](#2-run-migration-tools-cli) - [3. Migrate Applications](#3-migrate-applications) - [4. Expose Your Services](#4-expose-your-services) - [5. Monitor Your Applications](#5-monitor-your-applications) @@ -62,26 +62,26 @@ As a Rancher 1.6 user who's interested in moving to 2.0, how should you get star Blog Post: [Migrating from Rancher 1.6 to Rancher 2.0—A Short Checklist](https://rancher.com/blog/2018/2018-08-09-migrate-1dot6-setup-to-2dot0/) -## 2. Run Migration Tools +## 2. Run Migration-Tools CLI -To help with migration from 1.6 to 2.0, Rancher has developed migration-tools. Running these tools helps you export Docker Compose files and check if your Rancher 1.6 applications can be migrated to 2.0. If an application can't be migrated, the tools help you identify what's lacking. +The migration-tools CLI is a tool that helps you recreate your applications in Rancher v2.0. This tool exports your Rancher v1.6 applications as Docker Compose files and converts them to a Kubernetes manifest that Rancher 2.0 can consume. -These tools: +This command line interface tool: -- `export` Docker Compose files (i.e., `docker-compose.yml` and `rancher-compose.yml`) from your stacks running on `cattle` environments in your Rancher 1.6 system. For every stack, files are exported to the `//` folder. To export all environments, you'll need an admin [API key]({{< baseurl >}}/rancherv2.x/en/user-settings/api-keys). +- Exports Docker Compose files (i.e., `docker-compose.yml` and `rancher-compose.yml`) from your stacks running on `cattle` environments in your Rancher 1.6 system. For every stack, files are exported to the `//` folder. -- `parse` Docker Compose files that you've exported from your Rancher 1.6 Stacks and output a list of constructs present in the Compose files that cannot be supported by Kubernetes in Rancher 2.0. These constructs require special handling or are parameters that cannot be converted to Kubernetes YAML. +- Parses Docker Compose files that you've exported from your Rancher 1.6 Stacks and converts them to a Kubernetes manifest that Rancher v2.0 can consume. The tool also outputs a list of constructs present in the Compose files that cannot be ported automatically to Rancher 2.0—you'll have to port them manually. -### A. Download Migration-Tools +### A. Download Migration-Tools CLI -Migration-tools for your platform can be downloaded from our [GitHub releases page](https://github.com/rancher/migration-tools/releases). The tools are available for Linux, Mac, and Windows platforms. 
+The migration-tools CLI for your platform can be downloaded from our [GitHub releases page](https://github.com/rancher/migration-tools/releases). The tools are available for Linux, Mac, and Windows platforms. -### B. Configure Migration-Tools +### B. Configure Migration-Tools CLI After the tools are downloaded, you need to make some configurations to run them. -1. Modify the migration-tools file to make it an executable. +1. Modify the migration-tools CLI file to make it an executable. 1. Open Terminal and change to the directory that contains the migration-tool file. @@ -93,11 +93,11 @@ After the tools are downloaded, you need to make some configurations to run them chmod +x migration-tools ``` -### C. Run Migration-Tools +### C. Run Migration-Tools CLI -Next, use migration-tools to export your Cattle environments from Rancher 1.6 as Docker Compose files. Then, for environments that you want to migrate to Rancher 2.0, convert its Compose file into Kubernetes YAML. +Next, use the migration-tools CLI to export your Cattle environments from Rancher 1.6 as Docker Compose files. Then, for environments that you want to migrate to Rancher 2.0, convert its Compose file into Kubernetes YAML. ->**Want full usage and options for migration-tools?** See the [Migration Tools Reference](#migration-tools-reference) below. +>**Want full usage and options for the migration-tools CLI?** See the [Migration-Tools CLI Reference]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/migration-tools-ref). 1. Export the Docker Compose files for your Cattle environments from Rancher 1.6. @@ -107,7 +107,7 @@ Next, use migration-tools to export your Cattle environments from Rancher 1.6 as migration-tools export --url --access-key --secret-key --export-dir ``` - **Step Result:** migration-tools exports Compose files for each of your Cattle environments in the `--export-dir` directory. If you omitted this option, Compose files are output to your current directory. + **Step Result:** The migration-tools CLI exports Compose files for each of your Cattle environments in the `--export-dir` directory. If you omitted this option, Compose files are output to your current directory. 1. Convert the exported Compose files to Kubernetes YAML. @@ -118,12 +118,12 @@ Next, use migration-tools to export your Cattle environments from Rancher 1.6 as migration-tools parse --docker-file --rancher-file ``` - >**Note:** If you omit the `--docker-file` and `--rancher-file` options from your command, migration-tools checks its home directory for Compose files. + >**Note:** If you omit the `--docker-file` and `--rancher-file` options from your command, the migration-tools CLI checks its home directory for Compose files. #### Output -After you run the migration tools parse command, the following files are output to your target directory. +After you run the migration-tools cli `parse` command, the following files are output to your target directory. | Output | Description | | --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | @@ -173,44 +173,3 @@ Blog Post: [From Cattle to Kubernetes-How to Load Balance Your Services in Ranch In Rancher 1.6, a Load Balancer was used to expose your applications from within the Rancher environment for external access. 
In Rancher 2.0, the concept is the same. There is a Load Balancer option to expose your services. In the language of Kubernetes, this function is more often referred to as an **Ingress**. In short, Load Balancer and Ingress play the same role. - -### Migration-Tools Reference - -Review this reference to find out what commands and options are available when using migration-tools. - -#### Usage - -``` -migration-tools [global options] command [command options] [arguments...] -``` - -#### Global Options - -Migration-tools includes a handful of options that can be used regardless of which commands you are using. These options are not required to run the tool. Rather, they're useful for troubleshooting. - -| Global Option | Description | -| ----------------- | -------------------------------------------- | -| `--debug` | Enables debug logging. | -| `--log ` | Outputs logs to the path you enter. | -| `--help`, `-h` | Displays a list of all commands available. | -| `--version`, `-v` | Prints the version of migration-tools in use.| - - -#### Commands and Command Options -This section contains reference material for commands and options available for the migration-tools used in [step 2](#2-run-migration-tools). - -Command | Options | Required? | Description ---------|---------|-------------|----- -`export`| | N/A | Exports Compose files for every Stack running in a Cattle environment in Rancher v1.6. - |`--url ` | ✓ | Rancher API endpoint URL (``). - |`--access-key ` | ✓ | Rancher API access key. Using an admin [API key]({{< baseurl >}}/rancherv2.x/en/user-settings/api-keys) exports stacks from all cattle environments (``). - |`--secret-key ` | ✓ | Rancher [API secret key]({{< baseurl >}}/rancherv2.x/en/user-settings/api-keys) (``). - |`--export-dir ` | | Base directory that Compose files export to under sub-directories created for each environment/stack (default: `Export`). - |`--all`, `--a` | | Export all stacks. Using this flag exports any stack in a state of inactive, stopped, or removing. - |`--system`, `--s` | | Export system and infrastructure stacks. -`parse` | | N/A | Parse Docker Compose and Rancher Compose files to get Kubernetes manifests. - |`--docker-file ` | | Parses Docker Compose file to output Kubernetes manifest (default: `docker-compose.yml`) - |`--output-file ` | | Name of file that outputs listing checks and advice for conversion (default: `output.txt`). - |`--rancher-file ` | | Parses Rancher Compose file to output Kubernetes manifest (default: `rancher-compose.yml`) -`help`, `h` | | N/A | Shows a list of options available for use with preceding command. - diff --git a/content/rancher/v2.x/en/v1.6-migration/migration-tools-ref/_index.md b/content/rancher/v2.x/en/v1.6-migration/migration-tools-ref/_index.md new file mode 100644 index 00000000000..ebda029781c --- /dev/null +++ b/content/rancher/v2.x/en/v1.6-migration/migration-tools-ref/_index.md @@ -0,0 +1,47 @@ +--- +title: Migration Tools CLI Reference +weight: 100 +--- + +The migration-tools CLI includes multiple commands and options to assist your migration from v1.6 to v2.0. This reference to find out what commands and options are available when using . + +## Download + +The migration-tools CLI for your platform can be downloaded from our [GitHub releases page](https://github.com/rancher/migration-tools/releases). The tool is available for Linux, Mac, and Windows platforms. + +## Usage + +``` +migration-tools [global options] command [command options] [arguments...] 
+``` + +## Global Options + +The migration-tools CLI includes a handful of options that can be used regardless of which commands you are using. These options are not required to run the tool. Rather, they're useful for troubleshooting. + +| Global Option | Description | +| ----------------- | -------------------------------------------- | +| `--debug` | Enables debug logging. | +| `--log ` | Outputs logs to the path you enter. | +| `--help`, `-h` | Displays a list of all commands available. | +| `--version`, `-v` | Prints the version of migration-tools CLI in use.| + + +## Commands and Command Options + +This section contains reference material for commands and options available for the migration-tools CLI. + +Command | Options | Required? | Description +--------|---------|-------------|----- +`export`| | N/A | Exports Compose files for every Stack running in a Cattle environment in Rancher v1.6. + |`--url ` | ✓ | Rancher API endpoint URL (``). + |`--access-key ` | ✓ | Rancher API access key. Using an admin [API key]({{< baseurl >}}/rancherv2.x/en/user-settings/api-keys) exports stacks from all cattle environments (``). + |`--secret-key ` | ✓ | Rancher [API secret key]({{< baseurl >}}/rancherv2.x/en/user-settings/api-keys) (``). + |`--export-dir ` | | Base directory that Compose files export to under sub-directories created for each environment/stack (default: `Export`). + |`--all`, `--a` | | Export all stacks. Using this flag exports any stack in a state of inactive, stopped, or removing. + |`--system`, `--s` | | Export system and infrastructure stacks. +`parse` | | N/A | Parse Docker Compose and Rancher Compose files to get Kubernetes manifests. + |`--docker-file ` | | Parses Docker Compose file to output Kubernetes manifest (default: `docker-compose.yml`) + |`--output-file ` | | Name of file that outputs listing checks and advice for conversion (default: `output.txt`). + |`--rancher-file ` | | Parses Rancher Compose file to output Kubernetes manifest (default: `rancher-compose.yml`) +`help`, `h` | | N/A | Shows a list of options available for use with preceding command. From 3f518d2f0305982aea2b0075afb7a404186c065a Mon Sep 17 00:00:00 2001 From: Denise Schannon Date: Mon, 12 Nov 2018 14:34:44 -0800 Subject: [PATCH 12/15] update for OS/Docker in requirements --- .../en/installation/requirements/_index.md | 30 +++++++------------ 1 file changed, 11 insertions(+), 19 deletions(-) diff --git a/content/rancher/v2.x/en/installation/requirements/_index.md b/content/rancher/v2.x/en/installation/requirements/_index.md index d3d4a50194f..33334bcff01 100644 --- a/content/rancher/v2.x/en/installation/requirements/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/_index.md @@ -7,17 +7,25 @@ aliases: Whether you're configuring Rancher to run in a single-node or high-availability setup, each node running Rancher Server must meet the following requirements. {{% tabs %}} -{{% tab "Operating Systems" %}} -Rancher is supported on the following operating systems and their subsequent releases. +{{% tab "Operating Systems and Docker" %}} +Rancher is supported on the following operating systems and their subsequent non-major releases with a supported version of [Docker](https://www.docker.com/). 
+ * Ubuntu 16.04 (64-bit) -* Red Hat Enterprise Linux 7.5 (64-bit) + * Docker 17.03.2 +* Red Hat Enterprise Linux (RHEL)/CentOS 7.5 (64-bit) + * RHEL Docker 1.13 + * Docker 17.03.2 * RancherOS 1.4 (64-bit) + * Docker 17.03.2 * Windows Server version 1803 (64-bit) + * Docker 18.06 If you are using RancherOS, make sure you switch the Docker engine to a supported version using:
`sudo ros engine switch docker-17.03.2-ce` +[Docker Documentation: Installation Instructions](https://docs.docker.com/) + {{% /tab %}} {{% tab "Hardware" %}} Hardware requirements scale based on the size of your Rancher deployment. Provision each individual node according to the requirements. @@ -53,22 +61,6 @@ Hardware requirements scale based on the size of your Rancher deployment. Provis
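Before provisioning nodes, it can be worth confirming that each one already runs one of the Docker releases listed above. One way to check, assuming nothing beyond a working Docker install:

```
# Ask the Docker daemon for the server version it is running
docker version --format '{{.Server.Version}}'
```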
-{{% /tab %}} -{{% tab "Software" %}} -A supported version of [Docker](https://www.docker.com/) is required. - -Supported Versions: - -* `1.12.6` -* `1.13.1` -* `17.03.2` -* `17.06` (for Windows) - -If you are using RancherOS, make sure you switch the Docker engine to a supported version using:
-`sudo ros engine switch docker-17.03.2-ce` - -[Docker Documentation: Installation Instructions](https://docs.docker.com/) - {{% /tab %}} {{% tab "Networking" %}} From c2610bc3450edcdac1cb367be8715962aa8ea5a9 Mon Sep 17 00:00:00 2001 From: Denise Schannon Date: Tue, 13 Nov 2018 14:43:14 -0800 Subject: [PATCH 13/15] editing migration tools --- .../rancher/v2.x/en/v1.6-migration/_index.md | 75 ++++++++---------- .../migration-tools-ref/_index.md | 76 ++++++++++++++----- 2 files changed, 90 insertions(+), 61 deletions(-) diff --git a/content/rancher/v2.x/en/v1.6-migration/_index.md b/content/rancher/v2.x/en/v1.6-migration/_index.md index 9ad773c4f4a..122f98ec1dc 100644 --- a/content/rancher/v2.x/en/v1.6-migration/_index.md +++ b/content/rancher/v2.x/en/v1.6-migration/_index.md @@ -3,13 +3,13 @@ title: Migrating from Rancher v1.6 Cattle to v2.x weight: 10000 --- -Rancher 2.0 has been rearchitected and rewritten with the goal of providing a complete management solution for Kubernetes and Docker. Due to these extensive changes, there is no direct upgrade path from 1.6.x to 2.x, but rather a migration of your 1.6 application workloads into the 2.0 Kubernetes equivalent. In 1.6, the most common orchestration used was Rancher's own engine called Cattle. The following blogs (that will be converted in an official guide) explain and educate our Cattle users on running workloads in a Kubernetes environment. +Rancher 2.x has been rearchitected and rewritten with the goal of providing a complete management solution for Kubernetes and Docker. Due to these extensive changes, there is no direct upgrade path from 1.6.x to 2.x, but rather a migration of your 1.6 application workloads into the 2.x Kubernetes equivalent. In 1.6, the most common orchestration used was Rancher's own engine called Cattle. The following blogs (that will be converted in an official guide) explain and educate our Cattle users on running workloads in a Kubernetes environment. -If you are an existing Kubernetes user on Rancher 1.6, you only need to review the [Get Started](#1-get-started) section to prepare you on what to expect on a new 2.0 Rancher cluster. +If you are an existing Kubernetes user on Rancher 1.6, you only need to review the [Get Started](#1-get-started) section to prepare you on what to expect on a new 2.x Rancher cluster. ## Kubernetes Basics -Rancher 2.0 is built on the [Kubernetes](https://kubernetes.io/docs/home/?path=users&persona=app-developer&level=foundational) container orchestrator. This shift in underlying technology for 2.0 is a large departure from 1.6, which supported several popular container orchestrators. Since Rancher is now based entirely on Kubernetes, it's helpful to learn the Kubernetes basics. +Rancher 2.x is built on the [Kubernetes](https://kubernetes.io/docs/home/?path=users&persona=app-developer&level=foundational) container orchestrator. This shift in underlying technology for 2.x is a large departure from 1.6, which supported several popular container orchestrators. Since Rancher is now based entirely on Kubernetes, it's helpful to learn the Kubernetes basics. The following table introduces and defines some key Kubernetes concepts. @@ -25,9 +25,9 @@ The following table introduces and defines some key Kubernetes concepts. ## Migration Cheatsheet -Because Rancher 1.6 defaulted to our Cattle container orchestrator, it primarily used terminology related to Cattle. However, because Rancher 2.0 uses Kubernetes, it aligns with the Kubernetes naming standard. 
This shift could be confusing for people unfamiliar with Kubernetes, so we've created a table that maps terms commonly used in Rancher 1.6 to their equivalents in Rancher 2.0. +Because Rancher 1.6 defaulted to our Cattle container orchestrator, it primarily used terminology related to Cattle. However, because Rancher 2.x uses Kubernetes, it aligns with the Kubernetes naming standard. This shift could be confusing for people unfamiliar with Kubernetes, so we've created a table that maps terms commonly used in Rancher 1.6 to their equivalents in Rancher 2.x. -| **Rancher 1.6** | **Rancher 2.0** | +| **Rancher 1.6** | **Rancher 2.x** | | --- | --- | | Container | Pod | | Services | Workload | @@ -58,23 +58,23 @@ More detailed information on Kubernetes concepts can be found in the ## 1. Get Started -As a Rancher 1.6 user who's interested in moving to 2.0, how should you get started with migration? The following blog provides a short checklist to help with this transition. +As a Rancher 1.6 user who's interested in moving to 2.x, how should you get started with migration? The following blog provides a short checklist to help with this transition. -Blog Post: [Migrating from Rancher 1.6 to Rancher 2.0—A Short Checklist](https://rancher.com/blog/2018/2018-08-09-migrate-1dot6-setup-to-2dot0/) +Blog Post: [Migrating from Rancher 1.6 to Rancher 2.x—A Short Checklist](https://rancher.com/blog/2018/2018-08-09-migrate-1dot6-setup-to-2dot0/) ## 2. Run Migration-Tools CLI -The migration-tools CLI is a tool that helps you recreate your applications in Rancher v2.0. This tool exports your Rancher v1.6 applications as Docker Compose files and converts them to a Kubernetes manifest that Rancher 2.0 can consume. +The migration-tools CLI is a tool that helps you recreate your applications in Rancher v2.x. This tool exports your Rancher v1.6 applications as Compose files and converts them to a Kubernetes manifest that Rancher 2.x can consume. This command line interface tool: -- Exports Docker Compose files (i.e., `docker-compose.yml` and `rancher-compose.yml`) from your stacks running on `cattle` environments in your Rancher 1.6 system. For every stack, files are exported to the `//` folder. +- Exports Compose files (i.e., `docker-compose.yml` and `rancher-compose.yml`) for all your stacks that are Cattle environments in your Rancher 1.6 server. For every stack, files are exported to a `//` folder. -- Parses Docker Compose files that you've exported from your Rancher 1.6 Stacks and converts them to a Kubernetes manifest that Rancher v2.0 can consume. The tool also outputs a list of constructs present in the Compose files that cannot be ported automatically to Rancher 2.0—you'll have to port them manually. +- Parses Compose files that you've exported from your Rancher 1.6 stack and converts them to a Kubernetes manifest that Rancher v2.x can consume. The tool also outputs a list of constructs present in the Compose files that cannot be converted automatically to Rancher 2.x. These are fields that you'll have to manually configure in the Kubernetes YAML. ### A. Download Migration-Tools CLI -The migration-tools CLI for your platform can be downloaded from our [GitHub releases page](https://github.com/rancher/migration-tools/releases). The tools are available for Linux, Mac, and Windows platforms. +The migration-tools CLI for your platform can be downloaded from our [GitHub releases page](https://github.com/rancher/migration-tools/releases). The tools are available for Linux, Mac, and Windows platforms. ### B. 
Configure Migration-Tools CLI @@ -83,7 +83,7 @@ After the tools are downloaded, you need to make some configurations to run them 1. Modify the migration-tools CLI file to make it an executable. - 1. Open Terminal and change to the directory that contains the migration-tool file. + 1. Open Terminal and change to the directory that contains the migration-tools file. 1. Rename the file to `migration-tools` so that it no longer includes the platform name. @@ -95,81 +95,72 @@ After the tools are downloaded, you need to make some configurations to run them ### C. Run Migration-Tools CLI -Next, use the migration-tools CLI to export your Cattle environments from Rancher 1.6 as Docker Compose files. Then, for environments that you want to migrate to Rancher 2.0, convert its Compose file into Kubernetes YAML. +Next, use the migration-tools CLI to export all stacks in all of the Cattle environments into Compose files. Then, for stacks that you want to migrate to Rancher 2.x, convert the Compose files into Kubernetes YAML. >**Want full usage and options for the migration-tools CLI?** See the [Migration-Tools CLI Reference]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/migration-tools-ref). -1. Export the Docker Compose files for your Cattle environments from Rancher 1.6. +1. Export the Compose files for all stacks in all of the Cattle environments in your Rancher 1.6 server. - From Terminal, execute the following command, replacing each placeholder with your values. + Execute the following command, replacing each placeholder with your values. The access key and secret key are Account API keys, which will allow you to export from all Cattle environments. ``` migration-tools export --url --access-key --secret-key --export-dir ``` - **Step Result:** The migration-tools CLI exports Compose files for each of your Cattle environments in the `--export-dir` directory. If you omitted this option, Compose files are output to your current directory. + **Step Result:** The migration-tools CLI exports Compose files for each stack in every Cattle environments in the `--export-dir` directory. If you omitted this option, the files are saved to your current directory. -1. Convert the exported Compose files to Kubernetes YAML. +1. Convert the exported Compose files for a stack to Kubernetes YAML. - Execute the following command, replacing each placeholder with the absolute path to your Stack's Compose files. If you want to migrate multiple stacks, you'll have to re-run the command for each pair of Compose files that you exported. + Execute the following command, replacing each placeholder with the absolute path to your Stack's Compose files. For each stack, you'll have to re-run the command for each pair of Compose files that was exported. ``` migration-tools parse --docker-file --rancher-file ``` - >**Note:** If you omit the `--docker-file` and `--rancher-file` options from your command, the migration-tools CLI checks its home directory for Compose files. + >**Note:** If you omit the `--docker-file` and `--rancher-file` options from your command, the migration-tools CLI checks its home directory for these Compose files. - -#### Output - -After you run the migration-tools cli `parse` command, the following files are output to your target directory. 
- -| Output | Description | -| --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `output.txt` | This file lists all constructs for each service in `docker-compose.yml` that requires special handling to be successfully migrated to Rancher 2.0. Each construct links to the relevant blog articles on how to implement it in Rancher 2.0 (these articles are also listed below). | -| Kubernetes YAML specs | Mirgation-tools internally invokes [Kompose](https://github.com/kubernetes/kompose) to generate Kubernetes YAML specs for each service you're migrating to 2.0. Each YAML spec file is named for the service you're migrating. + **Step Result:** The migration-tools CLI parses your Compose files and outputs Kubernetes YAML specs as well as an `output.txt` file. For each service in the stack, a YAML spec file is created and named the same as your service. The `output.txt` file lists all constructs for each service in `docker-compose.yml` that requires special handling to be successfully migrated to Rancher 2.x. Each construct links to the relevant blog articles on how to implement it in Rancher 2.x (these articles are also listed below). ## 3. Migrate Applications -In Rancher 1.6, you launch applications as _services_ and organize them under _stacks_ in an _environment_, which represents a compute and administrative boundary. Rancher 1.6 supports the Docker compose standard and provides import/export for application configurations using the following files: `docker-compose.yml` and `rancher-compose.yml`. In 2.0 the environment concept doesn't exist. Instead it's replaced by: +In Rancher 1.6, you launch applications as _services_ and organize them under _stacks_ in an _environment_, which represents a compute and administrative boundary. Rancher 1.6 supports the Docker compose standard and provides import/export for application configurations using the following files: `docker-compose.yml` and `rancher-compose.yml`. In 2.x the environment concept doesn't exist. Instead it's replaced by: - **Cluster:** The compute boundary. - **Project:** An administrative boundary. -The following article explores how to map Cattle's stack and service design to Kubernetes. It also demonstrates how to migrate a simple application from Rancher 1.6 to 2.0 using either the Rancher UI or Docker Compose. +The following article explores how to map Cattle's stack and service design to Kubernetes. It also demonstrates how to migrate a simple application from Rancher 1.6 to 2.x using either the Rancher UI or Docker Compose. Blog Post: [A Journey from Cattle to Kubernetes!](https://rancher.com/blog/2018/2018-08-02-journey-from-cattle-to-k8s/) ## 4. Expose Your Services -In Rancher 1.6, you could provide external access to your applications using port mapping. This article explores how to publicly expose your services in Rancher 2.0. It explores both UI and CLI methods to transition the port mapping functionality. +In Rancher 1.6, you could provide external access to your applications using port mapping. This article explores how to publicly expose your services in Rancher 2.x. It explores both UI and CLI methods to transition the port mapping functionality. 
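For a first taste of the 2.x equivalent of port mapping, a workload can be exposed with a `NodePort` service from the command line; the deployment name and port below are example values:

```
# Expose an existing deployment outside the cluster via NodePort
kubectl expose deployment myapp --type=NodePort --port=80

# Show the node port that was allocated
kubectl get service myapp
```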
-Blog Post: [From Cattle to Kubernetes—How to Publicly Expose Your Services in Rancher 2.0](https://rancher.com/blog/2018/expose-and-monitor-workloads/)
+Blog Post: [From Cattle to Kubernetes—How to Publicly Expose Your Services in Rancher 2.x](https://rancher.com/blog/2018/expose-and-monitor-workloads/)

 ## 5. Monitor Your Applications

-Rancher 1.6 provided TCP and HTTP healthchecks using its own healthcheck microservice. Rancher 2.0 uses native Kubernetes healthcheck support instead. This article overviews how to configure it in Rancher 2.0.
+Rancher 1.6 provided TCP and HTTP healthchecks using its own healthcheck microservice. Rancher 2.x uses native Kubernetes healthcheck support instead. This article gives an overview of how to configure healthchecks in Rancher 2.x.

-Blog Post: [From Cattle to Kubernetes—Application Healthchecks in Rancher 2.0](https://rancher.com/blog/2018/2018-08-22-k8s-monitoring-and-healthchecks/)
+Blog Post: [From Cattle to Kubernetes—Application Healthchecks in Rancher 2.x](https://rancher.com/blog/2018/2018-08-22-k8s-monitoring-and-healthchecks/)

 ## 6. Schedule Deployments

-Scheduling application containers on available resources is a key container orchestration technique. The following blog reviews how to schedule containers in Rancher 2.0 for those familiar with 1.6 scheduling labels (such as affinity and anti-affinity). It also explores how to launch a global service in 2.0.
+Scheduling application containers on available resources is a key container orchestration technique. The following blog reviews how to schedule containers in Rancher 2.x for those familiar with 1.6 scheduling labels (such as affinity and anti-affinity). It also explores how to launch a global service in 2.x.

-Blog Post: [From Cattle to Kubernetes—Scheduling Workloads in Rancher 2.0](https://rancher.com/blog/2018/2018-08-29-scheduling-options-in-2-dot-0/)
+Blog Post: [From Cattle to Kubernetes—Scheduling Workloads in Rancher 2.x](https://rancher.com/blog/2018/2018-08-29-scheduling-options-in-2-dot-0/)

 ## 7. Service Discovery

-Rancher 1.6 provides service discovery within and across stacks using its own internal DNS microservice. It also supports pointing to external services and creating aliases. Moving to Rancher 2.0, you can replicate this same service discovery behavior. The following blog reviews this topic and the solutions needed to achieve service discovery parity in Rancher 2.0.
+Rancher 1.6 provides service discovery within and across stacks using its own internal DNS microservice. It also supports pointing to external services and creating aliases. Moving to Rancher 2.x, you can replicate this same service discovery behavior. The following blog reviews this topic and the solutions needed to achieve service discovery parity in Rancher 2.x.

-Blog Post: [From Cattle to Kubernetes—Service Discovery in Rancher 2.0](https://rancher.com/blog/2018/2018-09-04-service_discovery_2dot0/)
+Blog Post: [From Cattle to Kubernetes—Service Discovery in Rancher 2.x](https://rancher.com/blog/2018/2018-09-04-service_discovery_2dot0/)

 ## 8. Load Balancing

-How to achieve TCP/HTTP load balancing and configure hostname/path-based routing in Rancher 2.0.
+How to achieve TCP/HTTP load balancing and configure hostname/path-based routing in Rancher 2.x.
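For orientation, here is a minimal sketch of hostname-based routing through an Ingress, assuming a Service named `web` already exists in the target namespace; the hostname is illustrative, and the `extensions/v1beta1` API version reflects Kubernetes releases current at the time of writing:

```
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: web.example.com        # illustrative hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: web       # assumes an existing Service named "web"
          servicePort: 80
EOF
```

The blog post below covers the equivalent configuration through the Rancher UI, along with TCP (Layer 4) options.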
-Blog Post: [From Cattle to Kubernetes-How to Load Balance Your Services in Rancher 2.0](https://rancher.com/blog/2018/2018-09-13-load-balancing-options-2dot0/)
-
-In Rancher 1.6, a Load Balancer was used to expose your applications from within the Rancher environment for external access. In Rancher 2.0, the concept is the same. There is a Load Balancer option to expose your services. In the language of Kubernetes, this function is more often referred to as an **Ingress**. In short, Load Balancer and Ingress play the same role.
+Blog Post: [From Cattle to Kubernetes-How to Load Balance Your Services in Rancher 2.x](https://rancher.com/blog/2018/2018-09-13-load-balancing-options-2dot0/)
+
+In Rancher 1.6, a Load Balancer was used to expose your applications from within the Rancher environment for external access. In Rancher 2.x, the concept is the same. There is a Load Balancer option to expose your services. In the language of Kubernetes, this function is more often referred to as an **Ingress**. In short, Load Balancer and Ingress play the same role.

diff --git a/content/rancher/v2.x/en/v1.6-migration/migration-tools-ref/_index.md b/content/rancher/v2.x/en/v1.6-migration/migration-tools-ref/_index.md
index ebda029781c..1c1c060e730 100644
--- a/content/rancher/v2.x/en/v1.6-migration/migration-tools-ref/_index.md
+++ b/content/rancher/v2.x/en/v1.6-migration/migration-tools-ref/_index.md
@@ -3,7 +3,7 @@
 title: Migration Tools CLI Reference
 weight: 100
 ---

-The migration-tools CLI includes multiple commands and options to assist your migration from v1.6 to v2.0. This reference to find out what commands and options are available when using .
+The migration-tools CLI includes multiple commands and options to assist your migration from Rancher v1.6 to Rancher v2.x.

 ## Download

@@ -15,9 +15,9 @@ The migration-tools CLI for your platform can be downloaded from our [GitHub rel
 migration-tools [global options] command [command options] [arguments...]
 ```

-## Global Options
+## Migration Tools Global Options

-The migration-tools CLI includes a handful of options that can be used regardless of which commands you are using. These options are not required to run the tool. Rather, they're useful for troubleshooting.
+The migration-tools CLI includes a handful of global options.

 | Global Option | Description |
 | ----------------- | -------------------------------------------- |
@@ -26,22 +26,60 @@ The migration-tools CLI includes a handful of options that can be used regardles
 | `--help`, `-h` | Displays a list of all commands available. |
 | `--version`, `-v` | Prints the version of migration-tools CLI in use.|

-
 ## Commands and Command Options

-This section contains reference material for commands and options available for the migration-tools CLI.
+### Migration-Tools Export Reference

-Command | Options | Required? | Description
---------|---------|-------------|-----
-`export`| | N/A | Exports Compose files for every Stack running in a Cattle environment in Rancher v1.6.
-        |`--url <RANCHER_URL>` | ✓ | Rancher API endpoint URL (`<RANCHER_URL>`).
-        |`--access-key <RANCHER_ACCESS_KEY>` | ✓ | Rancher API access key. Using an admin [API key]({{< baseurl >}}/rancherv2.x/en/user-settings/api-keys) exports stacks from all cattle environments (`<RANCHER_ACCESS_KEY>`).
-        |`--secret-key <RANCHER_SECRET_KEY>` | ✓ | Rancher [API secret key]({{< baseurl >}}/rancherv2.x/en/user-settings/api-keys) (`<RANCHER_SECRET_KEY>`).
-        |`--export-dir <EXPORT_DIR>` | | Base directory that Compose files export to under sub-directories created for each environment/stack (default: `Export`).
-        |`--all`, `--a` | | Export all stacks. Using this flag exports any stack in a state of inactive, stopped, or removing.
-        |`--system`, `--s` | | Export system and infrastructure stacks.
-`parse` | | N/A | Parse Docker Compose and Rancher Compose files to get Kubernetes manifests.
-        |`--docker-file <DOCKER_COMPOSE_FILE>` | | Parses Docker Compose file to output Kubernetes manifest (default: `docker-compose.yml`)
-        |`--output-file <OUTPUT_FILE>` | | Name of file that outputs listing checks and advice for conversion (default: `output.txt`).
-        |`--rancher-file <RANCHER_COMPOSE_FILE>` | | Parses Rancher Compose file to output Kubernetes manifest (default: `rancher-compose.yml`)
-`help`, `h` | | N/A | Shows a list of options available for use with preceding command.
+The `migration-tools export` command exports all stacks from your Rancher v1.6 server into Compose files.
+
+#### Options
+
+| Option | Required? | Description |
+| --- | --- | --- |
+|`--url <RANCHER_URL>` | ✓ | Rancher API endpoint URL (`<RANCHER_URL>`). |
+|`--access-key <RANCHER_ACCESS_KEY>` | ✓ | Rancher API access key. Using an account API key exports all stacks from all Cattle environments (`<RANCHER_ACCESS_KEY>`). |
+|`--secret-key <RANCHER_SECRET_KEY>` | ✓ | Rancher API secret key associated with the access key (`<RANCHER_SECRET_KEY>`). |
+|`--export-dir <EXPORT_DIR>` | | Base directory that Compose files export to under sub-directories created for each environment/stack (default: `Export`). |
+|`--all`, `--a` | | Export all stacks. Using this flag exports any stack in a state of inactive, stopped, or removing. |
+|`--system`, `--s` | | Export system and infrastructure stacks. |
+
+#### Usage
+
+Execute the following command, replacing each placeholder with your values. The access key and secret key are Account API keys, which will allow you to export from all Cattle environments.
+
+```
+migration-tools export --url <RANCHER_URL> --access-key <RANCHER_ACCESS_KEY> --secret-key <RANCHER_SECRET_KEY> --export-dir <EXPORT_DIR>
+```
+
+**Result:** The migration-tools CLI exports Compose files for each stack in every Cattle environment in the `--export-dir` directory. If you omitted this option, the files are saved to your current directory.
+
+### Migration-Tools Parse Reference
+
+The `migration-tools parse` command parses the Compose files for a stack and uses [Kompose](https://github.com/kubernetes/kompose) to generate equivalent Kubernetes YAML. It also outputs an `output.txt` file, which lists all the constructs that will need manual intervention in order to be converted to Kubernetes.
+
+#### Options
+
+| Option | Required? | Description |
+| ---|---|--- |
+|`--docker-file <DOCKER_COMPOSE_FILE>` | | Parses the Docker Compose file to output a Kubernetes manifest (default: `docker-compose.yml`). |
+|`--output-file <OUTPUT_FILE>` | | Name of the file that lists checks and advice for conversion (default: `output.txt`). |
+|`--rancher-file <RANCHER_COMPOSE_FILE>` | | Parses the Rancher Compose file to output a Kubernetes manifest (default: `rancher-compose.yml`). |
+
+#### Subcommands
+
+| Subcommand | Description |
+| ---|---|
+| `help`, `h` | Shows a list of options available for use with the preceding command. |
+
+#### Usage
+
+Execute the following command, replacing each placeholder with the absolute path to your stack's Compose files. For each stack, you'll have to re-run the command for each pair of Compose files that was exported.
+
+```
+migration-tools parse --docker-file <DOCKER_COMPOSE_FILE> --rancher-file <RANCHER_COMPOSE_FILE>
+```
+
+>**Note:** If you omit the `--docker-file` and `--rancher-file` options from your command, the migration-tools CLI checks its home directory for these Compose files.
+
+**Result:** The migration-tools CLI parses your Compose files and outputs Kubernetes YAML specs as well as an `output.txt` file.
For each service in the stack, a YAML spec file is created and named the same as your service. The `output.txt` file lists all constructs for each service in `docker-compose.yml` that require special handling to be successfully migrated to Rancher 2.x. Each construct links to the relevant blog articles on how to implement it in Rancher 2.x.

From 971d01cbd542b0a1c78e2eacd43d2a5edea377c3 Mon Sep 17 00:00:00 2001
From: Mark Bishop
Date: Tue, 13 Nov 2018 16:47:26 -0700
Subject: [PATCH 14/15] fixed typos and other minor issues prior to publication

---
 .../rancher/v2.x/en/v1.6-migration/_index.md | 23 +++++++++----------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/content/rancher/v2.x/en/v1.6-migration/_index.md b/content/rancher/v2.x/en/v1.6-migration/_index.md
index 122f98ec1dc..942c04dbdff 100644
--- a/content/rancher/v2.x/en/v1.6-migration/_index.md
+++ b/content/rancher/v2.x/en/v1.6-migration/_index.md
@@ -70,7 +70,7 @@ This command line interface tool:

 - Exports Compose files (i.e., `docker-compose.yml` and `rancher-compose.yml`) for all your stacks that are Cattle environments in your Rancher 1.6 server. For every stack, files are exported to a `<environment_name>/<stack_name>` folder.

-- Parses Compose files that you've exported from your Rancher 1.6 stack and converts them to a Kubernetes manifest that Rancher v2.x can consume. The tool also outputs a list of constructs present in the Compose files that cannot be converted automatically to Rancher 2.x. These are fields that you'll have to manually configure in the Kubernetes YAML.
+- Parses Compose files that you've exported from your Rancher 1.6 stack and converts them to a Kubernetes manifest that Rancher v2.x can consume. The tool also outputs a list of constructs present in the Compose files that cannot be converted automatically to Rancher 2.x. These are files that you'll have to manually configure in the Kubernetes YAML.

 ### A. Download Migration-Tools CLI

@@ -79,19 +79,18 @@ The migration-tools CLI for your platform can be downloaded from our [GitHub rel

 ### B. Configure Migration-Tools CLI

-After the tools are downloaded, you need to make some configurations to run them.
+After you download the migration-tools CLI, rename it and make it executable.

-1. Modify the migration-tools CLI file to make it an executable.

-    1. Open Terminal and change to the directory that contains the migration-tools file.
+1. Open Terminal and change to the directory that contains the migration-tools file.

-    1. Rename the file to `migration-tools` so that it no longer includes the platform name.
+1. Rename the file to `migration-tools` so that it no longer includes the platform name.

-    1. Enter the following command to make `migration-tools` an executable:
+1. Enter the following command to make `migration-tools` an executable:

-    ```
-    chmod +x migration-tools
-    ```
+   ```
+   chmod +x migration-tools
+   ```

 ### C. Run Migration-Tools CLI

@@ -112,7 +111,7 @@ Next, use the migration-tools CLI to export all stacks in all of the Cattle envi

 1. Convert the exported Compose files for a stack to Kubernetes YAML.

-    Execute the following command, replacing each placeholder with the absolute path to your Stack's Compose files. For each stack, you'll have to re-run the command for each pair of Compose files that was exported.
+    Execute the following command, replacing each placeholder with the absolute path to your stack's Compose files. For each stack, you'll have to re-run the command for each pair of Compose files that was exported.
    ```
    migration-tools parse --docker-file <DOCKER_COMPOSE_FILE> --rancher-file <RANCHER_COMPOSE_FILE>
    ```

@@ -124,7 +123,7 @@ Next, use the migration-tools CLI to export all stacks in all of the Cattle envi

 ## 3. Migrate Applications

-In Rancher 1.6, you launch applications as _services_ and organize them under _stacks_ in an _environment_, which represents a compute and administrative boundary. Rancher 1.6 supports the Docker compose standard and provides import/export for application configurations using the following files: `docker-compose.yml` and `rancher-compose.yml`. In 2.x the environment concept doesn't exist. Instead it's replaced by:
+In Rancher 1.6, you launch applications as _services_ and organize them under _stacks_ in an _environment_, which represents a compute and administrative boundary. Rancher 1.6 supports the Compose standard and provides import/export for application configurations using the following files: `docker-compose.yml` and `rancher-compose.yml`. In 2.x, the environment concept doesn't exist. Instead, it's replaced by:

 - **Cluster:** The compute boundary.
 - **Project:** An administrative boundary.

@@ -163,4 +162,4 @@ How to achieve TCP/HTTP load balancing and configure hostname/path-based routing

 Blog Post: [From Cattle to Kubernetes-How to Load Balance Your Services in Rancher 2.x](https://rancher.com/blog/2018/2018-09-13-load-balancing-options-2dot0/)

-In Rancher 1.6, a Load Balancer was used to expose your applications from within the Rancher environment for external access. In Rancher 2.x, the concept is the same. There is a Load Balancer option to expose your services. In the language of Kubernetes, this function is more often referred to as an **Ingress**. In short, Load Balancer and Ingress play the same role.
+In Rancher 1.6, a load balancer was used to expose your applications from within the Rancher environment for external access. In Rancher 2.x, the concept is the same. There is a Load Balancer option to expose your services. In the language of Kubernetes, this function is more often referred to as an _Ingress_. In short, load balancer and Ingress play the same role.

From 353ee3ec120562a30019bc243dfce130309e4509 Mon Sep 17 00:00:00 2001
From: Mark Bishop
Date: Tue, 13 Nov 2018 17:25:18 -0700
Subject: [PATCH 15/15] changing 'files' to 'directives'

---
 content/rancher/v2.x/en/v1.6-migration/_index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/rancher/v2.x/en/v1.6-migration/_index.md b/content/rancher/v2.x/en/v1.6-migration/_index.md
index 942c04dbdff..8b5c70d29a2 100644
--- a/content/rancher/v2.x/en/v1.6-migration/_index.md
+++ b/content/rancher/v2.x/en/v1.6-migration/_index.md
@@ -70,7 +70,7 @@ This command line interface tool:

 - Exports Compose files (i.e., `docker-compose.yml` and `rancher-compose.yml`) for all your stacks that are Cattle environments in your Rancher 1.6 server. For every stack, files are exported to a `<environment_name>/<stack_name>` folder.

-- Parses Compose files that you've exported from your Rancher 1.6 stack and converts them to a Kubernetes manifest that Rancher v2.x can consume. The tool also outputs a list of constructs present in the Compose files that cannot be converted automatically to Rancher 2.x. These are files that you'll have to manually configure in the Kubernetes YAML.
+- Parses Compose files that you've exported from your Rancher 1.6 stack and converts them to a Kubernetes manifest that Rancher v2.x can consume. The tool also outputs a list of constructs present in the Compose files that cannot be converted automatically to Rancher 2.x.
These are directives that you'll have to manually configure in the Kubernetes YAML. ### A. Download Migration-Tools CLI
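To make the download-and-configure steps above concrete, a shell session might look like the following sketch; the downloaded file name is hypothetical, so substitute the actual asset name for your platform from the GitHub releases page linked above:

```
# Rename the downloaded binary (hypothetical asset name) so it no longer
# includes the platform name, then make it executable.
mv migration-tools_linux-amd64 migration-tools
chmod +x migration-tools

# Confirm the CLI runs.
./migration-tools --version
```

If the version prints correctly, the CLI is ready for the export and parse steps covered earlier.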