Merge pull request #978 from rancher/staging

Bring Staging to live with migration and other stuff
Committed by Denise via GitHub, 2018-11-13 22:25:26 -08:00
12 changed files with 435 additions and 144 deletions
@@ -12,7 +12,15 @@ New password for default admin user (user-xxxxx):
<new_password>
```
High Availability install (Helm):
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- reset-password
New password for default admin user (user-xxxxx):
<new_password>
```
High Availability install (RKE add-on):
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG exec -n cattle-system $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name=="cattle-server") | .metadata.name') -- reset-password
@@ -20,6 +28,7 @@ New password for default admin user (user-xxxxx):
<new_password>
```
### I deleted/deactivated the last admin, how can I fix it?
Single node install:
```
@@ -29,7 +38,15 @@ New password for default admin user (user-xxxxx):
<new_password>
```
High Availability install (Helm):
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- ensure-default-admin
New password for default admin user (user-xxxxx):
<new_password>
```
High Availability install (RKE add-on):
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG exec -n cattle-system $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name=="cattle-server") | .metadata.name') -- ensure-default-admin
@@ -37,7 +54,6 @@ New password for default admin user (user-xxxxx):
<new_password>
```
### How can I enable debug logging?
* Single node install
@@ -54,8 +70,27 @@ $ docker exec -ti <container_id> loglevel --set info
OK
```
* High Availability install (Helm)
* Enable
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | awk '{ print $1 }' | xargs -I{} kubectl --kubeconfig $KUBECONFIG -n cattle-system exec {} -- loglevel --set debug
OK
OK
OK
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system logs -l app=rancher
```
* Disable
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | awk '{ print $1 }' | xargs -I{} kubectl --kubeconfig $KUBECONFIG -n cattle-system exec {} -- loglevel --set info
OK
OK
OK
```
* High Availability install (RKE add-on)
* Enable
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
@@ -71,7 +106,6 @@ $ kubectl --kubeconfig $KUBECONFIG exec -n cattle-system $(kubectl --kubeconfig
OK
```
### My ClusterIP does not respond to ping
ClusterIP is a virtual IP, which will not respond to ping. The best way to test whether the ClusterIP is configured correctly is to use `curl` to access the IP and port and see if it responds.
@@ -124,6 +158,24 @@ When the node is removed from the cluster, and the node is cleaned, you can read
You can add additional arguments/binds/environment variables via the [Config File]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables]({{< baseurl >}}/rke/v0.1.x/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls]({{< baseurl >}}/rke/v0.1.x/en/example-yamls/).
### How do I check `Common Name` and `Subject Alternative Names` in my server certificate?
Although technically an entry in `Subject Alternative Names` is required, having the hostname both in `Common Name` and as an entry in `Subject Alternative Names` gives you maximum compatibility with older browsers/applications.
Check `Common Name`:
```
openssl x509 -noout -subject -in cert.pem
subject= /CN=rancher.my.org
```
Check `Subject Alternative Names`:
```
openssl x509 -noout -in cert.pem -text | grep DNS
DNS:rancher.my.org
```
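If you want to see how these checks behave end to end, you can generate a throwaway self-signed certificate with a matching SAN entry and run both commands against it. This is a hypothetical sketch: `rancher.my.org` is just the example hostname from above, and the `-addext` option requires OpenSSL 1.1.1 or newer.

```
# Generate a throwaway key/cert pair with the hostname in both the
# Common Name and the Subject Alternative Names (OpenSSL >= 1.1.1).
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmpdir/key.pem" -out "$tmpdir/cert.pem" \
  -subj "/CN=rancher.my.org" \
  -addext "subjectAltName=DNS:rancher.my.org"

# Check Common Name (exact output format varies by OpenSSL version)
openssl x509 -noout -subject -in "$tmpdir/cert.pem"

# Check Subject Alternative Names
openssl x509 -noout -in "$tmpdir/cert.pem" -text | grep DNS
```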
### Why does it take 5+ minutes for a pod to be rescheduled when a node has failed?
This is due to a combination of the following default Kubernetes settings:
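Roughly: the kubelet reports node status every 10 seconds (`node-status-update-frequency`), the controller manager considers a node unhealthy after 40 seconds without a status update (`node-monitor-grace-period`), and evicts that node's pods 5 minutes after that (`pod-eviction-timeout`), totaling about 5 minutes 40 seconds before rescheduling. These are the upstream Kubernetes defaults at the time of writing; verify them against your cluster version. If you need faster failover, the flags can be tuned through RKE's `extra_args` in `cluster.yml`. A hedged sketch (the values are illustrative, not recommendations):

```
# Hypothetical cluster.yml fragment: lower values mean faster failover
# but more control-plane and network load. Keys mirror upstream flags.
services:
  kubelet:
    extra_args:
      node-status-update-frequency: 5s
  kube-controller:
    extra_args:
      node-monitor-period: 2s
      node-monitor-grace-period: 16s
      pod-eviction-timeout: 30s
```

Lowering these values increases load on the control plane, so change them gradually and test failover behavior.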
@@ -11,13 +11,13 @@ This procedure walks you through setting up a 3-node cluster with RKE and instal
## Recommended Architecture
* DNS for Rancher should resolve to a Layer 4 load balancer (TCP)
* The load balancer should forward ports TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster.
* The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443.
* The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment.
![Rancher HA]({{< baseurl >}}/img/rancher/ha/rancher2ha.svg)
<sup>HA Rancher install with Layer 4 load balancer (TCP), depicting SSL termination at ingress controllers</sup>
## Required Tools
@@ -21,6 +21,7 @@ Configure a load balancer as a basic Layer 4 TCP forwarder. The exact configurat
#### Examples
* [NGINX]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/)
* [Amazon NLB]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/nlb/)
### [Next: Install Kubernetes with RKE]({{< baseurl >}}/rancher/v2.x/en/installation/ha/kubernetes-rke/)
@@ -0,0 +1,75 @@
---
title: NGINX
weight: 270
---
NGINX will be configured as a Layer 4 load balancer (TCP) that forwards connections to one of your Rancher nodes.
>**Note:**
> In this configuration, the load balancer is positioned in front of your nodes. The load balancer can be any host capable of running NGINX.
>
> One caveat: do not use one of your Rancher nodes as the load balancer.
## Install NGINX
Start by installing NGINX on the node you want to use as a load balancer. NGINX has packages available for all known operating systems. The versions tested are `1.14` and `1.15`. For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/).
The `stream` module is required, which is present when using the official NGINX packages. Please refer to your OS documentation on how to install and enable the NGINX `stream` module on your operating system.
## Create NGINX Configuration
After installing NGINX, you need to update the NGINX configuration file, `nginx.conf`, with the IP addresses for your nodes.
1. Copy and paste the code sample below into your favorite text editor. Save it as `nginx.conf`.
2. From `nginx.conf`, replace `<IP_NODE_1>`, `<IP_NODE_2>`, and `<IP_NODE_3>` with the IPs of your [nodes]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/).
>**Note:** See [NGINX Documentation: TCP and UDP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/) for all configuration options.
<figcaption>Example NGINX config</figcaption>
```
worker_processes 4;
worker_rlimit_nofile 40000;
events {
worker_connections 8192;
}
http {
server {
listen 80;
return 301 https://$host$request_uri;
}
}
stream {
upstream rancher_servers {
least_conn;
server <IP_NODE_1>:443 max_fails=3 fail_timeout=5s;
server <IP_NODE_2>:443 max_fails=3 fail_timeout=5s;
server <IP_NODE_3>:443 max_fails=3 fail_timeout=5s;
}
server {
listen 443;
proxy_pass rancher_servers;
}
}
```
3. Save `nginx.conf` to your load balancer at the following path: `/etc/nginx/nginx.conf`.
4. Load the updates to your NGINX configuration by running the following command:
```
# nginx -s reload
```
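Before reloading, you can optionally check the file for syntax errors; `nginx -t` validates the configuration without affecting running traffic. Typical output from the packaged NGINX looks like:

```
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```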
## Option - Run NGINX as Docker container
Instead of installing NGINX as a package on the operating system, you can run it as a Docker container. Save the edited **Example NGINX config** as `/etc/nginx.conf` and run the following command to launch the NGINX container:
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /etc/nginx.conf:/etc/nginx/nginx.conf \
nginx:1.14
```
@@ -33,7 +33,7 @@ helm init --service-account tiller |
--tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:<tag>
```
> **Note:** This `tiller` install has full cluster access, which should be acceptable if the cluster is dedicated to Rancher server. Check out the [helm docs](https://docs.helm.sh/using_helm/#role-based-access-control) for restricting `tiller` access to suit your security requirements.
### Test your Tiller installation
@@ -11,17 +11,32 @@ Rancher installation is managed using the Helm package manager for Kubernetes.
Use `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories).
Replace both occurrences of `<CHART_REPO>` with the Helm chart repository that you want to use (i.e. `latest` or `stable`).
```
helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
```
### Choose your SSL Configuration
Rancher Server is designed to be secure by default and requires SSL/TLS configuration.
There are three recommended options for the source of the certificate.
> **Note:** If you want to terminate SSL/TLS externally, see [TLS termination on an External Load Balancer]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#external-tls-termination).
| Configuration | Chart option | Description | Requires cert-manager |
|-----|-----|-----|-----|
| [Rancher Generated Certificates](#rancher-generated-certificates) | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed)<br/>This is the **default** | [yes](#optional-install-cert-manager) |
| [Lets Encrypt](#let-s-encrypt) | `ingress.tls.source=letsEncrypt` | Use [Let's Encrypt](https://letsencrypt.org/) to issue a certificate | [yes](#optional-install-cert-manager) |
| [Certificates from Files](#certificates-from-files) | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s) | no |
### Optional: Install cert-manager
> **Note:** cert-manager is only required for certificates issued by Rancher's generated CA (`ingress.tls.source=rancher`) and Let's Encrypt issued certificates (`ingress.tls.source=letsEncrypt`). You should skip this step if you are using your own certificate files (option `ingress.tls.source=secret`) or if you use [TLS termination on an External Load Balancer]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#external-tls-termination).
Rancher relies on [cert-manager](https://github.com/kubernetes/charts/tree/master/stable/cert-manager) from the official Kubernetes Helm chart repository to issue certificates from Rancher's own generated CA or to request Let's Encrypt certificates.
Install `cert-manager` from the Kubernetes Helm chart repository.
@@ -31,21 +46,21 @@ helm install stable/cert-manager \
--namespace kube-system
```
Wait for `cert-manager` to be rolled out:
```
kubectl -n kube-system rollout status deploy/cert-manager
Waiting for deployment "cert-manager" rollout to finish: 0 of 1 updated replicas are available...
deployment "cert-manager" successfully rolled out
```
<br/>
#### Rancher Generated Certificates
> **Note:** You need to have [cert-manager](#optional-install-cert-manager) installed before proceeding.
The default is for Rancher to generate a CA and uses `cert-manager` to issue the certificate for access to the Rancher server interface. Because `rancher` is the default option for `ingress.tls.source`, we are not specifying `ingress.tls.source` when running the `helm install` command.
- Replace `<CHART_REPO>` with the repository that you configured in [Add the Helm Chart Repository](#add-the-helm-chart-repository) (i.e. `latest` or `stable`).
- Set the `hostname` to the DNS name you pointed at your load balancer.
@@ -59,12 +74,24 @@ helm install rancher-<CHART_REPO>/rancher \
--set hostname=rancher.my.org
```
Wait for Rancher to be rolled out:
```
kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
```
#### Let's Encrypt
> **Note:** You need to have [cert-manager](#optional-install-cert-manager) installed before proceeding.
This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate as Let's Encrypt is a trusted CA. This configuration uses HTTP validation (`HTTP-01`) so the load balancer must have a public DNS record and be accessible from the internet.
- Replace `<CHART_REPO>` with the repository that you configured in [Add the Helm Chart Repository](#add-the-helm-chart-repository) (i.e. `latest` or `stable`).
- Set `hostname` to the public DNS record, `ingress.tls.source` to `letsEncrypt`, and `letsEncrypt.email` to the email address used for communication about your certificate (for example, expiry notices).
>**Using Air Gap?** [Set the `rancherImage` option]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#install-rancher-using-private-registry) in your command, pointing toward your private registry.
@@ -77,16 +104,24 @@ helm install rancher-<CHART_REPO>/rancher \
--set letsEncrypt.email=me@example.org
```
Wait for Rancher to be rolled out:
```
kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
```
#### Certificates from Files
Create Kubernetes secrets from your own certificates for Rancher to use.
> **Note:** The `Common Name` or a `Subject Alternative Names` entry in the server certificate must match the `hostname` option, or the ingress controller will fail to configure correctly. Although an entry in the `Subject Alternative Names` is technically required, having a matching `Common Name` maximizes compatibility with older browsers/applications. If you want to check if your certificates are correct, see [How do I check Common Name and Subject Alternative Names in my server certificate?]({{< baseurl >}}/rancher/v2.x/en/faq/technical/#how-do-i-check-common-name-and-subject-alternative-names-in-my-server-certificate)
- Replace `<CHART_REPO>` with the repository that you configured in [Add the Helm Chart Repository](#add-the-helm-chart-repository) (i.e. `latest` or `stable`).
- Set `hostname` and set `ingress.tls.source` to `secret`.
- If you are using a Private CA signed certificate, add `--set privateCA=true` to the command shown below.
```
helm install rancher-<CHART_REPO>/rancher \
@@ -96,7 +131,25 @@ helm install rancher-<CHART_REPO>/rancher \
--set ingress.tls.source=secret
```
Now that Rancher is deployed, see [Adding TLS Secrets]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them.
After adding the secrets, check if Rancher was rolled out successfully:
```
kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
```
If you see the following error: `error: deployment "rancher" exceeded its progress deadline`, you can check the status of the deployment by running the following command:
```
kubectl -n cattle-system get deploy rancher
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
rancher 3 3 3 3 3m
```
It should show the same count for `DESIRED` and `AVAILABLE`.
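If the counts do not match, the pod events and logs usually show why. Nothing Rancher-specific is assumed in these generic inspection commands; `logs -l app=rancher` is the same selector used in the debug-logging section of the FAQ:

```
kubectl -n cattle-system get pods -l app=rancher
kubectl -n cattle-system describe deploy rancher
kubectl -n cattle-system logs -l app=rancher
```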
### Advanced Configurations
@@ -116,4 +169,4 @@ Make sure you save the `--set` options you used. You will need to use the same o
That's it! You should have a functional Rancher server. Point a browser at the hostname you picked and you should be greeted by the colorful login page.
Doesn't work? Take a look at the [Troubleshooting]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/troubleshooting/) page.
@@ -86,7 +86,7 @@ We recommend configuring your load balancer as a Layer 4 balancer, forwarding pl
You may terminate SSL/TLS on a L7 load balancer external to the Rancher cluster (ingress). Use the `--set tls=external` option and point your load balancer at HTTP port 80 on all of the Rancher cluster nodes. This will expose the Rancher interface on HTTP port 80. Be aware that clients that are allowed to connect directly to the Rancher cluster will not be encrypted. If you choose to do this, we recommend that you restrict direct access at the network level to just your load balancer.
> **Note:** If you are using a Private CA signed certificate, add `--set privateCA=true` and see [Adding TLS Secrets - Using a Private CA Signed Certificate]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/#using-a-private-ca-signed-certificate) to add the CA cert for Rancher.
Your load balancer must support long-lived WebSocket connections and will need to insert proxy headers so Rancher can route links correctly.
@@ -106,3 +106,50 @@ Your load balancer must support long lived websocket connections and will need t
#### Health Checks
Rancher will respond `200` to health checks on the `/healthz` endpoint.
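Once Rancher is up, you can use this endpoint to verify connectivity through your load balancer. A hedged example: `rancher.my.org` is a placeholder for your Rancher hostname, and `-k` skips certificate verification, which you may need with self-signed certificates.

```
curl -sk -o /dev/null -w '%{http_code}\n' https://rancher.my.org/healthz
200
```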
#### Example NGINX config
* Replace `IP_NODE_1`, `IP_NODE_2` and `IP_NODE_3` with the IP addresses of the nodes in your cluster.
* Replace both occurrences of `FQDN` with the DNS name for Rancher.
* Replace `/certs/fullchain.pem` and `/certs/privkey.pem` with the location of the server certificate and the server certificate key respectively.
```
upstream rancher {
server IP_NODE_1:80;
server IP_NODE_2:80;
server IP_NODE_3:80;
}
map $http_upgrade $connection_upgrade {
default Upgrade;
'' close;
}
server {
listen 443 ssl http2;
server_name FQDN;
ssl_certificate /certs/fullchain.pem;
ssl_certificate_key /certs/privkey.pem;
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://rancher;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
# This allows the ability for the execute shell window to remain open for up to 15 minutes. Without this parameter, the default is 1 minute and will automatically close.
proxy_read_timeout 900s;
proxy_buffering off;
}
}
server {
listen 80;
server_name FQDN;
return 301 https://$server_name$request_uri;
}
```
@@ -5,7 +5,7 @@ weight: 276
Kubernetes will create all the objects and services for Rancher, but it will not become available until we populate the `tls-rancher-ingress` secret in the `cattle-system` namespace with the certificate and key.
Combine the server certificate followed by any intermediate certificate(s) needed into a file named `tls.crt`. Copy your certificate key into a file named `tls.key`.
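As a concrete sketch (the filenames `server.crt`, `intermediate.crt`, and `server.key` are hypothetical placeholders for the files your CA delivered):

```
# Order matters: the server (leaf) certificate must come first,
# followed by any intermediate certificate(s).
cat server.crt intermediate.crt > tls.crt
cp server.key tls.key
```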
Use `kubectl` with the `tls` secret type to create the secrets.
@@ -15,13 +15,13 @@ kubectl -n cattle-system create secret tls tls-rancher-ingress \
--key=tls.key
```
### Using a Private CA Signed Certificate
If you are using a private CA, Rancher requires a copy of the CA certificate which is used by the Rancher Agent to validate the connection to the server.
Copy the CA certificate into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.
>**Important:** Make sure the file is called `cacerts.pem` as Rancher uses that filename to configure the CA certificate.
```
kubectl -n cattle-system create secret generic tls-ca \
@@ -7,17 +7,25 @@ aliases:
Whether you're configuring Rancher to run in a single-node or high-availability setup, each node running Rancher Server must meet the following requirements.
{{% tabs %}}
{{% tab "Operating Systems and Docker" %}}
Rancher is supported on the following operating systems and their subsequent non-major releases with a supported version of [Docker](https://www.docker.com/).
* Ubuntu 16.04 (64-bit)
    * Docker 17.03.2
* Red Hat Enterprise Linux (RHEL)/CentOS 7.5 (64-bit)
    * RHEL Docker 1.13
    * Docker 17.03.2
* RancherOS 1.4 (64-bit)
    * Docker 17.03.2
* Windows Server version 1803 (64-bit)
    * Docker 18.06
If you are using RancherOS, make sure you switch the Docker engine to a supported version using:<br>
`sudo ros engine switch docker-17.03.2-ce`
[Docker Documentation: Installation Instructions](https://docs.docker.com/)
{{% /tab %}}
{{% tab "Hardware" %}}
Hardware requirements scale based on the size of your Rancher deployment. Provision each individual node according to the requirements.
@@ -53,22 +61,6 @@ Hardware requirements scale based on the size of your Rancher deployment. Provis
</table>
<br/>
{{% /tab %}}
{{% tab "Networking" %}}
@@ -132,6 +132,7 @@ server {
proxy_set_header Connection $connection_upgrade;
# This allows the ability for the execute shell window to remain open for up to 15 minutes. Without this parameter, the default is 1 minute and will automatically close.
proxy_read_timeout 900s;
proxy_buffering off;
}
}
@@ -3,13 +3,13 @@ title: Migrating from Rancher v1.6 Cattle to v2.x
weight: 10000
---
Rancher 2.x has been rearchitected and rewritten with the goal of providing a complete management solution for Kubernetes and Docker. Due to these extensive changes, there is no direct upgrade path from 1.6.x to 2.x, but rather a migration of your 1.6 application workloads into the 2.x Kubernetes equivalent. In 1.6, the most common orchestration used was Rancher's own engine called Cattle. The following blogs (that will be converted in an official guide) explain and educate our Cattle users on running workloads in a Kubernetes environment.
If you are an existing Kubernetes user on Rancher 1.6, you only need to review the [Get Started](#1-get-started) section to prepare you on what to expect on a new 2.x Rancher cluster.
## Kubernetes Basics
Rancher 2.x is built on the [Kubernetes](https://kubernetes.io/docs/home/?path=users&persona=app-developer&level=foundational) container orchestrator. This shift in underlying technology for 2.x is a large departure from 1.6, which supported several popular container orchestrators. Since Rancher is now based entirely on Kubernetes, it's helpful to learn the Kubernetes basics.
The following table introduces and defines some key Kubernetes concepts.
@@ -25,16 +25,16 @@ The following table introduces and defines some key Kubernetes concepts.
## Migration Cheatsheet
Because Rancher 1.6 defaulted to our Cattle container orchestrator, it primarily used terminology related to Cattle. However, because Rancher 2.x uses Kubernetes, it aligns with the Kubernetes naming standard. This shift could be confusing for people unfamiliar with Kubernetes, so we've created a table that maps terms commonly used in Rancher 1.6 to their equivalents in Rancher 2.x.
| **Rancher 1.6** | **Rancher 2.x** |
| --- | --- |
| Container | Pod |
| Services | Workload |
| Load Balancer | Ingress |
| Stack | Namespace |
| Environment | Project (Administration)/Cluster (Compute) |
| Host | Node |
| Catalog | Helm |
<br/>
More detailed information on Kubernetes concepts can be found in the
@@ -46,7 +46,7 @@ More detailed information on Kubernetes concepts can be found in the
<!-- TOC -->
- [1. Get Started](#1-get-started)
- [2. Run Migration-Tools CLI](#2-run-migration-tools-cli)
- [3. Migrate Applications](#3-migrate-applications)
- [4. Expose Your Services](#4-expose-your-services)
- [5. Monitor Your Applications](#5-monitor-your-applications)
@@ -58,123 +58,108 @@ More detailed information on Kubernetes concepts can be found in the
## 1. Get Started
As a Rancher 1.6 user who's interested in moving to 2.x, how should you get started with migration? The following blog provides a short checklist to help with this transition.
Blog Post: [Migrating from Rancher 1.6 to Rancher 2.x—A Short Checklist](https://rancher.com/blog/2018/2018-08-09-migrate-1dot6-setup-to-2dot0/)
## 2. Run Migration-Tools CLI
The migration-tools CLI is a tool that helps you recreate your applications in Rancher v2.x. This tool exports your Rancher v1.6 applications as Compose files and converts them to a Kubernetes manifest that Rancher 2.x can consume.
This command line interface tool:
- Exports Compose files (i.e., `docker-compose.yml` and `rancher-compose.yml`) for all stacks in the Cattle environments of your Rancher 1.6 server. For every stack, files are exported to a `<EXPORT_DIR>/<ENV_NAME>/<STACK_NAME>` folder.
- Parses Compose files that you've exported from your Rancher 1.6 stack and converts them to a Kubernetes manifest that Rancher v2.x can consume. The tool also outputs a list of constructs present in the Compose files that cannot be converted automatically to Rancher 2.x. These are directives that you'll have to manually configure in the Kubernetes YAML.
### A. Download Migration-Tools CLI
The migration-tools CLI for your platform can be downloaded from our [GitHub releases page](https://github.com/rancher/migration-tools/releases). The tools are available for Linux, Mac, and Windows platforms.
### B. Configure Migration-Tools CLI
After you download the migration-tools CLI, rename it and make it executable.
1. Open Terminal and change to the directory that contains the migration-tools file.
1. Rename the file to `migration-tools` so that it no longer includes the platform name.
1. Enter the following command to make `migration-tools` an executable:
```
chmod +x migration-tools
```
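Taken together, the steps above look like the following shell session. This is a sketch: the downloaded file name is illustrative, and a placeholder file stands in for the real binary, so substitute the platform-specific file you actually downloaded from the releases page.
```
# Stand-in for the downloaded binary; replace with the actual
# platform-specific file from the releases page.
touch migration-tools_linux-amd64

# Rename it so it no longer includes the platform name.
mv migration-tools_linux-amd64 migration-tools

# Make it executable.
chmod +x migration-tools

# Confirm the executable bit is set.
test -x migration-tools && echo "migration-tools is executable"
```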
### C. Run Migration-Tools CLI
Next, use the migration-tools CLI to export all stacks in all of the Cattle environments into Compose files. Then, for stacks that you want to migrate to Rancher 2.x, convert the Compose files into Kubernetes YAML.
>**Want full usage and options for the migration-tools CLI?** See the [Migration-Tools CLI Reference]({{< baseurl >}}/rancher/v2.x/en/v1.6-migration/migration-tools-ref).
1. Export the Compose files for all stacks in all of the Cattle environments in your Rancher 1.6 server.
Execute the following command, replacing each placeholder with your values. The access key and secret key are Account API keys, which will allow you to export from all Cattle environments.
```
migration-tools export --url <RANCHER_URL> --access-key <RANCHER_ACCESS_KEY> --secret-key <RANCHER_SECRET_KEY> --export-dir <EXPORT_DIR>
```
    **Step Result:** The migration-tools CLI exports Compose files for each stack in every Cattle environment to the `--export-dir` directory. If you omitted this option, the files are saved to your current directory.
1. Convert the exported Compose files for a stack to Kubernetes YAML.
    Execute the following command, replacing each placeholder with the absolute path to your stack's Compose files. You'll need to re-run the command for each pair of Compose files that was exported, i.e., once per stack.
```
migration-tools parse --docker-file <DOCKER_COMPOSE_ABSOLUTE_PATH> --rancher-file <RANCHER_COMPOSE_ABSOLUTE_PATH>
```
>**Note:** If you omit the `--docker-file` and `--rancher-file` options from your command, the migration-tools CLI checks its home directory for these Compose files.
    **Step Result:** The migration-tools CLI parses your Compose files and outputs Kubernetes YAML specs as well as an `output.txt` file. For each service in the stack, a YAML spec file is created and named after your service. The `output.txt` file lists all constructs for each service in `docker-compose.yml` that require special handling to be successfully migrated to Rancher 2.x. Each construct links to the relevant blog articles on how to implement it in Rancher 2.x (these articles are also listed below).
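For reference, the Kubernetes YAML that the Kompose-based conversion emits for a simple service generally resembles the following Deployment sketch (the service name `web` and the image are illustrative, not output from a real run):
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        io.kompose.service: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
```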
## 3. Migrate Applications
In Rancher 1.6, you launch applications as _services_ and organize them under _stacks_ in an _environment_, which represents a compute and administrative boundary. Rancher 1.6 supports the Compose standard and provides import/export for application configurations using the following files: `docker-compose.yml` and `rancher-compose.yml`. In 2.x the environment concept doesn't exist. Instead it's replaced by:
- **Cluster:** The compute boundary.
- **Project:** An administrative boundary.
The following article explores how to map Cattle's stack and service design to Kubernetes. It also demonstrates how to migrate a simple application from Rancher 1.6 to 2.x using either the Rancher UI or Docker Compose.
Blog Post: [A Journey from Cattle to Kubernetes!](https://rancher.com/blog/2018/2018-08-02-journey-from-cattle-to-k8s/)
## 4. Expose Your Services
In Rancher 1.6, you could provide external access to your applications using port mapping. This article explores how to publicly expose your services in Rancher 2.x. It explores both UI and CLI methods to transition the port mapping functionality.
Blog Post: [From Cattle to Kubernetes—How to Publicly Expose Your Services in Rancher 2.x](https://rancher.com/blog/2018/expose-and-monitor-workloads/)
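As a rough Kubernetes equivalent of 1.6 port mapping, a workload can be exposed on a port of every node with a NodePort Service. A minimal sketch, assuming a workload labeled `app: web` (the label, name, and port values are illustrative):
```
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  # nodePort must fall in the cluster's NodePort range (30000-32767 by default)
  - port: 80
    nodePort: 30080
```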
## 5. Monitor Your Applications
Rancher 1.6 provided TCP and HTTP healthchecks using its own healthcheck microservice. Rancher 2.x uses native Kubernetes healthcheck support instead. This article overviews how to configure it in Rancher 2.x.
Blog Post: [From Cattle to Kubernetes—Application Healthchecks in Rancher 2.x](https://rancher.com/blog/2018/2018-08-22-k8s-monitoring-and-healthchecks/)
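The native Kubernetes healthchecks that replace the 1.6 healthcheck microservice are configured as probes on the container spec. A minimal HTTP liveness probe sketch (the image, path, and timings are illustrative):
```
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      # Kubernetes restarts the container if this HTTP check fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 5
```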
## 6. Schedule Deployments
Scheduling application containers on available resources is a key container orchestration technique. The following blog reviews how to schedule containers in Rancher 2.x for those familiar with 1.6 scheduling labels (such as affinity and anti-affinity). It also explores how to launch a global service in 2.x.
Blog Post: [From Cattle to Kubernetes—Scheduling Workloads in Rancher 2.x](https://rancher.com/blog/2018/2018-08-29-scheduling-options-in-2-dot-0/)
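Where a 1.6 scheduling label such as `io.rancher.scheduler.affinity:host_label` pinned containers to hosts, the Kubernetes equivalent is node affinity. A sketch, assuming nodes carry an illustrative `region=east` label:
```
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: only schedule onto nodes with region=east
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: region
            operator: In
            values:
            - east
  containers:
  - name: web
    image: nginx
```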
## 7. Service Discovery
Rancher 1.6 provides service discovery within and across stacks using its own internal DNS microservice. It also supports pointing to external services and creating aliases. Moving to Rancher 2.x, you can replicate this same service discovery behavior. The following blog reviews this topic and the solutions needed to achieve service discovery parity in Rancher 2.x.
Blog Post: [From Cattle to Kubernetes—Service Discovery in Rancher 2.x](https://rancher.com/blog/2018/2018-09-04-service_discovery_2dot0/)
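For example, the 1.6 pattern of pointing a service alias at an external hostname maps to a Kubernetes ExternalName Service, which returns a DNS CNAME instead of routing to pods (the names below are illustrative):
```
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  # Lookups of external-db resolve to this external hostname
  type: ExternalName
  externalName: db.example.com
```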
## 8. Load Balancing
The following blog post explains how to achieve TCP/HTTP load balancing and configure hostname/path-based routing in Rancher 2.x.
Blog Post: [From Cattle to Kubernetes-How to Load Balance Your Services in Rancher 2.x](https://rancher.com/blog/2018/2018-09-13-load-balancing-options-2dot0/)
In Rancher 1.6, a load balancer was used to expose your applications from within the Rancher environment for external access. In Rancher 2.x, the concept is the same. There is a Load Balancer option to expose your services. In the language of Kubernetes, this function is more often referred to as an _Ingress_. In short, load balancer and Ingress play the same role.
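A minimal Ingress that provides the hostname/path-based routing a 1.6 load balancer handled might look like the following sketch (the hostname, path, and backend service name are illustrative):
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  # Route requests for app.example.com/web to the "web" Service on port 80
  - host: app.example.com
    http:
      paths:
      - path: /web
        backend:
          serviceName: web
          servicePort: 80
```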
---
title: Migration Tools CLI Reference
weight: 100
---
The migration-tools CLI includes multiple commands and options to assist your migration from Rancher v1.6 to Rancher v2.x.
## Download
The migration-tools CLI for your platform can be downloaded from our [GitHub releases page](https://github.com/rancher/migration-tools/releases). The tool is available for Linux, Mac, and Windows platforms.
## Usage
```
migration-tools [global options] command [command options] [arguments...]
```
## Migration-Tools Global Options
The migration-tools CLI includes a handful of global options.
| Global Option | Description |
| ----------------- | -------------------------------------------- |
| `--debug` | Enables debug logging. |
| `--log <VALUE>` | Outputs logs to the path you enter. |
| `--help`, `-h` | Displays a list of all commands available. |
| `--version`, `-v` | Prints the version of migration-tools CLI in use.|
## Commands and Command Options
### Migration-Tools Export Reference
The `migration-tools export` command exports all stacks from your Rancher v1.6 server into Compose files.
#### Options
| Option | Required? | Description|
| --- | --- |--- |
|`--url <VALUE>` | ✓ | Rancher API endpoint URL (`<RANCHER_URL>`). |
|`--access-key <VALUE>` | ✓ | Rancher API access key. Using an Account API key exports all stacks from all Cattle environments (`<RANCHER_ACCESS_KEY>`). |
|`--secret-key <VALUE>` | ✓ | Rancher API secret key associated with the access key (`<RANCHER_SECRET_KEY>`). |
|`--export-dir <VALUE>` | | Base directory to which Compose files are exported, under sub-directories created for each environment/stack (default: `Export`). |
|`--all`, `--a` | | Export all stacks, including stacks that are inactive, stopped, or being removed. |
|`--system`, `--s` | | Export system and infrastructure stacks. |
#### Usage
Execute the following command, replacing each placeholder with your values. The access key and secret key are Account API keys, which will allow you to export from all Cattle environments.
```
migration-tools export --url <RANCHER_URL> --access-key <RANCHER_ACCESS_KEY> --secret-key <RANCHER_SECRET_KEY> --export-dir <EXPORT_DIR>
```
**Result:** The migration-tools CLI exports Compose files for each stack in every Cattle environment to the `--export-dir` directory. If you omitted this option, the files are saved to your current directory.
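For example, exporting two stacks from a single environment produces a layout like the following (the environment and stack names are illustrative):
```
Export/
└── Default/
    ├── web-stack/
    │   ├── docker-compose.yml
    │   └── rancher-compose.yml
    └── db-stack/
        ├── docker-compose.yml
        └── rancher-compose.yml
```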
### Migration-Tools Parse Reference
The `migration-tools parse` command parses the Compose files for a stack and uses [Kompose](https://github.com/kubernetes/kompose) to generate equivalent Kubernetes YAML. It also outputs an `output.txt` file, which lists all the constructs that need manual intervention to be converted to Kubernetes.
#### Options
| Option | Required? | Description
| ---|---|---
|`--docker-file <VALUE>` | | Docker Compose file to parse into a Kubernetes manifest (default: `docker-compose.yml`). |
|`--output-file <VALUE>` | | Name of the output file that lists checks and advice for conversion (default: `output.txt`). |
|`--rancher-file <VALUE>` | | Rancher Compose file to parse into a Kubernetes manifest (default: `rancher-compose.yml`). |
#### Subcommands
| Subcommand | Description |
| ---|---|
| `help`, `h` | Shows a list of options available for use with the preceding command. |
#### Usage
Execute the following command, replacing each placeholder with the absolute path to your stack's Compose files. You'll need to re-run the command for each pair of Compose files that was exported, i.e., once per stack.
```
migration-tools parse --docker-file <DOCKER_COMPOSE_ABSOLUTE_PATH> --rancher-file <RANCHER_COMPOSE_ABSOLUTE_PATH>
```
>**Note:** If you omit the `--docker-file` and `--rancher-file` options from your command, the migration-tools CLI checks its home directory for these Compose files.
**Result:** The migration-tools CLI parses your Compose files and outputs Kubernetes YAML specs as well as an `output.txt` file. For each service in the stack, a YAML spec file is created and named after your service. The `output.txt` file lists all constructs for each service in `docker-compose.yml` that require special handling to be successfully migrated to Rancher 2.x. Each construct links to the relevant blog articles on how to implement it in Rancher 2.x.
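As an illustration, a service in `docker-compose.yml` that uses a Rancher-specific construct, such as the scheduling label below, converts to a Kubernetes manifest for the container itself, while a construct like the label is the kind of directive flagged in `output.txt` for manual handling (the service name and label value are illustrative):
```
version: '2'
services:
  web:
    image: nginx
    labels:
      io.rancher.scheduler.affinity:host_label: region=east
```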