---
title: '4. Install Rancher'
weight: 400
aliases:
- /rancher/v2.0-v2.4/en/installation/air-gap-installation/install-rancher/
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/config-rancher-system-charts/
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/config-rancher-for-private-reg/
- /rancher/v2.0-v2.4/en/installation/air-gap-single-node/install-rancher
- /rancher/v2.0-v2.4/en/installation/air-gap/install-rancher
- /rancher/v2.0-v2.4/en/installation/options/air-gap-helm2/install-rancher
- /rancher/v2.x/en/installation/resources/advanced/air-gap-helm2/install-rancher/
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

This section describes how to deploy Rancher in an air gapped environment, such as one where the Rancher server is installed offline, behind a firewall, or behind a proxy. There are _tabs_ below for either a high availability installation (recommended) or a Docker installation.

<Tabs>
<TabItem value="Kubernetes Install (Recommended)">

Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes installation consists of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.

This section describes installing Rancher in five parts:

- [A. Add the Helm Chart Repository](#a-add-the-helm-chart-repository)
- [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration)
- [C. Render the Rancher Helm Template](#c-render-the-rancher-helm-template)
- [D. Install Rancher](#d-install-rancher)
- [E. For Rancher versions before v2.3.0, Configure System Charts](#e-for-rancher-versions-before-v2-3-0-configure-system-charts)

### A. Add the Helm Chart Repository

From a system that has access to the internet, fetch the latest Helm chart and copy the resulting manifests to a system that has access to the Rancher server cluster.

1. If you haven't already, initialize `helm` locally on a workstation that has internet access. Note: Refer to the [Helm version requirements](installation/options/helm-version) to choose a version of Helm to install Rancher.

   ```plain
   helm init -c
   ```

2. Use the `helm repo add` command to add the Helm chart repository that contains the charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher](../../../resources/choose-a-rancher-version.md). Replace `<CHART_REPO>` with the release channel you chose, such as `latest` or `stable`.

   ```plain
   helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
   ```

3. Fetch the latest Rancher chart. This will pull down the chart and save it in the current directory as a `.tgz` file.

   ```plain
   helm fetch rancher-<CHART_REPO>/rancher
   ```

> Want additional options? See the Rancher [Helm chart options](../../../../../reference-guides/installation-references/helm-chart-options.md).

### B. Choose your SSL Configuration

Rancher Server is designed to be secure by default and requires SSL/TLS configuration.

When Rancher is installed on an air gapped Kubernetes cluster, there are two recommended options for the source of the certificate.

> **Note:** If you want to terminate SSL/TLS externally, see [TLS termination on an External Load Balancer](installation/options/chart-options/#external-tls-termination).

| Configuration | Chart option | Description | Requires cert-manager |
| ------------------------------------------ | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- |
| Rancher Generated Self-Signed Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self-signed).<br/> This is the **default** and does not need to be added when rendering the Helm template. | yes |
| Certificates from Files | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s). <br/> This option must be passed when rendering the Rancher Helm template. | no |

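If you choose the `ingress.tls.source=secret` option, the certificate and key are supplied later as a Kubernetes TLS Secret in the `cattle-system` namespace (see [Adding TLS Secrets](../../../resources/add-tls-secrets.md)). As a rough sketch of the shape of that Secret (the name `tls-rancher-ingress` is the one the Rancher chart conventionally looks for; the base64 payloads are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-rancher-ingress
  namespace: cattle-system
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi...   # base64-encoded certificate (placeholder)
  tls.key: LS0tLS1CRUdJTi...   # base64-encoded private key (placeholder)
```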
### C. Render the Rancher Helm Template

When setting up the Rancher Helm template, there are several options in the Helm chart that are designed specifically for air gap installations.

| Chart Option | Chart Value | Description |
| ----------------------- | -------------------------------- | ---- |
| `certmanager.version` | `<version>` | Configure the Rancher TLS issuer depending on the cert-manager version that is running. |
| `systemDefaultRegistry` | `<REGISTRY.YOURDOMAIN.COM:PORT>` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
| `useBundledSystemChart` | `true` | Configure Rancher server to use the packaged copy of the Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These Helm charts are hosted on GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. _Available as of v2.3.0_ |

Based on the choice you made in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), complete one of the procedures below.

<details id="self-signed">
<summary>Option A-Default Self-Signed Certificate</summary>

By default, Rancher generates a CA and uses cert-manager to issue the certificate for access to the Rancher server interface.

> **Note:**
> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.11.0, please see our [upgrade cert-manager documentation](installation/options/upgrading-cert-manager/).

1. From a system connected to the internet, add the cert-manager repo to Helm.

   ```plain
   helm repo add jetstack https://charts.jetstack.io
   helm repo update
   ```

1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager).

   ```plain
   helm fetch jetstack/cert-manager --version v0.14.2
   ```

1. Render the cert-manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.

   ```plain
   helm template ./cert-manager-v0.14.2.tgz --output-dir . \
     --name cert-manager --namespace cert-manager \
     --set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller \
     --set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook \
     --set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector
   ```

1. Download the required CRD file for cert-manager.

   ```plain
   curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-0.14/deploy/manifests/00-crds.yaml
   ```

1. Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.

   Placeholder | Description
   ------------|-------------
   `<VERSION>` | The version number of the output tarball.
   `<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer.
   `<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry.
   `<CERTMANAGER_VERSION>` | The cert-manager version running on the Kubernetes cluster.

   ```plain
   # systemDefaultRegistry is available as of v2.2.0 and sets a default private registry to be used in Rancher.
   # useBundledSystemChart is available as of v2.3.0 and uses the packaged Rancher system charts.
   helm template ./rancher-<VERSION>.tgz --output-dir . \
     --name rancher \
     --namespace cattle-system \
     --set hostname=<RANCHER.YOURDOMAIN.COM> \
     --set certmanager.version=<CERTMANAGER_VERSION> \
     --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
     --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
     --set useBundledSystemChart=true
   ```

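Before copying the rendered manifests across the air gap, it can help to confirm that every image reference points at your private registry. A minimal sketch of one way to spot-check (the `rancher-demo` directory and `registry.example.com:5000` registry are stand-ins for illustration; in practice, run the `grep` against the `./rancher` directory produced by `helm template`):

```plain
# Stand-in for a rendered manifest directory; in practice this is ./rancher.
mkdir -p rancher-demo
cat > rancher-demo/deployment.yaml <<'EOF'
        image: registry.example.com:5000/rancher/rancher:v2.4.8
EOF

# List every unique image reference; each should start with your registry hostname.
grep -rhoE 'image: *[^ ]+' rancher-demo | sort -u
```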
</details>

<details id="secret">
<summary>Option B-Certificates From Files using Kubernetes Secrets</summary>

Create Kubernetes secrets from your own certificates for Rancher to use. The common name for the cert must match the `hostname` option in the command below, or the ingress controller will fail to provision the site for Rancher.

Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.

| Placeholder | Description |
| -------------------------------- | ----------------------------------------------- |
| `<VERSION>` | The version number of the output tarball. |
| `<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry. |

```plain
# systemDefaultRegistry is available as of v2.2.0 and sets a default private registry to be used in Rancher.
# useBundledSystemChart is available as of v2.3.0 and uses the packaged Rancher system charts.
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true
```

If you are using a Private CA signed certificate, add `--set privateCA=true` after `--set ingress.tls.source=secret`:

```plain
# systemDefaultRegistry is available as of v2.2.0 and sets a default private registry to be used in Rancher.
# useBundledSystemChart is available as of v2.3.0 and uses the packaged Rancher system charts.
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set privateCA=true \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true
```

Then refer to [Adding TLS Secrets](../../../resources/add-tls-secrets.md) to publish the certificate files so Rancher and the ingress controller can use them.

</details>

### D. Install Rancher

Copy the rendered manifest directories to a system that has access to the Rancher server cluster to complete installation.

Use `kubectl` to create namespaces and apply the rendered manifests.

If you chose to use self-signed certificates in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), install cert-manager.

<details id="install-cert-manager">
<summary>Self-Signed Certificate Installs - Install Cert-manager</summary>

If you are using self-signed certificates, install cert-manager:

1. Create the namespace for cert-manager.

   ```plain
   kubectl create namespace cert-manager
   ```

1. Create the cert-manager CustomResourceDefinitions (CRDs).

   ```plain
   kubectl apply -f cert-manager/cert-manager-crd.yaml
   ```

   > **Important:**
   > If you are running Kubernetes v1.15 or below, you will need to add the `--validate=false` flag to the `kubectl apply` command above. Otherwise you will receive a validation error relating to the `x-kubernetes-preserve-unknown-fields` field in cert-manager's CustomResourceDefinition resources. This is a benign error and occurs due to the way `kubectl` performs resource validation.

1. Launch cert-manager.

   ```plain
   kubectl apply -R -f ./cert-manager
   ```

</details>

Install Rancher:

```plain
kubectl create namespace cattle-system
kubectl -n cattle-system apply -R -f ./rancher
```

**Step Result:** If you are installing Rancher v2.3.0+, the installation is complete.

### E. For Rancher versions before v2.3.0, Configure System Charts

If you are installing a Rancher version before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted on GitHub, an air gapped installation will not be able to access them. Therefore, you must [configure the Rancher system charts](installation/options/local-system-charts/).

### Additional Resources

These resources could be helpful when installing Rancher:

- [Rancher Helm chart options](installation/options/chart-options/)
- [Adding TLS secrets](../../../resources/add-tls-secrets.md)
- [Troubleshooting Rancher Kubernetes Installations](installation/options/troubleshooting/)

</TabItem>
<TabItem value="Docker Install">

The Docker installation is for Rancher users who want to **test** Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.

**Important: If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker installation to a Kubernetes installation.** Instead of the single-node installation, you have the option to follow the Kubernetes install guide but use only one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a highly available installation.

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.

| Environment Variable Key | Environment Variable Value | Description |
| -------------------------------- | -------------------------------- | ---- |
| `CATTLE_SYSTEM_DEFAULT_REGISTRY` | `<REGISTRY.YOURDOMAIN.COM:PORT>` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
| `CATTLE_SYSTEM_CATALOG` | `bundled` | Configure Rancher server to use the packaged copy of the Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These Helm charts are hosted on GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. _Available as of v2.3.0_ |

> **Do you want to...**
>
> - Configure a custom CA root certificate to access your services? See [Custom CA root certificate](installation/options/chart-options/#additional-trusted-cas).
> - Record all transactions with the Rancher API? See [API Auditing](../../../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#api-audit-log).

- For Rancher before v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher before v2.3.0.](installation/options/local-system-charts/)

Choose from the following options:

<details id="option-a">
<summary>Option A-Default Self-Signed Certificate</summary>

If you are installing Rancher in a development or testing environment where identity verification isn't a concern, install Rancher using the self-signed certificate that it generates. This installation option omits the hassle of generating a certificate yourself.

Log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder.

| Placeholder | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/options/server-tags/) that you want to install. |

```
# CATTLE_SYSTEM_DEFAULT_REGISTRY sets a default private registry to be used in Rancher.
# CATTLE_SYSTEM_CATALOG=bundled is available as of v2.3.0 and uses the packaged Rancher system charts.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
  -e CATTLE_SYSTEM_CATALOG=bundled \
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

</details>
<details id="option-b">
<summary>Option B-Bring Your Own Certificate: Self-Signed</summary>

In development or testing environments where your team will access your Rancher server, create a self-signed certificate for use with your install so that your team can verify they're connecting to your instance of Rancher.

> **Prerequisites:**
> From a computer with an internet connection, create a self-signed certificate using [OpenSSL](https://www.openssl.org/) or another method of your choice.
>
> - The certificate files must be in PEM format.
> - In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](../../../other-installation-methods/rancher-on-a-single-node-with-docker/certificate-troubleshooting.md)

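If you still need a certificate, the following is a minimal sketch of generating a self-signed certificate and key in PEM format with OpenSSL; the `rancher.example.com` common name is a placeholder for your Rancher DNS name:

```plain
# Create an unencrypted 2048-bit RSA key and a self-signed certificate valid for one year.
# rancher.example.com is a placeholder; use the DNS name you will give Rancher.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=rancher.example.com" \
  -keyout key.pem -out cert.pem

# Optional: confirm the subject of the generated certificate.
openssl x509 -in cert.pem -noout -subject
```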
After creating your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Use the `-v` flag and provide the path to your certificates to mount them in your container.

| Placeholder | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `<CERT_DIRECTORY>` | The path to the directory containing your certificate files. |
| `<FULL_CHAIN.pem>` | The path to your full certificate chain. |
| `<PRIVATE_KEY.pem>` | The path to the private key for your certificate. |
| `<CA_CERTS.pem>` | The path to the certificate authority's certificate. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/options/server-tags/) that you want to install. |

```
# CATTLE_SYSTEM_DEFAULT_REGISTRY sets a default private registry to be used in Rancher.
# CATTLE_SYSTEM_CATALOG=bundled is available as of v2.3.0 and uses the packaged Rancher system charts.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  -v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
  -e CATTLE_SYSTEM_CATALOG=bundled \
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

</details>
<details id="option-c">
<summary>Option C-Bring Your Own Certificate: Signed by Recognized CA</summary>

In development or testing environments where you're exposing an app publicly, use a certificate signed by a recognized CA so that your user base doesn't encounter security warnings.

> **Prerequisite:** The certificate files must be in PEM format.

After obtaining your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Because your certificate is signed by a recognized CA, mounting an additional CA certificate file is unnecessary.

| Placeholder | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `<CERT_DIRECTORY>` | The path to the directory containing your certificate files. |
| `<FULL_CHAIN.pem>` | The path to your full certificate chain. |
| `<PRIVATE_KEY.pem>` | The path to the private key for your certificate. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/options/server-tags/) that you want to install. |

> **Note:** Pass the `--no-cacerts` argument to the container to disable the default CA certificate generated by Rancher.

```
# CATTLE_SYSTEM_DEFAULT_REGISTRY sets a default private registry to be used in Rancher.
# CATTLE_SYSTEM_CATALOG=bundled is available as of v2.3.0 and uses the packaged Rancher system charts.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --no-cacerts \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
  -e CATTLE_SYSTEM_CATALOG=bundled \
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

</details>

If you are installing Rancher v2.3.0+, the installation is complete.

If you are installing a Rancher version before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted on GitHub, an air gapped installation will not be able to access them. Therefore, you must [configure the Rancher system charts](installation/options/local-system-charts/).

</TabItem>
</Tabs>

---
title: '3. Install Kubernetes with RKE (Kubernetes Installs Only)'
weight: 300
aliases:
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/install-kube
- /rancher/v2.0-v2.4/en/installation/options/air-gap-helm2/launch-kubernetes
- /rancher/v2.x/en/installation/resources/advanced/air-gap-helm2/launch-kubernetes/
---

This section describes how to prepare to launch a Kubernetes cluster that is used to deploy the Rancher server in your air gapped environment.

Since a Kubernetes installation requires a Kubernetes cluster, we will create one using [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) (RKE). Before you can start your Kubernetes cluster, you'll need to [install RKE](https://rancher.com/docs/rke/latest/en/installation/) and create an RKE config file.

- [A. Create an RKE Config File](#a-create-an-rke-config-file)
- [B. Run RKE](#b-run-rke)
- [C. Save Your Files](#c-save-your-files)

### A. Create an RKE Config File

From a system that can access ports 22/tcp and 6443/tcp on your host nodes, use the sample below to create a new file named `rancher-cluster.yml`. This file is a Rancher Kubernetes Engine configuration file (RKE config file), which describes the cluster you're deploying Rancher to.

Replace the values in the code sample below with the help of the _RKE Options_ table. Use the IP addresses or DNS names of the [3 nodes](installation/air-gap-high-availability/provision-hosts) you created.

> **Tip:** For more details on the options available, see the RKE [Config Options](https://rancher.com/docs/rke/latest/en/config-options/).

<figcaption>RKE Options</figcaption>

| Option | Required | Description |
| ------------------ | -------------------- | ------------------------------------------------------------------------------------------- |
| `address` | ✓ | The DNS or IP address for the node within the air gap network. |
| `user` | ✓ | A user that can run Docker commands. |
| `role` | ✓ | List of Kubernetes roles assigned to the node. |
| `internal_address` | optional<sup>1</sup> | The DNS or IP address used for internal cluster traffic. |
| `ssh_key_path` | | Path to the SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`). |

> <sup>1</sup> Some services like AWS EC2 require setting the `internal_address` if you want to use self-referencing security groups or firewalls.

```yaml
nodes:
  - address: 10.10.3.187 # node air gap network IP
    internal_address: 172.31.7.22 # node intra-cluster IP
    user: rancher
    role: ['controlplane', 'etcd', 'worker']
    ssh_key_path: /home/user/.ssh/id_rsa
  - address: 10.10.3.254 # node air gap network IP
    internal_address: 172.31.13.132 # node intra-cluster IP
    user: rancher
    role: ['controlplane', 'etcd', 'worker']
    ssh_key_path: /home/user/.ssh/id_rsa
  - address: 10.10.3.89 # node air gap network IP
    internal_address: 172.31.3.216 # node intra-cluster IP
    user: rancher
    role: ['controlplane', 'etcd', 'worker']
    ssh_key_path: /home/user/.ssh/id_rsa

private_registries:
  - url: <REGISTRY.YOURDOMAIN.COM:PORT> # private registry url
    user: rancher
    password: '*********'
    is_default: true
```

### B. Run RKE

After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:

```
rke up --config ./rancher-cluster.yml
```

### C. Save Your Files

> **Important**
> The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.

Save a copy of the following files in a secure location:

- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_rancher-cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster; this file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state); this file contains credentials for full access to the cluster.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._

> **Note:** The "rancher-cluster" parts of the two latter file names depend on how you name the RKE cluster configuration file.

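One way to keep these files together is a dated archive that you copy to secure storage. A sketch (stand-in files are created first so the example is self-contained; in practice the three files already exist in your RKE working directory):

```plain
# Stand-ins for the real files produced by RKE; omit this line in practice.
touch rancher-cluster.yml kube_config_rancher-cluster.yml rancher-cluster.rkestate

# Bundle the cluster configuration, kubeconfig and state file into a dated archive.
tar -czf "rke-cluster-backup-$(date +%Y%m%d).tar.gz" \
  rancher-cluster.yml \
  kube_config_rancher-cluster.yml \
  rancher-cluster.rkestate
```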
### [Next: Install Rancher](../../../other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md)
---
title: '2. Collect and Publish Images to your Private Registry'
weight: 200
aliases:
- /rancher/v2.0-v2.4/en/installation/air-gap-installation/prepare-private-reg/
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/prepare-private-registry/
- /rancher/v2.0-v2.4/en/installation/air-gap-single-node/prepare-private-registry/
- /rancher/v2.0-v2.4/en/installation/air-gap-single-node/config-rancher-for-private-reg/
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/config-rancher-for-private-reg/
- /rancher/v2.0-v2.4/en/installation/options/air-gap-helm2/populate-private-registry
- /rancher/v2.x/en/installation/resources/advanced/air-gap-helm2/populate-private-registry/
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

> **Prerequisite:** You must have a [private registry](https://docs.docker.com/registry/deploying/) available to use.
>
> **Note:** Populating the private registry with images is the same process for HA and Docker installations; the instructions in this section differ only based on whether you plan to provision a Windows cluster.

By default, all images used to [provision Kubernetes clusters](../../../../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md) or launch any [tools](../../../../../reference-guides/rancher-cluster-tools.md) in Rancher, e.g. monitoring, pipelines, and alerts, are pulled from Docker Hub. In an air gap installation of Rancher, you will need a private registry that is accessible from your Rancher server. Then, you will load the registry with all the images.

This section describes how to set up your private registry so that when you install Rancher, Rancher will pull all the required images from this registry.

The steps below assume you are provisioning Linux-only clusters. If you plan on provisioning any [Windows clusters](../../../../../pages-for-subheaders/use-windows-clusters.md), there are separate instructions to support the images needed for a Windows cluster.

<Tabs>
<TabItem value="Linux Only Clusters">

For Rancher servers that will only provision Linux clusters, these are the steps to populate your private registry.

A. Find the required assets for your Rancher version <br/>
B. Collect all the required images <br/>
C. Save the images to your workstation <br/>
D. Populate the private registry

### Prerequisites

These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space.

If you will use ARM64 hosts, the registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests.

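A quick sketch of checking the free-space requirement before you start pulling images (the 20 GB threshold comes from the prerequisite above):

```plain
# Report available space in the current directory, in 1K blocks (POSIX output).
avail_kb=$(df -Pk . | awk 'NR==2 {print $4}')

if [ "$avail_kb" -ge $((20 * 1024 * 1024)) ]; then
  echo "disk space OK: ${avail_kb} KB available"
else
  echo "warning: less than 20 GB available in $(pwd)" >&2
fi
```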
### A. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. Click **Assets**.

2. From the release's **Assets** section, download the following files:
|
||||
|
||||
| Release File | Description |
|
||||
| ---------------- | -------------- |
|
||||
| `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters and user Rancher tools. |
|
||||
| `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. |
|
||||
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |
|
||||
|
||||
### B. Collect all the required images (For Kubernetes Installs using Rancher Generated Self-Signed Certificate)

In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. Skip this step if you are using your own certificates.

1. Fetch the latest `cert-manager` Helm chart and parse the template for image details:

   > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation](installation/options/upgrading-cert-manager/).

   ```plain
   helm repo add jetstack https://charts.jetstack.io
   helm repo update
   helm fetch jetstack/cert-manager --version v0.14.2
   helm template ./cert-manager-<version>.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt
   ```

2. Sort and deduplicate the image list to remove any overlap between the sources:

   ```plain
   sort -u rancher-images.txt -o rancher-images.txt
   ```

### C. Save the images to your workstation

1. Make `rancher-save-images.sh` executable:

   ```plain
   chmod +x rancher-save-images.sh
   ```

1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images:

   ```plain
   ./rancher-save-images.sh --image-list ./rancher-images.txt
   ```

   **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, a tarball named `rancher-images.tar.gz` is written to your current directory. Check that the tarball is present before continuing.

### D. Populate the private registry

Move the images in `rancher-images.tar.gz` to your private registry using the scripts to load the images. The `rancher-images.txt` file is expected to be on the workstation in the same directory where you are running the `rancher-load-images.sh` script.

1. Log into your private registry if required:

   ```plain
   docker login <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

1. Make `rancher-load-images.sh` executable:

   ```plain
   chmod +x rancher-load-images.sh
   ```

1. Use `rancher-load-images.sh` to extract, tag and push the images from `rancher-images.tar.gz` to your private registry, using `rancher-images.txt` as the image list:

   ```plain
   ./rancher-load-images.sh --image-list ./rancher-images.txt --registry <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

</TabItem>
<TabItem value="Linux and Windows Clusters">

_Available as of v2.3.0_

For Rancher servers that will provision both Linux and Windows clusters, there are distinct steps to populate your private registry with the Windows images and the Linux images. Since a Windows cluster is a mix of Linux and Windows nodes, the Linux images pushed into the private registry are manifests.

### Windows Steps

The Windows images need to be collected and pushed from a Windows Server workstation.

A. Find the required assets for your Rancher version <br/>
B. Save the images to your Windows Server workstation <br/>
C. Prepare the Docker daemon <br/>
D. Populate the private registry

<details>
<summary>Collecting and Populating Windows Images into the Private Registry</summary>

### Prerequisites

These steps expect you to use a Windows Server 1809 workstation that has internet access, access to your private registry, and at least 50 GB of disk space.

The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters.

Your registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests.

### A. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.

2. From the release's **Assets** section, download the following files:

| Release File | Description |
|------------------------|-------------------|
| `rancher-windows-images.txt` | This file contains a list of Windows images needed to provision Windows clusters. |
| `rancher-save-images.ps1` | This script pulls all the images in the `rancher-windows-images.txt` from Docker Hub and saves all of the images as `rancher-windows-images.tar.gz`. |
| `rancher-load-images.ps1` | This script loads the images from the `rancher-windows-images.tar.gz` file and pushes them to your private registry. |

### B. Save the images to your Windows Server workstation

1. Using `powershell`, go to the directory that has the files that were downloaded in the previous step.

1. Run `rancher-save-images.ps1` to create a tarball of all the required images:

   ```plain
   ./rancher-save-images.ps1
   ```

   **Step Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, a tarball named `rancher-windows-images.tar.gz` is written to your current directory. Check that the tarball is present before continuing.

### C. Prepare the Docker daemon

Append your private registry address to the `allow-nondistributable-artifacts` field in the Docker daemon configuration (`C:\ProgramData\Docker\config\daemon.json`). This step is required because the base layers of the Windows images are maintained by the `mcr.microsoft.com` registry; those layers are missing from Docker Hub and need to be pulled into the private registry.

   ```plain
   {
     ...
     "allow-nondistributable-artifacts": [
       ...
       "<REGISTRY.YOURDOMAIN.COM:PORT>"
     ]
     ...
   }
   ```

### D. Populate the private registry

Move the images in `rancher-windows-images.tar.gz` to your private registry using the scripts to load the images. The `rancher-windows-images.txt` file is expected to be on the workstation in the same directory where you are running the `rancher-load-images.ps1` script.

1. Using `powershell`, log into your private registry if required:

   ```plain
   docker login <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

1. Using `powershell`, use `rancher-load-images.ps1` to extract, tag and push the images from `rancher-windows-images.tar.gz` to your private registry:

   ```plain
   ./rancher-load-images.ps1 --registry <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

</details>

### Linux Steps

The Linux images need to be collected and pushed from a Linux host, and this _must be done after_ populating the private registry with the Windows images. These steps differ from the Linux-only steps because the Linux images that are pushed will actually be manifests that support both Windows and Linux.

A. Find the required assets for your Rancher version <br/>
B. Collect all the required images <br/>
C. Save the images to your Linux workstation <br/>
D. Populate the private registry

<details>
<summary>Collecting and Populating Linux Images into the Private Registry</summary>

### Prerequisites

You must populate the private registry with the Windows images before populating it with the Linux images. If you have already populated the registry with Linux images, you will need to follow these instructions again, as this process publishes manifests that support both Windows and Linux images.

These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space.

The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters.

### A. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.

2. From the release's **Assets** section, download the following files, which are required to install Rancher in an air gap environment:

| Release File | Description |
|----------------------------|------|
| `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters and use Rancher tools. |
| `rancher-windows-images.txt` | This file contains a list of images needed to provision Windows clusters. |
| `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. |
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |

### B. Collect all the required images

**For Kubernetes Installs using Rancher Generated Self-Signed Certificate:** In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. Skip this step if you are using your own certificates.

1. Fetch the latest `cert-manager` Helm chart and parse the template for image details:

   > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation](installation/options/upgrading-cert-manager/).

   ```plain
   helm repo add jetstack https://charts.jetstack.io
   helm repo update
   helm fetch jetstack/cert-manager --version v0.14.2
   helm template ./cert-manager-<version>.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt
   ```

2. Sort and deduplicate the image list to remove any overlap between the sources:

   ```plain
   sort -u rancher-images.txt -o rancher-images.txt
   ```

### C. Save the images to your workstation

1. Make `rancher-save-images.sh` executable:

   ```plain
   chmod +x rancher-save-images.sh
   ```

1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images:

   ```plain
   ./rancher-save-images.sh --image-list ./rancher-images.txt
   ```

   **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, a tarball named `rancher-images.tar.gz` is written to your current directory. Check that the tarball is present before continuing.

### D. Populate the private registry

Move the images in `rancher-images.tar.gz` to your private registry using the `rancher-load-images.sh` script to load the images. The `rancher-images.txt` and `rancher-windows-images.txt` image lists are expected to be on the workstation in the same directory where you are running the `rancher-load-images.sh` script.

1. Log into your private registry if required:

   ```plain
   docker login <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

1. Make `rancher-load-images.sh` executable:

   ```plain
   chmod +x rancher-load-images.sh
   ```

1. Use `rancher-load-images.sh` to extract, tag and push the images from `rancher-images.tar.gz` to your private registry:

   ```plain
   ./rancher-load-images.sh --image-list ./rancher-images.txt \
     --windows-image-list ./rancher-windows-images.txt \
     --registry <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

</details>

</TabItem>
</Tabs>

### [Next: Kubernetes Installs - Launch a Kubernetes Cluster with RKE](../../../other-installation-methods/air-gapped-helm-cli-install/install-kubernetes.md)

### [Next: Docker Installs - Install Rancher](../../../other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md)
+112
@@ -0,0 +1,112 @@

---
title: '1. Prepare your Node(s)'
weight: 100
aliases:
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/provision-hosts
- /rancher/v2.0-v2.4/en/installation/air-gap-single-node/provision-host
- /rancher/v2.0-v2.4/en/installation/options/air-gap-helm2/prepare-nodes
- /rancher/v2.x/en/installation/resources/advanced/air-gap-helm2/prepare-nodes/
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

This section describes how to prepare your node(s) to install Rancher in an air gapped environment. An air gapped environment could be one where the Rancher server will be installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a Docker installation.

# Prerequisites

<Tabs>
<TabItem value="Kubernetes Install (Recommended)">

### OS, Docker, Hardware, and Networking

Make sure that your node(s) fulfill the general [installation requirements.](../../../../../pages-for-subheaders/installation-requirements.md)

### Private Registry

Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines.

If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/).

### CLI Tools

The following CLI tools are required for the Kubernetes Install. Make sure these tools are installed on your workstation and available in your `$PATH`.

- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) - Kubernetes command-line tool.
- [rke](https://rancher.com/docs/rke/latest/en/installation/) - Rancher Kubernetes Engine, CLI for building Kubernetes clusters.
- [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes. Refer to the [Helm version requirements](installation/options/helm-version) to choose a version of Helm to install Rancher.

</TabItem>
<TabItem value="Docker Install">

### OS, Docker, Hardware, and Networking

Make sure that your node(s) fulfill the general [installation requirements.](../../../../../pages-for-subheaders/installation-requirements.md)

### Private Registry

Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines.

If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/).

</TabItem>
</Tabs>

# Set up Infrastructure

<Tabs>
<TabItem value="Kubernetes Install (Recommended)">

Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes install is comprised of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.

### Recommended Architecture

- DNS for Rancher should resolve to a layer 4 load balancer.
- The load balancer should forward port TCP/80 and TCP/443 to all three nodes in the Kubernetes cluster.
- The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443.
- The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment.

<figcaption>Rancher installed on a Kubernetes cluster with layer 4 load balancer, depicting SSL termination at ingress controllers</figcaption>

### A. Provision three air gapped Linux hosts according to our requirements

These hosts will be disconnected from the internet, but they must be able to reach your private registry.

View hardware and software requirements for each of your cluster nodes in [Requirements](../../../../../pages-for-subheaders/installation-requirements.md).

### B. Set up your Load Balancer

When setting up the Kubernetes cluster that will run the Rancher server components, an Ingress controller pod will be deployed on each of your nodes. The Ingress controller pods are bound to ports TCP/80 and TCP/443 on the host network and are the entry point for HTTPS traffic to the Rancher server.

You will need to configure a load balancer as a basic Layer 4 TCP forwarder to direct traffic to these Ingress controller pods. The exact configuration will vary depending on your environment.

> **Important:**
> Only use this load balancer (i.e., the `local` cluster Ingress) to load balance the Rancher server. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for the other applications.

**Load Balancer Configuration Samples:**

- For an example showing how to set up an NGINX load balancer, refer to [this page.](installation/options/nginx)
- For an example showing how to set up an Amazon NLB load balancer, refer to [this page.](installation/options/nlb)

</TabItem>
<TabItem value="Docker Install">

The Docker installation is for Rancher users who want to test out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.

> **Important:** If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker installation to a Kubernetes installation.

Instead of running the Docker installation, you have the option to follow the Kubernetes Install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a highly available installation.

### A. Provision a single, air gapped Linux host according to our requirements

This host will be disconnected from the internet, but it must be able to reach your private registry.

View hardware and software requirements for your node in [Requirements](../../../../../pages-for-subheaders/installation-requirements.md).

</TabItem>
</Tabs>

### [Next: Collect and Publish Images to your Private Registry](../../../other-installation-methods/air-gapped-helm-cli-install/publish-images.md)
+165
@@ -0,0 +1,165 @@

---
title: Template for an RKE Cluster with a Certificate Signed by Recognized CA and a Layer 4 Load Balancer
weight: 3
aliases:
- /rancher/v2.0-v2.4/en/installation/options/cluster-yml-templates/3-node-certificate-recognizedca
- /rancher/v2.x/en/installation/resources/advanced/cluster-yml-templates/3-node-certificate-recognizedca/
---

RKE uses a `cluster.yml` file to install and configure your Kubernetes cluster.

This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version.

The following template can be used for the `cluster.yml` if you have a setup with:

- Certificate signed by a recognized CA
- Layer 4 load balancer
- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/)

> For more options, refer to [RKE Documentation: Config Options](https://rancher.com/docs/rke/latest/en/config-options/).

```yaml
nodes:
  - address: <IP> # hostname or IP to access nodes
    user: <USER> # root user (usually 'root')
    role: [controlplane,etcd,worker] # K8s roles for node
    ssh_key_path: <PEM_FILE> # path to PEM file
  - address: <IP>
    user: <USER>
    role: [controlplane,etcd,worker]
    ssh_key_path: <PEM_FILE>
  - address: <IP>
    user: <USER>
    role: [controlplane,etcd,worker]
    ssh_key_path: <PEM_FILE>

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

addons: |-
  ---
  kind: Namespace
  apiVersion: v1
  metadata:
    name: cattle-system
  ---
  kind: ServiceAccount
  apiVersion: v1
  metadata:
    name: cattle-admin
    namespace: cattle-system
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: cattle-crb
    namespace: cattle-system
  subjects:
  - kind: ServiceAccount
    name: cattle-admin
    namespace: cattle-system
  roleRef:
    kind: ClusterRole
    name: cluster-admin
    apiGroup: rbac.authorization.k8s.io
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: cattle-keys-ingress
    namespace: cattle-system
  type: Opaque
  data:
    tls.crt: <BASE64_CRT> # ssl cert for ingress. If self-signed, must be signed by same CA as cattle server
    tls.key: <BASE64_KEY> # ssl key for ingress. If self-signed, must be signed by same CA as cattle server
  ---
  apiVersion: v1
  kind: Service
  metadata:
    namespace: cattle-system
    name: cattle-service
    labels:
      app: cattle
  spec:
    ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
    selector:
      app: cattle
  ---
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    namespace: cattle-system
    name: cattle-ingress-http
    annotations:
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open
      nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open
  spec:
    rules:
    - host: <FQDN> # FQDN to access cattle server
      http:
        paths:
        - backend:
            serviceName: cattle-service
            servicePort: 80
    tls:
    - secretName: cattle-keys-ingress
      hosts:
      - <FQDN> # FQDN to access cattle server
  ---
  kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    namespace: cattle-system
    name: cattle
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: cattle
      spec:
        serviceAccountName: cattle-admin
        containers:
        # Rancher install via RKE addons is only supported up to v2.0.8
        - image: rancher/rancher:v2.0.8
          args:
          - --no-cacerts
          imagePullPolicy: Always
          name: cattle-server
          # env:
          # - name: HTTP_PROXY
          #   value: "http://your_proxy_address:port"
          # - name: HTTPS_PROXY
          #   value: "http://your_proxy_address:port"
          # - name: NO_PROXY
          #   value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access"
          livenessProbe:
            httpGet:
              path: /ping
              port: 80
            initialDelaySeconds: 60
            periodSeconds: 60
          readinessProbe:
            httpGet:
              path: /ping
              port: 80
            initialDelaySeconds: 20
            periodSeconds: 10
          ports:
          - containerPort: 80
            protocol: TCP
          - containerPort: 443
            protocol: TCP
```
+180
@@ -0,0 +1,180 @@

---
title: Template for an RKE Cluster with a Self-signed Certificate and Layer 4 Load Balancer
weight: 2
aliases:
- /rancher/v2.0-v2.4/en/installation/options/cluster-yml-templates/3-node-certificate
- /rancher/v2.x/en/installation/resources/advanced/cluster-yml-templates/3-node-certificate/
---

RKE uses a `cluster.yml` file to install and configure your Kubernetes cluster.

This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version.

The following template can be used for the `cluster.yml` if you have a setup with:

- Self-signed SSL
- Layer 4 load balancer
- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/)

> For more options, refer to [RKE Documentation: Config Options](https://rancher.com/docs/rke/latest/en/config-options/).

```yaml
nodes:
  - address: <IP> # hostname or IP to access nodes
    user: <USER> # root user (usually 'root')
    role: [controlplane,etcd,worker] # K8s roles for node
    ssh_key_path: <PEM_FILE> # path to PEM file
  - address: <IP>
    user: <USER>
    role: [controlplane,etcd,worker]
    ssh_key_path: <PEM_FILE>
  - address: <IP>
    user: <USER>
    role: [controlplane,etcd,worker]
    ssh_key_path: <PEM_FILE>

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

addons: |-
  ---
  kind: Namespace
  apiVersion: v1
  metadata:
    name: cattle-system
  ---
  kind: ServiceAccount
  apiVersion: v1
  metadata:
    name: cattle-admin
    namespace: cattle-system
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: cattle-crb
    namespace: cattle-system
  subjects:
  - kind: ServiceAccount
    name: cattle-admin
    namespace: cattle-system
  roleRef:
    kind: ClusterRole
    name: cluster-admin
    apiGroup: rbac.authorization.k8s.io
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: cattle-keys-ingress
    namespace: cattle-system
  type: Opaque
  data:
    tls.crt: <BASE64_CRT> # ssl cert for ingress. If self-signed, must be signed by same CA as cattle server
    tls.key: <BASE64_KEY> # ssl key for ingress. If self-signed, must be signed by same CA as cattle server
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: cattle-keys-server
    namespace: cattle-system
  type: Opaque
  data:
    cacerts.pem: <BASE64_CA> # CA cert used to sign cattle server cert and key
  ---
  apiVersion: v1
  kind: Service
  metadata:
    namespace: cattle-system
    name: cattle-service
    labels:
      app: cattle
  spec:
    ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
    selector:
      app: cattle
  ---
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    namespace: cattle-system
    name: cattle-ingress-http
    annotations:
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open
      nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open
  spec:
    rules:
    - host: <FQDN> # FQDN to access cattle server
      http:
        paths:
        - backend:
            serviceName: cattle-service
            servicePort: 80
    tls:
    - secretName: cattle-keys-ingress
      hosts:
      - <FQDN> # FQDN to access cattle server
  ---
  kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    namespace: cattle-system
    name: cattle
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: cattle
      spec:
        serviceAccountName: cattle-admin
        containers:
        # Rancher install via RKE addons is only supported up to v2.0.8
        - image: rancher/rancher:v2.0.8
          imagePullPolicy: Always
          name: cattle-server
          # env:
          # - name: HTTP_PROXY
          #   value: "http://your_proxy_address:port"
          # - name: HTTPS_PROXY
          #   value: "http://your_proxy_address:port"
          # - name: NO_PROXY
          #   value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access"
          livenessProbe:
            httpGet:
              path: /ping
              port: 80
            initialDelaySeconds: 60
            periodSeconds: 60
          readinessProbe:
            httpGet:
              path: /ping
              port: 80
            initialDelaySeconds: 20
            periodSeconds: 10
          ports:
          - containerPort: 80
            protocol: TCP
          - containerPort: 443
            protocol: TCP
          volumeMounts:
          - mountPath: /etc/rancher/ssl
            name: cattle-keys-volume
            readOnly: true
        volumes:
        - name: cattle-keys-volume
          secret:
            defaultMode: 420
            secretName: cattle-keys-server
```
+161
@@ -0,0 +1,161 @@

---
title: Template for an RKE Cluster with a Self-signed Certificate and SSL Termination on Layer 7 Load Balancer
weight: 3
aliases:
- /rancher/v2.0-v2.4/en/installation/options/cluster-yml-templates/3-node-externalssl-certificate
- /rancher/v2.x/en/installation/resources/advanced/cluster-yml-templates/3-node-externalssl-certificate/
---

RKE uses a `cluster.yml` file to install and configure your Kubernetes cluster.

This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version.

The following template can be used for the `cluster.yml` if you have a setup with:

- Layer 7 load balancer with self-signed SSL termination (HTTPS)
- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/)

> For more options, refer to [RKE Documentation: Config Options](https://rancher.com/docs/rke/latest/en/config-options/).
|
||||
|
||||
```yaml
nodes:
  - address: <IP> # hostname or IP to access nodes
    user: <USER> # root user (usually 'root')
    role: [controlplane,etcd,worker] # K8s roles for node
    ssh_key_path: <PEM_FILE> # path to PEM file
  - address: <IP>
    user: <USER>
    role: [controlplane,etcd,worker]
    ssh_key_path: <PEM_FILE>
  - address: <IP>
    user: <USER>
    role: [controlplane,etcd,worker]
    ssh_key_path: <PEM_FILE>

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

addons: |-
  ---
  kind: Namespace
  apiVersion: v1
  metadata:
    name: cattle-system
  ---
  kind: ServiceAccount
  apiVersion: v1
  metadata:
    name: cattle-admin
    namespace: cattle-system
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: cattle-crb
    namespace: cattle-system
  subjects:
  - kind: ServiceAccount
    name: cattle-admin
    namespace: cattle-system
  roleRef:
    kind: ClusterRole
    name: cluster-admin
    apiGroup: rbac.authorization.k8s.io
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: cattle-keys-server
    namespace: cattle-system
  type: Opaque
  data:
    cacerts.pem: <BASE64_CA> # CA cert used to sign cattle server cert and key
  ---
  apiVersion: v1
  kind: Service
  metadata:
    namespace: cattle-system
    name: cattle-service
    labels:
      app: cattle
  spec:
    ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    selector:
      app: cattle
  ---
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    namespace: cattle-system
    name: cattle-ingress-http
    annotations:
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open
      nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open
      nginx.ingress.kubernetes.io/ssl-redirect: "false" # Disable redirect to ssl
  spec:
    rules:
    - host: <FQDN>
      http:
        paths:
        - backend:
            serviceName: cattle-service
            servicePort: 80
  ---
  kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    namespace: cattle-system
    name: cattle
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: cattle
      spec:
        serviceAccountName: cattle-admin
        containers:
        # Rancher install via RKE addons is only supported up to v2.0.8
        - image: rancher/rancher:v2.0.8
          imagePullPolicy: Always
          name: cattle-server
          # env:
          # - name: HTTP_PROXY
          #   value: "http://your_proxy_address:port"
          # - name: HTTPS_PROXY
          #   value: "http://your_proxy_address:port"
          # - name: NO_PROXY
          #   value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access"
          livenessProbe:
            httpGet:
              path: /ping
              port: 80
            initialDelaySeconds: 60
            periodSeconds: 60
          readinessProbe:
            httpGet:
              path: /ping
              port: 80
            initialDelaySeconds: 20
            periodSeconds: 10
          ports:
          - containerPort: 80
            protocol: TCP
          volumeMounts:
          - mountPath: /etc/rancher/ssl
            name: cattle-keys-volume
            readOnly: true
        volumes:
        - name: cattle-keys-volume
          secret:
            defaultMode: 420
            secretName: cattle-keys-server
```
---
title: Template for an RKE Cluster with a Recognized CA Certificate and SSL Termination on Layer 7 Load Balancer
weight: 4
aliases:
  - /rancher/v2.0-v2.4/en/installation/options/cluster-yml-templates/3-node-externalssl-recognizedca
  - /rancher/v2.x/en/installation/resources/advanced/cluster-yml-templates/3-node-externalssl-recognizedca/
---

RKE uses a cluster.yml file to install and configure your Kubernetes cluster.

This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version.

The following template can be used for the cluster.yml if you have a setup with:

- Layer 7 load balancer with SSL termination (HTTPS)
- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/)

> For more options, refer to [RKE Documentation: Config Options](https://rancher.com/docs/rke/latest/en/config-options/).

```yaml
nodes:
  - address: <IP> # hostname or IP to access nodes
    user: <USER> # root user (usually 'root')
    role: [controlplane,etcd,worker] # K8s roles for node
    ssh_key_path: <PEM_FILE> # path to PEM file
  - address: <IP>
    user: <USER>
    role: [controlplane,etcd,worker]
    ssh_key_path: <PEM_FILE>
  - address: <IP>
    user: <USER>
    role: [controlplane,etcd,worker]
    ssh_key_path: <PEM_FILE>

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

addons: |-
  ---
  kind: Namespace
  apiVersion: v1
  metadata:
    name: cattle-system
  ---
  kind: ServiceAccount
  apiVersion: v1
  metadata:
    name: cattle-admin
    namespace: cattle-system
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: cattle-crb
    namespace: cattle-system
  subjects:
  - kind: ServiceAccount
    name: cattle-admin
    namespace: cattle-system
  roleRef:
    kind: ClusterRole
    name: cluster-admin
    apiGroup: rbac.authorization.k8s.io
  ---
  apiVersion: v1
  kind: Service
  metadata:
    namespace: cattle-system
    name: cattle-service
    labels:
      app: cattle
  spec:
    ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    selector:
      app: cattle
  ---
  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    namespace: cattle-system
    name: cattle-ingress-http
    annotations:
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for ws to remain shell window open
      nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for ws to remain shell window open
      nginx.ingress.kubernetes.io/ssl-redirect: "false" # Disable redirect to ssl
  spec:
    rules:
    - host: <FQDN>
      http:
        paths:
        - backend:
            serviceName: cattle-service
            servicePort: 80
  ---
  kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    namespace: cattle-system
    name: cattle
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: cattle
      spec:
        serviceAccountName: cattle-admin
        containers:
        # Rancher install via RKE addons is only supported up to v2.0.8
        - image: rancher/rancher:v2.0.8
          args:
          - --no-cacerts
          imagePullPolicy: Always
          name: cattle-server
          # env:
          # - name: HTTP_PROXY
          #   value: "http://your_proxy_address:port"
          # - name: HTTPS_PROXY
          #   value: "http://your_proxy_address:port"
          # - name: NO_PROXY
          #   value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access"
          livenessProbe:
            httpGet:
              path: /ping
              port: 80
            initialDelaySeconds: 60
            periodSeconds: 60
          readinessProbe:
            httpGet:
              path: /ping
              port: 80
            initialDelaySeconds: 20
            periodSeconds: 10
          ports:
          - containerPort: 80
            protocol: TCP
```
---
title: Docker Install with TLS Termination at Layer-7 NGINX Load Balancer
weight: 252
aliases:
  - /rancher/v2.0-v2.4/en/installation/single-node/single-node-install-external-lb/
  - /rancher/v2.0-v2.4/en/installation/other-installation-methods/single-node-docker/single-node-install-external-lb
  - /rancher/v2.0-v2.4/en/installation/options/single-node-install-external-lb
  - /rancher/v2.0-v2.4/en/installation/single-node-install-external-lb
---

For development and testing environments that require terminating TLS/SSL at a load balancer instead of your Rancher Server container, deploy Rancher and configure a load balancer to work in conjunction with it.

A layer-7 load balancer can be beneficial if you want to centralize TLS termination in your infrastructure. Layer-7 load balancing also allows your load balancer to make routing decisions based on HTTP attributes, such as cookies, that a layer-4 load balancer cannot inspect.

This install procedure walks you through deploying Rancher in a single container, and then provides a sample configuration for a layer-7 NGINX load balancer.

> **Want to skip the external load balancer?**
> See [Docker Installation](installation/single-node) instead.
## Requirements for OS, Docker, Hardware, and Networking

Make sure that your node fulfills the general [installation requirements.](../../../../pages-for-subheaders/installation-requirements.md)

## Installation Outline

<!-- TOC -->

- [1. Provision Linux Host](#1-provision-linux-host)
- [2. Choose an SSL Option and Install Rancher](#2-choose-an-ssl-option-and-install-rancher)
- [3. Configure Load Balancer](#3-configure-load-balancer)

<!-- /TOC -->

## 1. Provision Linux Host

Provision a single Linux host according to our [Requirements](../../../../pages-for-subheaders/installation-requirements.md) to launch your Rancher Server.

## 2. Choose an SSL Option and Install Rancher

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in to or interact with a cluster.

> **Do you want to...**
>
> - Complete an Air Gap Installation?
> - Record all transactions with the Rancher API?
>
> See [Advanced Options](#advanced-options) below before continuing.

Choose from the following options:
<details id="option-a">
<summary>Option A-Bring Your Own Certificate: Self-Signed</summary>

If you elect to use a self-signed certificate to encrypt communication, you must install the certificate on your load balancer (which you'll do later) and in your Rancher container. Run the Docker command to deploy Rancher, pointing it toward your certificate.

> **Prerequisites:**
> Create a self-signed certificate.
>
> - The certificate files must be in PEM format.

**To Install Rancher Using a Self-Signed Cert:**

1. While running the Docker command to deploy Rancher, point Docker toward your CA certificate file.

   ```
   docker run -d --restart=unless-stopped \
     -p 80:80 -p 443:443 \
     -v /etc/your_certificate_directory/cacerts.pem:/etc/rancher/ssl/cacerts.pem \
     rancher/rancher:latest
   ```

</details>
<details id="option-b">
<summary>Option B-Bring Your Own Certificate: Signed by Recognized CA</summary>

If your cluster is public facing, it's best to use a certificate signed by a recognized CA.

> **Prerequisites:**
>
> - The certificate files must be in PEM format.

**To Install Rancher Using a Cert Signed by a Recognized CA:**

If you use a certificate signed by a recognized CA, you don't need to install the certificate in the Rancher container. You do, however, need to make sure no default CA certificate is generated and stored; you can do this by passing the `--no-cacerts` parameter to the container.

1. Enter the following command.

   ```
   docker run -d --restart=unless-stopped \
     -p 80:80 -p 443:443 \
     rancher/rancher:latest --no-cacerts
   ```

</details>
## 3. Configure Load Balancer

When using a load balancer in front of your Rancher container, there's no need for the container to redirect port communication from port 80 or port 443. When the load balancer passes the `X-Forwarded-Proto: https` header, this redirect is disabled.

The load balancer or proxy has to be configured to support the following:

- **WebSocket** connections
- **SPDY** / **HTTP/2** protocols
- Passing / setting the following headers:

| Header | Value | Description |
|--------|-------|-------------|
| `Host` | Hostname used to reach Rancher. | To identify the server requested by the client. |
| `X-Forwarded-Proto` | `https` | To identify the protocol that a client used to connect to the load balancer or proxy.<br /><br/>**Note:** If this header is present, `rancher/rancher` does not redirect HTTP to HTTPS. |
| `X-Forwarded-Port` | Port used to reach Rancher. | To identify the port that the client used to connect to the load balancer or proxy. |
| `X-Forwarded-For` | IP of the client connection. | To identify the originating IP address of a client. |
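As an illustration only, the headers in the table can be thought of as a simple mapping that the proxy layer fills in per request. The helper below is hypothetical, not part of Rancher or NGINX; real proxies set these values with directives such as `proxy_set_header`:

```python
def forwarding_headers(host: str, scheme: str, port: int, client_ip: str) -> dict:
    """Build the forwarding headers a layer-7 proxy should set for Rancher.

    Hypothetical helper for illustration; values mirror the table above.
    """
    return {
        "Host": host,                   # hostname used to reach Rancher
        "X-Forwarded-Proto": scheme,    # "https" disables the HTTP -> HTTPS redirect
        "X-Forwarded-Port": str(port),  # port the client used to connect
        "X-Forwarded-For": client_ip,   # originating client IP
    }

headers = forwarding_headers("rancher.example.com", "https", 443, "203.0.113.7")
```

The NGINX configuration below sets exactly these headers in its `location /` block.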
### Example NGINX configuration

This NGINX configuration is tested on NGINX 1.14.

> **Note:** This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - HTTP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/).

- Replace `rancher-server` with the IP address or hostname of the node running the Rancher container.
- Replace both occurrences of `FQDN` with the DNS name for Rancher.
- Replace `/certs/fullchain.pem` and `/certs/privkey.pem` with the location of the server certificate and the server certificate key respectively.
```
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
    upstream rancher {
        server rancher-server:80;
    }

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        listen 443 ssl http2;
        server_name FQDN;
        ssl_certificate /certs/fullchain.pem;
        ssl_certificate_key /certs/privkey.pem;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rancher;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            # This allows the execute shell window to remain open for up to 15 minutes.
            # Without this parameter, the default is 1 minute and the window will automatically close.
            proxy_read_timeout 900s;
            proxy_buffering off;
        }
    }

    server {
        listen 80;
        server_name FQDN;
        return 301 https://$server_name$request_uri;
    }
}
```
<br/>

## What's Next?

- **Recommended:** Review [Single Node Backup and Restore](installation/backups-and-restoration/single-node-backup-and-restoration/). Although you don't have any data you need to back up right now, we recommend creating backups after regular Rancher use.
- Create a Kubernetes cluster: [Provisioning Kubernetes Clusters](../../../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md).

<br/>

## FAQ and Troubleshooting

For help troubleshooting certificates, see [this section.](../../other-installation-methods/rancher-on-a-single-node-with-docker/certificate-troubleshooting.md)

## Advanced Options

### API Auditing

If you want to record all transactions with the Rancher API, enable the [API Auditing](installation/api-auditing) feature by adding the flags below to your install command.

    -e AUDIT_LEVEL=1 \
    -e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
    -e AUDIT_LOG_MAXAGE=20 \
    -e AUDIT_LOG_MAXBACKUP=20 \
    -e AUDIT_LOG_MAXSIZE=100 \
### Air Gap

If you are visiting this page to complete an [Air Gap Installation](installation/air-gap-installation/), you must prepend your private registry URL to the server tag when running the installation command in the option that you choose. Add `<REGISTRY.DOMAIN.COM:PORT>` with your private registry URL in front of `rancher/rancher:latest`.

**Example:**

    <REGISTRY.DOMAIN.COM:PORT>/rancher/rancher:latest

### Persistent Data

Rancher uses etcd as a datastore. When Rancher is installed with Docker, the embedded etcd is used. The persistent data is at the following path in the container: `/var/lib/rancher`.

You can bind mount a host volume to this location to preserve data on the host it is running on:

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:latest
```
This layer 7 NGINX configuration is tested on NGINX version 1.13 (mainline) and 1.14 (stable).

> **Note:** This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - TCP and UDP Load Balancer](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/).

```
upstream rancher {
    server rancher-server:80;
}

map $http_upgrade $connection_upgrade {
    default Upgrade;
    ''      close;
}

server {
    listen 443 ssl http2;
    server_name rancher.yourdomain.com;
    ssl_certificate /etc/your_certificate_directory/fullchain.pem;
    ssl_certificate_key /etc/your_certificate_directory/privkey.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://rancher;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        # This allows the execute shell window to remain open for up to 15 minutes.
        # Without this parameter, the default is 1 minute and the window will automatically close.
        proxy_read_timeout 900s;
        proxy_buffering off;
    }
}

server {
    listen 80;
    server_name rancher.yourdomain.com;
    return 301 https://$server_name$request_uri;
}
```

<br/>
---
title: Enabling the API Audit Log to Record System Events
weight: 4
aliases:
  - /rancher/v2.0-v2.4/en/installation/options/api-audit-log/
  - /rancher/v2.0-v2.4/en/installation/api-auditing
---

You can enable the API audit log to record the sequence of system events initiated by individual users. You can see what happened, when it happened, who initiated it, and what cluster it affected. When you enable this feature, all requests to the Rancher API and all responses from it are written to a log.

You can enable API auditing during Rancher installation or upgrade.

## Enabling API Audit Log

The audit log is enabled and configured by passing environment variables to the Rancher server container. See the following to enable it on your installation.

- [Docker Install](../../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#api-audit-log)
- [Kubernetes Install](../../../../reference-guides/installation-references/helm-chart-options.md#api-audit-log)

## API Audit Log Options

The following parameters define what the audit log records and what data it includes:
| Parameter | Description |
| --------- | ----------- |
| <a id="audit-level"></a>`AUDIT_LEVEL` | `0` - Disable audit log (default setting).<br/>`1` - Log event metadata.<br/>`2` - Log event metadata and request body.<br/>`3` - Log event metadata, request body, and response body. Each log transaction for a request/response pair uses the same `auditID` value.<br/><br/>See [Audit Log Levels](#audit-log-levels) for a table that displays what each setting logs. |
| `AUDIT_LOG_PATH` | Log path for the Rancher Server API. Default path is `/var/log/auditlog/rancher-api-audit.log`. You can mount the log directory to the host.<br/><br/>Usage Example: `AUDIT_LOG_PATH=/my/custom/path/` |
| `AUDIT_LOG_MAXAGE` | Defines the maximum number of days to retain old audit log files. Default is 10 days. |
| `AUDIT_LOG_MAXBACKUP` | Defines the maximum number of audit log files to retain. Default is 10. |
| `AUDIT_LOG_MAXSIZE` | Defines the maximum size in megabytes of the audit log file before it gets rotated. Default size is 100M. |

<br/>
### Audit Log Levels

The following table displays what parts of API transactions are logged for each [`AUDIT_LEVEL`](#audit-level) setting.

| `AUDIT_LEVEL` Setting | Request Metadata | Request Body | Response Metadata | Response Body |
| --------------------- | ---------------- | ------------ | ----------------- | ------------- |
| `0` | | | | |
| `1` | ✓ | | | |
| `2` | ✓ | ✓ | | |
| `3` | ✓ | ✓ | ✓ | ✓ |
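The levels are cumulative: each setting logs everything the level below it logs, plus one more part of the transaction. As a sketch (illustrative only, not Rancher code), the table can be expressed as a small lookup:

```python
# What each AUDIT_LEVEL setting logs, per the table above (illustrative only).
AUDIT_LEVEL_LOGS = {
    0: set(),
    1: {"request metadata"},
    2: {"request metadata", "request body"},
    3: {"request metadata", "request body", "response metadata", "response body"},
}

def logged_parts(level: int) -> set:
    """Return the parts of an API transaction recorded at the given audit level."""
    return AUDIT_LEVEL_LOGS[level]
```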
## Viewing API Audit Logs

### Docker Install

Share the `AUDIT_LOG_PATH` directory (default: `/var/log/auditlog`) with the host system. The log can be parsed by standard CLI tools or forwarded to a log collection tool like Fluentd, Filebeat, or Logstash.
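For example, a few lines of Python can filter the JSON-formatted log by user or verb. This is a sketch; the field names follow the log samples shown later on this page, and the log path is the default from the options table:

```python
import json

def filter_audit_log(lines, user=None, verb=None):
    """Yield audit log entries (one JSON object per line) matching the filters."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        if user is not None and entry.get("user", {}).get("name") != user:
            continue
        if verb is not None and entry.get("verb") != verb:
            continue
        yield entry

# Usage against the default log path:
# with open("/var/log/auditlog/rancher-api-audit.log") as f:
#     for e in filter_audit_log(f, verb="PUT"):
#         print(e["auditID"], e["requestURI"])
```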
### Kubernetes Install

Enabling the API Audit Log with the Helm chart install creates a `rancher-audit-log` sidecar container in the Rancher pod. This container streams the log to standard output (stdout). You can view the log as you would any container log.

The `rancher-audit-log` container is part of the `rancher` pod in the `cattle-system` namespace.

#### CLI

```bash
kubectl -n cattle-system logs -f rancher-84d886bdbb-s4s69 rancher-audit-log
```
#### Rancher Web GUI

1. From the context menu, select **Cluster: local > System**.
1. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Find the `cattle-system` namespace. Open the `rancher` workload by clicking its link.
1. Pick one of the `rancher` pods and select **⋮ > View Logs**.
1. From the **Logs** drop-down, select `rancher-audit-log`.

#### Shipping the Audit Log

You can enable Rancher's built-in log collection and shipping for the cluster to ship the audit and other service logs to a supported collection endpoint. See [Rancher Tools - Logging](cluster-admin/tools/logging) for details.
## Audit Log Samples

After you enable auditing, each API request or response is logged by Rancher in the form of JSON. Each of the following code samples provides an example of how to identify an API transaction.

### Metadata Level

If you set your `AUDIT_LEVEL` to `1`, Rancher logs the metadata header for every API request, but not the body. The header provides basic information about the API transaction, such as the transaction's ID, who initiated it, and when it occurred.

```json
{
  "auditID": "30022177-9e2e-43d1-b0d0-06ef9d3db183",
  "requestURI": "/v3/schemas",
  "sourceIPs": ["::1"],
  "user": {
    "name": "user-f4tt2",
    "group": ["system:authenticated"]
  },
  "verb": "GET",
  "stage": "RequestReceived",
  "stageTimestamp": "2018-07-20 10:22:43 +0800"
}
```
### Metadata and Request Body Level

If you set your `AUDIT_LEVEL` to `2`, Rancher logs the metadata header and body for every API request.

The code sample below depicts an API request, with both its metadata header and body.

```json
{
  "auditID": "ef1d249e-bfac-4fd0-a61f-cbdcad53b9bb",
  "requestURI": "/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
  "sourceIPs": ["::1"],
  "user": {
    "name": "user-f4tt2",
    "group": ["system:authenticated"]
  },
  "verb": "PUT",
  "stage": "RequestReceived",
  "stageTimestamp": "2018-07-20 10:28:08 +0800",
  "requestBody": {
    "hostIPC": false,
    "hostNetwork": false,
    "hostPID": false,
    "paused": false,
    "annotations": {},
    "baseType": "workload",
    "containers": [
      {
        "allowPrivilegeEscalation": false,
        "image": "nginx",
        "imagePullPolicy": "Always",
        "initContainer": false,
        "name": "nginx",
        "ports": [
          {
            "containerPort": 80,
            "dnsName": "nginx-nodeport",
            "kind": "NodePort",
            "name": "80tcp01",
            "protocol": "TCP",
            "sourcePort": 0,
            "type": "/v3/project/schemas/containerPort"
          }
        ],
        "privileged": false,
        "readOnly": false,
        "resources": {
          "type": "/v3/project/schemas/resourceRequirements",
          "requests": {},
          "limits": {}
        },
        "restartCount": 0,
        "runAsNonRoot": false,
        "stdin": true,
        "stdinOnce": false,
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "tty": true,
        "type": "/v3/project/schemas/container",
        "environmentFrom": [],
        "capAdd": [],
        "capDrop": [],
        "livenessProbe": null,
        "volumeMounts": []
      }
    ],
    "created": "2018-07-18T07:34:16Z",
    "createdTS": 1531899256000,
    "creatorId": null,
    "deploymentConfig": {
      "maxSurge": 1,
      "maxUnavailable": 0,
      "minReadySeconds": 0,
      "progressDeadlineSeconds": 600,
      "revisionHistoryLimit": 10,
      "strategy": "RollingUpdate"
    },
    "deploymentStatus": {
      "availableReplicas": 1,
      "conditions": [
        {
          "lastTransitionTime": "2018-07-18T07:34:38Z",
          "lastTransitionTimeTS": 1531899278000,
          "lastUpdateTime": "2018-07-18T07:34:38Z",
          "lastUpdateTimeTS": 1531899278000,
          "message": "Deployment has minimum availability.",
          "reason": "MinimumReplicasAvailable",
          "status": "True",
          "type": "Available"
        },
        {
          "lastTransitionTime": "2018-07-18T07:34:16Z",
          "lastTransitionTimeTS": 1531899256000,
          "lastUpdateTime": "2018-07-18T07:34:38Z",
          "lastUpdateTimeTS": 1531899278000,
          "message": "ReplicaSet \"nginx-64d85666f9\" has successfully progressed.",
          "reason": "NewReplicaSetAvailable",
          "status": "True",
          "type": "Progressing"
        }
      ],
      "observedGeneration": 2,
      "readyReplicas": 1,
      "replicas": 1,
      "type": "/v3/project/schemas/deploymentStatus",
      "unavailableReplicas": 0,
      "updatedReplicas": 1
    },
    "dnsPolicy": "ClusterFirst",
    "id": "deployment:default:nginx",
    "labels": {
      "workload.user.cattle.io/workloadselector": "deployment-default-nginx"
    },
    "name": "nginx",
    "namespaceId": "default",
    "projectId": "c-bcz5t:p-fdr4s",
    "publicEndpoints": [
      {
        "addresses": ["10.64.3.58"],
        "allNodes": true,
        "ingressId": null,
        "nodeId": null,
        "podId": null,
        "port": 30917,
        "protocol": "TCP",
        "serviceId": "default:nginx-nodeport",
        "type": "publicEndpoint"
      }
    ],
    "restartPolicy": "Always",
    "scale": 1,
    "schedulerName": "default-scheduler",
    "selector": {
      "matchLabels": {
        "workload.user.cattle.io/workloadselector": "deployment-default-nginx"
      },
      "type": "/v3/project/schemas/labelSelector"
    },
    "state": "active",
    "terminationGracePeriodSeconds": 30,
    "transitioning": "no",
    "transitioningMessage": "",
    "type": "deployment",
    "uuid": "f998037d-8a5c-11e8-a4cf-0245a7ebb0fd",
    "workloadAnnotations": {
      "deployment.kubernetes.io/revision": "1",
      "field.cattle.io/creatorId": "user-f4tt2"
    },
    "workloadLabels": {
      "workload.user.cattle.io/workloadselector": "deployment-default-nginx"
    },
    "scheduling": {
      "node": {}
    },
    "description": "my description",
    "volumes": []
  }
}
```

### Metadata, Request Body, and Response Body Level

If you set your `AUDIT_LEVEL` to `3`, Rancher logs:

- The metadata header and body for every API request.
- The metadata header and body for every API response.

#### Request

The code sample below depicts an API request, with both its metadata header and body.

```json
{
  "auditID": "a886fd9f-5d6b-4ae3-9a10-5bff8f3d68af",
  "requestURI": "/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
  "sourceIPs": ["::1"],
  "user": {
    "name": "user-f4tt2",
    "group": ["system:authenticated"]
  },
  "verb": "PUT",
  "stage": "RequestReceived",
  "stageTimestamp": "2018-07-20 10:33:06 +0800",
  "requestBody": {
    "hostIPC": false,
    "hostNetwork": false,
    "hostPID": false,
    "paused": false,
    "annotations": {},
    "baseType": "workload",
    "containers": [
      {
        "allowPrivilegeEscalation": false,
        "image": "nginx",
        "imagePullPolicy": "Always",
        "initContainer": false,
        "name": "nginx",
        "ports": [
          {
            "containerPort": 80,
            "dnsName": "nginx-nodeport",
            "kind": "NodePort",
            "name": "80tcp01",
            "protocol": "TCP",
            "sourcePort": 0,
            "type": "/v3/project/schemas/containerPort"
          }
        ],
        "privileged": false,
        "readOnly": false,
        "resources": {
          "type": "/v3/project/schemas/resourceRequirements",
          "requests": {},
          "limits": {}
        },
        "restartCount": 0,
        "runAsNonRoot": false,
        "stdin": true,
        "stdinOnce": false,
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "tty": true,
        "type": "/v3/project/schemas/container",
        "environmentFrom": [],
        "capAdd": [],
        "capDrop": [],
        "livenessProbe": null,
        "volumeMounts": []
      }
    ],
    "created": "2018-07-18T07:34:16Z",
    "createdTS": 1531899256000,
    "creatorId": null,
    "deploymentConfig": {
      "maxSurge": 1,
      "maxUnavailable": 0,
      "minReadySeconds": 0,
      "progressDeadlineSeconds": 600,
      "revisionHistoryLimit": 10,
      "strategy": "RollingUpdate"
    },
    "deploymentStatus": {
      "availableReplicas": 1,
      "conditions": [
        {
          "lastTransitionTime": "2018-07-18T07:34:38Z",
          "lastTransitionTimeTS": 1531899278000,
          "lastUpdateTime": "2018-07-18T07:34:38Z",
          "lastUpdateTimeTS": 1531899278000,
          "message": "Deployment has minimum availability.",
          "reason": "MinimumReplicasAvailable",
          "status": "True",
          "type": "Available"
        },
        {
          "lastTransitionTime": "2018-07-18T07:34:16Z",
          "lastTransitionTimeTS": 1531899256000,
          "lastUpdateTime": "2018-07-18T07:34:38Z",
          "lastUpdateTimeTS": 1531899278000,
          "message": "ReplicaSet \"nginx-64d85666f9\" has successfully progressed.",
          "reason": "NewReplicaSetAvailable",
          "status": "True",
|
||||
"type": "Progressing"
|
||||
}
|
||||
],
|
||||
"observedGeneration": 2,
|
||||
"readyReplicas": 1,
|
||||
"replicas": 1,
|
||||
"type": "/v3/project/schemas/deploymentStatus",
|
||||
"unavailableReplicas": 0,
|
||||
"updatedReplicas": 1
|
||||
},
|
||||
"dnsPolicy": "ClusterFirst",
|
||||
"id": "deployment:default:nginx",
|
||||
"labels": {
|
||||
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
|
||||
},
|
||||
"name": "nginx",
|
||||
"namespaceId": "default",
|
||||
"projectId": "c-bcz5t:p-fdr4s",
|
||||
"publicEndpoints": [
|
||||
{
|
||||
"addresses": ["10.64.3.58"],
|
||||
"allNodes": true,
|
||||
"ingressId": null,
|
||||
"nodeId": null,
|
||||
"podId": null,
|
||||
"port": 30917,
|
||||
"protocol": "TCP",
|
||||
"serviceId": "default:nginx-nodeport",
|
||||
"type": "publicEndpoint"
|
||||
}
|
||||
],
|
||||
"restartPolicy": "Always",
|
||||
"scale": 1,
|
||||
"schedulerName": "default-scheduler",
|
||||
"selector": {
|
||||
"matchLabels": {
|
||||
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
|
||||
},
|
||||
"type": "/v3/project/schemas/labelSelector"
|
||||
},
|
||||
"state": "active",
|
||||
"terminationGracePeriodSeconds": 30,
|
||||
"transitioning": "no",
|
||||
"transitioningMessage": "",
|
||||
"type": "deployment",
|
||||
"uuid": "f998037d-8a5c-11e8-a4cf-0245a7ebb0fd",
|
||||
"workloadAnnotations": {
|
||||
"deployment.kubernetes.io/revision": "1",
|
||||
"field.cattle.io/creatorId": "user-f4tt2"
|
||||
},
|
||||
"workloadLabels": {
|
||||
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
|
||||
},
|
||||
"scheduling": {
|
||||
"node": {}
|
||||
},
|
||||
"description": "my decript",
|
||||
"volumes": []
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### Response
|
||||
|
||||
The code sample below depicts an API response, with both its metadata header and body.
|
||||
|
||||
```json
|
||||
{
|
||||
"auditID": "a886fd9f-5d6b-4ae3-9a10-5bff8f3d68af",
|
||||
"responseStatus": "200",
|
||||
"stage": "ResponseComplete",
|
||||
"stageTimestamp": "2018-07-20 10:33:06 +0800",
|
||||
"responseBody": {
|
||||
"actionLinks": {
|
||||
"pause": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx?action=pause",
|
||||
"resume": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx?action=resume",
|
||||
"rollback": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx?action=rollback"
|
||||
},
|
||||
"annotations": {},
|
||||
"baseType": "workload",
|
||||
"containers": [
|
||||
{
|
||||
"allowPrivilegeEscalation": false,
|
||||
"image": "nginx",
|
||||
"imagePullPolicy": "Always",
|
||||
"initContainer": false,
|
||||
"name": "nginx",
|
||||
"ports": [
|
||||
{
|
||||
"containerPort": 80,
|
||||
"dnsName": "nginx-nodeport",
|
||||
"kind": "NodePort",
|
||||
"name": "80tcp01",
|
||||
"protocol": "TCP",
|
||||
"sourcePort": 0,
|
||||
"type": "/v3/project/schemas/containerPort"
|
||||
}
|
||||
],
|
||||
"privileged": false,
|
||||
"readOnly": false,
|
||||
"resources": {
|
||||
"type": "/v3/project/schemas/resourceRequirements"
|
||||
},
|
||||
"restartCount": 0,
|
||||
"runAsNonRoot": false,
|
||||
"stdin": true,
|
||||
"stdinOnce": false,
|
||||
"terminationMessagePath": "/dev/termination-log",
|
||||
"terminationMessagePolicy": "File",
|
||||
"tty": true,
|
||||
"type": "/v3/project/schemas/container"
|
||||
}
|
||||
],
|
||||
"created": "2018-07-18T07:34:16Z",
|
||||
"createdTS": 1531899256000,
|
||||
"creatorId": null,
|
||||
"deploymentConfig": {
|
||||
"maxSurge": 1,
|
||||
"maxUnavailable": 0,
|
||||
"minReadySeconds": 0,
|
||||
"progressDeadlineSeconds": 600,
|
||||
"revisionHistoryLimit": 10,
|
||||
"strategy": "RollingUpdate"
|
||||
},
|
||||
"deploymentStatus": {
|
||||
"availableReplicas": 1,
|
||||
"conditions": [
|
||||
{
|
||||
"lastTransitionTime": "2018-07-18T07:34:38Z",
|
||||
"lastTransitionTimeTS": 1531899278000,
|
||||
"lastUpdateTime": "2018-07-18T07:34:38Z",
|
||||
"lastUpdateTimeTS": 1531899278000,
|
||||
"message": "Deployment has minimum availability.",
|
||||
"reason": "MinimumReplicasAvailable",
|
||||
"status": "True",
|
||||
"type": "Available"
|
||||
},
|
||||
{
|
||||
"lastTransitionTime": "2018-07-18T07:34:16Z",
|
||||
"lastTransitionTimeTS": 1531899256000,
|
||||
"lastUpdateTime": "2018-07-18T07:34:38Z",
|
||||
"lastUpdateTimeTS": 1531899278000,
|
||||
"message": "ReplicaSet \"nginx-64d85666f9\" has successfully progressed.",
|
||||
"reason": "NewReplicaSetAvailable",
|
||||
"status": "True",
|
||||
"type": "Progressing"
|
||||
}
|
||||
],
|
||||
"observedGeneration": 2,
|
||||
"readyReplicas": 1,
|
||||
"replicas": 1,
|
||||
"type": "/v3/project/schemas/deploymentStatus",
|
||||
"unavailableReplicas": 0,
|
||||
"updatedReplicas": 1
|
||||
},
|
||||
"dnsPolicy": "ClusterFirst",
|
||||
"hostIPC": false,
|
||||
"hostNetwork": false,
|
||||
"hostPID": false,
|
||||
"id": "deployment:default:nginx",
|
||||
"labels": {
|
||||
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
|
||||
},
|
||||
"links": {
|
||||
"remove": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
|
||||
"revisions": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx/revisions",
|
||||
"self": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
|
||||
"update": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
|
||||
"yaml": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx/yaml"
|
||||
},
|
||||
"name": "nginx",
|
||||
"namespaceId": "default",
|
||||
"paused": false,
|
||||
"projectId": "c-bcz5t:p-fdr4s",
|
||||
"publicEndpoints": [
|
||||
{
|
||||
"addresses": ["10.64.3.58"],
|
||||
"allNodes": true,
|
||||
"ingressId": null,
|
||||
"nodeId": null,
|
||||
"podId": null,
|
||||
"port": 30917,
|
||||
"protocol": "TCP",
|
||||
"serviceId": "default:nginx-nodeport"
|
||||
}
|
||||
],
|
||||
"restartPolicy": "Always",
|
||||
"scale": 1,
|
||||
"schedulerName": "default-scheduler",
|
||||
"selector": {
|
||||
"matchLabels": {
|
||||
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
|
||||
},
|
||||
"type": "/v3/project/schemas/labelSelector"
|
||||
},
|
||||
"state": "active",
|
||||
"terminationGracePeriodSeconds": 30,
|
||||
"transitioning": "no",
|
||||
"transitioningMessage": "",
|
||||
"type": "deployment",
|
||||
"uuid": "f998037d-8a5c-11e8-a4cf-0245a7ebb0fd",
|
||||
"workloadAnnotations": {
|
||||
"deployment.kubernetes.io/revision": "1",
|
||||
"field.cattle.io/creatorId": "user-f4tt2"
|
||||
},
|
||||
"workloadLabels": {
|
||||
"workload.user.cattle.io/workloadselector": "deployment-default-nginx"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
+82
@@ -0,0 +1,82 @@
|
||||
---
|
||||
title: NGINX
|
||||
weight: 270
|
||||
aliases:
|
||||
- /rancher/v2.0-v2.4/en/installation/options/helm2/create-nodes-lb/nginx
|
||||
- /rancher/v2.x/en/installation/resources/advanced/helm2/create-nodes-lb/nginx/
|
||||
---
|
||||
NGINX will be configured as a Layer 4 (TCP) load balancer that forwards connections to one of your Rancher nodes.
|
||||
|
||||
>**Note:**
|
||||
> In this configuration, the load balancer is positioned in front of your nodes. The load balancer can be any host capable of running NGINX.
|
||||
>
|
||||
> One caveat: do not use one of your Rancher nodes as the load balancer.
|
||||
|
||||
## Install NGINX
|
||||
|
||||
Start by installing NGINX on the node you want to use as a load balancer. NGINX provides packages for all major operating systems. The versions tested are `1.14` and `1.15`. For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/).
|
||||
|
||||
The `stream` module is required; it is included in the official NGINX packages. If you installed NGINX from another source, refer to your OS documentation for how to install and enable the NGINX `stream` module.
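To check whether your installed NGINX binary includes the `stream` module, you can inspect its compile-time flags (a quick check, assuming `nginx` is on your `PATH`):

```
nginx -V 2>&1 | grep -o with-stream
```

If the command prints nothing, the binary was built without the `stream` module.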
|
||||
|
||||
## Create NGINX Configuration
|
||||
|
||||
After installing NGINX, you need to update the NGINX configuration file, `nginx.conf`, with the IP addresses for your nodes.
|
||||
|
||||
1. Copy and paste the code sample below into your favorite text editor. Save it as `nginx.conf`.
|
||||
|
||||
2. From `nginx.conf`, replace both occurrences (port 80 and port 443) of `<IP_NODE_1>`, `<IP_NODE_2>`, and `<IP_NODE_3>` with the IPs of your [nodes](installation/options/helm2/create-nodes-lb/).
|
||||
|
||||
>**Note:** See [NGINX Documentation: TCP and UDP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/) for all configuration options.
|
||||
|
||||
<figcaption>Example NGINX config</figcaption>
|
||||
```
|
||||
worker_processes 4;
|
||||
worker_rlimit_nofile 40000;
|
||||
|
||||
events {
|
||||
worker_connections 8192;
|
||||
}
|
||||
|
||||
stream {
|
||||
upstream rancher_servers_http {
|
||||
least_conn;
|
||||
server <IP_NODE_1>:80 max_fails=3 fail_timeout=5s;
|
||||
server <IP_NODE_2>:80 max_fails=3 fail_timeout=5s;
|
||||
server <IP_NODE_3>:80 max_fails=3 fail_timeout=5s;
|
||||
}
|
||||
server {
|
||||
listen 80;
|
||||
proxy_pass rancher_servers_http;
|
||||
}
|
||||
|
||||
upstream rancher_servers_https {
|
||||
least_conn;
|
||||
server <IP_NODE_1>:443 max_fails=3 fail_timeout=5s;
|
||||
server <IP_NODE_2>:443 max_fails=3 fail_timeout=5s;
|
||||
server <IP_NODE_3>:443 max_fails=3 fail_timeout=5s;
|
||||
}
|
||||
server {
|
||||
listen 443;
|
||||
proxy_pass rancher_servers_https;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
3. Save `nginx.conf` to your load balancer at the following path: `/etc/nginx/nginx.conf`.
|
||||
|
||||
4. Load the updates to your NGINX configuration by running the following command:
|
||||
|
||||
```
|
||||
# nginx -s reload
|
||||
```
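The placeholder substitution in step 2 can also be scripted rather than done by hand. A minimal sketch using `sed` (the template file name and node IP addresses below are assumptions for illustration):

```shell
# Create a tiny template containing the placeholders used in the example config
cat > nginx.conf.template <<'EOF'
server <IP_NODE_1>:80 max_fails=3 fail_timeout=5s;
server <IP_NODE_2>:80 max_fails=3 fail_timeout=5s;
server <IP_NODE_3>:80 max_fails=3 fail_timeout=5s;
EOF

# Substitute the real node IPs (addresses here are illustrative)
sed -e 's/<IP_NODE_1>/172.31.10.1/g' \
    -e 's/<IP_NODE_2>/172.31.10.2/g' \
    -e 's/<IP_NODE_3>/172.31.10.3/g' \
    nginx.conf.template > nginx.conf
```

The same substitution would be applied to the full example config above, covering both the port 80 and port 443 upstream blocks.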
|
||||
|
||||
## Option - Run NGINX as Docker container
|
||||
|
||||
Instead of installing NGINX as a package on the operating system, you can run it as a Docker container. Save the edited **Example NGINX config** as `/etc/nginx.conf` and run the following command to launch the NGINX container:
|
||||
|
||||
```
|
||||
docker run -d --restart=unless-stopped \
|
||||
-p 80:80 -p 443:443 \
|
||||
-v /etc/nginx.conf:/etc/nginx/nginx.conf \
|
||||
nginx:1.14
|
||||
```
|
||||
+178
@@ -0,0 +1,178 @@
|
||||
---
|
||||
title: Amazon NLB
|
||||
weight: 277
|
||||
aliases:
|
||||
- /rancher/v2.0-v2.4/en/installation/options/helm2/create-nodes-lb/nlb
|
||||
- /rancher/v2.x/en/installation/resources/advanced/helm2/create-nodes-lb/nlb/
|
||||
---
|
||||
## Objectives
|
||||
|
||||
Configuring an Amazon NLB is a multistage process. We've broken it down into multiple tasks so that it's easy to follow.
|
||||
|
||||
1. [Create Target Groups](#create-target-groups)
|
||||
|
||||
    Begin by creating two target groups for the **TCP** protocol: one for TCP port 443 and one for TCP port 80 (which redirects to TCP port 443). You'll add your Linux nodes to these groups.
|
||||
|
||||
2. [Register Targets](#register-targets)
|
||||
|
||||
Add your Linux nodes to the target groups.
|
||||
|
||||
3. [Create Your NLB](#create-your-nlb)
|
||||
|
||||
    Use Amazon's Wizard to create a Network Load Balancer. As part of this process, you'll add the target groups you created in **1. Create Target Groups**.
|
||||
|
||||
> **Note:** Rancher only supports using the Amazon NLB when terminating traffic in `tcp` mode on port 443 rather than `tls` mode, because the NLB does not inject the correct headers into requests when TLS is terminated at the NLB. This means that if you want to use certificates managed by AWS Certificate Manager (ACM), you should use an ELB or ALB.
|
||||
|
||||
## Create Target Groups
|
||||
|
||||
Your first NLB configuration step is to create two target groups. Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener on port 80, which is redirected to port 443 automatically. The NGINX ingress controller on the nodes makes sure that traffic on port 80 is redirected to port 443.
|
||||
|
||||
To get started, log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) and make sure to select the **Region** where your EC2 instances (Linux nodes) are created.
|
||||
|
||||
The Target Groups configuration resides in the **Load Balancing** section of the **EC2** service. Select **Services**, choose **EC2**, find the **Load Balancing** section, and open **Target Groups**.
|
||||
|
||||

|
||||
|
||||
Click **Create target group** to create the first target group, for TCP port 443.
|
||||
|
||||
### Target Group (TCP port 443)
|
||||
|
||||
Configure the first target group according to the table below. Screenshots of the configuration are shown just below the table.
|
||||
|
||||
Option | Setting
|
||||
--------------------------------------|------------------------------------
|
||||
Target Group Name | `rancher-tcp-443`
|
||||
Protocol | `TCP`
|
||||
Port | `443`
|
||||
Target type | `instance`
|
||||
VPC | Choose your VPC
|
||||
Protocol<br/>(Health Check) | `HTTP`
|
||||
Path<br/>(Health Check) | `/healthz`
|
||||
Port (Advanced health check) | `override`,`80`
|
||||
Healthy threshold (Advanced health) | `3`
|
||||
Unhealthy threshold (Advanced) | `3`
|
||||
Timeout (Advanced) | `6 seconds`
|
||||
Interval (Advanced)                   | `10 seconds`
|
||||
Success codes | `200-399`
|
||||
|
||||
***
|
||||
**Screenshot Target group TCP port 443 settings**<br/>
|
||||

|
||||
|
||||
***
|
||||
**Screenshot Target group TCP port 443 Advanced settings**<br/>
|
||||

|
||||
|
||||
***
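If you prefer the AWS CLI over the console, the target group above can be created with roughly the following command (an untested sketch of `aws elbv2 create-target-group`; `<VPC_ID>` is a placeholder you must fill in):

```plain
aws elbv2 create-target-group \
  --name rancher-tcp-443 \
  --protocol TCP \
  --port 443 \
  --target-type instance \
  --vpc-id <VPC_ID> \
  --health-check-protocol HTTP \
  --health-check-path /healthz \
  --health-check-port 80 \
  --healthy-threshold-count 3 \
  --unhealthy-threshold-count 3 \
  --health-check-interval-seconds 10
```

The flags mirror the settings in the table; adjust them the same way for the port 80 target group.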
|
||||
|
||||
Click **Create target group** to create the second target group, for TCP port 80.
|
||||
|
||||
### Target Group (TCP port 80)
|
||||
|
||||
Configure the second target group according to the table below. Screenshots of the configuration are shown just below the table.
|
||||
|
||||
Option | Setting
|
||||
--------------------------------------|------------------------------------
|
||||
Target Group Name | `rancher-tcp-80`
|
||||
Protocol | `TCP`
|
||||
Port | `80`
|
||||
Target type | `instance`
|
||||
VPC | Choose your VPC
|
||||
Protocol<br/>(Health Check) | `HTTP`
|
||||
Path<br/>(Health Check) | `/healthz`
|
||||
Port (Advanced health check) | `traffic port`
|
||||
Healthy threshold (Advanced health) | `3`
|
||||
Unhealthy threshold (Advanced) | `3`
|
||||
Timeout (Advanced) | `6 seconds`
|
||||
Interval (Advanced)                   | `10 seconds`
|
||||
Success codes | `200-399`
|
||||
|
||||
***
|
||||
**Screenshot Target group TCP port 80 settings**<br/>
|
||||

|
||||
|
||||
***
|
||||
**Screenshot Target group TCP port 80 Advanced settings**<br/>
|
||||

|
||||
|
||||
***
|
||||
|
||||
## Register Targets
|
||||
|
||||
Next, add your Linux nodes to both target groups.
|
||||
|
||||
Select the target group named **rancher-tcp-443**, click the tab **Targets** and choose **Edit**.
|
||||
|
||||

|
||||
|
||||
Select the instances (Linux nodes) you want to add, and click **Add to registered**.
|
||||
|
||||
***
|
||||
**Screenshot Add targets to target group TCP port 443**<br/>
|
||||
|
||||

|
||||
|
||||
***
|
||||
**Screenshot Added targets to target group TCP port 443**<br/>
|
||||
|
||||

|
||||
|
||||
When the instances are added, click **Save** on the bottom right of the screen.
|
||||
|
||||
Repeat those steps, replacing **rancher-tcp-443** with **rancher-tcp-80**. The same instances need to be added as targets to this target group.
|
||||
|
||||
## Create Your NLB
|
||||
|
||||
Use Amazon's Wizard to create a Network Load Balancer. As part of this process, you'll add the target groups you created in [Create Target Groups](#create-target-groups).
|
||||
|
||||
1. From your web browser, navigate to the [Amazon EC2 Console](https://console.aws.amazon.com/ec2/).
|
||||
|
||||
2. From the navigation pane, choose **LOAD BALANCING** > **Load Balancers**.
|
||||
|
||||
3. Click **Create Load Balancer**.
|
||||
|
||||
4. Choose **Network Load Balancer** and click **Create**.
|
||||
|
||||
5. Complete the **Step 1: Configure Load Balancer** form.
|
||||
- **Basic Configuration**
|
||||
|
||||
- Name: `rancher`
|
||||
- Scheme: `internal` or `internet-facing`
|
||||
|
||||
The Scheme that you choose for your NLB is dependent on the configuration of your instances/VPC. If your instances do not have public IPs associated with them, or you will only be accessing Rancher internally, you should set your NLB Scheme to `internal` rather than `internet-facing`.
|
||||
- **Listeners**
|
||||
|
||||
Add the **Load Balancer Protocols** and **Load Balancer Ports** below.
|
||||
- `TCP`: `443`
|
||||
|
||||
- **Availability Zones**
|
||||
|
||||
- Select Your **VPC** and **Availability Zones**.
|
||||
|
||||
6. Complete the **Step 2: Configure Routing** form.
|
||||
|
||||
- From the **Target Group** drop-down, choose **Existing target group**.
|
||||
|
||||
- From the **Name** drop-down, choose `rancher-tcp-443`.
|
||||
|
||||
- Open **Advanced health check settings**, and configure **Interval** to `10 seconds`.
|
||||
|
||||
7. Complete **Step 3: Register Targets**. Since you registered your targets earlier, all you have to do is click **Next: Review**.
|
||||
|
||||
8. Complete **Step 4: Review**. Look over the load balancer details and click **Create** when you're satisfied.
|
||||
|
||||
9. After AWS creates the NLB, click **Close**.
|
||||
|
||||
## Add listener to NLB for TCP port 80
|
||||
|
||||
1. Select your newly created NLB and select the **Listeners** tab.
|
||||
|
||||
2. Click **Add listener**.
|
||||
|
||||
3. Use `TCP`:`80` as the **Protocol** : **Port**.
|
||||
|
||||
4. Click **Add action** and choose **Forward to...**
|
||||
|
||||
5. From the **Forward to** drop-down, choose `rancher-tcp-80`.
|
||||
|
||||
6. Click **Save** in the top right of the screen.
|
||||
+26
@@ -0,0 +1,26 @@
|
||||
---
|
||||
title: Troubleshooting
|
||||
weight: 276
|
||||
aliases:
|
||||
- /rancher/v2.0-v2.4/en/installation/options/helm2/helm-init/troubleshooting
|
||||
- /rancher/v2.x/en/installation/resources/advanced/helm2/helm-init/troubleshooting/
|
||||
---
|
||||
|
||||
### Helm commands show forbidden
|
||||
|
||||
When Helm is initialized in the cluster without specifying the correct `ServiceAccount`, the `helm init` command succeeds, but you won't be able to execute most other `helm` commands. The following error is shown:
|
||||
|
||||
```
|
||||
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system"
|
||||
```
|
||||
|
||||
To resolve this, the server component (`tiller`) needs to be removed and reinstalled with the correct `ServiceAccount`. Use `helm reset --force` to remove `tiller` from the cluster, then confirm it is gone with `helm version --server`.
|
||||
|
||||
```
|
||||
helm reset --force
|
||||
Tiller (the Helm server-side component) has been uninstalled from your Kubernetes Cluster.
|
||||
helm version --server
|
||||
Error: could not find tiller
|
||||
```
|
||||
|
||||
When you have confirmed that `tiller` has been removed, please follow the steps provided in [Initialize Helm (Install tiller)](installation/options/helm2/helm-init/) to install `tiller` with the correct `ServiceAccount`.
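For reference, the `ServiceAccount` and `ClusterRoleBinding` that `tiller` is typically given look like the following (a sketch of the common setup; the linked page is the authoritative procedure):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

With these resources in place, `helm init --service-account tiller` installs `tiller` with the permissions the `helm` commands need.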
|
||||
+248
@@ -0,0 +1,248 @@
|
||||
---
|
||||
title: Chart Options
|
||||
weight: 276
|
||||
aliases:
|
||||
- /rancher/v2.0-v2.4/en/installation/options/helm2/helm-rancher/chart-options
|
||||
- /rancher/v2.x/en/installation/resources/advanced/helm2/helm-rancher/chart-options/
|
||||
---
|
||||
|
||||
### Common Options
|
||||
|
||||
| Option | Default Value | Description |
|
||||
| --- | --- | --- |
|
||||
| `hostname` | " " | `string` - the Fully Qualified Domain Name for your Rancher Server |
|
||||
| `ingress.tls.source` | "rancher" | `string` - Where to get the cert for the ingress. - "rancher, letsEncrypt, secret" |
|
||||
| `letsEncrypt.email` | " " | `string` - Your email address |
|
||||
| `letsEncrypt.environment` | "production" | `string` - Valid options: "staging, production" |
|
||||
| `privateCA` | false | `bool` - Set to true if your cert is signed by a private CA |
|
||||
|
||||
<br/>
|
||||
|
||||
### Advanced Options
|
||||
|
||||
| Option | Default Value | Description |
|
||||
| --- | --- | --- |
|
||||
| `additionalTrustedCAs` | false | `bool` - See [Additional Trusted CAs](#additional-trusted-cas) |
|
||||
| `addLocal` | "auto" | `string` - Have Rancher detect and import the local Rancher server cluster |
|
||||
| `antiAffinity` | "preferred" | `string` - AntiAffinity rule for Rancher pods - "preferred, required" |
|
||||
| `auditLog.destination` | "sidecar" | `string` - Stream to sidecar container console or hostPath volume - "sidecar, hostPath" |
|
||||
| `auditLog.hostPath` | "/var/log/rancher/audit" | `string` - log file destination on host (only applies when `auditLog.destination` is set to `hostPath`) |
|
||||
| `auditLog.level` | 0 | `int` - set the [API Audit Log](installation/api-auditing) level. 0 is off. [0-3] |
|
||||
| `auditLog.maxAge` | 1 | `int` - maximum number of days to retain old audit log files (only applies when `auditLog.destination` is set to `hostPath`) |
|
||||
| `auditLog.maxBackups` | 1 | `int` - maximum number of audit log files to retain (only applies when `auditLog.destination` is set to `hostPath`) |
|
||||
| `auditLog.maxSize` | 100 | `int` - maximum size in megabytes of the audit log file before it gets rotated (only applies when `auditLog.destination` is set to `hostPath`) |
|
||||
| `busyboxImage` | "busybox" | `string` - Image location for busybox image used to collect audit logs _Note: Available as of v2.2.0_ |
|
||||
| `debug` | false | `bool` - set debug flag on rancher server |
|
||||
| `extraEnv` | [] | `list` - set additional environment variables for Rancher _Note: Available as of v2.2.0_ |
|
||||
| `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials |
|
||||
| `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress |
|
||||
| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. _Note: Available as of v2.0.15, v2.1.10 and v2.2.4_ |
|
||||
| `proxy` | "" | `string` - HTTP[S] proxy server for Rancher |
|
||||
| `noProxy` | "127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16" | `string` - comma-separated list of hostnames or IP addresses that should not use the proxy |
|
||||
| `resources` | {} | `map` - rancher pod resource requests & limits |
|
||||
| `rancherImage` | "rancher/rancher" | `string` - rancher image source |
|
||||
| `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag |
|
||||
| `tls` | "ingress" | `string` - See [External TLS Termination](#external-tls-termination) for details. - "ingress, external" |
|
||||
| `systemDefaultRegistry` | "" | `string` - private registry to be used for all system Docker images, e.g., http://registry.example.com/ _Available as of v2.3.0_ |
|
||||
| `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. _Available as of v2.3.0_ |
|
||||
|
||||
<br/>
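As an illustration, a Helm 2 install that combines several of these options might look like the following (the hostname and values are placeholders, not recommendations):

```plain
helm install rancher-stable/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set auditLog.level=1 \
  --set addLocal="false"
```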
|
||||
|
||||
### API Audit Log
|
||||
|
||||
Enable the [API Audit Log](installation/api-auditing/) by setting the `auditLog.level` chart option:
|
||||
|
||||
|
||||
|
||||
```plain
|
||||
--set auditLog.level=1
|
||||
```
|
||||
|
||||
By default, enabling audit logging creates a sidecar container in the Rancher pod. This container (`rancher-audit-log`) streams the log to `stdout`. You can collect this log as you would any container log. When using the sidecar as the audit log destination, the `hostPath`, `maxAge`, `maxBackups`, and `maxSize` options do not apply; use your OS or Docker daemon's log rotation features to control disk space use. To collect the logs, enable the [Logging service under Rancher Tools](cluster-admin/tools/logging/) for the Rancher server cluster or System Project.
|
||||
|
||||
Set `auditLog.destination` to `hostPath` to write logs to a volume shared with the host system instead of streaming them to a sidecar container. When setting the destination to `hostPath`, you may want to adjust the other `auditLog` parameters for log rotation.
|
||||
|
||||
### Setting Extra Environment Variables
|
||||
|
||||
_Available as of v2.2.0_
|
||||
|
||||
You can set extra environment variables for Rancher server using `extraEnv`. This list uses the same `name` and `value` keys as the container manifest definitions. Remember to quote the values.
|
||||
|
||||
```plain
|
||||
--set 'extraEnv[0].name=CATTLE_TLS_MIN_VERSION'
|
||||
--set 'extraEnv[0].value=1.0'
|
||||
```
|
||||
|
||||
### TLS settings
|
||||
|
||||
_Available as of v2.2.0_
|
||||
|
||||
To set a different TLS configuration, you can use the `CATTLE_TLS_MIN_VERSION` and `CATTLE_TLS_CIPHERS` environment variables. For example, to configure TLS 1.0 as minimum accepted TLS version:
|
||||
|
||||
```plain
|
||||
--set 'extraEnv[0].name=CATTLE_TLS_MIN_VERSION'
|
||||
--set 'extraEnv[0].value=1.0'
|
||||
```
|
||||
|
||||
See [TLS settings](admin-settings/tls-settings) for more information and options.
|
||||
|
||||
### Import `local` Cluster
|
||||
|
||||
By default, the Rancher server detects and imports the `local` cluster it's running on. Users with access to the `local` cluster essentially have "root" access to all the clusters managed by Rancher server.
|
||||
|
||||
If this is a concern in your environment, set this option to "false" on your initial install.
|
||||
|
||||
> Note: This option is only effective on the initial Rancher install. See [Issue 16522](https://github.com/rancher/rancher/issues/16522) for more information.
|
||||
|
||||
```plain
|
||||
--set addLocal="false"
|
||||
```
|
||||
|
||||
### Customizing your Ingress
|
||||
|
||||
To customize or use a different ingress with Rancher server you can set your own Ingress annotations.
|
||||
|
||||
Example of setting a custom certificate issuer:
|
||||
|
||||
```plain
|
||||
--set ingress.extraAnnotations.'certmanager\.k8s\.io/cluster-issuer'=ca-key-pair
|
||||
```
|
||||
|
||||
_Available as of v2.0.15, v2.1.10 and v2.2.4_
|
||||
|
||||
Example of setting a static proxy header with `ingress.configurationSnippet`. This value is parsed like a template, so variables can be used.
|
||||
|
||||
```plain
|
||||
--set ingress.configurationSnippet='more_set_input_headers X-Forwarded-Host {{ .Values.hostname }};'
|
||||
```
|
||||
|
||||
### HTTP Proxy
|
||||
|
||||
Rancher requires internet access for some functionality, such as fetching Helm charts. Use `proxy` to set your proxy server.
|
||||
|
||||
Add your IP exceptions to the `noProxy` list. Make sure you add the Service cluster IP range (default: 10.43.0.0/16) and any worker cluster `controlplane` nodes. Rancher supports CIDR notation ranges in this list.
|
||||
|
||||
```plain
|
||||
--set proxy="http://<username>:<password>@<proxy_url>:<proxy_port>/"
|
||||
--set noProxy="127.0.0.0/8\,10.0.0.0/8\,172.16.0.0/12\,192.168.0.0/16"
|
||||
```
|
||||
|
||||
### Additional Trusted CAs
|
||||
|
||||
If you have private registries, catalogs or a proxy that intercepts certificates, you may need to add additional trusted CAs to Rancher.
|
||||
|
||||
```plain
|
||||
--set additionalTrustedCAs=true
|
||||
```
|
||||
|
||||
Once the Rancher deployment is created, copy your CA certs in PEM format into a file named `ca-additional.pem` and use `kubectl` to create the `tls-ca-additional` secret in the `cattle-system` namespace.
|
||||
|
||||
```plain
|
||||
kubectl -n cattle-system create secret generic tls-ca-additional --from-file=ca-additional.pem
|
||||
```
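If you have more than one private CA, concatenate the PEM files into the single `ca-additional.pem` file before creating the secret. A minimal sketch (the file names and certificate contents below are stand-ins, not real certificates):

```shell
# Stand-ins for real CA certificates in PEM format (illustrative only)
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'AAA' '-----END CERTIFICATE-----' > registry-ca.pem
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'BBB' '-----END CERTIFICATE-----' > proxy-ca.pem

# The tls-ca-additional secret expects all additional CAs in one PEM file
cat registry-ca.pem proxy-ca.pem > ca-additional.pem
```

The resulting `ca-additional.pem` is then passed to the `kubectl create secret` command shown above.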
|
||||
|
||||
### Private Registry and Air Gap Installs
|
||||
|
||||
For details on installing Rancher with a private registry, see:
|
||||
|
||||
- [Air Gap: Docker Install](installation/air-gap-single-node/)
|
||||
- [Air Gap: Kubernetes Install](installation/air-gap-high-availability/)
|
||||
|
||||
|
||||
### External TLS Termination
|
||||
|
||||
We recommend configuring your load balancer as a Layer 4 balancer, forwarding plain 80/tcp and 443/tcp to the Rancher Management cluster nodes. The Ingress Controller on the cluster will redirect http traffic on port 80 to https on port 443.
|
||||
|
||||
You may terminate SSL/TLS on an L7 load balancer external to the Rancher cluster (ingress). Use the `--set tls=external` option and point your load balancer at HTTP port 80 on all of the Rancher cluster nodes. This exposes the Rancher interface on HTTP port 80. Be aware that traffic from clients that connect directly to the Rancher cluster will not be encrypted. If you choose this setup, we recommend restricting direct access at the network level to just your load balancer.
|
||||
|
||||
> **Note:** If you are using a Private CA signed certificate, add `--set privateCA=true` and see [Adding TLS Secrets - Using a Private CA Signed Certificate](installation/options/helm2/helm-rancher/tls-secrets/) to add the CA cert for Rancher.
|
||||
|
||||
Your load balancer must support long-lived WebSocket connections and will need to insert proxy headers so that Rancher can route links correctly.
|
||||
|
||||
#### Configuring Ingress for External TLS when Using NGINX v0.25
|
||||
|
||||
In NGINX v0.25, NGINX's [behavior changed](https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0220) regarding forwarded headers and external TLS termination. Therefore, if you use external TLS termination with NGINX v0.25, you must edit `cluster.yml` to enable the `use-forwarded-headers` option for the ingress:
|
||||
|
||||
```yaml
|
||||
ingress:
|
||||
provider: nginx
|
||||
options:
|
||||
use-forwarded-headers: "true"
|
||||
```
|
||||
|
||||
#### Required Headers

* `Host`
* `X-Forwarded-Proto`
* `X-Forwarded-Port`
* `X-Forwarded-For`

#### Recommended Timeouts

* Read Timeout: `1800 seconds`
* Write Timeout: `1800 seconds`
* Connect Timeout: `30 seconds`

#### Health Checks

Rancher will respond `200` to health checks on the `/healthz` endpoint.

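You can probe this endpoint with `curl` to verify what your load balancer's health check will see. This is a minimal sketch; `rancher.example.com` is a placeholder for your own FQDN.

```shell
# Probe the Rancher health check endpoint and print only the HTTP status code.
# Replace rancher.example.com with your Rancher FQDN (placeholder).
curl -s -o /dev/null -w "%{http_code}\n" http://rancher.example.com/healthz
```

A healthy installation returns `200`; configure the same path and expected status code in your load balancer's health check.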
#### Example NGINX config

This NGINX configuration is tested on NGINX 1.14.

>**Note:** This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - HTTP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/).

* Replace `IP_NODE_1`, `IP_NODE_2` and `IP_NODE_3` with the IP addresses of the nodes in your cluster.
* Replace both occurrences of `FQDN` with the DNS name for Rancher.
* Replace `/certs/fullchain.pem` and `/certs/privkey.pem` with the location of the server certificate and the server certificate key respectively.

```
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
    upstream rancher {
        server IP_NODE_1:80;
        server IP_NODE_2:80;
        server IP_NODE_3:80;
    }

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        listen 443 ssl http2;
        server_name FQDN;
        ssl_certificate /certs/fullchain.pem;
        ssl_certificate_key /certs/privkey.pem;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rancher;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            # This allows the execute shell window to remain open for up to 15 minutes.
            # Without this parameter, the default is 1 minute and the window will
            # automatically close.
            proxy_read_timeout 900s;
            proxy_buffering off;
        }
    }

    server {
        listen 80;
        server_name FQDN;
        return 301 https://$server_name$request_uri;
    }
}
```

---
title: Adding Kubernetes TLS Secrets
description: Read about how to populate the Kubernetes TLS secret for a Rancher installation
weight: 276
aliases:
  - /rancher/v2.0-v2.4/en/installation/options/helm2/helm-rancher/tls-secrets
  - /rancher/v2.x/en/installation/resources/advanced/helm2/helm-rancher/tls-secrets/
---

Kubernetes will create all the objects and services for Rancher, but it will not become available until you populate the `tls-rancher-ingress` secret in the `cattle-system` namespace with the certificate and key.

Combine the server certificate followed by any intermediate certificate(s) needed into a file named `tls.crt`. Copy your certificate key into a file named `tls.key`.

Use `kubectl` with the `tls` secret type to create the secret.

```
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```

> **Note:** If you want to replace the certificate, you can delete the `tls-rancher-ingress` secret using `kubectl -n cattle-system delete secret tls-rancher-ingress` and add a new one using the command shown above. If you are using a private CA signed certificate, replacing the certificate is only possible if the new certificate is signed by the same CA as the certificate currently in use.

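Before creating (or replacing) the secret, it can help to confirm that `tls.crt` and `tls.key` actually belong together. A minimal sketch using `openssl`, assuming both files are in the current directory:

```shell
# Compare the public key embedded in the certificate with the public key
# derived from the private key; matching digests mean the pair belongs together.
cert_sum=$(openssl x509 -in tls.crt -noout -pubkey | openssl sha256)
key_sum=$(openssl pkey -in tls.key -pubout 2>/dev/null | openssl sha256)
[ "$cert_sum" = "$key_sum" ] && echo "certificate and key match"
```

A mismatch here is a common cause of the ingress falling back to its default certificate.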
### Using a Private CA Signed Certificate

If you are using a private CA, Rancher requires a copy of the CA certificate, which is used by the Rancher Agent to validate the connection to the server.

Copy the CA certificate into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.

>**Important:** Make sure the file is called `cacerts.pem`, as Rancher uses that filename to configure the CA certificate.

```
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem
```

---
title: Troubleshooting
weight: 276
aliases:
  - /rancher/v2.0-v2.4/en/installation/options/helm2/helm-rancher/troubleshooting
  - /rancher/v2.x/en/installation/resources/advanced/helm2/helm-rancher/troubleshooting/
---

### Where is everything

Most of the troubleshooting will be done on objects in these three namespaces:

* `cattle-system` - `rancher` deployment and pods.
* `ingress-nginx` - Ingress controller pods and services.
* `kube-system` - `tiller` and `cert-manager` pods.

### "default backend - 404"

A number of things can cause the ingress-controller not to forward traffic to your Rancher instance. Most of the time it's due to a bad SSL configuration.

Things to check:

* [Is Rancher Running](#is-rancher-running)
* [Cert CN is "Kubernetes Ingress Controller Fake Certificate"](#cert-cn-is-kubernetes-ingress-controller-fake-certificate)

### Is Rancher Running

Use `kubectl` to check the `cattle-system` namespace and see if the Rancher pods are in a Running state.

```
kubectl -n cattle-system get pods

NAME                           READY   STATUS    RESTARTS   AGE
pod/rancher-784d94f59b-vgqzh   1/1     Running   0          10m
```

If the state is not `Running`, run a `describe` on the pod and check the Events.

```
kubectl -n cattle-system describe pod

...
Events:
  Type    Reason                 Age   From                Message
  ----    ------                 ----  ----                -------
  Normal  Scheduled              11m   default-scheduler   Successfully assigned rancher-784d94f59b-vgqzh to localhost
  Normal  SuccessfulMountVolume  11m   kubelet, localhost  MountVolume.SetUp succeeded for volume "rancher-token-dj4mt"
  Normal  Pulling                11m   kubelet, localhost  pulling image "rancher/rancher:v2.0.4"
  Normal  Pulled                 11m   kubelet, localhost  Successfully pulled image "rancher/rancher:v2.0.4"
  Normal  Created                11m   kubelet, localhost  Created container
  Normal  Started                11m   kubelet, localhost  Started container
```

### Checking the rancher logs

Use `kubectl` to list the pods.

```
kubectl -n cattle-system get pods

NAME                           READY   STATUS    RESTARTS   AGE
pod/rancher-784d94f59b-vgqzh   1/1     Running   0          10m
```

Use `kubectl` and the pod name to list the logs from the pod.

```
kubectl -n cattle-system logs -f rancher-784d94f59b-vgqzh
```

### Cert CN is "Kubernetes Ingress Controller Fake Certificate"

Use your browser to check the certificate details. If it says the Common Name is "Kubernetes Ingress Controller Fake Certificate", something may have gone wrong with reading or issuing your SSL cert.

> **Note:** If you are using Let's Encrypt to issue certs, it can sometimes take a few minutes to issue the cert.

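Instead of the browser, you can also read the subject of the served certificate from the command line. A sketch; replace `rancher.example.com` with your Rancher FQDN:

```shell
# Print the subject of the certificate the ingress is actually serving.
# A subject containing "Kubernetes Ingress Controller Fake Certificate"
# means the ingress fell back to its default certificate.
echo | openssl s_client -connect rancher.example.com:443 2>/dev/null \
  | openssl x509 -noout -subject
```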
#### cert-manager issued certs (Rancher Generated or LetsEncrypt)

`cert-manager` has three parts:

* `cert-manager` pod in the `kube-system` namespace.
* `Issuer` object in the `cattle-system` namespace.
* `Certificate` object in the `cattle-system` namespace.

Work backwards: do a `kubectl describe` on each object and check the events to track down what might be missing.

For example, here there is a problem with the Issuer:

```
kubectl -n cattle-system describe certificate
...
Events:
  Type     Reason          Age                 From          Message
  ----     ------          ----                ----          -------
  Warning  IssuerNotReady  18s (x23 over 19m)  cert-manager  Issuer rancher not ready
```

```
kubectl -n cattle-system describe issuer
...
Events:
  Type     Reason         Age                 From          Message
  ----     ------         ----                ----          -------
  Warning  ErrInitIssuer  19m (x12 over 19m)  cert-manager  Error initializing issuer: secret "tls-rancher" not found
  Warning  ErrGetKeyPair  9m (x16 over 19m)   cert-manager  Error getting keypair for CA issuer: secret "tls-rancher" not found
```

#### Bring Your Own SSL Certs

Your certs get applied directly to the Ingress object in the `cattle-system` namespace.

Check the status of the Ingress object and see if it's ready.

```
kubectl -n cattle-system describe ingress
```

If it's ready and SSL is still not working, you may have a malformed cert or secret.

Check the nginx-ingress-controller logs. Because the nginx-ingress-controller pod has multiple containers, you will need to specify the name of the container.

```
kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller
...
W0705 23:04:58.240571       7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found
```

### no matches for kind "Issuer"

The SSL configuration option you have chosen requires cert-manager to be installed before installing Rancher, or else the following error is shown:

```
Error: validation failed: unable to recognize "": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
```

Install cert-manager and try installing Rancher again.

---
title: Troubleshooting
weight: 276
aliases:
  - /rancher/v2.0-v2.4/en/installation/options/helm2/kubernetes-rke/troubleshooting
  - /rancher/v2.x/en/installation/resources/advanced/helm2/kubernetes-rke/troubleshooting/
---

### canal Pods show READY 2/3

The most common cause of this issue is that port 8472/UDP is not open between the nodes. Check your local firewall, network routing or security groups.

Once the network issue is resolved, the `canal` pods should time out and restart to establish their connections.

### nginx-ingress-controller Pods show RESTARTS

The most common cause of this issue is that the `canal` pods have failed to establish the overlay network. See [canal Pods show READY `2/3`](#canal-pods-show-ready-2-3) for troubleshooting.

### Failed to set up SSH tunneling for host [xxx.xxx.xxx.xxx]: Can't retrieve Docker Info

#### Failed to dial to /var/run/docker.sock: ssh: rejected: administratively prohibited (open failed)

* The user specified to connect with does not have permission to access the Docker socket. This can be checked by logging into the host and running the command `docker ps`:

```
$ ssh user@server
user@server$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
```

See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) for how to set this up properly.

* When using RedHat/CentOS as the operating system, you cannot use the user `root` to connect to the nodes because of [Bugzilla #1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). You will need to add a separate user and configure it to access the Docker socket. See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) for how to set this up properly.

* The SSH server version is not version 6.7 or higher. This is needed for socket forwarding to work, which is used to connect to the Docker socket over SSH. This can be checked using `sshd -V` on the host you are connecting to, or using netcat:

```
$ nc xxx.xxx.xxx.xxx 22
SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.10
```

#### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: Error configuring SSH: ssh: no key found

* The key file specified as `ssh_key_path` cannot be accessed. Make sure that you specified the private key file (not the public key, `.pub`), and that the user that is running the `rke` command can access the private key file.

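One way to confirm that the file referenced by `ssh_key_path` really is a readable private key is `ssh-keygen -y`, which derives the public key from it and fails otherwise. A sketch; the path is a placeholder:

```shell
# Derive the public key from the private key file; a failure here means the
# file is missing, unreadable, or not actually a private key.
ssh-keygen -y -f ~/.ssh/id_rsa > /dev/null && echo "private key looks valid"
```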
#### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

* The key file specified as `ssh_key_path` is not correct for accessing the node. Double-check that you specified the correct `ssh_key_path` for the node and the correct user to connect with.

#### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: Error configuring SSH: ssh: cannot decode encrypted private keys

* If you want to use encrypted private keys, you should use `ssh-agent` to load your keys with your passphrase. If the `SSH_AUTH_SOCK` environment variable is found in the environment where the `rke` command is run, it will be used automatically to connect to the node.

#### Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

* The node is not reachable on the configured `address` and `port`.

---
title: Enable API Auditing
weight: 300
aliases:
  - /rke/latest/en/config-options/add-ons/api-auditing/
  - /rancher/v2.0-v2.4/en/installation/options/helm2/rke-add-on/api-auditing
  - /rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/api-auditing/
---

>**Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](installation/options/helm2/).
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.

If you're using RKE to install Rancher, you can use directives to enable API auditing for your Rancher install, so you know what happened, when it happened, who initiated it, and what cluster it affected. API auditing records all requests and responses to and from the Rancher API, which includes use of the Rancher UI and any other programmatic use of the Rancher API.

## In-line Arguments

Enable API auditing using RKE by adding arguments to your Rancher container.

To enable API auditing:

- Add API auditing arguments (`args`) to your Rancher container.
- Declare a `mountPath` in the `volumeMounts` directive of the container.
- Declare a `path` in the `volumes` directive.

For more information about each argument, its syntax, and how to view API audit logs, see [Rancher v2.0 Documentation: API Auditing](installation/api-auditing).

```yaml
...
containers:
  - image: rancher/rancher:latest
    imagePullPolicy: Always
    name: cattle-server
    args: ["--audit-log-path", "/var/log/auditlog/rancher-api-audit.log", "--audit-log-maxbackup", "5", "--audit-log-maxsize", "50", "--audit-level", "2"]
    ports:
      - containerPort: 80
        protocol: TCP
      - containerPort: 443
        protocol: TCP
    volumeMounts:
      - mountPath: /etc/rancher/ssl
        name: cattle-keys-volume
        readOnly: true
      - mountPath: /var/log/auditlog
        name: audit-log-dir
volumes:
  - name: cattle-keys-volume
    secret:
      defaultMode: 420
      secretName: cattle-keys-server
  - name: audit-log-dir
    hostPath:
      path: /var/log/rancher/auditlog
      type: Directory
```

---
title: Amazon NLB Configuration
weight: 277
aliases:
  - /rancher/v2.0-v2.4/en/installation/ha-server-install/nlb/
  - /rancher/v2.0-v2.4/en/installation/options/helm2/rke-add-on/layer-4-lb/nlb
  - /rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/layer-4-lb/nlb/
---

> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](installation/options/helm2/).
>
>If you are currently using the RKE add-on install method, see [Migrating from a High-availability Kubernetes install with an RKE add-on](upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.

## Objectives

Configuring an Amazon NLB is a multistage process. We've broken it down into multiple tasks so that it's easy to follow.

1. [Create Target Groups](#create-target-groups)

   Begin by creating two target groups for the **TCP** protocol, one for TCP port 443 and one for TCP port 80 (providing a redirect to TCP port 443). You'll add your Linux nodes to these groups.

2. [Register Targets](#register-targets)

   Add your Linux nodes to the target groups.

3. [Create Your NLB](#create-your-nlb)

   Use Amazon's Wizard to create a Network Load Balancer. As part of this process, you'll add the target groups you created in **1. Create Target Groups**.

## Create Target Groups

Your first NLB configuration step is to create two target groups. Technically, only port 443 is needed to access Rancher, but it's convenient to add a listener for port 80, which will be redirected to port 443 automatically. The NGINX controller on the nodes will make sure that port 80 gets redirected to port 443.

Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started. Make sure to select the **Region** where your EC2 instances (Linux nodes) are created.

The Target Groups configuration resides in the **Load Balancing** section of the **EC2** service. Select **Services** and choose **EC2**, find the section **Load Balancing** and open **Target Groups**.



Click **Create target group** to create the first target group, for TCP port 443.

### Target Group (TCP port 443)

Configure the first target group according to the table below. Screenshots of the configuration are shown just below the table.

Option                              | Setting
------------------------------------|------------------------------------
Target Group Name                   | `rancher-tcp-443`
Protocol                            | `TCP`
Port                                | `443`
Target type                         | `instance`
VPC                                 | Choose your VPC
Protocol<br/>(Health Check)         | `HTTP`
Path<br/>(Health Check)             | `/healthz`
Port (Advanced health check)        | `override`,`80`
Healthy threshold (Advanced health) | `3`
Unhealthy threshold (Advanced)      | `3`
Timeout (Advanced)                  | `6 seconds`
Interval (Advanced)                 | `10 seconds`
Success codes                       | `200-399`

***
**Screenshot Target group TCP port 443 settings**<br/>


***
**Screenshot Target group TCP port 443 Advanced settings**<br/>


***

Click **Create target group** to create the second target group, for TCP port 80.

### Target Group (TCP port 80)

Configure the second target group according to the table below. Screenshots of the configuration are shown just below the table.

Option                              | Setting
------------------------------------|------------------------------------
Target Group Name                   | `rancher-tcp-80`
Protocol                            | `TCP`
Port                                | `80`
Target type                         | `instance`
VPC                                 | Choose your VPC
Protocol<br/>(Health Check)         | `HTTP`
Path<br/>(Health Check)             | `/healthz`
Port (Advanced health check)        | `traffic port`
Healthy threshold (Advanced health) | `3`
Unhealthy threshold (Advanced)      | `3`
Timeout (Advanced)                  | `6 seconds`
Interval (Advanced)                 | `10 seconds`
Success codes                       | `200-399`

***
**Screenshot Target group TCP port 80 settings**<br/>


***
**Screenshot Target group TCP port 80 Advanced settings**<br/>


***

## Register Targets

Next, add your Linux nodes to both target groups.

Select the target group named **rancher-tcp-443**, click the tab **Targets** and choose **Edit**.



Select the instances (Linux nodes) you want to add, and click **Add to registered**.

***
**Screenshot Add targets to target group TCP port 443**<br/>



***
**Screenshot Added targets to target group TCP port 443**<br/>



When the instances are added, click **Save** on the bottom right of the screen.

Repeat those steps, replacing **rancher-tcp-443** with **rancher-tcp-80**. The same instances need to be added as targets to this target group.

## Create Your NLB

Use Amazon's Wizard to create a Network Load Balancer. As part of this process, you'll add the target groups you created in [Create Target Groups](#create-target-groups).

1. From your web browser, navigate to the [Amazon EC2 Console](https://console.aws.amazon.com/ec2/).

2. From the navigation pane, choose **LOAD BALANCING** > **Load Balancers**.

3. Click **Create Load Balancer**.

4. Choose **Network Load Balancer** and click **Create**.

5. Complete the **Step 1: Configure Load Balancer** form.
   - **Basic Configuration**

     - Name: `rancher`
     - Scheme: `internet-facing`
   - **Listeners**

     Add the **Load Balancer Protocols** and **Load Balancer Ports** below.
     - `TCP`: `443`

   - **Availability Zones**

     - Select Your **VPC** and **Availability Zones**.

6. Complete the **Step 2: Configure Routing** form.

   - From the **Target Group** drop-down, choose **Existing target group**.

   - From the **Name** drop-down, choose `rancher-tcp-443`.

   - Open **Advanced health check settings**, and configure **Interval** to `10 seconds`.

7. Complete **Step 3: Register Targets**. Since you registered your targets earlier, all you have to do is click **Next: Review**.

8. Complete **Step 4: Review**. Look over the load balancer details and click **Create** when you're satisfied.

9. After AWS creates the NLB, click **Close**.

## Add listener to NLB for TCP port 80

1. Select your newly created NLB and select the **Listeners** tab.

2. Click **Add listener**.

3. Use `TCP`:`80` as **Protocol** : **Port**.

4. Click **Add action** and choose **Forward to...**

5. From the **Forward to** drop-down, choose `rancher-tcp-80`.

6. Click **Save** in the top right of the screen.

---
title: Amazon ALB Configuration
weight: 277
aliases:
  - /rancher/v2.0-v2.4/en/installation/ha-server-install-external-lb/alb/
  - /rancher/v2.0-v2.4/en/installation/options/helm2/rke-add-on/layer-7-lb/alb
  - /rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/layer-7-lb/alb/
---

> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](installation/options/helm2/).
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.

|
||||
## Objectives

Configuring an Amazon ALB is a multistage process. We've broken it down into multiple tasks so that it's easy to follow.

1. [Create Target Group](#create-target-group)

   Begin by creating one target group for the HTTP protocol. You'll add your Linux nodes to this group.

2. [Register Targets](#register-targets)

   Add your Linux nodes to the target group.

3. [Create Your ALB](#create-your-alb)

   Use Amazon's Wizard to create an Application Load Balancer. As part of this process, you'll add the target group you created in **1. Create Target Group**.

## Create Target Group

Your first ALB configuration step is to create one target group for HTTP.

Log into the [Amazon AWS Console](https://console.aws.amazon.com/ec2/) to get started.

The document below will guide you through this process. Use the data in the tables below to complete the procedure.

[Amazon Documentation: Create a Target Group](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-target-group.html)

### Target Group (HTTP)

Option                      | Setting
----------------------------|------------------------------------
Target Group Name           | `rancher-http-80`
Protocol                    | `HTTP`
Port                        | `80`
Target type                 | `instance`
VPC                         | Choose your VPC
Protocol<br/>(Health Check) | `HTTP`
Path<br/>(Health Check)     | `/healthz`

## Register Targets

Next, add your Linux nodes to your target group.

[Amazon Documentation: Register Targets with Your Target Group](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-register-targets.html)

### Create Your ALB

Use Amazon's Wizard to create an Application Load Balancer. As part of this process, you'll add the target group you created in [Create Target Group](#create-target-group).

1. From your web browser, navigate to the [Amazon EC2 Console](https://console.aws.amazon.com/ec2/).

2. From the navigation pane, choose **LOAD BALANCING** > **Load Balancers**.

3. Click **Create Load Balancer**.

4. Choose **Application Load Balancer**.

5. Complete the **Step 1: Configure Load Balancer** form.
   - **Basic Configuration**

     - Name: `rancher-http`
     - Scheme: `internet-facing`
     - IP address type: `ipv4`
   - **Listeners**

     Add the **Load Balancer Protocols** and **Load Balancer Ports** below.
     - `HTTP`: `80`
     - `HTTPS`: `443`

   - **Availability Zones**

     - Select Your **VPC** and **Availability Zones**.

6. Complete the **Step 2: Configure Security Settings** form.

   Configure the certificate you want to use for SSL termination.

7. Complete the **Step 3: Configure Security Groups** form.

8. Complete the **Step 4: Configure Routing** form.

   - From the **Target Group** drop-down, choose **Existing target group**.

   - Add target group `rancher-http-80`.

9. Complete **Step 5: Register Targets**. Since you registered your targets earlier, all you have to do is click **Next: Review**.

10. Complete **Step 6: Review**. Look over the load balancer details and click **Create** when you're satisfied.

11. After AWS creates the ALB, click **Close**.

---
title: NGINX Configuration
weight: 277
aliases:
  - /rancher/v2.0-v2.4/en/installation/ha-server-install-external-lb/nginx/
  - /rancher/v2.0-v2.4/en/installation/options/helm2/rke-add-on/layer-7-lb/nginx
  - /rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/layer-7-lb/nginx/
---

> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](installation/options/helm2/).
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.

## Install NGINX

Start by installing NGINX on your load balancer host. NGINX has packages available for all known operating systems.

For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/).

## Create NGINX Configuration

See [Example NGINX config](installation/options/helm2/helm-rancher/chart-options/#example-nginx-config).

## Run NGINX

* Reload or restart NGINX

````
# Reload NGINX
nginx -s reload

# Restart NGINX
# Depending on your Linux distribution
service nginx restart
systemctl restart nginx
````

## Browse to Rancher UI

You should now be able to browse to `https://FQDN`.

---
title: HTTP Proxy Configuration
weight: 277
aliases:
  - /rancher/v2.0-v2.4/en/installation/options/helm2/rke-add-on/proxy
  - /rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/proxy/
---

> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](installation/options/helm2/).
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.

If you operate Rancher behind a proxy and you want to access services through the proxy (such as retrieving catalogs), you must provide Rancher information about your proxy. As Rancher is written in Go, it uses the common proxy environment variables as shown below.

Make sure `NO_PROXY` contains the network addresses, network address ranges and domains that should be excluded from using the proxy.

Environment variable | Purpose
---------------------|---------
HTTP_PROXY           | Proxy address to use when initiating HTTP connection(s)
HTTPS_PROXY          | Proxy address to use when initiating HTTPS connection(s)
NO_PROXY             | Network address(es), network address range(s) and domains to exclude from using the proxy when initiating connection(s)

> **Note** NO_PROXY must be in uppercase to use network range (CIDR) notation.
|
||||
|
||||
## Kubernetes installation
|
||||
|
||||
When using Kubernetes installation, the environment variables need to be added to the RKE Config File template.
|
||||
|
||||
* [Kubernetes Installation with External Load Balancer (TCP/Layer 4) RKE Config File Template](../../../../../../pages-for-subheaders/helm2-rke-add-on-layer-4-lb.md#5-download-rke-config-file-template)
|
||||
* [Kubernetes Installation with External Load Balancer (HTTPS/Layer 7) RKE Config File Template](../../../../../../pages-for-subheaders/helm2-rke-add-on-layer-7-lb.md#5-download-rke-config-file-template)
|
||||
|
||||
The environment variables should be defined in the `Deployment` inside the RKE Config File Template. You only have to add the part starting with `env:` to (but not including) `ports:`. Make sure the indentation is identical to the preceding `name:`. Required values for `NO_PROXY` are:
|
||||
|
||||
* `localhost`
|
||||
* `127.0.0.1`
|
||||
* `0.0.0.0`
|
||||
* Configured `service_cluster_ip_range` (default: `10.43.0.0/16`)
|
||||
|
||||
The example below is based on a proxy server accessible at `http://192.168.0.1:3128`, and excluding usage of the proxy when accessing network range `192.168.10.0/24`, the configured `service_cluster_ip_range` (`10.43.0.0/16`) and every hostname under the domain `example.com`. If you have changed the `service_cluster_ip_range`, you have to update the value below accordingly.
|
||||
|
||||
```yaml
|
||||
...
|
||||
---
|
||||
kind: Deployment
|
||||
apiVersion: extensions/v1beta1
|
||||
metadata:
|
||||
namespace: cattle-system
|
||||
name: cattle
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: cattle
|
||||
spec:
|
||||
serviceAccountName: cattle-admin
|
||||
containers:
|
||||
- image: rancher/rancher:latest
|
||||
imagePullPolicy: Always
|
||||
name: cattle-server
|
||||
env:
|
||||
- name: HTTP_PROXY
|
||||
value: "http://192.168.10.1:3128"
|
||||
- name: HTTPS_PROXY
|
||||
value: "http://192.168.10.1:3128"
|
||||
- name: NO_PROXY
|
||||
value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,192.168.10.0/24,example.com"
|
||||
ports:
|
||||
...
|
||||
```
|
||||
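As a quick sanity check, the `NO_PROXY` value used above can be assembled from the required entries plus your own exclusions; a minimal shell sketch (the extra exclusions are the placeholders from the example above):

```shell
# Required entries for NO_PROXY, per the list above.
REQUIRED="localhost,127.0.0.1,0.0.0.0,10.43.0.0/16"
# Your own exclusions (placeholders from the example above).
EXTRA="192.168.10.0/24,example.com"
NO_PROXY="${REQUIRED},${EXTRA}"
echo "$NO_PROXY"
```

The printed value is what goes into the `NO_PROXY` entry of the `env:` section.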
+51
@@ -0,0 +1,51 @@

---
title: 404 - default backend
weight: 30
aliases:
- /rancher/v2.0-v2.4/en/installation/troubleshooting-ha/404-default-backend/
- /rancher/v2.0-v2.4/en/installation/options/helm2/rke-add-on/troubleshooting/404-default-backend
- /404-default-backend/
- /rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/troubleshooting/404-default-backend/
---

> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](installation/options/helm2/).
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the Helm chart.

To debug issues around this error, you will need to download the command-line tool `kubectl`. See [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for instructions on downloading `kubectl` for your platform.

When you have made changes to `rancher-cluster.yml`, you will have to run `rke remove --config rancher-cluster.yml` to clean the nodes, so that the new configuration does not conflict with previous configuration errors.

### Possible causes

The NGINX ingress controller is not able to serve the configured host in `rancher-cluster.yml`. This should be the FQDN you configured to access Rancher. You can check whether it is properly configured by viewing the created ingress with the following command:

```
kubectl --kubeconfig kube_config_rancher-cluster.yml get ingress -n cattle-system -o wide
```

Check that the `HOSTS` column is displaying the FQDN you configured in the template, and that the nodes used are listed in the `ADDRESS` column. If that is configured correctly, check the logs of the NGINX ingress controller.

The logs of the NGINX ingress controller will show why it cannot serve the requested host. To view the logs, run the following command:

```
kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l app=ingress-nginx -n ingress-nginx
```

<b>Errors</b>

* `x509: certificate is valid for fqdn, not your_configured_fqdn`

The certificates used do not contain the correct hostname. Generate new certificates that contain the chosen FQDN to access Rancher, and redeploy.

* `Port 80 is already in use. Please check the flag --http-port`

There is a process on the node occupying port 80. This port is needed by the NGINX ingress controller to route requests to Rancher. You can find the process by running the command `netstat -plant | grep \:80`.

Stop/kill the process and redeploy.

* `unexpected error creating pem file: no valid PEM formatted block found`

The base64 encoded string configured in the template is not valid. Check whether you can decode the configured string using `base64 -d STRING` (`base64 -D STRING` on macOS); this should return the same output as the content of the file you used to generate the string. If this is correct, check that the base64 encoded string is placed directly after the key, without any newlines before, in between, or after. (For example: `tls.crt: LS01..`)

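The decode-and-compare check described above can be scripted. A minimal sketch, using a placeholder `cert.pem` (on macOS, replace `base64 -d` with `base64 -D`):

```shell
# Placeholder certificate file, standing in for the real certificate used in the template.
printf -- '-----BEGIN CERTIFICATE-----\nplaceholder\n-----END CERTIFICATE-----\n' > cert.pem

# Encode as a single line, exactly as it must appear after the key in the template.
ENCODED=$(base64 < cert.pem | tr -d '\n')

# Decode and compare with the original; any difference means the configured string is invalid.
printf '%s' "$ENCODED" | base64 -d > cert.check
diff cert.pem cert.check && echo "base64 string is valid"
```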
+163
@@ -0,0 +1,163 @@

---
title: Generic troubleshooting
weight: 5
aliases:
- /rancher/v2.0-v2.4/en/installation/troubleshooting-ha/generic-troubleshooting/
- /rancher/v2.0-v2.4/en/installation/options/helm2/rke-add-on/troubleshooting/generic-troubleshooting
- /rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/troubleshooting/generic-troubleshooting/
---

> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](installation/options/helm2/).
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the Helm chart.

Below are steps that you can follow to determine what is wrong in your cluster.

### Double check if all the required ports are opened in your (host) firewall

Double check if all the [required ports](../../../../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md#networking-requirements) are opened in your (host) firewall.

### All nodes should be present and in **Ready** state

To check, run the command:

```
kubectl --kubeconfig kube_config_rancher-cluster.yml get nodes
```

If a node is not shown in this output, or a node is not in **Ready** state, you can check the logs of the `kubelet` container. Log in to the node and run `docker logs kubelet`.

### All pods/jobs should be in **Running**/**Completed** state

To check, run the command:

```
kubectl --kubeconfig kube_config_rancher-cluster.yml get pods --all-namespaces
```

If a pod is not in **Running** state, you can dig into the root cause by running:

#### Describe pod

```
kubectl --kubeconfig kube_config_rancher-cluster.yml describe pod POD_NAME -n NAMESPACE
```

#### Pod container logs

```
kubectl --kubeconfig kube_config_rancher-cluster.yml logs POD_NAME -n NAMESPACE
```

If a job is not in **Completed** state, you can dig into the root cause by running:

#### Describe job

```
kubectl --kubeconfig kube_config_rancher-cluster.yml describe job JOB_NAME -n NAMESPACE
```

#### Logs from the containers of pods of the job

```
kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l job-name=JOB_NAME -n NAMESPACE
```
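If you want to script the **Running**/**Completed** check, the `STATUS` column can be filtered with `awk`. A sketch using a canned example of `kubectl get pods --all-namespaces` output (swap the sample data for the real command's output):

```shell
# Example output of `kubectl get pods --all-namespaces`
# (columns: NAMESPACE NAME READY STATUS RESTARTS AGE).
PODS='cattle-system cattle-7f8b 1/1 Running 0 5m
kube-system rke-user-addon-deploy-job-abc 0/1 Completed 0 5m'

# Print any pod whose STATUS (4th column) is neither Running nor Completed.
BAD=$(printf '%s\n' "$PODS" | awk '$4 != "Running" && $4 != "Completed"')
if [ -z "$BAD" ]; then echo "all pods healthy"; else printf 'check these pods:\n%s\n' "$BAD"; fi
```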

### Check ingress

Ingress should have the correct `HOSTS` (showing the configured FQDN) and `ADDRESS` (the address(es) it will be routed to).

```
kubectl --kubeconfig kube_config_rancher-cluster.yml get ingress --all-namespaces
```

### List all Kubernetes cluster events

Kubernetes cluster events are stored, and can be retrieved by running:

```
kubectl --kubeconfig kube_config_rancher-cluster.yml get events --all-namespaces
```

### Check Rancher container logging

```
kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l app=cattle -n cattle-system
```

### Check NGINX ingress controller logging

```
kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l app=ingress-nginx -n ingress-nginx
```

### Check if overlay network is functioning correctly

The pod can be scheduled to any of the hosts you used for your cluster, which means the NGINX ingress controller needs to be able to route the request from `NODE_1` to `NODE_2`. This happens over the overlay network. If the overlay network is not functioning, you will experience intermittent TCP/HTTP connection failures because the NGINX ingress controller cannot route to the pod.

To test the overlay network, you can launch the following `DaemonSet` definition. This will run an `alpine` container on every host, which we will use to run a `ping` test between containers on all hosts.

1. Save the following file as `ds-alpine.yml`:

    ```
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: alpine
    spec:
      selector:
        matchLabels:
          name: alpine
      template:
        metadata:
          labels:
            name: alpine
        spec:
          tolerations:
          - effect: NoExecute
            key: "node-role.kubernetes.io/etcd"
            value: "true"
          - effect: NoSchedule
            key: "node-role.kubernetes.io/controlplane"
            value: "true"
          containers:
          - image: alpine
            imagePullPolicy: Always
            name: alpine
            command: ["sh", "-c", "tail -f /dev/null"]
            terminationMessagePath: /dev/termination-log
    ```

2. Launch it using `kubectl --kubeconfig kube_config_rancher-cluster.yml create -f ds-alpine.yml`
3. Wait until `kubectl --kubeconfig kube_config_rancher-cluster.yml rollout status ds/alpine -w` returns: `daemon set "alpine" successfully rolled out`.
4. Run the following command to let each container on every host ping each other (it's a single line command):

    ```
    echo "=> Start"; kubectl --kubeconfig kube_config_rancher-cluster.yml get pods -l name=alpine -o jsonpath='{range .items[*]}{@.metadata.name}{" "}{@.spec.nodeName}{"\n"}{end}' | while read spod shost; do kubectl --kubeconfig kube_config_rancher-cluster.yml get pods -l name=alpine -o jsonpath='{range .items[*]}{@.status.podIP}{" "}{@.spec.nodeName}{"\n"}{end}' | while read tip thost; do kubectl --kubeconfig kube_config_rancher-cluster.yml --request-timeout='10s' exec $spod -- /bin/sh -c "ping -c2 $tip > /dev/null 2>&1"; RC=$?; if [ $RC -ne 0 ]; then echo $shost cannot reach $thost; fi; done; done; echo "=> End"
    ```

5. When this command has finished running, the output indicating everything is correct is:

    ```
    => Start
    => End
    ```

If you see errors in the output, the [required ports](../../../../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md#networking-requirements) for overlay networking are not opened between the hosts indicated.

Example error output of a situation where NODE1 had the UDP ports blocked:

```
=> Start
command terminated with exit code 1
NODE2 cannot reach NODE1
command terminated with exit code 1
NODE3 cannot reach NODE1
command terminated with exit code 1
NODE1 cannot reach NODE2
command terminated with exit code 1
NODE1 cannot reach NODE3
=> End
```

+64
@@ -0,0 +1,64 @@

---
title: Failed to get job complete status
weight: 20
aliases:
- /rancher/v2.0-v2.4/en/installation/troubleshooting-ha/job-complete-status/
- /rancher/v2.0-v2.4/en/installation/options/helm2/rke-add-on/troubleshooting/job-complete-status
- /rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/troubleshooting/job-complete-status/
---

> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](installation/options/helm2/).
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the Helm chart.

To debug issues around this error, you will need to download the command-line tool `kubectl`. See [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for instructions on downloading `kubectl` for your platform.

When you have made changes to `rancher-cluster.yml`, you will have to run `rke remove --config rancher-cluster.yml` to clean the nodes, so that the new configuration does not conflict with previous configuration errors.

### Failed to deploy addon execute job [rke-user-includes-addons]: Failed to get job complete status

Something is wrong in the addons definitions. You can run the following command to find the root cause in the logs of the job:

```
kubectl --kubeconfig kube_config_rancher-cluster.yml logs -l job-name=rke-user-addon-deploy-job -n kube-system
```

#### error: error converting YAML to JSON: yaml: line 9:

The structure of the addons definition in `rancher-cluster.yml` is wrong: in one of the resources specified in the addons section, there is an error in the structure of the YAML. The pointer `yaml: line 9` refers to the line number within the addon that is causing issues.

<b>Things to check</b>
<ul>
<li>Is each base64 encoded certificate string placed directly after its key (for example: `tls.crt: LS01...`)? There should be no newline/space before, in between, or after it.</li>
<li>Is the YAML properly formatted? Each indentation level should be 2 spaces, as shown in the template files.</li>
<li>Verify the integrity of your certificate by running `cat MyCertificate | base64 -d` on Linux, or `cat MyCertificate | base64 -D` on macOS. If any error exists, the command output will tell you.</li>
</ul>

#### Error from server (BadRequest): error when creating "/etc/config/rke-user-addon.yaml": Secret in version "v1" cannot be handled as a Secret

The base64 string of one of the certificate strings is wrong. The log message will try to show you what part of the string is not recognized as valid base64.

<b>Things to check</b>
<ul>
<li>Check if the base64 string is valid by running one of the commands below:</li>
</ul>

```
# MacOS
echo BASE64_CRT | base64 -D
# Linux
echo BASE64_CRT | base64 -d
# Windows
certutil -decode FILENAME.base64 FILENAME.verify
```

#### The Ingress "cattle-ingress-http" is invalid: spec.rules[0].host: Invalid value: "IP": must be a DNS name, not an IP address

The host value can only contain a hostname, as it is needed by the ingress controller to match the hostname of the request and pass it to the correct backend.

+108
@@ -0,0 +1,108 @@

---
title: Opening Ports with firewalld
weight: 1
---

> We recommend disabling firewalld. For Kubernetes 1.19.x and higher, firewalld must be turned off.

Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm.

For example, one Oracle Linux image in AWS has REJECT rules that stop Helm from communicating with Tiller:

```
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:ssh
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
```

You can check the default firewall rules with this command:

```
sudo iptables --list
```

This section describes how to use `firewalld` to apply the [firewall port rules](installation/references) for nodes in a high-availability Rancher server cluster.

# Prerequisite

Install v7.x or later of `firewalld`:

```
yum install firewalld
systemctl start firewalld
systemctl enable firewalld
```

# Applying Firewall Port Rules

In the Rancher high-availability installation instructions, the Rancher server is set up on three nodes that have all three Kubernetes roles: etcd, controlplane, and worker. If your Rancher server nodes have all three roles, run the following commands on each node:

```
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=2379/tcp
firewall-cmd --permanent --add-port=2380/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=9099/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10254/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=30000-32767/udp
```
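The same list of ports can be opened in a loop instead of one command per port; a sketch that prints the commands for an all-roles node (drop the `echo` to apply them directly):

```shell
# Ports needed by a node that has all three roles (from the list above).
PORTS="22/tcp 80/tcp 443/tcp 2376/tcp 2379/tcp 2380/tcp 6443/tcp \
8472/udp 9099/tcp 10250/tcp 10254/tcp 30000-32767/tcp 30000-32767/udp"

for p in $PORTS; do
  # Printed rather than executed, so the sketch is safe to run anywhere.
  echo "firewall-cmd --permanent --add-port=$p"
done
echo "firewall-cmd --reload"
```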

If your Rancher server nodes have separate roles, use the following commands based on the role of the node:

```
# For etcd nodes, run the following commands:
firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=2379/tcp
firewall-cmd --permanent --add-port=2380/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=9099/tcp
firewall-cmd --permanent --add-port=10250/tcp

# For control plane nodes, run the following commands:
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=9099/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10254/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=30000-32767/udp

# For worker nodes, run the following commands:
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=9099/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10254/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=30000-32767/udp
```

After the `firewall-cmd` commands have been run on a node, use the following command to enable the firewall rules:

```
firewall-cmd --reload
```

**Result:** The firewall is updated so that Helm can communicate with the Rancher server nodes.

+401
@@ -0,0 +1,401 @@

---
title: Kubernetes Install with External Load Balancer (TCP/Layer 4)
weight: 275
aliases:
- /rancher/v2.0-v2.4/en/installation/ha/rke-add-on/layer-4-lb
- /rancher/v2.0-v2.4/en/installation/options/helm2/rke-add-on/layer-4-lb
- /rancher/v2.0-v2.4/en/installation/options/rke-add-on/layer-4-lb
- /rancher/v2.x/en/installation/resources/advanced/rke-add-on/layer-4-lb/
---

> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](../../../../../pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster.md).
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the Helm chart.

This procedure walks you through setting up a 3-node cluster using the Rancher Kubernetes Engine (RKE). The cluster's sole purpose is running pods for Rancher. The setup is based on:

- Layer 4 load balancer (TCP)
- [NGINX ingress controller with SSL termination (HTTPS)](https://kubernetes.github.io/ingress-nginx/)

In an HA setup that uses a layer 4 load balancer, the load balancer accepts Rancher client connections over the TCP/UDP protocols (i.e., the transport level). The load balancer then forwards these connections to individual cluster nodes without reading the request itself. Because the load balancer cannot read the packets it's forwarding, the routing decisions it can make are limited.

<sup>Rancher installed on a Kubernetes cluster with layer 4 load balancer, depicting SSL termination at ingress controllers</sup>


## Installation Outline

Installation of Rancher in a high-availability configuration involves multiple procedures. Review this outline to learn about each procedure you need to complete.

<!-- TOC -->

- [1. Provision Linux Hosts](#1-provision-linux-hosts)
- [2. Configure Load Balancer](#2-configure-load-balancer)
- [3. Configure DNS](#3-configure-dns)
- [4. Install RKE](#4-install-rke)
- [5. Download RKE Config File Template](#5-download-rke-config-file-template)
- [6. Configure Nodes](#6-configure-nodes)
- [7. Configure Certificates](#7-configure-certificates)
- [8. Configure FQDN](#8-configure-fqdn)
- [9. Configure Rancher version](#9-configure-rancher-version)
- [10. Back Up Your RKE Config File](#10-back-up-your-rke-config-file)
- [11. Run RKE](#11-run-rke)
- [12. Back Up Auto-Generated Config File](#12-back-up-auto-generated-config-file)

<!-- /TOC -->

<br/>

## 1. Provision Linux Hosts

Provision three Linux hosts according to our [Requirements](../../../../../pages-for-subheaders/installation-requirements.md).

## 2. Configure Load Balancer

We will be using NGINX as our layer 4 load balancer (TCP). NGINX will forward all connections to one of your Rancher nodes. If you want to use Amazon NLB, you can skip this step and use the [Amazon NLB configuration](../../../../../how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md) instead.

>**Note:**
> In this configuration, the load balancer is positioned in front of your Linux hosts. The load balancer can be any host that you have available that's capable of running NGINX.
>
>One caveat: do not use one of your Rancher nodes as the load balancer.

### A. Install NGINX

Start by installing NGINX on your load balancer host. NGINX has packages available for all known operating systems. For help installing NGINX, refer to their [install documentation](https://www.nginx.com/resources/wiki/start/topics/tutorials/install/).

The `stream` module is required, and it is present when using the official NGINX packages. Please refer to your OS documentation for how to install and enable the NGINX `stream` module on your operating system.

### B. Create NGINX Configuration

After installing NGINX, you need to update the NGINX config file, `nginx.conf`, with the IP addresses for your nodes.

1. Copy and paste the code sample below into your favorite text editor. Save it as `nginx.conf`.

2. From `nginx.conf`, replace `IP_NODE_1`, `IP_NODE_2`, and `IP_NODE_3` with the IPs of your [Linux hosts](#1-provision-linux-hosts).

    >**Note:** This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - TCP and UDP Load Balancer](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/).

    **Example NGINX config:**
    ```
    worker_processes 4;
    worker_rlimit_nofile 40000;

    events {
        worker_connections 8192;
    }

    http {
        server {
            listen         80;
            return 301 https://$host$request_uri;
        }
    }

    stream {
        upstream rancher_servers {
            least_conn;
            server IP_NODE_1:443 max_fails=3 fail_timeout=5s;
            server IP_NODE_2:443 max_fails=3 fail_timeout=5s;
            server IP_NODE_3:443 max_fails=3 fail_timeout=5s;
        }
        server {
            listen     443;
            proxy_pass rancher_servers;
        }
    }
    ```

3. Save `nginx.conf` to your load balancer at the following path: `/etc/nginx/nginx.conf`.

4. Load the updates to your NGINX configuration by running the following command:

    ```
    # nginx -s reload
    ```

### Option - Run NGINX as Docker container

Instead of installing NGINX as a package on the operating system, you can run it as a Docker container. Save the edited **Example NGINX config** as `/etc/nginx.conf` and run the following command to launch the NGINX container:

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /etc/nginx.conf:/etc/nginx/nginx.conf \
  nginx:1.14
```

## 3. Configure DNS

Choose a fully qualified domain name (FQDN) that you want to use to access Rancher (e.g., `rancher.yourdomain.com`).

1. Log into your DNS server and create a `DNS A` record that points to the IP address of your [load balancer](#2-configure-load-balancer).

2. Validate that the `DNS A` record is working correctly. Run the following command from any terminal, replacing `HOSTNAME.DOMAIN.COM` with your chosen FQDN:

    `nslookup HOSTNAME.DOMAIN.COM`

    **Step Result:** Terminal displays output similar to the following:

    ```
    $ nslookup rancher.yourdomain.com
    Server:         YOUR_HOSTNAME_IP_ADDRESS
    Address:        YOUR_HOSTNAME_IP_ADDRESS#53

    Non-authoritative answer:
    Name:   rancher.yourdomain.com
    Address: HOSTNAME.DOMAIN.COM
    ```

<br/>

## 4. Install RKE

RKE (Rancher Kubernetes Engine) is a fast, versatile Kubernetes installer that you can use to install Kubernetes on your Linux hosts. We will use RKE to set up our cluster and run Rancher.

1. Follow the [RKE Install](https://rancher.com/docs/rke/latest/en/installation) instructions.

2. Confirm that RKE is now executable by running the following command:

    ```
    rke --version
    ```

## 5. Download RKE Config File Template

RKE uses a `.yml` config file to install and configure your Kubernetes cluster. There are two templates to choose from, depending on the SSL certificate you want to use.

1. Download one of the following templates, depending on the SSL certificate you're using:

    - [Template for self-signed certificate<br/>](installation/options/cluster-yml-templates/3-node-certificate)
    - [Template for certificate signed by recognized CA<br/>](installation/options/cluster-yml-templates/3-node-certificate-recognizedca)

2. Rename the file to `rancher-cluster.yml`.

## 6. Configure Nodes

Once you have the `rancher-cluster.yml` config file template, edit the nodes section to point toward your Linux hosts.

1. Open `rancher-cluster.yml` in your favorite text editor.

1. Update the `nodes` section with the information of your [Linux hosts](#1-provision-linux-hosts).

    For each node in your cluster, update the following placeholders: `IP_ADDRESS_X` and `USER`. The specified user should be able to access the Docker socket; you can test this by logging in with the specified user and running `docker ps`.

    >**Note:**
    > When using RHEL/CentOS, the SSH user can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565. See [Operating System Requirements](https://rancher.com/docs/rke/latest/en/installation/os#redhat-enterprise-linux-rhel-centos) for RHEL/CentOS specific requirements.

        nodes:
          # The IP address or hostname of the node
          - address: IP_ADDRESS_1
            # User that can log in to the node and has access to the Docker socket (i.e. can execute `docker ps` on the node)
            # When using RHEL/CentOS, this can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565
            user: USER
            role: [controlplane,etcd,worker]
            # Path to the SSH key that can be used to access the node with the specified user
            ssh_key_path: ~/.ssh/id_rsa
          - address: IP_ADDRESS_2
            user: USER
            role: [controlplane,etcd,worker]
            ssh_key_path: ~/.ssh/id_rsa
          - address: IP_ADDRESS_3
            user: USER
            role: [controlplane,etcd,worker]
            ssh_key_path: ~/.ssh/id_rsa

1. **Optional:** By default, `rancher-cluster.yml` is configured to take backup snapshots of your data. To disable these snapshots, change the `backup` directive setting to `false`, as depicted below.

        services:
          etcd:
            backup: false

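The Docker-socket requirement mentioned above can be checked from your workstation before running RKE. A dry-run sketch that prints the SSH commands to try (`IP_ADDRESS_*` and `USER` are the template's placeholders):

```shell
# Nodes from rancher-cluster.yml (placeholder addresses).
NODES="IP_ADDRESS_1 IP_ADDRESS_2 IP_ADDRESS_3"

for host in $NODES; do
  # Each printed command should succeed and list containers when run for real.
  echo "ssh -i ~/.ssh/id_rsa USER@$host docker ps"
done
```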
|
||||
|
||||
## 7. Configure Certificates

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.

Choose from the following options:
<details id="option-a">
<summary>Option A—Bring Your Own Certificate: Self-Signed</summary>

>**Prerequisites:**
>Create a self-signed certificate.
>
>- The certificate files must be in PEM format.
>- The certificate files must be encoded in [base64](#base64).
>- In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](../../../other-installation-methods/rancher-on-a-single-node-with-docker/certificate-troubleshooting.md)

1. In `kind: Secret` with `name: cattle-keys-ingress`:

   * Replace `<BASE64_CRT>` with the base64 encoded string of the Certificate file (usually called `cert.pem` or `domain.crt`)
   * Replace `<BASE64_KEY>` with the base64 encoded string of the Certificate Key file (usually called `key.pem` or `domain.key`)

   >**Note:**
   > The base64 encoded string should be on the same line as `tls.crt` or `tls.key`, without any newline at the beginning, in between or at the end.

   **Step Result:** After replacing the values, the file should look like the example below (the base64 encoded strings should be different):

   ```yaml
   ---
   apiVersion: v1
   kind: Secret
   metadata:
     name: cattle-keys-ingress
     namespace: cattle-system
   type: Opaque
   data:
     tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1RENDQWN5Z0F3SUJBZ0lKQUlHc25NeG1LeGxLTUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIzUmxjM1F0WTJFd0hoY05NVGd3TlRBMk1qRXdOREE1V2hjTk1UZ3dOekExTWpFd05EQTVXakFXTVJRdwpFZ1lEVlFRRERBdG9ZUzV5Ym1Ob2NpNXViRENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBTFJlMXdzekZSb2Rib2pZV05DSHA3UkdJaUVIMENDZ1F2MmdMRXNkUUNKZlcrUFEvVjM0NnQ3bSs3TFEKZXJaV3ZZMWpuY2VuWU5JSGRBU0VnU0ducWExYnhUSU9FaE0zQXpib3B0WDhjSW1OSGZoQlZETGdiTEYzUk0xaQpPM1JLTGdIS2tYSTMxZndjbU9zWGUwaElYQnpUbmxnM20vUzlXL3NTc0l1dDVwNENDUWV3TWlpWFhuUElKb21lCmpkS3VjSHFnMTlzd0YvcGVUalZrcVpuMkJHazZRaWFpMU41bldRV0pjcThTenZxTTViZElDaWlwYU9hWWQ3RFEKYWRTejV5dlF0YkxQNW4wTXpnOU43S3pGcEpvUys5QWdkWDI5cmZqV2JSekp3RzM5R3dRemN6VWtLcnZEb05JaQo0UFJHc01yclFNVXFSYjRSajNQOEJodEMxWXNDQXdFQUFhTTVNRGN3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFCkJBTUNCZUF3SFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdJR0NDc0dBUVVGQndNQk1BMEdDU3FHU0liM0RRRUIKQ3dVQUE0SUJBUUNKZm5PWlFLWkowTFliOGNWUW5Vdi9NZkRZVEJIQ0pZcGM4MmgzUGlXWElMQk1jWDhQRC93MgpoOUExNkE4NGNxODJuQXEvaFZYYy9JNG9yaFY5WW9jSEg5UlcvbGthTUQ2VEJVR0Q1U1k4S292MHpHQ1ROaDZ6Ci9wZTNqTC9uU0pYSjRtQm51czJheHFtWnIvM3hhaWpYZG9kMmd3eGVhTklvRjNLbHB2aGU3ZjRBNmpsQTM0MmkKVVlCZ09iN1F5KytRZWd4U1diSmdoSzg1MmUvUUhnU2FVSkN6NW1sNGc1WndnNnBTUXhySUhCNkcvREc4dElSYwprZDMxSk1qY25Fb1Rhc1Jyc1NwVmNGdXZyQXlXN2liakZyYzhienBNcE1obDVwYUZRcEZzMnIwaXpZekhwakFsCk5ZR2I2OHJHcjBwQkp3YU5DS2ErbCtLRTk4M3A3NDYwCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
     tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdEY3WEN6TVZHaDF1aU5oWTBJZW50RVlpSVFmUUlLQkMvYUFzU3gxQUlsOWI0OUQ5ClhmanEzdWI3c3RCNnRsYTlqV09keDZkZzBnZDBCSVNCSWFlcHJWdkZNZzRTRXpjRE51aW0xZnh3aVkwZCtFRlUKTXVCc3NYZEV6V0k3ZEVvdUFjcVJjamZWL0J5WTZ4ZDdTRWhjSE5PZVdEZWI5TDFiK3hLd2k2M21uZ0lKQjdBeQpLSmRlYzhnbWlaNk4wcTV3ZXFEWDJ6QVgrbDVPTldTcG1mWUVhVHBDSnFMVTNtZFpCWWx5cnhMTytvemx0MGdLCktLbG81cGgzc05CcDFMUG5LOUMxc3MvbWZRek9EMDNzck1Xa21oTDcwQ0IxZmIydCtOWnRITW5BYmYwYkJETnoKTlNRcXU4T2cwaUxnOUVhd3l1dEF4U3BGdmhHUGMvd0dHMExWaXdJREFRQUJBb0lCQUJKYUErOHp4MVhjNEw0egpwUFd5bDdHVDRTMFRLbTNuWUdtRnZudjJBZXg5WDFBU2wzVFVPckZyTnZpK2xYMnYzYUZoSFZDUEN4N1RlMDVxClhPa2JzZnZkZG5iZFQ2RjgyMnJleVByRXNINk9TUnBWSzBmeDVaMDQwVnRFUDJCWm04eTYyNG1QZk1vbDdya2MKcm9Kd09rOEVpUHZZekpsZUd0bTAwUm1sRysyL2c0aWJsOTVmQXpyc1MvcGUyS3ZoN2NBVEtIcVh6MjlpUmZpbApiTGhBamQwcEVSMjNYU0hHR1ZqRmF3amNJK1c2L2RtbDZURDhrSzFGaUtldmJKTlREeVNXQnpPbXRTYUp1K01JCm9iUnVWWG4yZVNoamVGM1BYcHZRMWRhNXdBa0dJQWxOWjRHTG5QU2ZwVmJyU0plU3RrTGNzdEJheVlJS3BWZVgKSVVTTHM0RUNnWUVBMmNnZUE2WHh0TXdFNU5QWlNWdGhzbXRiYi9YYmtsSTdrWHlsdk5zZjFPdXRYVzkybVJneQpHcEhUQ0VubDB0Z1p3T081T1FLNjdFT3JUdDBRWStxMDJzZndwcmgwNFZEVGZhcW5QNTBxa3BmZEJLQWpmanEyCjFoZDZMd2hLeDRxSm9aelp2VkowV0lvR1ZLcjhJSjJOWGRTUVlUanZUZHhGczRTamdqNFFiaEVDZ1lFQTFBWUUKSEo3eVlza2EvS2V2OVVYbmVrSTRvMm5aYjJ1UVZXazRXSHlaY2NRN3VMQVhGY3lJcW5SZnoxczVzN3RMTzJCagozTFZNUVBzazFNY25oTTl4WE4vQ3ZDTys5b2t0RnNaMGJqWFh6NEJ5V2lFNHJPS1lhVEFwcDVsWlpUT3ZVMWNyCm05R3NwMWJoVDVZb2RaZ3IwUHQyYzR4U2krUVlEWnNFb2lFdzNkc0NnWUVBcVJLYWNweWZKSXlMZEJjZ0JycGkKQTRFalVLMWZsSjR3enNjbGFKUDVoM1NjZUFCejQzRU1YT0kvSXAwMFJsY3N6em83N3cyMmpud09mOEJSM0RBMwp6ZTRSWDIydWw4b0hGdldvdUZOTTNOZjNaNExuYXpVc0F0UGhNS2hRWGMrcEFBWGthUDJkZzZ0TU5PazFxaUNHCndvU212a1BVVE84b1ViRTB1NFZ4ZmZFQ2dZQUpPdDNROVNadUlIMFpSSitIV095enlOQTRaUEkvUkhwN0RXS1QKajVFS2Y5VnR1OVMxY1RyOTJLVVhITXlOUTNrSjg2OUZPMnMvWk85OGg5THptQ2hDTjhkOWN6enI5SnJPNUFMTApqWEtBcVFIUlpLTFgrK0ZRcXZVVlE3cTlpaHQyMEZPb3E5OE5SZDMzSGYxUzZUWDNHZ3RWQ21YSml6dDAxQ3ZHCmR4VnVnd0tCZ0M2Mlp0b0RLb3JyT2hvdTBPelprK2YwQS9rNDJBOENiL29VMGpwSzZtdmxEWmNYdUF1QVZTVXIKNXJCZjRVYmdVYndqa1ZWSFR6LzdDb1BWSjUvVUxJWk1Db1RUNFprNTZXWDk4ZE93Q3VTVFpZYnlBbDZNS1BBZApTZEpuVVIraEpnSVFDVGJ4K1dzYnh2d0FkbWErWUhtaVlPRzZhSklXMXdSd1VGOURLUEhHCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
   ```
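The long strings above are just the base64 of the PEM files with line wrapping removed. A small helper along these lines can produce values in the required single-line form (`encode_pem` and the file names are illustrative, not part of the Rancher tooling):

```shell
# Print the base64 of a PEM file as one line with no embedded newlines,
# which is the form the Secret's tls.crt / tls.key values must take.
encode_pem() {
  # GNU coreutils: -w0 disables line wrapping; fall back to tr for BSD base64.
  base64 -w0 "$1" 2>/dev/null || base64 "$1" | tr -d '\n'
}

# Usage (cert.pem / key.pem are placeholder file names):
#   encode_pem cert.pem   # value for tls.crt
#   encode_pem key.pem    # value for tls.key
```

You can sanity-check a value by piping it back through `base64 -d` and comparing it to the original file.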
2. In `kind: Secret` with `name: cattle-keys-server`, replace `<BASE64_CA>` with the base64 encoded string of the CA Certificate file (usually called `ca.pem` or `ca.crt`).

   >**Note:**
   > The base64 encoded string should be on the same line as `cacerts.pem`, without any newline at the beginning, in between or at the end.

   **Step Result:** The file should look like the example below (the base64 encoded string should be different):

   ```yaml
   ---
   apiVersion: v1
   kind: Secret
   metadata:
     name: cattle-keys-server
     namespace: cattle-system
   type: Opaque
   data:
     cacerts.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNvRENDQVlnQ0NRRHVVWjZuMEZWeU16QU5CZ2txaGtpRzl3MEJBUXNGQURBU01SQXdEZ1lEVlFRRERBZDAKWlhOMExXTmhNQjRYRFRFNE1EVXdOakl4TURRd09Wb1hEVEU0TURjd05USXhNRFF3T1Zvd0VqRVFNQTRHQTFVRQpBd3dIZEdWemRDMWpZVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNQmpBS3dQCndhRUhwQTdaRW1iWWczaTNYNlppVmtGZFJGckJlTmFYTHFPL2R0RUdmWktqYUF0Wm45R1VsckQxZUlUS3UzVHgKOWlGVlV4Mmo1Z0tyWmpwWitCUnFiZ1BNbk5hS1hocmRTdDRtUUN0VFFZdGRYMVFZS0pUbWF5NU45N3FoNTZtWQprMllKRkpOWVhHWlJabkdMUXJQNk04VHZramF0ZnZOdmJ0WmtkY2orYlY3aWhXanp2d2theHRUVjZlUGxuM2p5CnJUeXBBTDliYnlVcHlad3E2MWQvb0Q4VUtwZ2lZM1dOWmN1YnNvSjhxWlRsTnN6UjVadEFJV0tjSE5ZbE93d2oKaG41RE1tSFpwZ0ZGNW14TU52akxPRUc0S0ZRU3laYlV2QzlZRUhLZTUxbGVxa1lmQmtBZWpPY002TnlWQUh1dApuay9DMHpXcGdENkIwbkVDQXdFQUFUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFHTCtaNkRzK2R4WTZsU2VBClZHSkMvdzE1bHJ2ZXdia1YxN3hvcmlyNEMxVURJSXB6YXdCdFJRSGdSWXVtblVqOGo4T0hFWUFDUEthR3BTVUsKRDVuVWdzV0pMUUV0TDA2eTh6M3A0MDBrSlZFZW9xZlVnYjQrK1JLRVJrWmowWXR3NEN0WHhwOVMzVkd4NmNOQQozZVlqRnRQd2hoYWVEQmdma1hXQWtISXFDcEsrN3RYem9pRGpXbi8walI2VDcrSGlaNEZjZ1AzYnd3K3NjUDIyCjlDQVZ1ZFg4TWpEQ1hTcll0Y0ZINllBanlCSTJjbDhoSkJqa2E3aERpVC9DaFlEZlFFVFZDM3crQjBDYjF1NWcKdE03Z2NGcUw4OVdhMnp5UzdNdXk5bEthUDBvTXl1Ty82Tm1wNjNsVnRHeEZKSFh4WTN6M0lycGxlbTNZQThpTwpmbmlYZXc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
   ```

</details>
<details id="option-b">
<summary>Option B—Bring Your Own Certificate: Signed by Recognized CA</summary>

If you are using a certificate signed by a recognized certificate authority, you need to generate a base64 encoded string for the Certificate file and the Certificate Key file. Make sure that your certificate file includes all the intermediate certificates in the chain; the order is your own certificate first, followed by the intermediates. Refer to the documentation of your CSP (Certificate Service Provider) to see which intermediate certificate(s) need to be included.

In the `kind: Secret` with `name: cattle-keys-ingress`:

* Replace `<BASE64_CRT>` with the base64 encoded string of the Certificate file (usually called `cert.pem` or `domain.crt`)
* Replace `<BASE64_KEY>` with the base64 encoded string of the Certificate Key file (usually called `key.pem` or `domain.key`)

>**Note:**
> The base64 encoded string should be on the same line as `tls.crt` or `tls.key`, without any newline at the beginning, in between or at the end.

After replacing the values, the file should look like the example below (the base64 encoded strings should be different):

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: cattle-keys-ingress
  namespace: cattle-system
type: Opaque
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1RENDQWN5Z0F3SUJBZ0lKQUlHc25NeG1LeGxLTUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIzUmxjM1F0WTJFd0hoY05NVGd3TlRBMk1qRXdOREE1V2hjTk1UZ3dOekExTWpFd05EQTVXakFXTVJRdwpFZ1lEVlFRRERBdG9ZUzV5Ym1Ob2NpNXViRENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBTFJlMXdzekZSb2Rib2pZV05DSHA3UkdJaUVIMENDZ1F2MmdMRXNkUUNKZlcrUFEvVjM0NnQ3bSs3TFEKZXJaV3ZZMWpuY2VuWU5JSGRBU0VnU0ducWExYnhUSU9FaE0zQXpib3B0WDhjSW1OSGZoQlZETGdiTEYzUk0xaQpPM1JLTGdIS2tYSTMxZndjbU9zWGUwaElYQnpUbmxnM20vUzlXL3NTc0l1dDVwNENDUWV3TWlpWFhuUElKb21lCmpkS3VjSHFnMTlzd0YvcGVUalZrcVpuMkJHazZRaWFpMU41bldRV0pjcThTenZxTTViZElDaWlwYU9hWWQ3RFEKYWRTejV5dlF0YkxQNW4wTXpnOU43S3pGcEpvUys5QWdkWDI5cmZqV2JSekp3RzM5R3dRemN6VWtLcnZEb05JaQo0UFJHc01yclFNVXFSYjRSajNQOEJodEMxWXNDQXdFQUFhTTVNRGN3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFCkJBTUNCZUF3SFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdJR0NDc0dBUVVGQndNQk1BMEdDU3FHU0liM0RRRUIKQ3dVQUE0SUJBUUNKZm5PWlFLWkowTFliOGNWUW5Vdi9NZkRZVEJIQ0pZcGM4MmgzUGlXWElMQk1jWDhQRC93MgpoOUExNkE4NGNxODJuQXEvaFZYYy9JNG9yaFY5WW9jSEg5UlcvbGthTUQ2VEJVR0Q1U1k4S292MHpHQ1ROaDZ6Ci9wZTNqTC9uU0pYSjRtQm51czJheHFtWnIvM3hhaWpYZG9kMmd3eGVhTklvRjNLbHB2aGU3ZjRBNmpsQTM0MmkKVVlCZ09iN1F5KytRZWd4U1diSmdoSzg1MmUvUUhnU2FVSkN6NW1sNGc1WndnNnBTUXhySUhCNkcvREc4dElSYwprZDMxSk1qY25Fb1Rhc1Jyc1NwVmNGdXZyQXlXN2liakZyYzhienBNcE1obDVwYUZRcEZzMnIwaXpZekhwakFsCk5ZR2I2OHJHcjBwQkp3YU5DS2ErbCtLRTk4M3A3NDYwCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdEY3WEN6TVZHaDF1aU5oWTBJZW50RVlpSVFmUUlLQkMvYUFzU3gxQUlsOWI0OUQ5ClhmanEzdWI3c3RCNnRsYTlqV09keDZkZzBnZDBCSVNCSWFlcHJWdkZNZzRTRXpjRE51aW0xZnh3aVkwZCtFRlUKTXVCc3NYZEV6V0k3ZEVvdUFjcVJjamZWL0J5WTZ4ZDdTRWhjSE5PZVdEZWI5TDFiK3hLd2k2M21uZ0lKQjdBeQpLSmRlYzhnbWlaNk4wcTV3ZXFEWDJ6QVgrbDVPTldTcG1mWUVhVHBDSnFMVTNtZFpCWWx5cnhMTytvemx0MGdLCktLbG81cGgzc05CcDFMUG5LOUMxc3MvbWZRek9EMDNzck1Xa21oTDcwQ0IxZmIydCtOWnRITW5BYmYwYkJETnoKTlNRcXU4T2cwaUxnOUVhd3l1dEF4U3BGdmhHUGMvd0dHMExWaXdJREFRQUJBb0lCQUJKYUErOHp4MVhjNEw0egpwUFd5bDdHVDRTMFRLbTNuWUdtRnZudjJBZXg5WDFBU2wzVFVPckZyTnZpK2xYMnYzYUZoSFZDUEN4N1RlMDVxClhPa2JzZnZkZG5iZFQ2RjgyMnJleVByRXNINk9TUnBWSzBmeDVaMDQwVnRFUDJCWm04eTYyNG1QZk1vbDdya2MKcm9Kd09rOEVpUHZZekpsZUd0bTAwUm1sRysyL2c0aWJsOTVmQXpyc1MvcGUyS3ZoN2NBVEtIcVh6MjlpUmZpbApiTGhBamQwcEVSMjNYU0hHR1ZqRmF3amNJK1c2L2RtbDZURDhrSzFGaUtldmJKTlREeVNXQnpPbXRTYUp1K01JCm9iUnVWWG4yZVNoamVGM1BYcHZRMWRhNXdBa0dJQWxOWjRHTG5QU2ZwVmJyU0plU3RrTGNzdEJheVlJS3BWZVgKSVVTTHM0RUNnWUVBMmNnZUE2WHh0TXdFNU5QWlNWdGhzbXRiYi9YYmtsSTdrWHlsdk5zZjFPdXRYVzkybVJneQpHcEhUQ0VubDB0Z1p3T081T1FLNjdFT3JUdDBRWStxMDJzZndwcmgwNFZEVGZhcW5QNTBxa3BmZEJLQWpmanEyCjFoZDZMd2hLeDRxSm9aelp2VkowV0lvR1ZLcjhJSjJOWGRTUVlUanZUZHhGczRTamdqNFFiaEVDZ1lFQTFBWUUKSEo3eVlza2EvS2V2OVVYbmVrSTRvMm5aYjJ1UVZXazRXSHlaY2NRN3VMQVhGY3lJcW5SZnoxczVzN3RMTzJCagozTFZNUVBzazFNY25oTTl4WE4vQ3ZDTys5b2t0RnNaMGJqWFh6NEJ5V2lFNHJPS1lhVEFwcDVsWlpUT3ZVMWNyCm05R3NwMWJoVDVZb2RaZ3IwUHQyYzR4U2krUVlEWnNFb2lFdzNkc0NnWUVBcVJLYWNweWZKSXlMZEJjZ0JycGkKQTRFalVLMWZsSjR3enNjbGFKUDVoM1NjZUFCejQzRU1YT0kvSXAwMFJsY3N6em83N3cyMmpud09mOEJSM0RBMwp6ZTRSWDIydWw4b0hGdldvdUZOTTNOZjNaNExuYXpVc0F0UGhNS2hRWGMrcEFBWGthUDJkZzZ0TU5PazFxaUNHCndvU212a1BVVE84b1ViRTB1NFZ4ZmZFQ2dZQUpPdDNROVNadUlIMFpSSitIV095enlOQTRaUEkvUkhwN0RXS1QKajVFS2Y5VnR1OVMxY1RyOTJLVVhITXlOUTNrSjg2OUZPMnMvWk85OGg5THptQ2hDTjhkOWN6enI5SnJPNUFMTApqWEtBcVFIUlpLTFgrK0ZRcXZVVlE3cTlpaHQyMEZPb3E5OE5SZDMzSGYxUzZUWDNHZ3RWQ21YSml6dDAxQ3ZHCmR4VnVnd0tCZ0M2Mlp0b0RLb3JyT2hvdTBPelprK2YwQS9rNDJBOENiL29VMGpwSzZtdmxEWmNYdUF1QVZTVXIKNXJCZjRVYmdVYndqa1ZWSFR6LzdDb1BWSjUvVUxJWk1Db1RUNFprNTZXWDk4ZE93Q3VTVFpZYnlBbDZNS1BBZApTZEpuVVIraEpnSVFDVGJ4K1dzYnh2d0FkbWErWUhtaVlPRzZhSklXMXdSd1VGOURLUEhHCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
```

</details>
## 8. Configure FQDN

There are two references to `<FQDN>` in the config file (one in this step and one in the next). Both need to be replaced with the FQDN chosen in [Configure DNS](#3-configure-dns).

In the `kind: Ingress` with `name: cattle-ingress-http`:

* Replace `<FQDN>` with the FQDN chosen in [Configure DNS](#3-configure-dns).

After replacing `<FQDN>` with the FQDN chosen in [Configure DNS](#3-configure-dns), the file should look like the example below (`rancher.yourdomain.com` is the FQDN used in this example):

```yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: cattle-system
  name: cattle-ingress-http
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"   # Max time in seconds for ws to remain shell window open
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"   # Max time in seconds for ws to remain shell window open
spec:
  rules:
  - host: rancher.yourdomain.com
    http:
      paths:
      - backend:
          serviceName: cattle-service
          servicePort: 80
  tls:
  - secretName: cattle-keys-ingress
    hosts:
    - rancher.yourdomain.com
```

Save the `.yml` file and close it.
## 9. Configure Rancher version

The last reference that needs to be replaced is `<RANCHER_VERSION>`. Replace it with a Rancher version that is marked as stable. The latest stable release of Rancher can be found in the [GitHub README](https://github.com/rancher/rancher/blob/master/README.md). Make sure the version is an actual version number, not a named tag like `stable` or `latest`. The example below shows the version configured to `v2.0.6`.

```yaml
spec:
  serviceAccountName: cattle-admin
  containers:
  - image: rancher/rancher:v2.0.6
    imagePullPolicy: Always
```
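If you prefer not to edit the file by hand, the placeholder can be substituted with `sed`. The snippet below is a sketch that operates on a tiny demo file standing in for `rancher-cluster.yml` (GNU `sed`; on macOS use `sed -i ''`):

```shell
# Create a demo file containing the placeholder, then pin the version.
printf '      - image: rancher/rancher:<RANCHER_VERSION>\n' > demo-cluster.yml
sed -i 's/<RANCHER_VERSION>/v2.0.6/' demo-cluster.yml
cat demo-cluster.yml
```

Run the same `sed` expression against your real `rancher-cluster.yml` once you have picked a stable version.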
## 10. Back Up Your RKE Config File

After you close your `.yml` file, back it up to a secure location. You can use this file again when it's time to upgrade Rancher.
## 11. Run RKE

With all configuration in place, use RKE to launch Rancher. You can complete this action by running the `rke up` command and using the `--config` parameter to point toward your config file.

1. From your workstation, make sure `rancher-cluster.yml` and the downloaded `rke` binary are in the same directory.

2. Open a Terminal instance. Change to the directory that contains your config file and `rke`.

3. Enter the `rke up` command listed below.

   ```
   rke up --config rancher-cluster.yml
   ```

   **Step Result:** The output should be similar to the snippet below:

   ```
   INFO[0000] Building Kubernetes cluster
   INFO[0000] [dialer] Setup tunnel for host [1.1.1.1]
   INFO[0000] [network] Deploying port listener containers
   INFO[0000] [network] Pulling image [alpine:latest] on host [1.1.1.1]
   ...
   INFO[0101] Finished building Kubernetes cluster successfully
   ```
## 12. Back Up Auto-Generated Config File

During installation, RKE automatically generates a config file named `kube_config_rancher-cluster.yml` in the same directory as the RKE binary. Copy this file and back it up to a safe location. You'll use this file later when upgrading Rancher Server.

## What's Next?

You have a couple of options:

- Create a backup of your Rancher Server in case of a disaster scenario: [High Availability Back Up and Restore](installation/backups-and-restoration/ha-backup-and-restoration).
- Create a Kubernetes cluster: [Provisioning Kubernetes Clusters](../../../../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md).

<br/>

## FAQ and Troubleshooting

{{< ssl_faq_ha >}}
---
title: Kubernetes Install with External Load Balancer (HTTPS/Layer 7)
weight: 276
aliases:
  - /rancher/v2.0-v2.4/en/installation/ha/rke-add-on/layer-7-lb
  - /rancher/v2.0-v2.4/en/installation/options/rke-add-on/layer-7-lb/
  - /rancher/v2.0-v2.4/en/installation/options/rke-add-on/layer-7-lb
  - /rancher/v2.x/en/installation/resources/advanced/rke-add-on/layer-7-lb/
---
> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
> Please use the Rancher Helm chart to install Rancher on a Kubernetes cluster. For details, see the [Kubernetes Install](../../../../../pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster.md).
>
> If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on](upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the Helm chart.
This procedure walks you through setting up a 3-node cluster using the Rancher Kubernetes Engine (RKE). The cluster's sole purpose is running pods for Rancher. The setup is based on:

- Layer 7 load balancer with SSL termination (HTTPS)
- [NGINX Ingress controller (HTTP)](https://kubernetes.github.io/ingress-nginx/)

In an HA setup that uses a layer 7 load balancer, the load balancer accepts Rancher client connections over the HTTP protocol (i.e., the application level). This application-level access allows the load balancer to read client requests and then redirect them to cluster nodes using logic that optimally distributes load.

<sup>Rancher installed on a Kubernetes cluster with layer 7 load balancer, depicting SSL termination at load balancer</sup>

## Installation Outline

Installation of Rancher in a high-availability configuration involves multiple procedures. Review this outline to learn about each procedure you need to complete.

<!-- TOC -->

- [1. Provision Linux Hosts](#1-provision-linux-hosts)
- [2. Configure Load Balancer](#2-configure-load-balancer)
- [3. Configure DNS](#3-configure-dns)
- [4. Install RKE](#4-install-rke)
- [5. Download RKE Config File Template](#5-download-rke-config-file-template)
- [6. Configure Nodes](#6-configure-nodes)
- [7. Configure Certificates](#7-configure-certificates)
- [8. Configure FQDN](#8-configure-fqdn)
- [9. Configure Rancher version](#9-configure-rancher-version)
- [10. Back Up Your RKE Config File](#10-back-up-your-rke-config-file)
- [11. Run RKE](#11-run-rke)
- [12. Back Up Auto-Generated Config File](#12-back-up-auto-generated-config-file)

<!-- /TOC -->
## 1. Provision Linux Hosts

Provision three Linux hosts according to our [Requirements](../../../../../pages-for-subheaders/installation-requirements.md).

## 2. Configure Load Balancer

When using a load balancer in front of Rancher, there's no need for the container to redirect port communication from port 80 or port 443. By passing the header `X-Forwarded-Proto: https`, this redirect is disabled. This is the expected configuration when terminating SSL externally.

The load balancer has to be configured to support the following:

* **WebSocket** connections
* **SPDY** / **HTTP/2** protocols
* Passing / setting the following headers:

| Header | Value | Description |
|---------------------|-----------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `Host` | FQDN used to reach Rancher. | To identify the server requested by the client. |
| `X-Forwarded-Proto` | `https` | To identify the protocol that a client used to connect to the load balancer.<br /><br/>**Note:** If this header is present, `rancher/rancher` does not redirect HTTP to HTTPS. |
| `X-Forwarded-Port` | Port used to reach Rancher. | To identify the port that the client used to connect to the load balancer. |
| `X-Forwarded-For` | IP of the client connection. | To identify the originating IP address of a client. |

Health checks can be executed on the `/healthz` endpoint of the node; this endpoint returns HTTP 200.
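As a sketch of what these requirements translate to, an NGINX layer 7 configuration could look like the fragment below. The IP addresses, FQDN, and certificate paths are placeholders, and this is not a complete configuration; see the example configurations linked below for full, tested versions:

```nginx
upstream rancher {
    server IP_ADDRESS_1:80;
    server IP_ADDRESS_2:80;
    server IP_ADDRESS_3:80;
}

server {
    listen 443 ssl http2;                  # HTTP/2 covers SPDY-era clients
    server_name rancher.yourdomain.com;
    ssl_certificate     /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/key.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;                     # disables Rancher's HTTP-to-HTTPS redirect
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;                                       # required for WebSocket
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 1800s;                                     # keep long-lived shell sessions open
        proxy_pass http://rancher;
    }
}
```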
We have example configurations for the following load balancers:

* [Amazon ELB configuration](../../../../../how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md)
* [NGINX configuration](../../../../../how-to-guides/new-user-guides/infrastructure-setup/nginx-load-balancer.md)
## 3. Configure DNS

Choose a fully qualified domain name (FQDN) that you want to use to access Rancher (e.g., `rancher.yourdomain.com`).

1. Log into your DNS server and create a `DNS A` record that points to the IP address of your [load balancer](#2-configure-load-balancer).

2. Validate that the `DNS A` record is working correctly. Run the following command from any terminal, replacing `HOSTNAME.DOMAIN.COM` with your chosen FQDN:

   `nslookup HOSTNAME.DOMAIN.COM`

   **Step Result:** Terminal displays output similar to the following:

   ```
   $ nslookup rancher.yourdomain.com
   Server:         YOUR_HOSTNAME_IP_ADDRESS
   Address:        YOUR_HOSTNAME_IP_ADDRESS#53

   Non-authoritative answer:
   Name:   rancher.yourdomain.com
   Address: IP_ADDRESS_OF_LOAD_BALANCER
   ```
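For scripting, the same check can be done non-interactively. A minimal sketch (the FQDN is a placeholder, and `getent` assumes a Linux host):

```shell
# Look up the FQDN; prints "<IP> <name>" when the A record resolves,
# otherwise falls through to the message below.
# rancher.yourdomain.com is a placeholder for your chosen FQDN.
getent hosts rancher.yourdomain.com || echo "A record not published yet"
```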
<br/>
## 4. Install RKE

RKE (Rancher Kubernetes Engine) is a fast, versatile Kubernetes installer that you can use to install Kubernetes on your Linux hosts. We will use RKE to set up our cluster and run Rancher.

1. Follow the [RKE Install](https://rancher.com/docs/rke/latest/en/installation) instructions.

2. Confirm that RKE is now executable by running the following command:

   ```
   rke --version
   ```
## 5. Download RKE Config File Template

RKE uses a YAML config file to install and configure your Kubernetes cluster. There are two templates to choose from, depending on the SSL certificate you want to use.

1. Download one of the following templates, depending on the SSL certificate you're using.

   - [Template for self-signed certificate<br/> `3-node-externalssl-certificate.yml`](installation/options/cluster-yml-templates/3-node-externalssl-certificate)
   - [Template for certificate signed by recognized CA<br/> `3-node-externalssl-recognizedca.yml`](installation/options/cluster-yml-templates/3-node-externalssl-recognizedca)

2. Rename the file to `rancher-cluster.yml`.
## 6. Configure Nodes

Once you have the `rancher-cluster.yml` config file template, edit the `nodes` section to point toward your Linux hosts.

1. Open `rancher-cluster.yml` in your favorite text editor.

1. Update the `nodes` section with the information of your [Linux hosts](#1-provision-linux-hosts).

   For each node in your cluster, update the `IP_ADDRESS_X` and `USER` placeholders. The specified user must be able to access the Docker socket; you can test this by logging in as that user and running `docker ps`.

   >**Note:**
   >
   >When using RHEL/CentOS, the SSH user can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565. See [Operating System Requirements](https://rancher.com/docs/rke/latest/en/installation/os#redhat-enterprise-linux-rhel-centos) for RHEL/CentOS specific requirements.

   ```yaml
   nodes:
     # The IP address or hostname of the node
     - address: IP_ADDRESS_1
       # User that can log in to the node and has access to the Docker socket (i.e. can execute `docker ps` on the node)
       # When using RHEL/CentOS, this can't be root due to https://bugzilla.redhat.com/show_bug.cgi?id=1527565
       user: USER
       role: [controlplane,etcd,worker]
       # Path to the SSH key that can be used to access the node with the specified user
       ssh_key_path: ~/.ssh/id_rsa
     - address: IP_ADDRESS_2
       user: USER
       role: [controlplane,etcd,worker]
       ssh_key_path: ~/.ssh/id_rsa
     - address: IP_ADDRESS_3
       user: USER
       role: [controlplane,etcd,worker]
       ssh_key_path: ~/.ssh/id_rsa
   ```

1. **Optional:** By default, `rancher-cluster.yml` is configured to take backup snapshots of your data. To disable these snapshots, change the `backup` directive setting to `false`, as shown below.

   ```yaml
   services:
     etcd:
       backup: false
   ```
## 7. Configure Certificates

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.

Choose from the following options:
<details id="option-a">
<summary>Option A—Bring Your Own Certificate: Self-Signed</summary>

>**Prerequisites:**
>Create a self-signed certificate.
>
>- The certificate files must be in PEM format.
>- The certificate files must be encoded in [base64](#base64).
>- In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](../../../other-installation-methods/rancher-on-a-single-node-with-docker/certificate-troubleshooting.md)

In `kind: Secret` with `name: cattle-keys-server`, replace `<BASE64_CA>` with the base64 encoded string of the CA Certificate file (usually called `ca.pem` or `ca.crt`).

>**Note:** The base64 encoded string should be on the same line as `cacerts.pem`, without any newline at the beginning, in between or at the end.

After replacing the value, the file should look like the example below (the base64 encoded string should be different):

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: cattle-keys-server
  namespace: cattle-system
type: Opaque
data:
  cacerts.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNvRENDQVlnQ0NRRHVVWjZuMEZWeU16QU5CZ2txaGtpRzl3MEJBUXNGQURBU01SQXdEZ1lEVlFRRERBZDAKWlhOMExXTmhNQjRYRFRFNE1EVXdOakl4TURRd09Wb1hEVEU0TURjd05USXhNRFF3T1Zvd0VqRVFNQTRHQTFVRQpBd3dIZEdWemRDMWpZVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNQmpBS3dQCndhRUhwQTdaRW1iWWczaTNYNlppVmtGZFJGckJlTmFYTHFPL2R0RUdmWktqYUF0Wm45R1VsckQxZUlUS3UzVHgKOWlGVlV4Mmo1Z0tyWmpwWitCUnFiZ1BNbk5hS1hocmRTdDRtUUN0VFFZdGRYMVFZS0pUbWF5NU45N3FoNTZtWQprMllKRkpOWVhHWlJabkdMUXJQNk04VHZramF0ZnZOdmJ0WmtkY2orYlY3aWhXanp2d2theHRUVjZlUGxuM2p5CnJUeXBBTDliYnlVcHlad3E2MWQvb0Q4VUtwZ2lZM1dOWmN1YnNvSjhxWlRsTnN6UjVadEFJV0tjSE5ZbE93d2oKaG41RE1tSFpwZ0ZGNW14TU52akxPRUc0S0ZRU3laYlV2QzlZRUhLZTUxbGVxa1lmQmtBZWpPY002TnlWQUh1dApuay9DMHpXcGdENkIwbkVDQXdFQUFUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFHTCtaNkRzK2R4WTZsU2VBClZHSkMvdzE1bHJ2ZXdia1YxN3hvcmlyNEMxVURJSXB6YXdCdFJRSGdSWXVtblVqOGo4T0hFWUFDUEthR3BTVUsKRDVuVWdzV0pMUUV0TDA2eTh6M3A0MDBrSlZFZW9xZlVnYjQrK1JLRVJrWmowWXR3NEN0WHhwOVMzVkd4NmNOQQozZVlqRnRQd2hoYWVEQmdma1hXQWtISXFDcEsrN3RYem9pRGpXbi8walI2VDcrSGlaNEZjZ1AzYnd3K3NjUDIyCjlDQVZ1ZFg4TWpEQ1hTcll0Y0ZINllBanlCSTJjbDhoSkJqa2E3aERpVC9DaFlEZlFFVFZDM3crQjBDYjF1NWcKdE03Z2NGcUw4OVdhMnp5UzdNdXk5bEthUDBvTXl1Ty82Tm1wNjNsVnRHeEZKSFh4WTN6M0lycGxlbTNZQThpTwpmbmlYZXc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
```

</details>
<details id="option-b">
<summary>Option B—Bring Your Own Certificate: Signed by Recognized CA</summary>

If you are using a certificate signed by a recognized certificate authority, you don't need to perform any steps in this part.

</details>
## 8. Configure FQDN

There is one reference to `<FQDN>` in the RKE config file. Replace it with the FQDN you chose in [3. Configure DNS](#3-configure-dns).

1. Open `rancher-cluster.yml`.

2. In the `kind: Ingress` with `name: cattle-ingress-http`, replace `<FQDN>` with the FQDN chosen in [3. Configure DNS](#3-configure-dns).

   **Step Result:** After replacing the value, the file should look like the example below (`rancher.yourdomain.com` is the FQDN used in this example):

   ```yaml
   apiVersion: extensions/v1beta1
   kind: Ingress
   metadata:
     namespace: cattle-system
     name: cattle-ingress-http
     annotations:
       nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
       nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"   # Max time in seconds for ws to remain shell window open
       nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"   # Max time in seconds for ws to remain shell window open
   spec:
     rules:
     - host: rancher.yourdomain.com
       http:
         paths:
         - backend:
             serviceName: cattle-service
             servicePort: 80
   ```

3. Save the file and close it.
## 9. Configure Rancher version

The last reference that needs to be replaced is `<RANCHER_VERSION>`. Replace it with a Rancher version that is marked as stable. The latest stable release of Rancher can be found in the [GitHub README](https://github.com/rancher/rancher/blob/master/README.md). Make sure the version is an actual version number, not a named tag like `stable` or `latest`. The example below shows the version configured to `v2.0.6`.

```yaml
spec:
  serviceAccountName: cattle-admin
  containers:
  - image: rancher/rancher:v2.0.6
    imagePullPolicy: Always
```
## 10. Back Up Your RKE Config File
|
||||
|
||||
After you close your RKE config file, `rancher-cluster.yml`, back it up to a secure location. You can use this file again when it's time to upgrade Rancher.
|
||||
|
||||
## 11. Run RKE

With all configuration in place, use RKE to launch Rancher. You can complete this action by running the `rke up` command and using the `--config` parameter to point toward your config file.

1. From your workstation, make sure `rancher-cluster.yml` and the downloaded `rke` binary are in the same directory.

2. Open a Terminal instance. Change to the directory that contains your config file and `rke`.

3. Enter the `rke up` command listed below.

   ```
   rke up --config rancher-cluster.yml
   ```

   **Step Result:** The output should be similar to the snippet below:

   ```
   INFO[0000] Building Kubernetes cluster
   INFO[0000] [dialer] Setup tunnel for host [1.1.1.1]
   INFO[0000] [network] Deploying port listener containers
   INFO[0000] [network] Pulling image [alpine:latest] on host [1.1.1.1]
   ...
   INFO[0101] Finished building Kubernetes cluster successfully
   ```
## 12. Back Up Auto-Generated Config File
|
||||
|
||||
During installation, RKE automatically generates a config file named `kube_config_rancher-cluster.yml` in the same directory as the `rancher-cluster.yml` file. Copy this file and back it up to a safe location. You'll use this file later when upgrading Rancher Server.
|
||||
|
||||
## What's Next?
|
||||
|
||||
- **Recommended:** Review [Creating Backups—High Availability Back Up and Restoration](backups/backups/ha-backups/) to learn how to backup your Rancher Server in case of a disaster scenario.
|
||||
- Create a Kubernetes cluster: [Creating a Cluster](tasks/clusters/creating-a-cluster/).
|
||||
|
||||
<br/>
|
||||
|
||||
## FAQ and Troubleshooting
|
||||
|
||||
{{< ssl_faq_ha >}}
+42
@@ -0,0 +1,42 @@
---
title: Tuning etcd for Large Installations
weight: 2
aliases:
- /rancher/v2.0-v2.4/en/installation/options/etcd
---

When running larger Rancher installations with 15 or more clusters, it is recommended to increase the keyspace for etcd beyond the default of 2 GB. The maximum setting is 8 GB, and the host should have enough RAM to keep the entire dataset in memory. When increasing this value, you should also increase the size of the host. The keyspace size can also be adjusted in smaller installations if you anticipate a high rate of change of pods during the garbage collection interval.

The etcd data set is automatically cleaned up by Kubernetes on a five-minute interval. There are situations, e.g. deployment thrashing, where enough events are written to etcd and deleted before garbage collection occurs, causing the keyspace to fill up. If you see `mvcc: database space exceeded` errors in the etcd logs or Kubernetes API server logs, you should consider increasing the keyspace size. This can be accomplished by setting the [quota-backend-bytes](https://etcd.io/docs/v3.4.0/op-guide/maintenance/#space-quota) setting on the etcd servers.

### Example: This snippet of the RKE cluster.yml file increases the keyspace size to 5 GB

```yaml
# RKE cluster.yml
---
services:
  etcd:
    extra_args:
      quota-backend-bytes: 5368709120
```
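The `quota-backend-bytes` setting takes raw bytes, so 5 GB (as GiB) works out to the `5368709120` used above. A quick shell check confirms the arithmetic only, not any etcd behavior:

```shell
# quota-backend-bytes is expressed in bytes; compute 5 GiB explicitly
quota=$((5 * 1024 * 1024 * 1024))
echo "$quota"   # the value used for quota-backend-bytes in cluster.yml
```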

## Scaling etcd disk performance

You can follow the recommendations from [the etcd docs](https://etcd.io/docs/v3.4.0/tuning/#disk) on how to tune the disk priority on the host.

Additionally, to reduce IO contention on the disks for etcd, you can use a dedicated device for the data and wal directory. Based on etcd best practices, mirroring RAID configurations are unnecessary because etcd replicates data between the nodes in the cluster. You can use striping RAID configurations to increase available IOPS.

To implement this solution in an RKE cluster, the `/var/lib/etcd/data` and `/var/lib/etcd/wal` directories will need to have disks mounted and formatted on the underlying host. In the `extra_args` directive of the `etcd` service, you must include the `wal_dir` directory. Without specifying the `wal_dir`, the etcd process will try to manipulate the underlying `wal` mount with insufficient permissions.

```yaml
# RKE cluster.yml
---
services:
  etcd:
    extra_args:
      data-dir: '/var/lib/rancher/etcd/data/'
      wal-dir: '/var/lib/rancher/etcd/wal/wal_dir'
    extra_binds:
      - '/var/lib/etcd/data:/var/lib/rancher/etcd/data'
      - '/var/lib/etcd/wal:/var/lib/rancher/etcd/wal'
```
+33
@@ -0,0 +1,33 @@
---
title: UI for Istio Virtual Services and Destination Rules
weight: 2
aliases:
- /rancher/v2.0-v2.4/en/installation/options/feature-flags/istio-virtual-service-ui
---

This feature enables a UI that lets you create, read, update, and delete virtual services and destination rules, which are traffic management features of Istio.

> **Prerequisite:** Turning on this feature does not enable Istio. A cluster administrator needs to [enable Istio for the cluster](../../../../pages-for-subheaders/istio-setup-guide.md) in order to use the feature.

To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.](installation/options/feature-flags/)

Environment Variable Key | Default Value | Status | Available as of
---|---|---|---
`istio-virtual-service-ui` | `false` | Experimental | v2.3.0
`istio-virtual-service-ui` | `true` | GA | v2.3.2

# About this Feature

A central advantage of Istio's traffic management features is that they allow dynamic request routing, which is useful for canary deployments, blue/green deployments, or A/B testing.

When enabled, this feature turns on a page that lets you configure some traffic management features of Istio using the Rancher UI. Without this feature, you need to use `kubectl` to manage traffic with Istio.

The feature enables two UI tabs: one tab for **Virtual Services** and another for **Destination Rules.**

- **Virtual services** intercept and direct traffic to your Kubernetes services, allowing you to direct percentages of traffic from a request to different services. You can use them to define a set of routing rules to apply when a host is addressed. For details, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/)
- **Destination rules** serve as the single source of truth about which service versions are available to receive traffic from virtual services. You can use these resources to define policies that apply to traffic that is intended for a service after routing has occurred. For details, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule)

To see these tabs:

1. Go to the project view in Rancher and click **Resources > Istio.**
1. You will see tabs for **Traffic Graph,** which has the Kiali network visualization integrated into the UI, and **Traffic Metrics,** which shows metrics for the success rate and request volume of traffic to your services, among other metrics. Next to these tabs, you should see the tabs for **Virtual Services** and **Destination Rules.**
+43
@@ -0,0 +1,43 @@
---
title: "Running on ARM64 (Experimental)"
weight: 3
aliases:
- /rancher/v2.0-v2.4/en/installation/options/arm64-platform
---

> **Important:**
>
> Running on an ARM64 platform is currently an experimental feature and is not yet officially supported in Rancher. Therefore, we do not recommend using ARM64 based nodes in a production environment.

The following options are available when using an ARM64 platform:

- Running Rancher on ARM64 based node(s)
  - Only for Docker installs. Please note that the following installation command replaces the examples found in the [Docker Install](../../../../pages-for-subheaders/rancher-on-a-single-node-with-docker.md) link:

  ```
  # In the last line `rancher/rancher:vX.Y.Z`, be certain to replace "X.Y.Z" with a released version in which ARM64 builds exist. For example, if your matching version is v2.5.8, you would fill in this line with `rancher/rancher:v2.5.8`.
  docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    --privileged \
    rancher/rancher:vX.Y.Z
  ```

  > **Note:** To check if your specific released version is compatible with the ARM64 architecture, you may navigate to your
  > version's release notes in the following two ways:
  >
  > - Manually find your version using https://github.com/rancher/rancher/releases.
  > - Go directly to your version using the tag and the specific version number. If you plan to use v2.5.8, for example, you may
  >   navigate to https://github.com/rancher/rancher/releases/tag/v2.5.8.

- Creating a custom cluster and adding ARM64 based node(s)
  - The Kubernetes cluster version must be 1.12 or higher
  - The CNI network provider must be [Flannel](../../../../faq/container-network-interface-providers.md#flannel)

- Importing clusters that contain ARM64 based nodes
  - The Kubernetes cluster version must be 1.12 or higher

Please see [Cluster Options](../../../../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md) for information on how to configure the cluster options.

The following features are not tested:

- Monitoring, alerts, notifiers, pipelines, and logging
- Launching apps from the catalog
+42
@@ -0,0 +1,42 @@
---
title: Allow Unsupported Storage Drivers
weight: 1
aliases:
- /rancher/v2.0-v2.4/en/installation/options/feature-flags/enable-not-default-storage-drivers/
---

This feature allows you to use types for storage providers and provisioners that are not enabled by default.

To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.](installation/options/feature-flags/)

Environment Variable Key | Default Value | Description
---|---|---
`unsupported-storage-drivers` | `false` | This feature enables types for storage providers and provisioners that are not enabled by default.

### Types for Persistent Volume Plugins that are Enabled by Default

Below is a list of storage types for persistent volume plugins that are enabled by default. When this feature flag is enabled, any persistent volume plugins that are not on this list are considered experimental and unsupported:

Name | Plugin
--------|----------
Amazon EBS Disk | `aws-ebs`
AzureFile | `azure-file`
AzureDisk | `azure-disk`
Google Persistent Disk | `gce-pd`
Longhorn | `flex-volume-longhorn`
VMware vSphere Volume | `vsphere-volume`
Local | `local`
Network File System | `nfs`
hostPath | `host-path`

### Types for StorageClass that are Enabled by Default

Below is a list of storage types for a StorageClass that are enabled by default. When this feature flag is enabled, any StorageClass provisioners that are not on this list are considered experimental and unsupported:

Name | Plugin
--------|--------
Amazon EBS Disk | `aws-ebs`
AzureFile | `azure-file`
AzureDisk | `azure-disk`
Google Persistent Disk | `gce-pd`
Longhorn | `flex-volume-longhorn`
VMware vSphere Volume | `vsphere-volume`
Local | `local`

+90
@@ -0,0 +1,90 @@
---
title: Rollbacks
weight: 3
aliases:
- /rancher/v2.0-v2.4/en/upgrades/rollbacks
- /rancher/v2.0-v2.4/en/installation/upgrades-rollbacks/rollbacks
- /rancher/v2.0-v2.4/en/upgrades/ha-server-rollbacks
- /rancher/v2.0-v2.4/en/upgrades/rollbacks/ha-server-rollbacks
- /rancher/v2.0-v2.4/en/installation/upgrades-rollbacks/rollbacks/ha-server-rollbacks
- /rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades-rollbacks/rollbacks
---

### Rolling Back to Rancher v2.2-v2.4

For Rancher installed on Kubernetes, follow the procedure detailed here: [Restoring Backups for Kubernetes installs.](backups/restorations/ha-restoration) Restoring a snapshot of the Rancher server cluster will revert Rancher to the version and state at the time of the snapshot.

For information on how to roll back Rancher installed with Docker, refer to [this page.](../other-installation-methods/rancher-on-a-single-node-with-docker/roll-back-docker-installed-rancher.md)

> Managed clusters are authoritative for their state. This means that restoring the Rancher server will not revert workload deployments or changes made on managed clusters after the snapshot was taken.

### Rolling Back to v2.0.0-v2.1.5

If you are rolling back to versions in either of these scenarios, you must follow some extra instructions in order to get your clusters working:

- Rolling back from v2.1.6+ to any version between v2.1.0 - v2.1.5 or v2.0.0 - v2.0.10.
- Rolling back from v2.0.11+ to any version between v2.0.0 - v2.0.10.

Because of the changes necessary to address [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321), special steps are necessary if you want to roll back to a previous version of Rancher where this vulnerability exists. The steps are as follows:

1. Record the `serviceAccountToken` for each cluster. To do this, save the following script on a machine with `kubectl` access to the Rancher management plane and execute it. You will need to run these commands on the machine where the Rancher container is running. Ensure `jq` is installed before running the command. The commands will vary depending on how you installed Rancher.

    **Rancher Installed with Docker**
    ```
    docker exec <NAME OF RANCHER CONTAINER> kubectl get clusters -o json | jq '[.items[] | select(any(.status.conditions[]; .type == "ServiceAccountMigrated")) | {name: .metadata.name, token: .status.serviceAccountToken}]' > tokens.json
    ```

    **Rancher Installed on a Kubernetes Cluster**
    ```
    kubectl get clusters -o json | jq '[.items[] | select(any(.status.conditions[]; .type == "ServiceAccountMigrated")) | {name: .metadata.name, token: .status.serviceAccountToken}]' > tokens.json
    ```

2. After executing the command, a `tokens.json` file will be created. **Important! Back up this file in a safe place.** You will need it to restore functionality to your clusters after rolling back Rancher. **If you lose this file, you may lose access to your clusters.**

3. Roll back Rancher following the [normal instructions](upgrades/rollbacks/).

4. Once Rancher comes back up, every cluster managed by Rancher (except for imported clusters) will be in an `Unavailable` state.

5. Apply the backed up tokens based on how you installed Rancher.

    **Rancher Installed with Docker**

    Save the following script as `apply_tokens.sh` to the machine where the Rancher Docker container is running. Also copy the `tokens.json` file created previously to the same directory as the script.
    ```
    set -e

    tokens=$(jq -c '.[]' tokens.json)
    for token in $tokens; do
        name=$(echo $token | jq -r .name)
        value=$(echo $token | jq -r .token)

        docker exec $1 kubectl patch --type=merge clusters $name -p "{\"status\": {\"serviceAccountToken\": \"$value\"}}"
    done
    ```
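
Each iteration of the script above builds a JSON merge patch for `kubectl patch`. The payload construction can be previewed locally with sample values; the cluster name and token below are hypothetical, not taken from a real `tokens.json`:

```shell
# Preview the merge patch apply_tokens.sh sends for one cluster.
# "c-abc123" and the token value are made-up samples, not real data.
name="c-abc123"
value="dGVzdC10b2tlbg=="
patch="{\"status\": {\"serviceAccountToken\": \"$value\"}}"
echo "$patch"
# apply_tokens.sh then runs, per cluster:
#   docker exec <CONTAINER> kubectl patch --type=merge clusters "$name" -p "$patch"
```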

    Set the script to allow execution (`chmod +x apply_tokens.sh`) and execute it as follows:
    ```
    ./apply_tokens.sh <DOCKER CONTAINER NAME>
    ```
    After a few moments, the clusters will go from `Unavailable` back to `Available`.

    **Rancher Installed on a Kubernetes Cluster**

    Save the following script as `apply_tokens.sh` to a machine with `kubectl` access to the Rancher management plane. Also copy the `tokens.json` file created previously to the same directory as the script.
    ```
    set -e

    tokens=$(jq -c '.[]' tokens.json)
    for token in $tokens; do
        name=$(echo $token | jq -r .name)
        value=$(echo $token | jq -r .token)

        kubectl patch --type=merge clusters $name -p "{\"status\": {\"serviceAccountToken\": \"$value\"}}"
    done
    ```
    Set the script to allow execution (`chmod +x apply_tokens.sh`) and execute it as follows:
    ```
    ./apply_tokens.sh
    ```
    After a few moments, the clusters will go from `Unavailable` back to `Available`.

6. Continue using Rancher as normal.

+190
@@ -0,0 +1,190 @@
---
title: Troubleshooting the Rancher Server Kubernetes Cluster
weight: 276
aliases:
- /rancher/v2.0-v2.4/en/installation/k8s-install/helm-rancher/troubleshooting
- /rancher/v2.0-v2.4/en/installation/ha/kubernetes-rke/troubleshooting
- /rancher/v2.0-v2.4/en/installation/k8s-install/kubernetes-rke/troubleshooting
- /rancher/v2.0-v2.4/en/installation/options/troubleshooting
---

This section describes how to troubleshoot an installation of Rancher on a Kubernetes cluster.

### Relevant Namespaces

Most of the troubleshooting will be done on objects in these three namespaces:

- `cattle-system` - `rancher` deployment and pods.
- `ingress-nginx` - Ingress controller pods and services.
- `cert-manager` - `cert-manager` pods.

### "default backend - 404"

A number of things can cause the ingress controller not to forward traffic to your Rancher instance. Most of the time it's due to a bad SSL configuration.

Things to check:

- [Is Rancher Running](#check-if-rancher-is-running)
- [Cert CN is "Kubernetes Ingress Controller Fake Certificate"](#cert-cn-is-kubernetes-ingress-controller-fake-certificate)

### Check if Rancher is Running

Use `kubectl` to check the `cattle-system` namespace and see if the Rancher pods are in a `Running` state.

```
kubectl -n cattle-system get pods

NAME                       READY     STATUS    RESTARTS   AGE
pod/rancher-784d94f59b-vgqzh   1/1   Running   0          10m
```

If the state is not `Running`, run a `describe` on the pod and check the Events.

```
kubectl -n cattle-system describe pod

...
Events:
  Type    Reason                 Age   From                Message
  ----    ------                 ----  ----                -------
  Normal  Scheduled              11m   default-scheduler   Successfully assigned rancher-784d94f59b-vgqzh to localhost
  Normal  SuccessfulMountVolume  11m   kubelet, localhost  MountVolume.SetUp succeeded for volume "rancher-token-dj4mt"
  Normal  Pulling                11m   kubelet, localhost  pulling image "rancher/rancher:v2.0.4"
  Normal  Pulled                 11m   kubelet, localhost  Successfully pulled image "rancher/rancher:v2.0.4"
  Normal  Created                11m   kubelet, localhost  Created container
  Normal  Started                11m   kubelet, localhost  Started container
```

### Check the Rancher Logs

Use `kubectl` to list the pods.

```
kubectl -n cattle-system get pods

NAME                       READY     STATUS    RESTARTS   AGE
pod/rancher-784d94f59b-vgqzh   1/1   Running   0          10m
```

Use `kubectl` and the pod name to list the logs from the pod.

```
kubectl -n cattle-system logs -f rancher-784d94f59b-vgqzh
```

### Cert CN is "Kubernetes Ingress Controller Fake Certificate"

Use your browser to check the certificate details. If it says the Common Name is "Kubernetes Ingress Controller Fake Certificate", something may have gone wrong with reading or issuing your SSL cert.

> **Note:** If you are using Let's Encrypt to issue certs, it can sometimes take a few minutes to issue the cert.
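
The CN can also be checked without a browser by reading the served certificate's subject with `openssl`. The live-server command is shown as a comment (`rancher.my.org` is a placeholder); the runnable part below inspects a throwaway self-signed cert, just to demonstrate the subject-reading step:

```shell
# Against your live server you would run (rancher.my.org is a placeholder):
#   openssl s_client -connect rancher.my.org:443 </dev/null 2>/dev/null \
#     | openssl x509 -noout -subject
# A locally generated throwaway cert stands in for the served certificate here:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=rancher.example.com" \
  -keyout /tmp/test-key.pem -out /tmp/test-cert.pem 2>/dev/null
openssl x509 -in /tmp/test-cert.pem -noout -subject
```

If the subject printed for your real server names the ingress controller's fake certificate instead of your hostname, the cert was not read or issued correctly.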

### Checking for Issues with cert-manager Issued Certs (Rancher Generated or Let's Encrypt)

`cert-manager` has three parts:

- The `cert-manager` pod in the `cert-manager` namespace.
- The `Issuer` object in the `cattle-system` namespace.
- The `Certificate` object in the `cattle-system` namespace.

Work backwards: do a `kubectl describe` on each object and check the events to track down what might be missing.

For example, there is a problem with the Issuer:

```
kubectl -n cattle-system describe certificate
...
Events:
  Type     Reason          Age                 From          Message
  ----     ------          ----                ----          -------
  Warning  IssuerNotReady  18s (x23 over 19m)  cert-manager  Issuer rancher not ready
```

```
kubectl -n cattle-system describe issuer
...
Events:
  Type     Reason         Age                 From          Message
  ----     ------         ----                ----          -------
  Warning  ErrInitIssuer  19m (x12 over 19m)  cert-manager  Error initializing issuer: secret "tls-rancher" not found
  Warning  ErrGetKeyPair  9m (x16 over 19m)   cert-manager  Error getting keypair for CA issuer: secret "tls-rancher" not found
```

### Checking for Issues with Your Own SSL Certs

Your certs get applied directly to the Ingress object in the `cattle-system` namespace.

Check the status of the Ingress object and see if it's ready.

```
kubectl -n cattle-system describe ingress
```

If it's ready and SSL is still not working, you may have a malformed cert or secret.

Check the nginx-ingress-controller logs. Because the nginx-ingress-controller pod has multiple containers, you will need to specify the name of the container.

```
kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller
...
W0705 23:04:58.240571       7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found
```

### No matches for kind "Issuer"

The SSL configuration option you have chosen requires cert-manager to be installed before installing Rancher, or else the following error is shown:

```
Error: validation failed: unable to recognize "": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
```

Install cert-manager and try installing Rancher again.

### Canal Pods show READY 2/3

The most common cause of this issue is that port 8472/UDP is not open between the nodes. Check your local firewall, network routing, or security groups.

Once the network issue is resolved, the `canal` pods should time out and restart to establish their connections.

### nginx-ingress-controller Pods show RESTARTS

The most common cause of this issue is that the `canal` pods have failed to establish the overlay network. See [Canal Pods show READY 2/3](#canal-pods-show-ready-2-3) for troubleshooting.

### Failed to dial to /var/run/docker.sock: ssh: rejected: administratively prohibited (open failed)

Some causes of this error include:

* The user specified to connect with does not have permission to access the Docker socket. This can be checked by logging into the host and running the command `docker ps`:

  ```
  $ ssh user@server
  user@server$ docker ps
  CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
  ```

  See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) for how to set this up properly.

* When using RedHat/CentOS as the operating system, you cannot use the user `root` to connect to the nodes because of [Bugzilla #1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). You will need to add a separate user and configure it to access the Docker socket. See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) for how to set this up properly.

* The SSH server version is not version 6.7 or higher. This is needed for socket forwarding to work, which is used to connect to the Docker socket over SSH. This can be checked using `sshd -V` on the host you are connecting to, or using netcat:

  ```
  $ nc xxx.xxx.xxx.xxx 22
  SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.10
  ```

### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: Error configuring SSH: ssh: no key found

The key file specified as `ssh_key_path` cannot be accessed. Make sure that you specified the private key file (not the public key, `.pub`), and that the user that is running the `rke` command can access the private key file.

### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

The key file specified as `ssh_key_path` is not correct for accessing the node. Double-check that you specified the correct `ssh_key_path` for the node and that you specified the correct user to connect with.

### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: Error configuring SSH: ssh: cannot decode encrypted private keys

If you want to use encrypted private keys, you should use `ssh-agent` to load your keys with your passphrase. If the `SSH_AUTH_SOCK` environment variable is found in the environment where the `rke` command is run, it will be used automatically to connect to the node.

### Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

The node is not reachable on the configured `address` and `port`.
||||
+224
@@ -0,0 +1,224 @@
|
||||
---
|
||||
title: Upgrading Rancher Installed on Kubernetes with Helm 2
|
||||
weight: 1050
|
||||
aliases:
|
||||
- /rancher/v2.0-v2.4/en/upgrades/upgrades/ha/helm2
|
||||
- /rancher/v2.0-v2.4/en/upgrades/helm2
|
||||
- /rancher/v2.0-v2.4/en/installation/upgrades-rollbacks/upgrades/ha/helm2
|
||||
- /rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades-rollbacks/upgrades/ha/helm2
|
||||
- /rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades-rollbacks/upgrades/helm2
|
||||
- /rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
> Helm 3 has been released. If you are using Helm 2, we recommend [migrating to Helm 3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) because it is simpler to use and more secure than Helm 2.
|
||||
>
|
||||
> The [current instructions for Upgrading Rancher Installed on Kubernetes](https://rancher.com/docs/rancher/v2.0-v2.4/en/upgrades/upgrades/ha/) use Helm 3.
|
||||
>
|
||||
> This section provides a copy of the older instructions for upgrading Rancher with Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible.
|
||||
|
||||
The following instructions will guide you through using Helm to upgrade a Rancher server that is installed on a Kubernetes cluster.
|
||||
|
||||
To upgrade the components in your Kubernetes cluster, or the definition of the [Kubernetes services](https://rancher.com/docs/rke/latest/en/config-options/services/) or [add-ons](https://rancher.com/docs/rke/latest/en/config-options/add-ons/), refer to the [upgrade documentation for RKE](https://rancher.com/docs/rke/latest/en/upgrades/), the Rancher Kubernetes Engine.
|
||||
|
||||
If you installed Rancher using the RKE Add-on yaml, follow the directions to [migrate or upgrade](upgrades/upgrades/migrating-from-rke-add-on).
|
||||
|
||||
>**Notes:**
|
||||
>
|
||||
> - [Let's Encrypt will be blocking cert-manager instances older than 0.8.0 starting November 1st 2019.](https://community.letsencrypt.org/t/blocking-old-cert-manager-versions/98753) Upgrade cert-manager to the latest version by following [these instructions.](installation/options/upgrading-cert-manager)
|
||||
> - If you are upgrading Rancher from v2.x to v2.3+, and you are using external TLS termination, you will need to edit the cluster.yml to [enable using forwarded host headers.](../../../../reference-guides/installation-references/helm-chart-options.md#configuring-ingress-for-external-tls-when-using-nginx-v0-25)
|
||||
> - The upgrade instructions assume you are using Helm 3. For migration of installs started with Helm 2, refer to the official [Helm 2 to 3 migration docs.](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) This [section](installation/upgrades-rollbacks/upgrades/ha/helm2) provides a copy of the older upgrade instructions that used Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible.
|
||||
|
||||
# Prerequisites
|
||||
|
||||
- **Review the [known upgrade issues](upgrades/upgrades)** in the Rancher documentation for the most noteworthy issues to consider when upgrading Rancher. A more complete list of known issues for each Rancher version can be found in the release notes on [GitHub](https://github.com/rancher/rancher/releases) and on the [Rancher forums.](https://forums.rancher.com/c/announcements/12)
|
||||
- **For [air gap installs only,](../../../../pages-for-subheaders/air-gapped-helm-cli-install.md) collect and populate images for the new Rancher server version.** Follow the guide to [populate your private registry](../../other-installation-methods/air-gapped-helm-cli-install/publish-images.md) with the images for the Rancher version that you want to upgrade to.
|
||||
|
||||
# Upgrade Outline
|
||||
|
||||
Follow the steps to upgrade Rancher server:
|
||||
|
||||
- [A. Back up your Kubernetes cluster that is running Rancher server](#a-back-up-your-kubernetes-cluster-that-is-running-rancher-server)
|
||||
- [B. Update the Helm chart repository](#b-update-the-helm-chart-repository)
|
||||
- [C. Upgrade Rancher](#c-upgrade-rancher)
|
||||
- [D. Verify the Upgrade](#d-verify-the-upgrade)
|
||||
|
||||
### A. Back up Your Kubernetes Cluster that is Running Rancher Server
|
||||
|
||||
[Take a one-time snapshot](backups/v2.0.x-v2.4.x/backup/rke-backups/#option-b-one-time-snapshots)
|
||||
of your Kubernetes cluster running Rancher server. You'll use the snapshot as a restore point if something goes wrong during upgrade.
|
||||
|
||||
### B. Update the Helm chart repository
|
||||
|
||||
1. Update your local helm repo cache.
|
||||
|
||||
```
|
||||
helm repo update
|
||||
```
|
||||
|
||||
1. Get the repository name that you used to install Rancher.
|
||||
|
||||
For information about the repos and their differences, see [Helm Chart Repositories](../../../../reference-guides/installation-references/helm-chart-options.md#helm-chart-repositories).
|
||||
|
||||
{{< release-channel >}}
|
||||
|
||||
```
|
||||
helm repo list
|
||||
|
||||
NAME URL
|
||||
stable https://charts.helm.sh/stable
|
||||
rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
|
||||
```
|
||||
|
||||
> **Note:** If you want to switch to a different Helm chart repository, please follow the [steps on how to switch repositories](../../resources/choose-a-rancher-version.md#switching-to-a-different-helm-chart-repository). If you switch repositories, make sure to list the repositories again before continuing onto Step 3 to ensure you have the correct one added.
|
||||
|
||||
|
||||
1. Fetch the latest chart to install Rancher from the Helm chart repository.
|
||||
|
||||
This command will pull down the latest charts and save it in the current directory as a `.tgz` file.
|
||||
|
||||
```plain
|
||||
helm fetch rancher-<CHART_REPO>/rancher
|
||||
```
|
||||
|
||||
### C. Upgrade Rancher

This section describes how to upgrade normal (Internet-connected) or air gap installations of Rancher with Helm.

<Tabs>
<TabItem value="Kubernetes Upgrade">

Get the values that were passed with `--set` when the currently installed Rancher Helm chart was deployed:

```
helm get values rancher

hostname: rancher.my.org
```

> **Note:** This command lists more values than the single example shown here.

If you are also upgrading cert-manager to the latest version from a version older than 0.11.0, follow `Option B: Reinstalling Rancher`. Otherwise, follow `Option A: Upgrading Rancher`.
<details>
<summary>Option A: Upgrading Rancher</summary>

Upgrade Rancher to the latest version with all your settings.

Take all the values from the previous step and append them to the command using `--set key=value`. Note that there will be many more options from the previous step that need to be appended.

```
helm upgrade --install rancher rancher-<CHART_REPO>/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org
```

</details>
<details>
<summary>Option B: Reinstalling the Rancher chart</summary>

If you are currently running a cert-manager version older than v0.11 and want to upgrade both Rancher and cert-manager to a newer version, you need to reinstall both Rancher and cert-manager due to the API change in cert-manager v0.11.

1. Uninstall Rancher:

    ```
    helm delete rancher
    ```

    If this results in an error that the release "rancher" was not found, make sure you are using the correct deployment name. Use `helm list` to list the Helm-deployed releases.

2. Uninstall and reinstall `cert-manager` according to the instructions on the [Upgrading Cert-Manager](installation/options/upgrading-cert-manager/helm-2-instructions) page.

3. Reinstall Rancher at the latest version with all your settings. Take all the values from step 1 and append them to the command using `--set key=value`. Note that there will be many more options from step 1 that need to be appended.

    ```
    helm install rancher-<CHART_REPO>/rancher \
      --name rancher \
      --namespace cattle-system \
      --set hostname=rancher.my.org
    ```

</details>
</TabItem>
<TabItem value="Kubernetes Air Gap Upgrade">

1. Render the Rancher template using the same options that you chose when installing Rancher. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.

    Based on the choice you made during installation, complete one of the procedures below.

    Placeholder | Description
    ------------|-------------
    `<VERSION>` | The version number of the output tarball.
    `<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer.
    `<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry.
    `<CERTMANAGER_VERSION>` | The cert-manager version running on the Kubernetes cluster.
<details id="self-signed">
<summary>Option A: Default Self-Signed Certificate</summary>

```plain
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set certmanager.version=<CERTMANAGER_VERSION> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
  --set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts
```

</details>
<details id="secret">
<summary>Option B: Certificates From Files using Kubernetes Secrets</summary>

```plain
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
  --set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts
```

If you are using a private CA signed certificate, add `--set privateCA=true` after `--set ingress.tls.source=secret`:

```plain
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set privateCA=true \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
  --set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts
```

</details>
2. Copy the rendered manifest directories to a system with access to the Rancher server cluster and apply the rendered templates.

    Use `kubectl` to apply the rendered manifests:

    ```plain
    kubectl -n cattle-system apply -R -f ./rancher
    ```

</TabItem>
</Tabs>
### D. Verify the Upgrade

Log into Rancher to confirm that the upgrade succeeded.

>**Having network issues following upgrade?**
>
> See [Restoring Cluster Networking](namespace-migration.md#restoring-cluster-networking).

## Rolling Back

Should something go wrong, follow the [roll back](upgrades/rollbacks/ha-server-rollbacks/) instructions to restore the snapshot you took before you performed the upgrade.
---
title: Migrating from a Kubernetes Install with an RKE Add-on
weight: 1030
aliases:
- /rancher/v2.0-v2.4/en/upgrades/ha-server-upgrade/
- /rancher/v2.0-v2.4/en/upgrades/upgrades/ha-server-upgrade/
- /rancher/v2.0-v2.4/en/upgrades/upgrades/migrating-from-rke-add-on
- /rancher/v2.0-v2.4/en/installation/upgrades-rollbacks/upgrades/migrating-from-rke-add-on
- /rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades-rollbacks/upgrades/migrating-from-rke-add-on
- /rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/migrating-from-rke-add-on/
---

> **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
> If you are currently using the RKE add-on install method, please follow these directions to migrate to the Helm install.

The following instructions will guide you through migrating from the RKE add-on install to managing Rancher with the Helm package manager.

You will need to have [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) installed and the kubeconfig YAML file (`kube_config_rancher-cluster.yml`) generated by RKE.

> **Note:** This guide assumes a standard Rancher install. If you have modified any of the object names or namespaces, please adjust accordingly.

> **Note:** If you are upgrading from Rancher v2.0.13 or earlier, or v2.1.8 or earlier, and your cluster's certificates have expired, you will need to perform [additional steps](../../../../how-to-guides/advanced-user-guides/manage-clusters/rotate-certificates.md#rotating-expired-certificates-after-upgrading-older-rancher-versions) to rotate the certificates.
### Point kubectl at your Rancher Cluster

Make sure `kubectl` is using the correct kubeconfig YAML file. Set the `KUBECONFIG` environment variable to point to `kube_config_rancher-cluster.yml`:

```
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
```

After setting the `KUBECONFIG` environment variable, verify that it contains the correct `server` parameter. It should point directly to one of your cluster nodes on port `6443`.

```
kubectl config view -o=jsonpath='{.clusters[*].cluster.server}'
https://NODE:6443
```

If the output from the command shows your Rancher hostname with the suffix `/k8s/clusters`, the wrong kubeconfig YAML file is configured. It should be the file that was created when you used RKE to create the cluster that runs Rancher.
### Save your certificates

If you terminated SSL on the Rancher cluster ingress, recover your certificate and key for use in the Helm install.

Use `kubectl` to get the secret, decode the value, and direct the output to a file:

```
kubectl -n cattle-system get secret cattle-keys-ingress -o jsonpath --template='{ .data.tls\.crt }' | base64 -d > tls.crt
kubectl -n cattle-system get secret cattle-keys-ingress -o jsonpath --template='{ .data.tls\.key }' | base64 -d > tls.key
```

If you specified a private CA root certificate, recover it as well:

```
kubectl -n cattle-system get secret cattle-keys-server -o jsonpath --template='{ .data.cacerts\.pem }' | base64 -d > cacerts.pem
```
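Secret data in Kubernetes is stored base64-encoded, which is why each command above pipes the extracted value through `base64 -d`. A minimal, self-contained sketch of that round trip, using dummy contents in place of a real secret value:

```shell
# Dummy stand-in for the value stored under a Secret's .data key
cert_contents='-----BEGIN CERTIFICATE----- dummy'
# Kubernetes stores Secret data base64-encoded, like this
encoded=$(printf '%s' "$cert_contents" | base64)
# Decoding restores the original bytes, ready to redirect into tls.crt
printf '%s' "$encoded" | base64 -d
```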
### Remove previous Kubernetes objects

Remove the Kubernetes objects created by the RKE install.

> **Note:** Removing these Kubernetes components will not affect the Rancher configuration or database, but as with any maintenance, it is a good idea to create a backup of the data beforehand. See [Creating Backups - Kubernetes Install](backups/backups/ha-backups) for details.

```
kubectl -n cattle-system delete ingress cattle-ingress-http
kubectl -n cattle-system delete service cattle-service
kubectl -n cattle-system delete deployment cattle
kubectl -n cattle-system delete clusterrolebinding cattle-crb
kubectl -n cattle-system delete serviceaccount cattle-admin
```
### Remove addons section from `rancher-cluster.yml`

The addons section of `rancher-cluster.yml` contains all the resources needed to deploy Rancher using RKE. Because you are switching to Helm, this part of the cluster configuration file is no longer needed. Open `rancher-cluster.yml` in your favorite text editor and remove the addons section:

>**Important:** Make sure you only remove the addons section from the cluster configuration file.

```
nodes:
  - address: <IP> # hostname or IP to access nodes
    user: <USER> # root user (usually 'root')
    role: [controlplane,etcd,worker] # K8s roles for node
    ssh_key_path: <PEM_FILE> # path to PEM file
  - address: <IP>
    user: <USER>
    role: [controlplane,etcd,worker]
    ssh_key_path: <PEM_FILE>
  - address: <IP>
    user: <USER>
    role: [controlplane,etcd,worker]
    ssh_key_path: <PEM_FILE>

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

# Remove addons section from here to the end of the file
addons: |-
  ---
  ...
# End of file
```
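If you would rather script the edit than remove the section by hand, a `sed` range delete works, assuming `addons:` starts at the beginning of a line and everything after it belongs to the addons section. This sketch operates on a small stand-in file rather than a real cluster configuration:

```shell
# Stand-in for rancher-cluster.yml (abbreviated, hypothetical contents)
cat > rancher-cluster.yml <<'EOF'
nodes:
- address: 1.1.1.1
services:
  etcd:
    snapshot: true
addons: |-
  ---
  kind: Namespace
EOF

# Delete every line from the one starting with 'addons:' through end of file
sed -i '/^addons:/,$d' rancher-cluster.yml
cat rancher-cluster.yml
```

Back up the original file first if you try this on a real configuration.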
### Follow Helm and Rancher install steps

From here, follow the standard install steps:

* [3 - Initialize Helm](installation/options/helm2/helm-init/)
* [4 - Install Rancher](installation/options/helm2/helm-rancher/)
---
title: Upgrading to v2.0.7+ — Namespace Migration
weight: 1040
aliases:
- /rancher/v2.0-v2.4/en/upgrades/upgrades/namespace-migration
- /rancher/v2.0-v2.4/en/installation/upgrades-rollbacks/upgrades/namespace-migration
- /rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades-rollbacks/upgrades/namespace-migration
- /rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/namespace-migration/
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

>This section applies only to Rancher upgrades from v2.0.6 or earlier to v2.0.7 or later. Upgrades from v2.0.7 to later versions are unaffected.

In Rancher v2.0.6 and earlier, system namespaces crucial for Rancher and Kubernetes operations were not assigned to any Rancher project by default. Instead, these namespaces existed independently from all Rancher projects, but you could move them into any project without affecting cluster operations.

These namespaces include:

- `kube-system`
- `kube-public`
- `cattle-system`
- `cattle-alerting`<sup>1</sup>
- `cattle-logging`<sup>1</sup>
- `cattle-pipeline`<sup>1</sup>
- `ingress-nginx`

><sup>1</sup> Only displays if this feature is enabled for the cluster.

However, Rancher v2.0.7 introduced the `System` project. This project, which is automatically created during the upgrade, is assigned the system namespaces above to hold these crucial components for safekeeping.

During upgrades from Rancher v2.0.6- to Rancher v2.0.7+, all system namespaces are moved from their default location outside of all projects into the newly created `System` project. However, if you assigned any of your system namespaces to a project before upgrading, your cluster networking may encounter issues afterwards. This issue occurs because the system namespaces are not where the upgrade expects them to be, so it cannot move them into the `System` project.

- To prevent this issue before the upgrade, see [Preventing Cluster Networking Issues](#preventing-cluster-networking-issues).
- To fix this issue following the upgrade, see [Restoring Cluster Networking](#restoring-cluster-networking).

> **Note:** If you are upgrading from Rancher v2.0.13 or earlier, or v2.1.8 or earlier, and your cluster's certificates have expired, you will need to perform [additional steps](../../../../how-to-guides/advanced-user-guides/manage-clusters/rotate-certificates.md#rotating-expired-certificates-after-upgrading-older-rancher-versions) to rotate the certificates.
## Preventing Cluster Networking Issues

You can prevent cluster networking issues from occurring during your upgrade to v2.0.7+ by unassigning system namespaces from all of your Rancher projects. Complete this task if you've assigned any of a cluster's system namespaces to a Rancher project.

1. Log into the Rancher UI before the upgrade.

1. From the context menu, open the **local** cluster (or any of your other clusters).

1. From the main menu, select **Project/Namespaces**.

1. Find and select the following namespaces. Click **Move** and then choose **None** to move them out of your projects. Click **Move** again.

    >**Note:** Some or all of these namespaces may already be unassigned from all projects.

    - `kube-system`
    - `kube-public`
    - `cattle-system`
    - `cattle-alerting`<sup>1</sup>
    - `cattle-logging`<sup>1</sup>
    - `cattle-pipeline`<sup>1</sup>
    - `ingress-nginx`

    ><sup>1</sup> Only displays if this feature is enabled for the cluster.

    <figcaption>Moving namespaces out of projects</figcaption>
    

1. Repeat these steps for each cluster where you've assigned system namespaces to projects.

**Result:** All system namespaces are moved out of Rancher projects. You can now safely begin the [upgrade](upgrades/upgrades).
## Restoring Cluster Networking

Reset the cluster nodes' network policies to restore connectivity.

>**Prerequisite:**
>
>Download and set up [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
<Tabs>
<TabItem value="Kubernetes Install">

1. From **Terminal**, change to the directory that contains the kubeconfig file generated during the Rancher install, `kube_config_rancher-cluster.yml`. This file is usually in the directory where you ran RKE during Rancher installation.

1. Before repairing networking, run the following two commands to make sure that your nodes have a status of `Ready` and that your cluster components are `Healthy`.

    ```
    kubectl --kubeconfig kube_config_rancher-cluster.yml get nodes

    NAME              STATUS    ROLES                      AGE       VERSION
    165.227.114.63    Ready     controlplane,etcd,worker   11m       v1.10.1
    165.227.116.167   Ready     controlplane,etcd,worker   11m       v1.10.1
    165.227.127.226   Ready     controlplane,etcd,worker   11m       v1.10.1

    kubectl --kubeconfig kube_config_rancher-cluster.yml get cs

    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health": "true"}
    etcd-2               Healthy   {"health": "true"}
    etcd-1               Healthy   {"health": "true"}
    ```
1. Check the `networkPolicy` setting for all clusters by running the following command.

    ```
    kubectl --kubeconfig kube_config_rancher-cluster.yml get cluster -o=custom-columns=ID:.metadata.name,NAME:.spec.displayName,NETWORKPOLICY:.spec.enableNetworkPolicy,APPLIEDNP:.status.appliedSpec.enableNetworkPolicy,ANNOTATION:.metadata.annotations."networking\.management\.cattle\.io/enable-network-policy"

    ID        NAME      NETWORKPOLICY   APPLIEDNP   ANNOTATION
    c-59ptz   custom    <nil>           <nil>       <none>
    local     local     <nil>           <nil>       <none>
    ```

1. Disable the `networkPolicy` for all clusters, still pointing at your `kube_config_rancher-cluster.yml`.

    ```
    kubectl --kubeconfig kube_config_rancher-cluster.yml get cluster -o jsonpath='{range .items[*]}{@.metadata.name}{"\n"}{end}' | xargs -I {} kubectl --kubeconfig kube_config_rancher-cluster.yml patch cluster {} --type merge -p '{"spec": {"enableNetworkPolicy": false},"status": {"appliedSpec": {"enableNetworkPolicy": false }}}'
    ```

    >**Tip:** If you want to keep `networkPolicy` enabled for all created clusters, you can run the following command to disable `networkPolicy` only for the `local` cluster (i.e., your Rancher Server nodes):
    >
    >```
    >kubectl --kubeconfig kube_config_rancher-cluster.yml patch cluster local --type merge -p '{"spec": {"enableNetworkPolicy": false},"status": {"appliedSpec": {"enableNetworkPolicy": false }}}'
    >```

1. Remove the network policy annotations for all clusters.

    ```
    kubectl --kubeconfig kube_config_rancher-cluster.yml get cluster -o jsonpath='{range .items[*]}{@.metadata.name}{"\n"}{end}' | xargs -I {} kubectl --kubeconfig kube_config_rancher-cluster.yml annotate cluster {} "networking.management.cattle.io/enable-network-policy"="false" --overwrite
    ```

    >**Tip:** If you want to keep `networkPolicy` enabled for all created clusters, you can run the following command to remove the annotation only for the `local` cluster (i.e., your Rancher Server nodes):
    >
    >```
    >kubectl --kubeconfig kube_config_rancher-cluster.yml annotate cluster local "networking.management.cattle.io/enable-network-policy"="false" --overwrite
    >```
1. Check the `networkPolicy` setting for all clusters again to make sure the policies have a status of `false`.

    ```
    kubectl --kubeconfig kube_config_rancher-cluster.yml get cluster -o=custom-columns=ID:.metadata.name,NAME:.spec.displayName,NETWORKPOLICY:.spec.enableNetworkPolicy,APPLIEDNP:.status.appliedSpec.enableNetworkPolicy,ANNOTATION:.metadata.annotations."networking\.management\.cattle\.io/enable-network-policy"

    ID        NAME      NETWORKPOLICY   APPLIEDNP   ANNOTATION
    c-59ptz   custom    false           false       false
    local     local     false           false       false
    ```
1. Remove all network policies from all namespaces. Run this command for each cluster, using the kubeconfig generated by RKE.

    ```
    for namespace in $(kubectl --kubeconfig kube_config_rancher-cluster.yml get ns -o custom-columns=NAME:.metadata.name --no-headers); do
      kubectl --kubeconfig kube_config_rancher-cluster.yml -n $namespace delete networkpolicy --all;
    done
    ```

1. Remove all the `projectnetworkpolicies` created for the clusters, to make sure the network policies are not recreated.

    ```
    for cluster in $(kubectl --kubeconfig kube_config_rancher-cluster.yml get clusters -o custom-columns=NAME:.metadata.name --no-headers); do
      for project in $(kubectl --kubeconfig kube_config_rancher-cluster.yml get project -n $cluster -o custom-columns=NAME:.metadata.name --no-headers); do
        kubectl --kubeconfig kube_config_rancher-cluster.yml delete projectnetworkpolicy -n $project --all
      done
    done
    ```

    >**Tip:** If you want to keep `networkPolicy` enabled for all created clusters, you can run the following command to remove the `projectnetworkpolicies` only for the `local` cluster (i.e., your Rancher Server nodes):
    >
    >```
    >for project in $(kubectl --kubeconfig kube_config_rancher-cluster.yml get project -n local -o custom-columns=NAME:.metadata.name --no-headers); do
    >  kubectl --kubeconfig kube_config_rancher-cluster.yml -n $project delete projectnetworkpolicy --all;
    >done
    >```
1. Wait a few minutes and then log into the Rancher UI.

    - If you can access Rancher, you're done, so you can skip the rest of the steps.
    - If you still can't access Rancher, complete the steps below.

1. Force your pods to recreate themselves by entering the following command.

    ```
    kubectl --kubeconfig kube_config_rancher-cluster.yml delete pods -n cattle-system --all
    ```

1. Log into the Rancher UI and view your clusters. Created clusters will show errors from attempting to contact Rancher while it was unavailable. However, these errors should resolve automatically.
</TabItem>
<TabItem value="Rancher Launched Kubernetes">

If you can access Rancher, but one or more of the clusters that you launched using Rancher has no networking, you can repair them by deleting all network policies in every namespace, either:

- Using the cluster's [embedded kubectl shell](k8s-in-rancher/kubectl/).
- By [downloading the cluster's kubeconfig file and running the command](../../../../how-to-guides/advanced-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) from your workstation.

```
for namespace in $(kubectl --kubeconfig kube_config_rancher-cluster.yml get ns -o custom-columns=NAME:.metadata.name --no-headers); do
  kubectl --kubeconfig kube_config_rancher-cluster.yml -n $namespace delete networkpolicy --all;
done
```

</TabItem>
</Tabs>
---
title: Installing Docker
weight: 1
---

Docker is required on the nodes where the Rancher server will be installed, whether the install is done with Helm or with Docker.

There are a couple of options for installing Docker. One option is to refer to the [official Docker documentation](https://docs.docker.com/install/) about how to install Docker on Linux. The steps vary based on the Linux distribution.

Another option is to use one of Rancher's Docker installation scripts, which are available for most recent versions of Docker.

For example, this command installs Docker 19.03 on Ubuntu:

```
curl https://releases.rancher.com/install-docker/19.03.sh | sh
```

Rancher has installation scripts for every version of upstream Docker that Kubernetes supports. To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository](https://github.com/rancher/install-docker), which contains all of Rancher's Docker installation scripts.
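The scripts follow the URL pattern shown in the example above, `https://releases.rancher.com/install-docker/<version>.sh`, so selecting a script for a desired Docker version can itself be scripted. A sketch (the version shown is only an example):

```shell
# Example Docker version; substitute the version you need
DOCKER_VERSION="19.03"
# Build the install-script URL from the version
INSTALL_URL="https://releases.rancher.com/install-docker/${DOCKER_VERSION}.sh"
echo "$INSTALL_URL"
# To install, you would then run: curl "$INSTALL_URL" | sh
```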
---
title: Port Requirements
description: Read about port requirements needed in order for Rancher to operate properly, both for Rancher nodes and downstream Kubernetes cluster nodes
weight: 300
---

To operate properly, Rancher requires a number of ports to be open on Rancher nodes and on downstream Kubernetes cluster nodes.

- [Rancher Nodes](#rancher-nodes)
  - [Ports for Rancher Server Nodes on K3s](#ports-for-rancher-server-nodes-on-k3s)
  - [Ports for Rancher Server Nodes on RKE](#ports-for-rancher-server-nodes-on-rke)
  - [Ports for Rancher Server in Docker](#ports-for-rancher-server-in-docker)
- [Downstream Kubernetes Cluster Nodes](#downstream-kubernetes-cluster-nodes)
  - [Ports for Rancher Launched Kubernetes Clusters using Node Pools](#ports-for-rancher-launched-kubernetes-clusters-using-node-pools)
  - [Ports for Rancher Launched Kubernetes Clusters using Custom Nodes](#ports-for-rancher-launched-kubernetes-clusters-using-custom-nodes)
  - [Ports for Hosted Kubernetes Clusters](#ports-for-hosted-kubernetes-clusters)
  - [Ports for Imported Clusters](#ports-for-imported-clusters)
- [Other Port Considerations](#other-port-considerations)
  - [Commonly Used Ports](#commonly-used-ports)
  - [Local Node Traffic](#local-node-traffic)
  - [Rancher AWS EC2 Security Group](#rancher-aws-ec2-security-group)
  - [Opening SUSE Linux Ports](#opening-suse-linux-ports)
# Rancher Nodes

The following tables list the ports that need to be open to and from nodes that are running the Rancher server.

The port requirements differ based on the Rancher server architecture.

> **Notes:**
>
> - Rancher nodes may also require additional outbound access for any external authentication provider that is configured (LDAP, for example).
> - Kubernetes recommends TCP 30000-32767 for node port services.
> - For firewalls, traffic may need to be enabled within the cluster and pod CIDR.
### Ports for Rancher Server Nodes on K3s

<details>
<summary>Click to expand</summary>

The K3s server needs port 6443 to be accessible by the nodes.

The nodes need to be able to reach other nodes over UDP port 8472 when Flannel VXLAN is used. The node should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel. However, if you do not use Flannel and provide your own custom CNI, then port 8472 is not needed by K3s.

If you wish to utilize the metrics server, you will need to open port 10250 on each node.

> **Important:** The VXLAN port on nodes should not be exposed to the world, as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disables access to port 8472.

The following tables break down the port requirements for inbound and outbound traffic:

<figcaption>Inbound Rules for Rancher Server Nodes</figcaption>

| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 80 | Load balancer/proxy that does external SSL termination | Rancher UI/API when external SSL termination is used |
| TCP | 443 | <ul><li>server nodes</li><li>agent nodes</li><li>hosted/imported Kubernetes</li><li>any source that needs to be able to use the Rancher UI or API</li></ul> | Rancher agent, Rancher UI/API, kubectl |
| TCP | 6443 | K3s server nodes | Kubernetes API |
| UDP | 8472 | K3s server and agent nodes | Required only for Flannel VXLAN |
| TCP | 10250 | K3s server and agent nodes | kubelet |

<figcaption>Outbound Rules for Rancher Nodes</figcaption>

| Protocol | Port | Destination | Description |
| -------- | ---- | -------------------------------------------------------- | --------------------------------------------- |
| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 2376 | Any node IP from a node created using Node Driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |

</details>
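Inbound rules like the ones above can be spot-checked from another machine with a small shell loop over the expected ports. This is a generic sketch, not part of the Rancher tooling; the host defaults to `127.0.0.1` purely for illustration and should be replaced with a real Rancher server node address:

```shell
# Host to probe; replace with a Rancher server node address
HOST="${HOST:-127.0.0.1}"
for port in 80 443 6443; do
  # bash's /dev/tcp pseudo-device attempts a TCP connection
  if timeout 2 bash -c "exec 3<>/dev/tcp/$HOST/$port" 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```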
### Ports for Rancher Server Nodes on RKE

<details>
<summary>Click to expand</summary>

Typically, Rancher is installed on three RKE nodes that all have the etcd, control plane, and worker roles.

The following table breaks down the port requirements for traffic between the Rancher nodes:

<figcaption>Rules for traffic between Rancher nodes</figcaption>

| Protocol | Port | Description |
|-----|-----|----------------|
| TCP | 443 | Rancher agents |
| TCP | 2379 | etcd client requests |
| TCP | 2380 | etcd peer communication |
| TCP | 6443 | Kubernetes apiserver |
| UDP | 8472 | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | kubelet |
| TCP | 10254 | Ingress controller livenessProbe/readinessProbe |

The following tables break down the port requirements for inbound and outbound traffic:

<figcaption>Inbound Rules for Rancher Nodes</figcaption>

| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 22 | RKE CLI | SSH provisioning of node by RKE |
| TCP | 80 | Load Balancer/Reverse Proxy | HTTP traffic to Rancher UI/API |
| TCP | 443 | <ul><li>Load Balancer/Reverse Proxy</li><li>IPs of all cluster nodes and other API/UI clients</li></ul> | HTTPS traffic to Rancher UI/API |
| TCP | 6443 | Kubernetes API clients | HTTPS traffic to Kubernetes API |

<figcaption>Outbound Rules for Rancher Nodes</figcaption>

| Protocol | Port | Destination | Description |
|-----|-----|----------------|---|
| TCP | 443 | `35.160.43.145`, `35.167.242.46`, `52.33.59.17` | Rancher catalog (git.rancher.io) |
| TCP | 22 | Any node created using a node driver | SSH provisioning of node by node driver |
| TCP | 2376 | Any node created using a node driver | Docker daemon TLS port used by node driver |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |
| TCP | Provider dependent | Port of the Kubernetes API endpoint in hosted cluster | Kubernetes API |

</details>
### Ports for Rancher Server in Docker

<details>
<summary>Click to expand</summary>

The following tables break down the port requirements for Rancher nodes, for inbound and outbound traffic:

<figcaption>Inbound Rules for Rancher Node</figcaption>

| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 80 | Load balancer/proxy that does external SSL termination | Rancher UI/API when external SSL termination is used |
| TCP | 443 | <ul><li>hosted/imported Kubernetes</li><li>any source that needs to be able to use the Rancher UI or API</li></ul> | Rancher agent, Rancher UI/API, kubectl |

<figcaption>Outbound Rules for Rancher Node</figcaption>

| Protocol | Port | Destination | Description |
|-----|-----|----------------|---|
| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/Imported Kubernetes API | Kubernetes API server |

</details>
# Downstream Kubernetes Cluster Nodes

Downstream Kubernetes clusters run your apps and services. This section describes which ports must be open on the nodes in downstream clusters so that Rancher can communicate with them.

The port requirements differ depending on how the downstream cluster was launched. Each of the tabs below lists the ports that need to be opened for different [cluster types](../../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md).

The following diagram depicts the ports that are opened for each [cluster type](../../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md).

<figcaption>Port Requirements for the Rancher Management Plane</figcaption>

![Basic Port Requirements](/img/port-communications.svg)

>**Tip:**
>
>If security isn't a large concern and you're okay with opening a few additional ports, you can use the table in [Commonly Used Ports](#commonly-used-ports) as your port reference instead of the comprehensive tables below.

### Ports for Rancher Launched Kubernetes Clusters using Node Pools

<details>
<summary>Click to expand</summary>

The following table depicts the port requirements for [Rancher Launched Kubernetes](../../../pages-for-subheaders/launch-kubernetes-with-rancher.md) with nodes created in an [Infrastructure Provider](../../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md).

>**Note:**
>The required ports are automatically opened by Rancher during creation of clusters in cloud providers like Amazon EC2 or DigitalOcean.

{{< ports-iaas-nodes >}}

</details>

### Ports for Rancher Launched Kubernetes Clusters using Custom Nodes

<details>
<summary>Click to expand</summary>

The following table depicts the port requirements for [Rancher Launched Kubernetes](../../../pages-for-subheaders/launch-kubernetes-with-rancher.md) with [Custom Nodes](../../../pages-for-subheaders/use-existing-nodes.md).

{{< ports-custom-nodes >}}

</details>

### Ports for Hosted Kubernetes Clusters

<details>
<summary>Click to expand</summary>

The following table depicts the port requirements for [hosted clusters](../../../pages-for-subheaders/set-up-clusters-from-hosted-kubernetes-providers.md).

{{< ports-imported-hosted >}}

</details>

### Ports for Imported Clusters

<details>
<summary>Click to expand</summary>

The following table depicts the port requirements for [imported clusters](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/import-existing-clusters.md).

{{< ports-imported-hosted >}}

</details>

# Other Port Considerations

### Commonly Used Ports

These ports are typically opened on your Kubernetes nodes, regardless of the cluster type.

import CommonPortsTable from 'shared-files/_common-ports-table.md';

<CommonPortsTable />

----

### Local Node Traffic

Ports marked as `local traffic` (e.g., `9099 TCP`) in the above requirements are used for Kubernetes health checks (`livenessProbe` and `readinessProbe`). These health checks are executed on the node itself. In most cloud environments, this local traffic is allowed by default.

However, this traffic may be blocked when:

- You have applied strict host firewall policies on the node.
- You are using nodes that have multiple interfaces (multihomed).

In these cases, you have to explicitly allow this traffic in your host firewall, or, in the case of public/private cloud hosted machines (e.g., AWS or OpenStack), in your security group configuration. Keep in mind that when using a security group as source or destination in your security group, explicitly opening ports only applies to the private interface of the nodes/instances.

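A quick way to verify any of these port rules from a node is a plain TCP connect test. Below is a minimal sketch, assuming `bash` (for its `/dev/tcp` support) and the `timeout` utility are available; the hostname is a placeholder:

```sh
# check_port HOST PORT — exits 0 if a TCP connection to HOST:PORT can be opened.
check_port() {
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# Example: can this node reach the Rancher server's HTTPS port?
check_port rancher.example.com 443 && echo "443 reachable" || echo "443 blocked"
```

This only proves TCP connectivity; it does not validate TLS certificates or application health.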
### Rancher AWS EC2 Security Group

When using the [AWS EC2 node driver](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md) to provision cluster nodes in Rancher, you can choose to let Rancher create a security group called `rancher-nodes`. The following rules are automatically added to this security group.

| Type | Protocol | Port Range | Source/Destination | Rule Type |
|-----------------|:--------:|:-----------:|------------------------|:---------:|
| SSH | TCP | 22 | 0.0.0.0/0 | Inbound |
| HTTP | TCP | 80 | 0.0.0.0/0 | Inbound |
| Custom TCP Rule | TCP | 443 | 0.0.0.0/0 | Inbound |
| Custom TCP Rule | TCP | 2376 | 0.0.0.0/0 | Inbound |
| Custom TCP Rule | TCP | 2379-2380 | sg-xxx (rancher-nodes) | Inbound |
| Custom UDP Rule | UDP | 4789 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 6443 | 0.0.0.0/0 | Inbound |
| Custom UDP Rule | UDP | 8472 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 10250-10252 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 10256 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 30000-32767 | 0.0.0.0/0 | Inbound |
| Custom UDP Rule | UDP | 30000-32767 | 0.0.0.0/0 | Inbound |
| All traffic | All | All | 0.0.0.0/0 | Outbound |

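If you manage an equivalent security group yourself rather than letting Rancher create it, the self-referencing rules in the table can be replicated with the AWS CLI. The sketch below only *prints* the corresponding `aws ec2 authorize-security-group-ingress` calls; `sg-xxx` is a placeholder for your real group ID, and you should verify the flags against your AWS CLI version before running them:

```sh
# Print (not run) the aws CLI calls for the intra-group rules in the table above.
print_sg_rules() {
  local sg="$1" rule proto ports
  for rule in "tcp:2379-2380" "udp:4789" "udp:8472" "tcp:10250-10252" "tcp:10256"; do
    proto="${rule%%:*}"
    ports="${rule#*:}"
    echo "aws ec2 authorize-security-group-ingress --group-id $sg --protocol $proto --port $ports --source-group $sg"
  done
}

print_sg_rules sg-xxx
```

Pipe the output to a file, review it, and run it only once the group ID is correct.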
### Opening SUSE Linux Ports

SUSE Linux may have a firewall that blocks all ports by default. To open the ports needed for adding the host to a custom cluster:

1. SSH into the instance.
1. Edit `/etc/sysconfig/SuSEfirewall2` and open the required ports. In this example, ports 9796 and 10250 are also opened for monitoring:

    ```
    FW_SERVICES_EXT_TCP="22 80 443 2376 2379 2380 6443 9099 9796 10250 10254 30000:32767"
    FW_SERVICES_EXT_UDP="8472 30000:32767"
    FW_ROUTE=yes
    ```

1. Restart the firewall with the new ports:

    ```
    SuSEfirewall2
    ```

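Before restarting the firewall, you can sanity-check that every port you need made it into the list. A small sketch, using the same `FW_SERVICES_EXT_TCP` value as the example above:

```sh
FW_SERVICES_EXT_TCP="22 80 443 2376 2379 2380 6443 9099 9796 10250 10254 30000:32767"

# port_listed PORT — succeeds if PORT appears as its own word in the list.
port_listed() {
  case " $FW_SERVICES_EXT_TCP " in
    *" $1 "*) return 0 ;;
    *) return 1 ;;
  esac
}

for p in 22 443 6443 10250; do
  port_listed "$p" && echo "port $p: listed" || echo "port $p: MISSING"
done
```

Note that ranges such as `30000:32767` are matched only as the literal token, not expanded per port.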
**Result:** The node has the open ports required to be added to a custom cluster.

---
title: '1. Set up Infrastructure and Private Registry'
weight: 100
aliases:
- /rancher/v2.0-v2.4/en/installation/air-gap-single-node/provision-host
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

In this section, you will provision the underlying infrastructure for your Rancher management server in an air gapped environment. You will also set up the private Docker registry that must be available to your Rancher node(s).

An air gapped environment is an environment where the Rancher server is installed offline or behind a firewall.

The infrastructure depends on whether you are installing Rancher on a K3s Kubernetes cluster, an RKE Kubernetes cluster, or a single Docker container. For more information on each installation option, refer to [this page.](../../../../pages-for-subheaders/installation-and-upgrade.md)

<Tabs>
<TabItem value="K3s">

We recommend setting up the following infrastructure for a high-availability installation:

- **Two Linux nodes,** typically virtual machines, in the infrastructure provider of your choice.
- **An external database** to store the cluster data. PostgreSQL, MySQL, and etcd are supported.
- **A load balancer** to direct traffic to the two nodes.
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
- **A private Docker registry** to distribute Docker images to your machines.

### 1. Set up Linux Nodes

These hosts will be disconnected from the internet, but they must be able to reach your private registry.

Make sure that your nodes fulfill the general installation requirements for [OS, container runtime, hardware, and networking.](../../../../pages-for-subheaders/installation-requirements.md)

For an example of one way to set up Linux nodes, refer to this [tutorial](installation/options/ec2-node) for setting up nodes as instances in Amazon EC2.

### 2. Set up External Datastore

The ability to run Kubernetes using a datastore other than etcd sets K3s apart from other Kubernetes distributions. This feature provides flexibility to Kubernetes operators. The available options allow you to select a datastore that best fits your use case.

For a high-availability K3s installation, you will need to set up one of the following external databases:

* [PostgreSQL](https://www.postgresql.org/) (certified against versions 10.7 and 11.5)
* [MySQL](https://www.mysql.com/) (certified against version 5.7)
* [etcd](https://etcd.io/) (certified against version 3.3.15)

When you install Kubernetes, you will pass in details for K3s to connect to the database.

For an example of one way to set up the database, refer to this [tutorial](installation/options/rds) for setting up a MySQL database on Amazon's RDS service.

For the complete list of options that are available for configuring a K3s cluster datastore, refer to the [K3s documentation.](https://rancher.com/docs/k3s/latest/en/installation/datastore/)

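The datastore connection is typically passed to K3s through the `K3S_DATASTORE_ENDPOINT` environment variable (or the `--datastore-endpoint` flag). A sketch of the endpoint formats, with placeholder hostnames, credentials, and database names that you would replace with your own:

```sh
# Placeholder credentials/hosts — substitute your real values.
# MySQL:
K3S_DATASTORE_ENDPOINT='mysql://username:password@tcp(hostname:3306)/k3s'
# PostgreSQL (alternative):
# K3S_DATASTORE_ENDPOINT='postgres://username:password@hostname:5432/k3s'
# etcd (alternative; comma-separated client URLs):
# K3S_DATASTORE_ENDPOINT='https://etcd-host-1:2379,https://etcd-host-2:2379,https://etcd-host-3:2379'

export K3S_DATASTORE_ENDPOINT
echo "Datastore endpoint: $K3S_DATASTORE_ENDPOINT"
```

Check the exact URI syntax for your datastore against the K3s datastore documentation linked above.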
### 3. Set up the Load Balancer

You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.

When Kubernetes gets set up in a later step, the K3s tool will deploy a Traefik Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.

When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the Traefik Ingress controller to listen for traffic destined for the Rancher hostname. When the Traefik Ingress controller receives traffic destined for the Rancher hostname, it forwards that traffic to the running Rancher pods in the cluster.

For your implementation, consider whether you want or need to use a Layer-4 or Layer-7 load balancer:

- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic to port TCP/80 to the Ingress pod in the Rancher deployment.
- **A layer-7 load balancer** is a bit more complicated, but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, which a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.](../../../../reference-guides/installation-references/helm-chart-options.md#external-tls-termination)

For an example showing how to set up an NGINX load balancer, refer to [this page.](installation/options/nginx/)

For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.](installation/options/nlb/)

> **Important:**
> Do not use this load balancer (i.e., the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.

### 4. Set up the DNS Record

Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.

Depending on your environment, this may be an A record pointing to the load balancer IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on.

You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.

For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)

### 5. Set up a Private Docker Registry

Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines.

In a later step, when you set up your K3s Kubernetes cluster, you will create a [private registries configuration file](https://rancher.com/docs/k3s/latest/en/installation/private-registry/) with details from this registry.

If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry)

</TabItem>
<TabItem value="RKE">

To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:

- **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon's EC2, Google Compute Engine, or vSphere.
- **A load balancer** to direct front-end traffic to the three nodes.
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
- **A private Docker registry** to distribute Docker images to your machines.

These nodes must be in the same region/data center. You may place these servers in separate availability zones.

### Why three nodes?

In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes.

The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes.

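The quorum arithmetic behind this is simple: an N-member etcd cluster needs floor(N/2) + 1 votes to elect a leader, so it tolerates N minus that many member failures. A quick illustration:

```sh
# Quorum for an N-member etcd cluster is floor(N/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }

for n in 1 3 5; do
  echo "members=$n quorum=$(quorum $n) tolerated_failures=$(( n - $(quorum $n) ))"
done
# members=3 quorum=2 tolerated_failures=1 — hence three nodes survive one failure.
```

Note that four members tolerate no more failures than three (quorum 3, one failure), which is why odd cluster sizes are preferred.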
### 1. Set up Linux Nodes

These hosts will be disconnected from the internet, but they must be able to reach your private registry.

Make sure that your nodes fulfill the general installation requirements for [OS, container runtime, hardware, and networking.](../../../../pages-for-subheaders/installation-requirements.md)

For an example of one way to set up Linux nodes, refer to this [tutorial](installation/options/ec2-node) for setting up nodes as instances in Amazon EC2.

### 2. Set up the Load Balancer

You will also need to set up a load balancer to direct traffic to the Rancher replicas on all three nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.

When Kubernetes gets set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.

When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. When the NGINX Ingress controller receives traffic destined for the Rancher hostname, it forwards that traffic to the running Rancher pods in the cluster.

For your implementation, consider whether you want or need to use a Layer-4 or Layer-7 load balancer:

- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic to port TCP/80 to the Ingress pod in the Rancher deployment.
- **A layer-7 load balancer** is a bit more complicated, but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, which a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.](../../../../reference-guides/installation-references/helm-chart-options.md#external-tls-termination)

For an example showing how to set up an NGINX load balancer, refer to [this page.](installation/options/nginx/)

For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.](installation/options/nlb/)

> **Important:**
> Do not use this load balancer (i.e., the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.

### 3. Set up the DNS Record

Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.

Depending on your environment, this may be an A record pointing to the load balancer IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on.

You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.

For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)

### 4. Set up a Private Docker Registry

Rancher supports air gap installs using a secure Docker private registry. You must have your own private registry or other means of distributing Docker images to your machines.

In a later step, when you set up your RKE Kubernetes cluster, you will create a [private registries configuration file](https://rancher.com/docs/rke/latest/en/config-options/private-registries/) with details from this registry.

If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry)

</TabItem>
<TabItem value="Docker">

> The Docker installation is for Rancher users who want to test out Rancher. Since there is only one node and a single Docker container, if the node goes down, you will lose all the data of your Rancher server.
>
> For Rancher v2.0-v2.4, there is no migration path from a Docker installation to a high-availability installation. Therefore, you may want to use a Kubernetes installation from the start.

### 1. Set up a Linux Node

This host will be disconnected from the internet, but needs to be able to connect to your private registry.

Make sure that your node fulfills the general installation requirements for [OS, Docker, hardware, and networking.](../../../../pages-for-subheaders/installation-requirements.md)

For an example of one way to set up Linux nodes, refer to this [tutorial](installation/options/ec2-node) for setting up nodes as instances in Amazon EC2.

### 2. Set up a Private Docker Registry

Rancher supports air gap installs using a Docker private registry on your bastion server. You must have your own private registry or other means of distributing Docker images to your machines.

If you need help with creating a private registry, please refer to the [official Docker documentation.](https://docs.docker.com/registry/)

</TabItem>
</Tabs>

### [Next: Collect and Publish Images to your Private Registry](publish-images.md)

---
title: '3. Install Kubernetes (Skip for Docker Installs)'
weight: 300
aliases:
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/install-kube
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

> Skip this section if you are installing Rancher on a single node with Docker.

This section describes how to install a Kubernetes cluster according to our [best practices for the Rancher server environment.](../../../../reference-guides/rancher-manager-architecture/architecture-recommendations.md#environment-for-kubernetes-installations) This cluster should be dedicated to running only the Rancher server.

For Rancher before v2.4, Rancher should be installed on an [RKE](https://rancher.com/docs/rke/latest/en/) (Rancher Kubernetes Engine) Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.

As of Rancher v2.4, the Rancher management server can be installed on either an RKE cluster or a K3s Kubernetes cluster. K3s is also a fully certified Kubernetes distribution released by Rancher, but it is newer than RKE. We recommend installing Rancher on K3s because K3s is easier to use and more lightweight, with a binary size of less than 100 MB. The Rancher management server can only be run on a Kubernetes cluster in an infrastructure provider where Kubernetes is installed using RKE or K3s. Running Rancher on hosted Kubernetes providers, such as EKS, is not supported.

> **Note:** After Rancher is installed on an RKE cluster, there is no migration path to a K3s setup at this time.

The steps to set up an air-gapped Kubernetes cluster on RKE or K3s are shown below.

<Tabs>
<TabItem value="K3s">

In this guide, we assume you have created your nodes in your air gapped environment and have a secure Docker private registry on your bastion server.

### Installation Outline

1. [Prepare Images Directory](#1-prepare-images-directory)
2. [Create Registry YAML](#2-create-registry-yaml)
3. [Install K3s](#3-install-k3s)
4. [Save and Start Using the kubeconfig File](#4-save-and-start-using-the-kubeconfig-file)

### 1. Prepare Images Directory

Obtain the images tar file for your architecture from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be running.

Place the tar file in the `images` directory before starting K3s on each node, for example:

```sh
sudo mkdir -p /var/lib/rancher/k3s/agent/images/
sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/
```

### 2. Create Registry YAML

Create the `registries.yaml` file at `/etc/rancher/k3s/registries.yaml`. This file tells K3s the details it needs to connect to your private registry.

The `registries.yaml` file should look like this before you fill in the necessary information:

```yaml
---
mirrors:
  customreg:
    endpoint:
      - "https://ip-to-server:5000"
configs:
  customreg:
    auth:
      username: xxxxxx # this is the registry username
      password: xxxxxx # this is the registry password
    tls:
      cert_file: <path to the cert file used in the registry>
      key_file: <path to the key file used in the registry>
      ca_file: <path to the ca file used in the registry>
```

Note that at this time, only secure registries are supported with K3s (SSL with custom CA).

For more information on the private registries configuration file for K3s, refer to the [K3s documentation.](https://rancher.com/docs/k3s/latest/en/installation/private-registry/)

### 3. Install K3s

Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)

To specify the K3s version, use the `INSTALL_K3S_VERSION` environment variable when running the K3s installation script.

Obtain the K3s binary from the [releases](https://github.com/rancher/k3s/releases) page, matching the same version used to get the airgap images tar. Also obtain the K3s install script at https://get.k3s.io.

Place the binary in `/usr/local/bin` on each node. Place the install script anywhere on each node, and name it `install.sh`.

Install K3s on each server:

```
INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh
```

Install K3s on each agent:

```
INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken ./install.sh
```

Take care to replace `myserver` with the IP or valid DNS name of the server, and `mynodetoken` with the node token from the server. The node token is on the server at `/var/lib/rancher/k3s/server/node-token`.

>**Note:** K3s additionally provides a `--resolv-conf` flag for kubelets, which may help with configuring DNS in air-gap networks.

### 4. Save and Start Using the kubeconfig File

When you installed K3s on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/k3s/k3s.yaml`. This file contains credentials for full access to the cluster, and you should save this file in a secure location.

To use this `kubeconfig` file,

1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool.
2. Copy the file at `/etc/rancher/k3s/k3s.yaml` and save it to the directory `~/.kube/config` on your local machine.
3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your load balancer, referring to port 6443. (The Kubernetes API server will be reached at port 6443, while the Rancher server will be reached at ports 80 and 443.) Here is an example `k3s.yaml`:

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [CERTIFICATE-DATA]
    server: [LOAD-BALANCER-DNS]:6443 # Edit this line
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: [PASSWORD]
    username: admin
```

**Result:** You can now use `kubectl` to manage your K3s cluster. If you have more than one kubeconfig file, you can specify which one you want to use by passing in the path to the file when using `kubectl`:

```
kubectl --kubeconfig ~/.kube/config/k3s.yaml get pods --all-namespaces
```

For more information about the `kubeconfig` file, refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/cluster-access/) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files.

### Note on Upgrading

Upgrading an air-gap environment can be accomplished in the following manner:

1. Download the new air-gap images (tar file) from the [releases](https://github.com/rancher/k3s/releases) page for the version of K3s you will be upgrading to. Place the tar in the `/var/lib/rancher/k3s/agent/images/` directory on each node. Delete the old tar file.
2. Copy and replace the old K3s binary in `/usr/local/bin` on each node. Copy over the install script at https://get.k3s.io (as it is possible it has changed since the last release). Run the script again just as you did in the past, with the same environment variables.
3. Restart the K3s service (if it is not restarted automatically by the installer).

</TabItem>
<TabItem value="RKE">

We will create a Kubernetes cluster using Rancher Kubernetes Engine (RKE). Before you can start your Kubernetes cluster, you'll need to install RKE and create an RKE config file.

### 1. Install RKE

Install RKE by following the instructions in the [RKE documentation.](https://rancher.com/docs/rke/latest/en/installation/)

### 2. Create an RKE Config File

From a system that can access ports 22/TCP and 6443/TCP on the Linux host node(s) that you set up in a previous step, use the sample below to create a new file named `rancher-cluster.yml`.

This file is an RKE configuration file, which is a configuration for the cluster you're deploying Rancher to.

Replace values in the code sample below with the help of the _RKE Options_ table. Use the IP addresses or DNS names of the [3 nodes](installation/air-gap-high-availability/provision-hosts) you created.

> **Tip:** For more details on the options available, see the RKE [Config Options](https://rancher.com/docs/rke/latest/en/config-options/).

<figcaption>RKE Options</figcaption>

| Option | Required | Description |
| ------------------ | -------------------- | --------------------------------------------------------------------------------------- |
| `address` | ✓ | The DNS or IP address for the node within the air gapped network. |
| `user` | ✓ | A user that can run Docker commands. |
| `role` | ✓ | List of Kubernetes roles assigned to the node. |
| `internal_address` | optional<sup>1</sup> | The DNS or IP address used for internal cluster traffic. |
| `ssh_key_path` | | Path to the SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`). |

> <sup>1</sup> Some services like AWS EC2 require setting the `internal_address` if you want to use self-referencing security groups or firewalls.

```yaml
|
||||
nodes:
|
||||
- address: 10.10.3.187 # node air gap network IP
|
||||
internal_address: 172.31.7.22 # node intra-cluster IP
|
||||
user: rancher
|
||||
role: ['controlplane', 'etcd', 'worker']
|
||||
ssh_key_path: /home/user/.ssh/id_rsa
|
||||
- address: 10.10.3.254 # node air gap network IP
|
||||
internal_address: 172.31.13.132 # node intra-cluster IP
|
||||
user: rancher
|
||||
role: ['controlplane', 'etcd', 'worker']
|
||||
ssh_key_path: /home/user/.ssh/id_rsa
|
||||
- address: 10.10.3.89 # node air gap network IP
|
||||
internal_address: 172.31.3.216 # node intra-cluster IP
|
||||
user: rancher
|
||||
role: ['controlplane', 'etcd', 'worker']
|
||||
ssh_key_path: /home/user/.ssh/id_rsa
|
||||
|
||||
private_registries:
|
||||
- url: <REGISTRY.YOURDOMAIN.COM:PORT> # private registry url
|
||||
user: rancher
|
||||
password: '*********'
|
||||
is_default: true
|
||||
```
|
||||
|
||||
### 3. Run RKE
|
||||
|
||||
After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:
|
||||
|
||||
```
|
||||
rke up --config ./rancher-cluster.yml
|
||||
```
|
||||
|
||||
### 4. Save Your Files
|
||||
|
||||
> **Important**
|
||||
> The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.
|
||||
|
||||
Save a copy of the following files in a secure location:
|
||||
|
||||
- `rancher-cluster.yml`: The RKE cluster configuration file.
|
||||
- `kube_config_rancher-cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster.
|
||||
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state), this file contains the current state of the cluster including the RKE configuration and the certificates.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
> **Note:** The "rancher-cluster" parts of the two latter file names are dependent on how you name the RKE cluster configuration file.
|
||||
|
||||
### Issues or errors?
|
||||
|
||||
See the [Troubleshooting](installation/options/troubleshooting/) page.
|
||||
|
||||
### [Next: Install Rancher](install-rancher-ha.md)
+367
@@ -0,0 +1,367 @@

---
title: 4. Install Rancher
weight: 400
aliases:
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/config-rancher-system-charts/
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/config-rancher-for-private-reg/
- /rancher/v2.0-v2.4/en/installation/air-gap-single-node/install-rancher
- /rancher/v2.0-v2.4/en/installation/air-gap/install-rancher
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/install-rancher/
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

This section describes how to deploy Rancher in an air gapped environment, such as one where the Rancher server is installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a Docker installation.

<Tabs>
<TabItem value="Kubernetes Install (Recommended)">

Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes install consists of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.

This section describes installing Rancher in five parts:

- [1. Add the Helm Chart Repository](#1-add-the-helm-chart-repository)
- [2. Choose your SSL Configuration](#2-choose-your-ssl-configuration)
- [3. Render the Rancher Helm Template](#3-render-the-rancher-helm-template)
- [4. Install Rancher](#4-install-rancher)
- [5. For Rancher versions before v2.3.0, Configure System Charts](#5-for-rancher-versions-before-v2-3-0-configure-system-charts)

# 1. Add the Helm Chart Repository

From a system that has access to the internet, fetch the latest Helm chart and copy the resulting manifests to a system that has access to the Rancher server cluster.

1. If you haven't already, install `helm` locally on a workstation that has internet access. Note: Refer to the [Helm version requirements](installation/options/helm-version) to choose a version of Helm to install Rancher.

2. Use the `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher](../../../../reference-guides/installation-references/helm-chart-options.md#helm-chart-repositories).

    {{< release-channel >}}

    ```
    helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
    ```

3. Fetch the latest Rancher chart. This will pull down the chart and save it in the current directory as a `.tgz` file.

    ```plain
    helm fetch rancher-<CHART_REPO>/rancher
    ```

    If you require a specific version of Rancher, you can fetch it with the Helm `--version` parameter, as in the following example:

    ```plain
    helm fetch rancher-stable/rancher --version=v2.4.8
    ```

# 2. Choose your SSL Configuration

Rancher Server is designed to be secure by default and requires SSL/TLS configuration.

When Rancher is installed on an air gapped Kubernetes cluster, there are two recommended options for the source of the certificate.

> **Note:** If you want to terminate SSL/TLS externally, see [TLS termination on an External Load Balancer](../../../../reference-guides/installation-references/helm-chart-options.md#external-tls-termination).

| Configuration | Chart option | Description | Requires cert-manager |
| ------------------------------------------ | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- |
| Rancher Generated Self-Signed Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed)<br/> This is the **default** and does not need to be added when rendering the Helm template. | yes |
| Certificates from Files | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s). <br/> This option must be passed when rendering the Rancher Helm template. | no |

# 3. Render the Rancher Helm Template

When setting up the Rancher Helm template, there are several options in the Helm chart that are designed specifically for air gap installations.

| Chart Option | Chart Value | Description |
| ----------------------- | -------------------------------- | ---- |
| `certmanager.version` | `<version>` | Configure the proper Rancher TLS issuer depending on the running cert-manager version. |
| `systemDefaultRegistry` | `<REGISTRY.YOURDOMAIN.COM:PORT>` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
| `useBundledSystemChart` | `true` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. _Available as of v2.3.0_ |

Based on the choice you made in [2. Choose your SSL Configuration](#2-choose-your-ssl-configuration), complete one of the procedures below.

### Option A: Default Self-Signed Certificate

<details id="k8s-1">
<summary>Click to expand</summary>

By default, Rancher generates a CA and uses cert-manager to issue the certificate for access to the Rancher server interface.

> **Note:**
> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.11.0, please see our [upgrade cert-manager documentation](installation/options/upgrading-cert-manager/).

1. From a system connected to the internet, add the cert-manager repo to Helm.

    ```plain
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    ```

1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager).

    ```plain
    helm fetch jetstack/cert-manager --version v1.0.4
    ```

1. Render the cert-manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.

    ```plain
    helm template cert-manager ./cert-manager-v1.0.4.tgz --output-dir . \
    --namespace cert-manager \
    --set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller \
    --set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook \
    --set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector
    ```

1. Download the required CRD file for cert-manager.

    ```plain
    curl -L -o cert-manager/cert-manager-crd.yaml https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.crds.yaml
    ```

1. Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.

    Placeholder | Description
    ------------|-------------
    `<VERSION>` | The version number of the output tarball.
    `<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer.
    `<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry.
    `<CERTMANAGER_VERSION>` | The cert-manager version running on the Kubernetes cluster.

    ```plain
    helm template rancher ./rancher-<VERSION>.tgz --output-dir . \
    --namespace cattle-system \
    --set hostname=<RANCHER.YOURDOMAIN.COM> \
    --set certmanager.version=<CERTMANAGER_VERSION> \
    --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
    --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
    --set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts
    ```

**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, for example: `--set rancherImageTag=v2.3.6`

</details>

### Option B: Certificates From Files using Kubernetes Secrets

<details id="k8s-2">
<summary>Click to expand</summary>

Create Kubernetes secrets from your own certificates for Rancher to use. The common name for the cert will need to match the `hostname` option in the command below, or the ingress controller will fail to provision the site for Rancher.

Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.

| Placeholder | Description |
| -------------------------------- | ----------------------------------------------- |
| `<VERSION>` | The version number of the output tarball. |
| `<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry. |

```plain
helm template rancher ./rancher-<VERSION>.tgz --output-dir . \
--namespace cattle-system \
--set hostname=<RANCHER.YOURDOMAIN.COM> \
--set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
--set ingress.tls.source=secret \
--set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
--set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts
```

If you are using a Private CA signed cert, add `--set privateCA=true` after `--set ingress.tls.source=secret`:

```plain
helm template rancher ./rancher-<VERSION>.tgz --output-dir . \
--namespace cattle-system \
--set hostname=<RANCHER.YOURDOMAIN.COM> \
--set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
--set ingress.tls.source=secret \
--set privateCA=true \
--set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
--set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts
```

**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, for example: `--set rancherImageTag=v2.3.6`

Then refer to [Adding TLS Secrets](installation/resources/encryption/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them.

</details>

# 4. Install Rancher

Copy the rendered manifest directories to a system that has access to the Rancher server cluster to complete the installation.

Use `kubectl` to create namespaces and apply the rendered manifests.

If you chose to use self-signed certificates in [2. Choose your SSL Configuration](#2-choose-your-ssl-configuration), install cert-manager.

### For Self-Signed Certificate Installs, Install Cert-manager

<details id="install-cert-manager">
<summary>Click to expand</summary>

If you are using self-signed certificates, install cert-manager:

1. Create the namespace for cert-manager.

    ```plain
    kubectl create namespace cert-manager
    ```

1. Create the cert-manager CustomResourceDefinitions (CRDs).

    ```plain
    kubectl apply -f cert-manager/cert-manager-crd.yaml
    ```

    > **Note:**
    > If you are running Kubernetes v1.15 or below, you will need to add the `--validate=false` flag to the `kubectl apply` command above, or else you will receive a validation error relating to the `x-kubernetes-preserve-unknown-fields` field in cert-manager's CustomResourceDefinition resources. This is a benign error and occurs due to the way kubectl performs resource validation.

1. Launch cert-manager.

    ```plain
    kubectl apply -R -f ./cert-manager
    ```

</details>

### Install Rancher with kubectl

```plain
kubectl create namespace cattle-system
kubectl -n cattle-system apply -R -f ./rancher
```

**Step Result:** If you are installing Rancher v2.3.0+, the installation is complete.

> **Note:** If you don't intend to send telemetry data, opt out of [telemetry](../../../../faq/telemetry.md) during the initial login. Leaving this active in an air gapped environment can cause issues if the sockets cannot be opened successfully.

# 5. For Rancher versions before v2.3.0, Configure System Charts

If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted on GitHub, an air gapped installation will not be able to access them. Therefore, you must [configure the Rancher system charts](../../resources/local-system-charts.md).

# Additional Resources

These resources could be helpful when installing Rancher:

- [Rancher Helm chart options](installation/resources/chart-options/)
- [Adding TLS secrets](installation/resources/encryption/tls-secrets/)
- [Troubleshooting Rancher Kubernetes Installations](installation/options/troubleshooting/)

</TabItem>
<TabItem value="Docker Install">

The Docker installation is for Rancher users who want to test out Rancher.

Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.

> **Important:** There is no upgrade path to transition your Docker installation to a Kubernetes installation. Instead of running the single node installation, you have the option to follow the Kubernetes Install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a Kubernetes installation.

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.

| Environment Variable Key | Environment Variable Value | Description |
| -------------------------------- | -------------------------------- | ---- |
| `CATTLE_SYSTEM_DEFAULT_REGISTRY` | `<REGISTRY.YOURDOMAIN.COM:PORT>` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
| `CATTLE_SYSTEM_CATALOG` | `bundled` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. _Available as of v2.3.0_ |

> **Do you want to...**
>
> - Configure a custom CA root certificate to access your services? See [Custom CA root certificate](installation/options/custom-ca-root-certificate/).
> - Record all transactions with the Rancher API? See [API Auditing](../../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#api-audit-log).

For Rancher before v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher before v2.3.0.](../../resources/local-system-charts.md)

Choose from the following options:

### Option A: Default Self-Signed Certificate

<details id="option-a">
<summary>Click to expand</summary>

If you are installing Rancher in a development or testing environment where identity verification isn't a concern, install Rancher using the self-signed certificate that it generates. This installation option spares you the hassle of generating a certificate yourself.

Log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder.

| Placeholder | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/resources/chart-options/) that you want to install. |

```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Set a default private registry to be used in Rancher
-e CATTLE_SYSTEM_CATALOG=bundled \ # Available as of v2.3.0, use the packaged Rancher system charts
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

</details>

### Option B: Bring Your Own Certificate: Self-Signed

<details id="option-b">
<summary>Click to expand</summary>

In development or testing environments where your team will access your Rancher server, create a self-signed certificate for use with your install so that your team can verify they're connecting to your instance of Rancher.

> **Prerequisites:**
> From a computer with an internet connection, create a self-signed certificate using [OpenSSL](https://www.openssl.org/) or another method of your choice.
>
> - The certificate files must be in PEM format.
> - In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](../rancher-on-a-single-node-with-docker/certificate-troubleshooting.md)
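
As an illustration, a throwaway self-signed certificate can be generated with OpenSSL; the hostname `rancher.example.com` is a placeholder, so substitute the DNS name you will use for Rancher:

```shell
# Generate a self-signed cert/key pair valid for one year (testing only)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=rancher.example.com"
```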

After creating your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Use the `-v` flag and provide the path to your certificates to mount them in your container.

| Placeholder | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `<CERT_DIRECTORY>` | The path to the directory containing your certificate files. |
| `<FULL_CHAIN.pem>` | The path to your full certificate chain. |
| `<PRIVATE_KEY.pem>` | The path to the private key for your certificate. |
| `<CA_CERTS.pem>` | The path to the certificate authority's certificate. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/resources/chart-options/) that you want to install. |

```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
-v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
-v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
-e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Set a default private registry to be used in Rancher
-e CATTLE_SYSTEM_CATALOG=bundled \ # Available as of v2.3.0, use the packaged Rancher system charts
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

</details>

### Option C: Bring Your Own Certificate: Signed by Recognized CA

<details id="option-c">
<summary>Click to expand</summary>

In development or testing environments where you're exposing an app publicly, use a certificate signed by a recognized CA so that your user base doesn't encounter security warnings.

> **Prerequisite:** The certificate files must be in PEM format.

After obtaining your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Because your certificate is signed by a recognized CA, mounting an additional CA certificate file is unnecessary.

| Placeholder | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `<CERT_DIRECTORY>` | The path to the directory containing your certificate files. |
| `<FULL_CHAIN.pem>` | The path to your full certificate chain. |
| `<PRIVATE_KEY.pem>` | The path to the private key for your certificate. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/resources/chart-options/) that you want to install. |

> **Note:** Use the `--no-cacerts` argument to the container to disable the default CA certificate generated by Rancher.

```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
--no-cacerts \
-v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
-v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
-e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Set a default private registry to be used in Rancher
-e CATTLE_SYSTEM_CATALOG=bundled \ # Available as of v2.3.0, use the packaged Rancher system charts
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

</details>

If you are installing Rancher v2.3.0+, the installation is complete.

> **Note:** If you don't intend to send telemetry data, opt out of [telemetry](../../../../faq/telemetry.md) during the initial login.

If you are installing Rancher versions before v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted on GitHub, an air gapped installation will not be able to access them. Therefore, you must [configure the Rancher system charts](../../resources/local-system-charts.md).

</TabItem>
</Tabs>

@@ -0,0 +1,301 @@
|

---
title: '2. Collect and Publish Images to your Private Registry'
weight: 200
aliases:
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/prepare-private-registry/
- /rancher/v2.0-v2.4/en/installation/air-gap-single-node/prepare-private-registry/
- /rancher/v2.0-v2.4/en/installation/air-gap-single-node/config-rancher-for-private-reg/
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/config-rancher-for-private-reg/
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

This section describes how to set up your private registry so that when you install Rancher, Rancher will pull all the required images from this registry.

By default, all images used to [provision Kubernetes clusters](../../../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md) or launch any [tools](../../../../reference-guides/rancher-cluster-tools.md) in Rancher, e.g. monitoring, pipelines, and alerts, are pulled from Docker Hub. In an air gapped installation of Rancher, you will need a private registry that is accessible by your Rancher server. Then, you will load the registry with all the images.

Populating the private registry with images is the same process whether you are installing Rancher with Docker or on a Kubernetes cluster.

The steps in this section differ depending on whether you plan to use Rancher to provision downstream clusters with Windows nodes. By default, we provide the steps for populating your private registry assuming that Rancher will provision downstream Kubernetes clusters with only Linux nodes. If you plan on provisioning any [downstream Kubernetes clusters using Windows nodes](../../../../pages-for-subheaders/use-windows-clusters.md), there are separate instructions to cover the images needed.

> **Prerequisites:**
>
> You must have a [private registry](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry) available to use.
>
> If the registry has certs, follow [this K3s documentation](https://rancher.com/docs/k3s/latest/en/installation/private-registry/) about adding a private registry. The certs and registry configuration files need to be mounted into the Rancher container.
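
For illustration, a hypothetical K3s private registry configuration (the `registries.yaml` file referenced above) might look like the following; the registry host, port, and CA path are placeholders:

```yaml
# Hypothetical /etc/rancher/k3s/registries.yaml
mirrors:
  "registry.yourdomain.com:5000":
    endpoint:
      - "https://registry.yourdomain.com:5000"
configs:
  "registry.yourdomain.com:5000":
    tls:
      ca_file: /etc/ssl/certs/registry-ca.pem # CA cert used to verify the registry
```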
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="Linux Only Clusters">
|
||||
|
||||
For Rancher servers that will only provision Linux clusters, these are the steps to populate your private registry.
|
||||
|
||||
1. [Find the required assets for your Rancher version](#1-find-the-required-assets-for-your-rancher-version)
|
||||
2. [Collect the cert-manager image](#2-collect-the-cert-manager-image) (unless you are bringing your own certificates or terminating TLS on a load balancer)
|
||||
3. [Save the images to your workstation](#3-save-the-images-to-your-workstation)
|
||||
4. [Populate the private registry](#4-populate-the-private-registry)
|
||||
|
||||
### Prerequisites
|
||||
|
||||
These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space.

If you will use ARM64 hosts, the registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests.

### 1. Find the required assets for your Rancher version

1. Go to our [releases page,](https://github.com/rancher/rancher/releases) find the Rancher v2.x.x release that you want to install, and click **Assets.** Note: Don't use releases marked `rc` or `Pre-release`, as they are not stable for production environments.

2. From the release's **Assets** section, download the following files, which are required to install Rancher in an air gap environment:

| Release File | Description |
| ---------------- | -------------- |
| `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters, and use Rancher tools. |
| `rancher-save-images.sh` | This script pulls all the images in `rancher-images.txt` from Docker Hub and saves them as `rancher-images.tar.gz`. |
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |

### 2. Collect the cert-manager image

> Skip this step if you are using your own certificates, or if you are terminating TLS on an external load balancer.

In a Kubernetes install, if you elect to use the Rancher default self-signed TLS certificates, you must also add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt`.

1. Fetch the latest `cert-manager` Helm chart and parse the template for image details:

   > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, see our [upgrade documentation](installation/options/upgrading-cert-manager/).

   ```plain
   helm repo add jetstack https://charts.jetstack.io
   helm repo update
   helm fetch jetstack/cert-manager --version v1.0.4
   helm template ./cert-manager-<version>.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt
   ```

2. Sort and de-duplicate the image list to remove any overlap between the sources:

   ```plain
   sort -u rancher-images.txt -o rancher-images.txt
   ```
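
The extraction and de-duplication steps above can be sanity-checked with a small self-contained sketch. The image names and the `demo-images.txt` file below are made up purely for illustration:

```shell
# Simulate the "helm template | grep" extraction on a sample manifest line
echo '          image: "quay.io/jetstack/cert-manager-controller:v1.0.4"' \
  | grep -oP '(?<=image: ").*(?=")' >> demo-images.txt

# Simulate overlap between sources: the same image appears twice
echo 'quay.io/jetstack/cert-manager-controller:v1.0.4' >> demo-images.txt
echo 'rancher/rancher-agent:v2.4.8' >> demo-images.txt

# De-duplicate in place, exactly as done above for rancher-images.txt
sort -u demo-images.txt -o demo-images.txt
cat demo-images.txt
```

The `sort -u … -o` form is used because it safely writes the de-duplicated result back to the same file.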

### 3. Save the images to your workstation

1. Make `rancher-save-images.sh` executable:

   ```plain
   chmod +x rancher-save-images.sh
   ```

1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images:

   ```plain
   ./rancher-save-images.sh --image-list ./rancher-images.txt
   ```

   **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, a tarball named `rancher-images.tar.gz` is written to your current directory. Check that the tarball is there before continuing.

### 4. Populate the private registry

Next, move the images in `rancher-images.tar.gz` to your private registry using the scripts to load the images.

The `rancher-images.txt` file is expected to be on the workstation, in the same directory where you run the `rancher-load-images.sh` script. The `rancher-images.tar.gz` file should also be in the same directory.

1. Log into your private registry, if required:

   ```plain
   docker login <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

1. Make `rancher-load-images.sh` executable:

   ```plain
   chmod +x rancher-load-images.sh
   ```

1. Use `rancher-load-images.sh` to extract, tag, and push the images from `rancher-images.txt` and `rancher-images.tar.gz` to your private registry:

   ```plain
   ./rancher-load-images.sh --image-list ./rancher-images.txt --registry <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

</TabItem>
<TabItem value="Linux and Windows Clusters">

_Available as of v2.3.0_

For Rancher servers that will provision both Linux and Windows clusters, there are separate steps to populate your private registry for the Windows images and the Linux images. Since a Windows cluster is a mix of Linux and Windows nodes, the Linux images pushed into the private registry are manifests that support both Windows and Linux images.

# Windows Steps

The Windows images need to be collected and pushed from a Windows Server workstation.

1. <a href="#windows-1">Find the required assets for your Rancher version</a>
2. <a href="#windows-2">Save the images to your Windows Server workstation</a>
3. <a href="#windows-3">Prepare the Docker daemon</a>
4. <a href="#windows-4">Populate the private registry</a>

### Prerequisites

These steps expect you to use a Windows Server 1809 workstation that has internet access, access to your private registry, and at least 50 GB of disk space.

The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters.

Your registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests.

<a name="windows-1"></a>

### 1. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.

2. From the release's **Assets** section, download the following files:

| Release File | Description |
|----------------------------|------------------|
| `rancher-windows-images.txt` | This file contains a list of Windows images needed to provision Windows clusters. |
| `rancher-save-images.ps1` | This script pulls all the images in `rancher-windows-images.txt` from Docker Hub and saves them as `rancher-windows-images.tar.gz`. |
| `rancher-load-images.ps1` | This script loads the images from the `rancher-windows-images.tar.gz` file and pushes them to your private registry. |

<a name="windows-2"></a>

### 2. Save the images to your Windows Server workstation

1. Using `powershell`, go to the directory that contains the files downloaded in the previous step.

1. Run `rancher-save-images.ps1` to create a tarball of all the required images:

   ```plain
   ./rancher-save-images.ps1
   ```

   **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, a tarball named `rancher-windows-images.tar.gz` is written to your current directory. Check that the tarball is there before continuing.

<a name="windows-3"></a>

### 3. Prepare the Docker daemon

Append your private registry address to the `allow-nondistributable-artifacts` field in the Docker daemon configuration (`C:\ProgramData\Docker\config\daemon.json`). This step is required because the Windows base images are maintained in the `mcr.microsoft.com` registry; their layers are missing from Docker Hub and need to be pulled into the private registry.

```
{
  ...
  "allow-nondistributable-artifacts": [
    ...
    "<REGISTRY.YOURDOMAIN.COM:PORT>"
  ]
  ...
}
```

<a name="windows-4"></a>

### 4. Populate the private registry

Move the images in `rancher-windows-images.tar.gz` to your private registry using the script to load the images.

The `rancher-windows-images.txt` file is expected to be on the workstation, in the same directory where you run the `rancher-load-images.ps1` script. The `rancher-windows-images.tar.gz` file should also be in the same directory.

1. Using `powershell`, log into your private registry, if required:

   ```plain
   docker login <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

1. Using `powershell`, use `rancher-load-images.ps1` to extract, tag, and push the images from `rancher-windows-images.tar.gz` to your private registry:

   ```plain
   ./rancher-load-images.ps1 --registry <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

# Linux Steps

The Linux images need to be collected and pushed from a Linux host, and this _must be done after_ populating the private registry with the Windows images. These steps differ from the Linux-only steps because the Linux images that are pushed will actually be manifests that support both Windows and Linux images.

1. <a href="#linux-1">Find the required assets for your Rancher version</a>
2. <a href="#linux-2">Collect all the required images</a>
3. <a href="#linux-3">Save the images to your Linux workstation</a>
4. <a href="#linux-4">Populate the private registry</a>

### Prerequisites

You must populate the private registry with the Windows images before populating it with Linux images. If you have already populated the registry with Linux images, you will need to follow these instructions again, as they publish manifests that support both Windows and Linux images.

These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space.

The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters.

<a name="linux-1"></a>

### 1. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. Click **Assets.**

2. From the release's **Assets** section, download the following files:

| Release File | Description |
|----------------------------| -------------------------- |
| `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters, and use Rancher tools. |
| `rancher-windows-images.txt` | This file contains a list of images needed to provision Windows clusters. |
| `rancher-save-images.sh` | This script pulls all the images in `rancher-images.txt` from Docker Hub and saves them as `rancher-images.tar.gz`. |
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |

<a name="linux-2"></a>

### 2. Collect all the required images

**For Kubernetes Installs using the Rancher-generated self-signed certificate:** If you elect to use the Rancher default self-signed TLS certificates, you must also add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt`. Skip this step if you are using your own certificates.

1. Fetch the latest `cert-manager` Helm chart and parse the template for image details:

   > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, see our [upgrade documentation](installation/options/upgrading-cert-manager/).

   ```plain
   helm repo add jetstack https://charts.jetstack.io
   helm repo update
   helm fetch jetstack/cert-manager --version v0.12.0
   helm template ./cert-manager-<version>.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt
   ```

2. Sort and de-duplicate the image list to remove any overlap between the sources:

   ```plain
   sort -u rancher-images.txt -o rancher-images.txt
   ```

<a name="linux-3"></a>

### 3. Save the images to your workstation

1. Make `rancher-save-images.sh` executable:

   ```plain
   chmod +x rancher-save-images.sh
   ```

1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images:

   ```plain
   ./rancher-save-images.sh --image-list ./rancher-images.txt
   ```

   **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, a tarball named `rancher-images.tar.gz` is written to your current directory. Check that the tarball is there before continuing.

<a name="linux-4"></a>

### 4. Populate the private registry

Move the images in `rancher-images.tar.gz` to your private registry using the `rancher-load-images.sh` script to load the images.

The image lists, `rancher-images.txt` and `rancher-windows-images.txt`, are expected to be on the workstation, in the same directory where you run the `rancher-load-images.sh` script. The `rancher-images.tar.gz` file should also be in the same directory.

1. Log into your private registry, if required:

   ```plain
   docker login <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

1. Make `rancher-load-images.sh` executable:

   ```plain
   chmod +x rancher-load-images.sh
   ```

1. Use `rancher-load-images.sh` to extract, tag, and push the images from `rancher-images.tar.gz` to your private registry:

   ```plain
   ./rancher-load-images.sh --image-list ./rancher-images.txt \
       --windows-image-list ./rancher-windows-images.txt \
       --registry <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

</TabItem>
</Tabs>

### [Next step for Kubernetes Installs - Launch a Kubernetes Cluster](install-kubernetes.md)

### [Next step for Docker Installs - Install Rancher](install-rancher-ha.md)

---
title: '2. Install Kubernetes'
weight: 200
---

Once the infrastructure is ready, you can continue with setting up an RKE cluster to install Rancher in.

### Installing Docker

First, you have to install Docker and set up the HTTP proxy on all three Linux nodes. To do this, perform the following steps on all three nodes.

For convenience, export the IP address and port of your proxy into an environment variable and set up the proxy variables for your current shell:

```
export proxy_host="10.0.0.5:8888"
export HTTP_PROXY=http://${proxy_host}
export HTTPS_PROXY=http://${proxy_host}
export NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16
```

Next, configure `apt` to use this proxy when installing packages. Note that the heredoc delimiter is unquoted, so `${proxy_host}` is expanded by the shell when the file is written; `apt` itself does not expand shell variables. If you are not using Ubuntu, adapt this step accordingly:

```
cat <<EOF | sudo tee /etc/apt/apt.conf.d/proxy.conf > /dev/null
Acquire::http::Proxy "http://${proxy_host}/";
Acquire::https::Proxy "http://${proxy_host}/";
EOF
```
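
One subtlety worth knowing: whether `${proxy_host}` ends up expanded in the written file depends on whether the heredoc delimiter is quoted. Since the consumers of these files do not expand shell variables, the files must contain the concrete address. A quick sketch of the difference, using demo values and `/tmp` paths:

```shell
proxy_host="10.0.0.5:8888"

# Unquoted delimiter: the shell substitutes ${proxy_host} at write time
cat <<EOF > /tmp/expanded.conf
Acquire::http::Proxy "http://${proxy_host}/";
EOF

# Quoted delimiter: the literal text ${proxy_host} ends up in the file
cat <<'EOF' > /tmp/literal.conf
Acquire::http::Proxy "http://${proxy_host}/";
EOF

cat /tmp/expanded.conf
cat /tmp/literal.conf
```

The first file contains the concrete proxy address; the second contains the literal, unexpanded variable name.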

Now you can install Docker:

```
curl -sL https://releases.rancher.com/install-docker/19.03.sh | sh
```

Then ensure that your current user is able to access the Docker daemon without sudo:

```
sudo usermod -aG docker YOUR_USERNAME
```

And configure the Docker daemon to use the proxy to pull images. Again, the heredoc delimiter is unquoted so that `${proxy_host}` is substituted before systemd reads the file:

```
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf > /dev/null
[Service]
Environment="HTTP_PROXY=http://${proxy_host}"
Environment="HTTPS_PROXY=http://${proxy_host}"
Environment="NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16"
EOF
```

To apply the configuration, restart the Docker daemon:

```
sudo systemctl daemon-reload
sudo systemctl restart docker
```

### Creating the RKE Cluster

You need several command line tools on the host where you have SSH access to the Linux nodes to create and interact with the cluster:

* [RKE CLI binary](https://rancher.com/docs/rke/latest/en/installation/#download-the-rke-binary)

  ```
  sudo curl -fsSL -o /usr/local/bin/rke https://github.com/rancher/rke/releases/download/v1.1.4/rke_linux-amd64
  sudo chmod +x /usr/local/bin/rke
  ```

* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)

  ```
  curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
  chmod +x ./kubectl
  sudo mv ./kubectl /usr/local/bin/kubectl
  ```

* [helm](https://helm.sh/docs/intro/install/)

  ```
  curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
  chmod +x get_helm.sh
  sudo ./get_helm.sh
  ```

Next, create a YAML file that describes the RKE cluster. Ensure that the IP addresses of the nodes and the SSH username are correct. For more information on the cluster YAML, have a look at the [RKE documentation](https://rancher.com/docs/rke/latest/en/example-yamls/).

```
nodes:
  - address: 10.0.1.200
    user: ubuntu
    role: [controlplane,worker,etcd]
  - address: 10.0.1.201
    user: ubuntu
    role: [controlplane,worker,etcd]
  - address: 10.0.1.202
    user: ubuntu
    role: [controlplane,worker,etcd]

services:
  etcd:
    backup_config:
      interval_hours: 12
      retention: 6
```

After that, you can create the Kubernetes cluster by running:

```
rke up --config rancher-cluster.yml
```

RKE creates a state file called `rancher-cluster.rkestate`, which is needed if you want to perform updates, modify your cluster configuration, or restore the cluster from a backup. It also creates a `kube_config_rancher-cluster.yml` file that you can use to connect to the remote Kubernetes cluster locally with tools like kubectl and Helm. Make sure to save all of these files in a secure location, for example by putting them into a version control system.

To have a look at your cluster, run:

```
export KUBECONFIG=kube_config_rancher-cluster.yml
kubectl cluster-info
kubectl get pods --all-namespaces
```

You can also verify that your external load balancer works and that the DNS entry is set up correctly. If you send a request to either, you should receive an HTTP 404 response from the ingress controller:

```
$ curl 10.0.1.100
default backend - 404
$ curl rancher.example.com
default backend - 404
```

### Save Your Files

> **Important**
> The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.

Save a copy of the following files in a secure location:

- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_rancher-cluster.yml`: The [Kubeconfig file](https://rancher.com/docs/rke/latest/en/kubeconfig/) for the cluster. This file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file](https://rancher.com/docs/rke/latest/en/installation/#kubernetes-cluster-state). This file contains the current state of the cluster, including the RKE configuration and the certificates.

> **Note:** The "rancher-cluster" parts of the latter two file names depend on how you named the RKE cluster configuration file.

### Issues or errors?

See the [Troubleshooting](installation/options/troubleshooting/) page.

### [Next: Install Rancher](install-rancher.md)

---
title: 3. Install Rancher
weight: 300
---

Now that you have a running RKE cluster, you can install Rancher in it. For security reasons, all traffic to Rancher must be encrypted with TLS. In this tutorial, you will automatically issue a self-signed certificate through [cert-manager](https://cert-manager.io/). In a real-world use case, you will likely use Let's Encrypt or provide your own certificate.

> **Note:** These installation instructions assume you are using Helm 3.

### Install cert-manager

Add the cert-manager Helm repository:

```
helm repo add jetstack https://charts.jetstack.io
```

Create a namespace for cert-manager:

```
kubectl create namespace cert-manager
```

Install the CustomResourceDefinitions of cert-manager:

```
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.2/cert-manager.crds.yaml
```

And install it with Helm. Note that cert-manager also needs your proxy configured in case it needs to communicate with Let's Encrypt or other external certificate issuers:

```
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --version v0.15.2 \
  --set http_proxy=http://${proxy_host} \
  --set https_proxy=http://${proxy_host} \
  --set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local
```

Now wait until cert-manager has finished starting up:

```
kubectl rollout status deployment -n cert-manager cert-manager
kubectl rollout status deployment -n cert-manager cert-manager-webhook
```

### Install Rancher

Next, you can install Rancher itself. First, add the Helm repository:

```
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
```

Create a namespace:

```
kubectl create namespace cattle-system
```

And install Rancher with Helm. Rancher also needs a proxy configuration so that it can communicate with external application catalogs and retrieve Kubernetes version update metadata:

```
helm upgrade --install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set proxy=http://${proxy_host}
```

Wait for the deployment to finish:

```
kubectl rollout status deployment -n cattle-system rancher
```

You can now navigate to `https://rancher.example.com` and start using Rancher.

> **Note:** If you don't intend to send telemetry data, opt out of [telemetry](../../../../faq/telemetry.md) during the initial login. Leaving this active in an air-gapped environment can cause issues if the sockets cannot be opened successfully.

### Additional Resources

These resources could be helpful when installing Rancher:

- [Rancher Helm chart options](installation/resources/chart-options/)
- [Adding TLS secrets](installation/resources/encryption/tls-secrets/)
- [Troubleshooting Rancher Kubernetes Installations](installation/options/troubleshooting/)

---
title: '1. Set up Infrastructure'
weight: 100
---

In this section, you will provision the underlying infrastructure for your Rancher management server with internet access through an HTTP proxy.

To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:

- **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon's EC2, Google Compute Engine, or vSphere.
- **A load balancer** to direct front-end traffic to the three nodes.
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.

These nodes must be in the same region/data center. You may place these servers in separate availability zones.

### Why three nodes?

In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes.

The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes.
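
The quorum arithmetic behind this recommendation can be sketched directly: a cluster of n members needs a majority of floor(n/2) + 1 to elect a leader, so it tolerates the loss of n minus that majority. This is why three is the smallest useful HA size, and why a fourth node buys no additional fault tolerance:

```shell
# Failure tolerance of an n-member etcd cluster:
# quorum = n/2 + 1 (integer division), tolerated failures = n - quorum
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "$n members: quorum $quorum, tolerates $(( n - quorum )) failure(s)"
done
```

For n = 3 this prints a quorum of 2 with one tolerated failure, matching the paragraph above; n = 4 still tolerates only one failure.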

### 1. Set up Linux Nodes

These hosts will connect to the internet through an HTTP proxy.

Make sure that your nodes fulfill the general installation requirements for [OS, container runtime, hardware, and networking.](../../../../pages-for-subheaders/installation-requirements.md)

For an example of one way to set up Linux nodes, refer to this [tutorial](installation/options/ec2-node) for setting up nodes as instances in Amazon EC2.

### 2. Set up the Load Balancer

You will also need to set up a load balancer to direct traffic to the Rancher replicas on all three nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.

When Kubernetes gets set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.

When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster.

For your implementation, consider whether you want or need to use a layer-4 or layer-7 load balancer:

- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a layer-4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic on port TCP/80 to the Ingress pod in the Rancher deployment.
- **A layer-7 load balancer** is a bit more complicated, but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize TLS termination in your infrastructure. Layer-7 load balancing also allows your load balancer to make decisions based on HTTP attributes, such as cookies, which a layer-4 load balancer cannot inspect. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.](../../../../reference-guides/installation-references/helm-chart-options.md#external-tls-termination)

For an example showing how to set up an NGINX load balancer, refer to [this page.](installation/options/nginx/)

For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.](installation/options/nlb/)

> **Important:**
> Do not use this load balancer (i.e., the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.

### 3. Set up the DNS Record

Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.

Depending on your environment, this may be an A record pointing to the load balancer IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on.

You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.

For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)

### [Next: Set up a Kubernetes cluster](install-kubernetes.md)

---
title: Certificate Troubleshooting
weight: 4
---

### How Do I Know if My Certificates are in PEM Format?

You can recognize the PEM format by the following traits:

- The file begins with the following header:

  ```
  -----BEGIN CERTIFICATE-----
  ```

- The header is followed by a long string of characters.
- The file ends with the following footer:

  ```
  -----END CERTIFICATE-----
  ```

PEM Certificate Example:

```
-----BEGIN CERTIFICATE-----
MIIGVDCCBDygAwIBAgIJAMiIrEm29kRLMA0GCSqGSIb3DQEBCwUAMHkxCzAJBgNV
... more lines
VWQqljhfacYPgp8KJUJENQ9h5hZ2nSCrI+W00Jcw4QcEdCI8HL5wmg==
-----END CERTIFICATE-----
```

PEM Certificate Key Example:

```
-----BEGIN RSA PRIVATE KEY-----
MIIGVDCCBDygAwIBAgIJAMiIrEm29kRLMA0GCSqGSIb3DQEBCwUAMHkxCzAJBgNV
... more lines
VWQqljhfacYPgp8KJUJENQ9h5hZ2nSCrI+W00Jcw4QcEdCI8HL5wmg==
-----END RSA PRIVATE KEY-----
```

If your key looks like the example below, see [Converting a Certificate Key From PKCS8 to PKCS1.](#converting-a-certificate-key-from-pkcs8-to-pkcs1)

```
-----BEGIN PRIVATE KEY-----
MIIGVDCCBDygAwIBAgIJAMiIrEm29kRLMA0GCSqGSIb3DQEBCwUAMHkxCzAJBgNV
... more lines
VWQqljhfacYPgp8KJUJENQ9h5hZ2nSCrI+W00Jcw4QcEdCI8HL5wmg==
-----END PRIVATE KEY-----
```

### Converting a Certificate Key From PKCS8 to PKCS1

If you are using a PKCS8 certificate key file, Rancher will log the following line:

```
ListenConfigController cli-config [listener] failed with : failed to read private key: asn1: structure error: tags don't match (2 vs {class:0 tag:16 length:13 isCompound:true})
```

To make this work, you will need to convert the key from PKCS8 to PKCS1 using the command below:

```
openssl rsa -in key.pem -out convertedkey.pem
```

You can now use `convertedkey.pem` as the certificate key file for Rancher.
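
If you are unsure which format a key is in, the PEM header alone tells you. Here is a small heuristic sketch using stub files; the file names and the `check_format` helper are illustrative only, not part of Rancher:

```shell
# Stub headers standing in for real key files (illustration only)
printf -- '-----BEGIN RSA PRIVATE KEY-----\n' > pkcs1-demo.pem
printf -- '-----BEGIN PRIVATE KEY-----\n' > pkcs8-demo.pem

# PKCS1 keys say "BEGIN RSA PRIVATE KEY"; a plain "BEGIN PRIVATE KEY" means PKCS8
check_format() {
  if grep -q 'BEGIN RSA PRIVATE KEY' "$1"; then
    echo "$1: PKCS1"
  else
    echo "$1: PKCS8 - convert with: openssl rsa -in $1 -out convertedkey.pem"
  fi
}

check_format pkcs1-demo.pem
check_format pkcs8-demo.pem
```

On a real key file, run the same `grep` against the first line of the file you intend to hand to Rancher.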
### What is the Order of Certificates if I Want to Add My Intermediate(s)?

The order of the certificates is as follows:

```
-----BEGIN CERTIFICATE-----
%YOUR_CERTIFICATE%
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
%YOUR_INTERMEDIATE_CERTIFICATE%
-----END CERTIFICATE-----
```
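Building the combined file is a single concatenation, with your server certificate first. The sketch below uses placeholder files and contents; substitute your real certificate files for the hypothetical `/tmp` names.

```shell
# Placeholder certificate bodies, for illustration only.
printf -- '-----BEGIN CERTIFICATE-----\nserver\n-----END CERTIFICATE-----\n' > /tmp/cert.pem
printf -- '-----BEGIN CERTIFICATE-----\nintermediate\n-----END CERTIFICATE-----\n' > /tmp/intermediate.pem

# Your certificate first, then the intermediate(s), as shown above.
cat /tmp/cert.pem /tmp/intermediate.pem > /tmp/fullchain.pem

# The combined file should contain one BEGIN line per certificate.
grep -c 'BEGIN CERTIFICATE' /tmp/fullchain.pem   # prints 2
```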
### How Do I Validate My Certificate Chain?

You can validate the certificate chain with the `openssl` binary. If the output of the command (see the example below) ends with `Verify return code: 0 (ok)`, your certificate chain is valid. The `ca.pem` file must be the same file that you added to the `rancher/rancher` container.

When using a certificate signed by a recognized Certificate Authority, you can omit the `-CAfile` parameter.

Command:

```
openssl s_client -CAfile ca.pem -connect rancher.yourdomain.com:443
...
Verify return code: 0 (ok)
```
---
title: Rolling Back Rancher Installed with Docker
weight: 1015
aliases:
- /rancher/v2.0-v2.4/en/upgrades/single-node-rollbacks
- /rancher/v2.0-v2.4/en/upgrades/rollbacks/single-node-rollbacks
---

If a Rancher upgrade does not complete successfully, you'll have to roll back to the Rancher setup that you were using before the [Docker upgrade](upgrades/upgrades/single-node-upgrade). Rolling back restores:

- Your previous version of Rancher.
- The data backup you created before upgrading.

## Before You Start

During the rollback to a prior version of Rancher, you'll enter a series of commands, filling placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<EXAMPLE>`). Here's an example of a command with a placeholder:

```
docker pull rancher/rancher:<PRIOR_RANCHER_VERSION>
```

In this command, `<PRIOR_RANCHER_VERSION>` is the version of Rancher you were running before your unsuccessful upgrade, for example `v2.0.5`.

Cross-reference the image and reference table below to learn how to obtain this placeholder data. Write down or copy this information before starting the procedure below.

<sup>Terminal <code>docker ps</code> Command, Displaying Where to Find <code>&lt;PRIOR_RANCHER_VERSION&gt;</code> and <code>&lt;RANCHER_CONTAINER_NAME&gt;</code></sup>

| Placeholder                | Example           | Description                                             |
| -------------------------- | ----------------- | ------------------------------------------------------- |
| `<PRIOR_RANCHER_VERSION>`  | `v2.0.5`          | The rancher/rancher image you used before upgrade.      |
| `<RANCHER_CONTAINER_NAME>` | `festive_mestorf` | The name of your Rancher container.                     |
| `<RANCHER_VERSION>`        | `v2.0.5`          | The version of Rancher that the backup is for.          |
| `<DATE>`                   | `9-27-18`         | The date that the data container or backup was created. |

<br/>

You can obtain `<PRIOR_RANCHER_VERSION>` and `<RANCHER_CONTAINER_NAME>` by logging into your Rancher server node over a remote connection and running `docker ps` to view the running containers. You can also view stopped containers with `docker ps -a`. Use these commands for help at any time while creating backups.
## Rolling Back Rancher

If you have issues upgrading Rancher, roll it back to its last known healthy state by pulling the last version you used and then restoring the backup you made before upgrading.

>**Warning!** Rolling back to a previous version of Rancher destroys any changes made to Rancher following the upgrade. Unrecoverable data loss may occur.

1. Using a remote Terminal connection, log into the node running your Rancher server.

1. Pull the version of Rancher that you were running before upgrade. Replace `<PRIOR_RANCHER_VERSION>` with that version.

    For example, if you were running Rancher v2.0.5 before upgrade, pull v2.0.5.

    ```
    docker pull rancher/rancher:<PRIOR_RANCHER_VERSION>
    ```

1. Stop the container currently running Rancher server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your Rancher container.

    ```
    docker stop <RANCHER_CONTAINER_NAME>
    ```

    You can obtain the name of your Rancher container by entering `docker ps`.

1. Move the backup tarball that you created during the [Docker upgrade](upgrades/upgrades/single-node-upgrade/) onto your Rancher server. Change to the directory that you moved it to, and enter `ls` to confirm that it's there.

    If you followed the naming convention suggested in the [Docker upgrade](upgrades/upgrades/single-node-upgrade/), it will have a name similar to `rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`.
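Before restoring, you can list the tarball without extracting it to confirm it really contains the `/var/lib/rancher` tree. The snippet below is a self-contained illustration that builds a tiny stand-in archive; with a real backup you would run only the final `tar tzf` against your actual file.

```shell
# Build a tiny stand-in for a rancher-data backup, purely for illustration.
mkdir -p /tmp/demo/var/lib/rancher
echo 'state' > /tmp/demo/var/lib/rancher/state-file
tar zcf /tmp/demo-backup.tar.gz -C /tmp/demo var/lib/rancher

# List the archive without extracting it; you should see var/lib/rancher entries.
tar tzf /tmp/demo-backup.tar.gz
```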
1. Run the following command to replace the data in the `rancher-data` container with the data in the backup tarball, replacing the placeholders. Don't forget to close the quotes.

    ```
    docker run --volumes-from rancher-data \
    -v $PWD:/backup busybox sh -c "rm /var/lib/rancher/* -rf \
    && tar zxvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz"
    ```

1. Start a new Rancher server container using the `<PRIOR_RANCHER_VERSION>` tag and the data from the `rancher-data` container.

    ```
    docker run -d --volumes-from rancher-data \
    --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    rancher/rancher:<PRIOR_RANCHER_VERSION>
    ```

    >**Note:** _Do not_ stop the rollback after initiating it, even if the rollback process seems longer than expected. Stopping the rollback may result in database issues during future upgrades.

1. Wait a few moments and then open Rancher in a web browser. Confirm that the rollback succeeded and that your data is restored.

**Result:** Rancher is rolled back to the version and data state it had before the upgrade.
---
title: Upgrading Rancher Installed with Docker
weight: 1010
aliases:
- /rancher/v2.0-v2.4/en/upgrades/single-node-upgrade/
- /rancher/v2.0-v2.4/en/upgrades/upgrades/single-node-air-gap-upgrade
- /rancher/v2.0-v2.4/en/upgrades/upgrades/single-node
- /rancher/v2.0-v2.4/en/upgrades/upgrades/single-node-upgrade/
- /rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/upgrades/upgrades/single-node/
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

The following instructions will guide you through upgrading a Rancher server that was installed with Docker.

# Prerequisites

- **Review the [known upgrade issues](../../../../pages-for-subheaders/upgrades.md#known-upgrade-issues)** in the Rancher documentation for the most noteworthy issues to consider when upgrading Rancher. A more complete list of known issues for each Rancher version can be found in the release notes on [GitHub](https://github.com/rancher/rancher/releases) and on the [Rancher forums.](https://forums.rancher.com/c/announcements/12) Note that upgrades to or from any chart in the [rancher-alpha repository](../../../../reference-guides/installation-references/helm-chart-options.md#helm-chart-repositories) aren't supported.
- **For [air gap installs only,](../../../../pages-for-subheaders/air-gapped-helm-cli-install.md) collect and populate images for the new Rancher server version.** Follow the guide to [populate your private registry](../air-gapped-helm-cli-install/publish-images.md) with the images for the Rancher version that you want to upgrade to.

# Placeholder Review

During upgrade, you'll enter a series of commands, filling placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<EXAMPLE>`).

Here's an **example** of a command with a placeholder:

```
docker stop <RANCHER_CONTAINER_NAME>
```

In this command, `<RANCHER_CONTAINER_NAME>` is the name of your Rancher container.

# Get Data for Upgrade Commands

To obtain the data to replace the placeholders, run:

```
docker ps
```

Write down or copy this information before starting the upgrade.

<sup>Terminal <code>docker ps</code> Command, Displaying Where to Find <code>&lt;RANCHER_CONTAINER_TAG&gt;</code> and <code>&lt;RANCHER_CONTAINER_NAME&gt;</code></sup>



| Placeholder                | Example           | Description                                               |
| -------------------------- | ----------------- | --------------------------------------------------------- |
| `<RANCHER_CONTAINER_TAG>`  | `v2.1.3`          | The rancher/rancher image you pulled for initial install. |
| `<RANCHER_CONTAINER_NAME>` | `festive_mestorf` | The name of your Rancher container.                       |
| `<RANCHER_VERSION>`        | `v2.1.3`          | The version of Rancher that you're creating a backup for. |
| `<DATE>`                   | `2018-12-19`      | The date that the data container or backup was created.   |

<br/>

You can obtain `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>` by logging into your Rancher server node over a remote connection and running `docker ps` to view the running containers. You can also view stopped containers with `docker ps -a`. Use these commands for help at any time while creating backups.
# Upgrade Outline

During upgrade, you create a copy of the data from your current Rancher container and a backup in case something goes wrong. Then you deploy the new version of Rancher in a new container using your existing data. Follow these steps to upgrade Rancher server:

- [1. Create a copy of the data from your Rancher server container](#1-create-a-copy-of-the-data-from-your-rancher-server-container)
- [2. Create a backup tarball](#2-create-a-backup-tarball)
- [3. Pull the new Docker image](#3-pull-the-new-docker-image)
- [4. Start the new Rancher server container](#4-start-the-new-rancher-server-container)
- [5. Verify the Upgrade](#5-verify-the-upgrade)
- [6. Clean up your old Rancher server container](#6-clean-up-your-old-rancher-server-container)

# 1. Create a copy of the data from your Rancher server container

1. Using a remote Terminal connection, log into the node running your Rancher server.

1. Stop the container currently running Rancher server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your Rancher container.

    ```
    docker stop <RANCHER_CONTAINER_NAME>
    ```

1. <a id="backup"></a>Use the command below, replacing each placeholder, to create a data container from the Rancher container that you just stopped.

    ```
    docker create --volumes-from <RANCHER_CONTAINER_NAME> --name rancher-data rancher/rancher:<RANCHER_CONTAINER_TAG>
    ```
# 2. Create a backup tarball

1. <a id="tarball"></a>From the data container that you just created (`rancher-data`), create a backup tarball (`rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`).

    This tarball will serve as a rollback point if something goes wrong during upgrade. Use the following command, replacing each placeholder.

    ```
    docker run --volumes-from rancher-data -v "$PWD:/backup" --rm busybox tar zcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
    ```

    **Step Result:** When you enter this command, the names of the files being archived scroll by.

1. Enter the `ls` command to confirm that the backup tarball was created. It will have a name similar to `rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`.

    ```
    [rancher@ip-10-0-0-50 ~]$ ls
    rancher-data-backup-v2.1.3-20181219.tar.gz
    ```

1. Move your backup tarball to a safe location external to your Rancher server.
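When moving the tarball off-host, a checksum recorded next to it lets you verify the copy afterwards. A minimal self-contained sketch (the file here is a placeholder, not a real backup):

```shell
# Placeholder file standing in for the backup tarball, for illustration only.
echo 'demo' > /tmp/example-backup.tar.gz

# Record the checksum next to the tarball before copying it...
sha256sum /tmp/example-backup.tar.gz > /tmp/example-backup.tar.gz.sha256

# ...and verify after the copy; prints "/tmp/example-backup.tar.gz: OK" on a match.
sha256sum -c /tmp/example-backup.tar.gz.sha256
```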
# 3. Pull the New Docker Image

Pull the image of the Rancher version that you want to upgrade to.

Placeholder | Description
------------|-------------
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/resources/chart-options/) that you want to upgrade to.

```
docker pull rancher/rancher:<RANCHER_VERSION_TAG>
```

# 4. Start the New Rancher Server Container

Start a new Rancher server container using the data from the `rancher-data` container. Remember to pass in all the environment variables that you had used when you started the original container.

>**Important:** _Do not_ stop the upgrade after initiating it, even if the upgrade process seems longer than expected. Stopping the upgrade may result in database migration errors during future upgrades.

If you used a proxy, see [HTTP Proxy Configuration.](../../../../reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md)

If you configured a custom CA root certificate to access your services, see [Custom CA root certificate.](../../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#custom-ca-certificate)

If you are recording all transactions with the Rancher API, see [API Auditing.](../../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#api-audit-log)

To see the command to use when starting the new Rancher server container, choose from the following options:

- Docker Upgrade
- Docker Upgrade for Air Gap Installs
<Tabs>
<TabItem value="Docker Upgrade">

Select the option that matches how you originally installed Rancher server.

### Option A: Default Self-Signed Certificate

<details id="option-a">
<summary>Click to expand</summary>

If you chose to use the Rancher-generated self-signed certificate, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container.

Placeholder | Description
------------|-------------
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/resources/chart-options/) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
--restart=unless-stopped \
-p 80:80 -p 443:443 \
rancher/rancher:<RANCHER_VERSION_TAG>
```

</details>
### Option B: Bring Your Own Certificate: Self-Signed

<details id="option-b">
<summary>Click to expand</summary>

If you chose to bring your own self-signed certificate, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container, and make sure you have access to the same certificate that you originally installed with.

>**Reminder of the Cert Prerequisite:** The certificate files must be in PEM format. In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates.

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<CA_CERTS.pem>` | The path to the certificate authority's certificate.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/resources/chart-options/) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
--restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
-v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
-v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
rancher/rancher:<RANCHER_VERSION_TAG>
```

</details>
### Option C: Bring Your Own Certificate: Signed by Recognized CA

<details id="option-c">
<summary>Click to expand</summary>

If you chose to use a certificate signed by a recognized CA, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container, and make sure you have access to the same certificates that you originally installed with. Remember to include `--no-cacerts` as an argument to the container to disable the default CA certificate generated by Rancher.

>**Reminder of the Cert Prerequisite:** The certificate files must be in PEM format. In your certificate file, include all intermediate certificates provided by the recognized CA. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](certificate-troubleshooting.md)

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/resources/chart-options/) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
--restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
-v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
rancher/rancher:<RANCHER_VERSION_TAG> \
--no-cacerts
```

</details>
### Option D: Let's Encrypt Certificate

<details id="option-d">
<summary>Click to expand</summary>

>**Remember:** Let's Encrypt enforces rate limits for requesting new certificates. Therefore, limit how often you create or destroy the container. For more information, see the [Let's Encrypt documentation on rate limits](https://letsencrypt.org/docs/rate-limits/).

If you chose to use [Let's Encrypt](https://letsencrypt.org/) certificates, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container, and provide the domain that you used when you originally installed Rancher.

>**Reminder of the Cert Prerequisites:**
>
>- Create a record in your DNS that binds your Linux host IP address to the hostname that you want to use for Rancher access (`rancher.mydomain.com` for example).
>- Open port `TCP/80` on your Linux host. The Let's Encrypt http-01 challenge can come from any source IP address, so port `TCP/80` must be open to all IP addresses.

Placeholder | Description
------------|-------------
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/resources/chart-options/) that you want to upgrade to.
`<YOUR.DNS.NAME>` | The domain address that you originally started with.

```
docker run -d --volumes-from rancher-data \
--restart=unless-stopped \
-p 80:80 -p 443:443 \
rancher/rancher:<RANCHER_VERSION_TAG> \
--acme-domain <YOUR.DNS.NAME>
```

</details>

</TabItem>
<TabItem value="Docker Air Gap Upgrade">

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.

> For Rancher versions from v2.2.0 to v2.2.x, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher before v2.3.0.](../../resources/local-system-charts.md)

When starting the new Rancher server container, choose from the following options:

### Option A: Default Self-Signed Certificate

<details id="option-a">
<summary>Click to expand</summary>

If you chose to use the Rancher-generated self-signed certificate, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container.

Placeholder | Description
------------|-------------
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/resources/chart-options/) that you want to upgrade to.

In the command below, `CATTLE_SYSTEM_DEFAULT_REGISTRY` sets a default private registry to be used in Rancher, and `CATTLE_SYSTEM_CATALOG=bundled` (available as of v2.3.0) tells Rancher to use the packaged system charts.

```
docker run -d --volumes-from rancher-data \
--restart=unless-stopped \
-p 80:80 -p 443:443 \
-e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
-e CATTLE_SYSTEM_CATALOG=bundled \
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

</details>
### Option B: Bring Your Own Certificate: Self-Signed

<details id="option-b">
<summary>Click to expand</summary>

If you chose to bring your own self-signed certificate, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container, and make sure you have access to the same certificate that you originally installed with.

>**Reminder of the Prerequisite:** The certificate files must be in PEM format. In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](certificate-troubleshooting.md)

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<CA_CERTS.pem>` | The path to the certificate authority's certificate.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/resources/chart-options/) that you want to upgrade to.

In the command below, `CATTLE_SYSTEM_DEFAULT_REGISTRY` sets a default private registry to be used in Rancher, and `CATTLE_SYSTEM_CATALOG=bundled` (available as of v2.3.0) tells Rancher to use the packaged system charts.

```
docker run -d --volumes-from rancher-data \
--restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
-v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
-v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
-e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
-e CATTLE_SYSTEM_CATALOG=bundled \
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

</details>
### Option C: Bring Your Own Certificate: Signed by Recognized CA

<details id="option-c">
<summary>Click to expand</summary>

If you chose to use a certificate signed by a recognized CA, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container, and make sure you have access to the same certificates that you originally installed with.

>**Reminder of the Prerequisite:** The certificate files must be in PEM format. In your certificate file, include all intermediate certificates provided by the recognized CA. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](certificate-troubleshooting.md)

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](installation/resources/chart-options/) that you want to upgrade to.

> **Note:** Pass `--no-cacerts` as an argument to the container to disable the default CA certificate generated by Rancher.

In the command below, `CATTLE_SYSTEM_DEFAULT_REGISTRY` sets a default private registry to be used in Rancher, and `CATTLE_SYSTEM_CATALOG=bundled` (available as of v2.3.0) tells Rancher to use the packaged system charts.

```
docker run -d --volumes-from rancher-data \
--restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
-v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
-e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
-e CATTLE_SYSTEM_CATALOG=bundled \
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG> \
--no-cacerts
```

</details>

</TabItem>
</Tabs>
**Result:** You have upgraded Rancher. Data from your upgraded server is now saved to the `rancher-data` container for use in future upgrades.

# 5. Verify the Upgrade

Log into Rancher. Confirm that the upgrade succeeded by checking the version displayed in the bottom-left corner of the browser window.

>**Having network issues in your user clusters following upgrade?**
>
> See [Restoring Cluster Networking](../../install-upgrade-on-a-kubernetes-cluster/upgrades/namespace-migration.md#restoring-cluster-networking).

# 6. Clean up Your Old Rancher Server Container

Remove the previous Rancher server container. If you only stop the previous Rancher server container (and don't remove it), the container may restart after the next server reboot.

# Rolling Back

If your upgrade does not complete successfully, you can roll Rancher server and its data back to its last healthy state. For more information, see [Docker Rollback](upgrades/rollbacks/single-node-rollbacks/).
---
title: Adding TLS Secrets
weight: 2
---

Kubernetes will create all the objects and services for Rancher, but Rancher will not become available until you populate the `tls-rancher-ingress` secret in the `cattle-system` namespace with the certificate and key.

Combine the server certificate followed by any intermediate certificate(s) into a file named `tls.crt`, and copy your certificate key into a file named `tls.key`.

For example, [acme.sh](https://acme.sh) provides the server certificate and CA chain in a `fullchain.cer` file. Rename `fullchain.cer` to `tls.crt`, and rename the certificate key file to `tls.key`.

Use `kubectl` with the `tls` secret type to create the secret.

```
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```
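Before creating the secret, it is worth confirming that the certificate and key actually belong together: their RSA modulus digests must match. The sketch below generates a throwaway self-signed pair to demonstrate the check; it assumes `openssl` is available, and the `/tmp` paths and CN are illustrative stand-ins for your real `tls.crt` and `tls.key`.

```shell
# Throwaway self-signed pair, for illustration only; use your real files in practice.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=rancher.example.com' \
  -keyout /tmp/tls.key -out /tmp/tls.crt -days 1 2>/dev/null

# A certificate and key match when these two digests are identical.
openssl x509 -noout -modulus -in /tmp/tls.crt | openssl md5
openssl rsa -noout -modulus -in /tmp/tls.key | openssl md5
```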
> **Note:** If you want to replace the certificate, you can delete the `tls-rancher-ingress` secret using `kubectl -n cattle-system delete secret tls-rancher-ingress` and add a new one using the command shown above. If you are using a private CA signed certificate, replacing the certificate is only possible if the new certificate is signed by the same CA as the certificate currently in use.

# Using a Private CA Signed Certificate

If you are using a private CA, Rancher requires a copy of the CA certificate, which the Rancher Agent uses to validate the connection to the server.

Copy the CA certificate into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.

```
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem=./cacerts.pem
```

> **Note:** The configured `tls-ca` secret is retrieved when Rancher starts. On a running Rancher installation, the updated CA takes effect after new Rancher pods are started.

# Updating a Private CA Certificate

Follow the steps on [this page](update-rancher-certificate.md) to update the SSL certificate of the ingress in a Rancher [high availability Kubernetes installation](../../../pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster.md) or to switch from the default self-signed certificate to a custom certificate.
|
||||
---
|
||||
title: Choosing a Rancher Version
|
||||
weight: 1
|
||||
aliases:
|
||||
- /rancher/v2.0-v2.4/en/installation/options/server-tags
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
This section describes how to choose a Rancher version.
|
||||
|
||||
For a high-availability installation of Rancher, which is recommended for production, the Rancher server is installed using a **Helm chart** on a Kubernetes cluster. Refer to the [Helm version requirements](installation/options/helm-version) to choose a version of Helm to install Rancher.
|
||||
|
||||
For Docker installations of Rancher, which is used for development and testing, you will install Rancher as a **Docker image.**
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="Helm Charts">
|
||||
|
||||
When installing, upgrading, or rolling back Rancher Server when it is [installed on a Kubernetes cluster](../../../pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster.md), Rancher server is installed using a Helm chart on a Kubernetes cluster. Therefore, as you prepare to install or upgrade a high availability Rancher configuration, you must add a Helm chart repository that contains the charts for installing Rancher.
|
||||
|
||||
Refer to the [Helm version requirements](installation/options/helm-version) to choose a version of Helm to install Rancher.
|
||||
|
||||
### Helm Chart Repositories
|
||||
|
||||
Rancher provides several different Helm chart repositories to choose from. We align our latest and stable Helm chart repositories with the Docker tags that are used for a Docker installation. Therefore, the `rancher-latest` repository will contain charts for all the Rancher versions that have been tagged as `rancher/rancher:latest`. When a Rancher version has been promoted to the `rancher/rancher:stable`, it will get added to the `rancher-stable` repository.
|
||||
|
||||
| Type | Command to Add the Repo | Description of the Repo |
| -------------- | ------------ | ----------------- |
| rancher-latest | `helm repo add rancher-latest https://releases.rancher.com/server-charts/latest` | Adds a repository of Helm charts for the latest versions of Rancher. We recommend using this repo for testing out new Rancher builds. |
| rancher-stable | `helm repo add rancher-stable https://releases.rancher.com/server-charts/stable` | Adds a repository of Helm charts for older, stable versions of Rancher. We recommend using this repo for production environments. |
| rancher-alpha | `helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha` | Adds a repository of Helm charts for alpha versions of Rancher for previewing upcoming releases. These releases are discouraged in production environments. Upgrades _to_ or _from_ charts in the rancher-alpha repository to any other chart, regardless of repository, are not supported. |

<br/>
Instructions on when to select these repos are available below in [Switching to a Different Helm Chart Repository](#switching-to-a-different-helm-chart-repository).
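
All three repositories share the same base URL, so the add command only varies by channel. A minimal sketch (the channel name is the only variable; nothing here contacts a registry):

```shell
# Pick one of the channels from the table above: latest, stable, or alpha.
CHANNEL="stable"

# All Rancher chart repos live under the same base URL, so the add
# command can be derived from the channel name alone.
REPO_URL="https://releases.rancher.com/server-charts/${CHANNEL}"
echo "helm repo add rancher-${CHANNEL} ${REPO_URL}"
```
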

> **Note:** The `rancher-latest` and `rancher-stable` Helm chart repositories were introduced after Rancher v2.1.0, so the `rancher-stable` repository contains some Rancher versions that were never marked as `rancher/rancher:stable`. The versions of Rancher tagged as `rancher/rancher:stable` before v2.1.0 were v2.0.4, v2.0.6, and v2.0.8. After v2.1.0, all charts in the `rancher-stable` repository correspond to Rancher versions tagged as `stable`.

### Helm Chart Versions

Rancher Helm chart versions match the Rancher version (i.e., `appVersion`). Once you've added the repo, you can search it to show the available versions with the following command:<br/>
`helm search repo --versions`

If you have several repos, you can specify the repo name, e.g., `helm search repo rancher-stable/rancher --versions`<br/>
For more information, see the [Helm documentation](https://helm.sh/docs/helm/helm_search_repo/).

To fetch a specific version from your chosen repo, specify the `--version` parameter, as in the following example:<br/>
`helm fetch rancher-stable/rancher --version=2.4.8`

For the Rancher v2.1.x versions, there were some Helm charts where the version was a build number, i.e., `yyyy.mm.<build-number>`. These charts have been replaced with the equivalent Rancher version and are no longer available.

### Switching to a Different Helm Chart Repository

After installing Rancher, if you want to change which Helm chart repository Rancher is installed from, follow these steps.

> **Note:** Because the rancher-alpha repository contains only alpha charts, switching between the rancher-alpha repository and the rancher-stable or rancher-latest repository for upgrades is not supported.

{{< release-channel >}}

1. List the current Helm chart repositories.

    ```plain
    helm repo list

    NAME                  URL
    stable                https://charts.helm.sh/stable
    rancher-<CHART_REPO>  https://releases.rancher.com/server-charts/<CHART_REPO>
    ```

2. Remove the existing Helm chart repository that contains your charts to install Rancher, which will be either `rancher-stable` or `rancher-latest`, depending on what you initially added.

    ```plain
    helm repo remove rancher-<CHART_REPO>
    ```

3. Add the Helm chart repository that you want to install Rancher from.

    ```plain
    helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
    ```

4. Continue to follow the steps to [upgrade Rancher](installation/upgrades-rollbacks/upgrades/ha) from the new Helm chart repository.

</TabItem>
<TabItem value="Docker Images">

When performing [Docker installs](installation/single-node), upgrades, or rollbacks, you can use _tags_ to install a specific version of Rancher.

### Server Tags

Rancher Server is distributed as a Docker image, which has tags attached to it. You can specify this tag when entering the command to deploy Rancher. Remember that if you use a tag without an explicit version (like `latest` or `stable`), you must explicitly pull a new version of that image tag. Otherwise, any image cached on the host will be used.

| Tag | Description |
| -------------------------- | ------ |
| `rancher/rancher:latest` | Our latest development release. These builds are validated through our CI automation framework. These releases are not recommended for production environments. |
| `rancher/rancher:stable` | Our newest stable release. This tag is recommended for production. |
| `rancher/rancher:<v2.X.X>` | You can install specific versions of Rancher by using the tag from a previous release. See what's available at [Docker Hub](https://hub.docker.com/r/rancher/rancher/tags). |

> **Notes:**
>
> - The `master` tag, or any tag with an `-rc` or other suffix, is meant for the Rancher testing team to validate. Do not use these tags, as these builds are not officially supported.
> - Want to install an alpha release for preview? Install using one of the alpha tags listed on our [announcements page](https://forums.rancher.com/c/announcements) (e.g., `v2.2.0-alpha1`). Caveat: alpha releases cannot be upgraded to or from any other release.

</TabItem>
</Tabs>

---
title: About Custom CA Root Certificates
weight: 1
aliases:
- /rancher/v2.0-v2.4/en/installation/options/custom-ca-root-certificate/
- /rancher/v2.0-v2.4/en/installation/resources/choosing-version/encryption/custom-ca-root-certificate
---

If you're using Rancher in an internal production environment where you aren't exposing apps publicly, use a certificate from a private certificate authority (CA).

Services that Rancher needs to access are sometimes configured with a certificate from a custom/internal CA root, also known as a self-signed certificate. If the certificate presented by the service cannot be validated by Rancher, the following error displays: `x509: certificate signed by unknown authority`.

To validate the certificate, the CA root certificates need to be added to Rancher. Because Rancher is written in Go, the environment variable `SSL_CERT_DIR` can be used to point to the directory where the CA root certificates are located in the container. The CA root certificates directory can be mounted using the Docker volume option (`-v host-source-directory:container-destination-directory`) when starting the Rancher container.

Examples of services that Rancher can access:

- Catalogs
- Authentication providers
- Hosting/cloud APIs when using node drivers
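
Following the mount convention above, a minimal sketch of staging the certificates is shown below. All paths here are placeholders (not documented defaults), and the final `docker run` command is printed rather than executed so the sketch stays side-effect free:

```shell
# Stage a host directory of CA root certificates to mount into the container.
# Paths are placeholders; in practice cacerts.pem holds your real CA root certificate(s).
mkdir -p ./ca-certs
printf '%s\n' '-----BEGIN CERTIFICATE-----' '...' '-----END CERTIFICATE-----' > ./ca-certs/cacerts.pem

# The directory is mounted with -v and exposed to Go's TLS stack via SSL_CERT_DIR.
echo "docker run -d -v $(pwd)/ca-certs:/container/certs -e SSL_CERT_DIR=/container/certs rancher/rancher:latest"
```
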

## Installing with the Custom CA Certificate

For details on starting a Rancher container with your private CA certificates mounted, refer to the installation docs:

- [Docker install Custom CA certificate options](../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#custom-ca-certificate)

- [Kubernetes install options for Additional Trusted CAs](../../../reference-guides/installation-references/helm-chart-options.md#additional-trusted-cas)

---
title: Helm Version Requirements
weight: 3
aliases:
- /rancher/v2.0-v2.4/en/installation/options/helm-version
- /rancher/v2.0-v2.4/en/installation/options/helm2
- /rancher/v2.0-v2.4/en/installation/options/helm2/helm-init
- /rancher/v2.0-v2.4/en/installation/options/helm2/helm-rancher
---

This section contains the requirements for Helm, which is the tool used to install Rancher on a high-availability Kubernetes cluster.

> The installation instructions have been updated for Helm 3. For migrating installs started with Helm 2, refer to the official [Helm 2 to 3 migration docs.](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) [This section](installation/options/helm2) provides a copy of the older high-availability Rancher installation instructions that used Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible.

- Helm v2.16.0 or higher is required for Kubernetes v1.16. For the default Kubernetes version, refer to the [release notes](https://github.com/rancher/rke/releases) for the version of RKE that you are using.
- Helm v2.15.0 should not be used, because of an issue with converting/comparing numbers.
- Helm v2.12.0 should not be used, because of an issue with `cert-manager`.

---
title: Setting up Local System Charts for Air Gapped Installations
weight: 120
aliases:
- /rancher/v2.0-v2.4/en/installation/air-gap-single-node/config-rancher-system-charts/_index.md
- /rancher/v2.0-v2.4/en/installation/air-gap-high-availability/config-rancher-system-charts/_index.md
- /rancher/v2.0-v2.4/en/installation/options/local-system-charts
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting, and global DNS.

In an air gapped installation of Rancher, you will need to configure Rancher to use a local copy of the system charts. This section describes how to use local system charts with a CLI flag in Rancher v2.3.0 and later, and with a Git mirror for Rancher versions before v2.3.0.

# Using Local System Charts in Rancher v2.3.0

In Rancher v2.3.0, a local copy of `system-charts` has been packaged into the `rancher/rancher` container. To use these features in an air gap install, you will need to run the Rancher install command with an extra environment variable, `CATTLE_SYSTEM_CATALOG=bundled`, which tells Rancher to use the local copy of the charts instead of attempting to fetch them from GitHub.
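
The shape of such a command can be sketched as follows. The registry host and version tag are placeholders, and the command is printed rather than executed so the sketch stays side-effect free; the linked install instructions contain the authoritative commands:

```shell
# Hypothetical private registry; replace with your own.
REGISTRY="registry.example.com:5000"

# CATTLE_SYSTEM_CATALOG=bundled tells Rancher to use the charts
# packaged inside the image instead of fetching them from GitHub.
echo "docker run -d --restart=unless-stopped -p 80:80 -p 443:443 \
  -e CATTLE_SYSTEM_CATALOG=bundled \
  ${REGISTRY}/rancher/rancher:v2.3.0"
```
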

Example commands for a Rancher installation with a bundled `system-charts` are included in the [air gap Docker installation](installation/air-gap-single-node/install-rancher) instructions and the [air gap Kubernetes installation](installation/air-gap-high-availability/install-rancher/) instructions.

# Setting Up System Charts for Rancher Before v2.3.0

### A. Prepare System Charts

The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting, and global DNS. To use these features in an air gap install, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach and configure Rancher to use that repository.

Refer to the release notes in the `system-charts` repository to see which branch corresponds to your version of Rancher.

### B. Configure System Charts

Rancher needs to be configured to use your Git mirror of the `system-charts` repository. You can configure the system charts repository either from the Rancher UI or from Rancher's API view.

<Tabs>
<TabItem value="Rancher UI">

On the catalog management page in the Rancher UI, follow these steps:

1. Go to the **Global** view.

1. Click **Tools > Catalogs.**

1. The system chart is displayed under the name `system-library`. To edit the configuration of the system chart, click **⋮ > Edit.**

1. In the **Catalog URL** field, enter the location of the Git mirror of the `system-charts` repository.

1. Click **Save.**

**Result:** Rancher is configured to download all the required catalog items from your `system-charts` repository.

</TabItem>
<TabItem value="Rancher API">

1. Log into Rancher.

1. Open `https://<your-rancher-server>/v3/catalogs/system-library` in your browser.

    

1. Click **Edit** in the upper right corner and update the value for **url** to the location of the Git mirror of the `system-charts` repository.

    

1. Click **Show Request.**

1. Click **Send Request.**

**Result:** Rancher is configured to download all the required catalog items from your `system-charts` repository.

</TabItem>
</Tabs>

---
title: Updating the Rancher Certificate
weight: 10
---

# Updating a Private CA Certificate

Follow these steps to update the SSL certificate of the ingress in a Rancher [high availability Kubernetes installation](../../../pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster.md) or to switch from the default self-signed certificate to a custom certificate.

A summary of the steps is as follows:

1. Create or update the `tls-rancher-ingress` Kubernetes secret resource with the new certificate and private key.
2. Create or update the `tls-ca` Kubernetes secret resource with the root CA certificate (only required when using a private CA).
3. Update the Rancher installation using the Helm CLI.
4. Reconfigure the Rancher agents to trust the new CA certificate.

The details of these instructions are below.

## 1. Create/update the certificate secret resource

First, concatenate the server certificate followed by any intermediate certificate(s) into a file named `tls.crt` and provide the corresponding certificate key in a file named `tls.key`.
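
The order matters: the server (leaf) certificate must come first, followed by any intermediates. A minimal sketch with placeholder PEM files standing in for real certificates:

```shell
# Placeholder PEM contents; replace with your real certificates.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'server-leaf' '-----END CERTIFICATE-----' > server.crt
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'intermediate' '-----END CERTIFICATE-----' > intermediate.crt

# Leaf certificate first, then intermediate(s).
cat server.crt intermediate.crt > tls.crt
```
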

If you are switching the install from using the Rancher self-signed certificate or Let's Encrypt issued certificates, use the following command to create the `tls-rancher-ingress` secret resource in your Rancher HA cluster:

```
$ kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```

Alternatively, to update an existing certificate secret:

```
$ kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key \
  --dry-run --save-config -o yaml | kubectl apply -f -
```

## 2. Create/update the CA certificate secret resource

If the new certificate was signed by a private CA, you will need to copy the corresponding root CA certificate into a file named `cacerts.pem` and create or update the `tls-ca` secret in the `cattle-system` namespace. If the certificate was signed by an intermediate CA, then `cacerts.pem` must contain both the intermediate and root CA certificates (in this order).
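
The chain file can be assembled the same way as `tls.crt`, intermediate first and root last. A sketch with placeholder files:

```shell
# Placeholder PEM contents; replace with your real CA certificates.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'intermediate-ca' '-----END CERTIFICATE-----' > intermediate-ca.pem
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'root-ca' '-----END CERTIFICATE-----' > root-ca.pem

# Intermediate CA first, then root CA, as required above.
cat intermediate-ca.pem root-ca.pem > cacerts.pem
```
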

To create the initial secret:

```
$ kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem
```

To update an existing `tls-ca` secret:

```
$ kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem \
  --dry-run --save-config -o yaml | kubectl apply -f -
```

## 3. Reconfigure the Rancher deployment

> Before proceeding, [generate an API token in the Rancher UI](../../../reference-guides/user-settings/api-keys.md#creating-an-api-key) (<b>User > API & Keys</b>).

This step is required if Rancher was initially installed with self-signed certificates (`ingress.tls.source=rancher`) or with a Let's Encrypt issued certificate (`ingress.tls.source=letsEncrypt`).

It ensures that the Rancher pods and ingress resources are reconfigured to use the new server and optional CA certificate.

To update the Helm deployment, you will need to use the same (`--set`) options that were used during the initial installation. Check them with:

```
$ helm get values rancher -n cattle-system
```

Also get the version string of the currently deployed Rancher chart:

```
$ helm ls -A
```

Upgrade the Helm application instance using the original configuration values, making sure to specify `ingress.tls.source=secret` as well as the current chart version to prevent an application upgrade.

If the certificate was signed by a private CA, add the `--set privateCA=true` argument as well. Also make sure to read the documentation describing the initial installation using custom certificates.

```
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --version <DEPLOYED_CHART_VERSION> \
  --set hostname=rancher.my.org \
  --set ingress.tls.source=secret \
  --set ...
```

When the upgrade is complete, navigate to `https://<Rancher_SERVER>/v3/settings/cacerts` to verify that the value matches the CA certificate written to the `tls-ca` secret earlier.

## 4. Reconfigure Rancher agents to trust the private CA

This section covers three methods to reconfigure Rancher agents to trust the private CA. This step is required if either of the following is true:

- Rancher was initially configured to use the Rancher self-signed certificate (`ingress.tls.source=rancher`) or a Let's Encrypt issued certificate (`ingress.tls.source=letsEncrypt`)
- The root CA certificate for the new custom certificate has changed

### Why is this step required?

When Rancher is configured with a certificate signed by a private CA, the CA certificate chain is downloaded into Rancher agent containers. Agents compare the checksum of the downloaded certificate against the `CATTLE_CA_CHECKSUM` environment variable. This means that, when the private CA certificate is changed on the Rancher server side, the environment variable `CATTLE_CA_CHECKSUM` must be updated accordingly.
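
The checksum the agents compare against is simply the SHA-256 hex digest of the PEM file. A local sketch with a placeholder certificate (in practice the PEM comes from `/v3/settings/cacerts`, as shown in method 2 below):

```shell
# Placeholder CA certificate standing in for the PEM served at /v3/settings/cacerts.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'example' '-----END CERTIFICATE-----' > cacerts.pem

# CATTLE_CA_CHECKSUM is the SHA-256 hex digest of the certificate file.
CATTLE_CA_CHECKSUM="$(sha256sum cacerts.pem | awk '{print $1}')"
echo "$CATTLE_CA_CHECKSUM"
```
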

### Which method should I choose?

Method 1 is the easiest but requires all clusters to be connected to Rancher after the certificates have been rotated. This is usually the case if the process is performed right after updating the Rancher deployment (step 3).

If the clusters have lost their connection to Rancher but you have [Authorized Cluster Endpoints](https://rancher.com/docs/rancher/v2.0-v2.4/en/cluster-admin/cluster-access/ace/) enabled, then go with method 2.

Method 3 can be used as a fallback if methods 1 and 2 are unfeasible.

### Method 1: Kubectl command

For each cluster under Rancher management (including `local`), run the following command using the kubeconfig file of the Rancher management cluster (RKE or K3s):

```
kubectl patch clusters <REPLACE_WITH_CLUSTERID> -p '{"status":{"agentImage":"dummy"}}' --type merge
```

This command will cause all agent Kubernetes resources to be reconfigured with the checksum of the new certificate.

### Method 2: Manually update checksum

Manually patch the agent Kubernetes resources by updating the `CATTLE_CA_CHECKSUM` environment variable to the value matching the checksum of the new CA certificate. Generate the new checksum value like so:

```
$ curl -k -s -fL <RANCHER_SERVER>/v3/settings/cacerts | jq -r .value > cacert.tmp
$ sha256sum cacert.tmp | awk '{print $1}'
```

Using a kubeconfig for each downstream cluster, update the environment variable for the two agent deployments:

```
$ kubectl edit -n cattle-system ds/cattle-node-agent
$ kubectl edit -n cattle-system deployment/cattle-cluster-agent
```

### Method 3: Recreate Rancher agents

With this method, you recreate the Rancher agents by running a set of commands on a controlplane node of each downstream cluster.

First, generate the agent definitions as described in [this gist](https://gist.github.com/superseb/076f20146e012f1d4e289f5bd1bd4971).

Then, connect to a controlplane node of the downstream cluster via SSH, create a kubeconfig, and apply the definitions as described in [this gist](https://gist.github.com/superseb/b14ed3b5535f621ad3d2aa6a4cd6443b).

# Updating from a Private CA Certificate to a Common Certificate

> It is possible to perform the opposite of the procedure shown above: you may change from a private certificate to a common, or non-private, certificate. The steps involved are outlined below.

## 1. Create/update the certificate secret resource

First, concatenate the server certificate followed by any intermediate certificate(s) into a file named `tls.crt` and provide the corresponding certificate key in a file named `tls.key`.

If you are switching the install from using the Rancher self-signed certificate or Let's Encrypt issued certificates, use the following command to create the `tls-rancher-ingress` secret resource in your Rancher HA cluster:

```
$ kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```

Alternatively, to update an existing certificate secret:

```
$ kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key \
  --dry-run --save-config -o yaml | kubectl apply -f -
```

## 2. Delete the CA certificate secret resource

Delete the `tls-ca` secret in the `cattle-system` namespace, as it is no longer needed. You may optionally save a copy of the `tls-ca` secret first, if desired.

To save the existing secret:

```
kubectl -n cattle-system get secret tls-ca -o yaml > tls-ca.yaml
```

To delete the existing `tls-ca` secret:

```
kubectl -n cattle-system delete secret tls-ca
```

## 3. Reconfigure the Rancher deployment

> Before proceeding, [generate an API token in the Rancher UI](https://rancher.com/docs/rancher/v2.6/en/user-settings/api-keys/#creating-an-api-key) (<b>User > API & Keys</b>) and save the bearer token, which you might need in step 4.

This step is required if Rancher was initially installed with self-signed certificates (`ingress.tls.source=rancher`) or with a Let's Encrypt issued certificate (`ingress.tls.source=letsEncrypt`).

It ensures that the Rancher pods and ingress resources are reconfigured to use the new server and optional CA certificate.

To update the Helm deployment, you will need to use the same (`--set`) options that were used during the initial installation. Check them with:

```
$ helm get values rancher -n cattle-system
```

Also get the version string of the currently deployed Rancher chart:

```
$ helm ls -A
```

Upgrade the Helm application instance using the original configuration values, making sure to specify the current chart version to prevent an application upgrade.

Also make sure to read the documentation describing the initial installation using custom certificates.

```
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --version <DEPLOYED_CHART_VERSION> \
  --set hostname=rancher.my.org \
  --set ...
```

On upgrade, you can either:

- remove `--set ingress.tls.source=secret \` from the Helm upgrade command, as shown above, or
- remove the `privateCA` parameter or set it to `false`, because the CA is valid:

  ```
  --set privateCA=false
  ```

## 4. Reconfigure Rancher agents for the non-private/common certificate

The `CATTLE_CA_CHECKSUM` environment variable on the downstream cluster agents should be removed or set to `""` (an empty string).
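
One way to clear the variable is with `kubectl set env`, assuming the agent workloads are named `cattle-node-agent` and `cattle-cluster-agent` as in the private CA section above. The commands are printed rather than executed so the sketch stays side-effect free; the trailing `-` after the variable name removes it from the resource:

```shell
# Printed rather than executed; run the printed commands against each
# downstream cluster. The trailing '-' removes the variable.
for target in ds/cattle-node-agent deployment/cattle-cluster-agent; do
  echo "kubectl -n cattle-system set env ${target} CATTLE_CA_CHECKSUM-"
done
```
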

---
title: Upgrading Cert-Manager with Helm 2
weight: 2040
aliases:
- /rancher/v2.0-v2.4/en/installation/options/upgrading-cert-manager/helm-2-instructions
- /rancher/v2.0-v2.4/en/installation/resources/choosing-version/encryption/upgrading-cert-manager/helm-2-instructions
- /rancher/v2.x/en/installation/resources/upgrading-cert-manager/helm-2-instructions/
---

Rancher uses cert-manager to automatically generate and renew TLS certificates for HA deployments of Rancher. As of fall 2019, three important changes to cert-manager were set to occur that you need to take action on if you have an HA deployment of Rancher:

1. [Let's Encrypt will be blocking cert-manager instances older than 0.8.0 starting November 1st, 2019.](https://community.letsencrypt.org/t/blocking-old-cert-manager-versions/98753)
1. [Cert-manager is deprecating and replacing the certificate.spec.acme.solvers field](https://docs.cert-manager.io/en/latest/tasks/upgrading/upgrading-0.7-0.8.html#upgrading-from-v0-7-to-v0-8). This change has no exact deadline.
1. [Cert-manager is deprecating the `v1alpha1` API and replacing its API group.](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/)

To address these changes, this guide does two things:

1. Documents the procedure for upgrading cert-manager
1. Explains the cert-manager API changes and links to cert-manager's official documentation for migrating your data

> **Important:**
> If you are currently running a cert-manager version older than v0.11 and want to upgrade both Rancher and cert-manager to a newer version, you need to reinstall both of them:
>
> 1. Take a one-time snapshot of your Kubernetes cluster running Rancher server
> 2. Uninstall Rancher, cert-manager, and the CustomResourceDefinition for cert-manager
> 3. Install the newer version of Rancher and cert-manager
>
> The reason is that when Helm upgrades Rancher, it will reject the upgrade and show error messages if the running Rancher app does not match the chart template used to install it. Because cert-manager changed its API group and we cannot modify released charts for Rancher, there will always be a mismatch on cert-manager's API version, and therefore the upgrade will be rejected.
>
> For reinstalling Rancher with Helm, please check [Option B: Reinstalling Rancher Chart](installation/upgrades-rollbacks/upgrades/ha/) under the upgrade Rancher section.

## Upgrade Cert-Manager Only

> **Note:**
> These instructions apply if you have no plan to upgrade Rancher.

The namespace used in these instructions depends on the namespace cert-manager is currently installed in. If it is in `kube-system`, use that in the instructions below. You can verify by running `kubectl get pods --all-namespaces` and checking which namespace the cert-manager-\* pods are listed in. Do not change the namespace cert-manager is running in, as this can cause issues.

In order to upgrade cert-manager, follow these instructions:

<details id="normal">
<summary>Upgrading cert-manager with Internet access</summary>

1. Back up existing resources as a precaution

    ```plain
    kubectl get -o yaml --all-namespaces issuer,clusterissuer,certificates > cert-manager-backup.yaml
    ```

1. Delete the existing deployment

    ```plain
    helm delete --purge cert-manager
    ```

1. Install the CustomResourceDefinition resources separately

    ```plain
    kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
    ```

1. Add the Jetstack Helm repository

    ```plain
    helm repo add jetstack https://charts.jetstack.io
    ```

1. Update your local Helm chart repository cache

    ```plain
    helm repo update
    ```

1. Install the new version of cert-manager

    ```plain
    helm install --version 0.12.0 --name cert-manager --namespace kube-system jetstack/cert-manager
    ```

</details>

<details id="airgap">
<summary>Upgrading cert-manager in an airgapped environment</summary>

### Prerequisites

Before you can perform the upgrade, you must prepare your air gapped environment by adding the necessary container images to your private registry and downloading or rendering the required Kubernetes manifest files.

1. Follow the guide to [Prepare your Private Registry](installation/air-gap-installation/prepare-private-reg/) with the images needed for the upgrade.

1. From a system connected to the internet, add the cert-manager repo to Helm

    ```plain
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    ```

1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager).

    ```plain
    helm fetch jetstack/cert-manager --version v0.12.0
    ```

1. Render the cert-manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.

    ```plain
    helm template ./cert-manager-v0.12.0.tgz --output-dir . \
      --name cert-manager --namespace kube-system \
      --set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller \
      --set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook \
      --set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector
    ```

1. Download the required CRD file for cert-manager

    ```plain
    curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
    ```

### Install cert-manager

1. Back up existing resources as a precaution

    ```plain
    kubectl get -o yaml --all-namespaces issuer,clusterissuer,certificates > cert-manager-backup.yaml
    ```

1. Delete the existing cert-manager installation

    ```plain
    kubectl -n kube-system delete deployment,sa,clusterrole,clusterrolebinding -l 'app=cert-manager' -l 'chart=cert-manager-v0.5.2'
    ```

1. Install the CustomResourceDefinition resources separately

    ```plain
    kubectl apply -f cert-manager/cert-manager-crd.yaml
    ```

1. Install cert-manager

    ```plain
    kubectl -n kube-system apply -R -f ./cert-manager
    ```

</details>

Once you’ve installed cert-manager, you can verify it is deployed correctly by checking the kube-system namespace for running pods:
|
||||
|
||||
```
|
||||
kubectl get pods --namespace kube-system
|
||||
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
cert-manager-7cbdc48784-rpgnt 1/1 Running 0 3m
|
||||
cert-manager-webhook-5b5dd6999-kst4x 1/1 Running 0 3m
|
||||
cert-manager-cainjector-3ba5cd2bcd-de332x 1/1 Running 0 3m
|
||||
```

If the `webhook` pod (second line) is in a `ContainerCreating` state, it may still be waiting for the Secret to be mounted into the pod. Wait a couple of minutes for this to happen, but if you experience problems, check cert-manager's [troubleshooting](https://docs.cert-manager.io/en/latest/getting-started/troubleshooting.html) guide.

> **Note:** The above instructions ask you to add the disable-validation label to the kube-system namespace. Here are additional resources that explain why this is necessary:
>
> - [Information on the disable-validation label](https://docs.cert-manager.io/en/latest/tasks/upgrading/upgrading-0.4-0.5.html?highlight=certmanager.k8s.io%2Fdisable-validation#disabling-resource-validation-on-the-cert-manager-namespace)
> - [Information on webhook validation for certificates](https://docs.cert-manager.io/en/latest/getting-started/webhook.html)

## Cert-Manager API change and data migration

Cert-manager has deprecated the use of the `certificate.spec.acme.solvers` field and will drop support for it completely in an upcoming release.

Per the cert-manager documentation, a new format for configuring ACME certificate resources was introduced in v0.8. Specifically, the challenge solver configuration field was moved. Both the old and new formats are supported as of v0.9, but support for the old format will be dropped in an upcoming release of cert-manager. The cert-manager documentation strongly recommends that after upgrading you update your ACME Issuer and Certificate resources to the new format.

Details about the change and migration instructions can be found in the [cert-manager v0.7 to v0.8 upgrade instructions](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/).

The v0.11 release marks the removal of the v1alpha1 API that was used in previous versions of cert-manager, as well as the API group changing to `cert-manager.io` instead of `certmanager.k8s.io`.

Cert-manager has also removed support for the old configuration format that was deprecated in the v0.8 release. This means you must transition to using the new solvers-style configuration format for your ACME issuers before upgrading to v0.11. For more information, see the [upgrading to v0.8 guide](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/).

Details about the change and migration instructions can be found in the [cert-manager v0.10 to v0.11 upgrade instructions](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/).

For information on upgrading from all other versions of cert-manager, refer to the [official documentation](https://cert-manager.io/docs/installation/upgrading/).
---
title: Upgrading Cert-Manager
weight: 4
aliases:
- /rancher/v2.0-v2.4/en/installation/options/upgrading-cert-manager
- /rancher/v2.0-v2.4/en/installation/options/upgrading-cert-manager/helm-2-instructions
- /rancher/v2.0-v2.4/en/installation/resources/encryption/upgrading-cert-manager
---

Rancher uses cert-manager to automatically generate and renew TLS certificates for HA deployments of Rancher. As of Fall 2019, three important changes to cert-manager are set to occur that you need to take action on if you have an HA deployment of Rancher:

1. [Let's Encrypt will be blocking cert-manager instances older than 0.8.0 starting November 1st 2019.](https://community.letsencrypt.org/t/blocking-old-cert-manager-versions/98753)
1. [Cert-manager is deprecating and replacing the `certificate.spec.acme.solvers` field](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/). This change has no exact deadline.
1. [Cert-manager is deprecating the `v1alpha1` API and replacing its API group](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/).

To address these changes, this guide will do two things:

1. Document the procedure for upgrading cert-manager.
1. Explain the cert-manager API changes and link to cert-manager's official documentation for migrating your data.

> **Important:**
> If you are running a version of cert-manager older than v0.11 and want to upgrade both Rancher and cert-manager to newer versions, you need to reinstall both of them:
>
> 1. Take a one-time snapshot of the Kubernetes cluster running the Rancher server.
> 2. Uninstall Rancher, cert-manager, and the CustomResourceDefinition for cert-manager.
> 3. Install the newer versions of Rancher and cert-manager.
>
> The reason is that when Helm upgrades Rancher, it will reject the upgrade and show error messages if the running Rancher app does not match the chart template used to install it. Because cert-manager changed its API group and the released Rancher charts cannot be modified, there will always be a mismatch in cert-manager's API version, so the upgrade will be rejected.
>
> For reinstalling Rancher with Helm, please check [Option B: Reinstalling Rancher Chart](installation/upgrades-rollbacks/upgrades/ha/) under the upgrade Rancher section.
# Upgrade Cert-Manager

The namespace used in these instructions depends on the namespace cert-manager is currently installed in. If it is in `kube-system`, use that in the instructions below. You can verify by running `kubectl get pods --all-namespaces` and checking which namespace the `cert-manager-*` pods are listed in. Do not change the namespace that cert-manager is running in, as this can cause issues.

> These instructions have been updated for Helm 3. If you are still using Helm 2, refer to [these instructions.](installation/options/upgrading-cert-manager/helm-2-instructions)

In order to upgrade cert-manager, follow these instructions:

### Option A: Upgrade cert-manager with Internet Access

<details id="normal">
<summary>Click to expand</summary>
1. [Back up existing resources](https://cert-manager.io/docs/tutorials/backup/) as a precaution.

    ```plain
    kubectl get -o yaml --all-namespaces \
      issuer,clusterissuer,certificates,certificaterequests > cert-manager-backup.yaml
    ```

    > **Important:**
    > If you are upgrading from a version older than v0.11.0, update the `apiVersion` on all your backed up resources from `certmanager.k8s.io/v1alpha1` to `cert-manager.io/v1alpha2`. If you use any cert-manager annotations on any of your other resources, you will need to update them to reflect the new API group. For details, refer to the documentation on [additional annotation changes.](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/#additional-annotation-changes)
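The `apiVersion` rewrite can be done with a single `sed` pass over the backup file. This is a sketch: the demo file contents below are illustrative stand-ins for a real backup, and the `-i.bak` in-place form (which keeps a `.bak` safety copy) assumes GNU sed:

```shell
# Demo input standing in for a real backup (contents are illustrative).
cat > cert-manager-backup.yaml <<'EOF'
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
EOF

# Rewrite the old API group in place; a .bak safety copy is kept.
sed -i.bak 's#certmanager\.k8s\.io/v1alpha1#cert-manager.io/v1alpha2#g' cert-manager-backup.yaml

head -n 1 cert-manager-backup.yaml
```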

1. [Uninstall existing deployment](https://cert-manager.io/docs/installation/uninstall/kubernetes/#uninstalling-with-helm)

    ```plain
    helm uninstall cert-manager
    ```

    Delete the CustomResourceDefinition resources using the link to the version vX.Y.Z you installed:

    ```plain
    kubectl delete -f https://github.com/jetstack/cert-manager/releases/download/vX.Y.Z/cert-manager.crds.yaml
    ```

1. Install the CustomResourceDefinition resources separately.

    ```plain
    kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/vX.Y.Z/cert-manager.crds.yaml
    ```

    > **Note:**
    > The `--validate=false` flag is needed only if you are running Kubernetes v1.15 or below. Without it, you will receive a validation error relating to the `x-kubernetes-preserve-unknown-fields` field in cert-manager's CustomResourceDefinition resources. This is a benign error and occurs due to the way kubectl performs resource validation.

1. Create the namespace for cert-manager if needed.

    ```plain
    kubectl create namespace cert-manager
    ```

1. Add the Jetstack Helm repository.

    ```plain
    helm repo add jetstack https://charts.jetstack.io
    ```

1. Update your local Helm chart repository cache.

    ```plain
    helm repo update
    ```

1. Install the new version of cert-manager.

    ```plain
    helm install \
      cert-manager jetstack/cert-manager \
      --namespace cert-manager \
      --version v0.12.0
    ```

1. [Restore backed up resources](https://cert-manager.io/docs/tutorials/backup/#restoring-resources).

    ```plain
    kubectl apply -f cert-manager-backup.yaml
    ```

</details>

### Option B: Upgrade cert-manager in an Air Gap Environment

<details id="airgap">
<summary>Click to expand</summary>

### Prerequisites

Before you can perform the upgrade, you must prepare your air gapped environment by adding the necessary container images to your private registry and downloading or rendering the required Kubernetes manifest files.

1. Follow the guide to [Prepare your Private Registry](installation/air-gap-installation/prepare-private-reg/) with the images needed for the upgrade.

1. From a system connected to the internet, add the cert-manager repo to Helm.

    ```plain
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    ```

1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager).

    ```plain
    helm fetch jetstack/cert-manager --version v0.12.0
    ```

1. Render the cert-manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.

    The Helm 3 command is as follows:

    ```plain
    helm template cert-manager ./cert-manager-v0.12.0.tgz --output-dir . \
    --namespace cert-manager \
    --set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller \
    --set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook \
    --set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector
    ```

    The Helm 2 command is as follows:

    ```plain
    helm template ./cert-manager-v0.12.0.tgz --output-dir . \
    --name cert-manager --namespace cert-manager \
    --set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller \
    --set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook \
    --set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector
    ```

1. Download the required CRD files for cert-manager (both the old and new versions).

    ```plain
    curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
    curl -L -o cert-manager/cert-manager-crd-old.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-X.Y/deploy/manifests/00-crds.yaml
    ```

### Install cert-manager

1. Back up existing resources as a precaution.

    ```plain
    kubectl get -o yaml --all-namespaces \
      issuer,clusterissuer,certificates,certificaterequests > cert-manager-backup.yaml
    ```

    > **Important:**
    > If you are upgrading from a version older than v0.11.0, update the `apiVersion` on all your backed up resources from `certmanager.k8s.io/v1alpha1` to `cert-manager.io/v1alpha2`. If you use any cert-manager annotations on any of your other resources, you will need to update them to reflect the new API group. For details, refer to the documentation on [additional annotation changes.](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/#additional-annotation-changes)
1. Delete the existing cert-manager installation.

    ```plain
    kubectl -n cert-manager \
      delete deployment,sa,clusterrole,clusterrolebinding \
      -l 'app=cert-manager' -l 'chart=cert-manager-v0.5.2'
    ```

    Delete the old CustomResourceDefinition resources for the version vX.Y you installed:

    ```plain
    kubectl delete -f cert-manager/cert-manager-crd-old.yaml
    ```

1. Install the CustomResourceDefinition resources separately.

    ```plain
    kubectl apply -f cert-manager/cert-manager-crd.yaml
    ```

    > **Note:**
    > If you are running Kubernetes v1.15 or below, you will need to add the `--validate=false` flag to the `kubectl apply` command above. Otherwise, you will receive a validation error relating to the `x-kubernetes-preserve-unknown-fields` field in cert-manager's CustomResourceDefinition resources. This is a benign error and occurs due to the way kubectl performs resource validation.

1. Create the namespace for cert-manager.

    ```plain
    kubectl create namespace cert-manager
    ```

1. Install cert-manager.

    ```plain
    kubectl -n cert-manager apply -R -f ./cert-manager
    ```

1. [Restore backed up resources](https://cert-manager.io/docs/tutorials/backup/#restoring-resources).

    ```plain
    kubectl apply -f cert-manager-backup.yaml
    ```

</details>

### Verify the Deployment

Once you've installed cert-manager, you can verify it is deployed correctly by checking the cert-manager namespace for running pods:

```
kubectl get pods --namespace cert-manager

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-5c6866597-zw7kh               1/1     Running   0          2m
cert-manager-cainjector-577f6d9fd7-tr77l   1/1     Running   0          2m
cert-manager-webhook-787858fcdb-nlzsq      1/1     Running   0          2m
```

## Cert-Manager API change and data migration

Cert-manager has deprecated the use of the `certificate.spec.acme.solvers` field and will drop support for it completely in an upcoming release.

Per the cert-manager documentation, a new format for configuring ACME certificate resources was introduced in v0.8. Specifically, the challenge solver configuration field was moved. Both the old and new formats are supported as of v0.9, but support for the old format will be dropped in an upcoming release of cert-manager. The cert-manager documentation strongly recommends that after upgrading you update your ACME Issuer and Certificate resources to the new format.

Details about the change and migration instructions can be found in the [cert-manager v0.7 to v0.8 upgrade instructions](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/).

The v0.11 release marks the removal of the v1alpha1 API that was used in previous versions of cert-manager, as well as the API group changing to `cert-manager.io` instead of `certmanager.k8s.io`.

Cert-manager has also removed support for the old configuration format that was deprecated in the v0.8 release. This means you must transition to using the new solvers-style configuration format for your ACME issuers before upgrading to v0.11. For more information, see the [upgrading to v0.8 guide](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/).

Details about the change and migration instructions can be found in the [cert-manager v0.10 to v0.11 upgrade instructions](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/).

For more information about upgrading cert-manager, refer to the [official upgrade documentation](https://cert-manager.io/docs/installation/upgrading/).
---
|
||||
title: Upgrading and Rolling Back Kubernetes
|
||||
weight: 70
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
Following an upgrade to the latest version of Rancher, downstream Kubernetes clusters can be upgraded to use the latest supported version of Kubernetes.
|
||||
|
||||
Rancher calls RKE (Rancher Kubernetes Engine) as a library when provisioning and editing RKE clusters. For more information on configuring the upgrade strategy for RKE clusters, refer to the [RKE documentation](https://rancher.com/docs/rke/latest/en/).
|
||||
|
||||
This section covers the following topics:
|
||||
|
||||
- [New Features](#new-features)
|
||||
- [Tested Kubernetes Versions](#tested-kubernetes-versions)
|
||||
- [How Upgrades Work](#how-upgrades-work)
|
||||
- [Recommended Best Practice for Upgrades](#recommended-best-practice-for-upgrades)
|
||||
- [Upgrading the Kubernetes Version](#upgrading-the-kubernetes-version)
|
||||
- [Rolling Back](#rolling-back)
|
||||
- [Configuring the Upgrade Strategy](#configuring-the-upgrade-strategy)
|
||||
- [Configuring the Maximum Unavailable Worker Nodes in the Rancher UI](#configuring-the-maximum-unavailable-worker-nodes-in-the-rancher-ui)
|
||||
- [Enabling Draining Nodes During Upgrades from the Rancher UI](#enabling-draining-nodes-during-upgrades-from-the-rancher-ui)
|
||||
- [Maintaining Availability for Applications During Upgrades](#maintaining-availability-for-applications-during-upgrades)
|
||||
- [Configuring the Upgrade Strategy in the cluster.yml](#configuring-the-upgrade-strategy-in-the-cluster-yml)
|
||||
- [Troubleshooting](#troubleshooting)
|
||||
|
||||
# New Features

As of Rancher v2.3.0, the Kubernetes metadata feature was added, which allows Rancher to ship Kubernetes patch versions without upgrading Rancher. For details, refer to the [section on Kubernetes metadata.](upgrade-kubernetes-without-upgrading-rancher.md)

As of Rancher v2.4.0,

- The ability to import K3s Kubernetes clusters into Rancher was added, along with the ability to upgrade Kubernetes when editing those clusters. For details, refer to the [section on imported clusters.](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/import-existing-clusters.md)
- New advanced options are exposed in the Rancher UI for configuring the upgrade strategy of an RKE cluster: **Maximum Worker Nodes Unavailable** and **Drain nodes.** These options leverage the new cluster upgrade process of RKE v1.1.0, in which worker nodes are upgraded in batches, so that applications can remain available during cluster upgrades under [certain conditions.](#maintaining-availability-for-applications-during-upgrades)

# Tested Kubernetes Versions

Before a new version of Rancher is released, it's tested with the latest minor versions of Kubernetes to ensure compatibility. For details on which versions of Kubernetes were tested on each Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.4.17/)

# How Upgrades Work

RKE v1.1.0 changed the way that clusters are upgraded.

In this section of the [RKE documentation,](https://rancher.com/docs/rke/latest/en/upgrades/how-upgrades-work) you'll learn what happens when you edit or upgrade your RKE Kubernetes cluster.
# Recommended Best Practice for Upgrades

<Tabs>
<TabItem value="Rancher v2.4+">

When upgrading the Kubernetes version of a cluster, we recommend that you:

1. Take a snapshot.
1. Initiate a Kubernetes upgrade.
1. If the upgrade fails, revert the cluster to the pre-upgrade Kubernetes version. This is achieved by selecting the **Restore etcd and Kubernetes version** option. This returns your cluster to the pre-upgrade Kubernetes version before restoring the etcd snapshot.

The restore operation will work on a cluster that is not in a healthy or active state.

</TabItem>
<TabItem value="Rancher before v2.4">

When upgrading the Kubernetes version of a cluster, we recommend that you:

1. Take a snapshot.
1. Initiate a Kubernetes upgrade.
1. If the upgrade fails, restore the cluster from the etcd snapshot.

The cluster cannot be downgraded to a previous Kubernetes version.

</TabItem>
</Tabs>
# Upgrading the Kubernetes Version

> **Prerequisites:**
>
> - The options below are available only for [Rancher-launched RKE Kubernetes clusters](../../pages-for-subheaders/launch-kubernetes-with-rancher.md) and imported/registered K3s Kubernetes clusters.
> - Before upgrading Kubernetes, [back up your cluster.](../../pages-for-subheaders/backup-restore-and-disaster-recovery.md)

1. From the **Global** view, find the cluster for which you want to upgrade Kubernetes. Select **⋮ > Edit**.
1. Expand **Cluster Options**.
1. From the **Kubernetes Version** drop-down, choose the version of Kubernetes that you want to use for the cluster.
1. Click **Save**.

**Result:** Kubernetes begins upgrading for the cluster.

# Rolling Back

_Available as of v2.4_

A cluster can be restored to a backup in which the previous Kubernetes version was used. For more information, refer to the following sections:

- [Backing up a cluster](../../how-to-guides/advanced-user-guides/manage-clusters/backing-up-etcd.md#how-snapshots-work)
- [Restoring a cluster from backup](../../how-to-guides/advanced-user-guides/manage-clusters/restoring-etcd.md#restoring-a-cluster-from-a-snapshot)
# Configuring the Upgrade Strategy

As of RKE v1.1.0, additional upgrade options became available to give you more granular control over the upgrade process. These options can be used to maintain availability of your applications during a cluster upgrade if certain [conditions and requirements](https://rancher.com/docs/rke/latest/en/upgrades/maintaining-availability) are met.

The upgrade strategy can be configured in the Rancher UI, or by editing the `cluster.yml`. More advanced options are available by editing the `cluster.yml`.

### Configuring the Maximum Unavailable Worker Nodes in the Rancher UI

From the Rancher UI, the maximum number of unavailable worker nodes can be configured. During a cluster upgrade, worker nodes will be upgraded in batches of this size.

By default, the maximum number of unavailable worker nodes is defined as 10 percent of all worker nodes. This number can be configured as a percentage or as an integer. When defined as a percentage, the batch size is rounded down to the nearest node, with a minimum of one node.
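The rounding rule can be sketched with a few lines of shell arithmetic. This is only an illustration of the described behavior, not Rancher code; the node counts and the `batch_size` helper are made up:

```shell
# Batch size for a worker-node upgrade: a percentage of the worker count,
# rounded down (integer division), with a minimum of one node.
batch_size() {
  workers=$1
  percent=$2
  batch=$(( workers * percent / 100 ))
  if [ "$batch" -lt 1 ]; then batch=1; fi
  echo "$batch"
}

batch_size 25 10   # 10% of 25 workers -> batches of 2
batch_size 5 10    # 10% of 5 workers rounds down to 0 -> minimum of 1
```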

To change the default number or percentage of worker nodes:

1. Go to the cluster view in the Rancher UI.
1. Click **⋮ > Edit.**
1. In the **Advanced Options** section, go to the **Maximum Worker Nodes Unavailable** field. Enter the percentage of worker nodes that can be upgraded in a batch. Optionally, select **Count** from the drop-down menu and enter the maximum unavailable worker nodes as an integer.
1. Click **Save.**

**Result:** The cluster is updated to use the new upgrade strategy.

### Enabling Draining Nodes During Upgrades from the Rancher UI

By default, RKE [cordons](https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration) each node before upgrading it. [Draining](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) is disabled during upgrades by default. If draining is enabled in the cluster configuration, RKE will both cordon and drain the node before it is upgraded.

To enable draining each node during a cluster upgrade:

1. Go to the cluster view in the Rancher UI.
1. Click **⋮ > Edit.**
1. In the **Advanced Options** section, go to the **Drain nodes** field and click **Yes.**
1. Choose a safe or aggressive drain option. For more information about each option, refer to [this section.](../../how-to-guides/advanced-user-guides/manage-clusters/nodes-and-node-pools.md#aggressive-and-safe-draining-options)
1. Optionally, configure a grace period. The grace period is the timeout given to each pod for cleaning up, so that pods have a chance to exit gracefully. Pods might need to finish any outstanding requests, roll back transactions, or save state to some external storage. If this value is negative, the default value specified in the pod will be used.
1. Optionally, configure a timeout, which is the amount of time the drain should continue to wait before giving up.
1. Click **Save.**

**Result:** The cluster is updated to use the new upgrade strategy.

> **Note:** As of Rancher v2.4.0, there is a [known issue](https://github.com/rancher/rancher/issues/25478) in which the Rancher UI doesn't show the state of etcd and controlplane nodes as drained, even though they are being drained.
### Maintaining Availability for Applications During Upgrades

_Available as of RKE v1.1.0_

In [this section of the RKE documentation,](https://rancher.com/docs/rke/latest/en/upgrades/maintaining-availability/) you'll learn the requirements to prevent downtime for your applications when upgrading the cluster.

### Configuring the Upgrade Strategy in the cluster.yml

More advanced upgrade strategy configuration options are available by editing the `cluster.yml`.

For details, refer to [Configuring the Upgrade Strategy](https://rancher.com/docs/rke/latest/en/upgrades/configuring-strategy) in the RKE documentation. The section also includes an example `cluster.yml` for configuring the upgrade strategy.
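As a rough sketch of what such a configuration can look like, the fragment below shows an `upgrade_strategy` block in `cluster.yml`. The field names follow the RKE upgrade-strategy options, but treat the exact keys and values as illustrative and confirm the schema in the RKE documentation linked above:

```yaml
# Illustrative upgrade_strategy fragment for cluster.yml (values are examples).
upgrade_strategy:
  max_unavailable_worker: 10%
  max_unavailable_controlplane: 1
  drain: false
  node_drain_input:
    force: false
    ignore_daemonsets: true
    delete_local_data: false
    grace_period: -1   # negative: use each pod's own grace period
    timeout: 60
```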

# Troubleshooting

If a node doesn't come up after an upgrade, the `rke up` command errors out.

No upgrade will proceed if the number of unavailable nodes exceeds the configured maximum.

If an upgrade stops, you may need to fix an unavailable node or remove it from the cluster before the upgrade can continue.

A failed node could be in many different states:

- Powered off
- Unavailable
- A user drained the node while the upgrade was in progress, so there is no kubelet running on the node
- The upgrade itself failed

If the maximum unavailable number of nodes is reached during an upgrade, Rancher user clusters will be stuck in an updating state and will not move forward with upgrading any other control plane nodes. Rancher will continue to evaluate the set of unavailable nodes in case one of the nodes becomes available. If the node cannot be fixed, you must remove it in order to continue the upgrade.
---
title: Upgrading Kubernetes without Upgrading Rancher
weight: 1120
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

_Available as of v2.3.0_

The RKE metadata feature allows you to provision clusters with new versions of Kubernetes as soon as they are released, without upgrading Rancher. This feature is useful for taking advantage of patch versions of Kubernetes, for example, if you want to upgrade to Kubernetes v1.14.7 when your Rancher server originally supported v1.14.6.

> **Note:** The Kubernetes API can change between minor versions. Therefore, we don't support introducing minor Kubernetes versions, such as introducing v1.15 when Rancher currently supports v1.14. You would need to upgrade Rancher to add support for minor Kubernetes versions.

Rancher's Kubernetes metadata contains information specific to the Kubernetes version that Rancher uses to provision [RKE clusters](../../pages-for-subheaders/launch-kubernetes-with-rancher.md). Rancher syncs the data periodically and creates custom resource definitions (CRDs) for **system images,** **service options** and **addon templates.** Consequently, when a new Kubernetes version is compatible with the Rancher server version, the Kubernetes metadata makes the new version available to Rancher for provisioning clusters. The metadata gives you an overview of the information that the [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) (RKE) uses for deploying various Kubernetes versions.

The table below describes the CRDs that are affected by the periodic data sync.

> **Note:** Only administrators can edit metadata CRDs. It is recommended not to update existing objects unless explicitly advised.

| Resource | Description | Rancher API URL |
|----------|-------------|-----------------|
| System Images | List of system images used to deploy Kubernetes through RKE. | `<RANCHER_SERVER_URL>/v3/rkek8ssystemimages` |
| Service Options | Default options passed to Kubernetes components like `kube-api`, `scheduler`, `kubelet`, `kube-proxy`, and `kube-controller-manager`. | `<RANCHER_SERVER_URL>/v3/rkek8sserviceoptions` |
| Addon Templates | YAML definitions used to deploy addon components like Canal, Calico, Flannel, Weave, Kube-dns, CoreDNS, `metrics-server`, and `nginx-ingress`. | `<RANCHER_SERVER_URL>/v3/rkeaddons` |
Administrators might configure the RKE metadata settings to do the following:

- Refresh the Kubernetes metadata, if a new patch version of Kubernetes comes out and they want Rancher to provision clusters with the latest version of Kubernetes without having to upgrade Rancher
- Change the metadata URL that Rancher uses to sync the metadata, which is useful for air gap setups if you need to sync Rancher locally instead of with GitHub
- Prevent Rancher from auto-syncing the metadata, which is one way to prevent new and unsupported Kubernetes versions from being available in Rancher

### Refresh Kubernetes Metadata

The option to refresh the Kubernetes metadata is available for administrators by default, or for any user who has the **Manage Cluster Drivers** [global role.](../../how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md)

To force Rancher to refresh the Kubernetes metadata, a manual refresh action is available under **Tools > Drivers > Refresh Kubernetes Metadata** in the upper right corner.

You can configure Rancher to refresh the metadata only when desired by setting `refresh-interval-minutes` to `0` (see below) and using this button to perform the metadata refresh manually when desired.

### Configuring the Metadata Synchronization

> Only administrators can change these settings.

The RKE metadata config controls how often Rancher syncs metadata and where it downloads data from. You can configure the metadata from the settings in the Rancher UI, or through the Rancher API at the endpoint `v3/settings/rke-metadata-config`.

The way that the metadata is configured depends on the Rancher version.

<Tabs>
<TabItem value="Rancher v2.4+">

To edit the metadata config in Rancher,

1. Go to the **Global** view and click the **Settings** tab.
1. Go to the **rke-metadata-config** section. Click the **⋮** and click **Edit.**
1. You can optionally fill in the following parameters:

   - `refresh-interval-minutes`: The interval, in minutes, at which Rancher syncs the metadata. To disable the periodic refresh, set `refresh-interval-minutes` to `0`.
   - `url`: The HTTP path that Rancher fetches data from. The path must be a direct path to a JSON file. For example, the default URL for Rancher v2.4 is `https://releases.rancher.com/kontainer-driver-metadata/release-v2.4/data.json`.

If you don't have an air gap setup, you don't need to specify the URL where Rancher gets the metadata, because the default setting is to pull from [Rancher's metadata Git repository.](https://github.com/rancher/kontainer-driver-metadata/blob/dev-v2.5/data/data.json)

However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL to point to the new location of the JSON file.
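As a sketch of the API route, the same settings can be updated by sending JSON to the `v3/settings/rke-metadata-config` endpoint. The mirror hostname, `$RANCHER_URL`, and `$API_TOKEN` below are placeholders, and the actual request is left commented out:

```shell
# Build an rke-metadata-config payload; the "value" field is itself a JSON string.
# The mirror URL is a placeholder for an internal mirror of data.json.
cat > rke-metadata-config.json <<'EOF'
{
  "value": "{\"refresh-interval-minutes\":\"0\",\"url\":\"https://mirror.example.com/kontainer-driver-metadata/data.json\"}"
}
EOF
# Hypothetical request (requires a Rancher API token with admin rights):
# curl -sk -u "$API_TOKEN" -X PUT -H 'Content-Type: application/json' \
#      -d @rke-metadata-config.json "$RANCHER_URL/v3/settings/rke-metadata-config"
grep -o 'refresh-interval-minutes' rke-metadata-config.json
```

Setting `refresh-interval-minutes` to `0` in the same request that changes the URL keeps the air-gapped server from logging failed sync attempts.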

</TabItem>
<TabItem value="Rancher v2.3">

To edit the metadata config in Rancher,

1. Go to the **Global** view and click the **Settings** tab.
1. Go to the **rke-metadata-config** section. Click the **⋮** and click **Edit.**
1. You can optionally fill in the following parameters:

   - `refresh-interval-minutes`: The interval, in minutes, at which Rancher syncs the metadata. To disable the periodic refresh, set `refresh-interval-minutes` to `0`.
   - `url`: The HTTP path that Rancher fetches data from.
   - `branch`: The Git branch name, if the URL is a Git URL.

If you don't have an air gap setup, you don't need to specify the URL or Git branch where Rancher gets the metadata, because the default setting is to pull from [Rancher's metadata Git repository.](https://github.com/rancher/kontainer-driver-metadata.git)

However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL and Git branch in the `rke-metadata-config` settings to point to the new location of the repository.
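For the v2.3 Git-based setup, mirroring the repository could look like the sketch below. The internal Git host is a placeholder and the actual clone/push commands are commented out; the local `git init` only stands in for the mirrored repository so the steps can be rehearsed offline:

```shell
# Mirror the metadata repository to an internal host (placeholder destination):
# git clone --mirror https://github.com/rancher/kontainer-driver-metadata.git
# cd kontainer-driver-metadata.git
# git push --mirror https://git.internal.example.com/kontainer-driver-metadata.git

# Local stand-in so the layout can be inspected without network access:
git init -q metadata-mirror
test -d metadata-mirror/.git && echo "mirror placeholder created"
```

The `url` and `branch` settings would then point at the internal host and the branch you mirrored.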

</TabItem>
</Tabs>

### Air Gap Setups

Rancher relies on a periodic refresh of the `rke-metadata-config` to download new Kubernetes version metadata, if it is supported by the current version of the Rancher server. For a table of compatible Kubernetes and Rancher versions, refer to the [service terms section.](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.2.8/)

If you have an air gap setup, you might not be able to get the automatic periodic refresh of the Kubernetes metadata from Rancher's Git repository. In that case, you should disable the periodic refresh to prevent your logs from showing errors. Optionally, you can configure your metadata settings so that Rancher can sync with a local copy of the RKE metadata.

To sync Rancher with a local mirror of the RKE metadata, an administrator would configure the `rke-metadata-config` settings to point to the mirror. For details, refer to [Configuring the Metadata Synchronization.](#configuring-the-metadata-synchronization)
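One way to host the local copy is to place the JSON file on any internal web server reachable by Rancher, then point the `url` setting at it. In this sketch the download target and web root are placeholders, and a stand-in file is created locally so the steps can be followed offline:

```shell
# On a connected host, fetch the upstream metadata (commented; placeholder path):
# curl -sfL -o data.json \
#   https://releases.rancher.com/kontainer-driver-metadata/release-v2.4/data.json
# Then copy data.json to a web server reachable from Rancher, e.g. /var/www/html/.

# Local stand-in file for offline illustration (not real metadata):
mkdir -p mirror
printf '{"placeholder": {}}\n' > mirror/data.json
python3 -m json.tool mirror/data.json   # sanity-check that the file is valid JSON
```

Whatever host serves the file, the `url` setting must resolve to the raw JSON, not an HTML page.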

After new Kubernetes versions are loaded into the Rancher setup, additional steps are required before you can use them to launch clusters. Rancher needs access to updated system images. While the metadata settings can only be changed by administrators, any user can download the Rancher system images and prepare a private Docker registry for them.

1. To download the system images for the private registry, click the Rancher server version at the bottom left corner of the Rancher UI.
1. Download the OS-specific image lists for Linux or Windows.
1. Download `rancher-images.txt`.
1. Prepare the private registry using the same steps as during the [air gap install](other-installation-methods/air-gapped-helm-cli-install/publish-images.md), but instead of using the `rancher-images.txt` from the releases page, use the one obtained in the previous steps.
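The registry preparation in the last step boils down to pulling, re-tagging, and pushing each image in the list. The sketch below is illustrative only: the registry name and the two sample entries are placeholders for the downloaded `rancher-images.txt`, and the Docker commands are echoed rather than executed:

```shell
REGISTRY=registry.example.com:5000   # placeholder private registry

# Two placeholder entries standing in for the downloaded rancher-images.txt:
printf 'rancher/rancher-agent:v2.4.0\nrancher/coreos-etcd:v3.4.3\n' > rancher-images.txt

# Dry-run the mirroring loop; remove the echos to actually run it with Docker.
while read -r image; do
  echo "docker pull $image"
  echo "docker tag $image $REGISTRY/$image"
  echo "docker push $REGISTRY/$image"
done < rancher-images.txt
```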

**Result:** The air gap installation of Rancher can now sync the Kubernetes metadata. If you update your private registry when new versions of Kubernetes are released, you can provision clusters with the new version without having to upgrade Rancher.

---
title: Overview
weight: 1
---

Rancher is a container management platform built for organizations that deploy containers in production. Rancher makes it easy to run Kubernetes everywhere, meet IT requirements, and empower DevOps teams.

# Run Kubernetes Everywhere

Kubernetes has become the container orchestration standard. Most cloud and virtualization vendors now offer it as standard infrastructure. Rancher users have the choice of creating Kubernetes clusters with Rancher Kubernetes Engine (RKE) or cloud Kubernetes services, such as GKE, AKS, and EKS. Rancher users can also import and manage their existing Kubernetes clusters created using any Kubernetes distribution or installer.

# Meet IT Requirements

Rancher supports centralized authentication, access control, and monitoring for all Kubernetes clusters under its control. For example, you can:

- Use your Active Directory credentials to access Kubernetes clusters hosted by cloud vendors, such as GKE.
- Set up and enforce access control and security policies across all users, groups, projects, clusters, and clouds.
- View the health and capacity of your Kubernetes clusters from a single pane of glass.

# Empower DevOps Teams

Rancher provides an intuitive user interface for DevOps engineers to manage their application workloads. The user does not need in-depth knowledge of Kubernetes concepts to start using Rancher. The Rancher catalog contains a set of useful DevOps tools. Rancher is certified with a wide selection of cloud native ecosystem products, including, for example, security tools, monitoring systems, container registries, and storage and networking drivers.

The following figure illustrates the role Rancher plays in IT and DevOps organizations. Each team deploys their applications on the public or private clouds they choose. IT administrators gain visibility and enforce policies across all users, clusters, and clouds.

![Platform](/img/platform.png)

# Features of the Rancher API Server

The Rancher API server is built on top of an embedded Kubernetes API server and an etcd database. It implements the following functionality:

### Authorization and Role-Based Access Control

- **User management:** The Rancher API server [manages user identities](../../pages-for-subheaders/about-authentication.md) that correspond to external authentication providers like Active Directory or GitHub, in addition to local users.
- **Authorization:** The Rancher API server manages [access control](../../pages-for-subheaders/manage-role-based-access-control-rbac.md) and [security](../../how-to-guides/advanced-user-guides/authentication-permissions-and-global-configuration/create-pod-security-policies.md) policies.

### Working with Kubernetes

- **Provisioning Kubernetes clusters:** The Rancher API server can [provision Kubernetes](../../pages-for-subheaders/kubernetes-clusters-in-rancher-setup.md) on existing nodes, or perform [Kubernetes upgrades.](../installation-and-upgrade/upgrade-and-roll-back-kubernetes.md)
- **Catalog management:** Rancher provides the ability to use a [catalog of Helm charts](catalog/) that make it easy to repeatedly deploy applications.
- **Managing projects:** A project is a group of multiple namespaces and access control policies within a cluster. A project is a Rancher concept, not a Kubernetes concept, which allows you to manage multiple namespaces as a group and perform Kubernetes operations in them. The Rancher UI provides features for [project administration](../../pages-for-subheaders/manage-projects.md) and for [managing applications within projects.](../../pages-for-subheaders/kubernetes-resources-setup.md)
- **Pipelines:** Setting up a [pipeline](../../how-to-guides/advanced-user-guides/manage-projects/ci-cd-pipelines.md) can help developers deliver new software as quickly and efficiently as possible. Within Rancher, you can configure pipelines for each of your Rancher projects.
- **Istio:** Our [integration with Istio](../../pages-for-subheaders/istio.md) is designed so that a Rancher operator, such as an administrator or cluster owner, can deliver Istio to developers. Then developers can use Istio to enforce security policies, troubleshoot problems, or manage traffic for green/blue deployments, canary deployments, or A/B testing.

### Working with Cloud Infrastructure

- **Tracking nodes:** The Rancher API server tracks identities of all the [nodes](../../how-to-guides/advanced-user-guides/manage-clusters/nodes-and-node-pools.md) in all clusters.
- **Setting up infrastructure:** When configured to use a cloud provider, Rancher can dynamically provision [new nodes](../../pages-for-subheaders/use-new-nodes-in-an-infra-provider.md) and [persistent storage](../../pages-for-subheaders/create-kubernetes-persistent-storage.md) in the cloud.

### Cluster Visibility

- **Logging:** Rancher can integrate with a variety of popular logging services and tools that exist outside of your Kubernetes clusters.
- **Monitoring:** Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with Prometheus, a leading open-source monitoring solution.
- **Alerting:** To keep your clusters and applications healthy and driving your organizational productivity forward, you need to stay informed of events occurring in your clusters and projects, both planned and unplanned.

# Editing Downstream Clusters with Rancher

The options and settings available for an existing cluster change based on the method that you used to provision it. For example, only clusters [provisioned by RKE](../../pages-for-subheaders/launch-kubernetes-with-rancher.md) have **Cluster Options** available for editing.

After a cluster is created with Rancher, a cluster administrator can manage cluster membership, enable pod security policies, and manage node pools, among [other options.](../../pages-for-subheaders/cluster-configuration.md)

The following table summarizes the options and settings available for each cluster type:

import ClusterCapabilitiesTable from 'shared-files/_cluster-capabilities-table.md';

<ClusterCapabilitiesTable />

---
title: CLI with Rancher
weight: 100
---

Interact with Rancher using command line interface (CLI) tools from your workstation.

## Rancher CLI

Follow the steps in [Rancher CLI](../../pages-for-subheaders/cli-with-rancher.md).

Ensure you can run `rancher kubectl get pods` successfully.

## kubectl

Install the `kubectl` utility. See [install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).

Configure kubectl by visiting your cluster in the Rancher Web UI, clicking on `Kubeconfig`, copying the contents, and putting them into your `~/.kube/config` file.

Run `kubectl cluster-info` or `kubectl get pods` to verify that the configuration works.
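The copy-paste step can also be scripted. The kubeconfig body below is a trimmed placeholder rather than real cluster data; paste the contents copied from the Rancher UI instead. Using the `KUBECONFIG` environment variable is an alternative to editing `~/.kube/config` directly:

```shell
# Save the copied kubeconfig to a file (placeholder skeleton shown here):
cat > rancher-kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters: []
contexts: []
users: []
EOF

# Point kubectl at it without touching ~/.kube/config:
export KUBECONFIG="$PWD/rancher-kubeconfig.yaml"
# kubectl cluster-info   # should succeed once the real contents are pasted in
grep 'kind: Config' rancher-kubeconfig.yaml
```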

## Authentication with kubectl and kubeconfig Tokens with TTL

_**Available as of v2.4.6**_

_Requirements_

If admins have [enforced TTL on kubeconfig tokens](../../reference-guides/about-the-api/api-tokens.md#setting-ttl-on-kubeconfig-tokens), the kubeconfig file requires the [Rancher CLI](cli.md) to be present in your PATH when you run `kubectl`. Otherwise, you'll see an error like:
`Unable to connect to the server: getting credentials: exec: exec: "rancher": executable file not found in $PATH`.
This feature enables kubectl to authenticate with the Rancher server and get a new kubeconfig token when required. The following auth providers are currently supported:

1. Local
2. Active Directory
3. FreeIPA, OpenLDAP
4. SAML providers: Ping, Okta, ADFS, Keycloak, Shibboleth

When you first run kubectl, for example, `kubectl get pods`, it will ask you to pick an auth provider and log in with the Rancher server.
The kubeconfig token is cached in the path where you run kubectl, under `./.cache/token`. This token is valid until [it expires](../../reference-guides/about-the-api/api-tokens.md#setting-ttl-on-kubeconfig-tokens-period) or [gets deleted from the Rancher server.](../../reference-guides/about-the-api/api-tokens.md#deleting-tokens)
Upon expiration, the next `kubectl get pods` will ask you to log in with the Rancher server again.
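The cache location can be inspected or cleared directly. In this sketch the token value is a placeholder written only to show where the file lives; the real file is created by the Rancher credential flow:

```shell
# The CLI caches the kubeconfig token relative to where kubectl is run:
mkdir -p .cache
printf 'token-placeholder' > .cache/token   # placeholder, not a real token

# Removing the cached file forces a fresh login on the next kubectl call:
# rm .cache/token
ls .cache/token
```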

_Note_

As of CLI [v2.4.10](https://github.com/rancher/cli/releases/tag/v2.4.10), the kubeconfig token can be cached at a chosen path with the `cache-dir` flag or the env var `RANCHER_CACHE_DIR`.

_**Current Known Issues**_

1. If the [authorized cluster endpoint](../../pages-for-subheaders/rancher-manager-architecture.md#4-authorized-cluster-endpoint) is enabled for RKE clusters to [authenticate directly with the downstream cluster](../../how-to-guides/advanced-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) and the Rancher server goes down, all kubectl calls will fail after the kubeconfig token expires. No new kubeconfig tokens can be generated if the Rancher server isn't accessible.
2. If a kubeconfig token is deleted from the Rancher [API tokens](../../reference-guides/about-the-api/api-tokens.md#deleting-tokens) page, and the token is still cached, the CLI won't ask you to log in again until the token expires or is deleted.
   `kubectl` calls will result in an error like `error: You must be logged in to the server (the server has asked for the client to provide credentials)`. Tokens can be deleted using `rancher token delete`.

---
title: Rancher AWS Quick Start Guide
description: Read this step by step Rancher AWS guide to quickly deploy a Rancher Server with a single node cluster attached.
weight: 100
---

The following steps will quickly deploy a Rancher Server on AWS with a single node cluster attached.

>**Note:** The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../../pages-for-subheaders/installation-and-upgrade.md).

## Prerequisites

>**Note**
>Deploying to Amazon AWS will incur charges.

- [Amazon AWS Account](https://aws.amazon.com/account/): An Amazon AWS Account is required to create resources for deploying Rancher and Kubernetes.
- [Amazon AWS Access Key](https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html): Use this link to follow a tutorial to create an Amazon AWS Access Key if you don't have one yet.
- [Terraform](https://www.terraform.io/downloads.html): Used to provision the server and cluster in Amazon AWS.

## Getting Started

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

1. Go into the AWS folder containing the terraform files by executing `cd quickstart/aws`.

1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.

1. Edit `terraform.tfvars` and customize the following variables:
   - `aws_access_key` - Amazon AWS Access Key
   - `aws_secret_key` - Amazon AWS Secret Key
   - `rancher_server_admin_password` - Admin password for created Rancher server

1. **Optional:** Modify optional variables within `terraform.tfvars`.
   See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [AWS Quickstart Readme](https://github.com/rancher/quickstart/tree/master/aws) for more information.
   Suggestions include:
   - `aws_region` - Amazon AWS region, choose the closest instead of the default
   - `prefix` - Prefix for all created resources
   - `instance_type` - EC2 instance size used, minimum is `t3a.medium` but `t3a.large` or `t3a.xlarge` could be used if within budget

1. Run `terraform init`.

1. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following:

   ```
   Apply complete! Resources: 16 added, 0 changed, 0 destroyed.

   Outputs:

   rancher_node_ip = xx.xx.xx.xx
   rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
   workload_node_ip = yy.yy.yy.yy
   ```

1. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`).
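For reference, the required values from the `terraform.tfvars` edit might look like the following; every value shown is a placeholder:

```
aws_access_key                = "<AWS_ACCESS_KEY>"
aws_secret_key                = "<AWS_SECRET_KEY>"
rancher_server_admin_password = "<ADMIN_PASSWORD>"
```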

#### Result

Two Kubernetes clusters are deployed into your AWS account, one running Rancher Server and the other ready for experimentation deployments. Note that while this setup is a great way to explore Rancher functionality, a production setup should follow our high availability setup guidelines.

### What's Next?

Use Rancher to create a deployment. For more information, see [Creating Deployments](../../../pages-for-subheaders/deploy-rancher-workloads.md).
## Destroying the Environment

1. From the `quickstart/aws` folder, execute `terraform destroy --auto-approve`.

2. Wait for confirmation that all resources have been destroyed.

---
title: Rancher Azure Quick Start Guide
description: Read this step by step Rancher Azure guide to quickly deploy a Rancher Server with a single node cluster attached.
weight: 100
---

The following steps will quickly deploy a Rancher server on Azure in a single-node RKE Kubernetes cluster, with a single-node downstream Kubernetes cluster attached.

>**Note:** The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../../pages-for-subheaders/installation-and-upgrade.md).

## Prerequisites

>**Note**
>Deploying to Microsoft Azure will incur charges.

- [Microsoft Azure Account](https://azure.microsoft.com/en-us/free/): A Microsoft Azure Account is required to create resources for deploying Rancher and Kubernetes.
- [Microsoft Azure Subscription](https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/create-subscription#create-a-subscription-in-the-azure-portal): Use this link to follow a tutorial to create a Microsoft Azure subscription if you don't have one yet.
- [Microsoft Azure Tenant](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-create-new-tenant): Use this link and follow instructions to create a Microsoft Azure tenant.
- [Microsoft Azure Client ID/Secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal): Use this link and follow instructions to create a Microsoft Azure client and secret.
- [Terraform](https://www.terraform.io/downloads.html): Used to provision the server and cluster in Microsoft Azure.

## Getting Started

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

1. Go into the Azure folder containing the terraform files by executing `cd quickstart/azure`.

1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.

1. Edit `terraform.tfvars` and customize the following variables:
   - `azure_subscription_id` - Microsoft Azure Subscription ID
   - `azure_client_id` - Microsoft Azure Client ID
   - `azure_client_secret` - Microsoft Azure Client Secret
   - `azure_tenant_id` - Microsoft Azure Tenant ID
   - `rancher_server_admin_password` - Admin password for created Rancher server

1. **Optional:** Modify optional variables within `terraform.tfvars`.
   See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [Azure Quickstart Readme](https://github.com/rancher/quickstart/tree/master/azure) for more information.
   Suggestions include:
   - `azure_location` - Microsoft Azure region, choose the closest instead of the default
   - `prefix` - Prefix for all created resources
   - `instance_type` - Compute instance size used, minimum is `Standard_DS2_v2` but `Standard_DS2_v3` or `Standard_DS3_v2` could be used if within budget
   - `ssh_key_file_name` - Use a specific SSH key instead of `~/.ssh/id_rsa` (public key is assumed to be `${ssh_key_file_name}.pub`)

1. Run `terraform init`.

1. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following:

   ```
   Apply complete! Resources: 16 added, 0 changed, 0 destroyed.

   Outputs:

   rancher_node_ip = xx.xx.xx.xx
   rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
   workload_node_ip = yy.yy.yy.yy
   ```

1. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`).

#### Result

Two Kubernetes clusters are deployed into your Azure account, one running Rancher Server and the other ready for experimentation deployments.

### What's Next?

Use Rancher to create a deployment. For more information, see [Creating Deployments](../../../pages-for-subheaders/deploy-rancher-workloads.md).

## Destroying the Environment

1. From the `quickstart/azure` folder, execute `terraform destroy --auto-approve`.

2. Wait for confirmation that all resources have been destroyed.

---
title: Rancher DigitalOcean Quick Start Guide
description: Read this step by step Rancher DigitalOcean guide to quickly deploy a Rancher Server with a single node cluster attached.
weight: 100
---

The following steps will quickly deploy a Rancher Server on DigitalOcean with a single node cluster attached.

>**Note:** The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../../pages-for-subheaders/installation-and-upgrade.md).

## Prerequisites

>**Note**
>Deploying to DigitalOcean will incur charges.

- [DigitalOcean Account](https://www.digitalocean.com): You will need an account on DigitalOcean, as this is where the server and cluster will run.
- [DigitalOcean Access Key](https://www.digitalocean.com/community/tutorials/how-to-create-a-digitalocean-space-and-api-key): Use this link to create a DigitalOcean Access Key if you don't have one.
- [Terraform](https://www.terraform.io/downloads.html): Used to provision the server and cluster to DigitalOcean.

## Getting Started

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

1. Go into the DigitalOcean folder containing the terraform files by executing `cd quickstart/do`.

1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.

1. Edit `terraform.tfvars` and customize the following variables:
   - `do_token` - DigitalOcean access key
   - `rancher_server_admin_password` - Admin password for created Rancher server

1. **Optional:** Modify optional variables within `terraform.tfvars`.
   See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [DO Quickstart Readme](https://github.com/rancher/quickstart/tree/master/do) for more information.
   Suggestions include:
   - `do_region` - DigitalOcean region, choose the closest instead of the default
   - `prefix` - Prefix for all created resources
   - `droplet_size` - Droplet size used, minimum is `s-2vcpu-4gb` but `s-4vcpu-8gb` could be used if within budget
   - `ssh_key_file_name` - Use a specific SSH key instead of `~/.ssh/id_rsa` (public key is assumed to be `${ssh_key_file_name}.pub`)

1. Run `terraform init`.

1. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following:

   ```
   Apply complete! Resources: 15 added, 0 changed, 0 destroyed.

   Outputs:

   rancher_node_ip = xx.xx.xx.xx
   rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
   workload_node_ip = yy.yy.yy.yy
   ```

1. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`).

#### Result

Two Kubernetes clusters are deployed into your DigitalOcean account, one running Rancher Server and the other ready for experimentation deployments.

### What's Next?

Use Rancher to create a deployment. For more information, see [Creating Deployments](../../../pages-for-subheaders/deploy-rancher-workloads.md).

## Destroying the Environment

1. From the `quickstart/do` folder, execute `terraform destroy --auto-approve`.

2. Wait for confirmation that all resources have been destroyed.

---
title: Rancher GCP Quick Start Guide
description: Read this step by step Rancher GCP guide to quickly deploy a Rancher Server with a single node cluster attached.
weight: 100
---

The following steps will quickly deploy a Rancher server on GCP in a single-node RKE Kubernetes cluster, with a single-node downstream Kubernetes cluster attached.

>**Note:** The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../../pages-for-subheaders/installation-and-upgrade.md).

## Prerequisites

>**Note**
>Deploying to Google GCP will incur charges.

- [Google GCP Account](https://console.cloud.google.com/): A Google GCP Account is required to create resources for deploying Rancher and Kubernetes.
- [Google GCP Project](https://cloud.google.com/appengine/docs/standard/nodejs/building-app/creating-project): Use this link to follow a tutorial to create a GCP Project if you don't have one yet.
- [Google GCP Service Account](https://cloud.google.com/iam/docs/creating-managing-service-account-keys): Use this link and follow instructions to create a GCP service account and token file.
- [Terraform](https://www.terraform.io/downloads.html): Used to provision the server and cluster in Google GCP.

## Getting Started

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

1. Go into the GCP folder containing the terraform files by executing `cd quickstart/gcp`.

1. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.

1. Edit `terraform.tfvars` and customize the following variables:
   - `gcp_account_json` - GCP service account file path and file name
   - `rancher_server_admin_password` - Admin password for created Rancher server

1. **Optional:** Modify optional variables within `terraform.tfvars`.
   See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [GCP Quickstart Readme](https://github.com/rancher/quickstart/tree/master/gcp) for more information.
   Suggestions include:
   - `gcp_region` - Google GCP region, choose the closest instead of the default
   - `prefix` - Prefix for all created resources
   - `machine_type` - Compute instance size used, minimum is `n1-standard-1` but `n1-standard-2` or `n1-standard-4` could be used if within budget
   - `ssh_key_file_name` - Use a specific SSH key instead of `~/.ssh/id_rsa` (public key is assumed to be `${ssh_key_file_name}.pub`)

1. Run `terraform init`.

1. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following:

   ```
   Apply complete! Resources: 16 added, 0 changed, 0 destroyed.

   Outputs:

   rancher_node_ip = xx.xx.xx.xx
   rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
   workload_node_ip = yy.yy.yy.yy
   ```

1. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`).

#### Result

Two Kubernetes clusters are deployed into your GCP account, one running Rancher Server and the other ready for experimentation deployments.

### What's Next?

Use Rancher to create a deployment. For more information, see [Creating Deployments](../../../pages-for-subheaders/deploy-rancher-workloads.md).

## Destroying the Environment

1. From the `quickstart/gcp` folder, execute `terraform destroy --auto-approve`.

2. Wait for confirmation that all resources have been destroyed.

---
title: Manual Quick Start
weight: 300
---

This tutorial walks you through:

- Installation of Rancher 2.x
- Creation of your first cluster
- Deployment of an application, Nginx

>**Note:** The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../../pages-for-subheaders/installation-and-upgrade.md).

## Quick Start Outline

This Quick Start Guide is divided into different tasks for easier consumption.

<!-- TOC -->

1. [Provision a Linux Host](#1-provision-a-linux-host)

1. [Install Rancher](#2-install-rancher)

1. [Log In](#3-log-in)

1. [Create the Cluster](#4-create-the-cluster)

<!-- /TOC -->
<br/>
### 1. Provision a Linux Host
|
||||
|
||||
Begin creation of a custom cluster by provisioning a Linux host. Your host can be:
|
||||
|
||||
- A cloud-host virtual machine (VM)
|
||||
- An on-prem VM
|
||||
- A bare-metal server
|
||||
|
||||
>**Note:**
|
||||
> When using a cloud-hosted virtual machine you need to allow inbound TCP communication to ports 80 and 443. Please see your cloud-host's documentation for information regarding port configuration.
|
||||
>
|
||||
> For a full list of port requirements, refer to [Docker Installation](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md).
|
||||
|
||||
Provision the host according to our [Requirements](../../../pages-for-subheaders/installation-requirements.md).
|
||||
|
||||
### 2. Install Rancher
|
||||
|
||||
To install Rancher on your host, connect to it and then use a shell to install.
|
||||
|
||||
1. Log in to your Linux host using your preferred shell, such as PuTTy or a remote Terminal connection.
|
||||
|
||||
2. From your shell, enter the following command:
|
||||
|
||||
```
|
||||
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
|
||||
```
|
||||
|
||||
**Result:** Rancher is installed.
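Rancher can take a minute or two to come up after the container starts. A small polling helper along these lines can be used to wait for it (illustrative only; `wait_for_url` is not part of the guide):

```shell
# Poll a URL until it answers, or give up after N attempts.
wait_for_url() {
  url="$1"; tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -k: accept Rancher's self-signed cert; -f: treat HTTP errors as failure
    if curl -skf -o /dev/null "$url"; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "timed out"
  return 1
}

# Against your host this would be: wait_for_url "https://<SERVER_IP>"
```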
### 3. Log In

Log in to Rancher to begin using the application. After you log in, you'll make some one-time configurations.

1. Open a web browser and enter the IP address of your host: `https://<SERVER_IP>`.

    Replace `<SERVER_IP>` with your host IP address.

2. When prompted, create a password for the default `admin` account.

3. Set the **Rancher Server URL**. The URL can either be an IP address or a host name. However, each node added to your cluster must be able to connect to this URL.<br/><br/>If you use a hostname in the URL, this hostname must be resolvable by DNS on the nodes you want to add to your cluster.
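One way to sanity-check that requirement from a node is an explicit lookup before registering it. An illustrative helper, not part of the guide:

```shell
# Prints "resolvable" or "NOT resolvable" for a given hostname, using the
# same resolver path (/etc/hosts plus DNS) that the node itself will use.
check_resolves() {
  if getent hosts "$1" >/dev/null 2>&1; then
    echo "resolvable"
  else
    echo "NOT resolvable"
  fi
}

# On each node you would run, e.g.: check_resolves rancher.example.com
check_resolves localhost
```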
<br/>

### 4. Create the Cluster

Welcome to Rancher! You are now able to create your first Kubernetes cluster.

In this task, you can use the versatile **Custom** option. This option lets you add _any_ Linux host (cloud-hosted VM, on-prem VM, or bare-metal) to be used in a cluster.

1. From the **Clusters** page, click **Add Cluster**.

2. Choose **Custom**.

3. Enter a **Cluster Name**.

4. Skip **Member Roles** and **Cluster Options**. We'll tell you about them later.

5. Click **Next**.

6. From **Node Role**, select _all_ the roles: **etcd**, **Control**, and **Worker**.

7. **Optional**: Rancher auto-detects the IP addresses used for Rancher communication and cluster communication. You can override these using `Public Address` and `Internal Address` in the **Node Address** section.

8. Skip the **Labels** section. It's not important for now.

9. Copy the command displayed on screen to your clipboard.

10. Log in to your Linux host using your preferred shell, such as PuTTY or a remote Terminal connection. Run the command copied to your clipboard.

11. When you finish running the command on your Linux host, click **Done**.

**Result:**

Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster.

You can access your cluster after its state is updated to **Active.**

**Active** clusters are assigned two Projects:

- `Default`, containing the `default` namespace
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces

#### Finished

Congratulations! You have created your first cluster.

#### What's Next?

Use Rancher to create a deployment. For more information, see [Creating Deployments](../../../pages-for-subheaders/deploy-rancher-workloads.md).
+47
@@ -0,0 +1,47 @@

---
title: Vagrant Quick Start
weight: 200
---
The following steps quickly deploy a Rancher Server with a single node cluster attached.

>**Note:** The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../../pages-for-subheaders/installation-and-upgrade.md).

## Prerequisites

- [Vagrant](https://www.vagrantup.com): Required to provision the machines based on the Vagrantfile.
- [Virtualbox](https://www.virtualbox.org): Vagrant provisions the virtual machines using VirtualBox.
- At least 4GB of free RAM.

### Note
- Vagrant requires plugins to create VirtualBox VMs. Install them with the following commands:

    `vagrant plugin install vagrant-vboxmanage`

    `vagrant plugin install vagrant-vbguest`

## Getting Started

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

2. Go into the folder containing the Vagrantfile by executing `cd quickstart/vagrant`.

3. **Optional:** Edit `config.yaml` to:

    - Change the number of nodes and the memory allocations, if required. (`node.count`, `node.cpus`, `node.memory`)
    - Change the password of the `admin` user for logging into Rancher. (`default_password`)
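For reference, the edited file might look roughly like this. This is a hypothetical sketch: the guide only names the `node.count`, `node.cpus`, `node.memory`, and `default_password` settings, so check the `config.yaml` in your checkout for the exact layout and defaults.

```yaml
# Hypothetical config.yaml layout; the nesting shown here is an assumption.
default_password: mySecurePassword42   # admin login password for Rancher
node:
  count: 2        # number of cluster nodes to create
  cpus: 2         # vCPUs per node
  memory: 2048    # RAM per node, in MB
```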
4. To initiate the creation of the environment, run `vagrant up --provider=virtualbox`.

5. Once provisioning finishes, go to `https://172.22.101.101` in the browser. The default user/password is `admin/admin`.

**Result:** Rancher Server and your Kubernetes cluster are installed on VirtualBox.

### What's Next?

Use Rancher to create a deployment. For more information, see [Creating Deployments](../../../pages-for-subheaders/deploy-rancher-workloads.md).

## Destroying the Environment

1. From the `quickstart/vagrant` folder, execute `vagrant destroy -f`.

2. Wait for the confirmation that all resources have been destroyed.
+156
@@ -0,0 +1,156 @@

---
title: Workload with NodePort Quick Start
weight: 200
---

### Prerequisite

You have a running cluster with at least 1 node.

### 1. Deploying a Workload

You're ready to create your first Kubernetes [workload](https://kubernetes.io/docs/concepts/workloads/). A workload is an object that includes pods along with other files and info needed to deploy your application.

For this workload, you'll be deploying the application Rancher Hello-World.

1. From the **Clusters** page, open the cluster that you just created.

2. From the main menu of the **Dashboard**, select **Projects/Namespaces**.

3. Open the **Project: Default** project.

4. Click **Resources > Workloads.** In versions before v2.3.0, click **Workloads > Workloads.**

5. Click **Deploy**.

    **Step Result:** The **Deploy Workload** page opens.

6. Enter a **Name** for your workload.

7. In the **Docker Image** field, enter `rancher/hello-world`. This field is case-sensitive.

8. From **Port Mapping**, click **Add Port**.

9. From the **As a** drop-down, make sure that **NodePort (On every node)** is selected.

    

10. In the **On Listening Port** field, leave the **Random** value in place.

    

11. In the **Publish the container port** field, enter port `80`.

    

12. Leave the remaining options on their default setting. We'll tell you about them later.

13. Click **Launch**.

**Result:**

* Your workload is deployed. This process might take a few minutes to complete.
* When your workload completes deployment, it's assigned a state of **Active**. You can view this status from the project's **Workloads** page.

<br/>
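If you prefer the command line, the random NodePort that Rancher assigned can also be read from the Kubernetes service object. A sketch that parses sample `kubectl get service -o json` output (the JSON below is a trimmed, made-up example using the port from this guide):

```shell
# Trimmed sample of what `kubectl get service <name> -o json` might return.
cat > svc.json <<'EOF'
{"spec": {"ports": [{"port": 80, "nodePort": 31568, "protocol": "TCP"}]}}
EOF

# Pull out the nodePort value (jq would be cleaner if it's installed).
nodeport=$(sed -n 's/.*"nodePort": \([0-9]*\).*/\1/p' svc.json)
echo "$nodeport"
```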
### 2. Viewing Your Application

From the **Workloads** page, click the link underneath your workload. If your deployment succeeded, your application opens.

### Attention: Cloud-Hosted Sandboxes

When using a cloud-hosted virtual machine, you may not have access to the port running the container. In this event, you can test the workload in an SSH session on the local machine using **Execute Shell**. Use the port number shown after the `:` in the link under your workload, which is `31568` in this example.

```sh
gettingstarted@rancher:~$ curl http://localhost:31568
<!DOCTYPE html>
<html>
<head>
<title>Rancher</title>
<link rel="icon" href="img/favicon.png">
<style>
body {
background-color: white;
text-align: center;
padding: 50px;
font-family: "Open Sans","Helvetica Neue",Helvetica,Arial,sans-serif;
}
button {
background-color: #0075a8;
border: none;
color: white;
padding: 15px 32px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 16px;
}

#logo {
margin-bottom: 40px;
}
</style>
</head>
<body>
<img id="logo" src="img/rancher-logo.svg" alt="Rancher logo" width=400 />
<h1>Hello world!</h1>
<h3>My hostname is hello-world-66b4b9d88b-78bhx</h3>
<div id='Services'>
<h3>k8s services found 2</h3>

<b>INGRESS_D1E1A394F61C108633C4BD37AEDDE757</b> tcp://10.43.203.31:80<br />

<b>KUBERNETES</b> tcp://10.43.0.1:443<br />

</div>
<br />

<div id='rancherLinks' class="row social">
<a class="p-a-xs" href="https://rancher.com/docs"><img src="img/favicon.png" alt="Docs" height="25" width="25"></a>
<a class="p-a-xs" href="https://slack.rancher.io/"><img src="img/icon-slack.svg" alt="slack" height="25" width="25"></a>
<a class="p-a-xs" href="https://github.com/rancher/rancher"><img src="img/icon-github.svg" alt="github" height="25" width="25"></a>
<a class="p-a-xs" href="https://twitter.com/Rancher_Labs"><img src="img/icon-twitter.svg" alt="twitter" height="25" width="25"></a>
<a class="p-a-xs" href="https://www.facebook.com/rancherlabs/"><img src="img/icon-facebook.svg" alt="facebook" height="25" width="25"></a>
<a class="p-a-xs" href="https://www.linkedin.com/groups/6977008/profile"><img src="img/icon-linkedin.svg" height="25" alt="linkedin" width="25"></a>
</div>
<br />
<button class='button' onclick='myFunction()'>Show request details</button>
<div id="reqInfo" style='display:none'>
<h3>Request info</h3>
<b>Host:</b> 172.22.101.111:31411 <br />
<b>Pod:</b> hello-world-66b4b9d88b-78bhx </b><br />

<b>Accept:</b> [*/*]<br />

<b>User-Agent:</b> [curl/7.47.0]<br />

</div>
<br />
<script>
function myFunction() {
var x = document.getElementById("reqInfo");
if (x.style.display === "none") {
x.style.display = "block";
} else {
x.style.display = "none";
}
}
</script>
</body>
</html>
gettingstarted@rancher:~$
```

### Finished

Congratulations! You have successfully deployed a workload exposed via a NodePort.

#### What's Next?

When you're done using your sandbox, destroy the Rancher Server and your cluster. See one of the following:

- [Amazon AWS: Destroying the Environment](../deploy-rancher-manager/aws.md#destroying-the-environment)
- [DigitalOcean: Destroying the Environment](../deploy-rancher-manager/digitalocean.md#destroying-the-environment)
- [Vagrant: Destroying the Environment](../deploy-rancher-manager/vagrant.md#destroying-the-environment)
+82
@@ -0,0 +1,82 @@
---
title: Workload with Ingress Quick Start
weight: 100
---

### Prerequisite

You have a running cluster with at least 1 node.

### 1. Deploying a Workload

You're ready to create your first Kubernetes [workload](https://kubernetes.io/docs/concepts/workloads/). A workload is an object that includes pods along with other files and info needed to deploy your application.

For this workload, you'll be deploying the application Rancher Hello-World.

1. From the **Clusters** page, open the cluster that you just created.

2. From the main menu of the **Dashboard**, select **Projects/Namespaces**.

3. Open the **Project: Default** project.

4. Click **Resources > Workloads.** In versions before v2.3.0, click **Workloads > Workloads.**

5. Click **Deploy**.

    **Step Result:** The **Deploy Workload** page opens.

6. Enter a **Name** for your workload.

7. In the **Docker Image** field, enter `rancher/hello-world`. This field is case-sensitive.

8. Leave the remaining options on their default setting. We'll tell you about them later.

9. Click **Launch**.

**Result:**

* Your workload is deployed. This process might take a few minutes to complete.
* When your workload completes deployment, it's assigned a state of **Active**. You can view this status from the project's **Workloads** page.

<br/>
### 2. Expose The Application Via An Ingress

Now that the application is up and running, it needs to be exposed so that other services can connect.

1. From the **Clusters** page, open the cluster that you just created.

2. From the main menu of the **Dashboard**, select **Projects**.

3. Open the **Default** project.

4. Click **Resources > Workloads > Load Balancing.** In versions before v2.3.0, click the **Workloads** tab, then the **Load Balancing** tab.

5. Click **Add Ingress**.

6. Enter a name, e.g., **hello**.

7. In the **Target** field, drop down the list and choose the name that you set for your workload.

8. Enter `80` in the **Port** field.

9. Leave everything else as default and click **Save**.

**Result:** The application is assigned an `sslip.io` address and exposed. It may take a minute or two to populate.
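Behind the scenes, the form fields above map onto a standard Kubernetes Ingress object. Roughly, as a hedged sketch (names like `hello` and `hello-world` are the example values from this guide; the exact `apiVersion` and generated hostname depend on your cluster):

```yaml
apiVersion: networking.k8s.io/v1beta1   # older clusters use extensions/v1beta1
kind: Ingress
metadata:
  name: hello            # the name entered in step 6
  namespace: default
spec:
  rules:
    - host: hello.default.xxx.xxx.xxx.xxx.sslip.io   # generated sslip.io address
      http:
        paths:
          - backend:
              serviceName: hello-world   # the target workload from step 7
              servicePort: 80            # the port from step 8
```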
### View Your Application

From the **Load Balancing** page, click the target link, which will look something like `hello.default.xxx.xxx.xxx.xxx.sslip.io > hello-world`.

Your application will open in a separate window.

#### Finished

Congratulations! You have successfully deployed a workload exposed via an ingress.

#### What's Next?

When you're done using your sandbox, destroy the Rancher Server and your cluster. See one of the following:

- [Amazon AWS: Destroying the Environment](../deploy-rancher-manager/aws.md#destroying-the-environment)
- [DigitalOcean: Destroying the Environment](../deploy-rancher-manager/digitalocean.md#destroying-the-environment)
- [Vagrant: Destroying the Environment](../deploy-rancher-manager/vagrant.md#destroying-the-environment)