Merge branch 'master' into audit-17
@@ -29,7 +29,7 @@ Windows

License
=======

Copyright (c) 2014-2019 [Rancher Labs, Inc.](https://rancher.com)

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -13,3 +13,7 @@
  display: none;
  visibility: hidden;
}

pre > code {
  padding: 0;
}
@@ -65,7 +65,7 @@ Open Ports / Network Security
---------------------------

The server needs port 6443 to be accessible by the nodes. The nodes need to be able to reach
other nodes over UDP port 8472. The nodes also need to be able to reach the server on UDP port 8472. This is used for flannel VXLAN. If you don't use flannel
and provide your own custom CNI, then 8472 is not needed by k3s. The node should not listen
on any other port. k3s uses reverse tunneling such that the nodes make outbound connections
to the server and all kubelet traffic runs through that tunnel.
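For illustration, opening these ports might look like the following (a minimal sketch, assuming `ufw` is your firewall; adapt the commands to your environment):

```
# On the server node: allow nodes to reach the Kubernetes API server
ufw allow 6443/tcp

# On every node: allow flannel VXLAN traffic from the other nodes and the server
ufw allow 8472/udp
```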
@@ -0,0 +1,22 @@
---
title: Date and time zone
weight: 121
---

The default console keeps time in the Coordinated Universal Time (UTC) zone and synchronizes clocks with the Network Time Protocol (NTP). The Network Time Protocol daemon (ntpd) is an operating system program that keeps the system time in synchronization with time servers using NTP.

RancherOS can run ntpd in a System Docker container. You can update its configuration by updating `/etc/ntp.conf`. For an example of how to update a file such as `/etc/ntp.conf` within a container, refer to [this page.]({{< baseurl >}}/os/v1.x/en/installation/configuration/write-files/#writing-files-in-specific-system-services)
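For example, a minimal cloud-config sketch for this, assuming the `write_files` `container` key targets the `ntp` system service as described on the linked page (the server entries are illustrative):

```
#cloud-config
write_files:
  - container: ntp
    path: /etc/ntp.conf
    permissions: "0644"
    owner: root
    content: |
      # Illustrative NTP servers; replace with your own pool
      server 0.pool.ntp.org iburst
      server 1.pool.ntp.org iburst
```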
The default console cannot support changing the time zone because including `tzdata` (time zone data) would increase the ISO size. However, you can change the time zone in a container by passing a flag to specify the time zone when you run the container:

```
$ docker run -e TZ=Europe/Amsterdam debian:jessie date
Tue Aug 20 09:28:19 CEST 2019
```

You may need to install `tzdata` in some images:

```
$ docker run -e TZ=Asia/Shanghai -e DEBIAN_FRONTEND=noninteractive -it --rm ubuntu /bin/bash -c "apt-get update && apt-get install -yq tzdata && date"
Thu Aug 29 08:13:02 CST 2019
```
@@ -86,7 +86,7 @@ _Available as of v1.4.x_

The docker0 bridge can be configured with docker args; the change takes effect after a reboot.

```
$ ros config set rancher.docker.bip 192.168.0.0/16
```
### Configuring System Docker

@@ -114,13 +114,13 @@ _Available as of v1.4.x_

The docker-sys bridge can be configured with system-docker args; the change takes effect after a reboot.

```
$ ros config set rancher.system_docker.bip 172.19.0.0/16
```

_Available as of v1.4.x_

The default path of system-docker logs is `/var/log/system-docker.log`. If you want to write the system-docker logs to a separate partition,
e.g. the [RANCHER_OEM partition]({{< baseurl >}}/os/v1.x/en/about/custom-partition-layout/#use-rancher-oem-partition), you can try `rancher.defaults.system_docker_logs`:

```
#cloud-config
rancher:
  defaults:
    # The log path below is illustrative; point it at a file on the mounted OEM partition
    system_docker_logs: /usr/share/ros/oem/system-docker.log
```
@@ -7,7 +7,7 @@ As of v1.1.0, RancherOS automatically detects that it is running on VMware ESXi,

As of v1.5.0, RancherOS releases everything required for VMware, including initrd, a standard ISO for VMware, a `vmdk` image, and a specific ISO to be used with Docker Machine. The open-vm-tools package is built into RancherOS, so there is no need to download it.

| Description | Download URL |
|---|---|
| Booting from ISO | https://releases.rancher.com/os/latest/vmware/rancheros.iso |
| For docker-machine | https://releases.rancher.com/os/latest/vmware/rancheros-autoformat.iso |
@@ -165,12 +165,12 @@ Once you have your own Services repository, you can add a new service to its index

To create your own console images, you need to:

1. install some basic tools, including an ssh daemon, sudo, and kernel module tools
2. create `rancher` and `docker` users and groups with UIDs and GIDs of `1100` and `1101` respectively
3. add both users to the `docker` and `sudo` groups
4. add both groups to the `/etc/sudoers` file to allow password-less sudo
5. configure sshd to accept logins from users in the `docker` group, and deny `root`
6. set `ENTRYPOINT ["/usr/bin/ros", "entrypoint"]`

The `ros` binary and other host-specific configuration files will be bind-mounted into the running console container when it is launched.
@@ -41,3 +41,21 @@ For more information how to create and use PSPs, see [Pod Security Policies]({{<

Drivers in Rancher allow you to manage which providers can be used to provision [hosted Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) or [nodes in an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) to allow Rancher to deploy and manage Kubernetes.

For more information, see [Provisioning Drivers]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/).

## Adding Kubernetes Versions into Rancher

_Available as of v2.3.0_

With this feature, you can upgrade to the latest version of Kubernetes as soon as it is released, without upgrading Rancher. This feature allows you to easily upgrade Kubernetes patch versions (i.e. `v1.15.X`), but is not intended for upgrading Kubernetes minor versions (i.e. `v1.X.0`), as Kubernetes tends to deprecate or add APIs between minor versions.

Rancher Kubernetes Metadata contains Kubernetes version information which Rancher uses to provision [RKE clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/).

For more information on how metadata works and how to configure it, see [Rancher Kubernetes Metadata]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rke-metadata/).

## Enabling Experimental Features

_Available as of v2.3.0_

Rancher includes some features that are experimental and disabled by default. Feature flags were introduced to allow you to try these features. For more information, refer to the section about [feature flags.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/feature-flags)
@@ -18,27 +18,28 @@ The Rancher authentication proxy integrates with the following external authenti

| Auth Service | Available as of |
| ------------------------------------------------------------------------------------------------ | ---------------- |
| [Microsoft Active Directory]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/ad/) | v2.0.0 |
| [GitHub]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/github/) | v2.0.0 |
| [Microsoft Azure AD]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/azure-ad/) | v2.0.3 |
| [FreeIPA]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/freeipa/) | v2.0.5 |
| [OpenLDAP]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/openldap/) | v2.0.5 |
| [Microsoft AD FS]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/) | v2.0.7 |
| [PingIdentity]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/ping-federate/) | v2.0.7 |
| [Keycloak]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/keycloak/) | v2.1.0 |
| [Okta]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/okta/) | v2.2.0 |
| [Google OAuth]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/google/) | v2.3.0 |

<br/>
However, Rancher also provides [local authentication]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/local/).

In most cases, you should use an external authentication service over local authentication, as external authentication allows user management from a central location. However, you may want a few local authentication users for managing Rancher under rare circumstances, such as if your external authentication provider is unavailable or undergoing maintenance.

## Users and Groups

Rancher relies on users and groups to determine who is allowed to log in to Rancher and which resources they can access. When authenticating with an external provider, groups are provided from the external provider based on the user. These users and groups are given specific roles to resources like clusters, projects, multi-cluster apps, and global DNS providers and entries. When you give access to a group, all users who are a member of that group in the authentication provider will be able to access the resource with the permissions that you've specified. For more information on roles and permissions, see [Role Based Access Control]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/).

> **Note:** Local authentication does not support creating or managing groups.

For more information, see [Users and Groups]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/user-groups/).

## Scope of Rancher Authorization
@@ -75,22 +76,22 @@ Configuration of external authentication affects how principal users are managed

1. Sign into Rancher as the local principal and complete configuration of external authentication.

    

2. Rancher associates the external principal with the local principal. These two users share the local principal's user ID.

    

3. After you complete configuration, Rancher automatically signs out the local principal.

    

4. Then, Rancher automatically signs you back in as the external principal.

    

5. Because the external principal and the local principal share an ID, no unique object for the external principal displays on the Users page.

    

6. The external principal and the local principal share the same access rights.
@@ -0,0 +1,103 @@
---
title: Configuring Google OAuth
---
_Available as of v2.3.0_

If your organization uses G Suite for user authentication, you can configure Rancher to allow your users to log in using their G Suite credentials.

Only admins of the G Suite domain have access to the Admin SDK. Therefore, only G Suite admins can configure Google OAuth for Rancher.

Within Rancher, only administrators or users with the **Manage Authentication** [global role]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#global-permissions-reference) can configure authentication.
# Prerequisites
- You must have a [G Suite admin account](https://admin.google.com) configured.
- G Suite requires a [top private domain FQDN](https://github.com/google/guava/wiki/InternetDomainNameExplained#public-suffixes-and-private-domains) as an authorized domain. One way to get an FQDN is by creating an A-record in Route53 for your Rancher server. You do not need to update your Rancher Server URL setting with that record, because there could be clusters using that URL.
- You must have the Admin SDK API enabled for your G Suite domain. You can enable it using the steps on [this page.](https://support.google.com/a/answer/60757?hl=en)

After the Admin SDK API is enabled, your G Suite domain's API screen should look like this:

# Setting up G Suite for OAuth with Rancher
Before you can set up Google OAuth in Rancher, you need to log in to your G Suite account and do the following:

1. [Add Rancher as an authorized domain in G Suite](#1-adding-rancher-as-an-authorized-domain)
1. [Generate OAuth2 credentials for the Rancher server](#2-creating-oauth2-credentials-for-the-rancher-server)
1. [Create service account credentials for the Rancher server](#3-creating-service-account-credentials)
1. [Register the service account key as an OAuth Client](#4-register-the-service-account-key-as-an-oauth-client)

### 1. Adding Rancher as an Authorized Domain
1. Click [here](https://console.developers.google.com/apis/credentials) to go to the credentials page of your Google domain.
1. Select your project and click **OAuth consent screen.**

1. Go to **Authorized Domains** and enter the top private domain of your Rancher server URL in the list. The top private domain is the rightmost superdomain. For example, the top private domain of www.foo.co.uk is foo.co.uk. For more information on top-level domains, refer to [this article.](https://github.com/google/guava/wiki/InternetDomainNameExplained#public-suffixes-and-private-domains)
1. Go to **Scopes for Google APIs** and make sure **email,** **profile** and **openid** are enabled.

**Result:** Rancher has been added as an authorized domain for the Admin SDK API.
### 2. Creating OAuth2 Credentials for the Rancher Server
1. Go to the Google API console, select your project, and go to the [credentials page.](https://console.developers.google.com/apis/credentials)

1. On the **Create Credentials** dropdown, select **OAuth client ID.**
1. Click **Web application.**
1. Provide a name.
1. Fill out the **Authorized JavaScript origins** and **Authorized redirect URIs.** Note: The Rancher UI page for setting up Google OAuth (available from the Global view under **Security > Authentication > Google**) provides the exact links to enter for this step.
    - Under **Authorized JavaScript origins,** enter your Rancher server URL.
    - Under **Authorized redirect URIs,** enter your Rancher server URL appended with the path `verify-auth`. For example, if your URI is `https://rancherServer`, you will enter `https://rancherServer/verify-auth`.
1. Click on **Create.**
1. After the credential is created, you will see a screen with a list of your credentials. Choose the credential you just created, and in that row on the rightmost side, click **Download JSON.** Save the file so that you can provide these credentials to Rancher.

**Result:** Your OAuth credentials have been successfully created.
### 3. Creating Service Account Credentials
Since the Google Admin SDK is available only to admins, regular users cannot use it to retrieve profiles of other users or their groups. Regular users cannot even retrieve their own groups.

Since Rancher provides group-based membership access, we require the users to be able to get their own groups, and look up other users and groups when needed.

As a workaround to get this capability, G Suite recommends creating a service account and delegating authority of your G Suite domain to that service account.

This section describes how to:

- Create a service account
- Create a key for the service account and download the credentials as JSON

1. Click [here](https://console.developers.google.com/iam-admin/serviceaccounts) and select the project for which you generated OAuth credentials.
1. Click on **Create Service Account.**
1. Enter a name and click **Create.**

1. Don't provide any roles on the **Service account permissions** page and click **Continue.**

1. Click on **Create Key** and select the JSON option. Download the JSON file and save it so that you can provide it as the service account credentials to Rancher.


**Result:** Your service account is created.
### 4. Register the Service Account Key as an OAuth Client

You will need to grant some permissions to the service account you created in the last step. Rancher requires you to grant only read-only permissions for users and groups.

Using the Unique ID of the service account key, register it as an OAuth client using the following steps:

1. Get the Unique ID of the key you just created. If it's not displayed in the list of keys right next to the one you created, you will have to enable it. To enable it, click **Unique ID** and click **OK.** This will add a **Unique ID** column to the list of service account keys. Save the one listed for the service account you created. NOTE: This is a numeric key, not to be confused with the alphanumeric field **Key ID.**

    
1. Go to the [**Manage OAuth Client Access** page.](https://admin.google.com/AdminHome?chromeless=1#OGX:ManageOauthClients)
1. Add the Unique ID obtained in the previous step in the **Client Name** field.
1. In the **One or More API Scopes** field, add the following scopes:
```
openid,profile,email,https://www.googleapis.com/auth/admin.directory.user.readonly,https://www.googleapis.com/auth/admin.directory.group.readonly
```
1. Click **Authorize.**

**Result:** The service account is registered as an OAuth client in your G Suite account.
# Configuring Google OAuth in Rancher
1. Sign into Rancher using a local user assigned the [administrator]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions) role. This user is also called the local principal.
1. From the **Global** view, click **Security > Authentication** from the main menu.
1. Click **Google.** The instructions in the UI cover the steps to set up authentication with Google OAuth.
    - **Step One** is about adding Rancher as an authorized domain, which we already covered in [this section.](#1-adding-rancher-as-an-authorized-domain)
    - For **Step Two,** provide the OAuth credentials JSON that you downloaded after completing [this section.](#2-creating-oauth2-credentials-for-the-rancher-server) You can upload the file or paste the contents into the **OAuth Credentials** field.
    - For **Step Three,** provide the service account credentials JSON that you downloaded at the end of [this section.](#3-creating-service-account-credentials) The credentials will only work if you successfully [registered the service account key](#4-register-the-service-account-key-as-an-oauth-client) as an OAuth client in your G Suite account.
1. Click **Authenticate with Google**.
1. Click **Save**.

**Result:** Google authentication is successfully configured.
@@ -91,13 +91,14 @@ You are correctly redirected to your IdP login page and you are able to enter yo

### Keycloak 6.0.0+: IDPSSODescriptor missing from options

Keycloak versions 6.0.0 and up no longer provide the IDP metadata under the `Installation` tab.
You can still get the XML from the following URL:

`https://{KEYCLOAK-URL}/auth/realms/{REALM-NAME}/protocol/saml/descriptor`

The XML obtained from this URL contains `EntitiesDescriptor` as the root element. Rancher expects the root element to be `EntityDescriptor` rather than `EntitiesDescriptor`. So before passing this XML to Rancher, follow these steps to adjust it:

* Copy all the attributes from the `EntitiesDescriptor` tag to the `EntityDescriptor` tag.
* Remove the `<EntitiesDescriptor>` tag from the beginning.
* Remove the `</EntitiesDescriptor>` from the end of the XML.
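Schematically, the adjustment looks like this (a sketch; the remaining attributes are abbreviated with `...`):

```
<!-- Before: as downloaded from Keycloak -->
<EntitiesDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" ...>
  <EntityDescriptor entityID="..." ...>
    ...
  </EntityDescriptor>
</EntitiesDescriptor>

<!-- After: EntityDescriptor is the root element and carries the attributes -->
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" entityID="..." ...>
  ...
</EntityDescriptor>
```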
@@ -49,3 +49,16 @@ If you are not sure the last time Rancher performed an automatic refresh of user

**Result:** Rancher refreshes the user information for all users. Requesting this refresh will update which users can access Rancher as well as all the groups that each user belongs to.

>**Note:** Since SAML does not support user lookup, SAML-based authentication providers do not support the ability to manually refresh user information. User information will only be refreshed when the user logs into the Rancher UI.


## Session Length

_Available as of v2.3.0_

The default length (TTL) of each user session is adjustable. The default session length is 16 hours.

1. From the **Global** view, click on **Settings**.
1. In the **Settings** page, find **`auth-user-session-ttl-minutes`** and click **Edit.**
1. Enter the number of minutes a session should last and click **Save.**

**Result:** Users are automatically logged out of Rancher after the set number of minutes.
@@ -1,13 +1,14 @@

---
title: Configuring a Private Registry
weight: 400
aliases:
---

You might want to use a private Docker registry to share your custom base images within your organization. With a private registry, you can keep a private, consistent, and centralized source of truth for the Docker images that are used in your clusters.

A private registry is also used for air gap installations of Rancher, in which the registry is located somewhere accessible by Rancher. Then Rancher can provision clusters using images from the registry without direct access to the Internet.

This section describes how to configure a private Docker registry from the Rancher UI after Rancher is installed. For instructions on setting up a private registry with command line options during the installation of Rancher, refer to the [single node]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap-single-node) or [high-availability]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap-high-availability) Rancher air gap installation instructions.

There are multiple ways to configure private registries in Rancher, depending on whether your private registry requires credentials:

@@ -18,8 +19,6 @@ If your private registry requires credentials, it cannot be used as the default

# Setting a Private Registry with No Credentials as the Default Registry

>**Note:** If you want to set the default private registry when starting the rancher/rancher container, you can use the environment variable `CATTLE_SYSTEM_DEFAULT_REGISTRY`.
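For example, a minimal sketch of setting it at startup (the registry hostname and image tag are illustrative):

```
docker run -d -p 80:80 -p 443:443 \
  --restart=unless-stopped \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=registry.example.com:5000 \
  rancher/rancher:latest
```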
1. Log into Rancher and configure the default admin password.

1. Go into the **Settings** view.

@@ -36,7 +35,7 @@ If your private registry requires credentials, it cannot be used as the default

**Result:** Rancher will use your private registry to pull system images.

# Setting a Private Registry with Credentials when Deploying a Cluster

You can follow these steps to configure a private registry when you provision a cluster with Rancher:

@@ -46,5 +45,3 @@ You can follow these steps to configure a private registry when you provision a

1. Click **Save.**

**Result:** The new cluster will be able to pull images from the private registry.

### [Next: Configure Rancher System Charts]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-single-node/config-rancher-system-charts/)
@@ -0,0 +1,101 @@
---
title: Enabling Experimental Features
weight: 8000
---
_Available as of v2.3.0_

Rancher includes some features that are experimental and disabled by default. You might want to enable these features, for example, if you decide that the benefits of using an [unsupported storage type]({{<baseurl>}}/rancher/v2.x/en/admin-settings/feature-flags/enable-not-default-storage-drivers) outweigh the risk of using an untested feature. Feature flags were introduced to allow you to try these features that are not enabled by default.

The features can be enabled in two ways:

- When installing Rancher with a CLI, you can use a feature flag to enable a feature by default
- After installing Rancher, you can turn on the features with the Rancher API

Each feature has two values:

- A default value, which can be configured with a flag or environment variable from the command line
- A set value, which can be configured with the Rancher API

If no value has been set, Rancher uses the default value.

Because the API sets the actual value and the command line sets the default value, enabling or disabling a feature with the API will override any value set with the command line.

For example, if you install Rancher, then set a feature flag to true with the Rancher API, then upgrade Rancher with a command that sets the feature flag to false, the default value will still be false, but the feature will still be enabled because it was set with the Rancher API. If you then deleted the set value (true) with the Rancher API, setting it to NULL, the default value (false) would take effect.

The following is a list of the feature flags available in Rancher:

Feature | Environment Variable Key | Default Value | Description | Available as of |
---|---|---|---|---
[Allow unsupported storage drivers]({{<baseurl>}}/rancher/v2.x/en/admin-settings/feature-flags/enable-not-default-storage-drivers) | `unsupported-storage-drivers` | `false` | This feature enables types for storage providers and provisioners that are not enabled by default. | v2.3.0
[UI for Istio virtual services and destination rules]({{<baseurl>}}/rancher/v2.x/en/admin-settings/feature-flags/istio-virtual-service-ui) | `istio-virtual-service-ui`| `false` | Enables a UI that lets you create, read, update and delete virtual services and destination rules, which are traffic management features of Istio | v2.3.0

# Enabling Features when Starting Rancher

When you install Rancher, enable the feature you want with a feature flag. The command is different depending on whether you are installing Rancher on a single node or if you are doing an HA installation of Rancher.

> **Note:** Values set from the Rancher API will override the value passed in through the command line.
{{% tabs %}}
{{% tab "HA Install" %}}
When installing Rancher with a Helm chart, use the `--features` option:
```
# The CATTLE_FEATURES extraEnv options are available as of v2.3.0
helm install rancher-latest/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set 'extraEnv[0].name=CATTLE_FEATURES' \
  --set 'extraEnv[0].value=<FEATURE-NAME1>=true,<FEATURE-NAME2>=true'
```
### Rendering the Helm Chart for Air Gap Installations

For an air gap installation of Rancher, you need to add a Helm chart repository and render a Helm template before installing Rancher with Helm. For details, refer to the [air gap installation documentation.]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap/install-rancher)

Here is an example of a command for passing in the feature flag options when rendering the Helm template:
```
# systemDefaultRegistry (a default private registry for Rancher) is available as of v2.2.0;
# useBundledSystemChart (use the packaged Rancher system charts) and the
# CATTLE_FEATURES extraEnv options are available as of v2.3.0
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true \
  --set 'extraEnv[0].name=CATTLE_FEATURES' \
  --set 'extraEnv[0].value=<FEATURE-NAME1>=true,<FEATURE-NAME2>=true'
```
{{% /tab %}}
{{% tab "Single Node Install" %}}
When installing Rancher with Docker, use the `--features` option:
```
docker run -d -p 80:80 -p 443:443 \
  --restart=unless-stopped \
  rancher/rancher:rancher-latest \
  --features=<FEATURE-NAME1>=true,<FEATURE-NAME2>=true # Available as of v2.3.0
```
{{% /tab %}}
{{% /tabs %}}
# Enabling Features with the Rancher API

1. Go to `<RANCHER-SERVER-URL>/v3/features`.
1. In the `data` section, you will see an array containing all of the features that can be turned on with feature flags. The name of the feature is in the `id` field. Click the name of the feature you want to enable.
1. In the upper left corner of the screen, under **Operations,** click **Edit.**
1. In the **Value** drop-down menu, click **True.**
1. Click **Show Request.**
1. Click **Send Request.**
1. Click **Close.**

**Result:** The feature is enabled.
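Equivalently, you can script this against the same endpoint. The following is a sketch, assuming the feature object accepts a `value` field via `PUT` and that `$RANCHER_TOKEN` holds an API token (both assumptions, not confirmed by this page):

```
# Enable a feature flag through the v3 API (feature name and token are illustrative)
curl -s -X PUT \
  -H "Authorization: Bearer $RANCHER_TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"value": true}' \
  https://<RANCHER-SERVER-URL>/v3/features/<FEATURE-NAME>
```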
# Disabling Features with the Rancher API

1. Go to `<RANCHER-SERVER-URL>/v3/features`.
1. In the `data` section, you will see an array containing all of the features that can be turned on with feature flags. The name of the feature is in the `id` field. Click the name of the feature you want to disable.
1. In the upper left corner of the screen, under **Operations,** click **Edit.**
1. In the **Value** drop-down menu, click **False.**
1. Click **Show Request.**
1. Click **Send Request.**
1. Click **Close.**

**Result:** The feature is disabled.
@@ -0,0 +1,37 @@
---
title: Allow Unsupported Storage Drivers
weight: 1
---
_Available as of v2.3.0_

This feature allows you to use types for storage providers and provisioners that are not enabled by default.

To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/feature-flags)

Environment Variable Key | Default Value | Description
---|---|---
`unsupported-storage-drivers` | `false` | This feature enables types for storage providers and provisioners that are not enabled by default.
### Types for Persistent Volume Plugins that are Enabled by Default
Below is a list of storage types for persistent volume plugins that are enabled by default. When enabling this feature flag, any persistent volume plugins that are not on this list are considered experimental and unsupported:

- `aws-ebs`
- `azure-disk`
- `azure-file`
- `flex-volume-longhorn`
- `gce-pd`
- `host-path`
- `local`
- `nfs`
- `vsphere-volume`

### Types for StorageClass that are Enabled by Default
Below is a list of storage types for a StorageClass that are enabled by default. When enabling this feature flag, any StorageClass types that are not on this list are considered experimental and unsupported:

- `aws-ebs`
- `azure-disk`
- `azure-file`
- `gce-pd`
- `longhorn`
- `local`
- `vsphere-volume`
@@ -0,0 +1,29 @@
---
title: UI for Istio Virtual Services and Destination Rules
weight: 2
---
_Available as of v2.3.0_

> **Prerequisite:** Turning on this feature does not enable Istio. A cluster administrator needs to [enable Istio for the cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup) in order to use the feature.

To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/feature-flags)

Environment Variable Key | Default Value | Description
---|---|---
`istio-virtual-service-ui`| `false` | Enables a UI that lets you create, read, update and delete virtual services and destination rules, which are traffic management features of Istio

# About this Feature

A central advantage of Istio's traffic management features is that they allow dynamic request routing, which is useful for canary deployments, blue/green deployments, or A/B testing.

When enabled, this feature turns on a page that lets you configure some traffic management features of Istio using the Rancher UI. Without this feature, you need to use `kubectl` to manage traffic with Istio.

The feature enables two UI tabs: one tab for **Virtual Services** and another for **Destination Rules.**

- **Virtual services** intercept and direct traffic to your Kubernetes services, allowing you to direct percentages of traffic from a request to different services. You can use them to define a set of routing rules to apply when a host is addressed. For details, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/)
- **Destination rules** serve as the single source of truth about which service versions are available to receive traffic from virtual services. You can use these resources to define policies that apply to traffic that is intended for a service after routing has occurred. For details, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule)
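For context, managing one of these resources without the UI means applying YAML with `kubectl`. A minimal sketch of a virtual service that splits traffic (the service name, subsets, and weights are illustrative):

```
# Split traffic between two subsets of a service (all names and weights illustrative)
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
EOF
```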
To see these tabs,

1. Go to the project view in Rancher and click **Resources > Istio.**
1. You will see tabs for **Traffic Graph,** which has the Kiali network visualization integrated into the UI, and **Traffic Metrics,** which shows metrics for the success rate and request volume of traffic to your services, among other metrics. Next to these tabs, you should see the tabs for **Virtual Services** and **Destination Rules.**
@@ -0,0 +1,71 @@
---
title: Upgrading Kubernetes without Upgrading Rancher
weight: 1120
---

_Available as of v2.3.0_

The RKE metadata feature allows you to provision clusters with new versions of Kubernetes as soon as they are released, without upgrading Rancher. This feature is useful for taking advantage of patch versions of Kubernetes, for example, if you want to upgrade to Kubernetes v1.14.7 when your Rancher server originally supported v1.14.6.

**Note:** The Kubernetes API can change between minor versions. Therefore, we don't support introducing minor Kubernetes versions, such as introducing v1.15 when Rancher currently supports v1.14. You would need to upgrade Rancher to add support for minor Kubernetes versions.

Rancher's Kubernetes metadata contains information specific to the Kubernetes version that Rancher uses to provision [RKE clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/). Rancher syncs the data periodically and creates custom resource definitions (CRDs) for **system images,** **service options** and **addon templates.** Consequently, when a new Kubernetes version is compatible with the Rancher server version, the Kubernetes metadata makes the new version available to Rancher for provisioning clusters. The metadata gives you an overview of the information that the [Rancher Kubernetes Engine]({{<baseurl>}}/rke/latest/en/) (RKE) uses for deploying various Kubernetes versions.

The table below describes the CRDs that are affected by the periodic data sync.

> **Note:** Only administrators can edit metadata CRDs. It is recommended not to update existing objects unless explicitly advised.
| Resource | Description | Rancher API URL |
|----------|-------------|-----------------|
| System Images | List of system images used to deploy Kubernetes through RKE. | `<RANCHER_SERVER_URL>/v3/rkek8ssystemimages` |
| Service Options | Default options passed to Kubernetes components like `kube-api`, `scheduler`, `kubelet`, `kube-proxy`, and `kube-controller-manager` | `<RANCHER_SERVER_URL>/v3/rkek8sserviceoptions` |
| Addon Templates | YAML definitions used to deploy addon components like Canal, Calico, Flannel, Weave, Kube-dns, CoreDNS, `metrics-server`, `nginx-ingress` | `<RANCHER_SERVER_URL>/v3/rkeaddons` |

Administrators might configure the RKE metadata settings to do the following:

- Refresh the Kubernetes metadata, if a new patch version of Kubernetes comes out and they want Rancher to provision clusters with the latest version of Kubernetes without having to upgrade Rancher
- Change the metadata URL that Rancher uses to sync the metadata, which is useful for air gap setups if you need to sync Rancher locally instead of with GitHub
- Prevent Rancher from auto-syncing the metadata, which is one way to prevent new and unsupported Kubernetes versions from being available in Rancher
# Refresh Kubernetes Metadata

The option to refresh the Kubernetes metadata is available for administrators by default, or for any user who has the **Manage Cluster Drivers** [global role.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/)

To force Rancher to refresh the Kubernetes metadata, a manual refresh action is available under **Tools > Drivers > Refresh Kubernetes Metadata** on the right side of the screen.
# Configuring the Metadata Synchronization

> Only administrators can change these settings.

The RKE metadata config controls how often Rancher syncs metadata and where it downloads data from. You can configure the metadata from the settings in the Rancher UI, or through the Rancher API at the endpoint `v3/settings/rke-metadata-config`.

To edit the metadata config in Rancher,

1. Go to the **Global** view and click the **Settings** tab.
1. Go to the **rke-metadata-config** section. Click the **Ellipsis (...)** and click **Edit.**
1. You can optionally fill in the following parameters:

    - `refresh-interval-minutes`: This is the amount of time that Rancher waits between metadata syncs. To disable the periodic refresh, set `refresh-interval-minutes` to 0.
    - `url`: This is the HTTP path that Rancher fetches data from.
    - `branch`: This refers to the Git branch name if the URL is a Git URL.

If you don't have an air gap setup, you don't need to specify the URL or Git branch where Rancher gets the metadata, because the default setting is to pull from [Rancher's metadata Git repository.](https://github.com/rancher/kontainer-driver-metadata.git)

However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL and Git branch in the `rke-metadata-config` settings to point to the new location of the repository.
# Air Gap Setups

Rancher relies on a periodic refresh of the `rke-metadata-config` to download new Kubernetes version metadata if it is supported with the current version of the Rancher server. For a table of compatible Kubernetes and Rancher versions, refer to the [service terms section.](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.2.8/)

If you have an air gap setup, you might not be able to get the automatic periodic refresh of the Kubernetes metadata from Rancher's Git repository. In that case, you should disable the periodic refresh to prevent your logs from showing errors. Optionally, you can configure your metadata settings so that Rancher can sync with a local copy of the RKE metadata.

To sync Rancher with a local mirror of the RKE metadata, an administrator would configure the `rke-metadata-config` settings by updating the `url` and `branch` to point to the mirror, as sketched below.
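A sketch of making that change through the API, assuming the setting accepts a `PUT` with a JSON-encoded `value` string (the mirror URL, branch, and token are all illustrative):

```
# Point the metadata sync at a local mirror (all values illustrative)
curl -s -X PUT \
  -H "Authorization: Bearer $RANCHER_TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"value": "{\"refresh-interval-minutes\":\"1440\",\"url\":\"https://git.example.com/kontainer-driver-metadata.git\",\"branch\":\"master\"}"}' \
  https://<RANCHER-SERVER-URL>/v3/settings/rke-metadata-config
```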
After new Kubernetes versions are loaded into the Rancher setup, additional steps are required in order to use them for launching clusters. Rancher needs access to updated system images. While the metadata settings can only be changed by administrators, any user can download the Rancher system images and prepare a private Docker registry for them.

1. To download the system images for the private registry, click the Rancher server version at the bottom left corner of the Rancher UI.
1. Download the OS-specific image lists for Linux or Windows.
1. Download `rancher-images.txt`.
1. Prepare the private registry using the same steps as during the [air gap install,]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap/populate-private-registry) but instead of using the `rancher-images.txt` from the releases page, use the one obtained from the previous steps.

**Result:** The air gap installation of Rancher can now sync the Kubernetes metadata. If you update your private registry when new versions of Kubernetes are released, you can provision clusters with the new version without having to upgrade Rancher.
@@ -49,7 +49,8 @@ The following table lists each custom global permission available and whether it

| Manage Roles | ✓ | |
| Manage Users | ✓ | |
| Create Clusters | ✓ | ✓ |
| Create RKE Templates | ✓ | ✓ |
| Use Catalog Templates | ✓ | ✓ |
| Login Access | ✓ | ✓ |

> **Notes:**

@@ -75,4 +76,4 @@ You can change the default global permissions that are assigned to external users

1. If you want to remove a default permission, edit the permission and select **No** from **New User Default**.

**Result:** The default global permissions are configured based on your changes. Permissions assigned to new users display a check in the **New User Default** column.
@@ -0,0 +1,119 @@
---
title: RKE Templates
weight: 7010
---

_Available as of Rancher v2.3.0_

RKE templates are designed to allow DevOps and security teams to standardize and simplify the creation of Kubernetes clusters.

RKE is the [Rancher Kubernetes Engine,]({{<baseurl>}}/rke/latest/en/) which is the tool that Rancher uses to provision Kubernetes clusters.

With Kubernetes increasing in popularity, there is a trend toward managing a larger number of smaller clusters. When you want to create many clusters, it's more important to manage them consistently. Multi-cluster management comes with challenges to enforcing security and add-on configurations that need to be standardized before turning clusters over to end users.

RKE templates help standardize these configurations. Regardless of whether clusters are created with the Rancher UI, the Rancher API, or an automated process, Rancher will guarantee that every cluster it provisions from an RKE template is uniform and consistent in the way it is produced.

Admins control which cluster options can be changed by end users. RKE templates can also be shared with specific users and groups, so that admins can create different RKE templates for different sets of users.

If a cluster was created with an RKE template, you can't change it to a different RKE template. You can only update the cluster to a new revision of the same template.

To summarize, RKE templates allow DevOps and security teams to:

- Standardize cluster configuration and ensure that Rancher-provisioned clusters are created following best practices
- Prevent less technical users from making uninformed choices when provisioning clusters
- Share different templates with different sets of users and groups
- Delegate ownership of templates to users who are trusted to make changes to them
- Control which users can create templates
- Require users to create clusters from a template
# Configurable Settings

RKE templates can be created in the Rancher UI or defined in YAML format. They can define all the same parameters that can be specified when you use Rancher to provision custom nodes or nodes from an infrastructure provider:

- Cloud provider options
- Pod security options
- Network providers
- Ingress controllers
- Network security configuration
- Network plugins
- Private registry URL and credentials
- Add-ons
- Kubernetes options, including configurations for Kubernetes components such as kube-api, kube-controller, kubelet, and services

The [add-on section](#add-ons) of an RKE template is especially powerful because it allows a wide range of customization options.
# Scope of RKE Templates

RKE templates are supported for Rancher-provisioned clusters. The templates can be used to provision custom clusters or clusters that are launched by an infrastructure provider.

RKE templates are for defining Kubernetes and Rancher settings. Node templates are responsible for configuring nodes. For tips on how to use RKE templates in conjunction with hardware, refer to [RKE Templates and Hardware]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/rke-templates-and-hardware).

RKE templates can be applied to new clusters, but not existing clusters.

# Example Scenarios
When an organization has both basic and advanced Rancher users, administrators might want to give the advanced users more options for cluster creation, while restricting the options for basic users.

These [example scenarios]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/example-scenarios) describe how an organization could use templates to standardize cluster creation.

Some of the example scenarios include the following:

- **Enforcing templates:** Administrators might want to [enforce one or more template settings for everyone]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/example-scenarios/#enforcing-a-template-setting-for-everyone) if they want all new Rancher-provisioned clusters to have those settings.
- **Sharing different templates with different users:** Administrators might give [different templates to basic and advanced users,]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/example-scenarios/#templates-for-basic-and-advanced-users) so that basic users can have more restricted options and advanced users can have more discretion when creating clusters.
- **Updating template settings:** If an organization's security and DevOps teams decide to embed best practices into the required settings for new clusters, those best practices could change over time. If the best practices change, [a template can be updated to a new revision]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/example-scenarios/#updating-templates-and-clusters-created-with-them) and clusters created from the template can upgrade to the new version of the template.
- **Sharing ownership of a template:** When a template owner no longer wants to maintain a template, or wants to share ownership of the template, this scenario describes how [template ownership can be shared.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/example-scenarios/#allowing-other-users-to-control-and-share-a-template)
# Template Management

When you create an RKE template, it is available in the Rancher UI from the **Global** view under **Tools > RKE Templates.** When you create a template, you become the template owner, which gives you permission to revise and share the template. You can share RKE templates with specific users or groups, and you can also make them public.

Administrators can turn on template enforcement to require users to always use RKE templates when creating a cluster. This allows administrators to guarantee that Rancher always provisions clusters with specific settings.

RKE template updates are handled through a revision system. If you want to change or update a template, you create a new revision of the template. Then a cluster that was created with the older version of the template can be upgraded to the new template revision.

In an RKE template, settings can be restricted to what the template owner chooses, or they can be open for the end user to select the value. The difference is indicated by the **Allow User Override** toggle over each setting in the Rancher UI when the template is created.

For the settings that cannot be overridden, the end user will not be able to directly edit them. In order for a user to get different options for these settings, an RKE template owner would need to create a new revision of the RKE template, which would allow the user to upgrade and change that option.

The documents in this section explain the details of RKE template management:

- [Getting permission to create templates]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creator-permissions/)
- [Creating and revising templates]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/)
- [Enforcing template settings]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/enforcement/#requiring-new-clusters-to-use-a-cluster-template)
- [Overriding template settings]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/overrides/)
- [Sharing templates with cluster creators]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/#sharing-templates-with-specific-users)
- [Sharing ownership of a template]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/#sharing-ownership-of-templates)

An [example YAML configuration file for a template]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/example-yaml) is provided for reference.

# Applying Templates

You can [create a cluster from a template]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/applying-templates/#creating-a-cluster-from-a-cluster-template) that you created, or from a template that has been [shared with you.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing)

If the RKE template owner creates a new revision of the template, you can [upgrade your cluster to that revision.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/applying-templates/#updating-a-cluster-created-with-an-rke-template)

RKE templates can only be applied to new clusters, not existing clusters.

# Standardizing Hardware
RKE templates are designed to standardize Kubernetes and Rancher settings. If you want to standardize your infrastructure as well, you can use RKE templates [in conjunction with other tools]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/rke-templates-and-hardware).
# YAML Customization
|
||||
|
||||
If you define an RKE template as a YAML file, you can modify this [example RKE template YAML]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/example-yaml). The YAML in the RKE template uses the same customization that Rancher uses when creating an RKE cluster, but since the YAML is located within the context of a Rancher provisioned cluster, you will need to nest the RKE template customization under the `rancher_kubernetes_engine_config` directive in the YAML.
|
||||
|
||||
The RKE documentation also has [annotated]({{<baseurl>}}/rke/latest/en/example-yamls/) `cluster.yml` files that you can use for reference.

For guidance on available options, refer to the RKE documentation on [cluster configuration.]({{<baseurl>}}/rke/latest/en/config-options/)

### Add-ons

The add-on section of the RKE template configuration file works the same way as the [add-on section of a cluster configuration file]({{<baseurl>}}/rke/latest/en/config-options/add-ons/).

The user-defined add-ons directive allows you to either call out and pull down Kubernetes manifests or put them inline directly. If you include these manifests as part of your RKE template, Rancher will provision those in the cluster.

Some things you could do with add-ons include:

- Install applications on the Kubernetes cluster after it starts
- Install plugins on nodes that are deployed with a Kubernetes daemonset
- Automatically set up namespaces, service accounts, or role binding

The RKE template configuration must be nested within the `rancher_kubernetes_engine_config` directive. To set add-ons, when creating the template, you will click **Edit as YAML.** Then use the `addons` directive to add a manifest, or the `addons_include` directive to set which YAML files are used for the add-ons. For more information on custom add-ons, refer to the [user-defined add-ons documentation.]({{<baseurl>}}/rke/latest/en/config-options/add-ons/user-defined-add-ons/)
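
As a rough sketch of both directives (the namespace manifest and the URL below are placeholders for illustration, not settings from these docs):

```yaml
rancher_kubernetes_engine_config:
  # Inline manifests are provided as a multiline string under `addons`.
  addons: |-
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: example-namespace
  # `addons_include` pulls manifests from URLs or file paths instead.
  addons_include:
    - https://example.com/manifests/example-addon.yaml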
@@ -0,0 +1,33 @@
---
title: Applying Templates
weight: 50
---

You can create a cluster from an RKE template that you created, or from a template that has been [shared with you.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing)

RKE templates can only be applied to new clusters, not existing clusters.

You can't change the cluster to use a different RKE template. You can only update the cluster to a new revision of the same template.

### Creating a Cluster from an RKE Template

To add a cluster [hosted by an infrastructure provider]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters) using an RKE template, use these steps:

1. From the **Global** view, go to the **Clusters** tab.
1. Click **Add Cluster** and choose the infrastructure provider.
1. Provide the cluster name and node template details as usual.
1. To use an RKE template, under the **Cluster Options**, check the box for **Use an existing RKE template and revision.**
1. Choose an existing template and revision from the dropdown menu.
1. Optional: You can edit any settings that the RKE template owner marked as **Allow User Override** when the template was created. If there are settings that you want to change, but don't have the option to, you will need to contact the template owner to get a new revision of the template. Then you will need to edit the cluster to upgrade it to the new revision.
1. Click **Save** to launch the cluster.

### Updating a Cluster Created with an RKE Template

When the template owner creates a template, each setting has a switch in the Rancher UI that indicates if users can override the setting.

- If the setting allows a user override, you can update these settings in the cluster by [editing the cluster.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/editing-clusters/)
- If the switch is turned off, you cannot change these settings unless the cluster owner creates a template revision that lets you override them. If there are settings that you want to change, but don't have the option to, you will need to contact the template owner to get a new revision of the template.

If a cluster was created from an RKE template, you can edit the cluster to update the cluster to a new revision of the template.

> **Note:** You can't change the cluster to use a different RKE template. You can only update the cluster to a new revision of the same template.
@@ -0,0 +1,115 @@
---
title: Creating and Revising Templates
weight: 32
---

This section describes how to manage RKE templates and revisions. You can create, share, update, and delete templates from the **Global** view under **Tools > RKE Templates.**

Template updates are handled through a revision system. When template owners want to change or update a template, they create a new revision of the template. Individual revisions cannot be edited. However, if you want to prevent a revision from being used to create a new cluster, you can disable it.

Template revisions can be used in two ways: to create a new cluster, or to upgrade a cluster that was created with an earlier version of the template. The template creator can choose a default revision, but when end users create a cluster, they can choose any template and any template revision that is available to them. After the cluster is created from a specific revision, it cannot change to another template, but the cluster can be upgraded to another available revision of the same template.

The template owner has full control over template revisions, and can create new revisions to update the template, delete or disable revisions that should not be used to create clusters, and choose which template revision is the default.

### Prerequisites

You can create RKE templates if you have the **Create RKE Templates** permission, which can be [given by an administrator.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creator-permissions)

You can revise, share, and delete a template if you are an owner of the template. For details on how to become an owner of a template, refer to [the documentation on sharing template ownership.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/#sharing-ownership-of-templates)

### Creating a Template

1. From the **Global** view, click **Tools > RKE Templates.**
1. Click **Add Template.**
1. Provide a name for the template. An auto-generated name is already provided for the template's first version, which is created along with this template.
1. Optional: Share the template with other users or groups by [adding them as members.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/#sharing-templates-with-specific-users) You can also make the template public to share with everyone in the Rancher setup.
1. Then follow the form on screen to save the cluster configuration parameters as part of the template's revision. The revision can be marked as default for this template.

**Result:** An RKE template with one revision is configured. You can use this RKE template revision later when you [provision a Rancher-launched cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters).

### Updating a Template

When you update an RKE template, you are creating a revision of the existing template. Clusters that were created with an older version of the template can be updated to match the new revision.

You can't edit individual revisions of a template. To prevent a revision from being used, you can [disable it.](#disabling-a-template-revision)

New template revisions can be created without affecting clusters already using a revision of the template.

1. From the **Global** view, click **Tools > RKE Templates.**
1. Go to the template that you want to edit and click the **Vertical Ellipsis (...) > Edit.**
1. Edit the required information and click **Save.**
1. Optional: You can change the default revision of this template and also change who it is shared with.

**Result:** The template is updated.

### Deleting a Template

When you no longer use an RKE template for any of your clusters, you can delete it.

1. From the **Global** view, click **Tools > RKE Templates.**
1. Go to the RKE template that you want to delete and click the **Vertical Ellipsis (...) > Delete.**
1. Confirm the deletion when prompted.

**Result:** The template is deleted.

### Creating a Revision Based on the Default Revision

You can clone the default template revision and quickly update its settings rather than creating a new revision from scratch. Cloning templates saves you the hassle of re-entering the access keys and other parameters needed for cluster creation.

1. From the **Global** view, click **Tools > RKE Templates.**
1. Go to the RKE template that you want to clone and click the **Vertical Ellipsis (...) > New Revision From Default.**
1. Complete the rest of the form to create a new revision.

**Result:** The RKE template revision is cloned and configured.

### Creating a Revision Based on a Cloned Revision

When creating new RKE template revisions from your user settings, you can clone an existing revision and quickly update its settings rather than creating a new one from scratch. Cloning template revisions saves you the hassle of re-entering the cluster parameters.

1. From the **Global** view, click **Tools > RKE Templates.**
1. Go to the template revision you want to clone. Then select **Ellipsis > Clone Revision.**
1. Complete the rest of the form.

**Result:** The RKE template revision is cloned and configured. You can use the RKE template revision later when you provision a cluster. Any existing cluster using this RKE template can be upgraded to this new revision.

### Disabling a Template Revision

When you no longer want an RKE template revision to be used for creating new clusters, you can disable it. A disabled revision can be re-enabled.

You can disable the revision if it is not being used by any cluster.

1. From the **Global** view, click **Tools > RKE Templates.**
1. Go to the template revision you want to disable. Then select **Ellipsis > Disable.**

**Result:** The RKE template revision cannot be used to create a new cluster.

### Re-enabling a Disabled Template Revision

If you decide that a disabled RKE template revision should be used to create new clusters, you can re-enable it.

1. From the **Global** view, click **Tools > RKE Templates.**
1. Go to the template revision you want to re-enable. Then select **Ellipsis > Enable.**

**Result:** The RKE template revision can be used to create a new cluster.

### Setting a Template Revision as Default

When end users create a cluster using an RKE template, they can choose which revision to create the cluster with. You can configure which revision is used by default.

To set an RKE template revision as default,

1. From the **Global** view, click **Tools > RKE Templates.**
1. Go to the RKE template revision that should be default and click the **Ellipsis (...) > Set as Default.**

**Result:** The RKE template revision will be used as the default option when clusters are created with the template.

### Deleting a Template Revision

You can delete all revisions of a template except for the default revision.

To permanently delete a revision,

1. From the **Global** view, click **Tools > RKE Templates.**
1. Go to the RKE template revision that should be deleted and click the **Ellipsis (...) > Delete.**

**Result:** The RKE template revision is deleted.
@@ -0,0 +1,50 @@
---
title: Template Creator Permissions
weight: 10
---

Administrators have the permission to create RKE templates, and only administrators can give that permission to other users.

For more information on administrator permissions, refer to the [documentation on global permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/).

# Giving Users Permission to Create Templates

Templates can only be created by users who have the global permission **Create RKE Templates.**

Administrators have the global permission to create templates, and only administrators can give that permission to other users.

For information on allowing users to modify existing templates, refer to [Sharing Templates.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing)

Administrators can give users permission to create RKE templates in two ways:

- By editing the permissions of an [individual user](#allowing-a-user-to-create-templates)
- By changing the [default permissions of new users](#allowing-new-users-to-create-templates-by-default)

### Allowing a User to Create Templates

An administrator can individually grant the role **Create RKE Templates** to any existing user by following these steps:

1. From the global view, click the **Users** tab. Choose the user you want to edit and click the **Vertical Ellipsis (...) > Edit.**
1. In the **Global Permissions** section, choose **Custom** and select the **Create RKE Templates** role along with any other roles the user should have. Click **Save.**

**Result:** The user has permission to create RKE templates.

### Allowing New Users to Create Templates by Default

Alternatively, the administrator can give all new users the default permission to create RKE templates by following these steps. This will not affect the permissions of existing users.

1. From the **Global** view, click **Security > Roles.**
1. Under the **Global** roles tab, go to the role **Create RKE Templates** and click the **Vertical Ellipsis (...) > Edit**.
1. Select the option **Yes: Default role for new users** and click **Save.**

**Result:** Any new user created in this Rancher installation will be able to create RKE templates. Existing users will not get this permission.

### Revoking Permission to Create Templates

Administrators can remove a user's permission to create templates with the following steps:

1. From the global view, click the **Users** tab. Choose the user you want to edit and click the **Vertical Ellipsis (...) > Edit.**
1. In the **Global Permissions** section, uncheck the box for **Create RKE Templates**. In this section, you can change the user back to a standard user, or give the user a different set of custom permissions.
1. Click **Save.**

**Result:** The user cannot create RKE templates.
@@ -0,0 +1,38 @@
---
title: Template Enforcement
weight: 32
---

This section describes how template administrators can enforce templates in Rancher, restricting the ability of users to create clusters without a template.

By default, any standard user in Rancher can create clusters. But when RKE template enforcement is turned on,

- Only an administrator has the ability to create clusters without a template.
- All standard users must use an RKE template to create a new cluster; they cannot create a cluster without using a template.

Users can only create new templates if the administrator [gives them permission.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creator-permissions/#allowing-a-user-to-create-templates)

After a cluster is created with an RKE template, the cluster creator cannot edit settings that are defined in the template. The only way to change those settings after the cluster is created is to [upgrade the cluster to a new revision]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/applying-templates/#updating-a-cluster-created-with-an-rke-template) of the same template. If cluster creators want to change template-defined settings, they would need to contact the template owner to get a new revision of the template. For details on how template revisions work, refer to the [documentation on revising templates.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/#updating-a-template)

# Requiring New Clusters to Use an RKE Template

You might want to require new clusters to use a template to ensure that any cluster launched by a [standard user]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) will use the Kubernetes and/or Rancher settings that are vetted by administrators.

To require new clusters to use an RKE template, administrators can turn on RKE template enforcement with the following steps:

1. From the **Global** view, click the **Settings** tab.
1. Go to the `rke-template-enforcement` setting. Click the vertical **Ellipsis (...)** and click **Edit.**
1. Set the value to **True** and click **Save.**

**Result:** All clusters provisioned by Rancher must use a template, unless the creator is an administrator.
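
If you prefer to automate this, the same setting can, in principle, be changed through the Rancher API rather than the UI. A rough sketch with `curl` follows; the server URL and bearer token are placeholders, and the exact request shape may vary between Rancher versions, so treat this as an assumption to verify against your API documentation:

```shell
# Flip the rke-template-enforcement setting via the Rancher v3 API (hypothetical request shape).
curl -s -u "token-xxxxx:placeholder-secret" \
  -X PUT \
  -H 'Content-Type: application/json' \
  -d '{"value": "true"}' \
  "https://rancher.example.com/v3/settings/rke-template-enforcement"
```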

# Disabling RKE Template Enforcement

To allow new clusters to be created without an RKE template, administrators can turn off RKE template enforcement with the following steps:

1. From the **Global** view, click the **Settings** tab.
1. Go to the `rke-template-enforcement` setting. Click the vertical **Ellipsis (...)** and click **Edit.**
1. Set the value to **False** and click **Save.**

**Result:** When clusters are provisioned by Rancher, they don't need to use a template.
@@ -0,0 +1,71 @@
---
title: Example Scenarios
weight: 5
---

These example scenarios describe how an organization could use templates to standardize cluster creation.

- **Enforcing templates:** Administrators might want to [enforce one or more template settings for everyone](#enforcing-a-template-setting-for-everyone) if they want all new Rancher-provisioned clusters to have those settings.
- **Sharing different templates with different users:** Administrators might give [different templates to basic and advanced users,](#templates-for-basic-and-advanced-users) so that basic users have more restricted options and advanced users have more discretion when creating clusters.
- **Updating template settings:** If an organization's security and DevOps teams decide to embed best practices into the required settings for new clusters, those best practices could change over time. If the best practices change, [a template can be updated to a new revision](#updating-templates-and-clusters-created-with-them) and clusters created from the template can upgrade to the new version of the template.
- **Sharing ownership of a template:** When a template owner no longer wants to maintain a template, or wants to delegate ownership of the template, this scenario describes how [template ownership can be shared.](#allowing-other-users-to-control-and-share-a-template)

# Enforcing a Template Setting for Everyone

Let's say there is an organization in which the administrators decide that all new clusters should be created with Kubernetes version 1.14.

1. First, an administrator creates a template which specifies the Kubernetes version as 1.14 and marks all other settings as **Allow User Override**.
1. The administrator makes the template public.
1. The administrator turns on template enforcement.

**Results:**

- All Rancher users in the organization have access to the template.
- All new clusters created by [standard users]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) with this template will use Kubernetes 1.14 and they are unable to use a different Kubernetes version. By default, standard users don't have permission to create templates, so this template will be the only template they can use unless more templates are shared with them.
- All standard users must use a cluster template to create a new cluster. They cannot create a cluster without using a template.

In this way, the administrators enforce the Kubernetes version across the organization, while still allowing end users to configure everything else.

# Templates for Basic and Advanced Users

Let's say an organization has both basic and advanced users. Administrators want to require the basic users to use a template, while the advanced users and administrators can create their clusters however they want.

1. First, an administrator turns on [RKE template enforcement.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/enforcement/#requiring-new-clusters-to-use-a-cluster-template) This means that every [standard user]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) in Rancher will need to use an RKE template when they create a cluster.
1. The administrator then creates two templates:

    - One template for basic users, with almost every option specified except for access keys
    - One template for advanced users, with **Allow User Override** turned on for most or all options

1. The administrator shares the advanced template with only the advanced users.
1. The administrator makes the template for basic users public, so the more restrictive template is an option for everyone who creates a Rancher-provisioned cluster.

**Result:** All Rancher users, except for administrators, are required to use a template when creating a cluster. Everyone has access to the restrictive template, but only advanced users have permission to use the more permissive template. The basic users are more restricted, while advanced users have more freedom when configuring their Kubernetes clusters.

# Updating Templates and Clusters Created with Them

Let's say an organization has a template that requires clusters to use Kubernetes v1.14. However, as time goes on, the administrators change their minds. They decide they want users to be able to upgrade their clusters to use newer versions of Kubernetes.

In this organization, many clusters were created with a template that requires Kubernetes v1.14. Because the template does not allow that setting to be overridden, the users who created the cluster cannot directly edit that setting.

The template owner has several options for allowing the cluster creators to upgrade Kubernetes on their clusters:

- **Specify Kubernetes v1.15 on the template:** The template owner can create a new template revision that specifies Kubernetes v1.15. Then the owner of each cluster that uses that template can upgrade their cluster to a new revision of the template. This template upgrade allows the cluster creator to upgrade Kubernetes to v1.15 on their cluster.
- **Allow any Kubernetes version on the template:** When creating a template revision, the template owner can also mark the Kubernetes version as **Allow User Override** using the switch near that setting on the Rancher UI. This will allow clusters that upgrade to this template revision to use any version of Kubernetes.
- **Allow the latest minor Kubernetes version on the template:** The template owner can also create a template revision in which the Kubernetes version is defined as **Latest v1.14 (Allows patch version upgrades).** This means clusters that use that revision will be able to get patch version upgrades, but upgrades to a new minor version of Kubernetes will not be allowed.

# Allowing Other Users to Control and Share a Template

Let's say Alice is a Rancher administrator. She owns an RKE template that reflects her organization's agreed-upon best practices for creating a cluster.

Bob is an advanced user who can make informed decisions about cluster configuration. Alice trusts Bob to create new revisions of her template as the best practices get updated over time. Therefore, she decides to make Bob an owner of the template.

To share ownership of the template with Bob, Alice [adds Bob as an owner of her template.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing/#sharing-ownership-of-templates)

The result is that as a template owner, Bob is in charge of version control for that template. Bob can now do all of the following:

- [Revise the template]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/#updating-a-template) when the best practices change
- [Disable outdated revisions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/#disabling-a-template-revision) of the template so that no new clusters can be created with it
- [Delete the whole template]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/#deleting-a-template) if the organization wants to go in a different direction
- [Set a certain revision as default]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/#setting-a-template-revision-as-default) when users create a cluster with it. End users of the template will still be able to choose which revision they want to create the cluster with.
- [Share the template]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/template-access-and-sharing) with specific users, make the template available to all Rancher users, or share ownership of the template with another user.
@@ -0,0 +1,99 @@
---
title: Example YAML
weight: 60
---

Below is an example RKE template configuration file for reference.

The YAML in the RKE template uses the same customization that is used when you create an RKE cluster. However, since the YAML is within the context of a Rancher provisioned RKE cluster, the customization from the RKE docs needs to be nested under the `rancher_kubernetes_engine_config` directive.

```yaml
#
# Cluster Config
#
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
local_cluster_auth_endpoint:
  enabled: true
#
# Rancher Config
#
rancher_kubernetes_engine_config: # Your RKE template config goes here.
  addon_job_timeout: 30
  authentication:
    strategy: x509
  ignore_docker_version: true
#
# # Currently only nginx ingress provider is supported.
# # To disable ingress controller, set `provider: none`
# # To enable ingress on specific nodes, use the node_selector, eg:
#    provider: nginx
#    node_selector:
#      app: ingress
#
  ingress:
    provider: nginx
  kubernetes_version: v1.15.3-rancher3-1
  monitoring:
    provider: metrics-server
#
# If you are using calico on AWS
#
#  network:
#    plugin: calico
#    calico_network_provider:
#      cloud_provider: aws
#
# # To specify flannel interface
#
#  network:
#    plugin: flannel
#    flannel_network_provider:
#      iface: eth1
#
# # To specify flannel interface for canal plugin
#
#  network:
#    plugin: canal
#    canal_network_provider:
#      iface: eth1
#
  network:
    options:
      flannel_backend_type: vxlan
    plugin: canal
#
#  services:
#    kube-api:
#      service_cluster_ip_range: 10.43.0.0/16
#    kube-controller:
#      cluster_cidr: 10.42.0.0/16
#      service_cluster_ip_range: 10.43.0.0/16
#    kubelet:
#      cluster_domain: cluster.local
#      cluster_dns_server: 10.43.0.10
#
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: 5000
        heartbeat-interval: 500
      gid: 0
      retention: 72h
      snapshot: false
      uid: 0
    kube_api:
      always_pull_images: false
      pod_security_policy: false
      service_node_port_range: 30000-32767
  ssh_agent_auth: false
  windows_prefered_cluster: false
```
@@ -0,0 +1,15 @@
---
title: Overriding Template Settings
weight: 33
---

When a user creates an RKE template, each setting in the template has a switch in the Rancher UI that indicates if users can override the setting. This switch marks those settings as **Allow User Override.**

After a cluster is created with a template, end users can't update any of the settings defined in the template unless the template owner marked them as **Allow User Override.** However, if the template is [updated to a new revision]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising) that changes the settings or allows end users to change them, the cluster can be upgraded to a new revision of the template and the changes in the new revision will be applied to the cluster.

When any parameter is set as **Allow User Override** on the RKE template, end users fill out those fields during cluster creation, and they can edit those settings at any time afterward.

The **Allow User Override** model of the RKE template is useful for situations such as:

- Administrators know that some settings will need the flexibility to be frequently updated over time
- End users will need to enter their own access keys or secret keys, for example, cloud credentials or credentials for backup snapshots
@@ -0,0 +1,70 @@
---
title: RKE Templates and Infrastructure
weight: 90
---

In Rancher, RKE templates are used to provision Kubernetes and define Rancher settings, while node templates are used to provision nodes.

Therefore, even if RKE template enforcement is turned on, the end user still has flexibility when picking the underlying hardware when creating a Rancher cluster. The end users of an RKE template can still choose an infrastructure provider and the nodes they want to use.

If you want to standardize the hardware in your clusters, use RKE templates in conjunction with node templates or with a server provisioning tool such as Terraform.

### Node Templates

[Node templates]({{<baseurl>}}/rancher/v2.x/en/user-settings/node-templates) are responsible for node configuration and node provisioning in Rancher. From your user profile, you can set up node templates to define which templates are used in each of your node pools. With node pools enabled, you can make sure you have the required number of nodes in each node pool, and ensure that all nodes in the pool are the same.

### Terraform

Terraform is a server provisioning tool. It uses an infrastructure-as-code approach that lets you define almost every aspect of your infrastructure in Terraform configuration files. It can automate the process of server provisioning in a way that is self-documenting and easy to track in version control.

This section focuses on how to use Terraform with the [Rancher 2 Terraform provider](https://www.terraform.io/docs/providers/rancher2/), which is a recommended option to standardize the hardware for your Kubernetes clusters. If you use the Rancher Terraform provider to provision hardware, and then use an RKE template to provision a Kubernetes cluster on that hardware, you can quickly create a comprehensive, production-ready cluster.

Terraform allows you to:

- Define almost any kind of infrastructure-as-code, including servers, databases, load balancers, monitoring, firewall settings, and SSL certificates
- Leverage catalog apps and multi-cluster apps
- Codify infrastructure across many platforms, including Rancher and major cloud providers
- Commit infrastructure-as-code to version control
- Easily repeat configuration and setup of infrastructure
- Incorporate infrastructure changes into standard development practices
- Prevent configuration drift, in which some servers become configured differently than others

# How Does Terraform Work?

Terraform configuration is written in files with the extension `.tf`, using HashiCorp Configuration Language (HCL), a declarative language that lets you define the infrastructure you want in your cluster, the cloud provider you are using, and your credentials for the provider. Then Terraform makes API calls to the provider in order to efficiently create that infrastructure.

To create a Rancher-provisioned cluster with Terraform, go to your Terraform configuration file and define the provider as Rancher 2. You can set up your Rancher 2 provider with a Rancher API key. Note: The API key has the same permissions and access level as the user it is associated with.
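
As a minimal sketch of that provider setup (the URL and token are placeholders; consult the provider documentation linked below for the full set of arguments):

```
# Configure the Rancher2 provider against your Rancher server.
provider "rancher2" {
  api_url   = "https://rancher.example.com/v3"   # placeholder Rancher API endpoint
  token_key = "token-xxxxx:placeholder-secret"   # placeholder API key
}
```
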
Then Terraform calls the Rancher API to provision your infrastructure, and Rancher calls the infrastructure provider. As an example, if you wanted to use Rancher to provision infrastructure on AWS, you would provide both your Rancher API key and your AWS credentials in the Terraform configuration file or in environment variables so that they could be used to provision the infrastructure.

When you need to make changes to your infrastructure, instead of manually updating the servers, you can make changes in the Terraform configuration files. Those files can be committed to version control, validated, and reviewed as necessary. Then when you run `terraform apply`, the changes are deployed.

# Tips for Working with Terraform

- There are examples of how to provide most aspects of a cluster in the [documentation for the Rancher 2 provider.](https://www.terraform.io/docs/providers/rancher2/)

- In the Terraform settings, you can install Docker Machine by using the Docker Machine node driver.

- You can also modify auth in the Terraform provider.

- You can reverse engineer how to define a setting in Terraform by changing the setting in Rancher, then going back and checking your Terraform state file to see how it maps to the current state of your infrastructure.

- If you want to manage Kubernetes cluster settings, Rancher settings, and hardware settings all in one place, use [Terraform modules](https://github.com/rancher/terraform-modules). You can pass a cluster configuration YAML file or an RKE template configuration file to a Terraform module so that the Terraform module will create it. In that case, you could use your infrastructure-as-code to manage the version control and revision history of both your Kubernetes cluster and its underlying hardware.

# Tip for Creating CIS Benchmark Compliant Clusters

This section describes one way that you can make security and compliance-related config files standard in your clusters.

When you create a [CIS benchmark compliant cluster,]({{<baseurl>}}/rancher/v2.x/en/security/) you have an encryption config file and an audit log config file.

Your infrastructure provisioning system can write those files to disk. Then in your RKE template, you would specify where those files will be, then add your encryption config file and audit log config file as extra mounts to the `kube-api-server`.

Then you would make sure that the `kube-api-server` flags in your RKE template use your CIS-compliant config files.

In this way, you can create flags that comply with the CIS benchmark.
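
A rough sketch of what those extra mounts and flags might look like in the template YAML follows. The file paths are placeholders for wherever your provisioning system writes the config files; the flag names are the standard `kube-apiserver` options:

```yaml
rancher_kubernetes_engine_config:
  services:
    kube-api:
      extra_binds:
        # Mount the host directory containing the CIS config files into the kube-apiserver container.
        - "/opt/kubernetes:/opt/kubernetes"
      extra_args:
        encryption-provider-config: /opt/kubernetes/encryption.yaml
        audit-log-path: /var/log/kube-audit/audit-log.json
        audit-policy-file: /opt/kubernetes/audit-policy.yaml
```
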

# Resources

- [Terraform documentation](https://www.terraform.io/docs/)
- [Rancher2 Terraform provider documentation](https://www.terraform.io/docs/providers/rancher2/)
- [The RanchCast - Episode 1: Rancher 2 Terraform Provider](https://youtu.be/YNCq-prI8-8): In this demo, Director of Community Jason van Brackel walks through using the Rancher 2 Terraform Provider to provision nodes and create a custom cluster.
@@ -0,0 +1,61 @@
---
title: Access and Sharing
weight: 31
---

If you own an RKE template, you can share it with users or groups of users, who can then use the template to create clusters.

Since RKE templates are specifically shared with users and groups, owners can share different RKE templates with different sets of users.

When you share a template, each user can have one of two access levels:

- **Owner:** This user can update, delete, and share the templates that they own. The owner can also share the template with other users.
- **User:** These users can create clusters using the template. They can also upgrade those clusters to new revisions of the same template. When you share a template as **Make Public (read-only),** all users in your Rancher setup have the User access level for the template.

If you create a template, you automatically become an owner of that template.

If you want to delegate responsibility for updating the template, you can share ownership of the template. For details on how owners can modify templates, refer to the [documentation about revising templates.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising)

There are several ways to share templates:

- Add users to a new RKE template during template creation
- Add users to an existing RKE template
- Make the RKE template public, sharing it with all users in the Rancher setup
- Share template ownership with users who are trusted to modify the template

### Sharing Templates with Specific Users or Groups

To allow users or groups to create clusters using your template, you can give them the basic **User** access level for the template.

1. From the **Global** view, click **Tools > RKE Templates.**
1. Go to the template that you want to share and click the **Vertical Ellipsis (...) > Edit.**
1. In the **Share Template** section, click on **Add Member**.
1. Search in the **Name** field for the user or group you want to share the template with.
1. Choose the **User** access type.
1. Click **Save.**

**Result:** The user or group can create clusters using the template.

### Sharing Templates with All Users

1. From the **Global** view, click **Tools > RKE Templates.**
1. Go to the template that you want to share and click the **Vertical Ellipsis (...) > Edit.**
1. Under **Share Template,** click **Make Public (read-only).** Then click **Save.**

**Result:** All users in the Rancher setup can create clusters using the template.

### Sharing Ownership of Templates

If you are the creator of a template, you might want to delegate responsibility for maintaining and updating a template to another user or group.

In that case, you can give users the Owner access type, which allows another user to update your template, delete it, or share access to it with other users.

To give Owner access to a user or group,

1. From the **Global** view, click **Tools > RKE Templates.**
1. Go to the RKE template that you want to share and click the **Vertical Ellipsis (...) > Edit.**
1. Under **Share Template**, click on **Add Member** and search in the **Name** field for the user or group you want to share the template with.
1. In the **Access Type** field, click **Owner.**
1. Click **Save.**

**Result:** The user or group has the Owner access type, and can modify, share, or delete the template.
@@ -61,6 +61,7 @@ To take recurring snapshots, enable the `etcd-snapshot` service, which is a serv
access_key: "myaccesskey"
secret_key: "myaccesssecret"
bucket_name: "my-backup-bucket"
folder: "folder-name" # Available as of v2.3.0
endpoint: "s3.eu-west-1.amazonaws.com"
region: "eu-west-1"
```
@@ -112,7 +113,8 @@ _Available as of RKE v0.2.0_
```shell
rke etcd snapshot-save --config rancher-cluster.yml --name snapshot-name \
--s3 --access-key S3_ACCESS_KEY --secret-key S3_SECRET_KEY \
--bucket-name s3-bucket-name --s3-endpoint s3.amazonaws.com \
--folder folder-name # Available as of v2.3.0
```

**Result:** RKE takes a snapshot of `etcd` running on each `etcd` node. The file is saved to `/opt/rke/etcd-snapshots`. It is also uploaded to the S3 compatible backend.

@@ -115,7 +115,8 @@ When restoring etcd from a snapshot located in an S3 compatible backend, the com
```
$ rke etcd snapshot-restore --config cluster.yml --name snapshot-name \
--s3 --access-key S3_ACCESS_KEY --secret-key S3_SECRET_KEY \
--bucket-name s3-bucket-name --s3-endpoint s3.amazonaws.com \
--folder folder-name # Available as of v2.3.0
```

#### Options for `rke etcd snapshot-restore`
@@ -131,6 +132,7 @@ S3 specific options are only available for RKE v0.2.0+.
| `--access-key` value | Specify s3 accessKey | *|
| `--secret-key` value | Specify s3 secretKey | *|
| `--bucket-name` value | Specify s3 bucket name | *|
| `--folder` value | Specify s3 folder in the bucket name _Available as of v2.3.0_ | *|
| `--region` value | Specify the s3 bucket location (optional) | *|
| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{< baseurl >}}/rke/latest/en/config-options/#ssh-agent) | |
| `--ignore-docker-version` | [Disable Docker version check]({{< baseurl >}}/rke/latest/en/config-options/#supported-docker-versions) | |

@@ -151,4 +151,8 @@ _Available as v2.2.0_

When creating applications that span multiple Kubernetes clusters, a Global DNS entry can be created to route traffic to the endpoints in all of the different clusters. An external DNS server will need to be programmed to assign a fully qualified domain name (a.k.a. FQDN) to your application. Rancher will use the FQDN you provide and the IP addresses where your application is running to program the DNS. Rancher will gather endpoints from all the Kubernetes clusters running your application and program the DNS.

For more information on how to use this feature, see [Global DNS]({{< baseurl >}}/rancher/v2.x/en/admin-settings/globaldns/).

## Chart Compatibility with Rancher

Charts now support the fields `rancher_min_version` and `rancher_max_version` in the [`questions.yml` file](https://github.com/rancher/integration-test-charts/blob/master/charts/chartmuseum/v1.6.0/questions.yml) to specify the versions of Rancher that the chart is compatible with. When using the UI, only app versions that are valid for the version of Rancher running will be shown. API validation is done to ensure apps that don't meet the Rancher requirements cannot be launched. An app that is already running will not be affected on a Rancher upgrade if the newer Rancher version does not meet the app's requirements.

@@ -1,11 +1,11 @@
---
title: Creating Custom Catalogs Apps
weight: 4000
aliases:
- /rancher/v2.x/en/tasks/global-configuration/catalog/customizing-charts/
---

Rancher's catalog service requires any custom catalogs to be structured in a specific format for the catalog service to be able to leverage it in Rancher.

## Chart Types

@@ -73,9 +73,26 @@ Before you create your own custom catalog, you should have a basic understanding


### Questions.yml

Inside the `questions.yml`, most of the content will be around the questions to ask the end user, but there are some additional fields that can be set in this file.

#### Min/Max Rancher versions

_Available as of v2.3.0_

For each chart, you can add the minimum and/or maximum Rancher version, which determines whether or not this chart is available to be deployed from Rancher.

> **Note:** Even though Rancher release versions are prefixed with a `v`, there is *no* prefix for the release version when using this option.

```
rancher_min_version: 2.3.0
rancher_max_version: 2.3.99
```

#### Question Variable Reference

This reference contains variables that you can use in `questions.yml` nested under `questions:`.

| Variable | Type | Required | Description |
| ------------- | ------------- | --- |------------- |

@@ -51,6 +51,14 @@ Rancher supports two different backup targets:

By default, the `local` backup target is selected. The benefits of this option is that there is no external configuration. Snapshots are automatically saved locally to the etcd nodes in the [Rancher launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) in `/opt/rke/etcd-snapshots`. All recurring snapshots are taken at configured intervals. The downside of using the `local` backup target is that if there is a total disaster and _all_ etcd nodes are lost, there is no ability to restore the cluster.

#### Safe Timestamps

_Available as of v2.3.0_

As of v2.2.6, snapshot files are timestamped to simplify processing the files using external tools and scripts, but in some S3 compatible backends, these timestamps were unusable. As of Rancher v2.3.0, the option `safe_timestamp` is added to support compatible file names. When this flag is set to `true`, all special characters in the snapshot filename timestamp are replaced.

> **Note:** This option is not available directly in the UI, and is only available through the `Edit as Yaml` interface.
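
A minimal sketch of where the flag sits in the cluster YAML, following the etcd backup structure used elsewhere in these docs:

```yaml
services:
  etcd:
    backup_config:
      enabled: true
      interval_hours: 12
      retention: 6
      safe_timestamp: true # replaces special characters in the snapshot filename timestamp
```
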
### S3 Backup Target

The `S3` backup target allows users to configure an S3 compatible backend to store the snapshots. The primary benefit of this option is that if the cluster loses all the etcd nodes, the cluster can still be restored as the snapshots are stored externally. Rancher recommends external targets like `S3` backup; however, its configuration requirements do require additional effort that should be considered.
@@ -62,6 +70,13 @@ The `S3` backup target allows users to configure a S3 compatible backend to stor
|S3 Region Endpoint|S3 region endpoint for the backup bucket|* |
|S3 Access Key|S3 access key with permission to access the backup bucket|*|
|S3 Secret Key|S3 secret key with permission to access the backup bucket|*|
| Custom CA Certificate | A custom certificate used to access private S3 backends _Available as of v2.2.5_ ||

#### Using a custom CA certificate for S3

_Available as of v2.2.5_

The backup snapshot can be stored on a custom `S3` backend like [minio](https://min.io/). If the S3 backend uses a self-signed or custom certificate, provide a custom certificate using the `Custom CA Certificate` option to connect to the S3 backend.

# IAM Support for Storing Snapshots in S3
The `S3` backup target supports using IAM authentication to AWS API in addition to using API credentials. An IAM role gives temporary permissions that an application can use when making API calls to S3 storage. To use IAM authentication, the following requirements must be met:

@@ -55,4 +55,33 @@ Rancher launched Kubernetes clusters have the ability to rotate the auto-generat

5. Click on **Send Request**.

**Results:** All Kubernetes certificates will be rotated.

### Rotating Expired Certificates After Upgrading Older Rancher Versions

If you are upgrading from Rancher v2.0.13 or earlier, or v2.1.8 or earlier, and your clusters have expired certificates, some manual steps are required to complete the certificate rotation.

1. For the `controlplane` and `etcd` nodes, log in to each corresponding host and check if the certificate `kube-apiserver-requestheader-ca.pem` is in the following directory:

    ```
    cd /etc/kubernetes/.tmp
    ```

    If the certificate is not in the directory, perform the following commands:

    ```
    cp kube-ca.pem kube-apiserver-requestheader-ca.pem
    cp kube-ca-key.pem kube-apiserver-requestheader-ca-key.pem
    cp kube-apiserver.pem kube-apiserver-proxy-client.pem
    cp kube-apiserver-key.pem kube-apiserver-proxy-client-key.pem
    ```

    If the `.tmp` directory does not exist, you can copy the entire SSL certificate directory to `.tmp`:

    ```
    cp -r /etc/kubernetes/ssl /etc/kubernetes/.tmp
    ```

1. Rotate the certificates. For Rancher v2.0.x and v2.1.x, use the [Rancher API.](#certificate-rotation-in-rancher-v2-1-x-and-v2-0-x) For Rancher 2.2.x, [use the UI.](#certificate-rotation-in-rancher-v2-2-x)

1. After the command is finished, check if the `worker` nodes are Active. If not, log in to each `worker` node and restart the kubelet and proxy.
@@ -8,7 +8,7 @@ aliases:
After you provision a Kubernetes cluster using Rancher, you can still edit options and settings for the cluster. To edit your cluster, open the **Global** view, make sure the **Clusters** tab is selected, and then select **Ellipsis (...) > Edit** for the cluster that you want to edit.

<sup>To Edit an Existing Cluster</sup>


The options and settings available for an existing cluster change based on the method that you used to provision it. For example, only clusters [provisioned by RKE]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) have **Cluster Options** available for editing.

@@ -63,6 +63,8 @@ When editing clusters, clusters that are [launched using RKE]({{< baseurl >}}/ra

Following an upgrade to the latest version of Rancher, you can update your existing clusters to use the latest supported version of Kubernetes. Before a new version of Rancher is released, it's tested with the latest versions of Kubernetes to ensure compatibility.

As of Rancher v2.3.0, the Kubernetes metadata feature was added, which allows you to use newer Kubernetes versions as soon as they are released, without upgrading Rancher. For details, refer to the [section on Kubernetes metadata.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/k8s-metadata)

>**Recommended:** Before upgrading Kubernetes, [back up your cluster]({{< baseurl >}}/rancher/v2.x/en/backups).

1. From the **Global** view, find the cluster for which you want to upgrade Kubernetes. Select **Vertical Ellipsis (...) > Edit**.

@@ -62,3 +62,17 @@ kubectl --context <CLUSTER_NAME>-<NODE_NAME> get nodes
# Directly referencing the location of the kubeconfig file
kubectl --kubeconfig /custom/path/kube.config --context <CLUSTER_NAME>-<NODE_NAME> get pods
```

### kube-api-auth

The `kube-api-auth` resource is deployed to provide the functionality for the Authorized Cluster Endpoint.

During cluster provisioning, the file `/etc/kubernetes/kube-api-authn-webhook.yaml` is deployed and `kube-apiserver` is configured with `--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml`. This configures the `kube-apiserver` to query `http://127.0.0.1:6440/v1/authenticate` to determine authentication for bearer tokens.
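
For reference, a webhook config file in this position follows the standard Kubernetes kubeconfig-style webhook authentication format. A sketch of the general shape (this is the generic format, not necessarily the exact file Rancher deploys):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: Default
  cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:6440/v1/authenticate   # endpoint described above
users:
- name: Default
  user:
    insecure-skip-tls-verify: true
contexts:
- name: webhook
  context:
    cluster: Default
    user: Default
current-context: webhook
```
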
The scheduling rules for `kube-api-auth` are listed below:

_Applies to v2.3.0 and higher_

| Component | nodeAffinity nodeSelectorTerms | nodeSelector | Tolerations |
| -------------------- | ------------------------------------------ | ------------ | ------------------------------------------------------------------------------ |
| kube-api-auth | `beta.kubernetes.io/os:NotIn:windows`<br/>`node-role.kubernetes.io/controlplane:In:"true"` | none | `operator:Exists` |

@@ -18,7 +18,7 @@ The following table lists which node options are available for each [type of clu
| ------------------------------------------------ | ------------------------------------------------ | ---------------- | ------------------- | ------------------- | ------------------------------------------------------------------ |
| [Cordon](#cordoning-a-node) | ✓ | ✓ | ✓ | | Marks the node as unschedulable. |
| [Drain](#draining-a-node) | ✓ | ✓ | ✓ | | Marks the node as unschedulable _and_ evicts all pods. |
| [Edit](#editing-a-node) | ✓ | ✓ | ✓ | | Enter a custom name, description, label, or taints for a node. |
| [View API](#viewing-a-node-api) | ✓ | ✓ | ✓ | | View API data. |
| [Delete](#deleting-a-node) | ✓ | ✓ | | | Deletes defective nodes from the cluster. |
| [Download Keys](#ssh-into-a-node-hosted-by-an-infrastructure-provider) | ✓ | | | | Download the SSH key in order to SSH into the node. |
@@ -51,14 +51,14 @@ The node draining options are different based on your version of Rancher.

There are two drain modes: aggressive and safe.

- **Aggressive Mode**

  In this mode, pods won't get rescheduled to a new node, even if they do not have a controller. Kubernetes expects you to have your own logic that handles the deletion of these pods.

  Kubernetes also expects the implementation to decide what to do with pods using emptyDir. If a pod uses emptyDir to store local data, you might not be able to safely delete it, since the data in the emptyDir will be deleted once the pod is removed from the node. Choosing aggressive mode will delete these pods.

- **Safe Mode**

  If a node has standalone pods or ephemeral data it will be cordoned but not drained.

### Aggressive and Safe Draining Options for Rancher Prior to v2.2.x
@@ -82,7 +82,7 @@ The following list describes each drain option:

The timeout given to each pod for cleaning things up, so they will have a chance to exit gracefully. For example, pods might need to finish any outstanding requests, roll back transactions, or save state to some external storage. If negative, the default value specified in the pod will be used.

### Timeout

The amount of time drain should continue to wait before giving up.

@@ -101,7 +101,12 @@ Once drain successfully completes, the node will be in a state of `drained`. You

## Editing a Node

Editing a node lets you change its name, add a description of the node, or add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/).
Editing a node lets you:

* Change its name
* Change its description
* Add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
* Add/Remove [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/)

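Taints added through the node editor behave like standard Kubernetes taints. For reference, a hedged `kubectl` equivalent, assuming a node named `node-1` and an illustrative `dedicated=istio` taint:

```
# Add a taint so that only pods with a matching toleration are scheduled here
kubectl taint nodes node-1 dedicated=istio:NoSchedule

# Remove the same taint (note the trailing dash)
kubectl taint nodes node-1 dedicated=istio:NoSchedule-
```
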
## Viewing a Node API
@@ -33,7 +33,7 @@ If your Kubernetes cluster is broken, you can restore the cluster from a snapsho

**Result:** The cluster will go into `updating` state and the process of restoring the `etcd` nodes from the snapshot will start. The cluster is restored when it returns to an `active` state.

> **Note:** If you are restoring a cluster with unavailable etcd nodes, it's recommended that all etcd nodes are removed from Rancher before attempting to restore. For clusters that were provisioned using [nodes hosted in an infrastructure provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/), new etcd nodes will automatically be created. For [custom clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/), please ensure that you add new etcd nodes to the cluster.

## Recovering etcd without a Snapshot

@@ -1,75 +1,87 @@

---
title: Istio
weight: 5
aliases:
- /rancher/v2.x/en/project-admin/istio/configuring-resource-allocations/_index.md
- /rancher/v2.x/en/cluster-admin/tools/istio/_index.md
- /rancher/v2.x/en/project-admin/istio/index.md
---
_Available as of v2.3.0_

_Available as of v2.3.0-alpha5_
[Istio](https://istio.io/) is an open-source tool that makes it easier for DevOps teams to observe, control, troubleshoot, and secure the traffic within a complex network of microservices.

Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications.
As a network of microservices changes and grows, the interactions between them can become more difficult to manage and understand. In such a situation, it is useful to have a service mesh as a separate infrastructure layer. Istio's service mesh lets you manipulate traffic between microservices without changing the microservices directly.

## Prerequisites
Our integration of Istio is designed so that a Rancher operator, such as an administrator or cluster owner, can deliver Istio to developers. Then developers can use Istio to enforce security policies, troubleshoot problems, or manage traffic for green/blue deployments, canary deployments, or A/B testing.

The required resource allocation for each service is listed in the [configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/istio/config/). Please review it before attempting to enable Istio.
This service mesh provides features that include but are not limited to the following:

In larger deployments, it is strongly advised that the infrastructure be placed on dedicated nodes in the cluster by adding a node selector for each Istio component.
- Traffic management features
- Enhanced monitoring and tracing
- Service discovery and routing
- Secure connections and service-to-service authentication with mutual TLS
- Load balancing
- Automatic retries, backoff, and circuit breaking

#### Default Resource Consumption
After Istio is enabled in a cluster, you can leverage Istio's control plane functionality with `kubectl`.

Workload | Container | CPU - Request | Mem - Request | CPU - Limit | Mem - Limit | Configurable
---------|-----------|---------------|---------------|-------------|-------------|-------------
istio-pilot | discovery | 500m | 2048Mi | 1000m | 4096Mi | Y
istio-telemetry | mixer | 1000m | 1024Mi | 4800m | 4096Mi | Y
istio-policy | mixer | 1000m | 1024Mi | 4800m | 4096Mi | Y
istio-tracing | jaeger | 100m | 100Mi | 500m | 1024Mi | Y
prometheus | prometheus | 750m | 750Mi | 1000m | 1024Mi | Y
grafana | grafana | 100m | 100Mi | 200m | 512Mi | Y
Others | - | 500m | 500Mi | - | - | N
Total | - | 3950m | 5546Mi | - | - | -
Rancher's Istio integration comes with comprehensive visualization aids:

## Enabling Istio
- **Trace the root cause of errors with Jaeger.** [Jaeger](https://www.jaegertracing.io/) is an open-source tool that provides a UI for a distributed tracing system, which is useful for root cause analysis and for determining what causes poor performance. Distributed tracing allows you to view an entire chain of calls, which might originate with a user request and traverse dozens of microservices.
- **Get the full picture of your microservice architecture with Kiali.** [Kiali](https://www.kiali.io/) provides a diagram that shows the services within a service mesh and how they are connected, including the traffic rates and latencies between them. You can check the health of the service mesh, or drill down to see the incoming and outgoing requests to a single component.
- **Gain insights from time series analytics with Grafana dashboards.** [Grafana](https://grafana.com/) is an analytics platform that allows you to query, visualize, alert on and understand the data gathered by Prometheus.
- **Write custom queries for time series data with the Prometheus UI.** [Prometheus](https://prometheus.io/) is a systems monitoring and alerting toolkit. Prometheus scrapes data from your cluster, which is then used by Grafana. A Prometheus UI is also integrated into Rancher, and lets you write custom queries for time series data and see the results in the UI.

As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Istio to your Kubernetes cluster.
# Prerequisites

1. From the **Global** view, navigate to the cluster that you want to configure Istio for.
Before enabling Istio, we recommend that you confirm that your Rancher worker nodes have enough [CPU and memory]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/resources) to run all of the components of Istio.

1. Select **Tools > Istio** in the navigation bar.
# Setup Guide

1. Select **Enable** to show the [Istio configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/istio/config/). Enter in your desired configuration options. Ensure you have enough resources on your worker nodes to enable Istio.
Refer to the [setup guide]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup) for instructions on how to set up Istio and use it in a project.

1. Click **Save**.
# Disabling Istio

**Result:** The Istio application, `cluster-istio`, is added as an [application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project. After the application is `active`, you can start using Istio.
To remove Istio components from a cluster, namespace, or workload, refer to the section on [disabling Istio.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/disabling-istio)

# Accessing Visualizations

## Using Istio for Metrics Visualization
> By default, only cluster owners have access to Jaeger and Kiali. For instructions on how to allow project members to access them, refer to [Access to Visualizations.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/rbac/#access-to-visualizations)

Once Istio is `active`, you can see visualizations of your Istio service mesh with Kiali, Jaeger, Grafana, and Prometheus, which are all open-source projects that Rancher has integrated with.
After Istio is set up in a cluster, Grafana, Prometheus, Jaeger, and Kiali are available in the Rancher UI.

- **Kiali** helps you define, validate, and observe your Istio service mesh. Kiali shows you what services are in your mesh and how they are connected. Kiali includes Jaeger Tracing to provide distributed tracing out of the box.
- **Jaeger** is a distributed tracing system released as open source by Uber Technologies. It is used for monitoring and troubleshooting microservices-based distributed systems.
- **Grafana** is an analytics platform that allows you to query, visualize, alert on and understand your metrics. Grafana lets you visualize data from Prometheus.
- **Prometheus** is a systems monitoring and alerting toolkit.
Your access to the visualizations depends on your role. Grafana and Prometheus are only available for cluster owners. The Kiali and Jaeger UIs are available only to cluster owners by default, but cluster owners can allow project members to access them by editing the Istio settings. When you go to your project and click **Resources > Istio,** you can go to each UI for Kiali, Jaeger, Grafana, and Prometheus by clicking their icons in the top right corner of the page.

With Istio enabled, you can:
To see the visualizations, go to the cluster where Istio is set up and click **Tools > Istio.** You should see links to each UI at the top of the page.

- Access [Kiali UI](https://www.kiali.io/) by clicking the Kiali UI icon in the Istio page.
- Access [Jaeger UI](https://www.jaegertracing.io/) by clicking the Jaeger UI icon in the Istio page.
- Access [Grafana UI](https://grafana.com/) by clicking the Grafana UI icon in the Istio page.
- Access [Prometheus UI](https://prometheus.io/) by clicking the Prometheus UI icon in the Istio page.
- Go to a project to [view traffic graph, traffic metrics and manage traffic]({{< baseurl >}}/rancher/v2.x/en/project-admin/istio/).
You can also get to the visualization tools from the project view.

## Leveraging Istio in Projects
# Viewing the Kiali Traffic Graph

After you enable Istio, you can see traffic metrics and a traffic graph on the project level. You can see a traffic graph for all namespaces that have Istio sidecar injection enabled. For more information, refer to [How to Use Istio in Your Project]({{< baseurl >}}/rancher/v2.x/en/project-admin/istio/).
1. From the project view in Rancher, click **Resources > Istio.**
1. If you are a cluster owner, you can go to the **Traffic Graph** tab. This tab has the Kiali network visualization integrated into the UI.

## Disabling Istio
# Viewing Traffic Metrics

To disable Istio:
Istio’s monitoring features provide visibility into the performance of all your services.

1. From the **Global** view, navigate to the cluster that you want to disable Istio for.
1. From the project view in Rancher, click **Resources > Istio.**
1. Go to the **Traffic Metrics** tab. After traffic is generated in your cluster, you should be able to see metrics for **Success Rate, Request Volume, 4xx Response Count, Project 5xx Response Count** and **Request Duration.** Cluster owners can see all of the metrics, while project members can see a subset of the metrics.

1. Select **Tools > Istio** in the navigation bar.
# Architecture

1. Click **Disable Istio**, then click the red button again to confirm the disable action.
Istio installs a service mesh that uses [Envoy](https://www.envoyproxy.io/learn/service-mesh) sidecar proxies to intercept traffic to each workload. These sidecars intercept and manage service-to-service communication, allowing fine-grained observation and control over traffic within the cluster.

**Result:** The `cluster-istio` application in the cluster's `system` project gets removed.
Only workloads that have the Istio sidecar injected can be tracked and controlled by Istio.

Enabling Istio in Rancher enables monitoring in the cluster, and enables Istio in all new namespaces that are created in a cluster. You need to manually enable Istio in preexisting namespaces.

When a namespace has Istio enabled, new workloads deployed in the namespace will automatically have the Istio sidecar. You need to manually enable Istio in preexisting workloads.
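
Under the hood, sidecar auto-injection is driven by the standard Istio namespace label mentioned elsewhere in this guide. A hedged `kubectl` sketch, assuming a namespace named `demo`:

```
# Label the namespace so that new pods get the Envoy sidecar injected
kubectl label namespace demo istio-injection=enabled

# Verify the label
kubectl get namespace demo --show-labels
```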

For more information on the Istio sidecar, refer to the [Istio docs](https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/).

### Two Ingresses

By default, each Rancher-provisioned cluster has one NGINX ingress controller allowing traffic into the cluster. To allow Istio to receive external traffic, you need to enable the Istio ingress gateway for the cluster. The result is that your cluster will have two ingresses.

![]()

@@ -0,0 +1,27 @@

---
title: Disabling Istio
weight: 4
---

This section describes how to disable Istio in a cluster, namespace, or workload.

# Disable Istio in a Cluster

To disable Istio,

1. From the **Global** view, navigate to the cluster that you want to disable Istio for.
1. Click **Tools > Istio.**
1. Click **Disable,** then click the red button again to confirm the disable action.

**Result:** The `cluster-istio` application in the cluster's `system` project gets removed. The Istio sidecar cannot be deployed on any workloads in the cluster.

# Disable Istio in a Namespace

1. In the Rancher UI, go to the project that has the namespace where you want to disable Istio.
1. On the **Workloads** tab, you will see a list of namespaces and the workloads deployed in them. Go to the namespace where you want to disable Istio and click the **Ellipsis (...) > Disable Istio Auto Injection.**

**Result:** When workloads are deployed in this namespace, they will not have the Istio sidecar.

# Remove the Istio Sidecar from a Workload

Disable Istio in the namespace, then redeploy the workloads in it. They will be deployed without the Istio sidecar.

@@ -0,0 +1,58 @@

---
title: Role-based Access Control
weight: 3
---

This section describes the permissions required to access Istio features and how to configure access to the Kiali and Jaeger visualizations.

# Cluster-level Access

By default, only cluster administrators can:

- Enable Istio for the cluster
- Configure resource allocations for Istio
- View each UI for Prometheus, Grafana, Kiali, and Jaeger

# Project-level Access

After Istio is enabled in a cluster, project owners and members have permission to:

- Enable and disable Istio sidecar auto-injection for namespaces
- Add the Istio sidecar to workloads
- View the traffic metrics and traffic graph for the cluster
- View the Kiali and Jaeger visualizations if cluster administrators give access to project members
- Configure Istio's resources (such as the gateway, destination rules, or virtual services) with `kubectl` (This does not apply to read-only project members)

# Access to Visualizations

By default, the Kiali and Jaeger visualizations are restricted to the cluster owner because the information in them could be sensitive.

**Jaeger** provides a UI for a distributed tracing system, which is useful for root cause analysis and for determining what causes poor performance.

**Kiali** provides a diagram that shows the services within a service mesh and how they are connected.

Rancher supports giving groups permission to access Kiali and Jaeger, but not individuals.

To configure who has permission to access the Kiali and Jaeger UI,

1. Go to the cluster view and click **Tools > Istio.**
1. Then go to the **Member Access** section. If you want to restrict access to certain groups, choose **Allow cluster owner and specified members to access Kiali and Jaeger UI.** Search for the groups that you want to have access to Kiali and Jaeger. If you want all members to have access to the tools, click **Allow all members to access Kiali and Jaeger UI.**
1. Click **Save.**

**Result:** The access levels for Kiali and Jaeger have been updated.

# Summary of Default Permissions for Istio Users

| Permission | Cluster Administrators | Project Owners | Project Members | Read-only Project Members |
|------------------------------------------|----------------|----------------|-----------------|---------------------------|
| Enable and disable Istio for the cluster | ✓ | | | |
| Configure Istio resource limits | ✓ | | | |
| Control who has access to Kiali and the Jaeger UI | ✓ | | | |
| Enable and disable Istio for a namespace | ✓ | ✓ | ✓ | |
| Enable and disable Istio on workloads | ✓ | ✓ | ✓ | |
| Configure Istio with `kubectl` | ✓ | ✓ | ✓ | |
| View Prometheus UI and Grafana UI | ✓ | | | |
| View Kiali UI and Jaeger UI ([Configurable](#access-to-visualizations)) | ✓ | | | |
| View Istio project dashboard, including traffic metrics\* | ✓ | ✓ | ✓ | ✓ |

\* By default, only the cluster owner will see the traffic graph. Project members will see only a subset of traffic metrics. Project members cannot see the traffic graph because it comes from Kiali, and access to Kiali is restricted to cluster owners by default.

@@ -1,15 +1,61 @@

---
title: Istio Configuration
title: CPU and Memory Allocations
weight: 1
aliases:
- /rancher/v2.x/en/project-admin/istio/configuring-resource-allocations/_index.md
- /rancher/v2.x/en/project-admin/istio/config/_index.md
---
_Available as of v2.3.0_

_Available as of v2.3.0-alpha5_
This section describes the minimum recommended computing resources for the Istio components in a cluster.

There are several configuration options for Istio. You can find more information about Istio configuration in the [official Istio documentation](https://istio.io/docs/concepts/what-is-istio).
The CPU and memory allocations for each component are [configurable.](#configuring-resource-allocations)

## PILOT
Before enabling Istio, we recommend that you confirm that your Rancher worker nodes have enough CPU and memory to run all of the components of Istio.

Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (e.g., A/B tests, canary rollouts, etc.), and resiliency (timeouts, retries, circuit breakers, etc.).
> **Tip:** In larger deployments, it is strongly advised that the infrastructure be placed on dedicated nodes in the cluster by adding a node selector for each Istio component.

The table below shows a summary of the minimum recommended resource requests and limits for the CPU and memory of each central Istio component.

In Kubernetes, the resource request indicates that the workload will not be deployed on a node unless the node has at least the specified amount of memory and CPU available. If the workload surpasses the limit for CPU or memory, it can be terminated or evicted from the node. For more information on managing resource limits for containers, refer to the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)

Workload | Container | CPU - Request | Mem - Request | CPU - Limit | Mem - Limit | Configurable
---------|-----------|---------------|---------------|-------------|-------------|-------------
istio-pilot | discovery | 500m | 2048Mi | 1000m | 4096Mi | Y
istio-telemetry | mixer | 1000m | 1024Mi | 4800m | 4096Mi | Y
istio-policy | mixer | 1000m | 1024Mi | 4800m | 4096Mi | Y
istio-tracing | jaeger | 100m | 100Mi | 500m | 1024Mi | Y
prometheus | prometheus | 750m | 750Mi | 1000m | 1024Mi | Y
grafana | grafana | 100m | 100Mi | 200m | 512Mi | Y
Others | - | 500m | 500Mi | - | - | N
**Total** | **-** | **3950m** | **5546Mi** | **>12300m** | **>14848Mi** | **-**

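As a reminder of how these values map onto Kubernetes objects, here is a hedged sketch of a container spec carrying the istio-pilot figures from the table above (illustrative only; Rancher generates the actual manifests):

```yaml
# Illustrative container resources matching the istio-pilot row above
containers:
- name: discovery
  resources:
    requests:
      cpu: 500m
      memory: 2048Mi
    limits:
      cpu: 1000m
      memory: 4096Mi
```
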
# Configuring Resource Allocations

You can individually configure the resource allocation for each type of Istio component. This section includes the default resource allocations for each component.

To make it easier to schedule the workloads to a node, a cluster administrator can reduce the CPU and memory resource requests for the component. However, the default CPU and memory allocations are the minimum that we recommend.

You can find more information about Istio configuration in the [official Istio documentation](https://istio.io/docs/concepts/what-is-istio).

To configure the resources allocated to an Istio component,

1. In Rancher, go to the cluster where you have Istio installed.
1. Click **Tools > Istio.** This opens the Istio configuration page.
1. Change the CPU or memory allocations, the nodes where each component will be scheduled to, or the node tolerations.
1. Click **Save.**

**Result:** The resource allocations for the Istio components are updated.

## Pilot

[Pilot](https://istio.io/docs/concepts/what-is-istio/#pilot) provides the following:

- Authentication configuration
- Service discovery for the Envoy sidecars
- Traffic management capabilities for intelligent routing (A/B tests and canary rollouts)
- Configuration for resiliency (timeouts, retries, circuit breakers, etc.)

For more information on Pilot, refer to the [documentation](https://istio.io/docs/concepts/traffic-management/#pilot-and-envoy).

@@ -22,9 +68,11 @@ Pilot Memory Reservation | Memory resource requests for the istio-pilot pod. | Y

Trace sampling Percentage | [Trace sampling percentage](https://istio.io/docs/tasks/telemetry/distributed-tracing/overview/#trace-sampling) | Yes | 1
Pilot Selector | Ability to select the nodes that the istio-pilot pod is deployed to. To use this option, the nodes must have labels. | No | n/a

## MIXER
## Mixer

Mixer is a platform-independent component. Mixer enforces access control and usage policies across the service mesh, and collects telemetry data from the Envoy proxy and other services. For more information on Mixer, policies and telemetry, refer to the [documentation](https://istio.io/docs/concepts/policies-and-telemetry/).
[Mixer](https://istio.io/docs/concepts/what-is-istio/#mixer) enforces access control and usage policies across the service mesh. It also integrates with plugins for monitoring tools such as Prometheus. The Envoy sidecar proxy passes telemetry data and monitoring data to Mixer, and Mixer passes the monitoring data to Prometheus.

For more information on Mixer, policies and telemetry, refer to the [documentation](https://istio.io/docs/concepts/policies-and-telemetry/).

Option | Description | Required | Default
-------|------------|-------|-------

@@ -39,9 +87,9 @@ Mixer Policy Memory Limit | Memory resource limit for the istio-policy pod. | Ye

Mixer Policy Memory Reservation | Memory resource requests for the istio-policy pod. | Yes, when policy enabled | 1024
Mixer Selector | Ability to select the nodes that the istio-policy and istio-telemetry pods are deployed to. To use this option, the nodes must have labels. | No | n/a

## TRACING
## Tracing

Istio-enabled applications can collect trace spans. For more information on distributed tracing with Istio, refer to the [documentation](https://istio.io/docs/tasks/telemetry/distributed-tracing/overview/).
[Distributed tracing](https://istio.io/docs/tasks/telemetry/distributed-tracing/overview/) enables users to track a request through a service mesh. This makes it easier to troubleshoot problems with latency, parallelism and serialization.

Option | Description | Required | Default
-------|------------|-------|-------

@@ -52,9 +100,11 @@ Tracing Memory Limit | Memory resource limit for the istio-tracing pod. | Yes

Tracing Memory Reservation | Memory resource requests for the istio-tracing pod. | Yes | 100
Tracing Selector | Ability to select the nodes that the tracing pod is deployed to. To use this option, the nodes must have labels. | No | n/a

## INGRESS GATEWAY
## Ingress Gateway

The Istio Gateway allows Istio features such as monitoring and route rules to be applied to traffic entering the cluster. For more information, refer to the [documentation](https://istio.io/docs/tasks/traffic-management/ingress/).
The Istio gateway allows Istio features such as monitoring and route rules to be applied to traffic entering the cluster. This gateway is a prerequisite for outside traffic to make requests to Istio.

For more information, refer to the [documentation](https://istio.io/docs/tasks/traffic-management/ingress/).

Option | Description | Required | Default
-------|------------|-------|-------

@@ -70,7 +120,7 @@ Ingress Gateway Memory Limit | Memory resource limit for the istio-ingressgatewa

Ingress Gateway Memory Reservation | Memory resource requests for the istio-ingressgateway pod. | Yes | 128
Ingress Gateway Selector | Ability to select the nodes that the istio-ingressgateway pod is deployed to. To use this option, the nodes must have labels. | No | n/a

## PROMETHEUS
## Prometheus

You can query for Istio metrics using Prometheus. Prometheus is an open-source systems monitoring and alerting toolkit.

@@ -83,9 +133,9 @@ Prometheus Memory Reservation | Memory resource requests for the Prometheus pod.

Retention for Prometheus | How long your Prometheus instance retains data | Yes | 6
Prometheus Selector | Ability to select the nodes that the Prometheus pod is deployed to. To use this option, the nodes must have labels. | No | n/a

## GRAFANA
## Grafana

You can visualize metrics with Grafana. Grafana is a tool that lets you visualize Istio traffic data.
You can visualize metrics with Grafana. Grafana lets you visualize Istio traffic data scraped by Prometheus.

Option | Description | Required | Default
-------|------------|-------|-------

@@ -99,6 +149,4 @@ Enable Persistent Storage for Grafana | Enable Persistent Storage for Grafana |

Source | Use a Storage Class to provision a new persistent volume or use an existing persistent volume claim | Yes, when Grafana enabled and enabled PV | Use SC
Storage Class | Storage Class for provisioning PV for Grafana | Yes, when Grafana enabled, enabled PV and use storage class | Use the default class
Persistent Volume Size | The size for the PV you would like to provision for Grafana | Yes, when Grafana enabled, enabled PV and use storage class | 5Gi
Existing Claim | Use existing PVC for Grafna | Yes, when Grafana enabled, enabled PV and use existing PVC | n/a

Existing Claim | Use existing PVC for Grafana | Yes, when Grafana enabled, enabled PV and use existing PVC | n/a

@@ -0,0 +1,28 @@

---
title: Setup Guide
weight: 2
---

This section describes how to enable Istio and start using it in your projects.

This section assumes that you have Rancher installed, and you have a Rancher-provisioned Kubernetes cluster where you would like to set up Istio.

If you use Istio for traffic management, you will need to allow external traffic to the cluster. In that case, you will need to follow all of the steps below.

> **Quick Setup** If you don't need external traffic to reach Istio, and you just want to set up Istio for monitoring and tracing traffic within the cluster, skip the steps for [setting up the Istio gateway]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/gateway) and [setting up Istio's components for traffic management.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/set-up-traffic-management)

1. [Enable Istio in the cluster.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster)
1. [Enable Istio in all the namespaces where you want to use it.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-namespace)
1. [Select the nodes where the main Istio components will be deployed.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/node-selectors)
1. [Add deployments and services that have the Istio sidecar injected.](#deploy-workloads-in-the-cluster)
1. [Set up the Istio gateway.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/gateway)
1. [Set up Istio's components for traffic management.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/set-up-traffic-management)
1. [Generate traffic and see Istio in action.](#generate-traffic-and-see-istio-in-action)

# Prerequisites

This guide assumes you have already [installed Rancher,]({{<baseurl>}}/rancher/v2.x/en/installation) and you have already [provisioned a separate Kubernetes cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning) on which you will install Istio.

The nodes in your cluster must meet the [CPU and memory requirements.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/istio/#cpu-and-memory-requirements)

The workloads and services that you want to be controlled by Istio must meet [Istio's requirements.](https://istio.io/docs/setup/additional-setup/requirements/)

@@ -0,0 +1,321 @@

---
title: 4. Add Deployments and Services with the Istio Sidecar
weight: 4
---

> **Prerequisite:** To enable Istio for a workload, the cluster and namespace must have Istio enabled.

Enabling Istio in a namespace only enables automatic sidecar injection for new workloads. To enable the Envoy sidecar for existing workloads, you need to enable it manually for each workload.

To inject the Istio sidecar on an existing workload in the namespace, go to the workload, click the **Ellipsis (...),** and click **Redeploy.** When the workload is redeployed, it will have the Envoy sidecar automatically injected.

Wait a few minutes for the workload to upgrade and show the Istio sidecar. Click the workload and go to the **Containers** section. You should see `istio-init` and `istio-proxy` containers alongside your original workload. This means the Istio sidecar is enabled for the workload. Istio does all the wiring for the Envoy sidecar, so its features work automatically once you enable them in the YAML.

### 3. Add Deployments and Services

Next, we add the Kubernetes resources for the sample deployments and services for the BookInfo app in Istio's documentation.

1. Go to the cluster view and click **Import YAML.**
1. Copy the resources below into the form.
1. Click **Import.**

This will set up the following sample resources from Istio's example BookInfo app:

Details service and deployment:

- A `details` Service
- A ServiceAccount for `bookinfo-details`
- A `details-v1` Deployment

Ratings service and deployment:

- A `ratings` Service
- A ServiceAccount for `bookinfo-ratings`
- A `ratings-v1` Deployment

Reviews service and deployments (three versions):

- A `reviews` Service
- A ServiceAccount for `bookinfo-reviews`
- A `reviews-v1` Deployment
- A `reviews-v2` Deployment
- A `reviews-v3` Deployment

Productpage service and deployment:

This is the main page of the app, which will be visible from a web browser. The other services will be called from this page.

- A `productpage` Service
- A ServiceAccount for `bookinfo-productpage`
- A `productpage-v1` Deployment

### Resource YAML

```yaml
# Copyright 2017 Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
    service: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-details
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1
  labels:
    app: details
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      serviceAccountName: bookinfo-details
      containers:
      - name: details
        image: docker.io/istio/examples-bookinfo-details-v1:1.15.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  labels:
    app: ratings
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      serviceAccountName: bookinfo-ratings
      containers:
      - name: ratings
        image: docker.io/istio/examples-bookinfo-ratings-v1:1.15.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
  labels:
    app: reviews
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v1:1.15.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v2:1.15.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v3:1.15.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
      - name: productpage
        image: docker.io/istio/examples-bookinfo-productpage-v1:1.15.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
```

### [Next: Set up the Istio Gateway]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/gateway)

@@ -0,0 +1,22 @@

---
title: 1. Enable Istio in the Cluster
weight: 1
---

This cluster uses the default Nginx controller to allow traffic into the cluster.

A Rancher [administrator]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) can configure Rancher to deploy Istio in a Kubernetes cluster.

1. From the **Global** view, navigate to the cluster where you want to enable Istio.
1. Click **Tools > Istio.**
1. Optional: Configure member access and [resource limits]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/config/) for the Istio components. Ensure you have enough resources on your worker nodes to enable Istio.
1. Click **Enable**.
1. Click **Save**.

**Result:** Istio is enabled at the cluster level.

The Istio application, `cluster-istio`, is added as an [application]({{<baseurl>}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project.

When Istio is enabled in the cluster, the label for Istio sidecar auto injection, `istio-injection=enabled`, will be automatically added to each new namespace in this cluster. This automatically enables Istio sidecar injection in all new workloads that are deployed in those namespaces. You will need to manually enable Istio in preexisting namespaces and workloads.

### [Next: Enable Istio in a Namespace]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-namespace)

@@ -0,0 +1,24 @@

---
title: 2. Enable Istio in a Namespace
weight: 2
---

You will need to manually enable Istio in each namespace that you want to be tracked or controlled by Istio. When Istio is enabled in a namespace, the Envoy sidecar proxy will be automatically injected into all new workloads that are deployed in the namespace.

This namespace setting will only affect new workloads in the namespace. Any preexisting workloads will need to be re-deployed to leverage the sidecar auto injection.

> **Prerequisite:** To enable Istio in a namespace, the cluster must have Istio enabled.

1. In the Rancher UI, go to the cluster view. Click the **Projects/Namespaces** tab.
1. Go to the namespace where you want to enable the Istio sidecar auto injection and click the **Ellipsis (...).**
1. Click **Edit.**
1. In the **Istio sidecar auto injection** section, click **Enable.**
1. Click **Save.**

**Result:** The namespace now has the label `istio-injection=enabled`. All new workloads deployed in this namespace will have the Istio sidecar injected by default.

### Verifying that Automatic Istio Sidecar Injection is Enabled

To verify that Istio is enabled, deploy a hello-world workload in the namespace. Go to the workload and click the pod name. In the **Containers** section, you should see the `istio-proxy` container.

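The same check can be done from the command line. A hedged sketch, assuming a namespace named `demo` and a pod named `hello-world-xxxxx` (substitute your own names):

```
# Confirm the namespace carries the injection label
kubectl get namespace demo --show-labels

# List the containers in the pod; istio-proxy should appear
kubectl get pod hello-world-xxxxx -n demo -o jsonpath='{.spec.containers[*].name}'
```
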
### [Next: Set up Taints and Tolerations]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/node-selectors)
@@ -0,0 +1,103 @@

---
title: 5. Set up the Istio Gateway
weight: 5
---

The gateway to each cluster can have its own port or load balancer, which is unrelated to a service mesh. By default, each Rancher-provisioned cluster has one NGINX ingress controller allowing traffic into the cluster.

You can use the NGINX ingress controller with or without Istio installed. If this is the only gateway to your cluster, Istio will be able to route traffic from service to service, but Istio will not be able to receive traffic from outside the cluster.

To allow Istio to receive external traffic, you need to enable Istio's gateway, which works as a north-south proxy for external traffic. When you enable the Istio gateway, the result is that your cluster will have two ingresses.

You will also need to set up a Kubernetes gateway for your services. This Kubernetes resource points to Istio's implementation of the ingress gateway to the cluster.

You can route traffic into the service mesh with a load balancer or just Istio's NodePort gateway. This section describes how to set up the NodePort gateway.

For more information on the Istio gateway, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/gateway/)

![]()

# Enable the Istio Gateway

The ingress gateway is a Kubernetes service that will be deployed in your cluster. There is only one Istio gateway per cluster.

1. Go to the cluster where you want to allow outside traffic into Istio.
1. Click **Tools > Istio.**
1. Expand the **Ingress Gateway** section.
1. Under **Enable Ingress Gateway,** click **True.** The default type of service for the Istio gateway is NodePort. You can also configure it as a [load balancer.]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/)
1. Optionally, configure the ports, service types, node selectors and tolerations, and resource requests and limits for this service. The default resource requests for CPU and memory are the minimum recommended resources.
1. Click **Save.**

**Result:** The gateway is deployed, which allows Istio to receive traffic from outside the cluster.

# Add a Kubernetes Gateway that Points to the Istio Gateway

To allow traffic to reach your services, you will also need to provide a Kubernetes gateway resource in your YAML that points to Istio's implementation of the ingress gateway to the cluster.

1. Go to the namespace where you want to deploy the Kubernetes gateway and click **Import YAML.**
1. Upload the gateway YAML as a file or paste it into the form. An example gateway YAML is provided below.
1. Click **Import.**

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```

**Result:** You have configured your gateway resource so that Istio can receive traffic from outside the cluster.

Confirm that the resource exists by running:
```
kubectl get gateway
```

The result should be something like this:
```
NAME               AGE
bookinfo-gateway   64m
```

### Access the ProductPage Service from a Web Browser

To test and see if the BookInfo app deployed correctly, the app can be viewed in a web browser using the Istio controller IP and port, combined with the request name specified in your Kubernetes gateway resource:

`http://<IP of Istio controller>:<Port of istio controller>/productpage`

To get the ingress gateway URL and port,

1. Go to the `System` project in your cluster.
1. Within the `System` project, go to the namespace `istio-system`.
1. Within `istio-system`, there is a workload named `istio-ingressgateway`. Under the name of this workload, you should see links, such as `80/tcp`.
1. Click one of those links. This should show you the URL of the ingress gateway in your web browser. Append `/productpage` to the URL.

**Result:** You should see the BookInfo app in the web browser.

For help inspecting the Istio controller URL and ports, try the commands in the [Istio documentation.](https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports)

# Troubleshooting

The [official Istio documentation](https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/#troubleshooting) suggests `kubectl` commands to inspect the correct ingress host and ingress port for external requests.

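For a NodePort gateway, a hedged sketch of such commands (assuming the gateway service lives in the `istio-system` namespace, as in the steps above, and that its HTTP port is named `http2` as in upstream Istio):

```
# Find the node port that the ingress gateway's http2 port is mapped to
kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'

# Use any node's external IP as the ingress host
kubectl get nodes -o wide
```
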
### Confirming that the Kubernetes Gateway Matches Istio's Ingress Controller

You can try the steps in this section to make sure the Kubernetes gateway is configured properly.

In the gateway resource, the selector refers to Istio's default ingress controller by its label, in which the key of the label is `istio` and the value is `ingressgateway`. To make sure the label is appropriate for the gateway, do the following:

1. Go to the `System` project in your cluster.
1. Within the `System` project, go to the namespace `istio-system`.
1. Within `istio-system`, there is a workload named `istio-ingressgateway`.
1. Click the name of this workload and go to the **Labels and Annotations** section. You should see that it has the key `istio` and the value `ingressgateway`. This confirms that the selector in the Gateway resource matches Istio's default ingress controller.

### [Next: Set up Istio's Components for Traffic Management]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/set-up-traffic-management)

@@ -0,0 +1,38 @@

---
title: 3. Select the Nodes Where Istio Components Will be Deployed
weight: 3
---

> **Prerequisite:** Your cluster needs a worker node that can be designated for Istio. The worker node should meet the [resource requirements.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/resources)

This section describes how to use node selectors to configure Istio components to be deployed on a designated node.

In larger deployments, it is strongly advised that Istio's infrastructure be placed on dedicated nodes in the cluster by adding a node selector for each Istio component.

# Adding a Label to the Istio Node

First, add a label to the node where Istio components should be deployed. This label can have any key-value pair. For this example, we will use the key `istio` and the value `enabled`.

1. From the cluster view, go to the **Nodes** tab.
1. Go to a worker node that will host the Istio components and click **Ellipsis (...) > Edit.**
1. Expand the **Labels & Annotations** section.
1. Click **Add Label.**
1. In the fields that appear, enter `istio` for the key and `enabled` for the value.
1. Click **Save.**

**Result:** A worker node has the label that will allow you to designate it for Istio components.

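Equivalently, the label can be applied from the command line. A hedged sketch, assuming a worker node named `worker-1`:

```
# Apply the example key-value pair used above
kubectl label node worker-1 istio=enabled

# Confirm the label is present
kubectl get node worker-1 --show-labels
```
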
# Configuring Istio Components to Use the Labeled Node

Configure each Istio component to be deployed to the node with the Istio label. Each Istio component can be configured individually, but in this tutorial, we will configure all of the components to be scheduled on the same node for the sake of simplicity.

For larger deployments, it is recommended to schedule each component of Istio onto separate nodes.

1. From the cluster view, click **Tools > Istio.**
1. Expand the **Pilot** section and click **Add Selector** in the form that appears. Enter the node selector label that you added to the Istio node. In our case, we are using the key `istio` and the value `enabled`.
1. Repeat the previous step for the **Mixer** and **Tracing** sections.
1. Click **Save.**

**Result:** The Istio components will be deployed on the Istio node.

### [Next: Add Deployments and Services]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/deploy-workloads)

@@ -0,0 +1,61 @@

---
title: 6. Set up Istio's Components for Traffic Management
weight: 6
---

A central advantage of traffic management in Istio is that it allows dynamic request routing, which is useful for canary deployments or blue/green deployments. The two key resources in Istio traffic management are virtual services and destination rules.

- [Virtual services](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/) intercept and direct traffic to your Kubernetes services, allowing you to divide percentages of traffic from a request to different services. You can use them to define a set of routing rules to apply when a host is addressed.
- [Destination rules](https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule/) serve as the single source of truth about which service versions are available to receive traffic from virtual services. You can use these resources to define policies that apply to traffic that is intended for a service after routing has occurred.

This section describes how to add an example virtual service that corresponds to the `reviews` microservice in the sample BookInfo app. The purpose of this service is to divide traffic between two versions of the `reviews` service.

In this example, we take the traffic to the `reviews` service and intercept it so that 50 percent of it goes to `v1` of the service and 50 percent goes to `v3`.

After this virtual service is deployed, we will generate traffic and see from the Kiali visualization that traffic is being routed evenly between the two versions of the service.

To deploy the virtual service and destination rules for the `reviews` service,

1. Go to the cluster view and click **Import YAML.**
1. Copy the resources below into the form.
1. Click **Import.**

**Result:** When you generate traffic to this service (for example, by refreshing the ingress gateway URL), the Kiali traffic graph will reflect that traffic to the `reviews` service is divided evenly between `v1` and `v3`.
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```

### [Next: Generate and View Traffic]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/setup/view-traffic)

@@ -0,0 +1,26 @@

---
title: 7. Generate and View Traffic
weight: 7
---

This section describes how to view the traffic that is being managed by Istio.

# The Kiali Traffic Graph

Rancher integrates a Kiali graph into the Rancher UI. The Kiali graph provides a powerful way to visualize the topology of your Istio service mesh. It shows you which services communicate with each other.

To see the traffic graph,

1. From the project view in Rancher, click **Resources > Istio.**
1. Go to the **Traffic Graph** tab. This tab has the Kiali network visualization integrated into the UI.

If you refresh the URL to the BookInfo app several times, you should be able to see green arrows on the Kiali graph showing traffic to `v1` and `v3` of the `reviews` service. The control panel on the right side of the graph lets you configure details including how many minutes of the most recent traffic should be shown on the graph.

For additional tools and visualizations, you can go to each UI for Kiali, Jaeger, Grafana, and Prometheus by clicking their icons in the top right corner of the page.

# Viewing Traffic Metrics

Istio’s monitoring features provide visibility into the performance of all your services.

1. From the project view in Rancher, click **Resources > Istio.**
1. Go to the **Traffic Metrics** tab. After traffic is generated in your cluster, you should be able to see metrics for **Success Rate, Request Volume, 4xx Response Count, Project 5xx Response Count** and **Request Duration.**


@@ -49,20 +49,15 @@ Each storage class contains the fields `provisioner`, `parameters`, and `reclaim

The `provisioner` determines which volume plugin is used to provision the persistent volumes.

{{% accordion id="provisioners" label="Supported Storage Class Provisioners" %}}
{{% accordion id="provisioners" label="Enabled Storage Class Provisioners" %}}
- Amazon EBS Disk
- AzureFile
- AzureDisk
- Ceph RBD
- Gluster Volume
- Google Persistent Disk
- Longhorn
- Openstack Cinder Volume
- Portworx Volume
- Quobyte Volume
- ScaleIO Volume
- StorageOS
- Vmware vSphere Volume
- Local

{{% /accordion %}}
<br/>
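
To illustrate how the three fields fit together, here is a sketch of a storage class that uses the Amazon EBS provisioner. The class name and the `type` parameter are example values; the parameters you set depend on the provisioner you choose:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2                       # example name
provisioner: kubernetes.io/aws-ebs    # volume plugin that provisions the volumes
parameters:
  type: gp2                           # provisioner-specific parameter
reclaimPolicy: Delete                 # delete the volume when the claim is released
```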

@@ -3,6 +3,8 @@ title: Custom Cluster
weight: 2210
---

When you create a custom cluster, Rancher uses RKE (the Rancher Kubernetes Engine) to provision the Kubernetes cluster on your existing infrastructure.

If you don't want to host your Kubernetes cluster in a [hosted Kubernetes provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters) or provision nodes through Rancher, you can use the _custom cluster_ option to create a Kubernetes cluster on on-premise bare-metal servers, on-premise virtual machines, or on _any_ node hosted by an infrastructure provider.

In this scenario, you'll bring the nodes yourself, and then configure them to meet Rancher's [requirements]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#requirements). Then, use the [Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/) install option to setup your cluster.
In this scenario, you'll bring the nodes yourself, and then configure them to meet Rancher's [requirements]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#requirements). Then, use the [Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/) install option to set up your cluster.

@@ -16,7 +16,8 @@ Rancher deploys an agent on each node to communicate with the node. This pages d
| `--token` | `CATTLE_TOKEN` | Token that is needed to register the node in Rancher |
| `--ca-checksum` | `CATTLE_CA_CHECKSUM` | The SHA256 checksum of the configured Rancher `cacerts` setting to validate |
| `--node-name` | `CATTLE_NODE_NAME` | Override the hostname that is used to register the node (defaults to `hostname -s`) |
| `--label` | `CATTLE_NODE_LABEL` | Add node labels to the node (`--label key=value`) |
| `--label` | `CATTLE_NODE_LABEL` | Add node labels to the node. For multiple labels, pass additional `--label` options. (`--label key=value`) |
| `--taints` | `CATTLE_NODE_TAINTS` | Add node taints to the node. For multiple taints, pass additional `--taints` options. (`--taints key=value:effect`) |
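
As an illustration of how these options are used, the sketch below appends labels and a taint to the node registration command that the Rancher UI generates. The server URL, token, and image tag are placeholders from your own setup, and the label and taint values are purely illustrative:

```bash
# Sketch: a node registration command with multiple labels and a taint appended.
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.3.0 \
  --server https://<RANCHER_SERVER> --token <REGISTRATION_TOKEN> \
  --worker \
  --label region=us-east --label env=test \
  --taints dedicated=gpu:NoSchedule
```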

## Role options

@@ -0,0 +1,38 @@
---
title: Rancher agents
weight: 2400
---

There are two different agent resources deployed on Rancher managed clusters:

- [cattle-cluster-agent](#cattle-cluster-agent)
- [cattle-node-agent](#cattle-node-agent)

### cattle-cluster-agent

The `cattle-cluster-agent` is used to connect to the Kubernetes API of [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters. The `cattle-cluster-agent` is deployed using a Deployment resource.

### cattle-node-agent

The `cattle-node-agent` is used to interact with nodes in a [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) cluster when performing cluster operations. Examples of cluster operations are upgrading the Kubernetes version and creating or restoring etcd snapshots. The `cattle-node-agent` is deployed using a DaemonSet resource to make sure it runs on every node. The `cattle-node-agent` is used as a fallback option to connect to the Kubernetes API of [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters when the `cattle-cluster-agent` is unavailable.

> **Note:** In Rancher v2.2.4 and lower, the `cattle-node-agent` pods did not tolerate all taints, causing Kubernetes upgrades to fail on these nodes. The fix for this has been included in Rancher v2.2.5 and higher.
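
To see these resources on a downstream cluster, you can query them with `kubectl`. A sketch, assuming direct `kubectl` access to the cluster; the `cattle-system` namespace and the `app` label selector are assumptions based on the default agent manifests:

```bash
# The cluster agent is a Deployment, the node agent a DaemonSet:
kubectl -n cattle-system get deployment cattle-cluster-agent
kubectl -n cattle-system get daemonset cattle-node-agent

# Tail the cluster agent logs when troubleshooting connectivity to Rancher:
kubectl -n cattle-system logs -l app=cattle-cluster-agent --tail=20
```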

### Scheduling rules

_Applies to v2.3.0 and higher_

| Component              | nodeAffinity nodeSelectorTerms        | nodeSelector | Tolerations       |
| ---------------------- | ------------------------------------- | ------------ | ----------------- |
| `cattle-cluster-agent` | `beta.kubernetes.io/os:NotIn:windows` | none         | `operator:Exists` |
| `cattle-node-agent`    | `beta.kubernetes.io/os:NotIn:windows` | none         | `operator:Exists` |

The `cattle-cluster-agent` Deployment has preferred scheduling rules using `preferredDuringSchedulingIgnoredDuringExecution`, favoring scheduling on nodes with the `controlplane` role. See [Kubernetes: Assigning Pods to Nodes](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/) for more information about scheduling rules.

The `preferredDuringSchedulingIgnoredDuringExecution` configuration is shown in the table below:

| Weight | Expression                                       |
| ------ | ------------------------------------------------ |
| 100    | `node-role.kubernetes.io/controlplane:In:"true"` |
| 1      | `node-role.kubernetes.io/etcd:In:"true"`         |
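
Expressed as a pod spec fragment, that preference corresponds roughly to the following affinity block. This is a sketch for illustration, not the exact manifest Rancher generates:

```
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100                     # strongly prefer controlplane nodes
      preference:
        matchExpressions:
        - key: node-role.kubernetes.io/controlplane
          operator: In
          values: ["true"]
    - weight: 1                       # weakly prefer etcd nodes
      preference:
        matchExpressions:
        - key: node-role.kubernetes.io/etcd
          operator: In
          values: ["true"]
```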

@@ -6,6 +6,8 @@ aliases:
- /rancher/v2.x/en/tasks/clusters/creating-a-cluster/create-cluster-custom/
---

When you create a custom cluster, Rancher uses RKE (the Rancher Kubernetes Engine) to provision the Kubernetes cluster on your existing infrastructure. This section describes how to set up a custom cluster.

## Custom Nodes

To use this option, you'll need access to the servers you intend to use in your Kubernetes cluster. Provision each server according to the Rancher [requirements](#requirements), which include some hardware specifications and Docker. After you install Docker on each server, run the command provided in the Rancher UI to turn each server into a Kubernetes node.

@@ -55,7 +57,7 @@ Each node in your cluster must meet our [Requirements]({{< baseurl >}}/rancher/v

5. {{< step_create-cluster_cluster-options >}}

>**Using Windows nodes as Kubernetes workers?**
>
>- See [Enable the Windows Support Option]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/#enable-the-windows-support-option).
>- The only Network Provider available for clusters with Windows support is Flannel. See [Networking Option]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/#networking-option).
@@ -68,10 +70,7 @@ Each node in your cluster must meet our [Requirements]({{< baseurl >}}/rancher/v
>- Using Windows nodes as Kubernetes workers? See [Node Configuration]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/#node-configuration).
>- Bare-Metal Server Reminder: If you plan on dedicating bare-metal servers to each role, you must provision a bare-metal server for each role (i.e. provision multiple bare-metal servers).

8. <a id="step-8"></a>**Optional**: Click **Show advanced options** to specify IP address(es) to use when registering the node, override the hostname of the node, or to add labels to the node.

   [Rancher Agent Options]({{< baseurl >}}/rancher/v2.x/en/admin-settings/agent-options/)<br/>
   [Kubernetes Documentation: Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
8. <a id="step-8"></a>**Optional**: Click **[Show advanced options]({{< baseurl >}}/rancher/v2.x/en/admin-settings/agent-options/)** to specify IP address(es) to use when registering the node, override the hostname of the node, or to add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) to the node.

9. Copy the command displayed on screen to your clipboard.

@@ -5,19 +5,81 @@ aliases:
- /rancher/v2.x/en/concepts/global-configuration/node-templates/
---

## Node Pools
# Node Templates

Using Rancher, you can create pools of nodes based on a [node template](#node-templates). The benefit of using a node pool is that if a node loses connectivity with the cluster, Rancher will automatically create another node to join the cluster to ensure that the count of the node pool is as expected.
A node template is the saved configuration for the parameters to use when provisioning nodes in a specific cloud provider. These nodes can be launched from the UI. Rancher uses [Docker Machine](https://docs.docker.com/machine/) to provision these nodes. The available cloud providers to create node templates are based on the active node drivers in Rancher.

Each node pool is assigned with a [node component]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#node-components) to specify how these nodes should be configured for the Kubernetes cluster.
After you create a node template in Rancher, it's saved so that you can use this template again to create node pools. Node templates are bound to your login. After you add a template, you can remove them from your user profile.

## Node Templates
### Node Labels

A node template is the saved configuration for the parameters to use when provisioning nodes in a specific cloud provider. Rancher provides a nice UI to be able to launch these nodes and uses [Docker Machine](https://docs.docker.com/machine/) to provision these nodes. The available cloud providers to create node templates are based on the active node drivers in Rancher.
You can add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) on each node template, so that any nodes created from the node template will automatically have these labels on them.

After you create a node template in Rancher, it's saved so that you can use this template again to create other node pools. Node templates are bound to your login. After you add a template, you can remove them from your user profile.
### Node Taints

## Cloud Credentials
_Available as of Rancher v2.3.0_

You can add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on each node template, so that any nodes created from the node template will automatically have these taints on them.

Since taints can be added at a node template and a node pool, if there is no conflict with the same key and effect of the taints, all taints will be added to the nodes. If there are taints with the same key and different effect, the taints from the node pool will override the taints from the node template.

# Node Pools

Using Rancher, you can create pools of nodes based on a [node template](#node-templates). The benefit of using a node pool is that if a node is destroyed or deleted, you can increase the number of live nodes to compensate for the node that was lost. The node pool helps you ensure that the count of the node pool is as expected.

Each node pool is assigned with a [node component]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) to specify how these nodes should be configured for the Kubernetes cluster.

### Node Pool Taints

_Available as of Rancher v2.3.0_

If you haven't defined [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on your node template, you can add taints for each node pool. The benefit of adding taints at the node pool level rather than the node template level is that you can swap out node templates without worrying about whether the taint is on the node template.

Each taint will automatically be added to any node created in the node pool. Therefore, if you add taints to a node pool that has existing nodes, the taints won't apply to existing nodes in the node pool, but any new node added into the node pool will get the taint.

When there are taints on the node pool and node template, if there is no conflict with the same key and effect of the taints, all taints will be added to the nodes. If there are taints with the same key and different effect, the taints from the node pool will override the taints from the node template.
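
As an illustration of these merge rules, consider the following hypothetical taints:

```
# Hypothetical taints, to illustrate the merge rules above:
#
# node template: dedicated=gpu:NoSchedule
# node pool:     dedicated=gpu:NoExecute    <- same key, different effect
# result:        dedicated=gpu:NoExecute    (the node pool taint wins)
#
# node template: dedicated=gpu:NoSchedule
# node pool:     team=ml:NoSchedule         <- different key, no conflict
# result:        both taints are applied to the node
```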

### Node Auto-replace

_Available as of Rancher v2.3.0_

If a node is in a node pool, Rancher can automatically replace unreachable nodes. Rancher will use the existing node template for the given node pool to recreate the node if it becomes inactive for a specified number of minutes.

{{% accordion id="how-does-node-auto-replace-work" label="How does Node Auto-replace Work?" %}}
Node auto-replace works on top of the Kubernetes node controller. The node controller periodically checks the status of all the nodes (configurable via the `--node-monitor-period` flag of the `kube-controller`). When a node is unreachable, the node controller will taint that node. When this occurs, Rancher will begin its deletion countdown. You can configure the amount of time Rancher waits to delete the node. If the taint is not removed before the deletion countdown ends, Rancher will proceed to delete the node object. Rancher will then provision a node in accordance with the set quantity of the node pool.
{{% /accordion %}}
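
If you want to watch this mechanism in action, you can inspect node taints with `kubectl`. A sketch; the unreachable taint shown in the comment is the one the node controller applies:

```bash
# List each node with its taints; an unreachable node carries the
# node.kubernetes.io/unreachable taint set by the node controller.
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
```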

### Enabling Node Auto-replace

When you create the node pool, you can specify the amount of time in minutes that Rancher will wait to replace an unresponsive node.

1. In the form for creating a cluster, go to the **Node Pools** section.
1. Go to the node pool where you want to enable node auto-replace. In the **Recreate Unreachable After** field, enter the number of minutes that Rancher should wait for a node to respond before replacing the node.
1. Fill out the rest of the form for creating a cluster.

**Result:** Node auto-replace is enabled for the node pool.

You can also enable node auto-replace after the cluster is created with the following steps:

1. From the Global view, click the Clusters tab.
1. Go to the cluster where you want to enable node auto-replace, click the vertical ellipsis **(…)**, and click **Edit.**
1. In the **Node Pools** section, go to the node pool where you want to enable node auto-replace. In the **Recreate Unreachable After** field, enter the number of minutes that Rancher should wait for a node to respond before replacing the node.
1. Click **Save.**

**Result:** Node auto-replace is enabled for the node pool.

### Disabling Node Auto-replace

You can disable node auto-replace from the Rancher UI with the following steps:

1. From the Global view, click the Clusters tab.
1. Go to the cluster where you want to disable node auto-replace, click the vertical ellipsis **(…)**, and click **Edit.**
1. In the **Node Pools** section, go to the node pool where you want to disable node auto-replace. In the **Recreate Unreachable After** field, enter 0.
1. Click **Save.**

**Result:** Node auto-replace is disabled for the node pool.

# Cloud Credentials

_Available as of v2.2.0_

@@ -33,6 +95,6 @@ Node templates can use cloud credentials to store credentials for launching node

After cloud credentials are created, the user can start [managing the cloud credentials that they created]({{< baseurl >}}/rancher/v2.x/en/user-settings/cloud-credentials/).

## Node Drivers
# Node Drivers

If you don't find the node driver that you want to use, you can see if it is available in Rancher's built-in [node drivers and activate it]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/node-drivers/#activating-deactivating-node-drivers), or you can [add your own custom node driver]({{< baseurl >}}/rancher/v2.x/en/admin-settings/drivers/node-drivers/#adding-custom-node-drivers).

@@ -1,141 +1,172 @@
---
title: Configuring Custom Clusters for Windows (Experimental)
title: Configuring Custom Clusters for Windows
weight: 2240
---

>**Notes:**
>
>- Configuring Windows clusters is new and improved for Rancher v2.3.0!
>- Still using v2.1.x or v2.2.x? See the documentation for how to provision Windows clusters on [previous versions]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/docs-for-2.1-and-2.2/). As of v2.1.10 and v2.2.4, the ability to provision Windows clusters has been removed in the 2.1.x and 2.2.x lines.
_Available as of v2.3.0_

_Available as of v2.3.0-alpha1_

>**Important:**
>
>Support for Windows nodes is currently an experimental feature and is not yet officially supported in Rancher. Therefore, we do not recommend using Windows nodes in a production environment.
When provisioning a [custom cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/) using Rancher, Rancher uses RKE (the Rancher Kubernetes Engine) to provision the Kubernetes custom cluster on your existing infrastructure.

When provisioning a [custom cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/) using Rancher, you can use a mix of Linux and Windows hosts as your cluster nodes.
You can use a mix of Linux and Windows hosts as your cluster nodes. Windows nodes can only be used for deploying workloads, while Linux nodes are required for cluster management.

This guide walks you through the creation of a custom cluster that includes three nodes.
You can only add Windows nodes to a cluster if Windows support is enabled. Windows support can be enabled for new custom clusters that use Kubernetes 1.15+ and the Flannel network provider. Windows support cannot be enabled for existing clusters.

* A Linux node, which serves as the Kubernetes control plane node.
* Another Linux node, which serves as a Kubernetes worker used to support the Rancher Cluster agent, Metrics server, DNS and Ingress for the cluster.
* A Windows node, which is assigned the Kubernetes worker role and runs your Windows containers.
For a summary of Kubernetes features supported in Windows, see the Kubernetes documentation on [supported functionality and limitations for using Kubernetes with Windows](https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#supported-functionality-and-limitations) or the [guide for scheduling Windows containers in Kubernetes](https://kubernetes.io/docs/setup/production-environment/windows/user-guide-windows-containers/).

## Prerequisites

Before provisioning a new cluster, be sure that you have already installed Rancher on a device that accepts inbound network traffic. This is required in order for the cluster nodes to communicate with Rancher. If you have not already installed Rancher, please refer to the [installation documentation]({{< baseurl >}}/rancher/v2.x/en/installation/) before proceeding with this guide.

For a summary of Kubernetes features supported in Windows, see [Using Windows Server Containers in Kubernetes](https://kubernetes.io/docs/getting-started-guides/windows/#supported-features).

### Node Requirements

In order to add Windows worker nodes, the node must be running Windows Server 2019 (i.e. core version 1809 or above). Any earlier versions (e.g. core version 1803 and earlier) do not properly support Kubernetes.

Windows overlay networking requires that the [KB4489899](https://support.microsoft.com/en-us/help/4489899) hotfix is installed. Most cloud-hosted VMs already have this hotfix.

### Container Requirements

Windows requires that containers must be built on the same Windows Server version that they are being deployed on. Therefore, containers must be built on Windows Server 2019 core version 1809. If you have existing containers built for Windows Server 2019 core version 1803 or earlier, they must be re-built on Windows Server 2019 core version 1809.

## Steps for Creating a Cluster with Windows Support

To set up a custom cluster with support for Windows nodes and containers, you will need to complete the series of tasks listed below.
This guide covers the following topics:

<!-- TOC -->

- [1. Provision Hosts](#1-provision-hosts)
- [2. Create the Custom Cluster](#2-create-the-custom-cluster)
- [3. Add Linux Master Node](#3-add-linux-master-node)
- [4. Add Linux Worker Node](#4-add-linux-worker-node)
- [5. Add Windows Workers](#5-add-windows-workers)
- [6. Cloud-host VM Routes Configuration for Host Gateway Mode (Optional)](#6-cloud-hosted-vm-routes-configuration-for-host-gateway-mode)
- [7. Configuration for Azure Files (Optional)](#7-configuration-for-azure-files)
- [Prerequisites](#prerequisites)
- [Requirements](#requirements-for-windows-clusters)
  - [OS and Docker](#os-and-docker)
  - [Hardware](#hardware)
  - [Networking](#networking)
  - [Architecture](#architecture)
  - [Containers](#containers)
- [Tutorial: How to Create a Cluster with Windows Support](#tutorial-how-to-create-a-cluster-with-windows-support)
- [Configuration for Storage Classes in Azure](#configuration-for-storage-classes-in-azure)
<!-- /TOC -->

## 1. Provision Hosts
# Prerequisites

To begin provisioning a custom cluster with Windows support, prepare your hosts. Provision three nodes according to our [installation requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/) - two Linux, one Windows. Your hosts can be:
Before provisioning a new cluster, be sure that you have already installed Rancher on a device that accepts inbound network traffic. This is required in order for the cluster nodes to communicate with Rancher. If you have not already installed Rancher, please refer to the [installation documentation]({{< baseurl >}}/rancher/v2.x/en/installation/) before proceeding with this guide.

> **Note on Cloud Providers:** If you set a Kubernetes cloud provider in your cluster, some additional steps are required. You might want to set a cloud provider if you want to leverage a cloud provider's capabilities, for example, to automatically provision storage, load balancers, or other infrastructure for your cluster. Refer to [this page]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) for details on how to configure a cloud provider cluster of nodes that meet the prerequisites.

# Requirements for Windows Clusters

For a custom cluster, the general node requirements for networking, operating systems, and Docker are the same as the node requirements for a [Rancher installation]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/).

### OS and Docker

In order to add Windows worker nodes to a cluster, the node must be running Windows Server 2019 (i.e. core version 1903 or above) and [Docker 19.03.]({{<baseurl>}}/rancher/v2.x/en/installation/requirements/)

>**Notes:**
>
>- If you are using AWS, Rancher recommends *Microsoft Windows Server 2019 Base with Containers* as the Amazon Machine Image (AMI).
>- If you are using GCE, Rancher recommends *Windows Server 2019 Datacenter for Containers* as the OS image.

### Hardware

The hosts in the cluster need to have at least:

- 2 core CPUs
- 4.5 GiB memory (~4.83 GB)
- 30 GiB of disk space (~32.21 GB)

Rancher will not provision the node if the node does not meet these requirements.

### Networking

Rancher only supports Windows using Flannel as the network provider.

There are two network options: [**Host Gateway (L2bridge)**](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw) and [**VXLAN (Overlay)**](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan). The default option is **VXLAN (Overlay)** mode.

For **Host Gateway (L2bridge)** networking, it's best to use the same Layer 2 network for all nodes. Otherwise, you need to configure the route rules for them. For details, refer to the [documentation on configuring cloud-hosted VM routes.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/host-gateway-requirements/#cloud-hosted-vm-routes-configuration) You will also need to [disable private IP address checks]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/host-gateway-requirements/#disabling-private-ip-address-checks) if you are using Amazon EC2, Google GCE, or Azure VM.

For **VXLAN (Overlay)** networking, the [KB4489899](https://support.microsoft.com/en-us/help/4489899) hotfix must be installed. Most cloud-hosted VMs already have this hotfix.

### Architecture

The Kubernetes cluster management nodes (`etcd` and `controlplane`) must be run on Linux nodes.

The `worker` nodes, which are where your workloads will be deployed, will typically be Windows nodes, but there must be at least one `worker` node that runs on Linux in order to run the Rancher cluster agent, DNS, metrics server, and Ingress-related containers.

We recommend the minimum three-node architecture listed in the table below, but you can always add additional Linux and Windows workers to scale up your cluster for redundancy:

<a id="guide-architecture"></a>

Node    | Operating System | Kubernetes Cluster Role(s) | Purpose
--------|------------------|----------------------------|--------
Node 1  | Linux (Ubuntu Server 18.04 recommended) | [Control Plane]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#control-plane-nodes), [etcd]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#etcd-nodes), [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) | Manage the Kubernetes cluster
Node 2  | Linux (Ubuntu Server 18.04 recommended) | [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) | Support the Rancher Cluster agent, Metrics server, DNS, and Ingress for the cluster
Node 3  | Windows (Windows Server 2019 required) | [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) | Run your Windows containers

### Containers

Windows requires that containers must be built on the same Windows Server version that they are being deployed on. Therefore, containers must be built on Windows Server 2019 core version 1903. If you have existing containers built for an earlier Windows Server 2019 core version, they must be re-built on Windows Server 2019 core version 1903.

# Tutorial: How to Create a Cluster with Windows Support

This tutorial describes how to create a Rancher-provisioned cluster with the three nodes in the [recommended architecture.](#guide-architecture)

When you provision a custom cluster with Rancher, you will add nodes to the cluster by installing the [Rancher agent]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/agent-options/) on each one. When you create or edit your cluster from the Rancher UI, you will see a **Customize Node Run Command** that you can run on each server to add it to your custom cluster.

To set up a custom cluster with support for Windows nodes and containers, you will need to complete the tasks below.

<!-- TOC -->
1. [Provision Hosts](#1-provision-hosts)
1. [Create the Custom Cluster](#2-create-the-custom-cluster)
1. [Add Nodes to the Cluster](#3-add-nodes-to-the-cluster)
1. [Optional: Configuration for Azure Files](#5-optional-configuration-for-azure-files)
<!-- /TOC -->

# 1. Provision Hosts

To begin provisioning a custom cluster with Windows support, prepare your hosts.

Your hosts can be:

- Cloud-hosted VMs
- VMs from virtualization clusters
- Bare-metal servers

The table below lists the [Kubernetes roles]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) you'll assign to each host. The roles will be enabled later on in the configuration process. The first node, a Linux host, is primarily responsible for managing the Kubernetes control plane. In this guide, we will be installing all three roles on this node. The second node is also a Linux worker, which is responsible for running a DNS server, Ingress controller, Metrics server and Rancher Cluster agent. The third node, a Windows worker, will run your Windows containers.
You will provision three nodes:

Node    | Operating System | Future Cluster Role(s)
--------|------------------|------
Node 1  | Linux (Ubuntu Server 18.04 recommended) | [Control Plane]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#control-plane-nodes), [etcd]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#etcd), [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes)
Node 2  | Linux (Ubuntu Server 18.04 recommended) | [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes)
Node 3  | Windows (Windows Server 2019 required) | [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes)
- One Linux node, which manages the Kubernetes control plane and stores your `etcd`
- A second Linux node, which will be another worker node
- The Windows node, which will run your Windows containers as a worker node

>**Notes:**
>
>- If you are using AWS, you should choose *Microsoft Windows Server 2019 Base with Containers* as the Amazon Machine Image (AMI).
>- If you are using GCE, you should choose *Windows Server 2019 Datacenter for Containers* as the OS image.
Node | Operating System
-----|-----------------
Node 1 | Linux (Ubuntu Server 18.04 recommended)
Node 2 | Linux (Ubuntu Server 18.04 recommended)
Node 3 | Windows (Windows Server 2019 required)

### Requirements
If your nodes are hosted by a **Cloud Provider** and you want automation support such as load balancers or persistent storage devices, your nodes have additional configuration requirements. For details, see [Selecting Cloud Providers.]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers)

- You can view the general requirements for Linux and Windows nodes in the [installation section]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/).
- For **Host Gateway (L2bridge)** networking, it's best to use the same Layer 2 network for all nodes. Otherwise, you need to configure the route rules for them.
- For **VXLAN (Overlay)** networking, you must confirm that Windows Server 2019 has the [KB4489899](https://support.microsoft.com/en-us/help/4489899) hotfix installed. Most cloud-hosted VMs already have this hotfix.
- Your cluster must include at least one Linux worker node to run the Rancher Cluster agent, DNS, Metrics server and Ingress-related containers.
- Although we recommend the three-node architecture listed in the table above, you can always add additional Linux and Windows workers to scale up your cluster for redundancy.
# 2. Create the Custom Cluster

## 2. Create the Custom Cluster
The instructions for creating a custom cluster that supports Windows nodes are very similar to the general [instructions for creating a custom cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster) with some Windows-specific requirements.

The instructions for creating a custom cluster that supports Windows nodes are very similar to the general [instructions for creating a custom cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster) with some Windows-specific requirements. The entire process is documented below.
Windows support can only be enabled if the cluster uses Kubernetes v1.15+ and the Flannel network provider.

1. From the main Rancher dashboard click on the **Clusters** tab and select **Add Cluster**.
1. From the **Global** view, click on the **Clusters** tab and click **Add Cluster**.

1. The first section asks where the cluster is hosted. You should select **Custom**.
1. Click **From existing nodes (Custom)**.

1. Enter a name for your cluster in the **Cluster Name** text box.

1. {{< step_create-cluster_member-roles >}}
1. In the **Kubernetes Version** dropdown menu, select v1.15 or above.

1. {{< step_create-cluster_cluster-options >}}
1. In the **Network Provider** field, select **Flannel.**

In order to use Windows workers, you must choose the following options:
- You must select `v1.14` or above for **Kubernetes Version**.
- You must select **Flannel** as the **Network Provider**. There are two options: [**Host Gateway (L2bridge)**](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw) and [**VXLAN (Overlay)**](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan). The default option is **VXLAN (Overlay)** mode.
- You must select **Enable** for **Windows Support**.
1. In the **Windows Support** section, click **Enable.**

1. If your nodes are hosted by a **Cloud Provider** and you want automation support such as load balancers or persistent storage devices, see [Selecting Cloud Providers]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers) for configuration info.
1. Optional: After you enable Windows support, you will be able to choose the Flannel backend. There are two network options: [**Host Gateway (L2bridge)**](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw) and [**VXLAN (Overlay)**](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan). The default option is **VXLAN (Overlay)** mode.

1. Click **Next**.

> **Important:** For **Host Gateway (L2bridge)** networking, it's best to use the same Layer 2 network for all nodes. Otherwise, you need to configure the route rules for them. For details, refer to the [documentation on configuring cloud-hosted VM routes.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/host-gateway-requirements/#cloud-hosted-vm-routes-configuration) You will also need to [disable private IP address checks]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/host-gateway-requirements/#disabling-private-ip-address-checks) if you are using Amazon EC2, Google GCE, or Azure VM.

# 3. Add Nodes to the Cluster

>**Important:** If you are using *Host Gateway (L2bridge)* mode and hosting your nodes on any of the cloud services listed below, you must disable the private IP address checks for both your Linux and Windows hosts on startup. To disable this check for each node, follow the directions provided by each service below.
This section describes how to register your Linux and Windows nodes to your custom cluster.

Service | Directions to disable private IP address checks
--------|------------------------------------------------
Amazon EC2 | [Disabling Source/Destination Checks](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html#EIP_Disable_SrcDestCheck)
Google GCE | [Enabling IP Forwarding for Instances](https://cloud.google.com/vpc/docs/using-routes#canipforward)
Azure VM | [Enable or Disable IP Forwarding](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-network-interface#enable-or-disable-ip-forwarding)
### Add Linux Master Node

## 3. Add Linux Master Node
The first node in your cluster should be a Linux host that has both the **Control Plane** and **etcd** roles. At a minimum, both of these roles must be enabled for this node, and this node must be added to your cluster before you can add Windows hosts.

The first node in your cluster should be a Linux host that fills both the *Control Plane* and *etcd* roles. Both of these two roles must be fulfilled before you can add Windows hosts to your cluster. At a minimum, the node must have these two roles enabled, but we recommend enabling all three. The following table lists our recommended settings (we'll provide the recommended settings for nodes 2 and 3 later).
In this section, we fill out a form on the Rancher UI to get a custom command to install the Rancher agent on the Linux master node. Then we will copy the command and run it on our Linux master node to register the node in the cluster.

Option | Setting
-------|--------
Node Operating System | Linux
Node Roles | etcd <br/> Control Plane <br/> Worker (optional)
1. In the **Node Operating System** section, click **Linux**.

1. For Node Operating System select **Linux**.
1. In the **Node Role** section, choose at least **etcd** and **Control Plane**. We recommend selecting all three.

1. From **Node Role**, choose at least **etcd** and **Control Plane**.
1. Optional: If you click **Show advanced options,** you can customize the settings for the [Rancher agent]({{< baseurl >}}/rancher/v2.x/en/admin-settings/agent-options/) and [node labels.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)

1. **Optional**: Click **Show advanced options** to specify IP address(es) to use when registering the node, override the hostname of the node or to add labels to the node.

   [Rancher Agent Options]({{< baseurl >}}/rancher/v2.x/en/admin-settings/agent-options/)<br/>
   [Kubernetes Documentation: Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)

1. Copy the command displayed on the screen to your clipboard.

>**Note:** Repeat steps 7-10 if you want to dedicate specific hosts to specific node roles. Repeat the steps as many times as needed.
1. Copy the command displayed on the screen to your clipboard.

1. SSH into your Linux host and run the command that you copied to your clipboard.

@@ -143,19 +174,20 @@ Node Roles | etcd <br/> Control Plane <br/> Worker (optional)

{{< result_create-cluster >}}

## 4. Add Linux Worker Node
It may take a few minutes for the node to be registered in your cluster.

After the initial provisioning of your custom cluster, your cluster only has a single Linux host. Add another Linux host, which will be used to support the *Rancher cluster agent*, *Metrics server*, *DNS* and *Ingress* for your cluster.
### Add Linux Worker Node

1. Using the context menu, open the custom cluster you created in [2. Create the Custom Cluster](#2-create-the-custom-cluster).
After the initial provisioning of your custom cluster, your cluster only has a single Linux host. Next, we add another Linux `worker` host, which will be used to support the *Rancher cluster agent*, *Metrics server*, *DNS* and *Ingress* for your cluster.

1. From the main menu, select **Nodes**.

1. Click **Edit Cluster**.
1. From the **Global** view, click **Clusters.**

1. Go to the custom cluster that you created and click **Ellipsis (...) > Edit.**

1. Scroll down to **Node Operating System**. Choose **Linux**.

1. Select the **Worker** role.
1. In the **Customize Node Run Command** section, go to the **Node Options** and select the **Worker** role.

1. Copy the command displayed on screen to your clipboard.

@@ -163,19 +195,25 @@ After the initial provisioning of your custom cluster, your cluster only has a s

1. From **Rancher**, click **Save**.

**Result:** The **Worker** role is installed on your Linux host, and the node registers with Rancher.
**Result:** The **Worker** role is installed on your Linux host, and the node registers with Rancher. It may take a few minutes for the node to be registered in your cluster.

## 5. Add Windows Workers
> **Note:** Taints on Linux Worker Nodes
>
>For each Linux worker node added into the cluster, the following taint will be added to the Linux worker node. By adding this taint to the Linux worker node, any workloads added to the Windows cluster will be automatically scheduled to the Windows worker node. If you want to schedule workloads specifically onto the Linux worker node, you will need to add tolerations to those workloads.

>Taint Key | Taint Value | Taint Effect
>---|---|---
>`cattle.io/os` | `linux` | `NoSchedule`
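
For example, a workload that must run on the Linux worker node would need a toleration like the following sketch in its pod spec:

```
# Sketch: pod spec fragment tolerating the taint that Rancher
# places on Linux worker nodes in a Windows cluster.
tolerations:
- key: "cattle.io/os"
  operator: "Equal"
  value: "linux"
  effect: "NoSchedule"
```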

### Add a Windows Worker Node

You can add Windows hosts to a custom cluster by editing the cluster and choosing the **Windows** option.

1. From the main menu, select **Nodes**.
1. From the **Global** view, click **Clusters.**

1. Click **Edit Cluster**.
1. Go to the custom cluster that you created and click **Ellipsis (...) > Edit.**

1. Scroll down to **Node Operating System**. Choose **Windows**.

1. Select the **Worker** role.
1. Scroll down to **Node Operating System**. Choose **Windows**. Note: You will see that the **worker** role is the only available role.

1. Copy the command displayed on screen to your clipboard.

@@ -183,33 +221,11 @@ You can add Windows hosts to a custom cluster by editing the cluster and choosin

1. From Rancher, click **Save**.

1. **Optional:** Repeat these instructions if you want to add more Windows nodes to your cluster.
1. Optional: Repeat these instructions if you want to add more Windows nodes to your cluster.

**Result:** The **Worker** role is installed on your Windows host, and the node registers with Rancher.
**Result:** The **Worker** role is installed on your Windows host, and the node registers with Rancher. It may take a few minutes for the node to be registered in your cluster. You now have a Windows Kubernetes cluster.

## 6. Cloud-hosted VM Routes Configuration for Host Gateway Mode

If you are using the [**Host Gateway (L2bridge)**](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw) backend of Flannel, all containers on the same node belong to a private subnet, and traffic routes from a subnet on one node to a subnet on another node through the host network.

- When worker nodes are provisioned on AWS, virtualization clusters, or bare metal servers, make sure they belong to the same layer 2 subnet. If the nodes don't belong to the same layer 2 subnet, `host-gw` networking will not work.

- When worker nodes are provisioned on GCE or Azure, they are not on the same layer 2 subnet. Nodes on GCE and Azure belong to a routable layer 3 network. Follow the instructions below to configure GCE and Azure so that the cloud network knows how to route the host subnets on each node.

To configure host subnet routing on GCE or Azure, first run the following command to find out the host subnets on each worker node:

```bash
kubectl get nodes -o custom-columns=nodeName:.metadata.name,nodeIP:status.addresses[0].address,routeDestination:.spec.podCIDR
```

Then follow the instructions for each cloud provider to configure routing rules for each node:

Service | Instructions
--------|-------------
Google GCE | For GCE, add a static route for each node: [Adding a Static Route](https://cloud.google.com/vpc/docs/using-routes#addingroute).
Azure VM | For Azure, create a routing table: [Custom Routes: User-defined](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview#user-defined).

## 7. Configuration for Azure Files
# Configuration for Storage Classes in Azure

If you are using Azure VMs for your nodes, you can use [Azure files](https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv) as a [storage class]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/#adding-storage-classes) for the cluster.

@@ -219,7 +235,7 @@ In order to have the Azure platform create the required storage resources, follo

1. Configure `kubectl` to connect to your cluster.

1. Copy the `ClusterRole` and `ClusterRoleBinding` manifest for service account.
1. Copy the `ClusterRole` and `ClusterRoleBinding` manifest for the service account:

---
apiVersion: rbac.authorization.k8s.io/v1

@@ -3,25 +3,28 @@ title: v2.1.x and v2.2.x Windows Documentation (Experimental)
weight: 9100
---

>**Note:** This section describes how to provision Windows clusters in Rancher v2.1.x and v2.2.x. If you are using Rancher v2.3.0 or later, please refer to the new documentation for [v2.3.0 or later]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/).

_Available from v2.1.0 to v2.1.9 and v2.2.0 to v2.2.3_

>**Important:**
>
>Support for Windows nodes is currently an experimental feature and is not yet officially supported in Rancher. Therefore, we do not recommend using Windows nodes in a production environment.
This section describes how to provision Windows clusters in Rancher v2.1.x and v2.2.x. If you are using Rancher v2.3.0 or later, please refer to the new documentation for [v2.3.0 or later]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/).

When provisioning a [custom cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/) using Rancher, you can use a mix of Linux and Windows hosts as your cluster nodes.
When you create a [custom cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/), Rancher uses RKE (the Rancher Kubernetes Engine) to provision the Kubernetes cluster on your existing infrastructure.

This guide walks you through the creation of a custom cluster that includes 3 nodes: a Linux node, which serves as a Kubernetes control plane node; another Linux node, which serves as a Kubernetes worker used to support Ingress for the cluster; and a Windows node, which is assigned the Kubernetes worker role and runs your Windows containers.
You can provision a custom Windows cluster using Rancher by using a mix of Linux and Windows hosts as your cluster nodes.

>**Notes:**
>
>- For a summary of Kubernetes features supported in Windows, see [Using Windows in Kubernetes](https://kubernetes.io/docs/setup/windows/intro-windows-in-kubernetes/).
>- Windows containers must run on Windows Server 1803 nodes. Windows Server 1709 and earlier versions do not support Kubernetes properly.
>- Containers built for Windows Server 1709 or earlier do not run on Windows Server 1803. You must build containers on Windows Server 1803 to run these containers on Windows Server 1803.
>**Important:** In versions of Rancher prior to v2.3, support for Windows nodes is experimental. Therefore, it is not recommended to use Windows nodes for production environments if you are using Rancher prior to v2.3.

This guide walks you through the creation of a custom cluster that includes three nodes:

- A Linux node, which serves as a Kubernetes control plane node
- Another Linux node, which serves as a Kubernetes worker used to support Ingress for the cluster
- A Windows node, which is assigned the Kubernetes worker role and runs your Windows containers

For a summary of Kubernetes features supported in Windows, see [Using Windows in Kubernetes](https://kubernetes.io/docs/setup/windows/intro-windows-in-kubernetes/).

## OS and Container Requirements

- For clusters provisioned with Rancher v2.1.x and v2.2.x, containers must run on Windows Server 1803.
- You must build containers on Windows Server 1803 to run these containers on Windows Server 1803.

## Objectives for Creating Cluster with Windows Support

@@ -0,0 +1,37 @@
---
title: Networking Requirements for Host Gateway (L2bridge)
weight: 1000
---

This section describes how to configure custom Windows clusters that are using *Host Gateway (L2bridge)* mode.

### Disabling Private IP Address Checks

If you are using *Host Gateway (L2bridge)* mode and hosting your nodes on any of the cloud services listed below, you must disable the private IP address checks for both your Linux and Windows hosts on startup. To disable this check for each node, follow the directions provided by each service below.

Service | Directions to disable private IP address checks
--------|------------------------------------------------
Amazon EC2 | [Disabling Source/Destination Checks](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html#EIP_Disable_SrcDestCheck)
Google GCE | [Enabling IP Forwarding for Instances](https://cloud.google.com/vpc/docs/using-routes#canipforward) (By default, a VM cannot forward a packet originated by another VM)
Azure VM | [Enable or Disable IP Forwarding](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-network-interface#enable-or-disable-ip-forwarding)

### Cloud-hosted VM Routes Configuration

If you are using the [**Host Gateway (L2bridge)**](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw) backend of Flannel, all containers on the same node belong to a private subnet, and traffic routes from a subnet on one node to a subnet on another node through the host network.

- When worker nodes are provisioned on AWS, virtualization clusters, or bare metal servers, make sure they belong to the same layer 2 subnet. If the nodes don't belong to the same layer 2 subnet, `host-gw` networking will not work.

- When worker nodes are provisioned on GCE or Azure, they are not on the same layer 2 subnet. Nodes on GCE and Azure belong to a routable layer 3 network. Follow the instructions below to configure GCE and Azure so that the cloud network knows how to route the host subnets on each node.

To configure host subnet routing on GCE or Azure, first run the following command to find out the host subnets on each worker node:

```bash
kubectl get nodes -o custom-columns=nodeName:.metadata.name,nodeIP:status.addresses[0].address,routeDestination:.spec.podCIDR
```

Then follow the instructions for each cloud provider to configure routing rules for each node:

Service | Instructions
--------|-------------
Google GCE | For GCE, add a static route for each node: [Adding a Static Route](https://cloud.google.com/vpc/docs/using-routes#addingroute).
Azure VM | For Azure, create a routing table: [Custom Routes: User-defined](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview#user-defined).
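
For instance, on GCE a static route per node can be created with `gcloud`. This is a sketch with placeholder values, taken from the node IP and pod CIDR reported by the `kubectl` command above:

```bash
# Sketch: one static route per worker node (all values are placeholders).
gcloud compute routes create route-node-1 \
  --network=<YOUR_NETWORK> \
  --destination-range=<POD_CIDR_OF_NODE_1> \
  --next-hop-address=<NODE_1_IP>
```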

@@ -42,7 +42,7 @@ Yes. In the upcoming Rancher v2.1 release we will provide a tool to help transla

#### Can I still create templates for environments and clusters?

Starting with 2.0, the concept of an environment has now been changed to a Kubernetes cluster, as going forward, only the Kubernetes orchestration engine is supported.
Kubernetes Cluster Templates is on our roadmap for 2.x. Please refer to our Release Notes and documentation for all the features that we currently support.
Kubernetes RKE Templates is on our roadmap for 2.x. Please refer to our Release Notes and documentation for all the features that we currently support.

#### Can you still add an existing host to an environment? (i.e. not provisioned directly from Rancher)

@@ -1,28 +0,0 @@
---
title: "Air Gap: High Availability Install"
weight: 290
aliases:
- /rancher/v2.x/en/installation/air-gap-installation/
---

## Prerequisites

Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines. If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/).

The following CLI tools are required for this install. Make sure these tools are installed on your workstation and available in your `$PATH`.

* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) - Kubernetes command-line tool.
* [rke]({{< baseurl >}}/rke/latest/en/installation/) - Rancher Kubernetes Engine, CLI for building Kubernetes clusters.
* [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes.

>**Note:** If you install Rancher in an HA configuration in an air gap environment, you cannot transition to a single-node setup during future upgrades.

## Installation Outline

- [1. Create Nodes and Load Balancer]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-high-availability/provision-hosts/)
- [2. Collect and Publish Image Sources]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-high-availability/prepare-private-registry/)
- [3. Install Kubernetes with RKE]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-high-availability/install-kube/)
- [4. Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-high-availability/install-rancher/)
- [5. Configure Rancher System Charts]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-system-charts/)

### [Next: Create Nodes and Load Balancer]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-high-availability/provision-hosts/)
@@ -1,50 +0,0 @@
|
||||
---
|
||||
title: "5. Configure Rancher System Charts"
|
||||
weight: 600
|
||||
aliases:
|
||||
- /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-for-private-reg/
|
||||
---
|
||||
|
||||
# A. Prepare System Charts
|
||||
|
||||
The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. To be able to use these features in an air gap install, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach and configure Rancher to use that repository.
|
||||
|
||||
Refer to the release notes in the `system-charts` repository to see which branch corresponds to your version of Rancher.
|
||||
|
||||
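As a minimal sketch, assuming you have an internal Git server available, the mirror could be created like this (the internal server URL is a hypothetical placeholder):

```plain
# Mirror the system-charts repository to an internal Git server (URL is a placeholder).
git clone --mirror https://github.com/rancher/system-charts.git
cd system-charts.git
git push --mirror https://git.yourdomain.com/rancher/system-charts.git
```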
# B. Configure System Charts

Rancher needs to be configured to use your Git mirror of the `system-charts` repository. You can configure the system charts repository either from the Rancher UI or from Rancher's API view.

### Configuring the Registry from the Rancher UI

In the catalog management page in the Rancher UI, follow these steps:

1. Go to the **Global** view.

1. Click **Tools > Catalogs.**

1. The system chart is displayed under the name `system-library`. To edit the configuration of the system chart, click **Ellipsis (...) > Edit.**

1. In the **Catalog URL** field, enter the location of the Git mirror of the `system-charts` repository.

1. Click **Save.**

**Result:** Rancher is configured to download all the required catalog items from your `system-charts` repository.

### Configuring the Registry in Rancher's API View

1. Log into Rancher.

1. Open `https://<your-rancher-server>/v3/catalogs/system-library` in your browser.



1. Click **Edit** on the upper right corner and update the value for **url** to the location of the Git mirror of the `system-charts` repository.



1. Click **Show Request**

1. Click **Send Request**

**Result:** Rancher is configured to download all the required catalog items from your `system-charts` repository.
@@ -1,189 +0,0 @@
---
title: 4. Install Rancher
weight: 400
aliases:
- /rancher/v2.x/en/installation/air-gap-installation/install-rancher/
---

## A. Add the Helm Chart Repository

From a system that has access to the internet, render the installs and copy the resulting manifests to a system that has access to the Rancher server cluster.

1. If you haven't already, initialize `helm` locally on a system that has internet access.

```plain
helm init -c
```

2. Use the `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories).

{{< release-channel >}}

```
helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
```

3. Fetch the latest Rancher chart. This will pull down the chart and save it in the current directory as a `.tgz` file.

```plain
helm fetch rancher-<CHART_REPO>/rancher
```

>Want additional options? Need help troubleshooting? See [High Availability Install: Advanced Options]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/#advanced-configurations).

## B. Choose your SSL Configuration

Rancher Server is designed to be secure by default and requires SSL/TLS configuration.

For HA air gap configurations, there are two recommended options for the source of the certificate.

> **Note:** If you want to terminate SSL/TLS externally, see [TLS termination on an External Load Balancer]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#external-tls-termination).

| Configuration | Chart option | Description | Requires cert-manager |
|-----|-----|-----|-----|
| [Rancher Generated Self-Signed Certificates](#self-signed) | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed)<br/>This is the **default** | yes |
| [Certificates from Files](#secret) | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s) | no |

## C. Set Up the Rancher Template

Based on the choice you made in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), complete one of the procedures below.

In this section you will configure your cert-manager and private registry settings in the Rancher template.

{{% accordion id="self-signed" label="Option A: Default Self-Signed Certificate" %}}
By default, Rancher generates a CA and uses cert-manager to issue the certificate for access to the Rancher server interface.

> **Note:**
> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.9.1, please see our [upgrade documentation]({{< baseurl >}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).
1. From a system connected to the internet, add the cert-manager repo to Helm.

```plain
helm repo add jetstack https://charts.jetstack.io
helm repo update
```

1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager).

```plain
helm fetch jetstack/cert-manager --version v0.9.1
```

1. Render the cert-manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.

```plain
helm template ./cert-manager-v0.9.1.tgz --output-dir . \
  --name cert-manager --namespace cert-manager \
  --set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller \
  --set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook \
  --set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector
```

1. Download the required CRD file for cert-manager.

```plain
curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-0.9/deploy/manifests/00-crds.yaml
```

1. Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools. To configure Rancher to use your private registry when starting the `rancher/rancher` container, use the `CATTLE_SYSTEM_DEFAULT_REGISTRY` variable. You can set the extra environment variable `extraEnv` to use the same `name` and `value` keys as the container manifest definitions. Remember to quote the values:

Placeholder | Description
------------|-------------
`<VERSION>` | The version number of the output tarball.
`<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry. This configures Rancher to use your private registry when starting the `rancher/rancher` container.

```plain
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set 'extraEnv[0].name=CATTLE_SYSTEM_DEFAULT_REGISTRY' \
  --set 'extraEnv[0].value=<REGISTRY.YOURDOMAIN.COM:PORT>'
```

{{% /accordion %}}

{{% accordion id="secret" label="Option B: Certificates for Files (Kubernetes Secret)" %}}
|
||||
|
||||
1. Create Kubernetes secrets from your own certificates for Rancher to use.
|
||||
|
||||
> **Note:** The common name for the cert will need to match the `hostname` option or the ingress controller will fail to provision the site for Rancher.
|
||||
|
||||
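As a minimal sketch, the secret can be created with `kubectl` from PEM files, here assumed to be named `tls.crt` and `tls.key` (see the [Adding TLS Secrets] step below for the exact names and details):

```plain
# A sketch: create the TLS secret from your certificate and key files (file names are assumptions).
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```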
1. Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools. To configure Rancher to use your private registry when starting the `rancher/rancher` container, use the `CATTLE_SYSTEM_DEFAULT_REGISTRY` variable. You can set the extra environment variable `extraEnv` to use the same `name` and `value` keys as the container manifest definitions. Remember to quote the values:

```
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set 'extraEnv[0].name=CATTLE_SYSTEM_DEFAULT_REGISTRY' \
  --set 'extraEnv[0].value=<REGISTRY.YOURDOMAIN.COM:PORT>'
```

Placeholder | Description
------------|-------------
`<VERSION>` | The version number of the output tarball.
`<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry. This configures Rancher to use your private registry when starting the `rancher/rancher` container.

> **Note:** If you are using a Private CA signed cert, add `--set privateCA=true` following `--set ingress.tls.source=secret`.

1. See [Adding TLS Secrets]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them.
{{% /accordion %}}

## D. Install Rancher

Copy the rendered manifest directories to a system that has access to the Rancher server cluster to complete installation.

Use `kubectl` to create namespaces and apply the rendered manifests.

If you are using self-signed certificates, install cert-manager:

1. Create the namespace for cert-manager.
```plain
kubectl create namespace cert-manager
```

1. Label the cert-manager namespace to disable resource validation.
```plain
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
```

1. Create the cert-manager CustomResourceDefinitions (CRDs).
```plain
kubectl apply -f cert-manager/cert-manager-crd.yaml
```

1. Launch cert-manager.
```plain
kubectl apply -R -f ./cert-manager
```

Install Rancher:

```plain
kubectl create namespace cattle-system
kubectl -n cattle-system apply -R -f ./rancher
```

### Additional Resources

These resources could be helpful when you install Rancher:

- [Rancher Helm chart options]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/)
- [Adding TLS secrets]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/)
- [Troubleshooting Rancher HA installations]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/troubleshooting/)

### [Next: Configure Rancher System Charts]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-system-charts/)
@@ -1,94 +0,0 @@
---
title: "2. Prepare Private Registry"
weight: 200
aliases:
- /rancher/v2.x/en/installation/air-gap-installation/prepare-private-reg/
---

>**Prerequisites:** You must have a [private registry](https://docs.docker.com/registry/deploying/) available to use.

By default, all system images are pulled from Docker Hub. If you are on a system that does not have access to Docker Hub, you will need to create a private registry that is populated with all the required [system images]({{< baseurl >}}/rke/latest/en/config-options/system-images/).

As of RKE v0.1.10, you only have to configure your private registry and specify it as the default registry, so that all system images are pulled from it. You can use the command `rke config --system-images` to get the list of default system images to populate your private registry. For details, refer to the [RKE documentation on how to set a default registry]({{<baseurl>}}/rke/latest/en/config-options/private-registries/).

Prior to RKE v0.1.10, you had to configure your private registry **and** update the names of all the [system images]({{< baseurl >}}/rke/latest/en/config-options/system-images/) in the `cluster.yml` so that each image name had the private registry URL prepended to it.

When configuring your private registry, you only need to provide credentials if your registry requires them.
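For reference, a minimal sketch of what such a default-registry entry might look like in `cluster.yml` (the registry URL is a placeholder, and the commented credential keys are only needed if your registry requires them):

```yaml
# cluster.yml (RKE v0.1.10+) -- sketch of a default private registry entry
private_registries:
  - url: registry.yourdomain.com:5000   # placeholder registry URL
    is_default: true                    # pull all system images from here
    # user: <USERNAME>                  # only if the registry requires credentials
    # password: <PASSWORD>
```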
## A. Collect Images

Start by collecting all the images needed to install Rancher in an air gap environment. You'll collect images from your chosen Rancher release, RKE, and (if you're using a self-signed TLS certificate) cert-manager.

1. Using a computer with internet access, browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.



2. From the release's **Assets** section (pictured above), download the following three files, which are required to install Rancher in an air gap environment:

| Release File | Description |
| --- | --- |
| `rancher-images.txt` | This file contains a list of all the images needed to install Rancher. |
| `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. |
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |

1. Make `rancher-save-images.sh` an executable.

```
chmod +x rancher-save-images.sh
```

1. From the directory that contains the RKE binary, add RKE's images to `rancher-images.txt`, which is a list of all the images needed to install Rancher.

```
rke config --system-images >> ./rancher-images.txt
```

1. **Default Rancher Generated Self-Signed Certificate Users Only:** If you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://github.com/helm/charts/tree/master/stable/cert-manager) image to `rancher-images.txt` as well. You may skip this step if you are using your own certificates.

1. Fetch the latest `cert-manager` Helm chart and parse the template for image details.

> **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.9.1, please see our [upgrade documentation]({{< baseurl >}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).

```plain
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm fetch jetstack/cert-manager --version v0.9.1
helm template ./cert-manager-<version>.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt
```

2. Sort the image list and remove duplicates, so there is no overlap between the sources.

```plain
sort -u rancher-images.txt -o rancher-images.txt
```

1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images.

```plain
./rancher-save-images.sh --image-list ./rancher-images.txt
```

**Step Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, it outputs a tarball named `rancher-images.tar.gz` in your current directory. Check that the output is in the directory.
## B. Publish Images

Using a computer with access to the internet and your private registry, move the images listed in `rancher-images.txt` to your private registry using the image scripts.

>**Note:** Image publication may require up to 20GB of empty disk space.

1. Log into your private registry if required.

```plain
docker login <REGISTRY.YOURDOMAIN.COM:PORT>
```

1. Use `rancher-load-images.sh` to extract the images in `rancher-images.tar.gz`, then tag them and push them to your private registry, following the `rancher-images.txt` image list.

```plain
./rancher-load-images.sh --image-list ./rancher-images.txt --registry <REGISTRY.YOURDOMAIN.COM:PORT>
```

### [Next: Install Kubernetes with RKE]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-high-availability/install-kube/)
@@ -1,39 +0,0 @@
---
title: "1. Create Nodes and Load Balancer"
weight: 100
aliases:
---
Provision three air gapped Linux hosts according to our requirements below to launch Rancher in an HA configuration.

These hosts should be disconnected from the internet, but should have connectivity with your private registry.

### Host Requirements

View hardware and software requirements for each of your cluster nodes in [Requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements).

### Recommended Architecture

- DNS for Rancher should resolve to a layer 4 load balancer.
- The load balancer should forward ports TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster.
- The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443.
- The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment.

<figcaption>HA Rancher install with layer 4 load balancer, depicting SSL termination at ingress controllers</figcaption>


### Load Balancer

RKE, the installer that provisions your air gapped cluster, will configure an Ingress controller pod on each of your nodes. The Ingress controller pods are bound to ports TCP/80 and TCP/443 on the host network and are the entry point for HTTPS traffic to the Rancher server.

Configure a load balancer as a basic Layer 4 TCP forwarder. The exact configuration will vary depending on your environment.
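As an illustration, a minimal Layer 4 forwarding sketch using NGINX's `stream` module might look like the following; the node IPs are placeholders, and the linked configuration samples below are the authoritative references:

```plain
# /etc/nginx/nginx.conf -- minimal L4 TCP forwarding sketch (node IPs are placeholders)
worker_processes 4;
events {
    worker_connections 8192;
}
stream {
    upstream rancher_servers_http {
        server <NODE1_IP>:80;
        server <NODE2_IP>:80;
        server <NODE3_IP>:80;
    }
    server {
        listen 80;
        proxy_pass rancher_servers_http;
    }
    upstream rancher_servers_https {
        server <NODE1_IP>:443;
        server <NODE2_IP>:443;
        server <NODE3_IP>:443;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
```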
>**Important:**
>Do not use this load balancer (i.e., the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.

**Load Balancer Configuration Samples:**

- [NGINX]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx)
- [Amazon NLB]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/nlb)

### [Next: Collect and Publish Image Sources]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-high-availability/prepare-private-registry/)
@@ -1,20 +0,0 @@
---
title: "Air Gap: Single Node Install"
weight: 280
---

## Prerequisites

Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machine. If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/).

>**Note:** If you install Rancher on a single node in an air gap environment, you cannot transition to an HA configuration during future upgrades.

## Installation Outline

- [1. Provision Linux Host]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-single-node/provision-host/)
- [2. Prepare Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-single-node/prepare-private-registry/)
- [3. Choose an SSL Option and Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-single-node/install-rancher/)
- [4. Configure Rancher for Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-single-node/config-rancher-for-private-reg/)
- [5. Configure Rancher System Charts]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-single-node/config-rancher-system-charts/)

### [Next: Provision Linux Host]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-single-node/provision-host/)
@@ -1,49 +0,0 @@
---
title: "5. Configure Rancher System Charts"
weight: 500
aliases:
---

# A. Prepare System Charts

The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. To be able to use these features in an air gap install, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach and configure Rancher to use that repository.

Refer to the release notes in the `system-charts` repository to see which branch corresponds to your version of Rancher.

# B. Configure System Charts

Rancher needs to be configured to use your Git mirror of the `system-charts` repository. You can configure the system charts repository either from the Rancher UI or from Rancher's API view.

### Configuring the Registry from the Rancher UI

In the catalog management page in the Rancher UI, follow these steps:

1. Go to the **Global** view.

1. Click **Tools > Catalogs.**

1. The system chart is displayed under the name `system-library`. To edit the configuration of the system chart, click **Ellipsis (...) > Edit.**

1. In the **Catalog URL** field, enter the location of the Git mirror of the `system-charts` repository.

1. Click **Save.**

**Result:** Rancher is configured to download all the required catalog items from your `system-charts` repository.

### Configuring the Registry in Rancher's API View

1. Log into Rancher.

1. Open `https://<your-rancher-server>/v3/catalogs/system-library` in your browser.



1. Click **Edit** on the upper right corner and update the value for **url** to the location of the Git mirror of the `system-charts` repository.



1. Click **Show Request**

1. Click **Send Request**

**Result:** Rancher is configured to download all the required catalog items from your `system-charts` repository.
@@ -1,97 +0,0 @@
---
title: "3. Choose an SSL Option and Install Rancher"
weight: 300
aliases:
---

For development and testing in air gap environments, we recommend installing Rancher by running a single Docker container. In this installation scenario, you'll deploy Rancher to your air gap host using an image pulled from your private registry.

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, like when you log in or interact with a cluster.

>**Do you want to...**
>
>- Configure a custom CA root certificate to access your services? See [Custom CA root certificate]({{< baseurl >}}/rancher/v2.x/en/admin-settings/custom-ca-root-certificate/).
>- Record all transactions with the Rancher API? See [API Auditing]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#api-audit-log).

Choose from the following options:
{{% accordion id="option-a" label="Option A-Default Self-Signed Certificate" %}}
|
||||
|
||||
If you are installing Rancher in a development or testing environment where identity verification isn't a concern, install Rancher using the self-signed certificate that it generates. This installation option omits the hassle of generating a certificate yourself.
|
||||
|
||||
Log into your Linux host, and then run the installation command below. Replace `<REGISTRY.YOURDOMAIN.COM:PORT>` with your private registry URL and port. Replace `<RANCHER_VERSION_TAG>` with release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to install.
|
||||
|
||||
If your private registry doesn't require credentials, you can set it as default when starting the rancher/rancher container by using the environment variable `CATTLE_SYSTEM_DEFAULT_REGISTRY`.
|
||||
|
||||
docker run -d --restart=unless-stopped \
|
||||
-p 80:80 -p 443:443 \
|
||||
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
|
||||
|
||||
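For instance, a sketch of the same command with the default registry set via that environment variable (the value is a placeholder):

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```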
{{% /accordion %}}
{{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Self-Signed" %}}
In development or testing environments where your team will access your Rancher server, create a self-signed certificate for use with your install so that your team can verify they're connecting to your instance of Rancher.

>**Prerequisites:**
>From a computer with an internet connection, create a self-signed certificate using [OpenSSL](https://www.openssl.org/) or another method of your choice.
>
>- The certificate files must be in [PEM format]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#pem).
>- In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [SSL FAQ / Troubleshooting]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#cert-order).

After creating your certificate, run the Docker command below to install Rancher. Use the `-v` flag and provide the path to your certificates to mount them in your container.

When entering the command, use the table below to replace each placeholder.

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<CA_CERTS.pem>` | The path to the certificate authority's certificate.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. This configures Rancher to use your private registry when starting the `rancher/rancher` container.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to install.

If your private registry doesn't require credentials, you can set it as the default when starting the `rancher/rancher` container by using the environment variable `CATTLE_SYSTEM_DEFAULT_REGISTRY`.

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  -v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

{{% /accordion %}}
{{% accordion id="option-c" label="Option C-Bring Your Own Certificate: Signed by Recognized CA" %}}
|
||||
|
||||
In production environments where you're exposing an app publicly, use a certificate signed by a recognized CA so that your user base doesn't encounter security warnings.
|
||||
|
||||
>**Prerequisite:** The certificate files must be in [PEM format]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#pem).
|
||||
|
||||
After obtaining your certificate, run the Docker command below, replacing each placeholder. Because your certificate is signed by a recognized CA, mounting an additional CA certificate file is unnecessary.
|
||||
|
||||
When entering the command, use the table below to replace each placeholder.
|
||||
|
||||
If your private registry doesn't require credentials, you can set it as default when starting the rancher/rancher container by using the environment variable `CATTLE_SYSTEM_DEFAULT_REGISTRY`.
|
||||
|
||||
Placeholder | Description
|
||||
------------|-------------
|
||||
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
|
||||
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
|
||||
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
|
||||
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. Use the `--no-cacerts` as argument to the container to disable the default CA certificate generated by Rancher.
|
||||
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to install.
|
||||
|
||||
|
||||
```
|
||||
docker run -d --restart=unless-stopped \
|
||||
-p 80:80 -p 443:443 \
|
||||
-v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
|
||||
-v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
|
||||
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG> --no-cacerts
|
||||
```
|
||||
|
||||
{{% /accordion %}}
|
||||
|
||||
### [Next: Configure Rancher for the Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-single-node/config-rancher-for-private-reg/)
|
||||
@@ -1,55 +0,0 @@
---
title: "2. Prepare Private Registry"
weight: 200
aliases:
---

>**Prerequisites:** You must have a [private registry](https://docs.docker.com/registry/deploying/) available to use.

By default, all system images are pulled from Docker Hub. If you are on a system that does not have access to Docker Hub, you will need to create a private registry that is populated with all the required [system images]({{< baseurl >}}/rke/latest/en/config-options/system-images/).

When configuring your private registry, you only need to provide credentials if your registry requires them.
## A. Collect Image Sources

Using a computer with internet access, browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher 2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.



From the release's **Assets** section, download the following three files, which are required to install Rancher in an air gap environment:

| Release File | Description |
| --- | --- |
| `rancher-images.txt` | This file contains a list of all the images needed to install Rancher. |
| `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. |
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |

## B. Publish Images

After collecting the release files, publish the images from `rancher-images.txt` to your private registry using the image scripts.

>**Note:** Image publication may require up to 20GB of empty disk space.

1. From a system with internet access, use `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images.

```plain
./rancher-save-images.sh --image-list ./rancher-images.txt
```

1. Copy the `rancher-load-images.sh`, `rancher-images.txt` and `rancher-images.tar.gz` files to the [Linux host]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-single-node/provision-host) that you've provisioned, and then complete the remaining steps there.

1. Log into your registry if required.

```plain
docker login <REGISTRY.YOURDOMAIN.COM:PORT>
```

1. Use `rancher-load-images.sh` to extract, tag and push the images to your private registry.

```plain
./rancher-load-images.sh --image-list ./rancher-images.txt --registry <REGISTRY.YOURDOMAIN.COM:PORT>
```

### [Next: Choose an SSL Option and Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-single-node/install-rancher/)
@@ -1,11 +0,0 @@
---
title: "1. Provision Linux Host"
weight: 100
aliases:
---

Provision a single, air gapped Linux host according to our [Requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements) to launch your {{< product >}} Server.

This host should be disconnected from the internet, but should have connectivity with your private registry.

### [Next: Prepare Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-single-node/prepare-private-registry/)
@@ -0,0 +1,40 @@
---
title: "Air Gap Install"
weight: 290
aliases:
- /rancher/v2.x/en/installation/air-gap-installation/
- /rancher/v2.x/en/installation/air-gap-high-availability/
- /rancher/v2.x/en/installation/air-gap-single-node/
---

This section covers installing the Rancher server in an air gapped environment. An air gapped environment could be where the Rancher server will be installed offline, behind a firewall, or behind a proxy. Throughout the installation instructions, there will be _tabs_ for either a high availability installation or a single node installation.

* **High Availability (HA) Installation:** Rancher recommends installing Rancher in a Highly Available (HA) configuration. An HA installation consists of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.

* **Single Node Installation:** The single node installation is for Rancher users who want to **test** out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. **Important: If you install Rancher following the single node installation guide, there is no upgrade path to transition your single node installation to an HA installation.** Instead of running the single node installation, you have the option to follow the HA install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it an HA installation.

## Prerequisites

Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines. If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/).
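As a minimal sketch, the Docker documentation's example of launching a basic registry looks like the following; for anything beyond testing, you would add TLS and authentication as that documentation describes:

```plain
# Launch a basic private registry on port 5000 (add TLS and auth for production use).
docker run -d -p 5000:5000 --restart=always --name registry registry:2
```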
{{% tabs %}}
{{% tab "HA Install" %}}

The following CLI tools are required for the HA install. Make sure these tools are installed on your workstation and available in your `$PATH`.

* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) - Kubernetes command-line tool.
* [rke]({{< baseurl >}}/rke/latest/en/installation/) - Rancher Kubernetes Engine, CLI for building Kubernetes clusters.
* [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes.

{{% /tab %}}
{{% /tabs %}}

## Installation Outline

- [1. Prepare your Node(s)]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap/prepare-nodes/)
- [2. Collect and Publish Images to your Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap/populate-private-registry/)
- [3. Launch a Kubernetes Cluster with RKE]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap/launch-kubernetes/)
- [4. Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap/install-rancher/)

### [Next: Prepare your Node(s)]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap/prepare-nodes/)
@@ -0,0 +1,333 @@
---
title: 4. Install Rancher
weight: 400
aliases:
- /rancher/v2.x/en/installation/air-gap-installation/install-rancher/
- /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-system-charts/
- /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-for-private-reg/
- /rancher/v2.x/en/installation/air-gap-high-availability/install-rancher/
- /rancher/v2.x/en/installation/air-gap-single-node/install-rancher
---

This section describes how to deploy Rancher in your air gapped environment. An air gapped environment could be where the Rancher server will be installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a single node installation.

{{% tabs %}}
{{% tab "HA Install (Recommended)" %}}

Rancher recommends installing Rancher in a Highly Available (HA) configuration. An HA installation consists of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.

This section describes installing Rancher in five parts:

- [A. Add the Helm Chart Repository](#a-add-the-helm-chart-repository)
- [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration)
- [C. Render the Rancher Helm Template](#c-render-the-rancher-helm-template)
- [D. Install Rancher](#d-install-rancher)
- [E. For Rancher versions prior to v2.3.0, Configure System Charts](#e-for-rancher-versions-prior-to-v2-3-0-configure-system-charts)

### A. Add the Helm Chart Repository

From a system that has access to the internet, fetch the latest Helm chart and copy the resulting manifests to a system that has access to the Rancher server cluster.

1. If you haven't already, initialize `helm` locally on a workstation that has internet access.

```plain
helm init -c
```

2. Use the `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Version of Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories).

{{< release-channel >}}

```
helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
```

3. Fetch the latest Rancher chart. This will pull down the chart and save it in the current directory as a `.tgz` file.

```plain
helm fetch rancher-<CHART_REPO>/rancher
```

>Want additional options? Need help troubleshooting? See [High Availability Install: Advanced Options]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/#advanced-configurations).

### B. Choose your SSL Configuration

Rancher Server is designed to be secure by default and requires SSL/TLS configuration.

For HA air gap configurations, there are two recommended options for the source of the certificate.

> **Note:** If you want to terminate SSL/TLS externally, see [TLS termination on an External Load Balancer]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#external-tls-termination).

| Configuration | Chart option | Description | Requires cert-manager |
|-----|-----|-----|-----|
| Rancher Generated Self-Signed Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed)<br> This is the **default** and does not need to be added when rendering the Helm template. | yes |
| Certificates from Files | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s). <br> This option must be passed when rendering the Rancher Helm template. | no |

### C. Render the Rancher Helm Template

When setting up the Rancher Helm template, there are several options in the Helm chart that are designed specifically for air gap installations.

Chart Option | Chart Value | Description
---|---|---
`systemDefaultRegistry` | `<REGISTRY.YOURDOMAIN.COM:PORT>` | Configure Rancher server to always pull from your private registry when provisioning clusters.
`useBundledSystemChart` | `true` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located on GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. _Available as of v2.3.0_

Based on the choice you made in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), complete one of the procedures below.
{{% accordion id="self-signed" label="Option A-Default Self-Signed Certificate" %}}
|
||||
|
||||
By default, Rancher generates a CA and uses cert-manager to issue the certificate for access to the Rancher server interface.
|
||||
|
||||
> **Note:**
|
||||
> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.9.1, please see our [upgrade cert-manager documentation]({{< baseurl >}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).
|
||||
|
||||
1. From a system connected to the internet, add the cert-manager repo to Helm.
|
||||
|
||||
```plain
|
||||
helm repo add jetstack https://charts.jetstack.io
|
||||
helm repo update
|
||||
```
|
||||
|
||||
1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager).
|
||||
|
||||
```plain
|
||||
helm fetch jetstack/cert-manager --version v0.9.1
|
||||
```
|
||||
|
||||
1. Render the cert manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.
|
||||
|
||||
```plain
|
||||
helm template ./cert-manager-v0.9.1.tgz --output-dir . \
|
||||
--name cert-manager --namespace cert-manager \
|
||||
--set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller
|
||||
--set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook
|
||||
--set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector
|
||||
```
|
||||
|
||||
1. Download the required CRD file for cert-manager
|
||||
|
||||
```plain
|
||||
curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/jetstack/cert-manager/release-0.9/deploy/manifests/00-crds.yaml
|
||||
```
|
||||
|
||||
1. Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.

Placeholder | Description
------------|-------------
`<VERSION>` | The version number of the output tarball.
`<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry.

```plain
# systemDefaultRegistry: available as of v2.2.0, sets a default private registry to be used in Rancher
# useBundledSystemChart: available as of v2.3.0, uses the packaged Rancher system charts
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true
```

{{% /accordion %}}

{{% accordion id="secret" label="Option B: Certificates From Files using Kubernetes Secrets" %}}
|
||||
|
||||
1. Create Kubernetes secrets from your own certificates for Rancher to use.
|
||||
|
||||
> **Note:** The common name for the cert will need to match the `hostname` option or the ingress controller will fail to provision the site for Rancher.
|
||||
|
||||
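As a minimal sketch, the secret can be created with `kubectl` from PEM files, here assumed to be named `tls.crt` and `tls.key` (see the [Adding TLS Secrets] step below for the exact names and details):

```plain
# A sketch: create the TLS secret from your certificate and key files (file names are assumptions).
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```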
1. Render the Rancher template, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.

>**Note:** If you are using a Private CA signed cert, add `--set privateCA=true` following `--set ingress.tls.source=secret`.

Placeholder | Description
------------|-------------
`<VERSION>` | The version number of the output tarball.
`<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry.

```plain
# systemDefaultRegistry: available as of v2.2.0, sets a default private registry to be used in Rancher
# useBundledSystemChart: available as of v2.3.0, uses the packaged Rancher system charts
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true
```

1. See [Adding TLS Secrets]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them.

{{% /accordion %}}

### D. Install Rancher

Copy the rendered manifest directories to a system that has access to the Rancher server cluster to complete installation.
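For example, the rendered directories could be bundled up and copied over with something like the following sketch (the user name and host are placeholders):

```plain
# A sketch: bundle the rendered manifests and copy them to a host that can reach the cluster.
tar czf rendered-manifests.tar.gz ./cert-manager ./rancher
scp rendered-manifests.tar.gz <USER>@<AIRGAP_HOST>:~/
```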
Use `kubectl` to create namespaces and apply the rendered manifests.

If you chose to use self-signed certificates in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), install cert-manager.

{{% accordion id="install-cert-manager" label="Self-Signed Certificate Installs - Install Cert-manager" %}}

If you are using self-signed certificates, install cert-manager:

1. Create the namespace for cert-manager.
```plain
kubectl create namespace cert-manager
```

1. Label the cert-manager namespace to disable resource validation.
```plain
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
```

1. Create the cert-manager CustomResourceDefinitions (CRDs).
```plain
kubectl apply -f cert-manager/cert-manager-crd.yaml
```

1. Launch cert-manager.
```plain
kubectl apply -R -f ./cert-manager
```

{{% /accordion %}}

Install Rancher:

```plain
kubectl create namespace cattle-system
kubectl -n cattle-system apply -R -f ./rancher
```

**Step Result:** If you are installing Rancher v2.3.0+, the installation is complete.

### E. For Rancher versions prior to v2.3.0, Configure System Charts

If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted on GitHub, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0).

### Additional Resources

These resources could be helpful when installing Rancher:

- [Rancher Helm chart options]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/)
- [Adding TLS secrets]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/)
- [Troubleshooting Rancher HA installations]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/troubleshooting/)

{{% /tab %}}
{{% tab "Single Node Install" %}}

The single node installation is for Rancher users who want to **test** out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. **Important: If you install Rancher following the single node installation guide, there is no upgrade path to transition your single node installation to an HA installation.** Instead of running the single node installation, you have the option to follow the HA install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it an HA installation.

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, like when you log in or interact with a cluster.

Environment Variable Key | Environment Variable Value | Description
---|---|---
`CATTLE_SYSTEM_DEFAULT_REGISTRY` | `<REGISTRY.YOURDOMAIN.COM:PORT>` | Configure Rancher server to always pull from your private registry when provisioning clusters.
`CATTLE_SYSTEM_CATALOG` | `bundled` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located on GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. _Available as of v2.3.0_

>**Do you want to...**
>
>- Configure a custom CA root certificate to access your services? See [Custom CA root certificate]({{< baseurl >}}/rancher/v2.x/en/admin-settings/custom-ca-root-certificate/).
>- Record all transactions with the Rancher API? See [API Auditing]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#api-audit-log).

- For Rancher prior to v2.3.0, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach, as sketched below. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0).
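As a minimal sketch, assuming an internal Git server is available, the mirror could be created like this (the internal server URL is a hypothetical placeholder):

```plain
# Mirror the system-charts repository to an internal Git server (URL is a placeholder).
git clone --mirror https://github.com/rancher/system-charts.git
cd system-charts.git
git push --mirror https://git.yourdomain.com/rancher/system-charts.git
```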
Choose from the following options:

{{% accordion id="option-a" label="Option A-Default Self-Signed Certificate" %}}

If you are installing Rancher in a development or testing environment where identity verification isn't a concern, install Rancher using the self-signed certificate that it generates. This installation option omits the hassle of generating a certificate yourself.

Log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder.

Placeholder | Description
------------|-------------
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to install.

```
# CATTLE_SYSTEM_DEFAULT_REGISTRY sets a default private registry to be used in Rancher.
# CATTLE_SYSTEM_CATALOG=bundled (available as of v2.3.0) uses the packaged Rancher system charts.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
  -e CATTLE_SYSTEM_CATALOG=bundled \
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

{{% /accordion %}}
|
||||
{{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Self-Signed" %}}
|
||||
|
||||
In development or testing environments where your team will access your Rancher server, create a self-signed certificate for use with your install so that your team can verify they're connecting to your instance of Rancher.
|
||||
|
||||
>**Prerequisites:**
|
||||
>From a computer with an internet connection, create a self-signed certificate using [OpenSSL](https://www.openssl.org/) or another method of your choice.
|
||||
>
|
||||
>- The certificate files must be in [PEM format]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#pem).
|
||||
>- In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [SSL FAQ / Troubleshooting]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#cert-order).
After creating your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Use the `-v` flag and provide the path to your certificates to mount them in your container. The `CATTLE_SYSTEM_DEFAULT_REGISTRY` variable sets a default private registry to be used in Rancher, and `CATTLE_SYSTEM_CATALOG=bundled` (available as of v2.3.0) tells Rancher to use the packaged system charts.

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<CA_CERTS.pem>` | The path to the certificate authority's certificate.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to install.

```
docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
    -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
    -v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
    -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
    -e CATTLE_SYSTEM_CATALOG=bundled \
    <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```
{{% /accordion %}}
{{% accordion id="option-c" label="Option C-Bring Your Own Certificate: Signed by Recognized CA" %}}

In production environments where you're exposing an app publicly, use a certificate signed by a recognized CA so that your user base doesn't encounter security warnings.

>**Prerequisite:** The certificate files must be in [PEM format]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#pem).

After obtaining your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Because your certificate is signed by a recognized CA, mounting an additional CA certificate file is unnecessary. The `CATTLE_SYSTEM_DEFAULT_REGISTRY` variable sets a default private registry to be used in Rancher, and `CATTLE_SYSTEM_CATALOG=bundled` (available as of v2.3.0) tells Rancher to use the packaged system charts.

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to install.

> **Note:** Pass the `--no-cacerts` argument to the container to disable the default CA certificate generated by Rancher. Because it is an argument to Rancher rather than a Docker option, it must come after the image name.

```
docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
    -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
    -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
    -e CATTLE_SYSTEM_CATALOG=bundled \
    <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG> \
    --no-cacerts
```
{{% /accordion %}}

If you are installing Rancher v2.3.0+, the installation is complete.

If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted in GitHub, an air gapped installation will not be able to access these charts. Therefore, you must [configure the Rancher system charts]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0).

{{% /tab %}}
{{% /tabs %}}
@@ -1,11 +1,19 @@
---
title: "3. Install Kubernetes with RKE"
title: "3. Install Kubernetes with RKE (HA Installs Only)"
weight: 300
aliases:
- /rancher/v2.x/en/installation/air-gap-high-availability/install-kube
---

## A. Create an RKE Config File
>**Note:** Applicable only to HA installations.

Rancher recommends installing Rancher in a Highly Available (HA) configuration. An HA installation is comprised of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.

This section is about how to prepare to launch a Kubernetes cluster which is used to deploy Rancher server for your air gapped environment. An air gapped environment could be where Rancher server will be installed offline, behind a firewall, or behind a proxy.

Since an HA installation requires a Kubernetes cluster, we will create a Kubernetes cluster using [Rancher Kubernetes Engine]({{< baseurl >}}/rke/latest/en/) (RKE). Before being able to start your Kubernetes cluster, you'll need to create an RKE config file.

### A. Create an RKE Config File

From a system that can access ports 22/tcp and 6443/tcp on your host nodes, use the sample below to create a new file named `rancher-cluster.yml`. This file is a Rancher Kubernetes Engine configuration file (RKE config file), which is a configuration for the cluster you're deploying Rancher to.

@@ -51,15 +59,15 @@ private_registries:
      is_default: true
```
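
The full sample file is abridged in this hunk. As a rough sketch only, a three-node configuration with a private registry entry might resemble the following; the node addresses, SSH user, and registry details are placeholders to adapt.

```yaml
nodes:
  - address: 10.0.0.1                  # placeholder node IPs
    user: ubuntu                       # SSH user with access to Docker
    role: [controlplane, worker, etcd]
  - address: 10.0.0.2
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 10.0.0.3
    user: ubuntu
    role: [controlplane, worker, etcd]

private_registries:
  - url: registry.example.com:5000     # your private registry
    user: admin                        # registry credentials, if required
    password: changeme
    is_default: true
```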
## B. Run RKE
### B. Run RKE

After configuring `rancher-cluster.yml`, open Terminal and change directories to the RKE binary. Then enter the command below to stand up your high availability cluster.
After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:

```
rke up --config ./rancher-cluster.yml
```
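
When `rke up` completes, it writes a kubeconfig file next to the config file. A quick sanity check that all three nodes registered, assuming the default kubeconfig name, might be:

```plain
kubectl --kubeconfig kube_config_rancher-cluster.yml get nodes
```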
## C. Save Your Files
### C. Save Your Files

> **Important**
> The files mentioned below are needed to maintain, troubleshoot and upgrade your cluster.
@@ -68,6 +76,6 @@ Save a copy of the following files in a secure location:

- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_rancher-cluster.yml`: The [Kubeconfig file]({{< baseurl >}}/rke/latest/en/kubeconfig/) for the cluster, this file contains credentials for full access to the cluster.
- `rancher-cluster.rkestate`: The [Kubernetes Cluster State file]({{< baseurl >}}/rke/latest/en/installation/#kubernetes-cluster-state), this file contains credentials for full access to the cluster.<br/><br/>_The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher._

### [Next: Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-high-availability/install-rancher)
### [Next: Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap/install-rancher)
@@ -0,0 +1,291 @@
---
title: "2. Collect and Publish Images to your Private Registry"
weight: 200
aliases:
- /rancher/v2.x/en/installation/air-gap-installation/prepare-private-reg/
- /rancher/v2.x/en/installation/air-gap-high-availability/prepare-private-registry/
- /rancher/v2.x/en/installation/air-gap-single-node/prepare-private-registry/
---

>**Prerequisites:** You must have a [private registry](https://docs.docker.com/registry/deploying/) available to use.
>
>**Note:** Populating the private registry with images is the same process for HA and single node installations. The differences in this section are based on whether you plan to provision a Windows cluster.

By default, all images used to [provision Kubernetes clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/) or launch any [tools]({{<baseurl>}}/rancher/v2.x/en/tools/) in Rancher, e.g. monitoring, pipelines, alerts, are pulled from Docker Hub. In an air gap installation of Rancher, you will need a private registry that is located somewhere accessible by your Rancher server. Then, you will load the registry with all the images.

This section describes how to set up your private registry so that when you install Rancher, Rancher will pull all the required images from this registry.

By default, we provide the steps for how to populate your private registry assuming you are provisioning Linux-only clusters, but if you plan on provisioning any [Windows clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/), there are separate instructions to support the images needed for a Windows cluster.
{{% tabs %}}
{{% tab "Linux Only Clusters" %}}

For Rancher servers that will only provision Linux clusters, these are the steps to populate your private registry.

A. Find the required assets for your Rancher version <br>
B. Collect all the required images <br>
C. Save the images to your workstation <br>
D. Populate the private registry

### Prerequisites

These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space.
### A. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.

2. From the release's **Assets** section, download the following files, which are required to install Rancher in an air gap environment:

    | Release File | Description |
    | --- | --- |
    | `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters and use Rancher tools. |
    | `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. |
    | `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |
### B. Collect all the required images (For HA Installs using Rancher Generated Self-Signed Certificate)

In an HA install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. You can skip this step if you are using your own certificates.

1. Fetch the latest `cert-manager` Helm chart and parse the template for image details:

    > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.9.1, please see our [upgrade documentation]({{< baseurl >}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).

    ```plain
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    helm fetch jetstack/cert-manager --version v0.9.1
    helm template ./cert-manager-<version>.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt
    ```
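
    The `grep` extracts the quoted image references from the rendered templates. For the v0.9.1 chart, the appended lines might look something like the following; the exact image names and tags depend on the chart version:

    ```plain
    quay.io/jetstack/cert-manager-controller:v0.9.1
    quay.io/jetstack/cert-manager-webhook:v0.9.1
    quay.io/jetstack/cert-manager-cainjector:v0.9.1
    ```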
2. Sort and de-duplicate the image list to remove any overlap between the sources:

    ```plain
    sort -u rancher-images.txt -o rancher-images.txt
    ```

### C. Save the images to your workstation

1. Make `rancher-save-images.sh` an executable:

    ```
    chmod +x rancher-save-images.sh
    ```

1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images:

    ```plain
    ./rancher-save-images.sh --image-list ./rancher-images.txt
    ```

    **Step Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-images.tar.gz`. Check that the output is in the directory.
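
    A quick way to confirm the tarball exists (a sketch; the size will vary by release):

    ```plain
    ls -lh rancher-images.tar.gz
    ```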
### D. Populate the private registry

Move the images in the `rancher-images.tar.gz` to your private registry using the scripts to load the images. The `rancher-images.txt` is expected to be on the workstation in the same directory that you are running the `rancher-load-images.sh` script.

1. Log into your private registry if required:

    ```plain
    docker login <REGISTRY.YOURDOMAIN.COM:PORT>
    ```

1. Make `rancher-load-images.sh` an executable:

    ```
    chmod +x rancher-load-images.sh
    ```

1. Use `rancher-load-images.sh` to extract and tag the images from `rancher-images.tar.gz` and push them, per the `rancher-images.txt` list, to your private registry:

    ```plain
    ./rancher-load-images.sh --image-list ./rancher-images.txt --registry <REGISTRY.YOURDOMAIN.COM:PORT>
    ```
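
    To spot-check that the push succeeded, you might pull one of the listed images back from the registry; the image name and tag here are illustrative:

    ```plain
    docker pull <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher-agent:<RANCHER_VERSION_TAG>
    ```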
{{% /tab %}}
{{% tab "Linux and Windows Clusters" %}}

_Available as of v2.3.0_

For Rancher servers that will provision Linux and Windows clusters, there are separate steps to populate your private registry for the Windows images and the Linux images. Since a Windows cluster is a mix of Linux and Windows nodes, the Linux images pushed into the private registry are manifests.

### Windows Steps

The Windows images need to be collected and pushed from a Windows Server workstation.

A. Find the required assets for your Rancher version <br>
B. Save the images to your Windows Server workstation <br>
C. Prepare the Docker daemon <br>
D. Populate the private registry
{{% accordion label="Collecting and Populating Windows Images into the Private Registry" %}}

### Prerequisites

These steps expect you to use a Windows Server (version 1903) workstation that has internet access, access to your private registry, and at least 50 GB of disk space.

The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters.

### A. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.

2. From the release's **Assets** section, download the following files:

    | Release File | Description |
    | --- | --- |
    | `rancher-windows-images.txt` | This file contains a list of Windows images needed to provision Windows clusters. |
    | `rancher-save-images.ps1` | This script pulls all the images in the `rancher-windows-images.txt` from Docker Hub and saves all of the images as `rancher-windows-images.tar.gz`. |
    | `rancher-load-images.ps1` | This script loads the images from the `rancher-windows-images.tar.gz` file and pushes them to your private registry. |
### B. Save the images to your Windows Server workstation

1. Using PowerShell, go to the directory that has the files that were downloaded in the previous step.

1. Run `rancher-save-images.ps1` to create a tarball of all the required images:

    ```plain
    ./rancher-save-images.ps1
    ```

    **Step Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-windows-images.tar.gz`. Check that the output is in the directory.
### C. Prepare the Docker daemon

1. Append your private registry address to the `allow-nondistributable-artifacts` config field in the Docker daemon configuration (`C:\ProgramData\Docker\config\daemon.json`). Since the base images of Windows images are maintained in the `mcr.microsoft.com` registry, this step is required: the layers in the Microsoft registry are missing from Docker Hub and need to be pulled into the private registry.

    ```json
    {
        ...
        "allow-nondistributable-artifacts": [
            ...
            "<REGISTRY.YOURDOMAIN.COM:PORT>"
        ]
        ...
    }
    ```
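
    After editing `daemon.json`, the Docker daemon must pick up the change. Restarting the service from an elevated PowerShell session is one way to do that (a sketch, assuming the default service name):

    ```plain
    Restart-Service docker
    ```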
### D. Populate the private registry

Move the images in the `rancher-windows-images.tar.gz` to your private registry using the scripts to load the images. The `rancher-windows-images.txt` is expected to be on the workstation in the same directory that you are running the `rancher-load-images.ps1` script.

1. Using PowerShell, log into your private registry if required:

    ```plain
    docker login <REGISTRY.YOURDOMAIN.COM:PORT>
    ```

1. Using PowerShell, use `rancher-load-images.ps1` to extract, tag and push the images from `rancher-windows-images.tar.gz` to your private registry:

    ```plain
    ./rancher-load-images.ps1 --registry <REGISTRY.YOURDOMAIN.COM:PORT>
    ```
{{% /accordion %}}

### Linux Steps

The Linux images need to be collected and pushed from a Linux host, and this *must be done after* populating the Windows images into the private registry. These steps are different from the Linux-only steps because the Linux images that are pushed will actually be manifests that support both Windows and Linux images.

A. Find the required assets for your Rancher version <br>
B. Collect all the required images <br>
C. Save the images to your Linux workstation <br>
D. Populate the private registry

{{% accordion label="Collecting and Populating Linux Images into the Private Registry" %}}

### Prerequisites

You must populate the private registry with the Windows images before populating the private registry with Linux images. If you have already populated the registry with Linux images, you will need to follow these instructions again, as they will publish manifests that support both Windows and Linux images.

These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space.

The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters.
### A. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.

2. From the release's **Assets** section, download the following files, which are required to install Rancher in an air gap environment:

    | Release File | Description |
    | --- | --- |
    | `rancher-images.txt` | This file contains a list of images needed to install Rancher, provision clusters and use Rancher tools. |
    | `rancher-windows-images.txt` | This file contains a list of images needed to provision Windows clusters. |
    | `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. |
    | `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |
### B. Collect all the required images

1. **For HA Installs using Rancher Generated Self-Signed Certificate:** In an HA install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. You can skip this step if you are using your own certificates.

1. Fetch the latest `cert-manager` Helm chart and parse the template for image details:

    > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.9.1, please see our [upgrade documentation]({{< baseurl >}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).

    ```plain
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    helm fetch jetstack/cert-manager --version v0.9.1
    helm template ./cert-manager-<version>.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt
    ```

2. Sort and de-duplicate the image list to remove any overlap between the sources:

    ```plain
    sort -u rancher-images.txt -o rancher-images.txt
    ```
### C. Save the images to your workstation

1. Make `rancher-save-images.sh` an executable:

    ```
    chmod +x rancher-save-images.sh
    ```

1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images:

    ```plain
    ./rancher-save-images.sh --image-list ./rancher-images.txt
    ```

    **Step Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-images.tar.gz`. Check that the output is in the directory.
### D. Populate the private registry

Move the images in the `rancher-images.tar.gz` to your private registry using the scripts to load the images. The `rancher-images.txt` and `rancher-windows-images.txt` files are expected to be on the workstation in the same directory that you are running the `rancher-load-images.sh` script.

1. Log into your private registry if required:

    ```plain
    docker login <REGISTRY.YOURDOMAIN.COM:PORT>
    ```

1. Make `rancher-load-images.sh` an executable:

    ```
    chmod +x rancher-load-images.sh
    ```

1. Use `rancher-load-images.sh` to extract, tag and push the images from `rancher-images.tar.gz` to your private registry:

    ```plain
    ./rancher-load-images.sh --image-list ./rancher-images.txt \
        --windows-image-list ./rancher-windows-images.txt \
        --registry <REGISTRY.YOURDOMAIN.COM:PORT>
    ```
{{% /accordion %}}

{{% /tab %}}
{{% /tabs %}}

### [Next: HA Installs - Launch a Kubernetes Cluster with RKE]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap/launch-kubernetes/)

### [Next: Single Node Installs - Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap/install-rancher/)
@@ -0,0 +1,61 @@
---
title: "1. Prepare your Node(s)"
weight: 100
aliases:
- /rancher/v2.x/en/installation/air-gap-high-availability/provision-hosts
- /rancher/v2.x/en/installation/air-gap-single-node/provision-host
---

This section is about how to prepare your node(s) to install Rancher for your air gapped environment. An air gapped environment could be where Rancher server will be installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a single node installation.

{{% tabs %}}
{{% tab "HA Install (Recommended)" %}}

Rancher recommends installing Rancher in a Highly Available (HA) configuration. An HA installation is comprised of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.

### Recommended Architecture

- DNS for Rancher should resolve to a layer 4 load balancer.
- The load balancer should forward ports TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster.
- The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443.
- The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment.

<figcaption>HA Rancher install with layer 4 load balancer, depicting SSL termination at ingress controllers</figcaption>
### A. Provision three air gapped Linux hosts according to our requirements

These hosts will be disconnected from the internet, but require being able to connect with your private registry.

View hardware and software requirements for each of your cluster nodes in [Requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements).

### B. Set up your Load Balancer

When setting up the Kubernetes cluster that will run the Rancher server components, an Ingress controller pod will be deployed on each of your nodes. The Ingress controller pods are bound to ports TCP/80 and TCP/443 on the host network and are the entry point for HTTPS traffic to the Rancher server.

You will need to configure a load balancer as a basic Layer 4 TCP forwarder to direct traffic to these ingress controller pods. The exact configuration will vary depending on your environment.
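
As a rough illustration of the Layer 4 approach (see the linked samples below for complete, supported configurations), an NGINX `stream` excerpt forwarding to three hypothetical node IPs might look like:

```plain
stream {
    upstream rancher_servers {
        server 10.0.0.1:443;   # placeholder node IPs
        server 10.0.0.2:443;
        server 10.0.0.3:443;
    }
    server {
        listen 443;
        proxy_pass rancher_servers;
    }
}
```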
>**Important:**
>Only use this load balancer (i.e., the `local` cluster Ingress) to load balance the Rancher server. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps.

**Load Balancer Configuration Samples:**

- [NGINX]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx)
- [Amazon NLB]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/nlb)

{{% /tab %}}
{{% tab "Single Node Install" %}}

The single node installation is for Rancher users who want to **test** out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. **Important: If you install Rancher following the single node installation guide, there is no upgrade path to transition your single node installation to an HA installation.** Instead of running the single node installation, you have the option to follow the HA install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it an HA installation.

### A. Provision a single, air gapped Linux host according to our Requirements

This host will be disconnected from the internet, but requires being able to connect with your private registry.

View hardware and software requirements for each of your cluster nodes in [Requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements).

{{% /tab %}}
{{% /tabs %}}

### [Next: Collect and Publish Images to your Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap/populate-private-registry/)
@@ -36,7 +36,6 @@ There are three recommended options for the source of the certificate.
**Note:** cert-manager is only required for certificates issued by Rancher's generated CA (`ingress.tls.source=rancher`) and Let's Encrypt issued certificates (`ingress.tls.source=letsEncrypt`). You should skip this step if you are using your own certificate files (option `ingress.tls.source=secret`) or if you use [TLS termination on an External Load Balancer]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#external-tls-termination).

> **Important:**
> Due to an issue with Helm v2.12.0 and cert-manager, please use Helm v2.12.1 or higher.
> Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.9.1, please see our [upgrade documentation]({{< baseurl >}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/).

@@ -40,6 +40,8 @@ weight: 276
| `rancherImage` | "rancher/rancher" | `string` - rancher image source |
| `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag |
| `tls` | "ingress" | `string` - See [External TLS Termination](#external-tls-termination) for details. - "ingress, external" |
| `systemDefaultRegistry` | "" | `string` - private registry to be used for all system Docker images, e.g., http://registry.example.com/ _Available as of v2.3.0_ |
| `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. _Available as of v2.3.0_ |

<br/>

@@ -64,8 +66,8 @@ _Available as of v2.2.0_
You can set extra environment variables for Rancher server using `extraEnv`. This list uses the same `name` and `value` keys as the container manifest definitions. Remember to quote the values.

```plain
--set 'extraEnv[0].name=CATTLE_SYSTEM_DEFAULT_REGISTRY'
--set 'extraEnv[0].value=http://registry.example.com/'
--set 'extraEnv[0].name=CATTLE_TLS_MIN_VERSION'
--set 'extraEnv[0].value=1.0'
```
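
This hunk replaces one example variable with another. If you need to set several variables at once, each entry gets its own index, roughly like this (the values are illustrative):

```plain
--set 'extraEnv[0].name=CATTLE_SYSTEM_DEFAULT_REGISTRY'
--set 'extraEnv[0].value=http://registry.example.com/'
--set 'extraEnv[1].name=CATTLE_TLS_MIN_VERSION'
--set 'extraEnv[1].value=1.0'
```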
### TLS settings

@@ -7,7 +7,8 @@ When installing Rancher, there are several advanced options that can be enabled

| Advanced Option | Available as of |
| --- | ---|
| [Custom CA Certificate]({{< baseurl >}}/rancher/v2.x/en/installation/options/custom-ca-root-certificate/) | v2.0.0 |
| [API Audit Log]({{< baseurl >}}/rancher/v2.x/en/installation/options/api-audit-log/) | v2.0.0 |
| [TLS Settings]({{< baseurl >}}/rancher/v2.x/en/installation/options/tls-settings/) | v2.1.7 |
| [etcd configuration]({{< baseurl >}}/rancher/v2.x/en/installation/options/etcd/) | v2.2.0 |
| [Custom CA Certificate]({{<baseurl>}}/rancher/v2.x/en/installation/options/custom-ca-root-certificate/) | v2.0.0 |
| [API Audit Log]({{<baseurl>}}/rancher/v2.x/en/installation/options/api-audit-log/) | v2.0.0 |
| [TLS Settings]({{<baseurl>}}/rancher/v2.x/en/installation/options/tls-settings/) | v2.1.7 |
| [etcd configuration]({{<baseurl>}}/rancher/v2.x/en/installation/options/etcd/) | v2.2.0 |
| [Local System Charts for Air Gap Installations]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts) | v2.3.0 |
@@ -0,0 +1,68 @@
---
title: Local System Charts for Air Gap Installations
weight: 1120
aliases:
- /rancher/v2.x/en/installation/air-gap-single-node/config-rancher-system-charts/_index.md
- /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-system-charts/_index.md
---

The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS.

In an air gapped installation of Rancher, you will need to configure Rancher to use a local copy of the system charts. This section describes how to use local system charts using a CLI flag in Rancher v2.3.0, and using a Git mirror for Rancher versions prior to v2.3.0.

# Using Local System Charts in Rancher v2.3.0

In Rancher v2.3.0, a local copy of `system-charts` has been packaged into the `rancher/rancher` container. To be able to use these features in an air gap install, you will need to run the Rancher install command with an extra environment variable, `CATTLE_SYSTEM_CATALOG=bundled`, which tells Rancher to use the local copy of the charts instead of attempting to fetch them from GitHub.

Example commands for a Rancher installation with a bundled `system-charts` are included in the [air gap single node installation]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap-single-node/install-rancher) instructions and the [air gap high availability installation]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap-high-availability/install-rancher/#c-install-rancher) instructions.
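
In the single node case, for example, the flag is just one more `-e` option on the install command; a minimal sketch, where the registry and tag are placeholders:

```plain
docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    -e CATTLE_SYSTEM_CATALOG=bundled \
    <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```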
# Setting Up System Charts for Rancher Prior to v2.3.0

### A. Prepare System Charts

The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. To be able to use these features in an air gap install, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach and configure Rancher to use that repository.

Refer to the release notes in the `system-charts` repository to see which branch corresponds to your version of Rancher.

### B. Configure System Charts

Rancher needs to be configured to use your Git mirror of the `system-charts` repository. You can configure the system charts repository either from the Rancher UI or from Rancher's API view.

{{% tabs %}}
{{% tab "Rancher UI" %}}

In the catalog management page in the Rancher UI, follow these steps:

1. Go to the **Global** view.

1. Click **Tools > Catalogs.**

1. The system chart is displayed under the name `system-library`. To edit the configuration of the system chart, click **Ellipsis (...) > Edit.**

1. In the **Catalog URL** field, enter the location of the Git mirror of the `system-charts` repository.

1. Click **Save.**

**Result:** Rancher is configured to download all the required catalog items from your `system-charts` repository.
{{% /tab %}}
{{% tab "Rancher API" %}}

1. Log into Rancher.

1. Open `https://<your-rancher-server>/v3/catalogs/system-library` in your browser.

1. Click **Edit** in the upper right corner and update the value for **url** to the location of the Git mirror of the `system-charts` repository.

1. Click **Show Request.**

1. Click **Send Request.**

**Result:** Rancher is configured to download all the required catalog items from your `system-charts` repository.

{{% /tab %}}
{{% /tabs %}}
@@ -18,6 +18,7 @@ To address these changes, this guide will do two things:
>**Note:** The namespace used in these instructions depends on the namespace cert-manager is currently installed in. If it is in kube-system use that in the instructions below. You can verify by running `kubectl get pods --all-namespaces` and checking which namespace the cert-manager-\* pods are listed in. Do not change the namespace cert-manager is running in or this can cause issues.

In order to upgrade cert-manager, follow these instructions:

{{% accordion id="normal" label="Upgrading cert-manager with Internet access" %}}
1. Back up existing resources as a precaution
    ```plain
@@ -57,6 +58,7 @@ In order to upgrade cert-manager, follow these instructions:

{{% accordion id="airgap" label="Upgrading cert-manager in an airgapped environment" %}}
### Prerequisites

Before you can perform the upgrade, you must prepare your air gapped environment by adding the necessary container images to your private registry and downloading or rendering the required Kubernetes manifest files.

1. Follow the guide to [Prepare your Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/prepare-private-reg/) with the images needed for the upgrade.
@@ -98,10 +100,10 @@ Before you can perform the upgrade, you must prepare your air gapped environment
    kubectl get -o yaml --all-namespaces issuer,clusterissuer,certificates > cert-manager-backup.yaml
    ```

1. Delete the existing deployment
1. Delete the existing cert-manager installation

    ```plain
    helm delete --purge cert-manager
    kubectl -n kube-system delete deployment,sa,clusterrole,clusterrolebinding -l 'app=cert-manager' -l 'chart=cert-manager-v0.5.2'
    ```

1. Install the CustomResourceDefinition resources separately
@@ -144,7 +146,7 @@ If the ‘webhook’ pod (2nd line) is in a ContainerCreating state, it may stil

## Cert-Manager API change and data migration

Cert-manager has deprecated the use of the `certificate.spec.acme.solvers` field and will drop support for it completely in an upcoming release.

Per the cert-manager documentation, a new format for configuring ACME certificate resources was introduced in v0.8. Specifically, the challenge solver configuration field was moved. Both the old format and new are supported as of v0.9, but support for the old format will be dropped in an upcoming release of cert-manager. The cert-manager documentation strongly recommends that after upgrading you update your ACME Issuer and Certificate resources to the new format.
@@ -13,6 +13,8 @@ The following table lists the ports that need to be open to and from nodes that

{{< ports-rancher-nodes >}}

**Note:** Rancher nodes may also require additional outbound access for any external [authentication provider]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/) which is configured (LDAP for example).

## Kubernetes Cluster Nodes

The ports required to be open for cluster nodes change depending on how the cluster was launched. Each of the tabs below lists the ports that need to be opened for different [cluster creation options]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-options).

@@ -21,8 +21,8 @@ Rancher is tested on the following operating systems and their subsequent non-ma
* RancherOS 1.5.1 (64-bit x86)
    * Docker 17.03.x, 18.06.x, 18.09.x
* Windows Server 2019 (64-bit x86)
    * Docker 18.09
        * _Experimental, see [Configuring Custom Clusters for Windows]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/)_
    * Docker 19.03
        * Supported for worker nodes only. See [Configuring Custom Clusters for Windows]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/)

If you are using RancherOS, make sure you switch the Docker engine to a supported version using:<br>
```
@@ -51,8 +51,12 @@ In development or testing environments where your team will access your Rancher

After creating your certificate, run the Docker command below to install Rancher. Use the `-v` flag and provide the path to your certificates to mount them in your container.

- Replace `<CERT_DIRECTORY>` with the directory path to your certificate file.
- Replace `<FULL_CHAIN.pem>`, `<PRIVATE_KEY.pem>`, and `<CA_CERTS>` with your certificate names.
Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<CA_CERTS>` | The path to the certificate authority's certificate.

```
docker run -d --restart=unless-stopped \
@@ -67,27 +71,35 @@ docker run -d --restart=unless-stopped \
In production environments where you're exposing an app publicly, use a certificate signed by a recognized CA so that your user base doesn't encounter security warnings.

>**Prerequisite:** The certificate files must be in [PEM format](#pem).
>**Prerequisites:**
>
>- The certificate files must be in [PEM format](#pem).
>- In your certificate file, include all intermediate certificates provided by the recognized CA. Order your certificates with your certificate first, followed by the intermediates. For an example, see [SSL FAQ / Troubleshooting](#cert-order).

After obtaining your certificate, run the Docker command below.

- Use the `-v` flag and provide the path to your certificates to mount them in your container. Because your certificate is signed by a recognized CA, mounting an additional CA certificate file is unnecessary.

- Replace `<CERT_DIRECTORY>` with the directory path to your certificate file.
- Replace `<FULL_CHAIN.pem>` and `<PRIVATE_KEY.pem>` with your certificate names.

- Pass the `--no-cacerts` argument to the container to disable the default CA certificate generated by Rancher.
Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.

```
docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
    -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
    rancher/rancher:latest --no-cacerts
    rancher/rancher:latest \
    --no-cacerts
```
{{% /accordion %}}
{{% accordion id="option-d" label="Option D-Let's Encrypt Certificate" %}}

>**Remember:** Let's Encrypt provides rate limits for requesting new certificates. Therefore, limit how often you create or destroy the container. For more information, see [Let's Encrypt documentation on rate limits](https://letsencrypt.org/docs/rate-limits/).

For production environments, you also have the option of using [Let's Encrypt](https://letsencrypt.org/) certificates. Let's Encrypt uses an http-01 challenge to verify that you have control over your domain. You can confirm that you control the domain by pointing the hostname that you want to use for Rancher access (for example, `rancher.mydomain.com`) to the IP of the machine it is running on. You can bind the hostname to the IP address by creating an A record in DNS.
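
Before starting the container, a quick way to confirm the A record resolves to your host (the hostname is illustrative):

```plain
dig +short rancher.mydomain.com
```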
>**Prerequisites:**
@@ -97,14 +109,19 @@ For production environments, you also have the option of using [Let's Encrypt](h
>- Open port `TCP/80` on your Linux host. The Let's Encrypt http-01 challenge can come from any source IP address, so port `TCP/80` must be open to all IP addresses.

After you fulfill the prerequisites, you can install Rancher using a Let's Encrypt certificate by running the following command. Replace `<YOUR.DNS.NAME>` with your domain.
After you fulfill the prerequisites, you can install Rancher using a Let's Encrypt certificate by running the following command.
    docker run -d --restart=unless-stopped \
        -p 80:80 -p 443:443 \
        rancher/rancher:latest \
        --acme-domain <YOUR.DNS.NAME>
Placeholder | Description
------------|-------------
`<YOUR.DNS.NAME>` | Your domain address.

```
docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    rancher/rancher:latest \
    --acme-domain <YOUR.DNS.NAME>
```

>**Remember:** Let's Encrypt provides rate limits for requesting new certificates. Therefore, limit how often you create or destroy the container. For more information, see [Let's Encrypt documentation on rate limits](https://letsencrypt.org/docs/rate-limits/).
{{% /accordion %}}

## What's Next?
@@ -1,5 +1,5 @@
---
title: How to Use Istio in Your Project
title: Istio
weight: 3528
---

@@ -7,49 +7,15 @@ _Available as of v2.3.0-alpha5_

Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications.

Istio requires each pod in the service mesh to run an Istio compatible sidecar. This section describes how to set up Istio sidecar auto injection in the Rancher UI. For more information on the Istio sidecar, refer to the [Istio docs](https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/).
This service mesh provides features that include but are not limited to the following:

>**Prerequisites:**
>
>- [Istio]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/istio/) must be enabled in the cluster.
>- To be a part of an Istio service mesh, pods and services in a Kubernetes cluster must satisfy the [Istio Pods and Services Requirements](https://istio.io/docs/setup/kubernetes/prepare/requirements/).
- Traffic management features
- Enhanced monitoring and tracing
- Service discovery and routing
- Secure connections and service-to-service authentication with mutual TLS
- Load balancing
- Automatic retries, backoff, and circuit breaking
## Istio Sidecar Auto Injection
Istio needs to be set up by a Rancher administrator or cluster administrator before it can be used in a project for [comprehensive data visualizations,]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/#accessing-visualizations) traffic management, or any of its other features.

If an Istio sidecar is not injected into a pod, Istio will not work for that pod. If you enable Istio sidecar auto injection for a namespace, all pods created in the namespace will have an injected Istio sidecar.

In the create and edit namespace page, you can enable or disable [Istio sidecar auto injection](https://istio.io/blog/2019/data-plane-setup/#automatic-injection). When you enable it, Rancher will add the `istio-injection=enabled` label to the namespace automatically.

Injection occurs at pod creation time. If the pod has been created before you enable auto injection, you need to kill the running pod and verify that a new pod is created with the injected sidecar.

For information on how to inject the Istio sidecar manually, refer to the [Istio docs](https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/).
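
Outside the Rancher UI, the same effect can be achieved with `kubectl`; a sketch using a hypothetical namespace named `demo`:

```plain
# Label the namespace so that new pods get the sidecar injected
kubectl label namespace demo istio-injection=enabled

# Recreate an already-running pod so it picks up the sidecar
kubectl -n demo delete pod <POD_NAME>
```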
## View Traffic Graph

Rancher integrates a Kiali graph into the Rancher UI. The Kiali graph provides a powerful way to visualize the topology of your Istio service mesh. It shows you which services communicate with each other.

To see the traffic graph for a particular namespace:

1. From the **Global** view, navigate to the project that you want to view the traffic graph for.

1. Select **Istio** in the navigation bar.

1. Select **Traffic Graph** in the navigation bar.

1. Select the namespace. Note: It only shows the namespaces which have the `istio-injection=enabled` label.

## View Traffic Metrics

Istio's monitoring features provide visibility into the performance of all your services. To see the Success Rate, Request Volume, 4xx Response Count, Project 5xx Response Count and Request Duration metrics:

1. From the **Global** view, navigate to the project that you want to view traffic metrics for.

1. Select **Istio** in the navigation bar.

1. Select **Traffic Metrics** in the navigation bar.

## Other Istio Features

There are many other [Istio Features](https://istio.io/docs/concepts/what-is-istio/#core-features)
that you can now use in your cluster.
For information on how Istio is integrated with Rancher and how to set it up, refer to the [section about Istio.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio)
@@ -34,7 +34,8 @@ The benchmark self-assessment is a companion to the Rancher security hardening g
Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher created clusters. The original benchmark documents can be downloaded from the [CIS website](https://www.cisecurity.org/benchmark/kubernetes/).

* [CIS Kubernetes Benchmark 1.3.0 - Rancher 2.1.x with Kubernetes 1.11]({{< baseurl >}}/rancher/v2.x/en/security/benchmark-2.1/)
* [CIS Kubernetes Benchmark 1.4.0 - Rancher 2.2.x with Kubernetes 1.13]({{< baseurl >}}/rancher/v2.x/en/security/benchmark-2.2/)
* [CIS Kubernetes Benchmark 1.4.0 - Rancher 2.2.x with Kubernetes 1.13]({{< baseurl >}}/rancher/v2.x/en/security/benchmark-2.2/#cis-kubernetes-benchmark-1-4-0-rancher-2-2-x-with-kubernetes-1-13/)
* [CIS Kubernetes Benchmark 1.4.1 - Rancher 2.2.x with Kubernetes 1.13]({{< baseurl >}}/rancher/v2.x/en/security/benchmark-2.2/#cis-kubernetes-benchmark-1-4-1-rancher-2-2-x-with-kubernetes-1-13)

### Rancher CVEs and Resolutions

@@ -4,12 +4,14 @@ weight: 103
---

### CIS Kubernetes Benchmark 1.4.0 - Rancher 2.2.x with Kubernetes 1.13
There is no material difference in control verification checks between CIS Kubernetes Benchmark 1.4.0 and [1.4.1](https://rancher.com/docs/rancher/v2.x/en/security/benchmark-2.2/#cis-kubernetes-benchmark-1-4-1-rancher-2-2-x-with-kubernetes-1-13)
### CIS Kubernetes Benchmark 1.4.1 - Rancher 2.2.x with Kubernetes 1.13

[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.2.x/Rancher_Benchmark_Assessment.pdf)

#### Overview

The following document scores a Kubernetes 1.13.x RKE cluster provisioned according to the Rancher v2.2.x hardening guide against the CIS 1.4.0 Kubernetes benchmark.
The following document scores a Kubernetes 1.13.x RKE cluster provisioned according to the Rancher v2.2.x hardening guide against the CIS 1.4.1 Kubernetes benchmark.

This document is a companion to the Rancher v2.2.x security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark.
@@ -2,21 +2,27 @@
title: Upgrades
weight: 1005
---
This section contains information about how to upgrade your Rancher server to a newer version.
This section contains information about how to upgrade your Rancher server to a newer version. Regardless of whether you installed in an air gap environment, the upgrade steps are based on the type of install you chose. Select from the following options:

### Single Node Install
- [Upgrading a Single Node Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/single-node/)
- [Upgrading an HA Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha/)

- [Upgrading a Single Node Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/)
- [Upgrading an Air Gapped Single Node Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/single-node-air-gap-upgrade/)
### Known Upgrade Issues

### Upgrading to an HA Helm Chart
Upgrade Scenario | Issue
---|---
Upgrading to v2.3.0+ | Any user provisioned cluster will be automatically updated upon any edit, as tolerations were added to the images used for Kubernetes provisioning.
Upgrading to v2.2.0-v2.2.x | Rancher introduced the [system charts](https://github.com/rancher/system-charts) repository, which contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. To be able to use these features in an air gap install, you will need to mirror the `system-charts` repository locally and configure Rancher to use that repository. Please follow the instructions to [configure Rancher system charts]({{< baseurl >}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0).
Upgrading from v2.0.13 or earlier | If your cluster's certificates have expired, you will need to perform [additional steps]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/certificate-rotation/#rotating-expired-certificates-after-upgrading-older-rancher-versions) to rotate the certificates.
Upgrading from v2.0.7 or earlier | Rancher introduced the `system` project, which is a project that's automatically created to store important namespaces that Kubernetes needs to operate. During upgrade to v2.0.7+, Rancher expects these namespaces to be unassigned from all projects. Before beginning upgrade, check your system namespaces to make sure that they're unassigned to [prevent cluster networking issues]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#preventing-cluster-networking-issues).

- [Upgrade an HA Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/)
- [Upgrade an Air Gap HA Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm-airgap/)
- [Migrating from an RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/)
### Caveats
Upgrades _to_ or _from_ any chart in the [rancher-alpha repository]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories/) aren't supported.
> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from an RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
### RKE Add-on Installs

**Important: RKE add-on install is only supported up to Rancher v2.0.8**

Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).

If you are currently using the RKE add-on install method, see [Migrating from an RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
@@ -1,99 +0,0 @@
---
title: High Availability (HA) Upgrade - Air Gap
weight: 1021
---

The following instructions will guide you through upgrading a high-availability Rancher Server installed in an air gap environment.

>**Note:** [Let's Encrypt will be blocking cert-manager instances older than 0.8.0 starting November 1st 2019.](https://community.letsencrypt.org/t/blocking-old-cert-manager-versions/98753) Upgrade cert-manager to the latest version by following [these instructions.]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/upgrade-cert-manager-airgap)

## Prerequisites

- **Populate Images**

    Follow the guide to [Prepare the Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/prepare-private-reg/) with the images for the Rancher release you are upgrading to.

- **Backup your Rancher Cluster**

    [Take a one-time snapshot]({{< baseurl >}}/rancher/v2.x/en/backups/backups/ha-backups/#option-b-one-time-snapshots) of your Rancher Server cluster. You'll use the snapshot as a restoration point if something goes wrong during upgrade.

- **kubectl**

    Follow the kubectl [configuration instructions]({{< baseurl >}}/rancher/v2.x/en/faq/kubectl) and confirm that you can connect to the Kubernetes cluster running Rancher server.

- **helm**

    [Install or update](https://docs.helm.sh/using_helm/#installing-helm) Helm to the latest version.

- **Upgrades to v2.0.7+ only: check system namespace locations**<br/>
Starting in v2.0.7, Rancher introduced the `system` project, which is a project that's automatically created to store important namespaces that Kubernetes needs to operate. During upgrade to v2.0.7+, Rancher expects these namespaces to be unassigned from all projects. Before beginning upgrade, check your system namespaces to make sure that they're unassigned to [prevent cluster networking issues]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#preventing-cluster-networking-issues). A quick way to check is sketched after this list.

- **Upgrades to v2.2.0 only: mirror system-charts repository and configure Rancher**<br/>
Starting in v2.2.0, Rancher introduced the [System Charts](https://github.com/rancher/system-charts) repository, which contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. To be able to use these features in an air gap install, you will need to mirror the `system-charts` repository locally and configure Rancher to use that repository. Please follow the instructions to [configure Rancher system charts]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-system-charts/).
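To see which namespaces are currently assigned to a project, you can list each namespace's project annotation. This is only a minimal sketch, not part of the official procedure; it assumes `kubectl` is pointed at the downstream cluster and that Rancher records project membership in the `field.cattle.io/projectId` annotation:

```plain
# List every namespace with the project it is assigned to.
# An empty PROJECT column means the namespace is unassigned.
kubectl get namespaces \
  -o custom-columns='NAME:.metadata.name,PROJECT:.metadata.annotations.field\.cattle\.io/projectId'
```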
## Caveats

Upgrades _to_ or _from_ any chart in the [rancher-alpha repository]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories/) aren't supported.

## Upgrade Rancher

1. Update your local helm repo cache.

    ```
    helm repo update
    ```

2. Get the repository name that you used to install Rancher.

    For information about the repos and their differences, see [Helm Chart Repositories]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories).

    {{< release-channel >}}

    ```
    helm repo list

    NAME                 URL
    stable               https://kubernetes-charts.storage.googleapis.com
    rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
    ```

    > **Note:** If you want to switch to a different Helm chart repository, please follow the [steps on how to switch repositories]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#switching-to-a-different-helm-chart-repository). If you switch repositories, make sure to list the repositories again before continuing onto Step 3 to ensure you have the correct one added.

3. Fetch the latest chart to install Rancher from the Helm chart repository.

    This command will pull down the latest chart and save it in the current directory as a `.tgz` file.

    ```plain
    helm fetch rancher-<CHART_REPO>/rancher
    ```

4. Render the upgrade template.

    Use the same `--set` values that you used for the install. Remember to set the `--is-upgrade` flag for `helm`. This will create a `rancher` directory with the Kubernetes manifest files.

    ```plain
    helm template ./rancher-<version>.tgz --output-dir . --is-upgrade \
      --name rancher --namespace cattle-system \
      --set hostname=<RANCHER.YOURDOMAIN.COM> \
      --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher
    ```

5. Copy and apply the rendered manifests.

    Copy the files to a server with access to the Rancher server cluster and apply the rendered templates.

    ```plain
    kubectl -n cattle-system apply -R -f ./rancher
    ```

**Result:** Rancher is upgraded. Log back into Rancher to confirm that the upgrade succeeded.

>**Having Network Issues Following Upgrade?**
>
> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking).

## Rolling Back

Should something go wrong, follow the [HA Rollback]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/) instructions to restore the snapshot you took before you performed the upgrade.
@@ -1,99 +0,0 @@
---
title: High Availability (HA) Upgrade
weight: 1020
---

The following instructions will guide you through upgrading a high-availability Rancher Server that was [installed using the Helm package manager]({{< baseurl >}}/rancher/v2.x/en/installation/ha/).

>**Note:** If you installed Rancher using the RKE Add-on yaml, see the following documents to migrate or upgrade.
>
>- [Migrating from RKE Add-On Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on)
>
> As of release v2.0.8, Rancher supports installation and upgrade by Helm chart, although RKE installs/upgrades are still supported as well. If you want to change the upgrade method from RKE Add-on to Helm chart, follow this procedure.

---

>**Note:** [Let's Encrypt will be blocking cert-manager instances older than 0.8.0 starting November 1st 2019.](https://community.letsencrypt.org/t/blocking-old-cert-manager-versions/98753) Upgrade cert-manager to the latest version by following [these instructions.]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/upgrade-cert-manager)

## Prerequisites

- **Backup your Rancher cluster**

    [Take a one-time snapshot]({{< baseurl >}}/rancher/v2.x/en/backups/backups/ha-backups/#option-b-one-time-snapshots) of your Rancher Server cluster. You'll use the snapshot as a restoration point if something goes wrong during upgrade.

- **kubectl**

    Follow the kubectl [configuration instructions]({{< baseurl >}}/rancher/v2.x/en/faq/kubectl) and confirm that you can connect to the Kubernetes cluster running Rancher server.

- **Helm**

    [Install or update](https://docs.helm.sh/using_helm/#installing-helm) Helm to the latest version.

- **Tiller**

    Update the helm agent, Tiller, on your cluster.

    ```
    helm init --upgrade --service-account tiller
    ```

- **Upgrades to v2.0.7+ only: check system namespace locations**
Starting in v2.0.7, Rancher introduced the `system` project, which is a project that's automatically created to store important namespaces that Kubernetes needs to operate. During upgrade to v2.0.7+, Rancher expects these namespaces to be unassigned from all projects. Before beginning upgrade, check your system namespaces to make sure that they're unassigned to [prevent cluster networking issues]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#preventing-cluster-networking-issues).

## Caveats

Upgrades _to_ or _from_ any chart in the [rancher-alpha repository]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories/) aren't supported.

## Upgrade Rancher

> **Note:** For Air Gap installs, see [Upgrading HA Rancher - Air Gap]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm-airgap/).

1. Update your local helm repo cache.

    ```
    helm repo update
    ```

2. Get the repository name that you used to install Rancher.

    For information about the repos and their differences, see [Helm Chart Repositories]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories).

    {{< release-channel >}}

    ```
    helm repo list

    NAME                 URL
    stable               https://kubernetes-charts.storage.googleapis.com
    rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
    ```

    > **Note:** If you want to switch to a different Helm chart repository, please follow the [steps on how to switch repositories]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#switching-to-a-different-helm-chart-repository). If you switch repositories, make sure to list the repositories again before continuing onto Step 3 to ensure you have the correct one added.

3. Get the set values from the current Rancher install.

    ```
    helm get values rancher

    hostname: rancher.my.org
    ```

    > **Note:** There may be more values listed with this command, depending on which [SSL configuration option you selected]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/#choose-your-ssl-configuration) when installing Rancher.

4. Upgrade Rancher to the latest version based on values from the previous steps.

    - Take all the values from the previous step and append them to the command using `--set key=value`.

    ```
    helm upgrade rancher rancher-<CHART_REPO>/rancher --set hostname=rancher.my.org
    ```

**Result:** Rancher is upgraded. Log back into Rancher to confirm that the upgrade succeeded.

>**Having Network Issues Following Upgrade?**
>
> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking).

## Rolling Back

Should something go wrong, follow the [HA Rollback]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/) instructions to restore the snapshot you took before you performed the upgrade.
@@ -0,0 +1,167 @@
---
title: High Availability (HA) Upgrade
weight: 1020
aliases:
- /rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm
- /rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm-airgap
- /rancher/v2.x/en/upgrades/air-gap-upgrade/
---

The following instructions will guide you through upgrading a high-availability (HA) Rancher server installation.

>**Note:** If you installed Rancher using the RKE Add-on yaml, follow the directions to [migrate or upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on).

>**Note:** [Let's Encrypt will be blocking cert-manager instances older than 0.8.0 starting November 1st 2019.](https://community.letsencrypt.org/t/blocking-old-cert-manager-versions/98753) Upgrade cert-manager to the latest version by following [these instructions.]({{<baseurl>}}/rancher/v2.x/en/installation/options/upgrading-cert-manager)

## Prerequisites

- **Review the [Known Upgrade Issues]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/#known-upgrade-issues) and [Caveats]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/#caveats)**

- **[Air Gap Installs Only:]({{< baseurl >}}/rancher/v2.x/en/installations/air-gap/) Collect and Populate Images for the new Rancher server version**

    Follow the guide to [populate your private registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap/populate-private-registry/) with the images for the Rancher version that you want to upgrade to.
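For reference, each Rancher release ships helper scripts (`rancher-save-images.sh` and `rancher-load-images.sh`) for moving the images into an air gap registry. A minimal sketch of that flow, assuming the release's `rancher-images.txt` image list has been downloaded alongside the scripts:

```plain
# On a host with internet access: pull the release images into an archive
./rancher-save-images.sh --image-list ./rancher-images.txt

# On a host inside the air gap: load the archive and push to your registry
./rancher-load-images.sh --image-list ./rancher-images.txt --registry <REGISTRY.YOURDOMAIN.COM:PORT>
```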
## Upgrade Outline

Follow the steps to upgrade Rancher server:

- A. Backup your Kubernetes Cluster that is running Rancher server
- B. Update the Helm chart repository
- C. Upgrade Rancher
- D. Verify the Upgrade

### A. Backup your Kubernetes Cluster that is running Rancher server

[Take a one-time snapshot]({{< baseurl >}}/rancher/v2.x/en/backups/backups/ha-backups/#option-b-one-time-snapshots) of your Kubernetes cluster running Rancher server. You'll use the snapshot as a restoration point if something goes wrong during upgrade.

### B. Update the Helm chart repository

1. Update your local helm repo cache.

    ```
    helm repo update
    ```

1. Get the repository name that you used to install Rancher.

    For information about the repos and their differences, see [Helm Chart Repositories]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#helm-chart-repositories).

    {{< release-channel >}}

    ```
    helm repo list

    NAME                 URL
    stable               https://kubernetes-charts.storage.googleapis.com
    rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
    ```

    > **Note:** If you want to switch to a different Helm chart repository, please follow the [steps on how to switch repositories]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#switching-to-a-different-helm-chart-repository). If you switch repositories, make sure to list the repositories again before continuing onto Step 3 to ensure you have the correct one added.

1. Fetch the latest chart to install Rancher from the Helm chart repository.

    This command will pull down the latest chart and save it in the current directory as a `.tgz` file.

    ```plain
    helm fetch rancher-<CHART_REPO>/rancher
    ```

### C. Upgrade Rancher

Choose from the following options:

* HA Upgrade
* HA Upgrade for Air Gap Installs

{{% tabs %}}
{{% tab "HA Upgrade" %}}

1. Get the values that were passed with `--set` when the current Rancher Helm chart was installed.

    ```
    helm get values rancher

    hostname: rancher.my.org
    ```

    > **Note:** There will be more values listed with this command. This is just an example of one of the values.

2. Upgrade Rancher to the latest version with all your settings.

    - Take all the values from the previous step and append them to the command using `--set key=value`.

    ```
    helm upgrade rancher rancher-<CHART_REPO>/rancher \
      --set hostname=rancher.my.org # Note: There will be many more options from the previous step that need to be appended.
    ```

{{% /tab %}}

{{% tab "HA Air Gap Upgrade" %}}

1. Render the Rancher template using the same chosen options that were used when installing Rancher. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.

    Based on the choice you made during installation, complete one of the procedures below.

    Placeholder | Description
    ------------|-------------
    `<VERSION>` | The version number of the output tarball.
    `<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer.
    `<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry.

{{% accordion id="self-signed" label="Option A-Default Self-Signed Certificate" %}}

```plain
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
  --set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts
```

{{% /accordion %}}
{{% accordion id="secret" label="Option B: Certificates From Files using Kubernetes Secrets" %}}

>**Note:** If you are using a Private CA signed cert, add `--set privateCA=true` following `--set ingress.tls.source=secret`.

```plain
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Available as of v2.2.0, set a default private registry to be used in Rancher
  --set useBundledSystemChart=true # Available as of v2.3.0, use the packaged Rancher system charts
```

{{% /accordion %}}

2. Copy the rendered manifest directories to a system with access to the Rancher server cluster and apply the rendered templates.

    Use `kubectl` to apply the rendered manifests.

    ```plain
    kubectl -n cattle-system apply -R -f ./rancher
    ```

{{% /tab %}}
{{% /tabs %}}

### D. Verify the Upgrade

Log into Rancher to confirm that the upgrade succeeded.

>**Having network issues following upgrade?**
>
> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking).

## Rolling Back

Should something go wrong, follow the [roll back]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/) instructions to restore the snapshot you took before you performed the upgrade.
@@ -6,9 +6,9 @@ aliases:
- /rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade/
---

> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
> **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>If you are currently using the RKE add-on install method, please follow these directions to migrate to the Helm install.

The following instructions will guide you through migrating from the RKE Add-on install to managing Rancher with the Helm package manager.
@@ -17,6 +17,8 @@ You will need the to have [kubectl](https://kubernetes.io/docs/tasks/tools/insta

> **Note:** This guide assumes a standard Rancher install. If you have modified any of the object names or namespaces, please adjust accordingly.

> **Note:** If you are upgrading from Rancher v2.0.13 or earlier, or v2.1.8 or earlier, and your cluster's certificates have expired, you will need to perform [additional steps]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/certificate-rotation/#rotating-expired-certificates-after-upgrading-older-rancher-versions) to rotate the certificates.

### Point kubectl at your Rancher Cluster

Make sure `kubectl` is using the correct kubeconfig YAML file. Set the `KUBECONFIG` environment variable to point to `kube_config_rancher-cluster.yml`:
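For example (a minimal sketch, assuming the file sits in your current working directory):

```
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
kubectl get nodes   # confirm the connection works
```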
@@ -26,6 +26,8 @@ During upgrades from Rancher v2.0.6- to Rancher v2.0.7+, all system namespaces a
- To prevent this issue from occurring before the upgrade, see [Preventing Cluster Networking Issues](#preventing-cluster-networking-issues).
- To fix this issue following upgrade, see [Restoring Cluster Networking](#restoring-cluster-networking).

> **Note:** If you are upgrading from Rancher v2.0.13 or earlier, or v2.1.8 or earlier, and your cluster's certificates have expired, you will need to perform [additional steps]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/certificate-rotation/#rotating-expired-certificates-after-upgrading-older-rancher-versions) to rotate the certificates.

## Preventing Cluster Networking Issues

You can prevent cluster networking issues from occurring during your upgrade to v2.0.7+ by unassigning system namespaces from all of your Rancher projects. Complete this task if you've assigned any of a cluster's system namespaces into a Rancher project.
@@ -1,36 +0,0 @@
---
title: Single Node Upgrade - Air Gap
weight: 1011
aliases:
- /rancher/v2.x/en/upgrades/air-gap-upgrade/
---
To upgrade an air-gapped Rancher Server, update your private registry with the latest Docker images, and then run the upgrade command.

## Prerequisites
**Upgrades to v2.0.7+ only:** Starting in v2.0.7, Rancher introduced the `system` project, which is a project that's automatically created to store important namespaces that Kubernetes needs to operate. During upgrade to v2.0.7+, Rancher expects these namespaces to be unassigned from all projects. Before beginning upgrade, check your system namespaces to make sure that they're unassigned to [prevent cluster networking issues]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#preventing-cluster-networking-issues).

**Upgrades to v2.2.0 only: mirror system-charts repository and configure Rancher**<br/>
Starting in v2.2.0, Rancher introduced the [System Charts](https://github.com/rancher/system-charts) repository, which contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. To be able to use these features in an air gap install, you will need to mirror the `system-charts` repository locally and configure Rancher to use that repository. Please follow the instructions to [configure Rancher system charts]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-single-node/config-rancher-system-charts/).

## Caveats
Upgrades _to_ or _from_ any tag containing [alpha]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#server-tags) aren't supported.

## Upgrading An Air Gapped Rancher Server

1. Follow the directions in Air Gap Installation to [pull the Docker images]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/#release-files) required for the new version of Rancher.

2. Follow the directions in [Single Node Upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/single-node-upgrade/) to complete the upgrade of your air-gapped Rancher Server.

    >**Note:**
    > While completing [Single Node Upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/single-node-upgrade/), prepend your private registry URL to the image when running the `docker run` command.
    >
    > Example: `<registry.yourdomain.com:port>/rancher/rancher:latest`

**Result:** Rancher is upgraded. Log back into Rancher to confirm that the upgrade succeeded.

>**Having Network Issues Following Upgrade?**
>
> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking).

## Rolling Back
If your upgrade does not complete successfully, you can roll Rancher Server and its data back to its last healthy state. For more information, see [Single Node Rollback]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/).
@@ -1,139 +0,0 @@
---
title: Single Node Upgrade
weight: 1010
aliases:
- /rancher/v2.x/en/upgrades/single-node-upgrade/
---
To upgrade Rancher Server 2.x when a new version is released, create a data container for your current Rancher deployment, pull the latest image of Rancher, and then start a new Rancher container using your data container.

## Before You Start

During upgrade, you'll enter a series of commands, filling placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<EXAMPLE>`). Here's an example of a command with a placeholder:

```
docker run --volumes-from rancher-data -v $PWD:/backup busybox tar zcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
```

In this command, `<RANCHER_VERSION>-<DATE>` is the version number and date of creation for a backup of Rancher.

Cross-reference the image and reference table below to learn how to obtain this placeholder data. Write down or copy this information before starting the [procedure below](#completing-the-upgrade).

<sup>Terminal `docker ps` Command, Displaying Where to Find `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>`</sup>


| Placeholder | Example | Description |
| -------------------------- | -------------------------- | --------------------------------------------------------- |
| `<RANCHER_CONTAINER_TAG>` | `v2.1.3` | The rancher/rancher image you pulled for initial install. |
| `<RANCHER_CONTAINER_NAME>` | `festive_mestorf` | The name of your Rancher container. |
| `<RANCHER_VERSION>` | `v2.1.3` | The version of Rancher that you're creating a backup for. |
| `<DATE>` | `2018-12-19` | The date that the data container or backup was created. |
<br/>

You can obtain `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>` by logging into your Rancher Server by remote connection and entering the command to view the containers that are running: `docker ps`. You can also view containers that are stopped using a different command: `docker ps -a`. Use these commands for help anytime while creating backups.

## Prerequisites
**Upgrades to v2.0.7+ only:** Starting in v2.0.7, Rancher introduced the `system` project, which is a project that's automatically created to store important namespaces that Kubernetes needs to operate. During upgrade to v2.0.7+, Rancher expects these namespaces to be unassigned from all projects. Before beginning upgrade, check your system namespaces to make sure that they're unassigned to [prevent cluster networking issues]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#preventing-cluster-networking-issues).

## Caveats
Upgrades _to_ or _from_ any tag containing [alpha]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#server-tags) aren't supported.

## Completing the Upgrade

During upgrade, you create a copy of the data from your current Rancher container and a backup in case something goes wrong. Then you deploy the new version of Rancher in a new container using your existing data.

1. Using a remote Terminal connection, log into the node running your Rancher Server.

1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the [name of your Rancher container](#before-you-start).

    ```
    docker stop <RANCHER_CONTAINER_NAME>
    ```

1. <a id="backup"></a>Use the command below, replacing each [placeholder](#before-you-start), to create a data container from the Rancher container that you just stopped.

    ```
    docker create --volumes-from <RANCHER_CONTAINER_NAME> --name rancher-data rancher/rancher:<RANCHER_CONTAINER_TAG>
    ```

1. <a id="tarball"></a>From the data container that you just created (`rancher-data`), create a backup tarball (`rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`).

    This tarball will serve as a rollback point if something goes wrong during upgrade. Use the following command, replacing each [placeholder](#before-you-start).

    ```
    docker run --volumes-from rancher-data -v $PWD:/backup busybox tar zcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
    ```

    **Step Result:** When you enter this command, a series of commands should run.

1. Enter the `ls` command to confirm that the backup tarball was created. It will have a name similar to `rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`.

    ```
    [rancher@ip-10-0-0-50 ~]$ ls
    rancher-data-backup-v2.1.3-20181219.tar.gz
    ```

1. Move your backup tarball to a safe location external from your Rancher Server.

1. Pull the most recent image of Rancher.

    ```
    docker pull rancher/rancher:latest
    ```

    >**Attention Air Gap Users:**
    > If you are visiting this page to complete [Air Gap Upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/air-gap-upgrade), prepend your private registry URL to the image when running the `docker run` command.
    >
    > Example: `<registry.yourdomain.com:port>/rancher/rancher:latest`

1. Start a new Rancher Server container using the data from the `rancher-data` container.

    ```
    docker run -d --volumes-from rancher-data --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest
    ```

    >**Attention Let's Encrypt Users:**
    >
    >Remember to append `--acme-domain <YOUR.DNS.NAME>` to the run command, otherwise Rancher will fall back to using self-signed certificates.
    >```
    >docker run -d --volumes-from rancher-data --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest --acme-domain <YOUR.DNS.NAME>
    >```

    >**Want records of all transactions with the Rancher API?**
    >
    >Enable the [API Auditing]({{< baseurl >}}/rancher/v2.x/en/installation/api-auditing) feature by adding the flags below into your upgrade command.
    >```
    >-e AUDIT_LEVEL=1 \
    >-e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
    >-e AUDIT_LOG_MAXAGE=20 \
    >-e AUDIT_LOG_MAXBACKUP=20 \
    >-e AUDIT_LOG_MAXSIZE=100 \
    >```

    >**Note:** _Do not_ stop the upgrade after initiating it, even if the upgrade process seems longer than expected. Stopping the upgrade may result in database migration errors during future upgrades.
    >
    >**Note:** After upgrading Rancher Server, data from your upgraded server is now saved to the `rancher-data` container for use in future upgrades.

1. Log into Rancher. Confirm that the upgrade succeeded by checking the version displayed in the bottom-left corner of the browser window.

1. Remove the previous Rancher Server container.

    If you only stop the previous Rancher Server container (and don't remove it), the container may restart after the next server reboot.

**Result:** Rancher is upgraded. Log back into Rancher to confirm that the upgrade succeeded.

>**Having Network Issues Following Upgrade?**
>
> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking).

## Rolling Back

If your upgrade does not complete successfully, you can roll Rancher Server and its data back to its last healthy state. For more information, see [Single Node Rollback]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/).
@@ -0,0 +1,331 @@
---
title: Single Node Upgrade
weight: 1010
aliases:
- /rancher/v2.x/en/upgrades/single-node-upgrade/
- /rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/
- /rancher/v2.x/en/upgrades/upgrades/single-node-air-gap-upgrade
---

The following instructions will guide you through upgrading a single node Rancher server installation.

## Prerequisites

- **Review the [Known Upgrade Issues]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/#known-upgrade-issues) and [Caveats]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/#caveats)**

- **[Air Gap Installs Only:]({{< baseurl >}}/rancher/v2.x/en/installations/air-gap/) Collect and Populate Images for the new Rancher server version**

    Follow the guide to [populate your private registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap/populate-private-registry/) with the images for the Rancher version that you want to upgrade to.

## Placeholder Review

During upgrade, you'll enter a series of commands, filling placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<EXAMPLE>`).

Here's an **example** of a command with a placeholder:

```
docker stop <RANCHER_CONTAINER_NAME>
```

In this command, `<RANCHER_CONTAINER_NAME>` is the name of your Rancher container.

Cross-reference the image and reference table below to learn how to obtain this placeholder data. Write down or copy this information before starting the upgrade.

<sup>Terminal `docker ps` Command, Displaying Where to Find `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>`</sup>


| Placeholder | Example | Description |
| -------------------------- | -------------------------- | --------------------------------------------------------- |
| `<RANCHER_CONTAINER_TAG>` | `v2.1.3` | The rancher/rancher image you pulled for initial install. |
| `<RANCHER_CONTAINER_NAME>` | `festive_mestorf` | The name of your Rancher container. |
| `<RANCHER_VERSION>` | `v2.1.3` | The version of Rancher that you're creating a backup for. |
| `<DATE>` | `2018-12-19` | The date that the data container or backup was created. |
<br/>

You can obtain `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>` by logging into your Rancher Server by remote connection and entering the command to view the containers that are running: `docker ps`. You can also view containers that are stopped using a different command: `docker ps -a`. Use these commands for help anytime while creating backups.
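As a minimal sketch (not part of the official procedure), you can trim the `docker ps` output down to just the two placeholders you need:

```
docker ps --format 'table {{.Names}}\t{{.Image}}'
```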
## Upgrade Outline

During upgrade, you create a copy of the data from your current Rancher container and a backup in case something goes wrong. Then you deploy the new version of Rancher in a new container using your existing data. Follow the steps to upgrade Rancher server:

- A. Create a copy of the data from your Rancher server container
- B. Create a backup tarball
- C. Upgrade Rancher
- D. Verify the Upgrade
- E. Clean up your old Rancher server container

### A. Create a copy of the data from your Rancher server container

1. Using a remote Terminal connection, log into the node running your Rancher Server.

1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your Rancher container.

    ```
    docker stop <RANCHER_CONTAINER_NAME>
    ```

1. <a id="backup"></a>Use the command below, replacing each placeholder, to create a data container from the Rancher container that you just stopped.

    ```
    docker create --volumes-from <RANCHER_CONTAINER_NAME> --name rancher-data rancher/rancher:<RANCHER_CONTAINER_TAG>
    ```

### B. Create a backup tarball

1. <a id="tarball"></a>From the data container that you just created (`rancher-data`), create a backup tarball (`rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`).

    This tarball will serve as a rollback point if something goes wrong during upgrade. Use the following command, replacing each [placeholder](#placeholder-review).

    ```
    docker run --volumes-from rancher-data -v $PWD:/backup busybox tar zcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
    ```

    **Step Result:** When you enter this command, a series of commands should run.

1. Enter the `ls` command to confirm that the backup tarball was created. It will have a name similar to `rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`.

    ```
    [rancher@ip-10-0-0-50 ~]$ ls
    rancher-data-backup-v2.1.3-20181219.tar.gz
    ```

1. Move your backup tarball to a safe location external from your Rancher Server.
### C. Upgrade Rancher

1. Pull the image of the Rancher version that you want to upgrade to.

    Placeholder | Description
    ------------|-------------
    `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to upgrade to.

    ```
    docker pull rancher/rancher:<RANCHER_VERSION_TAG>
    ```

1. Start a new Rancher server container using the data from the `rancher-data` container. Remember to pass in all the environment variables that you had used when you started the original container.

    >**Note:** After upgrading Rancher Server, data from your upgraded server is now saved to the `rancher-data` container for use in future upgrades.

    >**Important:** _Do not_ stop the upgrade after initiating it, even if the upgrade process seems longer than expected. Stopping the upgrade may result in database migration errors during future upgrades.

    >**Did you...**
    >
    >- Use a proxy? See [HTTP Proxy Configuration]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/proxy/)
    >- Configure a custom CA root certificate to access your services? See [Custom CA root certificate]({{< baseurl >}}/rancher/v2.x/en/admin-settings/custom-ca-root-certificate/)
    >- Record all transactions with the Rancher API? See [API Auditing](#api-audit-log)

    Choose from the following options:

    * Single Node Upgrade
    * Single Node Upgrade for Air Gap Installs

{{% tabs %}}
{{% tab "Single Node Upgrade" %}}

Select the option you used when you originally installed Rancher server:

{{% accordion id="option-a" label="Option A-Default Self-Signed Certificate" %}}

If you have selected to use the Rancher generated self-signed certificate, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container.

Placeholder | Description
------------|-------------
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:<RANCHER_VERSION_TAG>
```
{{% /accordion %}}

{{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Self-Signed" %}}

If you have selected to bring your own self-signed certificate, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container. You also need access to the same certificate that you originally installed with.

>**Reminder of the Cert Prerequisite:** The certificate files must be in [PEM format]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#pem). In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [SSL FAQ / Troubleshooting](#cert-order).

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<CA_CERTS.pem>` | The path to the certificate authority's certificate.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  -v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
  rancher/rancher:<RANCHER_VERSION_TAG>
```

{{% /accordion %}}
{{% accordion id="option-c" label="Option C-Bring Your Own Certificate: Signed by Recognized CA" %}}

If you have selected to use a certificate signed by a recognized CA, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container. You also need access to the same certificates that you originally installed with. Remember to include `--no-cacerts` as an argument to the container to disable the default CA certificate generated by Rancher.

>**Reminder of the Cert Prerequisite:** The certificate files must be in [PEM format]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#pem). In your certificate file, include all intermediate certificates provided by the recognized CA. Order your certificates with your certificate first, followed by the intermediates. For an example, see [SSL FAQ / Troubleshooting](#cert-order).

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  rancher/rancher:<RANCHER_VERSION_TAG> \
  --no-cacerts
```
{{% /accordion %}}
{{% accordion id="option-d" label="Option D-Let's Encrypt Certificate" %}}

>**Remember:** Let's Encrypt enforces rate limits for requesting new certificates. Therefore, limit how often you create or destroy the container. For more information, see the [Let's Encrypt documentation on rate limits](https://letsencrypt.org/docs/rate-limits/).

If you have selected to use [Let's Encrypt](https://letsencrypt.org/) certificates, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container, and provide the domain that you used when you originally installed Rancher.

>**Reminder of the Cert Prerequisites:**
>
>- Create a record in your DNS that binds your Linux host IP address to the hostname that you want to use for Rancher access (`rancher.mydomain.com` for example).
>- Open port `TCP/80` on your Linux host. The Let's Encrypt http-01 challenge can come from any source IP address, so port `TCP/80` must be open to all IP addresses.

Placeholder | Description
------------|-------------
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to upgrade to.
`<YOUR.DNS.NAME>` | The domain address that you originally started with.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:<RANCHER_VERSION_TAG> \
  --acme-domain <YOUR.DNS.NAME>
```

{{% /accordion %}}

{{% /tab %}}
{{% tab "Single Node Air Gap Upgrade" %}}

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.

>**Did you...**
>
>- Configure a custom CA root certificate to access your services? See [Custom CA root certificate]({{< baseurl >}}/rancher/v2.x/en/admin-settings/custom-ca-root-certificate/).
>- Record all transactions with the Rancher API? See [API Auditing]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#api-audit-log).

- For Rancher versions from v2.2.0 to v2.2.x, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach, as sketched below. Then, after Rancher is installed, you will need to configure Rancher to use that repository. For details, refer to the documentation on [setting up the system charts for Rancher prior to v2.3.0.]({{<baseurl>}}/rancher/v2.x/en/installation/options/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0)
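One way to mirror the repository is a plain git clone and push. This is only a sketch; the internal git server URL is hypothetical and any git remote that Rancher can reach will do:

```
# On a host with internet access
git clone https://github.com/rancher/system-charts.git
cd system-charts
# Push all branches to a git server inside your network (hypothetical URL)
git remote add airgap https://git.yourdomain.com/rancher/system-charts.git
git push airgap --all
```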
Choose from the following options:

{{% accordion id="option-a" label="Option A-Default Self-Signed Certificate" %}}

If you have selected to use the Rancher generated self-signed certificate, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container.

Placeholder | Description
------------|-------------
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Set a default private registry to be used in Rancher
  -e CATTLE_SYSTEM_CATALOG=bundled \ # Available as of v2.3.0, use the packaged Rancher system charts
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

{{% /accordion %}}

{{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Self-Signed" %}}

If you have selected to bring your own self-signed certificate, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container. You also need access to the same certificate that you originally installed with.

>**Reminder of the Prerequisite:** The certificate files must be in [PEM format]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#pem). In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [SSL FAQ / Troubleshooting](#cert-order).

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<CA_CERTS.pem>` | The path to the certificate authority's certificate.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  -v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Set a default private registry to be used in Rancher
  -e CATTLE_SYSTEM_CATALOG=bundled \ # Available as of v2.3.0, use the packaged Rancher system charts
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

{{% /accordion %}}

{{% accordion id="option-c" label="Option C-Bring Your Own Certificate: Signed by Recognized CA" %}}

If you have selected to use a certificate signed by a recognized CA, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container. You also need access to the same certificates that you originally installed with.

>**Reminder of the Prerequisite:** The certificate files must be in [PEM format]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#pem). In your certificate file, include all intermediate certificates provided by the recognized CA. Order your certificates with your certificate first, followed by the intermediates. For an example, see [SSL FAQ / Troubleshooting](#cert-order).

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/) that you want to upgrade to.

> **Note:** Pass the `--no-cacerts` argument to the container to disable the default CA certificate generated by Rancher.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Set a default private registry to be used in Rancher
  -e CATTLE_SYSTEM_CATALOG=bundled \ # Available as of v2.3.0, use the packaged Rancher system charts
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG> \
  --no-cacerts
```

{{% /accordion %}}
{{% /tab %}}
{{% /tabs %}}

### D. Verify the Upgrade

Log into Rancher. Confirm that the upgrade succeeded by checking the version displayed in the bottom-left corner of the browser window.

>**Having network issues in your user clusters following upgrade?**
>
> See [Restoring Cluster Networking]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking).

### E. Clean up your old Rancher server container

Remove the previous Rancher Server container. If you only stop the previous Rancher Server container (and don't remove it), the container may restart after the next server reboot.
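For example (a minimal sketch, reusing the placeholder from section A):

```
docker rm <RANCHER_CONTAINER_NAME>
```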
## Rolling Back

If your upgrade does not complete successfully, you can roll Rancher server and its data back to its last healthy state. For more information, see [Single Node Rollback]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/).
@@ -74,14 +74,11 @@ Please refer to the [release notes](https://github.com/rancher/rke/releases) of
You can also list the supported versions and system images of a specific RKE release with a quick command.

```
$ rke config --system-images --all

INFO[0000] Generating images list for version [v1.13.4-rancher1-2]:
.......
INFO[0000] Generating images list for version [v1.11.8-rancher1-1]:
.......
INFO[0000] Generating images list for version [v1.12.6-rancher1-2]:
.......
$ rke config --list-version --all
v1.15.3-rancher2-1
v1.13.10-rancher1-2
v1.14.6-rancher2-1
v1.16.0-beta.1-rancher1-1
```

#### Using an unsupported Kubernetes version

@@ -37,3 +37,18 @@ RKE uses Kubernetes jobs to deploy add-ons. In some cases, add-ons deployment ta
```yaml
addon_job_timeout: 30
```

## Add-on placement

_Applies to v0.2.3 and higher_

| Component | nodeAffinity nodeSelectorTerms | nodeSelector | Tolerations |
| ------------------ | ------------------------------------------ | ------------ | ----------- |
| Calico | `beta.kubernetes.io/os:NotIn:windows` | none | - `NoSchedule:Exists`<br/>- `NoExecute:Exists`<br/>- `CriticalAddonsOnly:Exists` |
| Flannel | `beta.kubernetes.io/os:NotIn:windows` | none | - `operator:Exists` |
| Canal | `beta.kubernetes.io/os:NotIn:windows` | none | - `NoSchedule:Exists`<br/>- `NoExecute:Exists`<br/>- `CriticalAddonsOnly:Exists` |
| Weave | `beta.kubernetes.io/os:NotIn:windows` | none | - `NoSchedule:Exists`<br/>- `NoExecute:Exists` |
| CoreDNS | `node-role.kubernetes.io/worker:Exists` | `beta.kubernetes.io/os:linux` | - `NoSchedule:Exists`<br/>- `NoExecute:Exists`<br/>- `CriticalAddonsOnly:Exists` |
| kube-dns | - `beta.kubernetes.io/os:NotIn:windows`<br/>- `node-role.kubernetes.io/worker` `Exists` | none | - `NoSchedule:Exists`<br/>- `NoExecute:Exists`<br/>- `CriticalAddonsOnly:Exists` |
| nginx-ingress | - `beta.kubernetes.io/os:NotIn:windows`<br/>- `node-role.kubernetes.io/worker` `Exists` | none | - `NoSchedule:Exists`<br/>- `NoExecute:Exists` |
| metrics-server | - `beta.kubernetes.io/os:NotIn:windows`<br/>- `node-role.kubernetes.io/worker` `Exists` | none | - `NoSchedule:Exists`<br/>- `NoExecute:Exists` |
@@ -36,6 +36,10 @@ nodes:
    ssh_key_path: /home/user/.ssh/id_rsa
    ssh_cert: |-
      ssh-rsa-cert-v01@openssh.com AAAAHHNza...
    taints: # Available as of v0.3.0
      - key: test-key
        value: test-value
        effect: NoSchedule
  - address: example.com
    user: ubuntu
    role:
@@ -123,3 +127,9 @@ If the Docker socket is different than the default, you can set the `docker_sock

### Labels

You have the ability to add an arbitrary map of labels for each node. These labels can be used with the [ingress controller's]({{< baseurl >}}/rke/latest/en/config-options/add-ons/ingress-controllers/) `node_selector` option.
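A minimal sketch of what that looks like in `cluster.yml` (the label key and value are only illustrative):

```yaml
nodes:
  - address: 1.1.1.1
    user: ubuntu
    role: [worker]
    labels:
      app: ingress-host   # an arbitrary label; reference it from the ingress controller's node_selector
```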
|
||||
|
||||
### Taints
|
||||
|
||||
_Available as of v0.3.0_
|
||||
|
||||
You have the ability to add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) for each node.
|
||||
|
||||
@@ -6,9 +6,13 @@ When RKE is deploying Kubernetes, there are several images that are pulled. Thes
|
||||
|
||||
As of `v0.1.6`, the functionality of a couple of the system images were consolidated into a single `rancher/rke-tools` image to simplify and speed the deployment process.
|
||||
|
||||
You can configure the [network plug-ins]({{< baseurl >}}/rke/latest/en/config-options/add-ons/network-plugins/), [ingress controller]({{< baseurl >}}/rke/latest/en/config-options/add-ons/ingress-controllers/) and [dns provider]({{< baseurl >}}/rke/latest/en/config-options/add-ons/dns/) as well as the options for these add-ons separately.
|
||||
You can configure the [network plug-ins]({{<baseurl>}}/rke/latest/en/config-options/add-ons/network-plugins/), [ingress controller]({{<baseurl>}}/rke/latest/en/config-options/add-ons/ingress-controllers/) and [dns provider]({{<baseurl>}}/rke/latest/en/config-options/add-ons/dns/) as well as the options for these add-ons separately.
|
||||
|
||||
This is the example of the full list of system images used to deploy Kubernetes through RKE. The image tags are dependent on the [Kubernetes image/version used](https://github.com/rancher/types/).
Below is an example of the list of system images used to deploy Kubernetes through RKE. The default versions of Kubernetes are tied to specific versions of system images.
- For RKE v0.2.x and below, the map of versions and the system image versions is located here: https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go
- For RKE v0.3.0 and above, the map of versions and the system image versions is located here: https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go
> **Note:** As versions of RKE are released, the tags on these images will no longer be up to date. This list is specific to `v1.10.3-rancher2`.
@@ -10,7 +10,7 @@ One-time snapshots are handled differently depending on your version of RKE.
To save a snapshot of etcd from each etcd node defined in the cluster config file, run the `rke etcd snapshot-save` command.
The snapshot is saved in `/opt/rke/etcd-snapshots`.
When running the command, an additional container is created to take the snapshot. When the snapshot is completed, the container is automatically removed.
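
For example, a basic local (non-S3) one-time snapshot can be taken as follows, where `snapshot-name` is a placeholder:

```
$ rke etcd snapshot-save --config cluster.yml --name snapshot-name
```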
@@ -34,6 +34,7 @@ $ rke etcd snapshot-save \
--access-key S3_ACCESS_KEY \
--secret-key S3_SECRET_KEY \
--bucket-name s3-bucket-name \
--folder s3-folder-name \ # Optional - Available as of v0.3.0
--s3-endpoint s3.amazonaws.com
```
@@ -47,15 +48,23 @@ $ rke etcd snapshot-save \

| `--config` value | Specify an alternate cluster YAML file (default: `cluster.yml`) [$RKE_CONFIG] | |
| `--s3` | Enable backup to S3 | * |
| `--s3-endpoint` value | Specify the S3 endpoint URL (default: "s3.amazonaws.com") | * |
| `--s3-endpoint-ca` value | Specify a path to a CA cert file to connect to a custom S3 endpoint (optional) _Available as of v0.2.5_ | * |
| `--access-key` value | Specify the S3 access key | * |
| `--secret-key` value | Specify the S3 secret key | * |
| `--bucket-name` value | Specify the S3 bucket name | * |
| `--folder` value | Specify the folder inside the bucket where the backup will be stored (optional) _Available as of v0.3.0_ | * |
| `--region` value | Specify the S3 bucket location (optional) | * |
| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{< baseurl >}}/rke/latest/en/config-options/#ssh-agent) | |
| `--ignore-docker-version` | [Disable Docker version check]({{< baseurl >}}/rke/latest/en/config-options/#supported-docker-versions) | |
The `--access-key` and `--secret-key` options are not required if the `etcd` nodes are AWS EC2 instances that have been configured with a suitable IAM instance profile.
##### Using a custom CA certificate for S3
_Available as of v0.2.5_
The backup snapshot can be stored in a custom S3 backend such as [minio](https://min.io/). If the S3 backend uses a self-signed or custom certificate, provide that certificate with the `--s3-endpoint-ca` option so RKE can connect to the S3 backend.
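
As a sketch, a snapshot save against a hypothetical MinIO endpoint might look like the following; the endpoint, bucket name, and certificate path are placeholders:

```
$ rke etcd snapshot-save \
--config cluster.yml \
--name snapshot-name \
--s3 \
--access-key S3_ACCESS_KEY \
--secret-key S3_SECRET_KEY \
--bucket-name s3-bucket-name \
--s3-endpoint minio.example.com \
--s3-endpoint-ca /path/to/minio-ca.pem
```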
### IAM Support for Storing Snapshots in S3
In addition to API access keys, RKE supports using IAM roles for S3 authentication. The cluster etcd nodes must be assigned an IAM role that has read/write access to the designated backup bucket on S3. Also, the nodes must have network access to the S3 endpoint specified.
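
As a rough sketch of such a role's permissions policy (the bucket name is a placeholder, and the actions can be narrowed further to suit your environment), the following grants read/write access to the backup bucket:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::s3-bucket-name"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::s3-bucket-name/*"]
    }
  ]
}
```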
@@ -105,10 +114,10 @@ $ rke etcd snapshot-save --config cluster.yml --name snapshot-name

| Option | Description |
| --- | --- |
| `--name` value | Specify snapshot name |
| `--config` value | Specify an alternate cluster YAML file (default: `cluster.yml`) [$RKE_CONFIG] |
| `--ssh-agent-auth` | [Use SSH Agent Auth defined by SSH_AUTH_SOCK]({{< baseurl >}}/rke/latest/en/config-options/#ssh-agent) |
| `--ignore-docker-version` | [Disable Docker version check]({{< baseurl >}}/rke/latest/en/config-options/#supported-docker-versions) |
{{% /tab %}}
{{% /tabs %}}