Update Helm 3 catalog docs

Catherine Luse
2020-03-26 15:36:53 -07:00
parent 7cd87eac0e
commit b7e2111d4c
6 changed files with 82 additions and 43 deletions
@@ -12,7 +12,7 @@ Rancher was originally built to work with multiple orchestrators, and it include
Rancher can provision Kubernetes from a hosted provider, provision compute nodes and then install Kubernetes onto them, or import existing Kubernetes clusters running anywhere.
One Rancher server installation can manage hundreds of Kubernetes clusters and thousands of nodes from the same user interface.
Rancher adds significant value on top of Kubernetes, first by centralizing authentication and role-based access control (RBAC) for all of the clusters, giving global admins the ability to control cluster access from one location.
@@ -9,6 +9,13 @@ This section describes how to create backups of your high-availability Rancher i
>**Prerequisites:** {{< requirements_rollback >}}
## RKE Kubernetes Cluster Data
In an RKE installation, the cluster data is replicated on each of three etcd nodes in the cluster, providing redundancy and data duplication in case one of the nodes fails.
<figcaption>Architecture of an RKE Kubernetes Cluster Running the Rancher Management Server</figcaption>
![Architecture of an RKE Kubernetes cluster running the Rancher management server]({{<baseurl>}}/img/rancher/rke-server-storage.svg)
## Backup Outline
Backing up your high-availability Rancher cluster is a process that involves completing multiple tasks.
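One recurring task is snapshotting etcd. As a sketch of how that can be automated (not the full backup procedure), recurring snapshots can be enabled in the RKE `cluster.yml`; the `backup_config` fields below assume RKE v0.2.0 or later, and the interval and retention values are illustrative:

```yaml
services:
  etcd:
    backup_config:
      enabled: true        # take recurring snapshots of etcd
      interval_hours: 12   # illustrative: snapshot every 12 hours
      retention: 6         # illustrative: keep the 6 most recent snapshots
```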
@@ -9,6 +9,13 @@ The database administrator will need to back up the external database, or restor
We recommend configuring the database to take recurring snapshots.
### K3s Kubernetes Cluster Data
One main advantage of this K3s architecture is that it allows an external datastore to hold the cluster data, so the K3s server nodes can be treated as ephemeral.
<figcaption>Architecture of a K3s Kubernetes Cluster Running the Rancher Management Server</figcaption>
![Architecture of a K3s Kubernetes cluster running the Rancher management server]({{<baseurl>}}/img/rancher/k3s-server-storage.svg)
### Creating Snapshots and Restoring Databases from Snapshots
For details on taking database snapshots and restoring your database from them, refer to the official database documentation:
@@ -19,6 +19,7 @@ This section covers the following topics:
- [Prerequisites](#prerequisites)
- [Catalog scopes](#catalog-scopes)
- [Catalog Helm Deployment Versions](#catalog-helm-deployment-versions)
- [Enabling built-in global catalogs](#enabling-built-in-global-catalogs)
- [Adding custom global catalogs](#adding-custom-global-catalogs)
- [Add custom Git repositories](#add-custom-git-repositories)
@@ -41,7 +42,7 @@ To launch a catalog app or a multi-cluster app, you should have at least one of
# Catalog Scopes
Within Rancher, you can manage catalogs at three different scopes. Global catalogs are shared across all clusters and projects. There are some use cases where you might not want to share catalogs between different clusters, or even between projects in the same cluster. By leveraging cluster- and project-scoped catalogs, you can provide applications for specific teams without needing to share them with all clusters and/or projects.
Scope | Description | Available As of |
--- | --- | --- |
@@ -49,6 +50,20 @@ Global | All clusters and all projects can access the Helm charts in this catalo
Cluster | All projects in the specific cluster can access the Helm charts in this catalog | v2.2.0 |
Project | This specific project can access the Helm charts in this catalog | v2.2.0 |
# Catalog Helm Deployment Versions
_Applicable as of v2.4.0_
In November 2019, Helm 3 was released, and some features were deprecated or refactored. It is not fully backwards compatible with Helm 2. Therefore, catalogs in Rancher need to be separated, with each catalog only using one Helm version.
When you create a custom catalog, you will have to configure the catalog to use either Helm 2 or Helm 3. This version cannot be changed later. If the catalog is added with the wrong Helm version, it will need to be deleted and re-added.
When you launch a new app from a catalog, the app will be managed by the catalog's Helm version. A Helm 2 catalog will use Helm 2 to manage all of the apps, and a Helm 3 catalog will use Helm 3 to manage all apps.
By default, catalogs are assumed to be deployed using Helm 2. If you ran an app in Rancher prior to v2.4.0 and then upgrade to Rancher v2.4.0+, the app will still be managed by Helm 2.
Charts that are specific to Helm 2 should only be added to a Helm 2 catalog, and Helm 3 specific charts should only be added to a Helm 3 catalog.
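Under the hood, each catalog is a `Catalog` resource in the Rancher management API, and its Helm version is recorded on the catalog itself. The sketch below is illustrative only: the `example-org` repository is hypothetical, and the `helmVersion` field name and `helm_v3` value are assumptions about the v3 management API, so verify them against your Rancher version (most users set this through the **Add Catalog** form in the UI instead):

```yaml
apiVersion: management.cattle.io/v3
kind: Catalog
metadata:
  name: helm3-charts                                # hypothetical catalog name
spec:
  url: https://github.com/example-org/charts.git    # hypothetical repository
  branch: master                                    # defaults to master if omitted
  helmVersion: helm_v3                              # assumed value; omitting it implies Helm 2
```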
# Enabling Built-in Global Catalogs
Within Rancher, there are default catalogs packaged as part of Rancher. These can be enabled or disabled by an administrator.
@@ -57,19 +72,9 @@ Within Rancher, there are default catalogs packaged as part of Rancher. These ca
2. Toggle the default catalogs that you want to use to a setting of **Enabled**.
- **Library:** The Library Catalog includes charts curated by Rancher. Rancher stores charts in a Git repository to expedite the fetch and update of charts. This catalog features Rancher Charts, which include some [notable advantages]({{<baseurl>}}/rancher/v2.x/en/catalog/custom/#chart-types) over native Helm charts.
- **Helm Stable:** This catalog, which is maintained by the Kubernetes community, includes native [Helm charts](https://helm.sh/docs/chart_template_guide/). This catalog features the largest pool of apps.
- **Helm Incubator:** Similar in user experience to Helm Stable, but this catalog is filled with applications in **beta**.
**Result**: The chosen catalogs are enabled. Wait a few minutes for Rancher to replicate the catalog charts. When replication completes, you'll be able to see them in any of your projects by selecting **Apps** from the main navigation bar. In versions prior to v2.2.0, you can select **Catalog Apps** from the main navigation bar.
@@ -77,6 +82,8 @@ Within Rancher, there are default catalogs packaged as part of Rancher. These ca
Adding a catalog is as simple as adding a catalog name, a URL and a branch name.
**Prerequisite:** An [admin]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) of Rancher has the ability to add or remove catalogs globally in Rancher.
### Add Custom Git Repositories
The Git URL needs to be one that `git clone` [can handle](https://git-scm.com/docs/git-clone#_git_urls_a_id_urls_a) and must end in `.git`. The branch name must be a branch that exists in the catalog's repository. If no branch name is provided, the `master` branch is used by default. Whenever you add a catalog to Rancher, it will be available immediately.
@@ -91,23 +98,15 @@ In Rancher, you can add the custom Helm chart repository with only a catalog nam
### Add Private Git/Helm Chart Repositories
_Available as of v2.2.0_
Private catalog repositories can be added using credentials like Username and Password. You may also want to use an OAuth token if your Git or Helm repository server supports that.
[Read More About Adding Private Git/Helm Catalogs]({{<baseurl>}}/rancher/v2.x/en/catalog/custom/#private-repositories)
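As a rough sketch of what a private catalog looks like as a resource (the `username` and `password` field names are assumptions about the v3 management API, and the repository URL and credentials are hypothetical):

```yaml
apiVersion: management.cattle.io/v3
kind: Catalog
metadata:
  name: team-private-charts                         # hypothetical catalog name
spec:
  url: https://git.example.com/team/charts.git      # hypothetical private repository
  branch: master
  username: ci-bot                                  # hypothetical; or use an OAuth token instead
  password: "<token-or-password>"                   # placeholder credential
```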
<!--There are two types of catalogs that can be added into Rancher. There are global catalogs and project catalogs. In a global catalog, the catalog templates are available in *all* projects. In a project catalog, the catalog charts are only available in the project that the catalog is added to.
An [admin]({{<baseurl>}}/rancher/v2.x/en/admin-settings/#global-Permissions) of Rancher has the ability to add or remove catalogs globally in Rancher.
NEEDS TO BE FIXED FOR 2.0: Any [users]({{site.baseurl}}/rancher/{{page.version}}/{{page.lang}}/configuration/accounts/#account-types) of a Rancher environment has the ability to add or remove environment catalogs in their respective Rancher environment in **Catalog** -> **Manage**.
-->
1. From the **Global** view, choose **Tools > Catalogs** in the navigation bar. In versions prior to v2.2.0, you can select **Catalogs** directly in the navigation bar.
2. Click **Add Catalog**.
3. Complete the form and click **Create**.
**Result:** Your catalog is added to Rancher.
# Launching Catalog Applications
@@ -130,7 +129,7 @@ After you've either enabled the built-in catalogs or added your own custom catal
* For native Helm charts (i.e., charts from the **Helm Stable** or **Helm Incubator** catalogs), answers are provided as key-value pairs in the **Answers** section.
* Keys and values are available within **Detailed Descriptions**.
* When entering answers, you must format them using the syntax rules found in [Using Helm: The format and limitations of --set](https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set), as Rancher passes them as `--set` flags to Helm.
For example, when entering an answer that includes two values separated by a comma (i.e., `abc, bcd`), wrap the values with double quotes (i.e., `"abc, bcd"`).
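To make the mapping concrete, here is a hedged sketch of an `App` resource as Rancher might store it; the chart name, version, `externalId` format, and answer keys are hypothetical, and each `answers` entry is what Rancher translates into a `--set key=value` flag:

```yaml
apiVersion: project.cattle.io/v3
kind: App
metadata:
  name: my-grafana                     # hypothetical app name
spec:
  targetNamespace: monitoring
  externalId: "catalog://?catalog=helm&template=grafana&version=4.3.0"  # assumed format
  answers:
    service.type: ClusterIP            # becomes --set service.type=ClusterIP
    ingress.hosts[0]: grafana.example.com
    extraArgs: "abc, bcd"              # comma-separated values must stay quoted
```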
@@ -20,20 +20,35 @@ A user cluster is a downstream Kubernetes cluster that runs your apps and servic
If you have a Docker installation of Rancher, the node running the Rancher server should be separate from your downstream clusters.
In Kubernetes installations of Rancher, the Rancher server cluster should also be separate from the user clusters.
![Separation of Rancher Server from User Clusters]({{<baseurl>}}/img/rancher/rancher-architecture-separation-of-rancher-server.svg)
# Why HA is Better for Rancher in Production
We recommend installing the Rancher server on a high-availability Kubernetes cluster, primarily because it protects the Rancher server data. In a high-availability installation, a load balancer serves as the single point of contact for clients, distributing network traffic across multiple servers in the cluster and helping to prevent any one server from becoming a point of failure. See this [example]({{<baseurl>}}/rancher/v2.x/en/installation/options/nginx/) of how to configure an NGINX server as a basic layer 4 (TCP) load balancer.

We don't recommend installing Rancher in a single Docker container, because if the node goes down, there is no copy of the cluster data available on other nodes and you could lose the data on your Rancher server.

Rancher needs to be installed on either a high-availability [RKE (Rancher Kubernetes Engine)]({{<baseurl>}}/rke/latest/en/) Kubernetes cluster or a high-availability [K3s (5 less than K8s)]({{<baseurl>}}/k3s/latest/en/) Kubernetes cluster. Both RKE and K3s are fully certified Kubernetes distributions.
### K3s Kubernetes Cluster Installations
If you are installing Rancher v2.4 for the first time, we recommend installing it on a K3s Kubernetes cluster. One main advantage of this K3s architecture is that it allows an external datastore to hold the cluster data, so the K3s server nodes can be treated as ephemeral.
The option to install Rancher on a K3s cluster was introduced in Rancher v2.4. K3s is easy to install, uses half the memory of Kubernetes, and is packaged in a single binary of less than 50 MB.
<figcaption>Architecture of a K3s Kubernetes Cluster Running the Rancher Management Server</figcaption>
![Architecture of a K3s Kubernetes Cluster Running the Rancher Management Server]({{<baseurl>}}/img/rancher/k3s-server-storage.svg)
### RKE Kubernetes Cluster Installations
If you are installing Rancher prior to v2.4, you will need to install Rancher on an RKE cluster, in which the cluster data is stored on each node with the etcd role. As of Rancher v2.4, there is no migration path to transition the Rancher server from an RKE cluster to a K3s cluster. All versions of the Rancher server, including v2.4+, can be installed on an RKE cluster.
In an RKE installation, the cluster data is replicated on each of three etcd nodes in the cluster, providing redundancy and data duplication in case one of the nodes fails.
<figcaption>Architecture of an RKE Kubernetes Cluster Running the Rancher Management Server</figcaption>
![Architecture of an RKE Kubernetes cluster running the Rancher management server]({{<baseurl>}}/img/rancher/rke-server-storage.svg)
# Recommended Load Balancer Configuration for Kubernetes Installations
@@ -44,9 +59,8 @@ We recommend the following configurations for the load balancer and Ingress cont
* The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443.
* The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment.
<figcaption>Rancher installed on a Kubernetes cluster with layer 4 load balancer, depicting SSL termination at Ingress controllers</figcaption>
![Rancher HA]({{<baseurl>}}/img/rancher/ha/rancher2ha.svg)
# Environment for Kubernetes Installations
@@ -56,17 +70,31 @@ For the best performance and greater security, we recommend a dedicated Kubernet
It is not recommended to install Rancher on top of a managed Kubernetes service such as Amazon's EKS or Google Kubernetes Engine. These hosted Kubernetes solutions do not expose etcd to a degree that is manageable for Rancher, and their customizations can interfere with Rancher operations.
# Recommended Node Roles for Kubernetes Installations

Our recommendations for the roles of each node differ depending on whether Rancher is installed on a K3s Kubernetes cluster or an RKE Kubernetes cluster.

### K3s Cluster Roles
In K3s clusters, there are two types of nodes: server nodes and agent nodes. Both servers and agents can have workloads scheduled on them. Server nodes run the Kubernetes master.
For the cluster running the Rancher management server, we recommend using two server nodes. Agent nodes are not required.
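For example, each K3s server node points at the same shared external datastore. A minimal sketch, assuming a K3s release that supports the `/etc/rancher/k3s/config.yaml` config file (on older releases, pass the same settings as CLI flags such as `--datastore-endpoint`); the MySQL endpoint and token below are hypothetical:

```yaml
# /etc/rancher/k3s/config.yaml (same file on both server nodes)
datastore-endpoint: "mysql://k3s:password@tcp(db.example.com:3306)/k3s"  # hypothetical external DB
token: "<shared-cluster-secret>"   # both servers must join with the same token
```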
### RKE Cluster Roles
If Rancher is installed on an RKE Kubernetes cluster, the cluster should have three nodes, and each node should have all three Kubernetes roles: etcd, controlplane, and worker.
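For example, a minimal RKE `cluster.yml` for the Rancher server cluster assigns all three roles to each of the three nodes (the addresses and SSH user are hypothetical):

```yaml
nodes:
  - address: 172.16.0.1              # hypothetical node IPs
    user: ubuntu                     # SSH user that RKE connects as
    role: [controlplane, worker, etcd]
  - address: 172.16.0.2
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 172.16.0.3
    user: ubuntu
    role: [controlplane, worker, etcd]
```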
### Contrasting RKE Cluster Architecture for Rancher Server and for Downstream Kubernetes Clusters
Our recommendation for RKE node roles on the Rancher server cluster contrasts with our recommendations for the downstream user clusters that run your apps and services.
Rancher uses RKE as a library when provisioning downstream Kubernetes clusters. Note: The capability to provision downstream K3s clusters will be added in a future version of Rancher.
For downstream Kubernetes clusters, we recommend that each node in a user cluster should have a single role for stability and scalability.
![Kubernetes Roles for Nodes in Rancher Server Cluster vs. User Clusters]({{<baseurl>}}/img/rancher/rancher-architecture-node-roles.svg)
RKE only requires at least one node with each role and does not require nodes to be restricted to one role. However, for the clusters that run your apps, we recommend separate roles for each node so that workloads on worker nodes don't interfere with the Kubernetes master or cluster data as your services scale.
We recommend that downstream user clusters have at least:
@@ -80,9 +108,9 @@ With that said, it is safe to use all three roles on three nodes when setting up
* It maintains multiple instances of the master components by having multiple `controlplane` nodes.
* No other workloads than Rancher itself should be created on this cluster.
Because no additional workloads will be deployed on the Rancher server cluster, in most cases it is not necessary to use the same architecture that we recommend for the scalability and reliability of downstream clusters.
For more best practices for downstream clusters, refer to the [production checklist]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/production) or our [best practices guide.]({{<baseurl>}}/rancher/v2.x/en/best-practices/management/#tips-for-scaling-and-reliability)
# Architecture for an Authorized Cluster Endpoint
@@ -55,8 +55,6 @@ RKE saves the Kubernetes cluster state as a secret. When updating the state, RKE
### Upgrading Kubernetes
> **Note:** RKE does not support rolling back to previous versions.
To upgrade the Kubernetes version of an RKE-provisioned cluster, set the `kubernetes_version` string in the `cluster.yml` to the desired version from the [list of supported Kubernetes versions](#listing-supported-kubernetes-versions) for the specific version of RKE:
```yaml
# Illustrative version string only; list the versions supported by your RKE
# release (e.g., with `rke config --list-version --all`) and choose from that list
kubernetes_version: "v1.17.4-rancher1-1"
```