Merge pull request #1960 from catherineluse/architecture
Revise architecture diagram and overview section
@@ -7,29 +7,38 @@ aliases:
- /rancher/v2.x/en/tasks/global-configuration/catalog/
---

## Catalogs

Rancher provides the ability to use a catalog of Helm charts that make it easy to repeatedly deploy applications.

- **Catalogs** are GitHub repositories or Helm Chart repositories filled with applications that are ready-made for deployment. Applications are bundled in objects called _Helm charts_.
- **Helm charts** are a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.

Rancher improves on Helm catalogs and charts. All native Helm charts work within Rancher, but Rancher adds several enhancements to improve the user experience.

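For readers new to Helm, a chart is just a versioned directory of files. The sketch below shows a minimal layout; the chart name `mychart` and its contents are hypothetical, not taken from any Rancher catalog:

```yaml
# mychart/Chart.yaml -- the chart's metadata file (illustrative values)
name: mychart
version: 0.1.0
description: A minimal example chart
# The surrounding directory layout, shown here as a comment:
#   mychart/
#     Chart.yaml     # this metadata file
#     values.yaml    # default configuration values users can override
#     templates/     # Kubernetes manifests written as Go templates
```

A catalog is simply a Git or Helm repository containing one or more such chart directories.
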
This section covers the following topics:

- [Catalog scopes](#catalog-scopes)
- [Enabling built-in global catalogs](#enabling-built-in-global-catalogs)
- [Adding custom global catalogs](#adding-custom-global-catalogs)
- [Add custom Git repositories](#add-custom-git-repositories)
- [Add custom Helm chart repositories](#add-custom-helm-chart-repositories)
- [Add private Git/Helm chart repositories](#add-private-git-helm-chart-repositories)
- [Launching catalog applications](#launching-catalog-applications)
- [Working with catalogs](#working-with-catalogs)
- [Apps](#apps)
- [Global DNS](#global-dns)
- [Chart compatibility with Rancher](#chart-compatibility-with-rancher)

# Catalog Scopes

Within Rancher, you can manage catalogs at three different scopes. Global catalogs are shared across all clusters and projects. In some use cases, you might not want to share catalogs between different clusters, or even between projects in the same cluster. By leveraging cluster- and project-scoped catalogs, you can provide applications for specific teams without sharing them with all clusters and/or projects.

Scope | Description | Available As of |
--- | --- | --- |
Global | All clusters and all projects can access the Helm charts in this catalog | v2.0.0 |
Cluster | All projects in the specific cluster can access the Helm charts in this catalog | v2.2.0 |
Project | Only the specific project can access the Helm charts in this catalog | v2.2.0 |

# Enabling Built-in Global Catalogs

Within Rancher, there are default catalogs packaged as part of Rancher. These can be enabled or disabled by an administrator.

@@ -53,15 +62,14 @@ Within Rancher, there are default catalogs packaged as part of Rancher. These ca

**Result**: The chosen catalogs are enabled. Wait a few minutes for Rancher to replicate the catalog charts. When replication completes, you can see them in any of your projects by selecting **Apps** from the main navigation bar. In versions prior to v2.2.0, select **Catalog Apps** from the main navigation bar.

# Adding Custom Global Catalogs

Adding a catalog is as simple as adding a catalog name, a URL, and a branch name.

### Add Custom Git Repositories

The Git URL needs to be one that `git clone` [can handle](https://git-scm.com/docs/git-clone#_git_urls_a_id_urls_a) and must end in `.git`. The branch name must be a branch that exists in your catalog URL. If no branch name is provided, the `master` branch is used by default. Whenever you add a catalog to Rancher, it is available immediately.

### Add Custom Helm Chart Repositories

A Helm chart repository is an HTTP server that houses one or more packaged charts. Any HTTP server that can serve YAML files and tar files and that can answer GET requests can be used as a repository server.

@@ -69,7 +77,7 @@ Helm comes with built-in package server for developer testing (helm serve). The

In Rancher, you can add a custom Helm chart repository with only a catalog name and the URL address of the chart repository.

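Concretely, a chart repository is a base URL plus an `index.yaml` describing the packaged charts it serves. The sketch below is illustrative; the chart name, URL, and digest are hypothetical:

```yaml
# index.yaml at the repository root (typically generated with `helm repo index`)
apiVersion: v1
entries:
  mychart:
  - name: mychart
    version: 0.1.0
    urls:
    - https://charts.example.com/mychart-0.1.0.tgz
    digest: 9f2c...    # checksum of the packaged .tgz
```

When adding such a repository to Rancher, the catalog URL is the base URL (here, `https://charts.example.com`), not the `index.yaml` itself.
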
### Add Private Git/Helm Chart Repositories

_Available as of v2.2.0_

In Rancher v2.2.0, you can add private catalog repositories using credentials like Username and Password. You may also want to use the

@@ -90,7 +98,7 @@ NEEDS TO BE FIXED FOR 2.0: Any [users]({{site.baseurl}}/rancher/{{page.version}}

**Result**: Your catalog is added to Rancher.

# Launching Catalog Applications

After you've either enabled the built-in catalogs or added your own custom catalog, you can start launching any catalog application.

@@ -111,7 +119,7 @@ After you've either enabled the built-in catalogs or added your own custom catal

* For native Helm charts (i.e., charts from the **Helm Stable** or **Helm Incubator** catalogs), answers are provided as key-value pairs in the **Answers** section.
* Keys and values are available within **Detailed Descriptions**.
* When entering answers, you must format them using the syntax rules found in [Using Helm: The format and limitations of --set](https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of-set), as Rancher passes them as `--set` flags to Helm.

For example, when entering an answer that includes two values separated by a comma (i.e., `abc, bcd`), wrap the values with double quotes (i.e., `"abc, bcd"`).

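Since answers are passed through as `--set` flags, Helm's comma rules apply. The sketch below (hypothetical chart and key names) prints the escaped form for reference:

```shell
# Helm's --set parser splits on unescaped commas, so
#   --set servers=abc,bcd
# is read as two assignments ("servers=abc" and a valueless "bcd") and fails.
# Escaping the comma keeps it inside a single value:
#   helm install ./my-chart --set servers="abc\,bcd"
# Print the escaped flag value exactly as it should reach Helm:
printf '%s\n' 'servers=abc\,bcd'
```
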
@@ -121,31 +129,21 @@ After you've either enabled the built-in catalogs or added your own custom catal

By creating a customized repository with added files, Rancher improves on Helm repositories and charts. All native Helm charts can work within Rancher, but Rancher adds several enhancements to improve their user experience.

# Working with Catalogs

There are two types of catalogs in Rancher. Learn more about each type:

* [Built-in Global Catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/built-in/)
* [Custom Catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/)

### Apps

In Rancher, applications are deployed from the templates in a catalog. Rancher supports two types of applications:

* [Multi-cluster applications]({{< baseurl >}}/rancher/v2.x/en/catalog/multi-cluster-apps/)
* [Applications deployed in a specific project]({{< baseurl >}}/rancher/v2.x/en/catalog/apps)

### Global DNS

_Available as of v2.2.0_

@@ -153,6 +151,6 @@ When creating applications that span multiple Kubernetes clusters, a Global DNS

For more information on how to use this feature, see [Global DNS]({{< baseurl >}}/rancher/v2.x/en/catalog/globaldns/).

### Chart Compatibility with Rancher

Charts now support the fields `rancher_min_version` and `rancher_max_version` in the [`questions.yml` file](https://github.com/rancher/integration-test-charts/blob/master/charts/chartmuseum/v1.6.0/questions.yml) to specify the versions of Rancher that the chart is compatible with. When using the UI, only app versions that are valid for the running version of Rancher are shown. API validation ensures that apps that don't meet the Rancher requirements cannot be launched. An app that is already running will not be affected by a Rancher upgrade if the newer Rancher version does not meet the app's requirements.

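For example, a chart could pin its supported Rancher range at the top of its `questions.yml`. The version numbers and the question shown below are illustrative only:

```yaml
# questions.yml (illustrative): this chart version is only offered on
# Rancher >= 2.2.0 and <= 2.3.99
rancher_min_version: 2.2.0
rancher_max_version: 2.3.99
questions:
- variable: replicaCount
  label: Replica Count
  type: int
  default: 1
```
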
@@ -24,11 +24,13 @@ kubectl --kubeconfig /custom/path/kube.config get pods

## Accessing Rancher Launched Kubernetes clusters without Rancher server running

By default, Rancher generates a kubeconfig file that proxies through the Rancher server to connect to the Kubernetes API server on a cluster.

For [Rancher Launched Kubernetes]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters) clusters that have the authorized cluster endpoint enabled, Rancher generates extra context(s) in the kubeconfig file in order to connect directly to the cluster.

> **Note:** By default, all Rancher Launched Kubernetes clusters have [Authorized Cluster Endpoint]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#authorized-cluster-endpoint) enabled. The authorized cluster endpoint only works on Rancher-launched Kubernetes clusters, that is, clusters where Rancher [used RKE]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/#tools-for-provisioning-kubernetes-clusters) to provision the cluster. It is not available for clusters in a hosted Kubernetes provider, such as Amazon's EKS.

To find the name of the context(s), run:

@@ -38,6 +40,9 @@ CURRENT NAME CLUSTER AUTHINFO N

```
*         my-cluster                  my-cluster                  user-46tmn
          my-cluster-controlplane-1   my-cluster-controlplane-1   user-46tmn
```

For more information on how the authorized cluster endpoint works, refer to the [architecture section.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/4-authorized-cluster-endpoint)

We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture-recommendations/#architecture-for-an-authorized-cluster-endpoint)

### Clusters with FQDN defined as an Authorized Cluster Endpoint

@@ -65,7 +70,9 @@ kubectl --kubeconfig /custom/path/kube.config --context <CLUSTER_NAME>-<NODE_NAM

### kube-api-auth

The `kube-api-auth` microservice is deployed to provide the user authentication functionality for the [authorized cluster endpoint.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/4-authorized-cluster-endpoint) When you access the user cluster using `kubectl`, the cluster's Kubernetes API server authenticates you by using the `kube-api-auth` service as a webhook.

During cluster provisioning, the file `/etc/kubernetes/kube-api-authn-webhook.yaml` is deployed and `kube-apiserver` is configured with `--authentication-token-webhook-config-file=/etc/kubernetes/kube-api-authn-webhook.yaml`. This configures `kube-apiserver` to query `http://127.0.0.1:6440/v1/authenticate` to determine authentication for bearer tokens.

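The webhook config file is a kubeconfig-format document that points `kube-apiserver` at the local `kube-api-auth` endpoint. The sketch below is a hedged reconstruction of roughly what `/etc/kubernetes/kube-api-authn-webhook.yaml` contains, based only on the URL and flag described above; field values may differ in a real deployment:

```yaml
# Kubeconfig-style webhook configuration consumed via
# --authentication-token-webhook-config-file (reconstructed sketch)
apiVersion: v1
kind: Config
clusters:
- name: Default
  cluster:
    server: http://127.0.0.1:6440/v1/authenticate   # kube-api-auth endpoint
users:
- name: Default
  user: {}
contexts:
- name: webhook
  context:
    cluster: Default
    user: Default
current-context: webhook
```

With a file like this in place, the API server sends each bearer token to the endpoint as a `TokenReview` request and honors the authenticated/unauthenticated verdict in the response.
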
@@ -8,44 +8,11 @@ aliases:
- /rancher/v2.x/en/tasks/clusters/creating-a-cluster/
---

Rancher simplifies the creation of clusters by allowing you to create them through the Rancher UI rather than more complex alternatives. Rancher provides multiple options for launching a cluster. Use the option that best fits your use case.

This section assumes a basic familiarity with Docker and Kubernetes. For a brief explanation of how Kubernetes components work together, refer to the [concepts]({{<baseurl>}}/rancher/v2.x/en/overview/concepts) page.

For a conceptual overview of how the Rancher server provisions clusters and what tools it uses to provision them, refer to the [architecture]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/) page.

## Cluster Creation Options

@@ -61,35 +28,46 @@ Options include:

<!-- /TOC -->

# Hosted Kubernetes Cluster

If you use a Kubernetes provider such as Google GKE, Rancher integrates with its cloud APIs, allowing you to create and manage a hosted cluster from the Rancher UI.

[Hosted Kubernetes Cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters)

# Rancher Launched Kubernetes

The [Rancher Kubernetes Engine (RKE)]({{<baseurl>}}/rke/latest/en/) allows you to create a Kubernetes cluster on your own nodes. RKE is Rancher’s own lightweight Kubernetes installer.

In RKE clusters, Rancher manages the deployment of Kubernetes. These clusters can be deployed on any bare metal server, cloud provider, or virtualization platform.

These nodes can be dynamically provisioned through Rancher's UI, which calls [Docker Machine](https://docs.docker.com/machine/) to launch nodes on various cloud providers.

If you already have a node that you want to add to an RKE cluster, you can add it to the cluster by running a Rancher agent container on it.

For more information, refer to the section on [RKE clusters.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/)

### Nodes Hosted by an Infrastructure Provider

Using Rancher, you can create pools of nodes based on a [node template]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates). This template defines the parameters used to launch nodes in your cloud providers.

The benefit of using nodes hosted by an infrastructure provider is that if a node loses connectivity with the cluster, Rancher automatically replaces it, thus maintaining the expected cluster configuration.

The cloud providers available for creating a node template are decided based on the [node drivers]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-drivers) active in the Rancher UI.

For more information, refer to the section on [nodes hosted by an infrastructure provider.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/)

### Custom Nodes

You can bring any nodes you want to Rancher and use them to create a cluster. Clusters created with custom nodes are also called custom clusters.

These nodes include on-premise bare metal servers, cloud-hosted virtual machines, or on-premise virtual machines.

[Custom Nodes]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/)

# Import Existing Cluster

Users can import an existing Kubernetes cluster into Rancher.

Note that Rancher does not automate the provisioning, scaling, or upgrade of imported clusters. All other Rancher features, including management of cluster, policy, and workloads, are available for imported clusters.

For more information, refer to the section on [importing existing clusters.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/)

@@ -8,6 +8,8 @@ There are two different agent resources deployed on Rancher managed clusters:

- [cattle-cluster-agent](#cattle-cluster-agent)
- [cattle-node-agent](#cattle-node-agent)

For a conceptual overview of how the Rancher server provisions clusters and what tools it uses to provision them, refer to the [architecture]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/) page.

### cattle-cluster-agent

The `cattle-cluster-agent` is used to connect to the Kubernetes API of [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters. The `cattle-cluster-agent` is deployed using a Deployment resource.

@@ -71,7 +71,15 @@ See the [RKE documentation on private registries]({{< baseurl >}}/rke/latest/en/

_Available as of v2.2.0_

Authorized Cluster Endpoint can be used to directly access the Kubernetes API server, without requiring communication through Rancher.

> The authorized cluster endpoint only works on Rancher-launched Kubernetes clusters. In other words, it only works in clusters where Rancher [used RKE]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/#tools-for-provisioning-kubernetes-clusters) to provision the cluster. It is not available for clusters in a hosted Kubernetes provider, such as Amazon's EKS.

This is enabled by default in Rancher-launched Kubernetes clusters, using the IP of the node with the `controlplane` role and the default Kubernetes self-signed certificates.

For more detail on how an authorized cluster endpoint works and why it is used, refer to the [architecture section.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/4-authorized-cluster-endpoint)

We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture-recommendations/#architecture-for-an-authorized-cluster-endpoint)

### Advanced Cluster Options

@@ -36,7 +36,7 @@ CLI | https://github.com/rancher/cli | This repository is the source code for th

Telemetry repository | https://github.com/rancher/telemetry | This repository is the source for the Telemetry binary.
loglevel repository | https://github.com/rancher/loglevel | This repository is the source of the loglevel binary, used to dynamically change log levels.

To see all libraries/projects used in Rancher, see the [`go.mod` file](https://github.com/rancher/rancher/blob/master/go.mod) in the `rancher/rancher` repository.

<br/>
<sup>Rancher components used for provisioning/managing Kubernetes clusters.</sup>

@@ -17,17 +17,14 @@ This section is about installations of Rancher server in an air gapped environme

Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machines. If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/).

> **Prerequisites For HA Install Only:**
>
> The following CLI tools are required for the HA install. Make sure these tools are installed on your workstation and available in your `$PATH`.
>
> * [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) - Kubernetes command-line tool.
> * [rke]({{< baseurl >}}/rke/latest/en/installation/) - Rancher Kubernetes Engine, CLI for building Kubernetes clusters.
> * [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes. Refer to the [Helm version requirements]({{<baseurl>}}/rancher/v2.x/en/installation/helm-version) to choose a version of Helm to install Rancher.

## Installation Outline

@@ -0,0 +1,54 @@
---
title: Choosing an Installation Method
weight: 2
---

We recommend using [Helm,]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/concepts/#about-helm) a Kubernetes package manager, to install Rancher on a dedicated Kubernetes cluster. This is called a high-availability (HA) installation because increased availability is achieved by running Rancher on multiple nodes.

For testing and demonstration purposes, Rancher can be installed with Docker on a single node.

There are also separate instructions for installing Rancher in an air gap environment or behind an HTTP proxy:

Level of Internet Access | Installing on a Kubernetes Cluster - Strongly Recommended | Installing in a Single Docker Container
---------------------------|-----------------------------|------------------
With direct access to the Internet | [Docs]({{<baseurl>}}/rancher/v2.x/en/installation/ha/) | [Docs]({{<baseurl>}}/rancher/v2.x/en/installation/single-node/)
Behind an HTTP proxy | These [docs,]({{<baseurl>}}/rancher/v2.x/en/installation/ha/) plus this [configuration]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#http-proxy) | These [docs,]({{<baseurl>}}/rancher/v2.x/en/installation/single-node/) plus this [configuration]({{<baseurl>}}/rancher/v2.x/en/installation/single-node/proxy/)
In an air gap environment | [Docs]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap/) | [Docs]({{<baseurl>}}/rancher/v2.x/en/installation/air-gap/)

### Why We Recommend an HA Installation

An HA installation of Rancher is recommended for production because it protects the Rancher management server's data from being lost. A single-node installation may be used for development and testing purposes, but there is no migration path from a single-node installation to an HA installation. Therefore, you may want to use an HA installation from the start.

> For the best performance and greater security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads.

For more architecture recommendations, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture-recommendations)

### How an HA Rancher Installation Works

In a typical HA Rancher installation, Kubernetes is first installed on three nodes that are hosted in an infrastructure provider such as Amazon's EC2 or Google Compute Engine.

Then Helm is used to install Rancher on top of the Kubernetes cluster. Helm uses Rancher's Helm chart to install a replica of Rancher on each of the three nodes in the Kubernetes cluster. We recommend using a load balancer to direct traffic to each replica of Rancher in the cluster, in order to increase Rancher's availability.

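
To make this concrete, the commands below are a minimal sketch of such an installation using Helm 3 syntax; the chart repository channel and the hostname `rancher.example.com` are illustrative, so follow the installation docs linked above for the exact charts and options for your version.

```shell
# Add a Rancher chart repository (the "latest" channel is one of several).
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update

# Install Rancher into the cattle-system namespace; the chart runs a replica
# of Rancher on the nodes in the cluster. The hostname must resolve to the
# load balancer in front of the nodes.
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com
```
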
The Rancher server data is stored on etcd. This etcd database also runs on all three nodes, and requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can fail, requiring the cluster to be restored from backup.

For information on how Rancher works, regardless of the installation method, refer to the [overview section.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture)
# More Options for HA Installations
Refer to the [Helm chart options]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/) for details on installing HA Rancher with other configurations, including:

- With [API auditing to record all transactions]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#api-audit-log)
- With [TLS termination on a load balancer]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#external-tls-termination)
- With a [custom Ingress]({{<baseurl>}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#customizing-your-ingress)
# More Options for Single Node Installations
Refer to the [single node installation docs]({{<baseurl>}}/rancher/v2.x/en/installation/single-node/) for details on other configurations, including:

- With [API auditing to record all transactions]({{<baseurl>}}/rancher/v2.x/en/installation/single-node/#api-audit-log)
- With an [external load balancer]({{<baseurl>}}/rancher/v2.x/en/installation/single-node/single-node-install-external-lb/)
- With a [persistent data store]({{<baseurl>}}/rancher/v2.x/en/installation/single-node/#persistent-data)
# More Kubernetes Options
In the Rancher installation instructions, we recommend using RKE (Rancher Kubernetes Engine) to set up a Kubernetes cluster before installing Rancher on the cluster. RKE has many configuration options for customizing the Kubernetes cluster to suit your specific environment. Please see the [RKE Documentation]({{<baseurl>}}/rke/latest/en/config-options/) for the full list of options and capabilities.
> **Important:** The Rancher management server can only be run on an RKE-managed Kubernetes cluster. Use of Rancher on hosted Kubernetes or other providers is not supported.

> **Important:** For the best performance and security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads.
## Recommended Architecture
We recommend the following architecture and configurations for the load balancer and Ingress controllers:

* DNS for Rancher should resolve to a Layer 4 load balancer (TCP).
* The load balancer should forward ports TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster.

[RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/installation/ha/rke-add-on/)
> **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
> Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).

For a high-availability installation of Rancher, which is recommended for production, Rancher is installed using a Helm chart on a Kubernetes cluster.

For single node installations of Rancher, which are used for development and testing, you will install Rancher as a Docker image.
# Single Node Installs
{{% tabs %}}
{{% tab "Docker Images for Single Node/Docker Installs" %}}

When performing [single-node installs]({{< baseurl >}}/rancher/v2.x/en/installation/single-node), upgrades, or rollbacks, you can use _tags_ to install a specific version of Rancher.

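
For example, a single-node install pinned to a specific version is the documented `docker run` command with an explicit image tag; `v2.2.2` below is illustrative, so substitute the version you want.

```shell
# Run a specific Rancher version on a single node by pinning the image tag.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:v2.2.2
```
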
### Server Tags
>- Want to install an alpha release for preview? Install using one of the alpha tags listed on our [announcements page](https://forums.rancher.com/c/announcements) (e.g., `v2.2.0-alpha1`).
>
> _Caveat:_ Alpha releases cannot be upgraded to or from any other release.
# High Availability Installs
{{% /tab %}}
{{% tab "Helm Charts for HA/Kubernetes Installs" %}}

When installing, upgrading, or rolling back Rancher Server in a [high availability configuration]({{< baseurl >}}/rancher/v2.x/en/installation/ha/), Rancher server is installed using a Helm chart on a Kubernetes cluster. Therefore, as you prepare to install or upgrade a high availability Rancher configuration, you must add a Helm chart repository that contains the charts for installing Rancher.

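
As a sketch, adding such a repository is a single `helm repo add`; the `stable` channel shown here is one of the documented channels, so check the docs for the channel appropriate to your release.

```shell
# Register the Rancher "stable" chart repository and verify it is listed.
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo list
```
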
After installing Rancher, if you want to change which Helm chart repository to install from, continue to follow the steps to [upgrade Rancher]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/) from the new Helm chart repository.

{{% /tab %}}
{{% /tabs %}}

---
Rancher is a container management platform built for organizations that deploy containers in production. Rancher makes it easy to run Kubernetes everywhere, meet IT requirements, and empower DevOps teams.
# Run Kubernetes Everywhere
Kubernetes has become the container orchestration standard. Most cloud and virtualization vendors now offer it as standard infrastructure. Rancher users have the choice of creating Kubernetes clusters with Rancher Kubernetes Engine (RKE) or cloud Kubernetes services, such as GKE, AKS, and EKS. Rancher users can also import and manage their existing Kubernetes clusters created using any Kubernetes distribution or installer.
# Meet IT requirements
Rancher supports centralized authentication, access control, and monitoring for all Kubernetes clusters under its control. For example, you can:
- Set up and enforce access control and security policies across all users, groups, projects, clusters, and clouds.
- View the health and capacity of your Kubernetes clusters from a single pane of glass.
# Empower DevOps Teams
Rancher provides an intuitive user interface for DevOps engineers to manage their application workloads. The user does not need to have in-depth knowledge of Kubernetes concepts to start using Rancher. The Rancher catalog contains a set of useful DevOps tools. Rancher is certified with a wide selection of cloud native ecosystem products, including security tools, monitoring systems, container registries, and storage and networking drivers.

The following figure illustrates the role Rancher plays in IT and DevOps organizations. Each team deploys their applications on the public or private clouds they choose. IT administrators gain visibility and enforce policies across all users, clusters, and clouds.

![Platform]({{<baseurl>}}/img/rancher/platform.png)
# Features of the Rancher API Server
The Rancher API server is built on top of an embedded Kubernetes API server and an etcd database. It implements the following functionalities:

### Authorization and Role-Based Access Control

- **User management:** The Rancher API server [manages user identities]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/) that correspond to external authentication providers like Active Directory or GitHub, in addition to local users.
- **Authorization:** The Rancher API server manages [access control]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/) and [security]({{<baseurl>}}/rancher/v2.x/en/admin-settings/pod-security-policies/) policies.

### Working with Kubernetes

- **Provisioning Kubernetes clusters:** The Rancher API server can [provision Kubernetes]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/) on existing nodes, or perform [Kubernetes upgrades.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/editing-clusters/#upgrading-kubernetes)
- **Catalog management:** Rancher provides the ability to use a [catalog of Helm charts]({{<baseurl>}}/rancher/v2.x/en/catalog/) that make it easy to repeatedly deploy applications.
- **Managing projects:** A project is a group of multiple namespaces and access control policies within a cluster. A project is a Rancher concept, not a Kubernetes concept, which allows you to manage multiple namespaces as a group and perform Kubernetes operations in them. The Rancher UI provides features for [project administration]({{<baseurl>}}/rancher/v2.x/en/project-admin/) and for [managing applications within projects.]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/)
- **Pipelines:** Setting up a [pipeline]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/pipelines/) can help developers deliver new software as quickly and efficiently as possible. Within Rancher, you can configure pipelines for each of your Rancher projects.
- **Istio:** Our [integration with Istio]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/istio/) is designed so that a Rancher operator, such as an administrator or cluster owner, can deliver Istio to developers. Then developers can use Istio to enforce security policies, troubleshoot problems, or manage traffic for green/blue deployments, canary deployments, or A/B testing.

### Working with Cloud Infrastructure

- **Tracking nodes:** The Rancher API server tracks identities of all the [nodes]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/nodes/) in all clusters.
- **Setting up infrastructure:** When configured to use a cloud provider, Rancher can dynamically provision [new nodes]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/) and [persistent storage]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/) in the cloud.

### Cluster Visibility

- **Logging:** Rancher can integrate with a variety of popular logging services and tools that exist outside of your Kubernetes clusters. Logging can be set up [at the cluster level]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/) or [at the project level.]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/logging/)
- **Monitoring:** Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with Prometheus, a leading open-source monitoring solution. Monitoring can be configured [at the cluster level]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) or [at the project level.]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/)
- **Alerting:** To keep your clusters and applications healthy and your organizational productivity moving forward, you need to stay informed of events occurring in your clusters and projects, both planned and unplanned. To help you stay informed of these events, you can configure alerts [at the cluster level]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/alerts/) or [at the project level.]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/alerts/)
# Editing Downstream Clusters with Rancher
The options and settings available for an existing cluster change based on the method that you used to provision it. For example, only clusters [provisioned by RKE]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) have **Cluster Options** available for editing.

After a cluster is created with Rancher, a cluster administrator can manage cluster membership, enable pod security policies, and manage node pools, among [other options.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/editing-clusters/)

The following table shows an overview of the options and settings available for each cluster type:

Cluster Type | Manage Member Roles | Edit Cluster Options | Manage Node Pools
---------|----------|---------|---------
[RKE-Launched]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#rancher-launched-kubernetes) | ✓ | ✓ | ✓
[Hosted Kubernetes Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#hosted-kubernetes-cluster) | ✓ | |
[Imported]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#import-existing-cluster) | ✓ | |

---
title: Architecture Recommendations
weight: 3
---

This page describes our recommendations for how to install Rancher.

These recommendations focus on how to set up Rancher on a high-availability (HA) cluster. If you are installing Rancher on a single node, the main architecture recommendation that applies to your installation is that the node running Rancher should be [separate from downstream clusters.](#separation-of-rancher-and-user-clusters)

This section covers the following topics:

- [Separation of Rancher and User Clusters](#separation-of-rancher-and-user-clusters)
- [Why HA is Better for Rancher in Production](#why-ha-is-better-for-rancher-in-production)
- [Recommended Load Balancer Configuration for HA Installations](#recommended-load-balancer-configuration-for-ha-installations)
- [Environment for HA Installations](#environment-for-ha-installations)
- [Recommended Node Roles for HA Installations](#recommended-node-roles-for-ha-installations)
- [Architecture for an Authorized Cluster Endpoint](#architecture-for-an-authorized-cluster-endpoint)
# Separation of Rancher and User Clusters
A user cluster is a downstream Kubernetes cluster that runs your apps and services.

If you have a single node installation of Rancher, the node running the Rancher server should be separate from your downstream clusters.

In HA installations of Rancher, the Rancher server cluster should also be separate from the user clusters.

![Separation of Rancher Server from User Clusters]({{<baseurl>}}/img/rancher/rancher-architecture-separation-of-rancher-server.svg)
# Why HA is Better for Rancher in Production
We recommend installing the Rancher server on a three-node Kubernetes cluster for production, primarily because it protects the data stored on etcd. The Rancher server stores its data in etcd in both single-node and HA installations.

When Rancher is installed on a single node, if the node goes down, there is no copy of the etcd data available on other nodes and you could lose the data on your Rancher server.

By contrast, in the high-availability installation:

- The etcd data is replicated on three nodes in the cluster, providing redundancy and data duplication in case one of the nodes fails.
- A load balancer serves as the single point of contact for clients, distributing network traffic across multiple servers in the cluster and helping to prevent any one server from becoming a point of failure. See this [example]({{<baseurl>}}/rancher/v2.x/en/installation/ha/create-nodes-lb/nginx/) of how to configure an NGINX server as a basic Layer 4 (TCP) load balancer.
# Recommended Load Balancer Configuration for HA Installations
We recommend the following configurations for the load balancer and Ingress controllers:

* The DNS for Rancher should resolve to a Layer 4 load balancer (TCP).
* The load balancer should forward ports TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster.
* The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443.
* The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment.

![Rancher HA]({{<baseurl>}}/img/rancher/ha/rancher2ha.svg)
<sup>HA Rancher install with Layer 4 load balancer (TCP), depicting SSL termination at ingress controllers</sup>
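
A minimal sketch of the Layer 4 configuration above, using an NGINX `stream` block (the node IPs are placeholders; prefer the full NGINX example linked on this page):

```nginx
# nginx.conf fragment: Layer 4 (TCP) load balancing to the three Rancher nodes.
# The IPs below are placeholders for your cluster nodes.
stream {
    upstream rancher_http {
        server 10.0.0.1:80;
        server 10.0.0.2:80;
        server 10.0.0.3:80;
    }
    server {
        listen 80;
        proxy_pass rancher_http;
    }

    upstream rancher_https {
        server 10.0.0.1:443;
        server 10.0.0.2:443;
        server 10.0.0.3:443;
    }
    server {
        listen 443;
        proxy_pass rancher_https;
    }
}
```
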
# Environment for HA Installations
It is strongly recommended to install Rancher on a Kubernetes cluster on hosted infrastructure such as Amazon's EC2 or Google Compute Engine.

For the best performance and greater security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads.

It is not recommended to install Rancher on top of a managed Kubernetes service such as Amazon's EKS or Google Kubernetes Engine. These hosted Kubernetes solutions do not expose etcd to a degree that is manageable for Rancher, and their customizations can interfere with Rancher operations.
# Recommended Node Roles for HA Installations
We recommend installing Rancher on a Kubernetes cluster in which each node has all three Kubernetes roles: etcd, controlplane, and worker.
### Comparing Node Roles for the Rancher Server Cluster and User Clusters
Our recommendation for node roles on the Rancher server cluster contrasts with our recommendation for the downstream user clusters that run your apps and services. We recommend that each node in a user cluster have a single role for stability and scalability.

![Kubernetes Roles for Nodes in Rancher Server Cluster vs. User Clusters]({{<baseurl>}}/img/rancher/rancher-architecture-node-roles.svg)

Kubernetes only requires at least one node with each role and does not require nodes to be restricted to one role. However, for the clusters that run your apps, we recommend separate roles for each node so that workloads on worker nodes don't interfere with the Kubernetes master or cluster data as your services scale.

We recommend that downstream user clusters have at least:

- **Three nodes with only the etcd role** to maintain a quorum if one node is lost, making the state of your cluster highly available
- **Two nodes with only the controlplane role** to make the master component highly available
- **One or more nodes with only the worker role** to run the Kubernetes node components, as well as the workloads for your apps and services

With that said, it is safe to use all three roles on three nodes when setting up the Rancher server because:

* It allows one `etcd` node failure.
* It maintains multiple instances of the master components by having multiple `controlplane` nodes.
* No other workloads than Rancher itself should be created on this cluster.

Because no additional workloads will be deployed on the Rancher server cluster, in most cases it is not necessary to use the same architecture that we recommend for the scalability and reliability of user clusters.

For more best practices for user clusters, refer to the [production checklist]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/production) or our [best practices guide.]({{<baseurl>}}/rancher/v2.x/en/best-practices/management/#tips-for-scaling-and-reliability)
# Architecture for an Authorized Cluster Endpoint
If you are using an [authorized cluster endpoint,]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/#4-authorized-cluster-endpoint) we recommend creating an FQDN pointing to a load balancer which balances traffic across your nodes with the `controlplane` role.

If you are using private CA signed certificates on the load balancer, you have to supply the CA certificate, which will be included in the generated kubeconfig file to validate the certificate chain. See the documentation on [kubeconfig files]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/kubeconfig/) and [API keys]({{<baseurl>}}/rancher/v2.x/en/user-settings/api-keys/#creating-an-api-key) for more information.

---
title: Architecture
weight: 1
---
This section focuses on the Rancher server, its components, and how Rancher communicates with downstream Kubernetes clusters.

For information on the different ways that Rancher can be installed, refer to the [section on choosing an installation method.]({{<baseurl>}}/rancher/v2.x/en/installation/choosing-installation)

For a list of the main features of the Rancher API server, refer to the [overview section.]({{<baseurl>}}/rancher/v2.x/en/overview/#features-of-the-rancher-api-server)

For guidance about setting up the underlying infrastructure for the Rancher server, refer to the [architecture recommendations.]({{<baseurl>}}/rancher/v2.x/en/overview/architecture-recommendations)

> This section assumes a basic familiarity with Docker and Kubernetes. For a brief explanation of how Kubernetes components work together, refer to the [concepts]({{<baseurl>}}/rancher/v2.x/en/overview/concepts) page.

This section covers the following topics:

- [Rancher server architecture](#rancher-server-architecture)
- [Communicating with downstream user clusters](#communicating-with-downstream-user-clusters)
  - [The authentication proxy](#1-the-authentication-proxy)
  - [Cluster controllers and cluster agents](#2-cluster-controllers-and-cluster-agents)
  - [Node agents](#3-node-agents)
  - [Authorized cluster endpoint](#4-authorized-cluster-endpoint)
- [Important files](#important-files)
- [Tools for provisioning Kubernetes clusters](#tools-for-provisioning-kubernetes-clusters)
- [Rancher server components and source code](#rancher-server-components-and-source-code)
# Rancher Server Architecture

The majority of Rancher 2.x software runs on the Rancher Server. Rancher Server includes all the software components used to manage the entire Rancher deployment.

The figure below illustrates the high-level architecture of Rancher 2.x. The figure depicts a Rancher Server installation that manages two downstream Kubernetes clusters: one created by RKE and another created by Amazon EKS (Elastic Kubernetes Service).

For the best performance and security, we recommend a dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads.

The diagram below shows how users can manipulate both [Rancher-launched Kubernetes]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) clusters and [hosted Kubernetes]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) clusters through Rancher's authentication proxy:

<figcaption>Managing Kubernetes Clusters through Rancher's Authentication Proxy</figcaption>

![Architecture]({{<baseurl>}}/img/rancher/rancher-architecture-rancher-api-server.svg)

You can install Rancher on a single node, or on a high-availability Kubernetes cluster.

A high-availability installation is recommended for production. A single-node installation may be used for development and testing purposes, but there is no migration path from a single-node to a high-availability installation. Therefore, you may want to use a high-availability installation from the start.

The Rancher server, regardless of the installation method, should always run on nodes that are separate from the downstream user clusters that it manages. If Rancher is installed on a high-availability Kubernetes cluster, it should run on a separate cluster from the cluster(s) it manages.
# Communicating with Downstream User Clusters

This section describes how Rancher provisions and manages the downstream user clusters that run your apps and services.

The diagram below shows how the cluster controllers, cluster agents, and node agents allow Rancher to control downstream clusters.

<figcaption>Communicating with Downstream Clusters</figcaption>

![Rancher Components]({{<baseurl>}}/img/rancher/rancher-architecture-cluster-controller.svg)

The following descriptions correspond to the numbers in the diagram above:

1. [The Authentication Proxy](#1-the-authentication-proxy)
2. [Cluster Controllers and Cluster Agents](#2-cluster-controllers-and-cluster-agents)
3. [Node Agents](#3-node-agents)
4. [Authorized Cluster Endpoint](#4-authorized-cluster-endpoint)
### 1. The Authentication Proxy

In this diagram, a user named Bob wants to see all pods running on a downstream user cluster called User Cluster 1. From within Rancher, he can run a `kubectl` command to see the pods. Bob is authenticated through Rancher's authentication proxy.

The authentication proxy forwards all Kubernetes API calls to downstream clusters. It integrates with authentication services like local authentication, Active Directory, and GitHub. On every Kubernetes API call, the authentication proxy authenticates the caller and sets the proper Kubernetes impersonation headers before forwarding the call to Kubernetes masters.

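
For context, Kubernetes impersonation is carried in request headers: the proxy authenticates with its own credentials and asks the downstream API server to act on behalf of the user. A forwarded request might carry headers like these (the user and group values are illustrative):

```http
GET /api/v1/pods HTTP/1.1
Host: user-cluster-1.example.com
Authorization: Bearer <rancher-service-account-token>
Impersonate-User: bob
Impersonate-Group: system:authenticated
```
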
Rancher communicates with Kubernetes clusters using a [service account,](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) which provides an identity for processes that run in a pod.

By default, Rancher generates a [kubeconfig file]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/kubeconfig/) that contains credentials for proxying through the Rancher server to connect to the Kubernetes API server on a downstream user cluster. The kubeconfig file (`kube_config_rancher-cluster.yml`) contains full access to the cluster.

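
As a usage sketch (assuming the generated file is in the current directory), pointing `kubectl` at this kubeconfig routes requests through the Rancher server:

```shell
# Query the downstream cluster through Rancher's authentication proxy.
export KUBECONFIG=$PWD/kube_config_rancher-cluster.yml
kubectl get nodes
kubectl get pods --all-namespaces
```
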
### 2. Cluster Controllers and Cluster Agents

Each downstream user cluster has a cluster agent, which opens a tunnel to the corresponding cluster controller within the Rancher server.

There is one cluster controller and one cluster agent for each downstream cluster. Each cluster controller:

- Watches for resource changes in the downstream cluster
- Brings the current state of the downstream cluster to the desired state
- Configures access control policies to clusters and projects
- Provisions clusters by calling the required Docker machine drivers and Kubernetes engines, such as RKE and GKE

By default, to enable Rancher to communicate with a downstream cluster, the cluster controller connects to the cluster agent. If the cluster agent is not available, the cluster controller can connect to a [node agent](#3-node-agents) instead.

The cluster agent, also called `cattle-cluster-agent`, is a component that runs in a downstream user cluster. It performs the following tasks:

- Connects to the Kubernetes API of Rancher-launched Kubernetes clusters
- Manages workloads, pod creation and deployment within each cluster
- Applies the roles and bindings defined in each cluster's global policies
- Communicates between the cluster and Rancher server (through a tunnel to the cluster controller) about events, stats, node info, and health
|
||||
|
||||
- Workload Management, such as pod creation and deployment within each cluster.
|
||||
### 3. Node Agents
|
||||
|
||||
- Application of the roles and bindings defined in each cluster's global policies.
|
||||
If the cluster agent (also called `cattle-cluster-agent`) is not available, one of the node agents creates a tunnel to the cluster controller to communicate with Rancher.
|
||||
|
||||
- Communication between clusters and Rancher Server: events, stats, node info, and health.
|
||||
The `cattle-node-agent` is deployed using a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) resource to make sure it runs on every node in a Rancher-launched Kubernetes cluster. It is used to interact with the nodes when performing cluster operations. Examples of cluster operations include upgrading the Kubernetes version and creating or restoring etcd snapshots.
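As a rough illustration of how a DaemonSet guarantees one agent per node, consider the sketch below. This is a hypothetical manifest, not Rancher's actual `cattle-node-agent` definition, which is generated by Rancher and contains more configuration; the image tag and labels are placeholders.

```yaml
# Hypothetical sketch of a node-agent DaemonSet; Rancher generates
# the real cattle-node-agent manifest, which differs in detail.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cattle-node-agent
  namespace: cattle-system
spec:
  selector:
    matchLabels:
      app: cattle-agent
  template:
    metadata:
      labels:
        app: cattle-agent
    spec:
      hostNetwork: true                     # the agent needs to reach the node itself
      containers:
        - name: agent
          image: rancher/rancher-agent:v2.x # placeholder tag
```

Because it is a DaemonSet, the scheduler places exactly one such pod on every node, which is what lets Rancher reach each node during operations like etcd snapshot restores.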

### 4. Authorized Cluster Endpoint

An authorized cluster endpoint allows users to connect to the Kubernetes API server of a downstream cluster without having to route their requests through the Rancher authentication proxy.

> The authorized cluster endpoint only works on Rancher-launched Kubernetes clusters. In other words, it only works in clusters where Rancher [used RKE]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/#tools-for-provisioning-kubernetes-clusters) to provision the cluster. It is not available for imported clusters, or for clusters in a hosted Kubernetes provider, such as Amazon's EKS.

There are two main reasons why a user might need the authorized cluster endpoint:

- To access a downstream user cluster while Rancher is down
- To reduce latency in situations where the Rancher server and downstream cluster are separated by a long distance

The `kube-api-auth` microservice is deployed to provide the user authentication functionality for the authorized cluster endpoint. When you access the user cluster using `kubectl`, the cluster's Kubernetes API server authenticates you by using the `kube-api-auth` service as a webhook.
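For background, Kubernetes webhook token authentication is configured by pointing the API server at a kubeconfig-format file that describes the webhook service. The sketch below is only illustrative — the address and path are placeholders, and RKE wires up the real equivalent for `kube-api-auth` automatically.

```yaml
# Hypothetical webhook authentication config (kubeconfig format);
# RKE configures the real kube-api-auth webhook automatically.
apiVersion: v1
kind: Config
clusters:
  - name: kube-api-auth
    cluster:
      server: "https://127.0.0.1:6440/v1/authenticate"  # placeholder address
users:
  - name: kube-apiserver
contexts:
  - name: webhook
    context:
      cluster: kube-api-auth
      user: kube-apiserver
current-context: webhook
```

On each request, the API server POSTs the bearer token to the webhook service, which replies with the authenticated user's identity or a rejection.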

Like the authorized cluster endpoint, the `kube-api-auth` authentication service is only available for Rancher-launched Kubernetes clusters.

> **Example scenario:** Let's say that the Rancher server is located in the United States, and User Cluster 1 is located in Australia. A user, Alice, also lives in Australia. Alice can manipulate resources in User Cluster 1 by using the Rancher UI, but her requests have to travel from Australia to the Rancher server in the United States, then be proxied back to Australia, where the downstream user cluster is. The geographical distance may cause significant latency. Alice can reduce this latency by using the authorized cluster endpoint.

With this endpoint enabled for the downstream cluster, Rancher generates an extra Kubernetes context in the kubeconfig file in order to connect directly to the cluster. This file has the credentials for `kubectl` and `helm`.
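To make the two paths concrete, a kubeconfig file for a cluster with the authorized cluster endpoint enabled contains contexts along the following lines. This shows the structure only; the cluster names, server addresses, and token below are invented placeholders, and the real file is generated by Rancher.

```yaml
# Illustrative structure only; all names and addresses are placeholders.
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster              # proxied through the Rancher server
    cluster:
      server: https://rancher.example.com/k8s/clusters/c-abc12
  - name: my-cluster-direct       # authorized cluster endpoint
    cluster:
      server: https://203.0.113.10:6443
contexts:
  - name: my-cluster
    context: {cluster: my-cluster, user: user-xyz}
  - name: my-cluster-direct
    context: {cluster: my-cluster-direct, user: user-xyz}
current-context: my-cluster
users:
  - name: user-xyz
    user:
      token: kubeconfig-user-xyz:placeholder-token
```

Switching is then a matter of `kubectl config use-context my-cluster-direct`, which keeps working even when the Rancher server itself is unreachable.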

If Rancher goes down, you will need to use a context defined in this kubeconfig file to access the cluster. Therefore, we recommend exporting the kubeconfig file so that you can still use its credentials if Rancher becomes unavailable. For more information, refer to the [kubeconfig file]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/kubeconfig) documentation.

# Important Files

The following files are needed to maintain, troubleshoot, and upgrade your cluster:

- `rancher-cluster.yml`: The RKE cluster configuration file.
- `kube_config_rancher-cluster.yml`: The kubeconfig file for the cluster. This file contains credentials for full access to the cluster, and you can use it to authenticate with a Rancher-launched Kubernetes cluster if Rancher goes down.
- `rancher-cluster.rkestate`: The Kubernetes cluster state file. This file contains credentials for full access to the cluster. Note: This state file is only created when using RKE v0.2.0 or higher.
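For orientation, a minimal `rancher-cluster.yml` might look like the following sketch. The node addresses and SSH user are placeholders, and a production file typically also sets SSH keys, the network plugin, and other options.

```yaml
# Minimal illustrative RKE cluster configuration (placeholder values).
nodes:
  - address: 203.0.113.21          # placeholder IP
    user: ubuntu                   # placeholder SSH user
    role: [controlplane, etcd, worker]
  - address: 203.0.113.22
    user: ubuntu
    role: [worker]
```

Running `rke up --config rancher-cluster.yml` consumes this file and writes the kubeconfig file and the `.rkestate` state file alongside it.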

For more information on connecting to a cluster without the Rancher authentication proxy and other configuration options, refer to the [kubeconfig file]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/kubeconfig/) documentation.

# Tools for Provisioning Kubernetes Clusters

The tools that Rancher uses to provision downstream user clusters depend on the type of cluster that is being provisioned.

### Rancher Launched Kubernetes for Nodes Hosted in an Infrastructure Provider

Rancher can dynamically provision nodes in a provider such as Amazon EC2, DigitalOcean, Azure, or vSphere, then install Kubernetes on them.

Rancher provisions this type of cluster using [RKE](https://github.com/rancher/rke) and [docker-machine.](https://github.com/rancher/machine)

### Rancher Launched Kubernetes for Custom Nodes

When setting up this type of cluster, Rancher installs Kubernetes on existing nodes, creating a custom cluster.

Rancher provisions this type of cluster using [RKE.](https://github.com/rancher/rke)

### Hosted Kubernetes Providers

When setting up this type of cluster, Kubernetes is installed by providers such as Google Kubernetes Engine, Amazon Elastic Container Service for Kubernetes, or Azure Kubernetes Service.

Rancher provisions this type of cluster using [kontainer-engine.](https://github.com/rancher/kontainer-engine)

### Imported Kubernetes Clusters

In this type of cluster, Rancher connects to a Kubernetes cluster that has already been set up. Therefore, Rancher does not provision Kubernetes, but only sets up the Rancher agents to communicate with the cluster.

# Rancher Server Components and Source Code

This diagram shows each component that the Rancher server is composed of:

![Rancher Components](https://github.com/rancher/docs/blob/master/static/img/rancher/rancher-architecture-rancher-components.svg)

The GitHub repositories for Rancher can be found at the following links:

- [Main Rancher server repository](https://github.com/rancher/rancher)
- [Rancher UI](https://github.com/rancher/ui)
- [Rancher API UI](https://github.com/rancher/api-ui)
- [Norman,](https://github.com/rancher/norman) Rancher's API framework
- [Types](https://github.com/rancher/types)
- [Rancher CLI](https://github.com/rancher/cli)
- [Catalog applications](https://github.com/rancher/helm)

This is a partial list of the most important Rancher repositories. For more details about Rancher source code, refer to the section on [contributing to Rancher.]({{<baseurl>}}/rancher/v2.x/en/contributing/#repositories) To see all libraries and projects used in Rancher, see the [`go.mod` file](https://github.com/rancher/rancher/blob/master/go.mod) in the `rancher/rancher` repository.

---
title: Kubernetes Concepts
weight: 4
---

This page explains concepts related to Kubernetes that are important for understanding how Rancher works. The descriptions below provide a simplified overview of Kubernetes components. For more details, refer to the [official documentation on Kubernetes components.](https://kubernetes.io/docs/concepts/overview/components/)

This section covers the following topics:

- [About Docker](#about-docker)
- [About Kubernetes](#about-kubernetes)
- [What is a Kubernetes Cluster?](#what-is-a-kubernetes-cluster)
- [Roles for Nodes in Kubernetes Clusters](#roles-for-nodes-in-kubernetes-clusters)
- [etcd Nodes](#etcd-nodes)
- [Controlplane Nodes](#controlplane-nodes)
- [Worker Nodes](#worker-nodes)
- [About Helm](#about-helm)

# About Docker

Docker is the container packaging and runtime standard. Developers build container images from Dockerfiles and distribute them through Docker registries. [Docker Hub](https://hub.docker.com) is the most popular public registry, and many organizations also set up private Docker registries. Docker is primarily used to manage containers on individual nodes.

>**Note:** Although Rancher 1.6 supported the Docker Swarm clustering technology, it is no longer supported in Rancher 2.x due to the success of Kubernetes.

# About Kubernetes

Kubernetes is the container cluster management standard. YAML files specify the containers and other resources that form an application. Kubernetes performs functions such as scheduling, scaling, service discovery, health checks, secret management, and configuration management.
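For example, a small YAML manifest like the one below is enough to tell Kubernetes to run, scale, and health-check an application. The names and image are arbitrary examples, not part of any Rancher setup.

```yaml
# Example declarative application definition (arbitrary names and image).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3                    # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.21      # arbitrary example image
          livenessProbe:         # health checking handled by the kubelet
            httpGet: {path: /, port: 80}
```

Applying this with `kubectl apply -f` describes the desired state; the cluster continuously reconciles toward it, rescheduling pods as needed.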

# What is a Kubernetes Cluster?

A cluster is a group of computers that work together as a single system.

A _Kubernetes Cluster_ is a cluster that uses the [Kubernetes container-orchestration system](https://kubernetes.io/) to deploy, maintain, and scale Docker containers, allowing your organization to automate application operations.

# Roles for Nodes in Kubernetes Clusters

Each computing resource in a Kubernetes cluster is called a _node_. Nodes can be either bare-metal servers or virtual machines. Kubernetes classifies nodes into three types: _etcd_ nodes, _control plane_ nodes, and _worker_ nodes.

A Kubernetes cluster consists of at least one etcd, controlplane, and worker node.

### etcd Nodes

Rancher uses etcd as a data store in both single node and high-availability installations. In Kubernetes, etcd is also a role for nodes that store the cluster state.

The state of a Kubernetes cluster is maintained in [etcd.](https://kubernetes.io/docs/concepts/overview/components/#etcd) The etcd nodes run the etcd database.

The etcd database component is a distributed key-value store used as Kubernetes storage for all cluster data, such as cluster coordination and state management. It is recommended to run etcd on multiple nodes so that there's always a backup available for failover.

Although you can run etcd on just one node, etcd requires a majority of nodes, a quorum, to agree on updates to the cluster state. The cluster should always contain enough healthy etcd nodes to form a quorum. For a cluster with n members, a quorum is (n/2)+1. For any odd-sized cluster, adding one node will always increase the number of nodes necessary for a quorum.
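The quorum arithmetic can be checked directly. The snippet below is plain shell, not a Rancher tool; it computes the quorum size and the number of node failures each cluster size can tolerate.

```shell
# quorum: smallest majority of n members, i.e. floor(n/2) + 1
# tolerance: how many members can fail while a quorum survives
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerance() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 1 2 3 4 5; do
  echo "members=$n quorum=$(quorum $n) tolerated_failures=$(tolerance $n)"
done
```

Note that growing from three members to four raises the quorum from 2 to 3 without improving fault tolerance, which is why odd cluster sizes are recommended.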

Three etcd nodes are generally sufficient for smaller clusters, and five for large clusters.

### Controlplane Nodes

Controlplane nodes run the Kubernetes API server, scheduler, and controller manager. These nodes take care of routine tasks to ensure that your cluster maintains your configuration. Because all cluster data is stored on your etcd nodes, control plane nodes are stateless. You can run the control plane on a single node, although two or more nodes are recommended for redundancy. Additionally, a single node can share the control plane and etcd roles.

### Worker Nodes

Each [worker node](https://kubernetes.io/docs/concepts/architecture/nodes/) runs the following:

- **Kubelet:** An agent that monitors the state of the node, ensuring your containers are healthy.
- **Workloads:** The containers and pods that hold your apps, as well as other types of deployments.

Worker nodes also run storage and networking drivers, and ingress controllers when required. You create as many worker nodes as necessary to run your [workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/).

# About Helm

For high-availability installations of Rancher, Helm is the tool used to install Rancher on a Kubernetes cluster.

Helm is the package management tool of choice for Kubernetes. Helm charts provide templating syntax for Kubernetes YAML manifest documents. With Helm we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at [https://helm.sh/](https://helm.sh).
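As a tiny illustration of that templating syntax, a template in a chart might parameterize a Deployment as shown below. The chart layout and the `.Values.*` keys are invented for this example; a real chart defines them in its own `values.yaml`.

```yaml
# templates/deployment.yaml — illustrative Helm template fragment.
# The .Values.* keys are example values a chart author might define.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

`helm install` renders templates like this against `values.yaml` (or `--set` overrides) before submitting the result to Kubernetes, which is what makes a single chart reusable across configurations.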

For more information on service accounts and cluster role binding, refer to the [Kubernetes documentation.](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)