diff --git a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/_index.md index 6062bdb0288..29a9583537c 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/_index.md @@ -8,27 +8,22 @@ If your organization uses Microsoft Active Directory Federation Services (AD FS) ## Prerequisites +You must have Rancher installed. -- You must have Rancher installed. - - - Obtain your Rancher Server URL. During AD FS configuration, substitute this URL for the `` placeholder. - - - You must have a global administrator account on your Rancher installation. - -- You must have a [Microsoft AD FS Server](https://docs.microsoft.com/en-us/windows-server/identity/active-directory-federation-services) configured. - - - Obtain your AD FS Server IP/DNS name. During AD FS configuration, substitute this IP/DNS name for the `` placeholder. - - - You must have access to add [Relying Party Trusts](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/create-a-relying-party-trust) on your AD FS Server. +- Obtain your Rancher Server URL. During AD FS configuration, substitute this URL for the `` placeholder. +- You must have a global administrator account on your Rancher installation. +You must have a [Microsoft AD FS Server](https://docs.microsoft.com/en-us/windows-server/identity/active-directory-federation-services) configured. +- Obtain your AD FS Server IP/DNS name. During AD FS configuration, substitute this IP/DNS name for the `` placeholder. +- You must have access to add [Relying Party Trusts](https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/create-a-relying-party-trust) on your AD FS Server. 
## Setup Outline Setting up Microsoft AD FS with Rancher Server requires configuring AD FS on your Active Directory server, and configuring Rancher to utilize your AD FS server. The following pages serve as guides for setting up Microsoft AD FS authentication on your Rancher installation. -- [1 — Configuring Microsoft AD FS for Rancher]({{}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup) -- [2 — Configuring Rancher for Microsoft AD FS]({{}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup) +- [1. Configuring Microsoft AD FS for Rancher]({{}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup) +- [2. Configuring Rancher for Microsoft AD FS]({{}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup) {{< saml_caveats >}} diff --git a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/_index.md index 152834ec60c..e71b192758c 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/_index.md @@ -1,5 +1,5 @@ --- -title: 1 — Configuring Microsoft AD FS for Rancher +title: 1. Configuring Microsoft AD FS for Rancher weight: 1205 --- @@ -25,12 +25,12 @@ Before configuring Rancher to support AD FS users, you must add Rancher as a [re 1. Leave the **optional token encryption certificate** empty, as Rancher AD FS will not be using one. - {{< img "/img/rancher/adfs/adfs-add-rpt-5.png" "">}} + {{< img "/img/rancher/adfs/adfs-add-rpt-5.png" "">}} 1. Select **Enable support for the SAML 2.0 WebSSO protocol** and enter `https:///v1-saml/adfs/saml/acs` for the service URL. 
- {{< img "/img/rancher/adfs/adfs-add-rpt-6.png" "">}} + {{< img "/img/rancher/adfs/adfs-add-rpt-6.png" "">}} 1. Add `https:///v1-saml/adfs/saml/metadata` as the **Relying party trust identifier**. diff --git a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md index 67706dddcf3..c6d45667d4c 100644 --- a/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md +++ b/content/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/rancher-adfs-setup/_index.md @@ -1,5 +1,5 @@ --- -title: 2 — Configuring Rancher for Microsoft AD FS +title: 2. Configuring Rancher for Microsoft AD FS weight: 1205 --- _Available as of v2.0.7_ @@ -17,22 +17,12 @@ After you complete [Configuring Microsoft AD FS for Rancher]({{}}/ranch 1. Select **Microsoft Active Directory Federation Services**. -1. Complete the **Configure AD FS Account** form. Microsoft AD FS lets you specify an existing Active Directory (AD) server. The examples below describe how you can map AD attributes to fields within Rancher. +1. Complete the **Configure AD FS Account** form. Microsoft AD FS lets you specify an existing Active Directory (AD) server. The [configuration section below](#configuration) describes how you can map AD attributes to fields within Rancher. - | Field | Description | - | ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | - | Display Name Field | The AD attribute that contains the display name of users.

Example: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name` | - | User Name Field | The AD attribute that contains the user name/given name.

Example: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname` | - | UID Field | An AD attribute that is unique to every user.

Example: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` | - | Groups Field | Make entries for managing group memberships.

Example: `http://schemas.xmlsoap.org/claims/Group` | - | Rancher API Host | The URL for your Rancher Server. | - | Private Key / Certificate | This is a key-certificate pair to create a secure shell between Rancher and your AD FS. Ensure you set the Common Name (CN) to your Rancher Server URL.

[Certificate creation command](#cert-command) | - | Metadata XML | The `federationmetadata.xml` file exported from your AD FS server.

You can find this file at `https:///federationmetadata/2007-06/federationmetadata.xml`. | - + + - >**Tip:** You can generate a certificate using an openssl command. For example: - > - > openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com" + @@ -43,3 +33,24 @@ After you complete [Configuring Microsoft AD FS for Rancher]({{}}/ranch >**Note:** You may have to disable your popup blocker to see the AD FS login page. **Result:** Rancher is configured to work with MS FS. Your users can now sign into Rancher using their MS FS logins. + +# Configuration + +| Field | Description | +|---------------------------|-----------------| +| Display Name Field | The AD attribute that contains the display name of users.

Example: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name` | +| User Name Field | The AD attribute that contains the user name/given name.

Example: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname` | +| UID Field | An AD attribute that is unique to every user.

Example: `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` | +| Groups Field | Make entries for managing group memberships.

Example: `http://schemas.xmlsoap.org/claims/Group` | +| Rancher API Host | The URL for your Rancher Server. | +| Private Key / Certificate | This is a key-certificate pair used to create a secure connection between Rancher and your AD FS. Ensure you set the Common Name (CN) to your Rancher Server URL.

[Certificate creation command](#cert-command) | +| Metadata XML | The `federationmetadata.xml` file exported from your AD FS server.

You can find this file at `https:///federationmetadata/2007-06/federationmetadata.xml`. | + + + + +**Tip:** You can generate a certificate using an openssl command. For example: + +``` +openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com" +``` \ No newline at end of file diff --git a/content/rancher/v2.x/en/admin-settings/drivers/_index.md b/content/rancher/v2.x/en/admin-settings/drivers/_index.md index 11cc9d71582..30b8d47acec 100644 --- a/content/rancher/v2.x/en/admin-settings/drivers/_index.md +++ b/content/rancher/v2.x/en/admin-settings/drivers/_index.md @@ -14,7 +14,7 @@ There are two types of drivers within Rancher: * [Cluster Drivers](#cluster-drivers) * [Node Drivers](#node-drivers) -## Cluster Drivers +### Cluster Drivers _Available as of v2.2.0_ @@ -32,7 +32,7 @@ There are several other hosted Kubernetes cloud providers that are disabled by d * [Huawei CCE]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce/) * [Tencent]({{}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke/) -## Node Drivers +### Node Drivers Node drivers are used to provision hosts, which Rancher uses to launch and manage Kubernetes clusters. A node driver is the same as a [Docker Machine driver](https://docs.docker.com/machine/drivers/). The availability of which node driver to display when creating node templates is defined based on the node driver's status. Only `active` node drivers will be displayed as an option for creating node templates. By default, Rancher is packaged with many existing Docker Machine drivers, but you can also create custom node drivers to add to Rancher. 
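The certificate tip for the **Private Key / Certificate** field above can be sanity-checked locally before you upload the pair to Rancher. This sketch reuses the example file names and CN from the tip (placeholder values, not real ones):

```shell
# Generate the example key/certificate pair non-interactively, as in the tip above.
# "myservice.example.com" is a placeholder CN; in practice, use your Rancher Server URL.
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert \
  -days 365 -nodes -subj "/CN=myservice.example.com"

# Print the certificate subject so you can confirm the CN matches
# before pasting the pair into the Rancher form.
openssl x509 -in myservice.cert -noout -subject
```

The printed subject line should show the CN you passed with `-subj`; if it does not match your Rancher Server URL, regenerate the pair before configuring Rancher.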
diff --git a/content/rancher/v2.x/en/admin-settings/pod-security-policies/_index.md b/content/rancher/v2.x/en/admin-settings/pod-security-policies/_index.md index 12616772261..248ad1a58d9 100644 --- a/content/rancher/v2.x/en/admin-settings/pod-security-policies/_index.md +++ b/content/rancher/v2.x/en/admin-settings/pod-security-policies/_index.md @@ -7,44 +7,65 @@ aliases: - /rancher/v2.x/en/tasks/clusters/adding-a-pod-security-policy/ --- -_Pod Security Policies_ (or PSPs) are objects that control security-sensitive aspects of pod specification (like root privileges). If a pod does not meet the conditions specified in the PSP, Kubernetes will not allow it to start, and Rancher will display an error message of `Pod is forbidden: unable to validate...`. +_Pod Security Policies_ (or PSPs) are objects that control security-sensitive aspects of pod specification (like root privileges). -> **Note:** Assigning Pod Security Policies are only available for clusters that are [launched using RKE.]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) +If a pod does not meet the conditions specified in the PSP, Kubernetes will not allow it to start, and Rancher will display an error message of `Pod is forbidden: unable to validate...`. -- You can assign PSPs at the cluster or project level. -- PSPs work through inheritance. +- [How PSPs Work](#how-psps-work) +- [Default PSPs](#default-psps) + - [Restricted](#restricted) + - [Unrestricted](#unrestricted) +- [Creating PSPs](#creating-psps) + - [Requirements](#requirements) + - [Creating PSPs in the Rancher UI](#creating-psps-in-the-rancher-ui) +- [Configuration](#configuration) - - By default, PSPs assigned to a cluster are inherited by its projects, as well as any namespaces added to those projects. - - **Exception:** Namespaces that are not assigned to projects do not inherit PSPs, regardless of whether the PSP is assigned to a cluster or project. 
Because these namespaces have no PSPs, workload deployments to these namespaces will fail, which is the default Kubernetes behavior. - - You can override the default PSP by assigning a different PSP directly to the project. -- Any workloads that are already running in a cluster or project before a PSP is assigned will not be checked if it complies with the PSP. Workloads would need to be cloned or upgraded to see if they pass the PSP. +# How PSPs Work ->**Note:** You must enable PSPs at the cluster level before you can assign them to a project. This can be configured by [editing the cluster.]({{}}/rancher/v2.x/en/cluster-admin/editing-clusters/) +You can assign PSPs at the cluster or project level. + +PSPs work through inheritance: + +- By default, PSPs assigned to a cluster are inherited by its projects, as well as any namespaces added to those projects. +- **Exception:** Namespaces that are not assigned to projects do not inherit PSPs, regardless of whether the PSP is assigned to a cluster or project. Because these namespaces have no PSPs, workload deployments to these namespaces will fail, which is the default Kubernetes behavior. +- You can override the default PSP by assigning a different PSP directly to the project. + +Any workloads that are already running in a cluster or project before a PSP is assigned will not be checked for compliance with the PSP. Workloads must be cloned or upgraded to check whether they pass the PSP. Read more about Pod Security Policies in the [Kubernetes Documentation](https://kubernetes.io/docs/concepts/policy/pod-security-policy/). ->**Best Practice:** Set pod security at the cluster level. - -Using Rancher, you can create a Pod Security Policy using our GUI rather than creating a YAML file. - -## Default Pod Security Policies +# Default PSPs _Available as of v2.0.7_ Rancher ships with two default Pod Security Policies (PSPs): the `restricted` and `unrestricted` policies.
-- `restricted` +### Restricted - This policy is based on the Kubernetes [example restricted policy](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/restricted-psp.yaml). It significantly restricts what types of pods can be deployed to a cluster or project. This policy: - - - Prevents pods from running as a privileged user and prevents escalation of privileges. - - Validates that server-required security mechanisms are in place (such as restricting what volumes can be mounted to only the core volume types and preventing root supplemental groups from being added). +This policy is based on the Kubernetes [example restricted policy](https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/policy/restricted-psp.yaml). It significantly restricts what types of pods can be deployed to a cluster or project. This policy: -- `unrestricted` +- Prevents pods from running as a privileged user and prevents escalation of privileges. +- Validates that server-required security mechanisms are in place (such as restricting what volumes can be mounted to only the core volume types and preventing root supplemental groups from being added). - This policy is equivalent to running Kubernetes with the PSP controller disabled. It has no restrictions on what pods can be deployed into a cluster or project. +### Unrestricted -## Creating Pod Security Policies +This policy is equivalent to running Kubernetes with the PSP controller disabled. It has no restrictions on what pods can be deployed into a cluster or project. + +# Creating PSPs + +Using Rancher, you can create a Pod Security Policy using our GUI rather than creating a YAML file. + +### Requirements + +Rancher can only assign PSPs for clusters that are [launched using RKE.]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) + +You must enable PSPs at the cluster level before you can assign them to a project.
This can be configured by [editing the cluster.]({{}}/rancher/v2.x/en/cluster-admin/editing-clusters/) + +It is a best practice to set PSPs at the cluster level. + +We recommend adding PSPs during cluster and project creation instead of adding them to existing ones. + +### Creating PSPs in the Rancher UI 1. From the **Global** view, select **Security** > **Pod Security Policies** from the main menu. Then click **Add Policy**. @@ -52,33 +73,13 @@ Rancher ships with two default Pod Security Policies (PSPs): the `restricted` an 2. Name the policy. -3. Complete each section of the form. Refer to the Kubernetes documentation linked below for more information on what each policy does. +3. Complete each section of the form. Refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) for more information on what each policy does. - - Basic Policies: - - [Privilege Escalation](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privilege-escalation) - - [Host Namespaces][2] - - [Read Only Root Filesystems][1] +# Configuration - - [Capability Policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#capabilities) - - [Volume Policy][1] - - [Allowed Host Paths Policy][1] - - [FS Group Policy][1] - - [Host Ports Policy][2] - - [Run As User Policy][3] - - [SELinux Policy](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#selinux) - - [Supplemental Groups Policy][3] +The Kubernetes documentation on PSPs is [here.](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) -### What's Next?
- -You can add a Pod Security Policy (PSPs hereafter) in the following contexts: - -- [When creating a cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/pod-security-policies/) -- [When editing an existing cluster]({{}}/rancher/v2.x/en/k8s-in-rancher/editing-clusters/) -- [When creating a project]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#creating-a-project/) -- [When editing an existing project]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/editing-projects/) - -> **Note:** We recommend adding PSPs during cluster and project creation instead of adding it to an existing one. diff --git a/content/rancher/v2.x/en/backups/v2.0.x-v2.4.x/backup/rke-backups/_index.md b/content/rancher/v2.x/en/backups/v2.0.x-v2.4.x/backup/rke-backups/_index.md index 7a660a3e2bf..fbb21303ca5 100644 --- a/content/rancher/v2.x/en/backups/v2.0.x-v2.4.x/backup/rke-backups/_index.md +++ b/content/rancher/v2.x/en/backups/v2.0.x-v2.4.x/backup/rke-backups/_index.md @@ -49,25 +49,27 @@ Take snapshots of your `etcd` database. You can use these snapshots later to rec - [Option A: Recurring Snapshots](#option-a-recurring-snapshots) - After you stand up a high-availability Rancher install, we recommend configuring RKE to automatically take recurring snapshots so that you always have a safe restoration point available. + After you stand up a high-availability Rancher install, we recommend configuring RKE to automatically take recurring snapshots so that you always have a safe restore point available. - [Option B: One-Time Snapshots](#option-b-one-time-snapshots) - We advise taking one-time snapshots before events like upgrades or restoration of another snapshot. + We advise taking one-time snapshots before events like upgrades or restore of another snapshot. ### Option A: Recurring Snapshots -For all high-availability Rancher installs, we recommend taking recurring snapshots so that you always have a safe restoration point available. 
+For all high-availability Rancher installs, we recommend taking recurring snapshots so that you always have a safe restore point available. To take recurring snapshots, enable the `etcd-snapshot` service, which is a service that's included with RKE. This service runs in a service container alongside the `etcd` container. You can enable this service by adding some code to `rancher-cluster.yml`. **To Enable Recurring Snapshots:** +The steps to enable recurring snapshots differ based on the version of RKE. + +{{% tabs %}} +{{% tab "RKE v0.2.0+" %}} + 1. Open `rancher-cluster.yml` with your favorite text editor. - -2. Edit the code for the `etcd` service to enable recurring snapshots. As of RKE v0.2.0, snapshots can be saved in a S3 compatible backend. - - _Using RKE v0.2.0+_ +2. Edit the code for the `etcd` service to enable recurring snapshots. Snapshots can be saved in an S3-compatible backend. ``` services: @@ -89,8 +91,19 @@ To take recurring snapshots, enable the `etcd-snapshot` service, which is a serv $CERTIFICATE -----END CERTIFICATE----- ``` +4. Save and close `rancher-cluster.yml`. +5. Open **Terminal** and change directory to the location of the RKE binary. Your `rancher-cluster.yml` file must reside in the same directory. +6. Run the following command: + ``` + rke up --config rancher-cluster.yml + ``` - _Using RKE v0.1.x_ +**Result:** RKE is configured to take recurring snapshots of `etcd` on all nodes running the `etcd` role. Snapshots are saved locally to the following directory: `/opt/rke/etcd-snapshots/`. If configured, the snapshots are also uploaded to your S3-compatible backend. +{{% /tab %}} +{{% tab "RKE v0.1.x" %}} + +1. Open `rancher-cluster.yml` with your favorite text editor. +2. Edit the code for the `etcd` service to enable recurring snapshots.
``` services: @@ -99,16 +112,17 @@ To take recurring snapshots, enable the `etcd-snapshot` service, which is a serv creation: 6h0s # time increment between snapshots retention: 24h # time increment before snapshot purge ``` - 4. Save and close `rancher-cluster.yml`. 5. Open **Terminal** and change directory to the location of the RKE binary. Your `rancher-cluster.yml` file must reside in the same directory. 6. Run the following command: - ``` rke up --config rancher-cluster.yml ``` -**Result:** RKE is configured to take recurring snapshots of `etcd` on all nodes running the `etcd` role. Snapshots are saved locally to the following directory: `/opt/rke/etcd-snapshots/`. If configured, the snapshots are also uploaded to your S3 compatible backend. +**Result:** RKE is configured to take recurring snapshots of `etcd` on all nodes running the `etcd` role. Snapshots are saved locally to the following directory: `/opt/rke/etcd-snapshots/`. +{{% /tab %}} +{{% /tabs %}} + ### Option B: One-Time Snapshots diff --git a/content/rancher/v2.x/en/backups/v2.5/_index.md b/content/rancher/v2.x/en/backups/v2.5/_index.md index 12edfac9015..a20fe5e4f83 100644 --- a/content/rancher/v2.x/en/backups/v2.5/_index.md +++ b/content/rancher/v2.x/en/backups/v2.5/_index.md @@ -19,6 +19,7 @@ The Rancher version must be v2.5.0 and up to use this approach of backing up and - [Installing the rancher-backup Operator](#installing-the-rancher-backup-operator) - [Installing rancher-backup with the Rancher UI](#installing-rancher-backup-with-the-rancher-ui) - [Installing rancher-backup with the Helm CLI](#installing-rancher-backup-with-the-helm-cli) + - [RBAC](#rbac) - [Backing up Rancher](#backing-up-rancher) - [Restoring Rancher](#restoring-rancher) - [Migrating Rancher to a New Cluster](#migrating-rancher-to-a-new-cluster) @@ -95,11 +96,12 @@ helm install rancher-backup rancher-charts/rancher-backup -n cattle-resources-sy ### RBAC -Only the rancher admins, and local cluster’s cluster-owner can: +Only 
Rancher admins and the local cluster’s cluster-owner can: * Install the Chart * See the navigation links for Backup and Restore CRDs -* Perform a backup or restore by creating a Backup CR and Restore CR respectively, list backups/restores performed so far +* Perform a backup or restore by creating a Backup CR and Restore CR respectively +* List backups/restores performed so far # Backing up Rancher diff --git a/content/rancher/v2.x/en/best-practices/v2.0-v2.4/management/_index.md b/content/rancher/v2.x/en/best-practices/v2.0-v2.4/management/_index.md index 4a500287193..f6c12740d4c 100644 --- a/content/rancher/v2.x/en/best-practices/v2.0-v2.4/management/_index.md +++ b/content/rancher/v2.x/en/best-practices/v2.0-v2.4/management/_index.md @@ -7,6 +7,14 @@ aliases: Rancher allows you to set up numerous combinations of configurations. Some configurations are more appropriate for development and testing, while there are other best practices for production environments for maximum availability and fault tolerance. The following best practices should be followed for production. +- [Tips for Preventing and Handling Problems](#tips-for-preventing-and-handling-problems) +- [Network Topology](#network-topology) +- [Tips for Scaling and Reliability](#tips-for-scaling-and-reliability) +- [Tips for Security](#tips-for-security) +- [Tips for Multi-Tenant Clusters](#tips-for-multi-tenant-clusters) +- [Class of Service and Kubernetes Clusters](#class-of-service-and-kubernetes-clusters) +- [Network Security](#network-security) + # Tips for Preventing and Handling Problems These tips can help you solve problems before they happen.
diff --git a/content/rancher/v2.x/en/cluster-admin/backing-up-etcd/_index.md b/content/rancher/v2.x/en/cluster-admin/backing-up-etcd/_index.md index 438b976bc8b..c517623ec3d 100644 --- a/content/rancher/v2.x/en/cluster-admin/backing-up-etcd/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/backing-up-etcd/_index.md @@ -42,7 +42,7 @@ Because the Kubernetes version is now included in the snapshot, it is possible t The multiple components of the snapshot allow you to select from the following options if you need to restore a cluster from a snapshot: -- **Restore just the etcd contents:** This restoration is similar to restoring to snapshots in Rancher prior to v2.4.0. +- **Restore just the etcd contents:** This restore is similar to how snapshots were restored in Rancher prior to v2.4.0. - **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes. - **Restore etcd, Kubernetes versions and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading. diff --git a/content/rancher/v2.x/en/cluster-admin/cluster-access/_index.md b/content/rancher/v2.x/en/cluster-admin/cluster-access/_index.md index 1e530ae86cf..519679de5e4 100644 --- a/content/rancher/v2.x/en/cluster-admin/cluster-access/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/cluster-access/_index.md @@ -3,32 +3,30 @@ title: Cluster Access weight: 1 --- -There are many ways you can interact with Kubernetes clusters that are managed by Rancher: +This section describes the tools you can use to access clusters managed by Rancher.
-- **Rancher UI** +For information on how to give users permission to access a cluster, see the section on [adding users to clusters.]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/cluster-members/) - Rancher provides an intuitive user interface for interacting with your clusters. All options available in the UI use the Rancher API. Therefore any action possible in the UI is also possible in the Rancher CLI or Rancher API. +For more information on roles-based access control, see [this section.]({{}}/rancher/v2.x/en/admin-settings/rbac/) -- **kubectl** +For information on how to set up an authentication system, see [this section.]({{}}/rancher/v2.x/en/admin-settings/authentication/) - You can use the Kubernetes command-line tool, [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), to manage your clusters. You have two options for using kubectl: - - **Rancher kubectl shell** +### Rancher UI - Interact with your clusters by launching a kubectl shell available in the Rancher UI. This option requires no configuration actions on your part. +Rancher provides an intuitive user interface for interacting with your clusters. All options available in the UI use the Rancher API. Therefore any action possible in the UI is also possible in the Rancher CLI or Rancher API. - For more information, see [Accessing Clusters with kubectl Shell]({{}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-shell). +### kubectl - - **Terminal remote connection** +You can use the Kubernetes command-line tool, [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), to manage your clusters. You have two options for using kubectl: - You can also interact with your clusters by installing [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your local desktop and then copying the cluster's kubeconfig file to your local `~/.kube/config` directory. 
+- **Rancher kubectl shell:** Interact with your clusters by launching a kubectl shell available in the Rancher UI. This option requires no configuration actions on your part. For more information, see [Accessing Clusters with kubectl Shell]({{}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-shell). +- **Terminal remote connection:** You can also interact with your clusters by installing [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) on your local desktop and then copying the cluster's kubeconfig file to your local `~/.kube/config` directory. For more information, see [Accessing Clusters with kubectl and a kubeconfig File]({{}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-and-a-kubeconfig-file). - For more information, see [Accessing Clusters with kubectl and a kubeconfig File]({{}}/rancher/v2.x/en/k8s-in-rancher/kubectl/#accessing-clusters-with-kubectl-and-a-kubeconfig-file). +### Rancher CLI -- **Rancher CLI** +You can control your clusters by downloading Rancher's own command-line interface, [Rancher CLI]({{}}/rancher/v2.x/en/cli/). This CLI tool can interact directly with different clusters and projects or pass them `kubectl` commands. - You can control your clusters by downloading Rancher's own command-line interface, [Rancher CLI]({{}}/rancher/v2.x/en/cli/). This CLI tool can interact directly with different clusters and projects or pass them `kubectl` commands. +### Rancher API -- **Rancher API** - - Finally, you can interact with your clusters over the Rancher API. Before you use the API, you must obtain an [API key]({{}}/rancher/v2.x/en/user-settings/api-keys/). To view the different resource fields and actions for an API object, open the API UI, which can be accessed by clicking on **View in API** for any Rancher UI object. \ No newline at end of file +Finally, you can interact with your clusters over the Rancher API. 
Before you use the API, you must obtain an [API key]({{}}/rancher/v2.x/en/user-settings/api-keys/). To view the different resource fields and actions for an API object, open the API UI, which can be accessed by clicking on **View in API** for any Rancher UI object. \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-admin/editing-clusters/_index.md b/content/rancher/v2.x/en/cluster-admin/editing-clusters/_index.md index 3a1303e4db2..d95af0109e0 100644 --- a/content/rancher/v2.x/en/cluster-admin/editing-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/editing-clusters/_index.md @@ -3,10 +3,16 @@ title: Cluster Configuration weight: 2025 --- -After you provision a Kubernetes cluster using Rancher, you can still edit options and settings for the cluster. To edit your cluster, open the **Global** view, make sure the **Clusters** tab is selected, and then select **⋮ > Edit** for the cluster that you want to edit. +After you provision a Kubernetes cluster using Rancher, you can still edit options and settings for the cluster. -To Edit an Existing Cluster -![Edit Cluster]({{}}/img/rancher/edit-cluster.png) +For information on editing cluster membership, go to [this page.]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/cluster-members) + +- [Cluster Management Capabilities by Cluster Type](#cluster-management-capabilities-by-cluster-type) +- [Editing Clusters in the Rancher UI](#editing-clusters-in-the-rancher-ui) +- [Editing Clusters with YAML](#editing-clusters-with-yaml) +- [Updating ingress-nginx](#updating-ingress-nginx) + +### Cluster Management Capabilities by Cluster Type The options and settings available for an existing cluster change based on the method that you used to provision it. For example, only clusters [provisioned by RKE]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) have **Cluster Options** available for editing. 
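The kubeconfig-based access described in the Cluster Access section above can be sketched as a short terminal session. The file path below is an example, not a Rancher default, and the commands assume a reachable cluster, so this is an illustration rather than a runnable recipe:

```
# Point kubectl at a kubeconfig file downloaded from the Rancher UI
# (example path; copy the file wherever you prefer)
export KUBECONFIG="$HOME/.kube/my-cluster.yaml"

# Any kubectl command now targets the Rancher-managed cluster
kubectl get nodes
kubectl config current-context
```

Alternatively, copy the file to the default `~/.kube/config` location so that kubectl finds it without setting `KUBECONFIG`.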
@@ -14,29 +20,13 @@ The following table summarizes the options and settings available for each clust {{% include file="/rancher/v2.x/en/cluster-provisioning/cluster-capabilities-table" %}} -## Editing Cluster Membership +### Editing Clusters in the Rancher UI -Cluster administrators can [edit the membership for a cluster,]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/cluster-members) controlling which Rancher users can access the cluster and what features they can use. - -## Cluster Options - -When editing clusters, clusters that are [launched using RKE]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) feature more options than clusters that are imported or hosted by a Kubernetes provider. The headings that follow document options available only for RKE clusters. - -### Updating ingress-nginx - -Clusters that were created before Kubernetes 1.16 will have an `ingress-nginx` `updateStrategy` of `OnDelete`. Clusters that were created with Kubernetes 1.16 or newer will have `RollingUpdate`. - -If the `updateStrategy` of `ingress-nginx` is `OnDelete`, you will need to delete these pods to get the correct version for your deployment. - -# Editing Other Cluster Options +To edit your cluster, open the **Global** view, make sure the **Clusters** tab is selected, and then select **⋮ > Edit** for the cluster that you want to edit. In [clusters launched by RKE]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), you can edit any of the remaining options that follow. ->**Note:** These options are not available for imported clusters or hosted Kubernetes clusters. - -Options for RKE Clusters -![Cluster Options]({{}}/img/rancher/cluster-options.png) - +Note that these options are not available for imported clusters or hosted Kubernetes clusters. Option | Description | ---------|----------| @@ -50,19 +40,29 @@ Option | Description | Docker Root Directory | The directory on your cluster nodes where you've installed Docker. 
If you install Docker on your nodes to a non-default directory, update this path. | Default Pod Security Policy | If you enable **Pod Security Policy Support**, use this drop-down to choose the pod security policy that's applied to the cluster. | Cloud Provider | If you're using a cloud provider to host cluster nodes launched by RKE, enable [this option]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) so that you can use the cloud provider's native features. If you want to store persistent data for your cloud-hosted cluster, this option is required. | -
-# Editing Cluster as YAML - ->**Note:** In Rancher v2.0.5 and v2.0.6, the names of services in the Config File (YAML) should contain underscores only: `kube_api` and `kube_controller`. +### Editing Clusters with YAML Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML. - To edit an RKE config file directly from the Rancher UI, click **Edit as YAML**. - To read from an existing RKE file, click **Read from File**. -In Rancher v2.0.0-v2.2.x, the config file is identical to the [cluster config file for the Rancher Kubernetes Engine]({{}}/rke/latest/en/config-options/), which is the tool Rancher uses to provision clusters. In Rancher v2.3.0, the RKE information is still included in the config file, but it is separated from other options, so that the RKE cluster config options are nested under the `rancher_kubernetes_engine_config` directive. For more information, see the [cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) - ![image]({{}}/img/rancher/cluster-options-yaml.png) -For an example of RKE config file syntax, see the [RKE documentation]({{}}/rke/latest/en/example-yamls/). +For an example of RKE config file syntax, see the [RKE documentation]({{}}/rke/latest/en/example-yamls/). + +For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.]({{}}/rke/latest/en/config-options/) + +In Rancher v2.0.0-v2.2.x, the config file is identical to the [cluster config file for the Rancher Kubernetes Engine]({{}}/rke/latest/en/config-options/), which is the tool Rancher uses to provision clusters. 
In Rancher v2.3.0, the RKE information is still included in the config file, but it is separated from other options, so that the RKE cluster config options are nested under the `rancher_kubernetes_engine_config` directive. For more information, see the [cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) + +>**Note:** In Rancher v2.0.5 and v2.0.6, the names of services in the Config File (YAML) should contain underscores only: `kube_api` and `kube_controller`. + +### Updating ingress-nginx + +Clusters that were created before Kubernetes 1.16 will have an `ingress-nginx` `updateStrategy` of `OnDelete`. Clusters that were created with Kubernetes 1.16 or newer will have `RollingUpdate`. + +If the `updateStrategy` of `ingress-nginx` is `OnDelete`, you will need to delete these pods to get the correct version for your deployment. \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-admin/restoring-etcd/_index.md b/content/rancher/v2.x/en/cluster-admin/restoring-etcd/_index.md index b85e59a3ec3..1db7ba0c9cc 100644 --- a/content/rancher/v2.x/en/cluster-admin/restoring-etcd/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/restoring-etcd/_index.md @@ -30,14 +30,14 @@ The list of all available snapshots for the cluster is available. If your Kubernetes cluster is broken, you can restore the cluster from a snapshot. -Restorations changed in Rancher v2.4.0. +Restores changed in Rancher v2.4.0. {{% tabs %}} {{% tab "Rancher v2.4.0+" %}} Snapshots are composed of the cluster data in etcd, the Kubernetes version, and the cluster configuration in the `cluster.yml.` These components allow you to select from the following options when restoring a cluster from a snapshot: -- **Restore just the etcd contents:** This restoration is similar to restoring to snapshots in Rancher prior to v2.4.0. +- **Restore just the etcd contents:** This restore is similar to restoring to snapshots in Rancher prior to v2.4.0.
- **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes. - **Restore etcd, Kubernetes versions and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading. @@ -51,7 +51,7 @@ When rolling back to a prior Kubernetes version, the [upgrade strategy options]( 3. Select the snapshot that you want to use for restoring your cluster from the dropdown of available snapshots. -4. In the **Restoration Type** field, choose one of the restoration options described above. +4. In the **Restoration Type** field, choose one of the restore options described above. 5. Click **Save**. diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/_index.md index 2fc9d2799df..5bb96aa36b5 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/_index.md @@ -10,5 +10,6 @@ Rancher supports persistent storage with a variety of volume plugins. 
However, b For your convenience, Rancher offers documentation on how to configure some of the popular storage methods: -- [NFS]({{}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/examples/nfs/) -- [vSphere]({{}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/examples/vsphere/) +- [NFS](./nfs) +- [vSphere](./vsphere) +- [EBS](./ebs) diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/_index.md index 0750143fe22..41437462a9f 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/_index.md @@ -7,11 +7,19 @@ aliases: To provide stateful workloads with vSphere storage, we recommend creating a vSphereVolume [storage class]({{}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#storage-classes). This practice dynamically provisions vSphere storage when workloads request volumes through a [persistent volume claim]({{}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/persistent-volume-claims/). 
+In order to dynamically provision storage in vSphere, the vSphere provider must be [enabled.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere) + +- [Prerequisites](#prerequisites) +- [Creating a StorageClass](#creating-a-storageclass) +- [Creating a Workload with a vSphere Volume](#creating-a-workload-with-a-vsphere-volume) +- [Verifying Persistence of the Volume](#verifying-persistence-of-the-volume) +- [Why to Use StatefulSets Instead of Deployments](#why-to-use-statefulsets-instead-of-deployments) + ### Prerequisites In order to provision vSphere volumes in a cluster created with the [Rancher Kubernetes Engine (RKE)]({{< baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), the [vSphere cloud provider]({{}}/rke/latest/en/config-options/cloud-providers/vsphere) must be explicitly enabled in the [cluster options]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/). -### Creating A Storage Class +### Creating a StorageClass > **Note:** > @@ -56,7 +64,7 @@ In order to provision vSphere volumes in a cluster created with the [Rancher Kub ![workload-persistent-data]({{}}/img/rancher/workload-persistent-data.png) -## Why to Use StatefulSets Instead of Deployments +### Why to Use StatefulSets Instead of Deployments You should always use [StatefulSets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) for workloads consuming vSphere storage, as this resource type is designed to address a VMDK block storage caveat. @@ -64,7 +72,7 @@ Since vSphere volumes are backed by VMDK block storage, they only support an [ac Even using a deployment resource with just a single replica may result in a deadlock situation while updating the deployment. If the updated pod is scheduled to a node different from where the existing pod lives, it will fail to start because the VMDK is still attached to the other node. 
-## Related Links +### Related Links - [vSphere Storage for Kubernetes](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) - [Kubernetes Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) diff --git a/content/rancher/v2.x/en/cluster-provisioning/_index.md b/content/rancher/v2.x/en/cluster-provisioning/_index.md index 1fc0f8509d3..e4c8f1ce492 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/_index.md @@ -69,7 +69,7 @@ When setting up this type of cluster, Rancher installs Kubernetes on existing [c You can bring any nodes you want to Rancher and use them to create a cluster. -These nodes include on-premise bare metal servers, cloud-hosted virtual machines, or on-premise virtual machines. +These nodes include on-prem bare metal servers, cloud-hosted virtual machines, or on-prem virtual machines. # Importing Existing Clusters diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md index 5a84152ebef..5bcbab42fde 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/_index.md @@ -7,7 +7,7 @@ In this scenario, Rancher does not provision Kubernetes because it is installed If you use a Kubernetes provider such as Google GKE, Rancher integrates with its cloud APIs, allowing you to create and manage role-based access control for the hosted cluster from the Rancher UI. -In this use case, Rancher sends a request to a hosted provider using the provider's API. The provider then provisions and hosts the cluster for you. When the cluster finishes building, you can manage it from the Rancher UI along with clusters you've provisioned that are hosted on-premise or in an infrastructure provider. 
+In this use case, Rancher sends a request to a hosted provider using the provider's API. The provider then provisions and hosts the cluster for you. When the cluster finishes building, you can manage it from the Rancher UI along with clusters you've provisioned that are hosted on-prem or in an infrastructure provider. Rancher supports the following Kubernetes providers: diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack/_index.md index 32d75c76a00..75edd05e4a6 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/ack/_index.md @@ -33,7 +33,7 @@ You can use Rancher to create a cluster hosted in Alibaba Cloud Kubernetes (ACK) 1. Enter a **Cluster Name**. -1. {{< step_create-cluster_member-roles >}} +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. 1. Configure **Account Access** for the ACK cluster. Choose the geographical region in which to build your cluster, and input the access key that was created as part of the prerequisite steps. @@ -45,4 +45,13 @@ You can use Rancher to create a cluster hosted in Alibaba Cloud Kubernetes (ACK) 1. Review your options to confirm they're correct. Then click **Create**. -{{< result_create-cluster >}} +**Result:** + +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. 
+ +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/aks/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/aks/_index.md index fa5d692d15e..666b03b74ac 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/aks/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/aks/_index.md @@ -120,14 +120,14 @@ Use Rancher to set up and configure your Kubernetes cluster. 1. Enter a **Cluster Name**. -1. {{< step_create-cluster_member-roles >}} +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. 1. Use your subscription ID, tenant ID, app ID, and client secret to give your cluster access to AKS. If you don't have all of that information, you can retrieve it using these instructions: - **App ID and tenant ID:** To get the app ID and tenant ID, you can go to the Azure Portal, then click **Azure Active Directory**, then click **App registrations,** then click the name of the service principal. The app ID and tenant ID are both on the app registration detail page. - **Client secret:** If you didn't copy the client secret when creating the service principal, you can get a new one if you go to the app registration detail page, then click **Certificates & secrets**, then click **New client secret.** - **Subscription ID:** You can get the subscription ID is available in the portal from **All services > Subscriptions.** -1. {{< step_create-cluster_cluster-options >}} +1. 
Use **Cluster Options** to choose the Kubernetes version, the network provider that will be used, and whether you want to enable project network isolation. To see more cluster options, click on **Show advanced options.** 1. Complete the **Account Access** form using the output from your Service Principal. This information is used to authenticate with Azure. @@ -139,4 +139,13 @@ Use Rancher to set up and configure your Kubernetes cluster.
1. Review your options to confirm they're correct. Then click **Create**. -{{< result_create-cluster >}} +**Result:** + +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. + +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce/_index.md index f01af1c27b3..f59a024d856 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/cce/_index.md @@ -24,59 +24,64 @@ Huawei CCE service doesn't support the ability to create clusters with public ac ## Create the CCE Cluster 1. From the **Clusters** page, click **Add Cluster**. +1. Choose **Huawei CCE**. +1. Enter a **Cluster Name**. +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. +1. Enter your **Project Id**, your Access Key ID as **Access Key**, and your Secret Access Key as **Secret Key**. Then click **Next: Configure cluster**. Fill in the cluster configuration. For help filling out the form, refer to [Huawei CCE Configuration.](#huawei-cce-configuration) +1. Fill in the node configuration for the cluster. For help filling out the form, refer to [Node Configuration.](#node-configuration) +1. Click **Create** to create the CCE cluster. -2. Choose **Huawei CCE**. +**Result:** -3. Enter a **Cluster Name**. -4.
{{< step_create-cluster_member-roles >}} +You can access your cluster after its state is updated to **Active.** -5. Enter **Project Id**, Access Key ID as **Access Key** and Secret Access Key **Secret Key**. Then Click **Next: Configure cluster**. +**Active** clusters are assigned two Projects: -6. Fill the following cluster configuration: +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces - |Settings|Description| - |---|---| - | Cluster Type | Which type or node you want to include into the cluster, `VirtualMachine` or `BareMetal`. | - | Description | The description of the cluster. | - | Master Version | The Kubernetes version. | - | Management Scale Count | The max node count of the cluster. The options are 50, 200 and 1000. The larger of the scale count, the more the cost. | - | High Availability | Enable master node high availability. The cluster with high availability enabled will have more cost. | - | Container Network Mode | The network mode used in the cluster. `overlay_l2` and `vpc-router` is supported in `VirtualMachine` type and `underlay_ipvlan` is supported in `BareMetal` type | - | Container Network CIDR | Network CIDR for the cluster. | - | VPC Name | The VPC name which the cluster is going to deploy into. Rancher will create one if it is blank. | - | Subnet Name | The Subnet name which the cluster is going to deploy into. Rancher will create one if it is blank. | - | External Server | This option is reserved for the future we can enable CCE cluster public access via API. For now, it is always disabled. | - | Cluster Label | The labels for the cluster. | - | Highway Subnet | This option is only supported in `BareMetal` type. It requires you to select a VPC with high network speed for the bare metal machines. 
| +# Huawei CCE Configuration - **Note:** If you are editing the cluster in the `cluster.yml` instead of the Rancher UI, note that as of Rancher v2.3.0, cluster configuration directives must be nested under the `rancher_kubernetes_engine_config` directive in `cluster.yml`. For more information, refer to the section on [the config file structure in Rancher v2.3.0+.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file-structure-in-rancher-v2-3-0) +|Settings|Description| +|---|---| +| Cluster Type | Which type of node you want to include in the cluster, `VirtualMachine` or `BareMetal`. | +| Description | The description of the cluster. | +| Master Version | The Kubernetes version. | +| Management Scale Count | The maximum node count of the cluster. The options are 50, 200 and 1000. The larger the scale count, the higher the cost. | +| High Availability | Enable master node high availability. A cluster with high availability enabled costs more. | +| Container Network Mode | The network mode used in the cluster. `overlay_l2` and `vpc-router` are supported with the `VirtualMachine` type, and `underlay_ipvlan` is supported with the `BareMetal` type. | +| Container Network CIDR | The network CIDR for the cluster. | +| VPC Name | The name of the VPC that the cluster will be deployed into. Rancher will create one if it is blank. | +| Subnet Name | The name of the subnet that the cluster will be deployed into. Rancher will create one if it is blank. | +| External Server | This option is reserved for the future, when CCE cluster public access can be enabled via the API. For now, it is always disabled. | +| Cluster Label | The labels for the cluster. | +| Highway Subnet | This option is only supported with the `BareMetal` type. It requires you to select a VPC with high network speed for the bare metal machines. | -7.
Fill the following node configuration of the cluster: +**Note:** If you are editing the cluster in the `cluster.yml` instead of the Rancher UI, note that as of Rancher v2.3.0, cluster configuration directives must be nested under the `rancher_kubernetes_engine_config` directive in `cluster.yml`. For more information, refer to the section on [the config file structure in Rancher v2.3.0+.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file-structure-in-rancher-v2-3-0) - |Settings|Description| - |---|---| - | Zone | The available zone at where the node(s) of the cluster is deployed. | - | Billing Mode | The bill mode for the cluster node(s). In `VirtualMachine` type, only `Pay-per-use` is supported. in `BareMetal`, you can choose `Pay-per-use` or `Yearly/Monthly`. | - | Validity Period | This option only shows in `Yearly/Monthly` bill mode. It means how long you want to pay for the cluster node(s). | - | Auto Renew | This option only shows in `Yearly/Monthly` bill mode. It means that the cluster node(s) will renew the `Yearly/Monthly` payment automatically or not. | - | Data Volume Type | Data volume type for the cluster node(s). `SATA`, `SSD` or `SAS` for this option. | - | Data Volume Size | Data volume size for the cluster node(s) | - | Root Volume Type | Root volume type for the cluster node(s). `SATA`, `SSD` or `SAS` for this option. | - | Root Volume Size | Root volume size for the cluster node(s) | - | Node Flavor | The node flavor of the cluster node(s). The flavor list in Rancher UI is fetched from Huawei Cloud. It includes all the supported node flavors. | - | Node Count | The node count of the cluster | - | Node Operating System | The operating system for the cluster node(s). Only `EulerOS 2.2` and `CentOS 7.4` are supported right now. | - | SSH Key Name | The ssh key for the cluster node(s) | - | EIP | The public IP options for the cluster node(s). `Disabled` means that the cluster node(s) are not going to bind a public IP. 
`Create EIP` means that the cluster node(s) will bind one or many newly created Eips after provisioned and more options will be shown in the UI to set the to-create EIP parameters. And `Select Existed EIP` means that the node(s) will bind to the EIPs you select. | - | EIP Count | This option will only be shown when `Create EIP` is selected. It means how many EIPs you want to create for the node(s). | - | EIP Type | This option will only be shown when `Create EIP` is selected. The options are `5_bgp` and `5_sbgp`. | - | EIP Share Type | This option will only be shown when `Create EIP` is selected. The only option is `PER`. | - | EIP Charge Mode | This option will only be shown when `Create EIP` is selected. The options are pay by `BandWidth` and pay by `Traffic`. | - | EIP Bandwidth Size | This option will only be shown when `Create EIP` is selected. The BandWidth of the EIPs. | - | Authentication Mode | It means enabling `RBAC` or also enabling `Authenticating Proxy`. If you select `Authenticating Proxy`, the certificate which is used for authenticating proxy will be also required. | - | Node Label | The labels for the cluster node(s). | -8. Click **Create** to create the CCE cluster. - -{{< result_create-cluster >}} +|Settings|Description| +|---|---| +| Zone | The availability zone where the cluster node(s) are deployed. | +| Billing Mode | The billing mode for the cluster node(s). With the `VirtualMachine` type, only `Pay-per-use` is supported. With `BareMetal`, you can choose `Pay-per-use` or `Yearly/Monthly`. | +| Validity Period | This option only shows in `Yearly/Monthly` billing mode. It specifies how long you want to pay for the cluster node(s). | +| Auto Renew | This option only shows in `Yearly/Monthly` billing mode. It specifies whether the cluster node(s) renew the `Yearly/Monthly` payment automatically. | +| Data Volume Type | Data volume type for the cluster node(s). The options are `SATA`, `SSD` and `SAS`.
| +| Data Volume Size | Data volume size for the cluster node(s). | +| Root Volume Type | Root volume type for the cluster node(s). The options are `SATA`, `SSD` and `SAS`. | +| Root Volume Size | Root volume size for the cluster node(s). | +| Node Flavor | The node flavor of the cluster node(s). The flavor list in the Rancher UI is fetched from Huawei Cloud. It includes all the supported node flavors. | +| Node Count | The node count of the cluster. | +| Node Operating System | The operating system for the cluster node(s). Only `EulerOS 2.2` and `CentOS 7.4` are supported right now. | +| SSH Key Name | The SSH key for the cluster node(s). | +| EIP | The public IP options for the cluster node(s). `Disabled` means that the cluster node(s) will not bind a public IP. `Create EIP` means that the cluster node(s) will bind one or more newly created EIPs after being provisioned, and more options will be shown in the UI to set the parameters of the EIPs to create. `Select Existed EIP` means that the node(s) will bind to the EIPs you select. | +| EIP Count | This option is only shown when `Create EIP` is selected. It specifies how many EIPs you want to create for the node(s). | +| EIP Type | This option is only shown when `Create EIP` is selected. The options are `5_bgp` and `5_sbgp`. | +| EIP Share Type | This option is only shown when `Create EIP` is selected. The only option is `PER`. | +| EIP Charge Mode | This option is only shown when `Create EIP` is selected. The options are pay by `BandWidth` and pay by `Traffic`. | +| EIP Bandwidth Size | This option is only shown when `Create EIP` is selected. The bandwidth of the EIPs. | +| Authentication Mode | Specifies whether to enable `RBAC` only or also `Authenticating Proxy`. If you select `Authenticating Proxy`, the certificate used for the authenticating proxy is also required. | +| Node Label | The labels for the cluster node(s).
| \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md index 63b763b38ef..aca3043c154 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/eks/_index.md @@ -66,13 +66,23 @@ Use Rancher to set up and configure your Kubernetes cluster. 1. Enter a **Cluster Name.** -1. {{< step_create-cluster_member-roles >}} +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. 1. Fill out the rest of the form. For help, refer to the [configuration reference.](#eks-cluster-configuration-reference) 1. Click **Create**. -{{< result_create-cluster >}} +**Result:** + +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. + +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces + # EKS Cluster Configuration Reference diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md index f26c83b8a57..898b8db3cac 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md @@ -24,7 +24,7 @@ The service account requires the following roles: ## Create the GKE Cluster -Use {{< product >}} to set up and configure your Kubernetes cluster. 
+Use Rancher to set up and configure your Kubernetes cluster. 1. From the **Clusters** page, click **Add Cluster**. @@ -32,7 +32,7 @@ Use {{< product >}} to set up and configure your Kubernetes cluster. 3. Enter a **Cluster Name**. -4. {{< step_create-cluster_member-roles >}} +4. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. 5. Either paste your service account private key in the **Service Account** text box or **Read from a file**. Then click **Next: Configure Nodes**. @@ -44,4 +44,13 @@ Use {{< product >}} to set up and configure your Kubernetes cluster. 8. Select your **Security Options** 9. Review your options to confirm they're correct. Then click **Create**. -{{< result_create-cluster >}} +**Result:** + +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. + +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces diff --git a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke/_index.md b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke/_index.md index dc6c66b9efb..5eb529df3ca 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/tke/_index.md @@ -29,7 +29,7 @@ You can use Rancher to create a cluster hosted in Tencent Kubernetes Engine (TKE 3. Enter a **Cluster Name**. -4. {{< step_create-cluster_member-roles >}} +4. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. 
Use the **Role** drop-down to set permissions for each user. 5. Configure **Account Access** for the TKE cluster. Complete each drop-down and field using the information obtained in [Prerequisites](#prerequisites). @@ -74,4 +74,13 @@ You can use Rancher to create a cluster hosted in Tencent Kubernetes Engine (TKE 9. Click **Create**. -{{< result_create-cluster >}} +**Result:** + +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. + +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces diff --git a/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md index 54a861f240c..abca96447be 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md @@ -72,11 +72,11 @@ By default, GKE users are not given this privilege, so you will need to run the 1. From the **Clusters** page, click **Add Cluster**. 2. Choose **Import**. 3. Enter a **Cluster Name**. -4. {{< step_create-cluster_member-roles >}} +4. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. 5. Click **Create**. 6. The prerequisite for `cluster-admin` privileges is shown (see **Prerequisites** above), including an example command to fulfil the prerequisite. -7. Copy the `kubectl` command to your clipboard and run it on a node where kubeconfig is configured to point to the cluster you want to import.
If you are unsure it is configured correctly, run `kubectl get nodes` to verify before running the command shown in {{< product >}}. -8. If you are using self signed certificates, you will receive the message `certificate signed by unknown authority`. To work around this validation, copy the command starting with `curl` displayed in {{< product >}} to your clipboard. Then run the command on a node where kubeconfig is configured to point to the cluster you want to import. +7. Copy the `kubectl` command to your clipboard and run it on a node where kubeconfig is configured to point to the cluster you want to import. If you are unsure it is configured correctly, run `kubectl get nodes` to verify before running the command shown in Rancher. +8. If you are using self-signed certificates, you will receive the message `certificate signed by unknown authority`. To work around this validation, copy the command starting with `curl` displayed in Rancher to your clipboard. Then run the command on a node where kubeconfig is configured to point to the cluster you want to import. 9. When you finish running the command(s) on your node, click **Done**. **Result:** diff --git a/content/rancher/v2.x/en/cluster-provisioning/registered-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/registered-clusters/_index.md index 6f24cf77651..d105b33811b 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/registered-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/registered-clusters/_index.md @@ -44,7 +44,7 @@ If you are registering a K3s cluster, make sure the `cluster.yml` is readable. I 1. From the **Clusters** page, click **Add Cluster**. 2. Choose **Register**. 3. Enter a **Cluster Name**. -4. {{< step_create-cluster_member-roles >}} +4. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. 5. Click **Create.**
6. The prerequisite for `cluster-admin` privileges is shown (see **Prerequisites** above), including an example command to fulfil the prerequisite. 7. Copy the `kubectl` command to your clipboard and run it on a node where kubeconfig is configured to point to the cluster you want to import. If you are unsure it is configured correctly, run `kubectl get nodes` to verify before running the command shown in Rancher. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/_index.md index ce7512f5ba7..d118db75b23 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/_index.md @@ -27,7 +27,7 @@ For more information, refer to the section on [launching Kubernetes on new nodes ### Launching Kubernetes on Existing Custom Nodes -In this scenario, you want to install Kubernetes on bare-metal servers, on-premise virtual machines, or virtual machines that already exist in a cloud provider. With this option, you will run a Rancher agent Docker container on the machine. +In this scenario, you want to install Kubernetes on bare-metal servers, on-prem virtual machines, or virtual machines that already exist in a cloud provider. With this option, you will run a Rancher agent Docker container on the machine. If you want to reuse a node from a previous custom cluster, [clean the node]({{}}/rancher/v2.x/en/admin-settings/removing-rancher/rancher-cluster-nodes/) before using it in a cluster again. If you reuse a node that hasn't been cleaned, cluster provisioning may fail. 
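The custom-node workflow above is driven by RKE under the hood. For orientation only, this is roughly how standalone RKE would describe the same machines and node roles in a `cluster.yml` (the addresses and SSH user are placeholders, not values from this guide; Rancher custom clusters generate this configuration for you via the agent command):

```yaml
# Illustrative standalone-RKE cluster.yml fragment; addresses and user are placeholders
nodes:
  - address: 203.0.113.10        # runs etcd and the Kubernetes control plane
    user: ubuntu
    role: [controlplane, etcd]
  - address: 203.0.113.11        # dedicated worker node
    user: ubuntu
    role: [worker]
```

The `role` list is the same set of roles (etcd, control plane, worker) that Rancher assigns when you run the agent command with the corresponding flags.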
diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md index e883a5a8916..0000e447a29 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/_index.md @@ -18,6 +18,7 @@ The following cloud providers can be enabled: * Amazon * Azure * GCE (Google Compute Engine) +* vSphere ### Setting up the Amazon Cloud Provider @@ -31,6 +32,10 @@ For details on enabling the Azure cloud provider, refer to [this page.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/gce) +### Setting up the vSphere Cloud Provider + +For details on enabling the vSphere cloud provider, refer to [this page.](./vsphere) + ### Setting up a Custom Cloud Provider The `Custom` cloud provider is available if you want to configure any [Kubernetes cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/). diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/_index.md new file mode 100644 index 00000000000..c9dd3762981 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere/_index.md @@ -0,0 +1,25 @@ +--- +title: Setting up the vSphere Cloud Provider +weight: 4 +--- + +In this section, you'll learn how to set up the vSphere cloud provider for a Rancher-managed RKE Kubernetes cluster in vSphere. + +Follow these steps while creating the vSphere cluster in Rancher: + +1. Set the **Cloud Provider** option to `Custom`. + + {{< img "/img/rancher/vsphere-node-driver-cloudprovider.png" "vsphere-node-driver-cloudprovider">}} + +1. Click **Edit as YAML**. +1. Insert the following structure into the pre-populated cluster YAML.
As of Rancher v2.3+, this structure must be placed under `rancher_kubernetes_engine_config`. In versions prior to v2.3, it has to be defined as a top-level field. Note that the `name` *must* be set to `vsphere`. + + ```yaml + rancher_kubernetes_engine_config: # Required as of Rancher v2.3+ + cloud_provider: + name: vsphere + vsphereCloudProvider: + [Insert provider configuration] + ``` + +Rancher uses RKE (the Rancher Kubernetes Engine) to provision Kubernetes clusters. Refer to the [vSphere configuration reference in the RKE documentation]({{}}/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/) for details about the properties of the `vsphereCloudProvider` directive. \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md index 9d28b9b3378..b6106a15488 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/_index.md @@ -8,7 +8,7 @@ aliases: - /rancher/v2.x/en/cluster-provisioning/custom-clusters/ --- -When you create a custom cluster, Rancher uses RKE (the Rancher Kubernetes Engine) to create a Kubernetes cluster in on-premise bare-metal servers, on-premise virtual machines, or in any node hosted by an infrastructure provider. +When you create a custom cluster, Rancher uses RKE (the Rancher Kubernetes Engine) to create a Kubernetes cluster in on-prem bare-metal servers, on-prem virtual machines, or in any node hosted by an infrastructure provider. To use this option you'll need access to servers you intend to use in your Kubernetes cluster. Provision each server according to the [requirements]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements), which includes some hardware specifications and Docker. 
After you install Docker on each server, run the command provided in the Rancher UI to turn each server into a Kubernetes node. @@ -33,7 +33,7 @@ This section describes how to set up a custom cluster. Begin creation of a custom cluster by provisioning a Linux host. Your host can be: - A cloud-host virtual machine (VM) -- An on-premise VM +- An on-prem VM - A bare-metal server If you want to reuse a node from a previous custom cluster, [clean the node]({{}}/rancher/v2.x/en/admin-settings/removing-rancher/rancher-cluster-nodes/) before using it in a cluster again. If you reuse a node that hasn't been cleaned, cluster provisioning may fail. @@ -48,9 +48,9 @@ Provision the host according to the [installation requirements]({{}}/ra 3. Enter a **Cluster Name**. -4. {{< step_create-cluster_member-roles >}} +4. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. -5. {{< step_create-cluster_cluster-options >}} +5. Use **Cluster Options** to choose the version of Kubernetes, what network provider will be used and if you want to enable project network isolation. To see more cluster options, click on **Show advanced options.** >**Using Windows nodes as Kubernetes workers?** > @@ -75,7 +75,17 @@ Provision the host according to the [installation requirements]({{}}/ra 11. When you finish running the command(s) on your Linux host(s), click **Done**. -{{< result_create-cluster >}} +**Result:** + +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. + +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces + ### 3. 
Amazon Only: Tag Resources diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/_index.md index fbac23befac..906ddc88638 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/_index.md @@ -58,10 +58,24 @@ To access all node templates, an administrator will need to do the following: # Node Pools -Using Rancher, you can create pools of nodes based on a [node template](#node-templates). The benefit of using a node pool is that if a node is destroyed or deleted, you can increase the number of live nodes to compensate for the node that was lost. The node pool helps you ensure that the count of the node pool is as expected. +Using Rancher, you can create pools of nodes based on a [node template](#node-templates). + +A node template defines the configuration of a node, such as which operating system to use, the number of CPUs, and the amount of memory. + +The benefit of using a node pool is that if a node is destroyed or deleted, you can increase the number of live nodes to compensate for the node that was lost. The node pool helps you ensure that the count of the node pool is as expected. Each node pool is assigned with a [node component]({{}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) to specify how these nodes should be configured for the Kubernetes cluster. +Each node pool must have one or more node roles assigned. + +Each node role (i.e., etcd, control plane, and worker) should be assigned to a distinct node pool. Although it is possible to assign multiple node roles to a node pool, this should not be done for production clusters.
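As a hedged sketch of the one-role-per-pool layout described above, the pools can be pictured like this (the field names loosely follow the shape of Rancher's node pool objects and are illustrative only, not an exact schema):

```yaml
# Illustrative sketch of separate node pools per role; field names are approximations
nodePools:
  - hostnamePrefix: etcd-     # pool of dedicated etcd nodes
    quantity: 3
    etcd: true
  - hostnamePrefix: cp-       # pool of dedicated control plane nodes
    quantity: 2
    controlPlane: true
  - hostnamePrefix: worker-   # pool of dedicated worker nodes
    quantity: 2
    worker: true
```

Keeping each role in its own pool lets you scale and replace nodes of one role without disturbing the others.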
+ +The recommended setup is to have: + +- a node pool with the etcd node role and a count of three +- a node pool with the control plane node role and a count of at least two +- a node pool with the worker node role and a count of at least two + ### Node Pool Taints _Available as of Rancher v2.3.0_ @@ -78,11 +92,9 @@ _Available as of Rancher v2.3.0_ If a node is in a node pool, Rancher can automatically replace unreachable nodes. Rancher will use the existing node template for the given node pool to recreate the node if it becomes inactive for a specified number of minutes. -> **Important:** Self-healing node pools are designed to help you replace worker nodes for **stateless** applications. It is not recommended to enable node auto-replace on a node pool of master nodes or nodes with persistent volumes attached, because VMs are treated ephemerally. When a node in a node pool loses connectivity with the cluster, its persistent volumes are destroyed, resulting in data loss for stateful applications. +> **Important:** Self-healing node pools are designed to help you replace worker nodes for stateless applications. It is not recommended to enable node auto-replace on a node pool of master nodes or nodes with persistent volumes attached, because VMs are treated ephemerally. When a node in a node pool loses connectivity with the cluster, its persistent volumes are destroyed, resulting in data loss for stateful applications. -{{% accordion id="how-does-node-auto-replace-work" label="How does Node Auto-replace Work?" %}} - Node auto-replace works on top of the Kubernetes node controller. The node controller periodically checks the status of all the nodes (configurable via the `--node-monitor-period` flag of the `kube-controller`). When a node is unreachable, the node controller will taint that node. When this occurs, Rancher will begin its deletion countdown. You can configure the amount of time Rancher waits to delete the node. 
If the taint is not removed before the deletion countdown ends, Rancher will proceed to delete the node object. Rancher will then provision a node in accordance with the set quantity of the node pool. -{{% /accordion %}} +Node auto-replace works on top of the Kubernetes node controller. The node controller periodically checks the status of all the nodes (configurable via the `--node-monitor-period` flag of the `kube-controller`). When a node is unreachable, the node controller will taint that node. When this occurs, Rancher will begin its deletion countdown. You can configure the amount of time Rancher waits to delete the node. If the taint is not removed before the deletion countdown ends, Rancher will proceed to delete the node object. Rancher will then provision a node in accordance with the set quantity of the node pool. ### Enabling Node Auto-replace diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/_index.md index 79d803f23cb..a71e7e3a906 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/_index.md @@ -6,84 +6,117 @@ aliases: - /rancher/v2.x/en/tasks/clusters/creating-a-cluster/create-cluster-azure/ --- -In this section, you'll learn how to set up a Kubernetes cluster in Azure through Rancher. During this process, Rancher will provision new nodes in Azure. +In this section, you'll learn how to install an [RKE]({{}}/rke/latest/en/) Kubernetes cluster in Azure through Rancher. +First, you will set up your Azure cloud credentials in Rancher. Then you will use your cloud credentials to create a node template, which Rancher will use to provision new nodes in Azure. + +Then you will create an Azure cluster in Rancher, and when configuring the new cluster, you will define node pools for it. 
Each node pool will have a Kubernetes role of etcd, controlplane, or worker. Rancher will install Kubernetes on the new nodes, and it will set up each node with the Kubernetes role defined by the node pool. + +For more information on configuring the Kubernetes cluster that Rancher will install on the Azure nodes, refer to the [RKE cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) + +For more information on configuring Azure node templates, refer to the [Azure node template configuration reference.](./azure-node-template-config) + +- [Preparation in Azure](#preparation-in-azure) - [Creating an Azure Cluster](#creating-an-azure-cluster) -- [Creating an Azure Node Template](#creating-an-azure-node-template) - - [Preparation in Azure](#preparation-in-azure) - - [Creating the Template](#creating-the-template) - - [Template Configuration](#template-configuration) -# Creating an Azure Cluster - -> **Prerequisite:** Before Rancher can create a cluster in Azure, a node template needs to be created using your Azure credentials and configuration. For details, see [this section.](#creating-an-azure-node-template) - -Use {{< product >}} to create a Kubernetes cluster in Azure. - -1. From the **Clusters** page, click **Add Cluster**. - -2. Choose **Azure**. - -3. Enter a **Cluster Name**. - -4. {{< step_create-cluster_member-roles >}} - -5. {{< step_create-cluster_cluster-options >}} For more information, see the [cluster configuration reference.](../../options) - -6. {{< step_create-cluster_node-pools >}} - -7. **Optional:** Add additional node pools. - -8. Review your options to confirm they're correct. Then click **Create**. - -{{< result_create-cluster >}} - -### Optional Next Steps - -After creating your cluster, you can access it through the Rancher UI. 
As a best practice, we recommend setting up these alternate ways of accessing your cluster: - -- **Access your cluster with the kubectl CLI:** Follow [these steps]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#accessing-clusters-with-kubectl-on-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server’s authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI. -- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster. - -# Creating an Azure Node Template - -Creating a node template for Azure will allow Rancher to provision new nodes when it sets up a Kubernetes cluster in Azure. - -### Preparation in Azure +# Preparation in Azure -Before creating a **node template** in Rancher using a cloud infrastructure such as Azure, we must configure Rancher to allow the manipulation of resources in an Azure subscription. +Before creating a node template in Rancher using a cloud infrastructure such as Azure, we must configure Rancher to allow the manipulation of resources in an Azure subscription. To do this, we will first create a new Azure **service principal (SP)** in Azure **Active Directory (AD)**, which, in Azure, is an application user who has permission to manage Azure resources. 
The following is a template `az cli` command for creating a service principal, where you enter your own SP name, role, and scope: ``` -az ad sp create-for-rbac --name="" --role="Contributor" --scopes="/subscriptions/" +az ad sp create-for-rbac \ + --name="" \ + --role="Contributor" \ + --scopes="/subscriptions/" ``` -The creation of this service principal returns three pieces of identification information, *The application ID, also called the client ID*, *The client secret*, and *The tenant ID*. This information will be used in the following section adding the **node template**. +The creation of this service principal returns three pieces of identification information: the *application ID* (also called the client ID), the *client secret*, and the *tenant ID*. This information will be used when you create a node template for Azure. -### Creating the Template +# Creating an Azure Cluster -1. Click **Add Node Template**. +{{%tabs %}} +{{% tab "Rancher v2.2.0+" %}} -1. Complete the **Azure Options** form. For help filling out the form, refer to [Configuration](#azure-node-template-configuration) below. +1. [Create your cloud credentials](#1-create-your-cloud-credentials) +2. [Create a node template with your cloud credentials](#2-create-a-node-template-with-your-cloud-credentials) +3. [Create a cluster with node pools using the node template](#3-create-a-cluster-with-node-pools-using-the-node-template) + +### 1. Create your cloud credentials -1. Click **Create**. +1. In the Rancher UI, click the user profile button in the upper right corner, and click **Cloud Credentials.** +1. Click **Add Cloud Credential.** +1. Enter a name for the cloud credential. +1. In the **Cloud Credential Type** field, select **Azure**. +1. Enter your Azure credentials. +1. Click **Create.** -**Result:** The node template can be used during the cluster creation process.
+**Result:** You have created the cloud credentials that will be used to provision nodes in your cluster. You can reuse these credentials for other node templates, or in other clusters. +### 2. Create a node template with your cloud credentials -### Template Configuration +Creating a [node template]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) for Azure will allow Rancher to provision new nodes in Azure. Node templates can be reused for other clusters. -- **Account Access** stores your account information for authenticating with Azure. Note: As of v2.2.0, account access information is stored as a cloud credentials. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one. To create a new cloud credential, enter **Name** and **Account Access** data, then click **Create.** +1. In the Rancher UI, click the user profile button in the upper right corner, and click **Node Templates.** +1. Click **Add Template.** +1. Fill out a node template for Azure. For help filling out the form, refer to [Azure Node Template Configuration.](./azure-node-template-config) -- **Placement** sets the geographical region where your cluster is hosted and other location metadata. +### 3. Create a cluster with node pools using the node template -- **Network** configures the networking used in your cluster. +Use Rancher to create a Kubernetes cluster in Azure. -- **Instance** customizes your VM configuration. +1. From the **Clusters** page, click **Add Cluster**. +1. Choose **Azure**. +1. Enter a **Cluster Name**. +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. +1. 
Use **Cluster Options** to choose the version of Kubernetes that will be installed, what network provider will be used and if you want to enable project network isolation. To see more cluster options, click on **Show advanced options.** For help configuring the cluster, refer to the [RKE cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) +1. Add one or more node pools to your cluster. Each node pool uses a node template to provision new nodes. For more information about node pools, including best practices, see [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools) +1. Review your options to confirm they're correct. Then click **Create**. -{{< step_rancher-template >}} +**Result:** +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces + +{{% /tab %}} +{{% tab "Rancher prior to v2.2.0" %}} + +Use Rancher to create a Kubernetes cluster in Azure. + +1. From the **Clusters** page, click **Add Cluster**. +1. Choose **Azure**. +1. Enter a **Cluster Name**. +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. +1. Use **Cluster Options** to choose the version of Kubernetes that will be installed, what network provider will be used and if you want to enable project network isolation. To see more cluster options, click on **Show advanced options.** For help configuring the cluster, refer to the [RKE cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) +1. 
Add one or more node pools to your cluster. Each node pool uses a node template to provision new nodes. To create a node template, click **Add Node Template** and complete the **Azure Options** form. For help filling out the form, refer to the [Azure node template configuration reference.](./azure-node-template-config) For more information about node pools, including best practices for assigning Kubernetes roles to them, see [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools) +1. Review your options to confirm they're correct. Then click **Create**. + +**Result:** + +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. + +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces + +{{% /tab %}} +{{% /tabs %}} + +### Optional Next Steps + +After creating your cluster, you can access it through the Rancher UI. As a best practice, we recommend setting up these alternate ways of accessing your cluster: + +- **Access your cluster with the kubectl CLI:** Follow [these steps]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#accessing-clusters-with-kubectl-on-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server’s authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI. +- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. 
We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster. \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/azure-node-template-config/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/azure-node-template-config/_index.md new file mode 100644 index 00000000000..a9fd0d1fb09 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/azure/azure-node-template-config/_index.md @@ -0,0 +1,39 @@ +--- +title: Azure Node Template Configuration +weight: 1 +--- + +For more information about Azure, refer to the official [Azure documentation.](https://docs.microsoft.com/en-us/azure/?product=featured) + +{{% tabs %}} +{{% tab "Rancher v2.2.0+" %}} + +Account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one. + +- **Placement** sets the geographical region where your cluster is hosted and other location metadata. +- **Network** configures the networking used in your cluster. +- **Instance** customizes your VM configuration. + +The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include: + +- **Labels:** For information on labels, refer to the [Docker object label documentation.](https://docs.docker.com/config/labels-custom-metadata/) +- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance. 
+- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon +- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/) + +{{% /tab %}} +{{% tab "Rancher prior to v2.2.0" %}} + +- **Account Access** stores your account information for authenticating with Azure. +- **Placement** sets the geographical region where your cluster is hosted and other location metadata. +- **Network** configures the networking used in your cluster. +- **Instance** customizes your VM configuration. + +The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include: + +- **Labels:** For information on labels, refer to the [Docker object label documentation.](https://docs.docker.com/config/labels-custom-metadata/) +- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance. +- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon +- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/) +{{% /tab %}} +{{% /tabs %}} diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/_index.md index 21f6438048c..3a26d0f6911 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/_index.md @@ -5,37 +5,81 @@ weight: 2215 aliases: - /rancher/v2.x/en/tasks/clusters/creating-a-cluster/create-cluster-digital-ocean/ --- -Use {{< product >}} to create a Kubernetes cluster using DigitalOcean. +In this section, you'll learn how to use Rancher to install an [RKE](https://rancher.com/docs/rke/latest/en/) Kubernetes cluster in DigitalOcean. 
+ +First, you will set up your DigitalOcean cloud credentials in Rancher. Then you will use your cloud credentials to create a node template, which Rancher will use to provision new nodes in DigitalOcean. + +Then you will create a DigitalOcean cluster in Rancher, and when configuring the new cluster, you will define node pools for it. Each node pool will have a Kubernetes role of etcd, controlplane, or worker. Rancher will install RKE Kubernetes on the new nodes, and it will set up each node with the Kubernetes role defined by the node pool. + +{{% tabs %}} +{{% tab "Rancher v2.2.0+" %}} +1. [Create your cloud credentials](#1-create-your-cloud-credentials) +2. [Create a node template with your cloud credentials](#2-create-a-node-template-with-your-cloud-credentials) +3. [Create a cluster with node pools using the node template](#3-create-a-cluster-with-node-pools-using-the-node-template) + +### 1. Create your cloud credentials + +1. In the Rancher UI, click the user profile button in the upper right corner, and click **Cloud Credentials.** +1. Click **Add Cloud Credential.** +1. Enter a name for the cloud credential. +1. In the **Cloud Credential Type** field, select **DigitalOcean**. +1. Enter your Digital Ocean credentials. +1. Click **Create.** + +**Result:** You have created the cloud credentials that will be used to provision nodes in your cluster. You can reuse these credentials for other node templates, or in other clusters. + +### 2. Create a node template with your cloud credentials + +Creating a [node template]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) for DigitalOcean will allow Rancher to provision new nodes in DigitalOcean. Node templates can be reused for other clusters. + +1. In the Rancher UI, click the user profile button in the upper right corner, and click **Node Templates.** +1. Click **Add Template.** +1. Fill out a node template for DigitalOcean. 
For help filling out the form, refer to [DigitalOcean Node Template Configuration.](./do-node-template-config) + +### 3. Create a cluster with node pools using the node template 1. From the **Clusters** page, click **Add Cluster**. +1. Choose **DigitalOcean**. +1. Enter a **Cluster Name**. +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. +1. Use **Cluster Options** to choose the version of Kubernetes that will be installed, what network provider will be used and if you want to enable project network isolation. To see more cluster options, click on **Show advanced options.** For help configuring the cluster, refer to the [RKE cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) +1. Add one or more node pools to your cluster. Each node pool uses a node template to provision new nodes. For more information about node pools, including best practices for assigning Kubernetes roles to them, see [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools) +1. Review your options to confirm they're correct. Then click **Create**. -2. Choose **DigitalOcean**. -3. Enter a **Cluster Name**. -4. {{< step_create-cluster_member-roles >}} -5. {{< step_create-cluster_cluster-options >}} -6. {{< step_create-cluster_node-pools >}} - 1. Click **Add Node Template**. Note: As of v2.2.0, account access information is stored as cloud credentials.
Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one. To create a new cloud credential, enter **Name** and **Account Access** data, then click **Create.** +{{% /tab %}} +{{% tab "Rancher prior to v2.2.0" %}} - 2. Complete the **Digital Ocean Options** form. +1. From the **Clusters** page, click **Add Cluster**. +1. Choose **DigitalOcean**. +1. Enter a **Cluster Name**. +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. +1. Use **Cluster Options** to choose the version of Kubernetes that will be installed, what network provider will be used and if you want to enable project network isolation. To see more cluster options, click on **Show advanced options.** For help configuring the cluster, refer to the [RKE cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) +1. Add one or more node pools to your cluster. Each node pool uses a node template to provision new nodes. To create a node template, click **Add Node Template** and complete the **Digital Ocean Options** form. For help filling out the form, refer to the [Digital Ocean node template configuration reference.](./do-node-template-config) For more information about node pools, including best practices for assigning Kubernetes roles to them, see [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools) +1. Review your options to confirm they're correct. Then click **Create**. - - **Access Token** stores your DigitalOcean Personal Access Token. Refer to [DigitalOcean Instructions: How To Generate a Personal Access Token](https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-api-v2#how-to-generate-a-personal-access-token). 
+**Result:** - - **Droplet Options** provision your cluster's geographical region and specifications. +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. - 4. {{< step_rancher-template >}} +You can access your cluster after its state is updated to **Active.** - 5. Click **Create**. +**Active** clusters are assigned two Projects: - 6. **Optional:** Add additional node pools. -
-7. Review your options to confirm they're correct. Then click **Create**. - -{{< result_create-cluster >}} +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces +{{% /tab %}} +{{% /tabs %}} # Optional Next Steps diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/do-node-template-config/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/do-node-template-config/_index.md new file mode 100644 index 00000000000..9e2ad91e795 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/digital-ocean/do-node-template-config/_index.md @@ -0,0 +1,43 @@ +--- +title: DigitalOcean Node Template Configuration +weight: 1 +--- + +{{% tabs %}} +{{% tab "Rancher v2.2.0+" %}} + +Account access information is stored as a cloud credential. Cloud credentials are stored as Kubernetes secrets. Multiple node templates can use the same cloud credential. You can use an existing cloud credential or create a new one. + +### Droplet Options + +The **Droplet Options** provision your cluster's geographical region and specifications. + +### Docker Daemon + +The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include: + +- **Labels:** For information on labels, refer to the [Docker object label documentation.](https://docs.docker.com/config/labels-custom-metadata/) +- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance. +- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon +- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/) +{{% /tab %}} +{{% tab "Rancher prior to v2.2.0" %}} + +### Access Token + +The **Access Token** stores your DigitalOcean Personal Access Token.
Refer to [DigitalOcean Instructions: How To Generate a Personal Access Token](https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-api-v2#how-to-generate-a-personal-access-token). + +### Droplet Options + +The **Droplet Options** provision your cluster's geographical region and specifications. + +### Docker Daemon + +The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include: + +- **Labels:** For information on labels, refer to the [Docker object label documentation.](https://docs.docker.com/config/labels-custom-metadata/) +- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance. +- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon +- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/) +{{% /tab %}} +{{% /tabs %}} \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md index 71bd25fdbce..106a2d6f300 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/_index.md @@ -4,7 +4,11 @@ shortTitle: Amazon EC2 description: Learn the prerequisites and steps required in order for you to create an Amazon EC2 cluster using Rancher weight: 2210 --- -Use Rancher to create a Kubernetes cluster in Amazon EC2. +In this section, you'll learn how to use Rancher to install an [RKE](https://rancher.com/docs/rke/latest/en/) Kubernetes cluster in Amazon EC2. + +First, you will set up your EC2 cloud credentials in Rancher. Then you will use your cloud credentials to create a node template, which Rancher will use to provision new nodes in EC2. 
+ +Then you will create an EC2 cluster in Rancher, and when configuring the new cluster, you will define node pools for it. Each node pool will have a Kubernetes role of etcd, controlplane, or worker. Rancher will install RKE Kubernetes on the new nodes, and it will set up each node with the Kubernetes role defined by the node pool. ### Prerequisites @@ -41,77 +45,62 @@ The steps to create a cluster differ based on your Rancher version. **Result:** You have created the cloud credentials that will be used to provision nodes in your cluster. You can reuse these credentials for other node templates, or in other clusters. ### 2. Create a node template with your cloud credentials and information from EC2 -Complete each of the following forms using information available from the [EC2 Management Console](https://aws.amazon.com/ec2). + +Creating a [node template]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) for EC2 will allow Rancher to provision new nodes in EC2. Node templates can be reused for other clusters. 1. In the Rancher UI, click the user profile button in the upper right corner, and click **Node Templates.** 1. Click **Add Template.** -1. In the **Region** field, select the same region that you used when creating your cloud credentials. -1. In the **Cloud Credentials** field, select your newly created cloud credentials. -1. Click **Next: Authenticate & configure nodes.** -1. Choose an availability zone and network settings for your cluster. Click **Next: Select a Security Group.** -1. Choose the default security group or configure a security group. Please refer to [Amazon EC2 security group when using Node Driver]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#security-group-for-nodes-on-aws-ec2) to see what rules are created in the `rancher-nodes` Security Group. Then click **Next: Set Instance options.** -1. Configure the instances that will be created. 
Make sure you configure the correct **SSH User** for the configured AMI. - -> If you need to pass an IAM Instance Profile Name (not ARN), for example, when you want to use a [Kubernetes Cloud Provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers), you will need an additional permission in your policy. See [Example IAM policy with PassRole](#example-iam-policy-with-passrole) for an example policy. - -Optional: In the **Engine Options** section of the node template, you can configure the Docker daemon. You may want to specify the docker version or a Docker registry mirror. +1. Fill out a node template for EC2. For help filling out the form, refer to [EC2 Node Template Configuration.](./ec2-node-template-config) ### 3. Create a cluster with node pools using the node template -{{< step_create-cluster_node-pools >}} +Add one or more node pools to your cluster. For more information about node pools, see [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools) 1. From the **Clusters** page, click **Add Cluster**. - 1. Choose **Amazon EC2**. - 1. Enter a **Cluster Name**. - -1. Create a node pool for each Kubernetes role. For each node pool, choose a node template that you created. - -1. Click **Add Member** to add users that can access the cluster. - -1. Use the **Role** drop-down to set permissions for each user. - -1. Use **Cluster Options** to choose the version of Kubernetes, what network provider will be used and if you want to enable project network isolation. Refer to [Selecting Cloud Providers]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) to configure the Kubernetes Cloud Provider. - +1. Create a node pool for each Kubernetes role. For each node pool, choose a node template that you created. 
For more information about node pools, including best practices for assigning Kubernetes roles to them, see [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools) +1. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. +1. Use **Cluster Options** to choose the version of Kubernetes that will be installed, what network provider will be used and if you want to enable project network isolation. Refer to [Selecting Cloud Providers]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) to configure the Kubernetes Cloud Provider. For help configuring the cluster, refer to the [RKE cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) 1. Click **Create**. -{{< result_create-cluster >}} +**Result:** + +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. + +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces + {{% /tab %}} -{{% tab "Rancher prior to v2.2.0+" %}} +{{% tab "Rancher prior to v2.2.0" %}} 1. From the **Clusters** page, click **Add Cluster**. - 1. Choose **Amazon EC2**. - 1. Enter a **Cluster Name**. - -1. {{< step_create-cluster_member-roles >}} - -1. {{< step_create-cluster_cluster-options >}}Refer to [Selecting Cloud Providers]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) to configure the Kubernetes Cloud Provider. - -1. {{< step_create-cluster_node-pools >}} - - 1. Click **Add Node Template**. - - 1. Complete each of the following forms using information available from the [EC2 Management Console](https://aws.amazon.com/ec2). 
- - - **Account Access** is where you configure the region of the nodes, and the credentials (Access Key and Secret Key) used to create the machine. See [Prerequisites](#prerequisites) how to create the Access Key and Secret Key and the needed permissions. - - **Zone and Network** configures the availability zone and network settings for your cluster. - - **Security Groups** creates or configures the Security Groups applied to your nodes. Please refer to [Amazon EC2 security group when using Node Driver]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#security-group-for-nodes-on-aws-ec2) to see what rules are created in the `rancher-nodes` Security Group. - - **Instance** configures the instances that will be created. Make sure you configure the correct **SSH User** for the configured AMI. -

- If you need to pass an **IAM Instance Profile Name** (not ARN), for example, when you want to use a [Kubernetes Cloud Provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers), you will need an additional permission in your policy. See [Example IAM policy with PassRole](#example-iam-policy-with-passrole) for an example policy. - -1. {{< step_rancher-template >}} +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. +1. Use **Cluster Options** to choose the version of Kubernetes that will be installed, what network provider will be used and if you want to enable project network isolation. To see more cluster options, click on **Show advanced options.** Refer to [Selecting Cloud Providers]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) to configure the Kubernetes Cloud Provider. For help configuring the cluster, refer to the [RKE cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) +1. Add one or more node pools to your cluster. Each node pool uses a node template to provision new nodes. For more information about node pools, including best practices for assigning Kubernetes roles to them, see [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools) To create a node template, click **Add Node Template**. For help filling out the node template, refer to [EC2 Node Template Configuration.](./ec2-node-template-config) 1. Click **Create**. 1. **Optional:** Add additional node pools. 1. Review your cluster settings to confirm they are correct. Then click **Create**. -{{< result_create-cluster >}} +**Result:** + +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. 
+ +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces + {{% /tab %}} {{% /tabs %}} - ### Optional Next Steps After creating your cluster, you can access it through the Rancher UI. As a best practice, we recommend setting up these alternate ways of accessing your cluster: diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md new file mode 100644 index 00000000000..ef7393f0f66 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/ec2-node-template-config/_index.md @@ -0,0 +1,99 @@ +--- +title: EC2 Node Template Configuration +weight: 1 +--- + +For more details about EC2 nodes, refer to the official documentation for the [EC2 Management Console](https://aws.amazon.com/ec2). + +{{% tabs %}} +{{% tab "Rancher v2.2.0+" %}} + +### Region + +In the **Region** field, select the same region that you used when creating your cloud credentials. + +### Cloud Credentials + +Your AWS account access information, stored in a [cloud credential.]({{}}/rancher/v2.x/en/user-settings/cloud-credentials/) + +See [Amazon Documentation: Creating Access Keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) for how to create an Access Key and Secret Key. + +See [Amazon Documentation: Creating IAM Policies (Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html#access_policies_create-start) for how to create an IAM policy.
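One permission that comes up repeatedly when provisioning EC2 nodes is `iam:PassRole`, which is required when an IAM Instance Profile is passed to an instance. A minimal policy statement granting it might look like the following sketch; the account ID and role name are placeholders you would replace with your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/my-ec2-node-role"
    }
  ]
}
```

In practice, scope the `Resource` ARN to the specific role your instances should assume rather than using a wildcard.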
+ +See [Amazon Documentation: Adding Permissions to a User (Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) for how to attach an IAM policy to a user. + +See our three example JSON policies: + +- [Example IAM Policy]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/#example-iam-policy) +- [Example IAM Policy with PassRole]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/#example-iam-policy-with-passrole) (needed if you want to use [Kubernetes Cloud Provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers) or want to pass an IAM Profile to an instance) +- [Example IAM Policy to allow encrypted EBS volumes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/#example-iam-policy-to-allow-encrypted-ebs-volumes) + +### Authenticate & Configure Nodes + +Choose an availability zone and network settings for your cluster. + +### Security Group + +Choose the default security group or configure a security group. + +Please refer to [Amazon EC2 security group when using Node Driver]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#security-group-for-nodes-on-aws-ec2) to see what rules are created in the `rancher-nodes` Security Group. + +### Instance Options + +Configure the instances that will be created. Make sure you configure the correct **SSH User** for the configured AMI. + +If you need to pass an **IAM Instance Profile Name** (not ARN), for example, when you want to use a [Kubernetes Cloud Provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers), you will need an additional permission in your policy. See [Example IAM policy with PassRole](#example-iam-policy-with-passrole) for an example policy. + +### Engine Options + +In the **Engine Options** section of the node template, you can configure the Docker daemon.
You may want to specify the Docker version or a Docker registry mirror. + +{{% /tab %}} +{{% tab "Rancher prior to v2.2.0" %}} + +### Account Access + +**Account Access** is where you configure the region of the nodes, and the credentials (Access Key and Secret Key) used to create the machine. + +See [Amazon Documentation: Creating Access Keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) for how to create an Access Key and Secret Key. + +See [Amazon Documentation: Creating IAM Policies (Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html#access_policies_create-start) for how to create an IAM policy. + +See [Amazon Documentation: Adding Permissions to a User (Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) for how to attach an IAM policy to a user. + +See our three example JSON policies: + +- [Example IAM Policy]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/#example-iam-policy) +- [Example IAM Policy with PassRole]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/#example-iam-policy-with-passrole) (needed if you want to use [Kubernetes Cloud Provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers) or want to pass an IAM Profile to an instance) +- [Example IAM Policy to allow encrypted EBS volumes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/#example-iam-policy-to-allow-encrypted-ebs-volumes) + +### Zone and Network + +**Zone and Network** configures the availability zone and network settings for your cluster. + +### Security Groups + +**Security Groups** creates or configures the Security Groups applied to your nodes.
Please refer to [Amazon EC2 security group when using Node Driver]({{}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#security-group-for-nodes-on-aws-ec2) to see what rules are created in the `rancher-nodes` Security Group. + +### Instance + +**Instance** configures the instances that will be created. + +### SSH User + +Make sure you configure the correct **SSH User** for the configured AMI. + +### IAM Instance Profile Name + +If you need to pass an **IAM Instance Profile Name** (not ARN), for example, when you want to use a [Kubernetes Cloud Provider]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers), you will need an additional permission in your policy. See [Example IAM policy with PassRole](#example-iam-policy-with-passrole) for an example policy. + +### Docker Daemon + +The [Docker daemon](https://docs.docker.com/engine/docker-overview/#the-docker-daemon) configuration options include: + +- **Labels:** For information on labels, refer to the [Docker object label documentation.](https://docs.docker.com/config/labels-custom-metadata/) +- **Docker Engine Install URL:** Determines what Docker version will be installed on the instance. +- **Registry mirrors:** Docker Registry mirror to be used by the Docker daemon +- **Other advanced options:** Refer to the [Docker daemon option reference](https://docs.docker.com/engine/reference/commandline/dockerd/) +{{% /tab %}} +{{% /tabs %}} diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/_index.md index e512460253d..388fe585aac 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/_index.md @@ -14,7 +14,12 @@ Rancher can provision nodes in vSphere and install Kubernetes on them. 
When crea A vSphere cluster may consist of multiple groups of VMs with distinct properties, such as the amount of memory or the number of vCPUs. This grouping allows for fine-grained control over the sizing of nodes for each Kubernetes role. -# vSphere Enhancements +- [vSphere Enhancements in Rancher v2.3](#vsphere-enhancements-in-rancher-v2-3) +- [Creating a vSphere Cluster](#creating-a-vsphere-cluster) +- [Provisioning Storage](#provisioning-storage) +- [Enabling the vSphere Cloud Provider](#enabling-the-vsphere-cloud-provider) + +# vSphere Enhancements in Rancher v2.3 The vSphere node templates have been updated, allowing you to bring cloud operations on-premises with the following enhancements: @@ -40,8 +45,22 @@ In Rancher v2.3.3+, you can provision VMs with any operating system that support In Rancher prior to v2.3.3, the vSphere node driver included in Rancher only supported the provisioning of VMs with [RancherOS]({{}}/os/v1.x/en/) as the guest operating system. -# Video Walkthrough of v2.3.3 Node Template Features +### Video Walkthrough of v2.3.3 Node Template Features In this YouTube video, we demonstrate how to set up a node template with the new features designed to help you bring cloud operations to on-premises clusters. {{< youtube id="dPIwg6x1AlU">}} + +# Creating a vSphere Cluster + +In [this section,](./provisioning-vsphere-clusters) you'll learn how to use Rancher to install an [RKE]({{}}/rke/latest/en/) Kubernetes cluster in vSphere. 
+ +# Provisioning Storage + +For an example of how to provision storage in vSphere using Rancher, refer to [this section.]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere) In order to dynamically provision storage in vSphere, the vSphere provider must be [enabled.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere) + +# Enabling the vSphere Cloud Provider + +When a cloud provider is set up in Rancher, the Rancher server can automatically provision new infrastructure for the cluster, including new nodes or persistent storage devices. + +For details, refer to the section on [enabling the vSphere cloud provider.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere) \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/creating-credentials/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/creating-credentials/_index.md similarity index 92% rename from content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/creating-credentials/_index.md rename to content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/creating-credentials/_index.md index 9c5bc71a0e2..90b0aef3317 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/creating-credentials/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/creating-credentials/_index.md @@ -1,6 +1,8 @@ --- title: Creating Credentials in the vSphere Console -weight: 1 +weight: 3 +aliases: + - /rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/creating-credentials --- This section describes how to create a vSphere username and password. 
You will need to provide these vSphere credentials to Rancher, which allows Rancher to provision resources in vSphere. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md index f59e5e35b86..c56461fc35f 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md @@ -3,18 +3,25 @@ title: Provisioning Kubernetes Clusters in vSphere weight: 1 --- -This section explains how to configure Rancher with vSphere credentials, provision nodes in vSphere, and set up Kubernetes clusters on those nodes. +In this section, you'll learn how to use Rancher to install an [RKE]({{}}/rke/latest/en/) Kubernetes cluster in vSphere. -# Prerequisites +First, you will set up your vSphere cloud credentials in Rancher. Then you will use your cloud credentials to create a node template, which Rancher will use to provision nodes in vSphere. + +Then you will create a vSphere cluster in Rancher, and when configuring the new cluster, you will define node pools for it. Each node pool will have a Kubernetes role of etcd, controlplane, or worker. Rancher will install RKE Kubernetes on the new nodes, and it will set up each node with the Kubernetes role defined by the node pool. 
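The node pool layout described above can be sketched as follows. This is purely illustrative, not a literal Rancher configuration file; the hostname prefixes and node counts are hypothetical:

```yaml
# Hypothetical node pool layout for a highly available cluster:
# each pool ties one node template to one Kubernetes role.
nodePools:
  - hostnamePrefix: vsphere-etcd-     # 3 nodes running etcd
    quantity: 3
    role: etcd
  - hostnamePrefix: vsphere-cp-       # 2 control plane nodes
    quantity: 2
    role: controlplane
  - hostnamePrefix: vsphere-worker-   # 3 worker nodes
    quantity: 3
    role: worker
```

Separating the etcd and controlplane roles from the workers, as sketched here, keeps cluster-management components isolated from application workloads.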
+ +For details on configuring the vSphere node template, refer to the [vSphere node template configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference/) + +For details on configuring RKE Kubernetes clusters in Rancher, refer to the [cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) + +- [Preparation in vSphere](#preparation-in-vsphere) +- [Creating a vSphere Cluster](#creating-a-vsphere-cluster) + +# Preparation in vSphere This section describes the requirements for setting up vSphere so that Rancher can provision VMs and clusters. The node templates are documented and tested with the vSphere Web Services API version 6.5. -- [Create credentials in vSphere](#create-credentials-in-vsphere) -- [Network permissions](#network-permissions) -- [Valid ESXi License for vSphere API Access](#valid-esxi-license-for-vsphere-api-access) - ### Create Credentials in vSphere Before proceeding to create a cluster, you must ensure that you have a vSphere user with sufficient permissions. When you set up a node template, the template will need to use these vSphere credentials. @@ -35,280 +42,98 @@ See [Node Networking Requirements]({{}}/rancher/v2.x/en/cluster-provisi The free ESXi license does not support API access. The vSphere servers must have a valid or evaluation ESXi license. -# Creating Clusters in vSphere with Rancher +### VM-VM Affinity Rules for Clusters with DRS -This section describes how to set up vSphere credentials, node templates, and vSphere clusters using the Rancher UI. +If you have a cluster with DRS enabled, setting up [VM-VM Affinity Rules](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.resmgmt.doc/GUID-7297C302-378F-4AF2-9BD6-6EDB1E0A850A.html) is recommended. 
These rules allow VMs assigned the etcd and control-plane roles to operate on separate ESXi hosts when they are assigned to different node pools. This practice ensures that the failure of a single physical machine does not affect the availability of those planes. -You will need to do the following: +# Creating a vSphere Cluster -1. [Create a node template using vSphere credentials](#1-create-a-node-template-using-vsphere-credentials) -2. [Create a Kubernetes cluster using the node template](#2-create-a-kubernetes-cluster-using-the-node-template) -3. [Optional: Provision storage](#3-optional-provision-storage) - - [Enable the vSphere cloud provider for the cluster](#enable-the-vsphere-cloud-provider-for-the-cluster) - -### Configuration References - -For details on configuring the node template, refer to the [node template configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference/) - -Rancher uses the RKE library to provision Kubernetes clusters. For details on configuring clusters in vSphere, refer to the [cluster configuration reference in the RKE documentation.]({{}}/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/) - -Note that the vSphere cloud provider must be [enabled](#enable-the-vsphere-cloud-provider-for-the-cluster) to allow dynamic provisioning of volumes. - -# 1. Create a Node Template Using vSphere Credentials - -To create a cluster, you need to create at least one vSphere [node template]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) that specifies how VMs are created in vSphere. - -After you create a node template, it is saved, and you can re-use it whenever you create additional vSphere clusters. - -To create a node template, - -1. Log in with an administrator account to the Rancher UI. - -1. From the user settings menu, select **Node Templates.** - -1. 
Click **Add Template** and then click on the **vSphere** icon. - -Then, configure your template: - -- [A. Configure the vSphere credential](#a-configure-the-vsphere-credential) -- [B. Configure node scheduling](#b-configure-node-scheduling) -- [C. Configure instances and operating systems](#c-configure-instances-and-operating-systems) -- [D. Add networks](#d-add-networks) -- [E. If not already enabled, enable disk UUIDs](#e-if-not-already-enabled-enable-disk-uuids) -- [F. Optional: Configure node tags and custom attributes](#f-optional-configure-node-tags-and-custom-attributes) -- [G. Optional: Configure cloud-init](#g-optional-configure-cloud-init) -- [H. Saving the node template](#h-saving-the-node-template) - -### A. Configure the vSphere Credential - -The steps for configuring your vSphere credentials for the cluster are different depending on your version of Rancher. +The way a vSphere cluster is created in Rancher depends on the Rancher version. {{% tabs %}} {{% tab "Rancher v2.2.0+" %}} +1. [Create your cloud credentials](#1-create-your-cloud-credentials) +2. [Create a node template with your cloud credentials](#2-create-a-node-template-with-your-cloud-credentials) +3. [Create a cluster with node pools using the node template](#3-create-a-cluster-with-node-pools-using-the-node-template) -Your account access information is in a [cloud credential.]({{}}/rancher/v2.x/en/user-settings/cloud-credentials/) Cloud credentials are stored as Kubernetes secrets. +### 1. Create your cloud credentials -You can use an existing cloud credential or create a new one. To create a new cloud credential, - -1. Click **Add New.** -1. In the **Name** field, enter a name for your vSphere credentials. -1. In the **vCenter or ESXi Server** field, enter the vCenter or ESXi hostname/IP. ESXi is the virtualization platform where you create and run virtual machines and virtual appliances.
vCenter Server is the service through which you manage multiple hosts connected in a network and pool host resources. -1. Optional: In the **Port** field, configure the port of the vCenter or ESXi server. -1. In the **Username** and **Password** fields, enter your vSphere login username and password. +1. In the Rancher UI, click the user profile button in the upper right corner, and click **Cloud Credentials.** +1. Click **Add Cloud Credential.** +1. Enter a name for the cloud credential. +1. In the **Cloud Credential Type** field, select **vSphere**. +1. Enter your vSphere credentials. For help, refer to **Account Access** in the [configuration reference for your Rancher version.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/) 1. Click **Create.** -**Result:** The node template has the credentials required to provision nodes in vSphere. +**Result:** You have created the cloud credentials that will be used to provision nodes in your cluster. You can reuse these credentials for other node templates, or in other clusters. +### 2. Create a node template with your cloud credentials + +Creating a [node template]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) for vSphere will allow Rancher to provision new nodes in vSphere. Node templates can be reused for other clusters. + +1. In the Rancher UI, click the user profile button in the upper right corner, and click **Node Templates.** +1. Click **Add Template.** +1. Fill out a node template for vSphere. For help filling out the form, refer to the vSphere node template configuration reference. 
Refer to the newest version of the configuration reference that is less than or equal to your Rancher version: + - [v2.3.3]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.3.3) + - [v2.3.0]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.3.0) + - [v2.2.0]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.2.0) + +### 3. Create a cluster with node pools using the node template + +Use Rancher to create a Kubernetes cluster in vSphere. + +1. Navigate to **Clusters** in the **Global** view. +1. Click **Add Cluster** and select the **vSphere** infrastructure provider. +1. Enter a **Cluster Name.** +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. +1. Use **Cluster Options** to choose the version of Kubernetes that will be installed, what network provider will be used and if you want to enable project network isolation. To see more cluster options, click on **Show advanced options.** For help configuring the cluster, refer to the [RKE cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) +1. If you want to dynamically provision persistent storage or other infrastructure later, you will need to enable the vSphere cloud provider by modifying the cluster YAML file. For details, refer to [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere) +1. Add one or more node pools to your cluster. Each node pool uses a node template to provision new nodes. For more information about node pools, including best practices for assigning Kubernetes roles to the nodes, see [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-pools) +1. 
Review your options to confirm they're correct. Then click **Create**. + +**Result:** + +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. + +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces {{% /tab %}} {{% tab "Rancher prior to v2.2.0" %}} -In the **Account Access** section, enter the vCenter FQDN or IP address and the credentials for the vSphere user account. -{{% /tab %}} -{{% /tabs %}} -### B. Configure Node Scheduling +Use Rancher to create a Kubernetes cluster in vSphere. -Choose what hypervisor the virtual machine will be scheduled to. The configuration options depend on your version of Rancher. +For Rancher versions prior to v2.0.4, when you create the cluster, you will also need to follow the steps in [this section]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/#disk-uuids) to enable disk UUIDs. -{{% tabs %}} -{{% tab "Rancher v2.3.3+" %}} +1. From the **Clusters** page, click **Add Cluster**. +1. Choose **vSphere**. +1. Enter a **Cluster Name**. +1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. +1. Use **Cluster Options** to choose the version of Kubernetes that will be installed, what network provider will be used and if you want to enable project network isolation. To see more cluster options, click on **Show advanced options.** For help configuring the cluster, refer to the [RKE cluster configuration reference.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options) +1.
If you want to dynamically provision persistent storage or other infrastructure later, you will need to enable the vSphere cloud provider by modifying the cluster YAML file. For details, refer to [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere) +1. Add one or more [node pools]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-pools) to your cluster. Each node pool uses a node template to provision new nodes. To create a node template, click **Add Node Template** and complete the **vSphere Options** form. For help filling out the form, refer to the vSphere node template configuration reference. Refer to the newest version of the configuration reference that is less than or equal to your Rancher version: + - [v2.0.4]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.0.4) + - [prior to v2.0.4]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4) +1. Review your options to confirm they're correct. Then click **Create** to start provisioning the VMs and Kubernetes services. -The fields in the **Scheduling** section should auto-populate with the data center and other scheduling options that are available to you in vSphere. +**Result:** -1. In the **Data Center** field, choose the data center where the VM will be scheduled. -1. Optional: Select a **Resource Pool.** Resource pools can be used to partition available CPU and memory resources of a standalone host or cluster, and they can also be nested. -1. If you have a data store cluster, you can toggle the **Data Store** field. This lets you select a data store cluster where your VM will be scheduled to. If the field is not toggled, you can select an individual disk. -1. Optional: Select a folder where the VM will be placed. The VM folders in this dropdown menu directly correspond to your VM folders in vSphere. 
Note: The folder name should be prefaced with `vm/` in your vSphere config file. -1. Optional: Choose a specific host to create the VM on. Leave this field blank for a standalone ESXi or for a cluster with DRS (Distributed Resource Scheduler). If specified, the host system's pool will be used and the **Resource Pool** parameter will be ignored. -{{% /tab %}} -{{% tab "Rancher prior to v2.3.3" %}} +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. -In the **Scheduling** section, enter: +You can access your cluster after its state is updated to **Active.** -- The name/path of the **Data Center** to create the VMs in -- The name of the **VM Network** to attach to -- The name/path of the **Datastore** to store the disks in +**Active** clusters are assigned two Projects: - {{< img "/img/rancher/vsphere-node-template-2.png" "image" >}} +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces {{% /tab %}} {{% /tabs %}} -### C. Configure Instances and Operating Systems -Depending on the Rancher version there are different options available to configure instances. - -{{% tabs %}} -{{% tab "Rancher v2.3.3+" %}} - -In the **Instance Options** section, configure the number of vCPUs, memory, and disk size for the VMs created by this template. - -In the **Creation method** field, configure the method used to provision VMs in vSphere. Available options include creating VMs that boot from a RancherOS ISO or creating VMs by cloning from an existing virtual machine or [VM template](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-F7BF0E6B-7C4F-4E46-8BBF-76229AEA7220.html). 
- -The existing VM or template may use any modern Linux operating system that is configured with support for [cloud-init](https://cloudinit.readthedocs.io/en/latest/) using the [NoCloud datasource](https://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html). - -Choose the way that the VM will be created: - -- **Deploy from template: Data Center:** Choose a VM template that exists in the data center that you selected. -- **Deploy from template: Content Library:** First, select the [Content Library](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-254B2CE8-20A8-43F0-90E8-3F6776C2C896.html) that contains your template, then select the template from the populated list `Library templates`. -- **Clone an existing virtual machine:** In the **Virtual machine** field, choose an existing VM that the new VM will be cloned from. -- **Install from boot2docker ISO:** Ensure that the `OS ISO URL` field contains the URL of a VMware ISO release for RancherOS (rancheros-vmware.iso). Note that this URL must be accessible from the nodes running your Rancher server installation. - -{{% /tab %}} -{{% tab "Rancher prior to v2.3.3" %}} - -In the **Instance Options** section, configure the number of vCPUs, memory, and disk size for the VMs created by this template. - -Only VMs booting from RancherOS ISO are supported. - -Ensure that the [OS ISO URL](#instance-options) contains the URL of the VMware ISO release for RancherOS: `rancheros-vmware.iso`. - - ![image]({{}}/img/rancher/vsphere-node-template-1.png) - -{{% /tab %}} -{{% /tabs %}} - -### D. Add Networks - -_Available as of v2.3.3_ - -The node template now allows a VM to be provisioned with multiple networks. In the **Networks** field, you can now click **Add Network** to add any networks available to you in vSphere. - -### E. If Not Already Enabled, Enable Disk UUIDs - -In order to provision nodes with RKE, all nodes must be configured with disk UUIDs. 
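Concretely, enabling disk UUIDs amounts to setting one vSphere advanced configuration parameter on each VM. A minimal sketch of the key and value, as it would appear among a node template's configuration parameters:

```yaml
# vSphere advanced setting that exposes stable disk UUIDs to the guest OS.
# Without it, Kubernetes cannot reliably identify the disks attached to a node.
disk.enableUUID: "TRUE"
```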
- -As of Rancher v2.0.4, disk UUIDs are enabled in vSphere node templates by default. - -If you are using Rancher prior to v2.0.4, refer to these [instructions]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/#enabling-disk-uuids-with-a-node-template) for details on how to enable a UUID with a Rancher node template. - -### F. Optional: Configure Node Tags and Custom Attributes - -The way to attach metadata to the VM is different depending on your Rancher version. - -{{% tabs %}} -{{% tab "Rancher v2.3.3+" %}} - -**Optional:** Add vSphere tags and custom attributes. Tags allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects. - -For tags, all your vSphere tags will show up as options to select from in your node template. - -In the custom attributes, Rancher will let you select all the custom attributes you have already set up in vSphere. The custom attributes are keys and you can enter values for each one. - - > **Note:** Custom attributes are a legacy feature that will eventually be removed from vSphere. These attributes allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects. - -{{% /tab %}} -{{% tab "Rancher prior to v2.3.3" %}} - -**Optional:** - - - Provide a set of configuration parameters (instance-options) for the VMs. - - Assign labels to the VMs that can be used as a base for scheduling rules in the cluster. - - Customize the configuration of the Docker daemon on the VMs that will be created. - -> **Note:** Custom attributes are a legacy feature that will eventually be removed from vSphere. These attributes allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects. - -{{% /tab %}} -{{% /tabs %}} - -### G. 
Optional: Configure cloud-init - -[Cloud-init](https://cloudinit.readthedocs.io/en/latest/) allows you to initialize your nodes by applying configuration on the first boot. This may involve things such as creating users, authorizing SSH keys or setting up the network. - -The scope of cloud-init support for the VMs differs depending on the Rancher version. - -{{% tabs %}} -{{% tab "Rancher v2.3.3+" %}} - -To make use of cloud-init initialization, create a cloud config file using valid YAML syntax and paste the file content in the **Cloud Init** field. Refer to the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/topics/examples.html) for a commented set of examples of supported cloud config directives. - -*Note that cloud-init is not supported when using the ISO creation method.* - -{{% /tab %}} -{{% tab "Rancher prior to v2.3.3" %}} - -You may specify the URL of a RancherOS cloud-config.yaml file in the **Cloud Init** field. Refer to the [RancherOS Documentation](https://rancher.com/docs/os/v1.x/en/configuration/#cloud-config) for details on the supported configuration directives. Note that the URL must be network accessible from the VMs created by the template. - -{{% /tab %}} -{{% /tabs %}} - -### H. Saving the Node Template - -Assign a descriptive **Name** for this template and click **Create.** - -### Node Template Configuration Reference - -Refer to [this section]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference/) for a reference on the configuration options available for vSphere node templates. - -# 2. Create a Kubernetes Cluster Using the Node Template - -After you've created a template, you can use it to stand up the vSphere cluster itself. - -To install Kubernetes on vSphere nodes, you will need to enable the vSphere cloud provider by modifying the cluster YAML file.
This requirement applies to both pre-created [custom nodes]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/) and for nodes created in Rancher using the vSphere node driver. - -To create the cluster and enable the vSphere provider for cluster, follow these steps: - -- [A. Set up the cluster name and member roles](#a-set-up-the-cluster-name-and-member-roles) -- [B. Configure Kubernetes options](#b-configure-kubernetes-options) -- [C. Add node pools to the cluster](#c-add-node-pools-to-the-cluster) -- [D. Optional: Add a self-healing node pool](#d-optional-add-a-self-healing-node-pool) -- [E. Create the cluster](#e-create-the-cluster) - -### A. Set up the Cluster Name and Member Roles - -1. Log in to the Rancher UI as an administrator. -2. Navigate to **Clusters** in the **Global** view. -3. Click **Add Cluster** and select the **vSphere** infrastructure provider. -4. Assign a **Cluster Name.** -5. Assign **Member Roles** as required. {{< step_create-cluster_member-roles >}} - -> **Note:** -> -> If you have a cluster with DRS enabled, setting up [VM-VM Affinity Rules](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.resmgmt.doc/GUID-7297C302-378F-4AF2-9BD6-6EDB1E0A850A.html) is recommended. These rules allow VMs assigned the etcd and control-plane roles to operate on separate ESXi hosts when they are assigned to different node pools. This practice ensures that the failure of a single physical machine does not affect the availability of those planes. - - -### B. Configure Kubernetes Options -{{}} - -### C. Add Node Pools to the Cluster -{{}} - -### D. Optional: Add a Self-Healing Node Pool - -To make a node pool self-healing, enter a number greater than zero in the **Auto Replace** column. Rancher will use the node template for the given node pool to recreate the node if it becomes inactive for that number of minutes. - -> **Note:** Self-healing node pools are designed to help you replace worker nodes for stateless applications. 
It is not recommended to enable node auto-replace on a node pool of master nodes or nodes with persistent volumes attached, because VMs are treated ephemerally. When a node in a node pool loses connectivity with the cluster, its persistent volumes are destroyed, resulting in data loss for stateful applications. - -### E. Create the Cluster - -Click **Create** to start provisioning the VMs and Kubernetes services. - -{{< result_create-cluster >}} - -# 3. Optional: Provision Storage - -For an example of how to provision storage in vSphere using Rancher, refer to the - [cluster administration section.]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere) - - In order to provision storage in vSphere, the vSphere provider must be enabled. - -### Enable the vSphere Cloud Provider for the Cluster - -1. Set **Cloud Provider** option to `Custom`. - - {{< img "/img/rancher/vsphere-node-driver-cloudprovider.png" "vsphere-node-driver-cloudprovider">}} - -1. Click on **Edit as YAML** -1. Insert the following structure to the pre-populated cluster YAML. As of Rancher v2.3+, this structure must be placed under `rancher_kubernetes_engine_config`. In versions prior to v2.3, it has to be defined as a top-level field. Note that the `name` *must* be set to `vsphere`. - - ```yaml - rancher_kubernetes_engine_config: # Required as of Rancher v2.3+ - cloud_provider: - name: vsphere - vsphereCloudProvider: - [Insert provider configuration] - ``` - - Rancher uses RKE (the Rancher Kubernetes Engine) to provision Kubernetes clusters. Refer to the [vSphere configuration reference in the RKE documentation]({{}}/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/) for details about the properties of the `vsphereCloudProvider` directive. # Optional Next Steps @@ -316,4 +141,5 @@ For an example of how to provision storage in vSphere using Rancher, refer to th After creating your cluster, you can access it through the Rancher UI. 
As a best practice, we recommend setting up these alternate ways of accessing your cluster: - **Access your cluster with the kubectl CLI:** Follow [these steps]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#accessing-clusters-with-kubectl-on-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server’s authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI. -- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster. \ No newline at end of file +- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster. 
+- **Provision Storage:** For an example of how to provision storage in vSphere using Rancher, refer to [this section.]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere) In order to dynamically provision storage in vSphere, the vSphere provider must be [enabled.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere) \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/enabling-uuids/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/enabling-uuids/_index.md deleted file mode 100644 index 2388ad4e8ad..00000000000 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/enabling-uuids/_index.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -title: Enabling Disk UUIDs in Node Templates -weight: 3 ---- - -As of Rancher v2.0.4, disk UUIDs are enabled in vSphere node templates by default. - -For Rancher prior to v2.0.4, we recommend configuring a vSphere node template to automatically enable disk UUIDs because they are required for Rancher to manipulate vSphere resources. - -To enable disk UUIDs for all VMs created for a cluster, - -1. Navigate to the **Node Templates** in the Rancher UI while logged in as an administrator. - -2. Add or edit an existing vSphere node template. - -3. Under **Instance Options** click on **Add Parameter**. - -4. Enter `disk.enableUUID` as key with a value of **TRUE**. - - {{< img "/img/rke/vsphere-nodedriver-enable-uuid.png" "vsphere-nodedriver-enable-uuid" >}} - -5. Click **Create** or **Save**. - -**Result:** The disk UUID is enabled in the vSphere node template. 
diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference/_index.md deleted file mode 100644 index adf7cdbe8d4..00000000000 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference/_index.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: vSphere Node Template Configuration Reference -weight: 4 ---- - -The tables below describe the configuration options available in the vSphere node template: - -- [Account access](#account-access) -- [Instance options](#instance-options) -- [Scheduling options](#scheduling-options) - -# Account Access - -The account access parameters are different based on the Rancher version. - -{{% tabs %}} -{{% tab "Rancher v2.2.0+" %}} - -| Parameter | Required | Description | -|:----------------------|:--------:|:-----| -| Cloud Credentials | * | Your vSphere account access information, stored in a [cloud credential.]({{}}/rancher/v2.x/en/user-settings/cloud-credentials/) | - -{{% /tab %}} -{{% tab "Rancher prior to v2.2.0" %}} - -| Parameter | Required | Description | -|:------------------------|:--------:|:------------------------------------------------------------| -| vCenter or ESXi Server | * | IP or FQDN of the vCenter or ESXi server used for managing VMs. | -| Port | * | Port to use when connecting to the server. Defaults to `443`. | -| Username | * | vCenter/ESXi user to authenticate with the server. | -| Password | * | User's password. | - -{{% /tab %}} -{{% /tabs %}} - -# Instance Options - -The options for creating and configuring an instance are different depending on your Rancher version. 
- -{{% tabs %}} -{{% tab "Rancher v2.3.3+" %}} - -| Parameter | Required | Description | -|:----------------|:--------:|:-----------| -| CPUs | * | Number of vCPUS to assign to VMs. | -| Memory | * | Amount of memory to assign to VMs. | -| Disk | * | Size of the disk (in MB) to attach to the VMs. | -| Creation method | * | The method for setting up an operating system on the node. The operating system can be installed from an ISO or from a VM template. Depending on the creation method, you will also have to specify a VM template, content library, existing VM, or ISO. For more information on creation methods, refer to the section on [configuring instances.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/#c-configure-instances-and-operating-systems) | -| Cloud Init | | URL of a `cloud-config.yml` file or URL to provision VMs with. This file allows further customization of the operating system, such as network configuration, DNS servers, or system daemons. The operating system must support `cloud-init`. | -| Networks | | Name(s) of the network to attach the VM to. | -| Configuration Parameters used for guestinfo | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). | - -{{% /tab %}} -{{% tab "Rancher prior to v2.3.3" %}} - -| Parameter | Required | Description | -|:------------------------|:--------:|:------------------------------------------------------------| -| CPUs | * | Number of vCPUS to assign to VMs. | -| Memory | * | Amount of memory to assign to VMs. | -| Disk | * | Size of the disk (in MB) to attach to the VMs. 
| -| Cloud Init | | URL of a [RancherOS cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/) file to provision VMs with. This file allows further customization of the RancherOS operating system, such as network configuration, DNS servers, or system daemons.| -| OS ISO URL | * | URL of a RancherOS vSphere ISO file to boot the VMs from. You can find URLs for specific versions in the [Rancher OS GitHub Repo](https://github.com/rancher/os). | -| Configuration Parameters | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). | - -{{% /tab %}} -{{% /tabs %}} - -# Scheduling Options -The options for scheduling VMs to a hypervisor are different depending on your Rancher version. -{{% tabs %}} -{{% tab "Rancher v2.3.3+" %}} - -| Parameter | Required | Description | -|:------------------------|:--------:|:-------| -| Data Center | * | Name/path of the datacenter to create VMs in. | -| Resource Pool | | Name of the resource pool to schedule the VMs in. Leave blank for standalone ESXi. If not specified, the default resource pool is used. | -| Data Store | * | If you have a data store cluster, you can toggle the **Data Store** field. This lets you select a data store cluster where your VM will be scheduled to. If the field is not toggled, you can select an individual disk. | -| Folder | | Name of a folder in the datacenter to create the VMs in. Must already exist. The folder name should be prefaced with `vm/` in your vSphere config file. | -| Host | | The IP of the host system to schedule VMs in. If specified, the host system's pool will be used and the *Pool* parameter will be ignored. 
| - -{{% /tab %}} -{{% tab "Rancher prior to v2.3.3" %}} - -| Parameter | Required | Description | -|:------------------------|:--------:|:------------------------------------------------------------| -| Data Center | * | Name/path of the datacenter to create VMs in. | -| Pool | | Name/path of the resource pool to schedule the VMs in. If not specified, the default resource pool is used. | -| Host | | Name/path of the host system to schedule VMs in. If specified, the host system's pool will be used and the *Pool* parameter will be ignored. | -| Network | * | Name of the VM network to attach VMs to. | -| Data Store | * | Datastore to store the VM disks. | -| Folder | | Name of a folder in the datacenter to create the VMs in. Must already exist. The folder name should be prefaced with `vm/` in your vSphere config file. | -{{% /tab %}} -{{% /tabs %}}\ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md new file mode 100644 index 00000000000..71f5b3d573d --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md @@ -0,0 +1,16 @@ +--- +title: vSphere Node Template Configuration +weight: 2 +aliases: + - /rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference + - /rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/enabling-uuids +--- + +The vSphere node templates in Rancher were updated in the following Rancher versions.
Refer to the newest configuration reference that is less than or equal to your Rancher version: + +- [v2.3.3](./v2.3.3) +- [v2.3.0](./v2.3.0) +- [v2.2.0](./v2.2.0) +- [v2.0.4](./v2.0.4) + +For Rancher versions prior to v2.0.4, refer to [this version.](./prior-to-2.0.4) \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/_index.md new file mode 100644 index 00000000000..b1683f18d2e --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/_index.md @@ -0,0 +1,88 @@ +--- +title: vSphere Node Template Configuration in Rancher prior to v2.0.4 +shortTitle: Prior to v2.0.4 +weight: 5 +--- + +- [Account access](#account-access) +- [Scheduling](#scheduling) +- [Instance options](#instance-options) +- [Disk UUIDs](#disk-uuids) +- [Node Tags and Custom Attributes](#node-tags-and-custom-attributes) +- [Cloud Init](#cloud-init) + +# Account Access +In the **Account Access** section, enter the vCenter FQDN or IP address and the credentials for the vSphere user account. + +| Parameter | Required | Description | +|:------------------------|:--------:|:------------------------------------------------------------| +| vCenter or ESXi Server | * | IP or FQDN of the vCenter or ESXi server used for managing VMs. Enter the vCenter or ESXi hostname/IP. ESXi is the virtualization platform where you create and run virtual machines and virtual appliances. vCenter Server is the service through which you manage multiple hosts connected in a network and pool host resources. | +| Port | * | Port to use when connecting to the server. Defaults to `443`. | +| Username | * | vCenter/ESXi user to authenticate with the server. | +| Password | * | User's password. 
| + + +# Scheduling + +Choose which hypervisor the virtual machine will be scheduled to. + +| Parameter | Required | Description | +|:------------------------|:--------:|:------------------------------------------------------------| +| Data Center | * | Name/path of the datacenter to create VMs in. | +| Pool | | Name/path of the resource pool to schedule the VMs in. If not specified, the default resource pool is used. | +| Host | | Name/path of the host system to schedule VMs in. If specified, the host system's pool will be used and the *Pool* parameter will be ignored. | +| Network | * | Name of the VM network to attach VMs to. | +| Data Store | * | Datastore to store the VM disks. | +| Folder | | Name of a folder in the datacenter to create the VMs in. Must already exist. The folder name should be prefaced with `vm/` in your vSphere config file. | + +# Instance Options +In the **Instance Options** section, configure the number of vCPUs, memory, and disk size for the VMs created by this template. + +Only VMs booting from RancherOS ISO are supported. + +Ensure that the OS ISO URL contains the URL of the VMware ISO release for RancherOS: `rancheros-vmware.iso`. + + +| Parameter | Required | Description | +|:------------------------|:--------:|:------------------------------------------------------------| +| CPUs | * | Number of vCPUs to assign to VMs. | +| Memory | * | Amount of memory to assign to VMs. | +| Disk | * | Size of the disk (in MB) to attach to the VMs. | +| Cloud Init | | URL of a [RancherOS cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/) file to provision VMs with. This file allows further customization of the RancherOS operating system, such as network configuration, DNS servers, or system daemons.| +| OS ISO URL | * | URL of a RancherOS vSphere ISO file to boot the VMs from. You can find URLs for specific versions in the [Rancher OS GitHub Repo](https://github.com/rancher/os).
| +| Configuration Parameters | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). | + +# Disk UUIDs + +In order to provision nodes with RKE, all nodes must be configured with disk UUIDs. Follow these instructions to enable UUIDs for the nodes in your vSphere cluster. + +To enable disk UUIDs for all VMs created for a cluster, + +1. Navigate to the **Node Templates** in the Rancher UI while logged in as an administrator. +2. Add or edit an existing vSphere node template. +3. Under **Instance Options** click on **Add Parameter**. +4. Enter `disk.enableUUID` as key with a value of **TRUE**. + + {{< img "/img/rke/vsphere-nodedriver-enable-uuid.png" "vsphere-nodedriver-enable-uuid" >}} + +5. Click **Create** or **Save**. + +**Result:** The disk UUID is enabled in the vSphere node template. + +# Node Tags and Custom Attributes + +These attributes allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects. + +Optionally, you can: + +- Provide a set of configuration parameters (instance-options) for the VMs. +- Assign labels to the VMs that can be used as a base for scheduling rules in the cluster. +- Customize the configuration of the Docker daemon on the VMs that will be created. + +> **Note:** Custom attributes are a legacy feature that will eventually be removed from vSphere. + +# Cloud Init + +[Cloud-init](https://cloudinit.readthedocs.io/en/latest/) allows you to initialize your nodes by applying configuration on the first boot. This may involve things such as creating users, authorizing SSH keys or setting up the network. 
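A minimal RancherOS cloud-config of the kind this section describes might look like the following sketch; the hostname and SSH key are placeholder values, not part of the original page:

```yaml
#cloud-config
# Set the node's hostname on first boot
hostname: rancher-node-01
# Authorize an SSH key for the default rancher user
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza... user@example.com
```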
You may specify the URL of a RancherOS cloud-config.yaml file in the **Cloud Init** field. Refer to the [RancherOS Documentation](https://rancher.com/docs/os/v1.x/en/configuration/#cloud-config) for details on the supported configuration directives. Note that the URL must be network accessible from the VMs created by the template. \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.0.4/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.0.4/_index.md new file mode 100644 index 00000000000..f53ea208781 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.0.4/_index.md @@ -0,0 +1,67 @@ +--- +title: vSphere Node Template Configuration in Rancher v2.0.4 +shortTitle: v2.0.4 +weight: 4 +--- +- [Account access](#account-access) +- [Scheduling](#scheduling) +- [Instance options](#instance-options) +- [Node Tags and Custom Attributes](#node-tags-and-custom-attributes) +- [Cloud Init](#cloud-init) + +# Account Access +In the **Account Access** section, enter the vCenter FQDN or IP address and the credentials for the vSphere user account. + +| Parameter | Required | Description | +|:------------------------|:--------:|:------------------------------------------------------------| +| vCenter or ESXi Server | * | IP or FQDN of the vCenter or ESXi server used for managing VMs. ESXi is the virtualization platform where you create and run virtual machines and virtual appliances. vCenter Server is the service through which you manage multiple hosts connected in a network and pool host resources. | +| Port | * | Port to use when connecting to the server. Defaults to `443`. | +| Username | * | vCenter/ESXi user to authenticate with the server. | +| Password | * | User's password.
| + +# Scheduling + +Choose which hypervisor the virtual machine will be scheduled to. + +| Parameter | Required | Description | +|:------------------------|:--------:|:------------------------------------------------------------| +| Data Center | * | Name/path of the datacenter to create VMs in. | +| Pool | | Name/path of the resource pool to schedule the VMs in. If not specified, the default resource pool is used. | +| Host | | Name/path of the host system to schedule VMs in. If specified, the host system's pool will be used and the *Pool* parameter will be ignored. | +| Network | * | Name of the VM network to attach VMs to. | +| Data Store | * | Datastore to store the VM disks. | +| Folder | | Name of a folder in the datacenter to create the VMs in. Must already exist. The folder name should be prefaced with `vm/` in your vSphere config file. | + +# Instance Options +In the **Instance Options** section, configure the number of vCPUs, memory, and disk size for the VMs created by this template. + +Only VMs booting from RancherOS ISO are supported. + +Ensure that the OS ISO URL contains the URL of the VMware ISO release for RancherOS: `rancheros-vmware.iso`. + +| Parameter | Required | Description | +|:------------------------|:--------:|:------------------------------------------------------------| +| CPUs | * | Number of vCPUs to assign to VMs. | +| Memory | * | Amount of memory to assign to VMs. | +| Disk | * | Size of the disk (in MB) to attach to the VMs. | +| Cloud Init | | URL of a [RancherOS cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/) file to provision VMs with. This file allows further customization of the RancherOS operating system, such as network configuration, DNS servers, or system daemons.| +| OS ISO URL | * | URL of a RancherOS vSphere ISO file to boot the VMs from. You can find URLs for specific versions in the [Rancher OS GitHub Repo](https://github.com/rancher/os).
| +| Configuration Parameters | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). | + +# Node Tags and Custom Attributes + +These attributes allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects. + +Optionally, you can: + +- Provide a set of configuration parameters (instance-options) for the VMs. +- Assign labels to the VMs that can be used as a base for scheduling rules in the cluster. +- Customize the configuration of the Docker daemon on the VMs that will be created. + +> **Note:** Custom attributes are a legacy feature that will eventually be removed from vSphere. + +# Cloud Init + +[Cloud-init](https://cloudinit.readthedocs.io/en/latest/) allows you to initialize your nodes by applying configuration on the first boot. This may involve things such as creating users, authorizing SSH keys or setting up the network. + +You may specify the URL of a RancherOS cloud-config.yaml file in the **Cloud Init** field. Refer to the [RancherOS Documentation](https://rancher.com/docs/os/v1.x/en/configuration/#cloud-config) for details on the supported configuration directives. Note that the URL must be network accessible from the VMs created by the template.
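To illustrate the **Configuration Parameters** field described above, the following key/value pairs are one plausible set of entries: `disk.enableUUID` is a standard vSphere advanced setting, while the `guestinfo.cloud-init` keys are assumptions based on the RancherOS guestinfo mechanism linked above, with a placeholder for the encoded payload:

```
disk.enableUUID=TRUE
guestinfo.cloud-init.config.data=<base64-encoded cloud-config>
guestinfo.cloud-init.data.encoding=base64
```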
\ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.2.0/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.2.0/_index.md new file mode 100644 index 00000000000..60410e3a669 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.2.0/_index.md @@ -0,0 +1,70 @@ +--- +title: vSphere Node Template Configuration in Rancher v2.2.0 +shortTitle: v2.2.0 +weight: 3 +--- +- [Account Access](#account-access) +- [Scheduling](#scheduling) +- [Instance Options](#instance-options) +- [Node tags and custom attributes](#node-tags-and-custom-attributes) +- [Cloud Init](#cloud-init) + +# Account Access + +| Parameter | Required | Description | +|:----------------------|:--------:|:-----| +| Cloud Credentials | * | Your vSphere account access information, stored in a [cloud credential.]({{}}/rancher/v2.x/en/user-settings/cloud-credentials/) | + +Your cloud credential has these fields: + +| Credential Field | Description | +|-----------|----------| +| vCenter or ESXi Server | Enter the vCenter or ESXi hostname/IP. ESXi is the virtualization platform where you create and run virtual machines and virtual appliances. vCenter Server is the service through which you manage multiple hosts connected in a network and pool host resources. | +| Port | Optional: configure the port of the vCenter or ESXi server. | +| Username and password | Enter your vSphere login username and password. | + +# Scheduling +Choose which hypervisor the virtual machine will be scheduled to. + +| Parameter | Required | Description | +|:------------------------|:--------:|:------------------------------------------------------------| +| Data Center | * | Name/path of the datacenter to create VMs in. | +| Pool | | Name/path of the resource pool to schedule the VMs in.
If not specified, the default resource pool is used. | +| Host | | Name/path of the host system to schedule VMs in. If specified, the host system's pool will be used and the *Pool* parameter will be ignored. | +| Network | * | Name of the VM network to attach VMs to. | +| Data Store | * | Datastore to store the VM disks. | +| Folder | | Name of a folder in the datacenter to create the VMs in. Must already exist. The folder name should be prefaced with `vm/` in your vSphere config file. | + +# Instance Options + +In the **Instance Options** section, configure the number of vCPUs, memory, and disk size for the VMs created by this template. + +Only VMs booting from RancherOS ISO are supported. + +Ensure that the OS ISO URL contains the URL of the VMware ISO release for RancherOS: `rancheros-vmware.iso`. + +| Parameter | Required | Description | +|:------------------------|:--------:|:------------------------------------------------------------| +| CPUs | * | Number of vCPUs to assign to VMs. | +| Memory | * | Amount of memory to assign to VMs. | +| Disk | * | Size of the disk (in MB) to attach to the VMs. | +| Cloud Init | | URL of a [RancherOS cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/) file to provision VMs with. This file allows further customization of the RancherOS operating system, such as network configuration, DNS servers, or system daemons.| +| OS ISO URL | * | URL of a RancherOS vSphere ISO file to boot the VMs from. You can find URLs for specific versions in the [Rancher OS GitHub Repo](https://github.com/rancher/os). | +| Configuration Parameters | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`).
| + +# Node Tags and Custom Attributes + +These attributes allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects. + +Optionally, you can: + +- Provide a set of configuration parameters (instance-options) for the VMs. +- Assign labels to the VMs that can be used as a base for scheduling rules in the cluster. +- Customize the configuration of the Docker daemon on the VMs that will be created. + +> **Note:** Custom attributes are a legacy feature that will eventually be removed from vSphere. + +# Cloud Init
[Cloud-init](https://cloudinit.readthedocs.io/en/latest/) allows you to initialize your nodes by applying configuration on the first boot. This may involve things such as creating users, authorizing SSH keys or setting up the network. + +You may specify the URL of a RancherOS cloud-config.yaml file in the **Cloud Init** field. Refer to the [RancherOS Documentation](https://rancher.com/docs/os/v1.x/en/configuration/#cloud-config) for details on the supported configuration directives. Note that the URL must be network accessible from the VMs created by the template.
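For example, a cloud-config that sets custom DNS servers for the node might look like the sketch below; the `rancher.network.dns` structure follows the RancherOS configuration docs linked above, and the nameserver addresses are placeholders:

```yaml
#cloud-config
rancher:
  network:
    dns:
      nameservers:
        - 8.8.8.8
        - 8.8.4.4
```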
\ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.3.0/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.3.0/_index.md new file mode 100644 index 00000000000..337c621032a --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.3.0/_index.md @@ -0,0 +1,78 @@ +--- +title: vSphere Node Template Configuration in Rancher v2.3.0 +shortTitle: v2.3.0 +weight: 2 +--- +- [Account Access](#account-access) +- [Scheduling](#scheduling) +- [Instance Options](#instance-options) +- [Node tags and custom attributes](#node-tags-and-custom-attributes) +- [Cloud Init](#cloud-init) + +# Account Access + +| Parameter | Required | Description | +|:----------------------|:--------:|:-----| +| Cloud Credentials | * | Your vSphere account access information, stored in a [cloud credential.]({{}}/rancher/v2.x/en/user-settings/cloud-credentials/) | + +Your cloud credential has these fields: + +| Credential Field | Description | +|-----------------|-----------------| +| vCenter or ESXi Server | Enter the vCenter or ESXi hostname/IP. ESXi is the virtualization platform where you create and run virtual machines and virtual appliances. vCenter Server is the service through which you manage multiple hosts connected in a network and pool host resources. | +| Port | Optional: configure the port of the vCenter or ESXi server. | +| Username and password | Enter your vSphere login username and password. | + +# Scheduling +Choose which hypervisor the virtual machine will be scheduled to.
+ +In the **Scheduling** section, enter: + +- The name/path of the **Data Center** to create the VMs in +- The name of the **VM Network** to attach to +- The name/path of the **Datastore** to store the disks in + +| Parameter | Required | Description | +|:------------------------|:--------:|:------------------------------------------------------------| +| Data Center | * | Name/path of the datacenter to create VMs in. | +| Pool | | Name/path of the resource pool to schedule the VMs in. If not specified, the default resource pool is used. | +| Host | | Name/path of the host system to schedule VMs in. If specified, the host system's pool will be used and the *Pool* parameter will be ignored. | +| Network | * | Name of the VM network to attach VMs to. | +| Data Store | * | Datastore to store the VM disks. | +| Folder | | Name of a folder in the datacenter to create the VMs in. Must already exist. The folder name should be prefaced with `vm/` in your vSphere config file. | + +# Instance Options + +In the **Instance Options** section, configure the number of vCPUs, memory, and disk size for the VMs created by this template. + +Only VMs booting from RancherOS ISO are supported. + +Ensure that the OS ISO URL contains the URL of the VMware ISO release for RancherOS: `rancheros-vmware.iso`. + +| Parameter | Required | Description | +|:------------------------|:--------:|:------------------------------------------------------------| +| CPUs | * | Number of vCPUs to assign to VMs. | +| Memory | * | Amount of memory to assign to VMs. | +| Disk | * | Size of the disk (in MB) to attach to the VMs. | +| Cloud Init | | URL of a [RancherOS cloud-config]({{< baseurl >}}/os/v1.x/en/configuration/) file to provision VMs with. This file allows further customization of the RancherOS operating system, such as network configuration, DNS servers, or system daemons.| +| OS ISO URL | * | URL of a RancherOS vSphere ISO file to boot the VMs from.
You can find URLs for specific versions in the [Rancher OS GitHub Repo](https://github.com/rancher/os). | +| Configuration Parameters | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). | + + +# Node Tags and Custom Attributes + +These attributes allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects. + +Optionally, you can: + +- Provide a set of configuration parameters (instance-options) for the VMs. +- Assign labels to the VMs that can be used as a base for scheduling rules in the cluster. +- Customize the configuration of the Docker daemon on the VMs that will be created. + +> **Note:** Custom attributes are a legacy feature that will eventually be removed from vSphere. + +# Cloud Init + +[Cloud-init](https://cloudinit.readthedocs.io/en/latest/) allows you to initialize your nodes by applying configuration on the first boot. This may involve things such as creating users, authorizing SSH keys or setting up the network. + +You may specify the URL of a RancherOS cloud-config.yaml file in the **Cloud Init** field. Refer to the [RancherOS Documentation](https://rancher.com/docs/os/v1.x/en/configuration/#cloud-config) for details on the supported configuration directives. Note that the URL must be network accessible from the VMs created by the template.
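As another sketch, a cloud-config served from the **Cloud Init** URL could write a file and authorize an SSH key on first boot; the path, file content, and key below are illustrative only:

```yaml
#cloud-config
# Authorize an SSH key for the default rancher user
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza... ops@example.com
# Write a message-of-the-day file during first boot
write_files:
  - path: /etc/motd
    permissions: "0644"
    owner: root
    content: |
      Provisioned by a Rancher vSphere node template.
```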
\ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.3.3/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.3.3/_index.md new file mode 100644 index 00000000000..5021bdf6702 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/v2.3.3/_index.md @@ -0,0 +1,89 @@ +--- +title: vSphere Node Template Configuration in Rancher v2.3.3 +shortTitle: v2.3.3 +weight: 1 +--- +- [Account Access](#account-access) +- [Scheduling](#scheduling) +- [Instance Options](#instance-options) +- [Networks](#networks) +- [Node tags and custom attributes](#node-tags-and-custom-attributes) +- [cloud-init](#cloud-init) + +# Account Access + +| Parameter | Required | Description | +|:----------------------|:--------:|:-----| +| Cloud Credentials | * | Your vSphere account access information, stored in a [cloud credential.]({{}}/rancher/v2.x/en/user-settings/cloud-credentials/) | + +Your cloud credential has these fields: + +| Credential Field | Description | +|-----------------|--------------| +| vCenter or ESXi Server | Enter the vCenter or ESXi hostname/IP. ESXi is the virtualization platform where you create and run virtual machines and virtual appliances. vCenter Server is the service through which you manage multiple hosts connected in a network and pool host resources. | +| Port | Optional: configure the port of the vCenter or ESXi server. | +| Username and password | Enter your vSphere login username and password. | + +# Scheduling + +Choose which hypervisor the virtual machine will be scheduled to. + +The fields in the **Scheduling** section should auto-populate with the data center and other scheduling options that are available to you in vSphere.
+ +| Field | Required | Explanation | +|---------|---------------|-----------| +| Data Center | * | Choose the name/path of the data center where the VM will be scheduled. | +| Resource Pool | | Name of the resource pool to schedule the VMs in. Resource pools can be used to partition available CPU and memory resources of a standalone host or cluster, and they can also be nested. Leave blank for standalone ESXi. If not specified, the default resource pool is used. | +| Data Store | * | If you have a data store cluster, you can toggle the **Data Store** field. This lets you select a data store cluster where your VM will be scheduled to. If the field is not toggled, you can select an individual disk. | +| Folder | | Name of a folder in the datacenter to create the VMs in. Must already exist. The VM folders in this dropdown menu directly correspond to your VM folders in vSphere. The folder name should be prefaced with `vm/` in your vSphere config file. | +| Host | | The IP of the host system to schedule VMs in. Leave this field blank for a standalone ESXi or for a cluster with DRS (Distributed Resource Scheduler). If specified, the host system's pool will be used and the **Resource Pool** parameter will be ignored. | + +# Instance Options + +In the **Instance Options** section, configure the number of vCPUs, memory, and disk size for the VMs created by this template. + +| Parameter | Required | Description | +|:----------------|:--------:|:-----------| +| CPUs | * | Number of vCPUs to assign to VMs. | +| Memory | * | Amount of memory to assign to VMs. | +| Disk | * | Size of the disk (in MB) to attach to the VMs. | +| Creation method | * | The method for setting up an operating system on the node. The operating system can be installed from an ISO or from a VM template. Depending on the creation method, you will also have to specify a VM template, content library, existing VM, or ISO.
For more information on creation methods, refer to [About VM Creation Methods.](#about-vm-creation-methods) | +| Cloud Init | | URL of a `cloud-config.yml` file or URL to provision VMs with. This file allows further customization of the operating system, such as network configuration, DNS servers, or system daemons. The operating system must support `cloud-init`. | +| Networks | | Name(s) of the network to attach the VM to. | +| Configuration Parameters used for guestinfo | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). | + + +### About VM Creation Methods + +In the **Creation method** field, configure the method used to provision VMs in vSphere. Available options include creating VMs that boot from a RancherOS ISO or creating VMs by cloning from an existing virtual machine or [VM template](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-F7BF0E6B-7C4F-4E46-8BBF-76229AEA7220.html). + +The existing VM or template may use any modern Linux operating system that is configured with support for [cloud-init](https://cloudinit.readthedocs.io/en/latest/) using the [NoCloud datasource](https://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html). + +Choose the way that the VM will be created: + +- **Deploy from template: Data Center:** Choose a VM template that exists in the data center that you selected. 
+- **Deploy from template: Content Library:** First, select the [Content Library](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-254B2CE8-20A8-43F0-90E8-3F6776C2C896.html) that contains your template, then select the template from the populated list **Library templates.** +- **Clone an existing virtual machine:** In the **Virtual machine** field, choose an existing VM that the new VM will be cloned from. +- **Install from boot2docker ISO:** Ensure that the **OS ISO URL** field contains the URL of a VMware ISO release for RancherOS (`rancheros-vmware.iso`). Note that this URL must be accessible from the nodes running your Rancher server installation. + +# Networks + +The node template now allows a VM to be provisioned with multiple networks. In the **Networks** field, you can now click **Add Network** to add any networks available to you in vSphere. + +# Node Tags and Custom Attributes + +Tags allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects. + +For tags, all your vSphere tags will show up as options to select from in your node template. + +For custom attributes, Rancher lets you select any of the custom attributes you have already set up in vSphere. The custom attributes are keys, and you can enter values for each one. + +> **Note:** Custom attributes are a legacy feature that will eventually be removed from vSphere. + +# cloud-init + +[Cloud-init](https://cloudinit.readthedocs.io/en/latest/) allows you to initialize your nodes by applying configuration on the first boot. This may involve things such as creating users, authorizing SSH keys or setting up the network. + +To make use of cloud-init initialization, create a cloud config file using valid YAML syntax and paste the file content in the **Cloud Init** field.
Refer to the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/topics/examples.html) for a commented set of examples of supported cloud config directives. + +Note that cloud-init is not supported when using the ISO creation method. \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md index ea103035b2e..d30f69de66a 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md @@ -1,16 +1,18 @@ --- -title: Cluster Configuration Reference +title: RKE Cluster Configuration Reference weight: 2250 --- -As you configure a new cluster that's [provisioned using RKE]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/), you can choose custom Kubernetes options. +When Rancher installs Kubernetes, it uses [RKE]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) as the Kubernetes distribution. -You can configure Kubernetes options one of two ways: +This section covers the configuration options that are available in Rancher for a new or existing RKE Kubernetes cluster. + +You can configure the Kubernetes options one of two ways: - [Rancher UI](#rancher-ui): Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster. - [Cluster Config File](#cluster-config-file): Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML. -In Rancher v2.0.0-v2.2.x, the config file is identical to the [cluster config file for the Rancher Kubernetes Engine]({{}}/rke/latest/en/config-options/), which is the tool Rancher uses to provision clusters.
In Rancher v2.3.0, the RKE information is still included in the config file, but it is separated from other options, so that the RKE cluster config options are nested under the `rancher_kubernetes_engine_config` directive. For more information, see the section about the [cluster config file.](#cluster-config-file) +In Rancher v2.0.0-v2.2.x, the RKE cluster config file in Rancher is identical to the [cluster config file for the Rancher Kubernetes Engine]({{}}/rke/latest/en/config-options/), which is the tool Rancher uses to provision clusters. In Rancher v2.3.0, the RKE information is still included in the config file, but it is separated from other options, so that the RKE cluster config options are nested under the `rancher_kubernetes_engine_config` directive. For more information, see the section about the [cluster config file.](#cluster-config-file) This section is a cluster configuration reference, covering the following topics: @@ -20,6 +22,7 @@ This section is a cluster configuration reference, covering the following topics - [Kubernetes cloud providers](#kubernetes-cloud-providers) - [Private registries](#private-registries) - [Authorized cluster endpoint](#authorized-cluster-endpoint) + - [Node pools](#node-pools) - [Advanced Options](#advanced-options) - [NGINX Ingress](#nginx-ingress) - [Node port range](#node-port-range) @@ -113,6 +116,10 @@ For more detail on how an authorized cluster endpoint works and why it is used, We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the [recommended architecture section.]({{}}/rancher/v2.x/en/overview/architecture-recommendations/#architecture-for-an-authorized-cluster-endpoint) +### Node Pools + +For information on using the Rancher UI to set up node pools in an RKE cluster, refer to [this page.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools) + # Advanced Options The following options are available when you create clusters in the Rancher UI. 
They are located under **Advanced Options.** diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md index 5ff394c2ed2..a14009355a6 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md @@ -5,13 +5,18 @@ weight: 2240 _Available as of v2.3.0_ -When provisioning a [custom cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes) using Rancher, Rancher uses RKE (the Rancher Kubernetes Engine) to provision the Kubernetes custom cluster on your existing infrastructure. +When provisioning a [custom cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes) using Rancher, Rancher uses RKE (the Rancher Kubernetes Engine) to install Kubernetes on your existing nodes. -You can use a mix of Linux and Windows hosts as your cluster nodes. Windows nodes can only be used for deploying workloads, while Linux nodes are required for cluster management. +A Windows cluster provisioned with Rancher must contain both Linux and Windows nodes. The Kubernetes controlplane can only run on Linux nodes, and Windows nodes can only have the worker role. Windows nodes can only be used for deploying workloads. -You can only add Windows nodes to a cluster if Windows support is enabled. Windows support can be enabled for new custom clusters that use Kubernetes 1.15+ and the Flannel network provider. Windows support cannot be enabled for existing clusters. +Some other requirements for Windows clusters include: -> Windows clusters have more requirements than Linux clusters. For example, Windows nodes must have 50 GB of disk space. 
Make sure your Windows cluster fulfills all of the [requirements.](#requirements-for-windows-clusters) +- You can only add Windows nodes to a cluster if Windows support is enabled when the cluster is created. Windows support cannot be enabled for existing clusters. +- Kubernetes 1.15+ is required. +- The Flannel network provider must be used. +- Windows nodes must have 50 GB of disk space. + +For the full list of requirements, see [this section.](#requirements-for-windows-clusters) For a summary of Kubernetes features supported in Windows, see the Kubernetes documentation on [supported functionality and limitations for using Kubernetes with Windows](https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#supported-functionality-and-limitations) or the [guide for scheduling Windows containers in Kubernetes](https://kubernetes.io/docs/setup/production-environment/windows/user-guide-windows-containers/). @@ -20,19 +25,13 @@ This guide covers the following topics: - [Requirements](#requirements-for-windows-clusters) - - [OS and Docker](#os-and-docker-requirements) - - [Nodes](#node-requirements) - - [Networking](#networking-requirements) - - [Architecture](#architecture-requirements) - - [Containers](#container-requirements) - - [Cloud Providers](#cloud-providers) - [Tutorial: How to Create a Cluster with Windows Support](#tutorial-how-to-create-a-cluster-with-windows-support) - [Configuration for Storage Classes in Azure](#configuration-for-storage-classes-in-azure) # Requirements for Windows Clusters -For a custom cluster, the general node requirements for networking, operating systems, and Docker are the same as the node requirements for a [Rancher installation]({{}}/rancher/v2.x/en/installation/requirements/). +The general node requirements for networking, operating systems, and Docker are the same as the node requirements for a [Rancher installation]({{}}/rancher/v2.x/en/installation/requirements/). 
### OS and Docker Requirements @@ -46,6 +45,10 @@ In order to add Windows worker nodes to a cluster, the node must be running one > - If you are using AWS, Rancher recommends _Microsoft Windows Server 2019 Base with Containers_ as the Amazon Machine Image (AMI). > - If you are using GCE, Rancher recommends _Windows Server 2019 Datacenter for Containers_ as the OS image. +### Kubernetes Version + +Kubernetes v1.15+ is required. + ### Node Requirements The hosts in the cluster need to have at least: @@ -71,6 +74,7 @@ For **VXLAN (Overlay)** networking, the [KB4489899](https://support.microsoft.co If you are configuring DHCP options sets for an AWS virtual private cloud, note that in the `domain-name` option field, only one domain name can be specified. According to the DHCP options [documentation:](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_DHCP_Options.html) > Some Linux operating systems accept multiple domain names separated by spaces. However, other Linux operating systems and Windows treat the value as a single domain, which results in unexpected behavior. If your DHCP options set is associated with a VPC that has instances with multiple operating systems, specify only one domain name. + ### Architecture Requirements The Kubernetes cluster management nodes (`etcd` and `controlplane`) must be run on Linux nodes. @@ -91,7 +95,7 @@ We recommend the minimum three-node architecture listed in the table below, but Windows requires that containers must be built on the same Windows Server version that they are being deployed on. Therefore, containers must be built on Windows Server core version 1809 or above. If you have existing containers built for an earlier Windows Server core version, they must be re-built on Windows Server core version 1809 or above. -### Cloud Providers +### Cloud Provider Specific Requirements If you set a Kubernetes cloud provider in your cluster, some additional steps are required. 
You might want to set a cloud provider if you want to leverage a cloud provider's capabilities, for example, to automatically provision storage, load balancers, or other infrastructure for your cluster. Refer to [this page]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/) for details on how to configure a cloud provider for a cluster of nodes that meet the prerequisites. @@ -104,21 +108,21 @@ If you are using the GCE (Google Compute Engine) cloud provider, you must do the This tutorial describes how to create a Rancher-provisioned cluster with the three nodes in the [recommended architecture.](#guide-architecture) -When you provision a custom cluster with Rancher, you will add nodes to the cluster by installing the [Rancher agent]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/agent-options/) on each one. When you create or edit your cluster from the Rancher UI, you will see a **Customize Node Run Command** that you can run on each server to add it to your custom cluster. +When you provision a cluster with Rancher on existing nodes, you will add nodes to the cluster by installing the [Rancher agent]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/agent-options/) on each one. When you create or edit your cluster from the Rancher UI, you will see a **Customize Node Run Command** that you can run on each server to add it to your cluster. -To set up a custom cluster with support for Windows nodes and containers, you will need to complete the tasks below. +To set up a cluster with support for Windows nodes and containers, you will need to complete the tasks below. 1. [Provision Hosts](#1-provision-hosts) -1. [Create the Custom Cluster](#2-create-the-custom-cluster) +1. [Create the Cluster on Existing Nodes](#2-create-the-cluster-on-existing-nodes) 1. [Add Nodes to the Cluster](#3-add-nodes-to-the-cluster) 1. [Optional: Configuration for Azure Files](#5-optional-configuration-for-azure-files) # 1. 
Provision Hosts -To begin provisioning a custom cluster with Windows support, prepare your hosts. +To begin provisioning a cluster on existing nodes with Windows support, prepare your hosts. Your hosts can be: @@ -142,77 +146,69 @@ If your nodes are hosted by a **Cloud Provider** and you want automation support # 2. Create the Cluster on Existing Nodes -The instructions for creating a custom cluster that supports Windows nodes are very similar to the general [instructions for creating a custom cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster) with some Windows-specific requirements. - -Windows support only be enabled if the cluster uses Kubernetes v1.15+ and the Flannel network provider. +The instructions for creating a Windows cluster on existing nodes are very similar to the general [instructions for creating a custom cluster]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster) with some Windows-specific requirements. 1. From the **Global** view, click on the **Clusters** tab and click **Add Cluster**. - 1. Click **From existing nodes (Custom)**. - 1. Enter a name for your cluster in the **Cluster Name** text box. - 1. In the **Kubernetes Version** dropdown menu, select v1.15 or above. - 1. In the **Network Provider** field, select **Flannel.** - 1. In the **Windows Support** section, click **Enable.** - 1. Optional: After you enable Windows support, you will be able to choose the Flannel backend. There are two network options: [**Host Gateway (L2bridge)**](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw) and [**VXLAN (Overlay)**](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan). The default option is **VXLAN (Overlay)** mode. - 1. Click **Next**. > **Important:** For Host Gateway (L2bridge) networking, it's best to use the same Layer 2 network for all nodes. 
Otherwise, you need to configure the route rules for them. For details, refer to the [documentation on configuring cloud-hosted VM routes.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/host-gateway-requirements/#cloud-hosted-vm-routes-configuration) You will also need to [disable private IP address checks]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/host-gateway-requirements/#disabling-private-ip-address-checks) if you are using Amazon EC2, Google GCE, or Azure VM. # 3. Add Nodes to the Cluster -This section describes how to register your Linux and Worker nodes to your custom cluster. +This section describes how to register your Linux and Windows nodes to your cluster. You will run a command on each node, which will install the Rancher agent and allow Rancher to manage each node. ### Add Linux Master Node -The first node in your cluster should be a Linux host has both the **Control Plane** and **etcd** roles. At a minimum, both of these roles must be enabled for this node, and this node must be added to your cluster before you can add Windows hosts. - In this section, we fill out a form on the Rancher UI to get a custom command to install the Rancher agent on the Linux master node. Then we will copy the command and run it on our Linux master node to register the node in the cluster. +The first node in your cluster should be a Linux host that has both the **Control Plane** and **etcd** roles. At a minimum, both of these roles must be enabled for this node, and this node must be added to your cluster before you can add Windows hosts. + 1. In the **Node Operating System** section, click **Linux**. - 1. In the **Node Role** section, choose at least **etcd** and **Control Plane**. We recommend selecting all three. - 1. 
Optional: If you click **Show advanced options,** you can customize the settings for the [Rancher agent]({{}}/rancher/v2.x/en/admin-settings/agent-options/) and [node labels.](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) - 1. Copy the command displayed on the screen to your clipboard. - 1. SSH into your Linux host and run the command that you copied to your clipboard. - 1. When you are finished provisioning your Linux node(s), select **Done**. -{{< result_create-cluster >}} +**Result:** + +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. + +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces + It may take a few minutes for the node to be registered in your cluster. ### Add Linux Worker Node -After the initial provisioning of your custom cluster, your cluster only has a single Linux host. Next, we add another Linux `worker` host, which will be used to support _Rancher cluster agent_, _Metrics server_, _DNS_ and _Ingress_ for your cluster. +In this section, we run a command to register the Linux worker node to the cluster. + +After the initial provisioning of your cluster, your cluster only has a single Linux host. Next, we add another Linux `worker` host, which will be used to support _Rancher cluster agent_, _Metrics server_, _DNS_ and _Ingress_ for your cluster. 1. From the **Global** view, click **Clusters.** - -1. Go to the custom cluster that you created and click **⋮ > Edit.** - +1. Go to the cluster that you created and click **⋮ > Edit.** 1. Scroll down to **Node Operating System**. Choose **Linux**. - 1. In the **Customize Node Run Command** section, go to the **Node Options** and select the **Worker** role. - 1. 
Copy the command displayed on screen to your clipboard. - 1. Log in to your Linux host using a remote Terminal connection. Run the command copied to your clipboard. - 1. From **Rancher**, click **Save**. **Result:** The **Worker** role is installed on your Linux host, and the node registers with Rancher. It may take a few minutes for the node to be registered in your cluster. > **Note:** Taints on Linux Worker Nodes > -> For each Linux worker node added into the cluster, the following taints will be added to Linux worker node. By adding this taint to the Linux worker node, any workloads added to the windows cluster will be automatically scheduled to the Windows worker node. If you want to schedule workloads specifically onto the Linux worker node, you will need to add tolerations to those workloads. +> For each Linux worker node added into the cluster, the following taint will be added to the Linux worker node. By adding this taint to the Linux worker node, any workloads added to the Windows cluster will be automatically scheduled to the Windows worker node. If you want to schedule workloads specifically onto the Linux worker node, you will need to add tolerations to those workloads. > | Taint Key | Taint Value | Taint Effect | > | -------------- | ----------- | ------------ |
Note: You will see that the **worker** role is the only available role. - 1. Copy the command displayed on screen to your clipboard. - 1. Log in to your Windows host using your preferred tool, such as [Microsoft Remote Desktop](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients). Run the command copied to your clipboard in the **Command Prompt (CMD)**. - 1. From Rancher, click **Save**. - 1. Optional: Repeat these instructions if you want to add more Windows nodes to your cluster. **Result:** The **Worker** role is installed on your Windows host, and the node registers with Rancher. It may take a few minutes for the node to be registered in your cluster. You now have a Windows Kubernetes cluster. @@ -247,41 +239,4 @@ After creating your cluster, you can access it through the Rancher UI. As a best # Configuration for Storage Classes in Azure -If you are using Azure VMs for your nodes, you can use [Azure files](https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv) as a [storage class]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/#adding-storage-classes) for the cluster. - -In order to have the Azure platform create the required storage resources, follow these steps: - -1. [Configure the Azure cloud provider.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/#azure) - -1. Configure `kubectl` to connect to your cluster. - -1. 
Copy the `ClusterRole` and `ClusterRoleBinding` manifest for the service account: - - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: system:azure-cloud-provider - rules: - - apiGroups: [''] - resources: ['secrets'] - verbs: ['get','create'] - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: system:azure-cloud-provider - roleRef: - kind: ClusterRole - apiGroup: rbac.authorization.k8s.io - name: system:azure-cloud-provider - subjects: - - kind: ServiceAccount - name: persistent-volume-binder - namespace: kube-system - -1. Create these in your cluster using one of the follow command. - - ``` - # kubectl create -f - ``` +If you are using Azure VMs for your nodes, you can use [Azure files](https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv) as a [storage class]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/#adding-storage-classes) for the cluster. For details, refer to [this section.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/azure-storageclass) \ No newline at end of file diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/azure-storageclass/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/azure-storageclass/_index.md new file mode 100644 index 00000000000..798916a2bc5 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/azure-storageclass/_index.md @@ -0,0 +1,41 @@ +--- +title: Configuration for Storage Classes in Azure +weight: 3 +--- + +If you are using Azure VMs for your nodes, you can use [Azure files](https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv) as a [storage class]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/#adding-storage-classes) for the cluster. + +In order to have the Azure platform create the required storage resources, follow these steps: + +1. 
[Configure the Azure cloud provider.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers/#azure) +1. Configure `kubectl` to connect to your cluster. +1. Copy the `ClusterRole` and `ClusterRoleBinding` manifest for the service account: + + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: system:azure-cloud-provider + rules: + - apiGroups: [''] + resources: ['secrets'] + verbs: ['get','create'] + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: system:azure-cloud-provider + roleRef: + kind: ClusterRole + apiGroup: rbac.authorization.k8s.io + name: system:azure-cloud-provider + subjects: + - kind: ServiceAccount + name: persistent-volume-binder + namespace: kube-system + +1. Create these resources in your cluster using the following command: + + ``` + # kubectl create -f + ``` diff --git a/content/rancher/v2.x/en/faq/networking/cni-providers/_index.md b/content/rancher/v2.x/en/faq/networking/cni-providers/_index.md index 223b58a02b4..88719ac4dd0 100644 --- a/content/rancher/v2.x/en/faq/networking/cni-providers/_index.md +++ b/content/rancher/v2.x/en/faq/networking/cni-providers/_index.md @@ -78,7 +78,7 @@ For more information, see the [Flannel GitHub Page](https://github.com/coreos/fl ![Calico Logo]({{}}/img/rancher/calico-logo.png) -Calico enables networking and network policy in Kubernetes clusters across the cloud. Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads are able to communicate over both cloud infrastructure and on-premise using BGP. +Calico enables networking and network policy in Kubernetes clusters across the cloud. Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads are able to communicate over both cloud infrastructure and on-prem using BGP. 
Calico also provides a stateless IP-in-IP encapsulation mode that can be used, if necessary. Calico also offers policy isolation, allowing you to secure and govern your Kubernetes workloads using advanced ingress and egress policies. diff --git a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/globaldns/_index.md b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/globaldns/_index.md index a30aa567414..db38295d4c4 100644 --- a/content/rancher/v2.x/en/helm-charts/legacy-catalogs/globaldns/_index.md +++ b/content/rancher/v2.x/en/helm-charts/legacy-catalogs/globaldns/_index.md @@ -11,7 +11,20 @@ Rancher's Global DNS feature provides a way to program an external DNS provider > **Note:** Global DNS is only available in [Kubernetes installations]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/) with the [`local` cluster enabled]({{}}/rancher/v2.x/en/installation/resources/chart-options/#import-local-cluster). -## Global DNS Providers +- [Global DNS Providers](#global-dns-providers) +- [Global DNS Entries](#global-dns-entries) +- [Permissions for Global DNS Providers and Entries](#permissions-for-global-dns-providers-and-entries) +- [Setting up Global DNS for Applications](#setting-up-global-dns-for-applications) +- [Adding a Global DNS Entry](#adding-a-global-dns-entry) +- [Editing a Global DNS Provider](#editing-a-global-dns-provider) +- [Global DNS Entry Configuration](#global-dns-entry-configuration) +- [DNS Provider Configuration](#dns-provider-configuration) + - [Route53](#route53) + - [CloudFlare](#cloudflare) + - [AliDNS](#alidns) +- [Adding Annotations to Ingresses to program the External DNS](#adding-annotations-to-ingresses-to-program-the-external-dns) + +# Global DNS Providers Prior to adding in Global DNS entries, you will need to configure access to an external provider. @@ -23,74 +36,29 @@ The following table lists the first version of Rancher each provider debuted. 
| [CloudFlare](https://www.cloudflare.com/dns/) | v2.2.0 | | [AliDNS](https://www.alibabacloud.com/product/dns) | v2.2.0 | -## Global DNS Entries +# Global DNS Entries For each application that you want to route traffic to, you will need to create a Global DNS Entry. This entry will use a fully qualified domain name (a.k.a. FQDN) from a global DNS provider to target applications. The applications can either resolve to a single [multi-cluster application]({{}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or to specific projects. You must [add specific annotation labels](#adding-annotations-to-ingresses-to-program-the-external-dns) to the ingresses in order for traffic to be routed correctly to the applications. Without this annotation, the programming for the DNS entry will not work. -## Permissions for Global DNS Providers/Entries +# Permissions for Global DNS Providers and Entries By default, only [global administrators]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) and the creator of the Global DNS provider or Global DNS entry have access to use, edit and delete them. When creating the provider or entry, the creator can add additional users in order for those users to access and manage them. By default, these members will get the `Owner` role to manage them. -## Setting up Global DNS for Applications - -### Add a Global DNS Provider +# Setting up Global DNS for Applications 1. From the **Global View**, select **Tools > Global DNS Providers**. -1. To add a provider, choose from the available provider options and configure the Global DNS Provider with necessary credentials and an optional domain. +1. To add a provider, choose from the available provider options and configure the Global DNS Provider with necessary credentials and an optional domain. For help, see [DNS Provider Configuration.](#dns-provider-configuration) 1. 
(Optional) Add additional users so they could use the provider when creating Global DNS entries as well as manage the Global DNS provider. 1. (Optional) Pass any custom values in the Additional Options section. -{{% accordion id="route53" label="Route53" %}} -1. Enter a **Name** for the provider. -1. (Optional) Enter the **Root Domain** of the hosted zone on AWS Route53. If this is not provided, Rancher's Global DNS Provider will work with all hosted zones that the AWS keys can access. -1. Enter the AWS **Access Key**. -1. Enter the AWS **Secret Key**. -1. Under **Member Access**, search for any users that you want to have the ability to use this provider. By adding this user, they will also be able to manage the Global DNS Provider entry. -1. Click **Create**. -{{% /accordion %}} -{{% accordion id="cloudflare" label="CloudFlare" %}} -1. Enter a **Name** for the provider. -1. Enter the **Root Domain**, this field is optional, in case this is not provided, Rancher's Global DNS Provider will work with all domains that the keys can access. -1. Enter the CloudFlare **API Email**. -1. Enter the CloudFlare **API Key**. -1. Under **Member Access**, search for any users that you want to have the ability to use this provider. By adding this user, they will also be able to manage the Global DNS Provider entry. -1. Click **Create**. -{{% /accordion %}} -{{% accordion id="alidns" label="AliDNS" %}} -1. Enter a **Name** for the provider. -1. Enter the **Root Domain**, this field is optional, in case this is not provided, Rancher's Global DNS Provider will work with all domains that the keys can access. -1. Enter the **Access Key**. -1. Enter the **Secret Key**. -1. Under **Member Access**, search for any users that you want to have the ability to use this provider. By adding this user, they will also be able to manage the Global DNS Provider entry. -1. Click **Create**. - ->**Notes:** -> ->- Alibaba Cloud SDK uses TZ data. 
It needs to be present on `/usr/share/zoneinfo` path of the nodes running [`local` cluster]({{}}/rancher/v2.x/en/installation/resources/chart-options/#import-local-cluster), and it is mounted to the external DNS pods. If it is not available on the nodes, please follow the [instruction](https://www.ietf.org/timezones/tzdb-2018f/tz-link.html) to prepare it. ->- Different versions of AliDNS have different allowable TTL range, where the default TTL for a global DNS entry may not be valid. Please see the [reference](https://www.alibabacloud.com/help/doc-detail/34338.htm) before adding an AliDNS entry. -{{% /accordion %}} - -### Add a Global DNS Entry +# Adding a Global DNS Entry 1. From the **Global View**, select **Tools > Global DNS Entries**. 1. Click on **Add DNS Entry**. -1. Enter the **FQDN** you wish to program on the external DNS. -1. Select a Global DNS **Provider** from the list. -1. Select if this DNS entry will be for a [multi-cluster application]({{}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or for workloads in different [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). You will need to ensure that [annotations are added to any ingresses](#adding-annotations-to-ingresses-to-program-the-external-dns) for the applications that you want to target. -1. Configure the **DNS TTL** value in seconds. By default, it will be 300 seconds. -1. Under **Member Access**, search for any users that you want to have the ability to manage this Global DNS entry. +1. Fill out the form. For help, refer to [Global DNS Entry Configuration.](#global-dns-entry-configuration) +1. Click **Create.** -## Adding Annotations to Ingresses to program the External DNS - -In order for Global DNS entries to be programmed, you will need to add a specific annotation on an ingress in your application or target project and this ingress needs to use a specific `hostname` and an annotation that should match the FQDN of the Global DNS entry. - -1. 
For any application that you want targeted for your Global DNS entry, find an ingress associated with the application. -1. In order for the DNS to be programmed, the following requirements must be met: - * The ingress routing rule must be set to use a `hostname` that matches the FQDN of the Global DNS entry. - * The ingress must have an annotation (`rancher.io/globalDNS.hostname`) and the value of this annotation should match the FQDN of the Global DNS entry. -1. Once the ingress in your [multi-cluster application]({{}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or in your target projects are in `active` state, the FQDN will be programmed on the external DNS against the Ingress IP addresses. - -## Editing a Global DNS Provider +# Editing a Global DNS Provider The [global administrators]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), creator of the Global DNS provider and any users added as `members` to a Global DNS provider, have _owner_ access to that provider. Any members can edit the following fields: @@ -103,7 +71,7 @@ The [global administrators]({{}}/rancher/v2.x/en/admin-settings/rbac/gl 1. For the Global DNS provider that you want to edit, click the **⋮ > Edit**. -## Editing a Global DNS Entry +# Editing a Global DNS Entry The [global administrators]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), creator of the Global DNS entry and any users added as `members` to a Global DNS entry, have _owner_ access to that DNS entry. Any members can edit the following fields: @@ -120,3 +88,73 @@ Permission checks are relaxed for removing target projects in order to support s 1. From the **Global View**, select **Tools > Global DNS Entries**. 1. For the Global DNS entry that you want to edit, click the **⋮ > Edit**. + + +# Global DNS Entry Configuration + +| Field | Description | +|----------|--------------------| +| FQDN | Enter the **FQDN** you wish to program on the external DNS. 
| +| Provider | Select a Global DNS **Provider** from the list. | +| Resolves To | Select if this DNS entry will be for a [multi-cluster application]({{}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or for workloads in different [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/). | +| Multi-Cluster App Target | The target for the global DNS entry. You will need to ensure that [annotations are added to any ingresses](#adding-annotations-to-ingresses-to-program-the-external-dns) for the applications that you want to target. | +| DNS TTL | Configure the DNS time to live value in seconds. By default, it will be 300 seconds. | +| Member Access | Search for any users that you want to have the ability to manage this Global DNS entry. | + +# DNS Provider Configuration + +### Route53 + +| Field | Explanation | +|---------|---------------------| +| Name | Enter a **Name** for the provider. | +| Root Domain | (Optional) Enter the **Root Domain** of the hosted zone on AWS Route53. If this is not provided, Rancher's Global DNS Provider will work with all hosted zones that the AWS keys can access. | +| Credential Path | The [AWS credential path.](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-where) | +| Role ARN | An [Amazon Resource Name.](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) | +| Region | An [AWS region.](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html#Concepts.RegionsAndAvailabilityZones.Regions) | +| Zone | An [AWS zone.](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html#Concepts.RegionsAndAvailabilityZones.AvailabilityZones) | +| Access Key | Enter the AWS **Access Key**. | +| Secret Key | Enter the AWS **Secret Key**. | +| Member Access | Under **Member Access**, search for any users that you want to have the ability to use this provider. 
By adding this user, they will also be able to manage the Global DNS Provider entry. | + + +### CloudFlare + +| Field | Explanation | +|---------|---------------------| +| Name | Enter a **Name** for the provider. | +| Root Domain | Optional: Enter the **Root Domain**. If this is not provided, Rancher's Global DNS Provider will work with all domains that the keys can access. | +| Proxy Setting | When set to yes, the global DNS entry that gets created for the provider has proxy settings on. | +| API Email | Enter the CloudFlare **API Email**. | +| API Key | Enter the CloudFlare **API Key**. | +| Member Access | Search for any users that you want to have the ability to use this provider. By adding this user, they will also be able to manage the Global DNS Provider entry. | + +### AliDNS + +>**Notes:** +> +>- Alibaba Cloud SDK uses TZ data. It needs to be present at the `/usr/share/zoneinfo` path of the nodes running the [`local` cluster]({{}}/rancher/v2.x/en/installation/resources/chart-options/#import-local-cluster), and it is mounted to the external DNS pods. If it is not available on the nodes, please follow the [instructions](https://www.ietf.org/timezones/tzdb-2018f/tz-link.html) to prepare it. +>- Different versions of AliDNS have different allowable TTL ranges, so the default TTL for a global DNS entry may not be valid. Please see the [reference](https://www.alibabacloud.com/help/doc-detail/34338.htm) before adding an AliDNS entry. + +| Field | Explanation | +|---------|---------------------| +| Name | Enter a **Name** for the provider. | +| Root Domain | Optional: Enter the **Root Domain**. If this is not provided, Rancher's Global DNS Provider will work with all domains that the keys can access. | +| Access Key | Enter the **Access Key**. | +| Secret Key | Enter the **Secret Key**. | +| Member Access | Search for any users that you want to have the ability to use this provider.
By adding this user, they will also be able to manage the Global DNS Provider entry. | + +# Adding Annotations to Ingresses to program the External DNS + +In order for Global DNS entries to be programmed, you will need to add a specific annotation on an ingress in your application or target project. + +For any application that you want targeted for your Global DNS entry, find an ingress associated with the application. + +This ingress needs to use a specific `hostname` and an annotation that should match the FQDN of the Global DNS entry. + +In order for the DNS to be programmed, the following requirements must be met: + +* The ingress routing rule must be set to use a `hostname` that matches the FQDN of the Global DNS entry. +* The ingress must have an annotation (`rancher.io/globalDNS.hostname`) and the value of this annotation should match the FQDN of the Global DNS entry. + +Once the ingress in your [multi-cluster application]({{}}/rancher/v2.x/en/catalog/multi-cluster-apps/) or in your target projects is in an `active` state, the FQDN will be programmed on the external DNS against the Ingress IP addresses. \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/chart-options/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/chart-options/_index.md index 09464b6fe66..c93ddd8e191 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/chart-options/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/chart-options/_index.md @@ -1,5 +1,5 @@ --- -title: Helm Chart Options +title: Rancher Helm Chart Options weight: 1 aliases: - /rancher/v2.x/en/installation/options/ @@ -45,7 +45,6 @@ For information on enabling experimental features, refer to [this page.]({{}}/rancher/v2.x/en/installation/api-auditing) level. 0 is off. 
[0-3] | @@ -53,25 +52,26 @@ For information on enabling experimental features, refer to [this page.]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/#restricted-admin) | +| `systemDefaultRegistry` | "" | `string` - private registry to be used for all system Docker images, e.g., http://registry.example.com/ | +| `tls` | "ingress" | `string` - See [External TLS Termination](#external-tls-termination) for details. - "ingress, external" | +| `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. | + -
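As a sketch of how several of these options combine in practice, the following Helm command is only an illustration — the hostname and registry address are placeholders, and it assumes the `rancher-stable` chart repository has already been added:

```plain
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set auditLog.level=1 \
  --set systemDefaultRegistry=registry.example.com \
  --set useBundledSystemChart=true
```

Each `--set` flag corresponds to a row in the table above; options you omit keep their defaults.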
### API Audit Log @@ -169,7 +169,7 @@ For details on installing Rancher with a private registry, see: - [Air Gap: Docker Install]({{}}/rancher/v2.x/en/installation/air-gap-single-node/) - [Air Gap: Kubernetes Install]({{}}/rancher/v2.x/en/installation/air-gap-high-availability/) -### External TLS Termination +# External TLS Termination We recommend configuring your load balancer as a Layer 4 balancer, forwarding plain 80/tcp and 443/tcp to the Rancher Management cluster nodes. The Ingress Controller on the cluster will redirect http traffic on port 80 to https on port 443. @@ -179,7 +179,7 @@ You may terminate the SSL/TLS on a L7 load balancer external to the Rancher clus Your load balancer must support long lived websocket connections and will need to insert proxy headers so Rancher can route links correctly. -#### Configuring Ingress for External TLS when Using NGINX v0.25 +### Configuring Ingress for External TLS when Using NGINX v0.25 In NGINX v0.25, the behavior of NGINX has [changed](https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0220) regarding forwarding headers and external TLS termination. Therefore, in the scenario that you are using external TLS termination configuration with NGINX v0.25, you must edit the `cluster.yml` to enable the `use-forwarded-headers` option for ingress: @@ -190,24 +190,24 @@ ingress: use-forwarded-headers: 'true' ``` -#### Required Headers +### Required Headers - `Host` - `X-Forwarded-Proto` - `X-Forwarded-Port` - `X-Forwarded-For` -#### Recommended Timeouts +### Recommended Timeouts - Read Timeout: `1800 seconds` - Write Timeout: `1800 seconds` - Connect Timeout: `30 seconds` -#### Health Checks +### Health Checks Rancher will respond `200` to health checks on the `/healthz` endpoint. -#### Example NGINX config +### Example NGINX config This NGINX configuration is tested on NGINX 1.14. 
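A minimal sketch of such an external termination config, consistent with the required headers, recommended timeouts, and websocket requirements described above (the hostname, node IP, and certificate paths are placeholders, not part of any official example):

```plain
upstream rancher {
    server <IP_NODE_1>:80;   # Rancher cluster node (placeholder)
}

map $http_upgrade $connection_upgrade {
    default Upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    server_name rancher.example.com;             # placeholder hostname
    ssl_certificate /etc/nginx/rancher.crt;      # placeholder path
    ssl_certificate_key /etc/nginx/rancher.key;  # placeholder path

    location / {
        proxy_pass http://rancher;
        # Required headers
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Support long-lived websocket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        # Recommended timeouts
        proxy_read_timeout 1800s;
        proxy_send_timeout 1800s;
        proxy_connect_timeout 30s;
    }
}

server {
    listen 80;
    server_name rancher.example.com;
    return 301 https://$server_name$request_uri;
}
```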
diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md index f32a0fccbad..d8c84aa10c0 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/_index.md @@ -12,6 +12,7 @@ aliases: - /rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/ - /rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/ha - /rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades-rollbacks/upgrades + - /rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades-rollbacks/upgrades/ha --- The following instructions will guide you through upgrading a Rancher server that was installed on a Kubernetes cluster with Helm. These steps also apply to air gap installs with Helm. diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md index fcf2b346c4b..b5d8556e2e6 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md @@ -44,7 +44,7 @@ Follow the steps to upgrade Rancher server: ### A. Back up Your Kubernetes Cluster that is Running Rancher Server [Take a one-time snapshot]({{}}/rancher/v2.x/en/backups/backups/ha-backups/#option-b-one-time-snapshots) -of your Kubernetes cluster running Rancher server. You'll use the snapshot as a restoration point if something goes wrong during upgrade. +of your Kubernetes cluster running Rancher server. You'll use the snapshot as a restore point if something goes wrong during upgrade. ### B. 
Update the Helm chart repository diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-linux/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-linux/_index.md index 567ca98ae14..487f6058742 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-linux/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-linux/_index.md @@ -160,6 +160,7 @@ If you watch the pods, you will see the following pods installed: - a `rancher` pod and `rancher-webhook` pod in the `cattle-system` namespace - a `fleet-agent`, `fleet-controller`, and `gitjob` pod in the `fleet-system` namespace - a `rancher-operator` pod in the `rancher-operator-system` namespace + ### 5. Set the initial Rancher password Once the `rancher` pod is up and running, run the following: diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/_index.md index 24c413fff8e..66ec384fa00 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/_index.md @@ -24,4 +24,8 @@ Throughout the installation instructions, there will be _tabs_ for each installa 3. [Set up a Kubernetes cluster (Skip this step for Docker installations)]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/) 4. 
[Install Rancher]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/) +# Upgrades + +To upgrade Rancher with Helm CLI in an air gap environment, follow [this procedure.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/) + ### [Next: Prepare your Node(s)]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/) diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md index 58a9e26b8de..8cb50bd7752 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/install-rancher/_index.md @@ -21,13 +21,13 @@ Rancher recommends installing Rancher on a Kubernetes cluster. A highly availabl This section describes installing Rancher in five parts: -- [A. Add the Helm Chart Repository](#a-add-the-helm-chart-repository) -- [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration) -- [C. Render the Rancher Helm Template](#c-render-the-rancher-helm-template) -- [D. Install Rancher](#d-install-rancher) -- [E. For Rancher versions prior to v2.3.0, Configure System Charts](#e-for-rancher-versions-prior-to-v2-3-0-configure-system-charts) +- [1. Add the Helm Chart Repository](#1-add-the-helm-chart-repository) +- [2. Choose your SSL Configuration](#2-choose-your-ssl-configuration) +- [3. Render the Rancher Helm Template](#3-render-the-rancher-helm-template) +- [4. Install Rancher](#4-install-rancher) +- [5. For Rancher versions prior to v2.3.0, Configure System Charts](#5-for-rancher-versions-prior-to-v2-3-0-configure-system-charts) -### A. Add the Helm Chart Repository +# 1. 
Add the Helm Chart Repository From a system that has access to the internet, fetch the latest Helm chart and copy the resulting manifests to a system that has access to the Rancher server cluster. @@ -49,7 +49,7 @@ From a system that has access to the internet, fetch the latest Helm chart and c helm fetch rancher-stable/rancher --version=v2.4.8 ``` -### B. Choose your SSL Configuration +# 2. Choose your SSL Configuration Rancher Server is designed to be secure by default and requires SSL/TLS configuration. @@ -62,7 +62,7 @@ When Rancher is installed on an air gapped Kubernetes cluster, there are two rec | Rancher Generated Self-Signed Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed)
This is the **default** and does not need to be added when rendering the Helm template. | yes | | Certificates from Files | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s).
This option must be passed when rendering the Rancher Helm template. | no | -### C. Render the Rancher Helm Template +# 3. Render the Rancher Helm Template When setting up the Rancher Helm template, there are several options in the Helm chart that are designed specifically for air gap installations. @@ -74,7 +74,9 @@ When setting up the Rancher Helm template, there are several options in the Helm Based on the choice you made in [2. Choose your SSL Configuration](#2-choose-your-ssl-configuration), complete one of the procedures below. -{{% accordion id="self-signed" label="Option A-Default Self-Signed Certificate" %}} +### Option A: Default Self-Signed Certificate + +{{% accordion id="k8s-1" label="Click to expand" %}} By default, Rancher generates a CA and uses cert-manager to issue the certificate for access to the Rancher server interface. @@ -89,9 +91,9 @@ By default, Rancher generates a CA and uses cert-manager to issue the certificat 1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://hub.helm.sh/charts/jetstack/cert-manager). - ```plain - helm fetch jetstack/cert-manager --version v1.0.4 - ``` + ```plain + helm fetch jetstack/cert-manager --version v1.0.4 + ``` 1. Render the cert manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files. ```plain @@ -131,7 +133,9 @@ By default, Rancher generates a CA and uses cert-manager to issue the certificat {{% /accordion %}} -{{% accordion id="secret" label="Option B: Certificates From Files using Kubernetes Secrets" %}} +### Option B: Certificates From Files using Kubernetes Secrets + +{{% accordion id="k8s-2" label="Click to expand" %}} Create Kubernetes secrets from your own certificates for Rancher to use.
The common name for the cert will need to match the `hostname` option in the command below, or the ingress controller will fail to provision the site for Rancher. @@ -172,15 +176,17 @@ Then refer to [Adding TLS Secrets]({{}}/rancher/v2.x/en/installation/re {{% /accordion %}} -### D. Install Rancher +# 4. Install Rancher Copy the rendered manifest directories to a system that has access to the Rancher server cluster to complete installation. Use `kubectl` to create namespaces and apply the rendered manifests. -If you chose to use self-signed certificates in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), install cert-manager. +If you choose to use self-signed certificates in [2. Choose your SSL Configuration](#2-choose-your-ssl-configuration), install cert-manager. -{{% accordion id="install-cert-manager" label="Self-Signed Certificate Installs - Install Cert-manager" %}} +### For Self-Signed Certificate Installs, Install Cert-manager + +{{% accordion id="install-cert-manager" label="Click to expand" %}} If you are using self-signed certificates, install cert-manager: @@ -204,7 +210,7 @@ kubectl apply -R -f ./cert-manager {{% /accordion %}} -Install Rancher: +### Install Rancher with kubectl ```plain kubectl create namespace cattle-system @@ -214,11 +220,11 @@ kubectl -n cattle-system apply -R -f ./rancher > **Note:** If you don't intend to send telemetry data, opt out of [telemetry]({{}}/rancher/v2.x/en/faq/telemetry/) during the initial login. Leaving this active in an air-gapped environment can cause issues if the sockets cannot be opened successfully. -### E. For Rancher versions prior to v2.3.0, Configure System Charts +# 5. For Rancher versions prior to v2.3.0, Configure System Charts If you are installing Rancher versions prior to v2.3.0, you will not be able to use the packaged system charts. Since the Rancher system charts are hosted on GitHub, an air gapped installation will not be able to access these charts.
Therefore, you must [configure the Rancher system charts]({{}}/rancher/v2.x/en/installation/resources/local-system-charts/#setting-up-system-charts-for-rancher-prior-to-v2-3-0). -### Additional Resources +# Additional Resources These resources could be helpful when installing Rancher: @@ -229,7 +235,13 @@ These resources could be helpful when installing Rancher: {{% /tab %}} {{% tab "Docker Install" %}} -The Docker installation is for Rancher users that are wanting to **test** out Rancher. Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. **Important: If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker installation to a Kubernetes Installation.** Instead of running the single node installation, you have the option to follow the Kubernetes Install guide, but only use one node to install Rancher. Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a Kubernetes Installation. +The Docker installation is for Rancher users who want to test out Rancher. + +Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server. + +> **Important:** For Rancher v2.0-v2.4, there is no upgrade path to transition your Docker installation to a Kubernetes Installation. Instead of running the single node installation, you have the option to follow the Kubernetes Install guide, but only use one node to install Rancher.
Afterwards, you can scale up the etcd nodes in your Kubernetes cluster to make it a Kubernetes Installation. + +For Rancher v2.5+, the backup application can be used to migrate the Rancher server from a Docker install to a Kubernetes install using [these steps.]({{}}/rancher/v2.x/en/backups/v2.5/migrating-rancher/) For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, like when you login or interact with a cluster. @@ -247,7 +259,9 @@ For security purposes, SSL (Secure Sockets Layer) is required when using Rancher Choose from the following options: -{{% accordion id="option-a" label="Option A-Default Self-Signed Certificate" %}} +### Option A: Default Self-Signed Certificate + +{{% accordion id="option-a" label="Click to expand" %}} If you are installing Rancher in a development or testing environment where identity verification isn't a concern, install Rancher using the self-signed certificate that it generates. This installation option omits the hassle of generating a certificate yourself. @@ -270,7 +284,10 @@ docker run -d --restart=unless-stopped \ ``` {{% /accordion %}} -{{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Self-Signed" %}} + +### Option B: Bring Your Own Certificate: Self-Signed + +{{% accordion id="option-b" label="Click to expand" %}} In development or testing environments where your team will access your Rancher server, create a self-signed certificate for use with your install so that your team can verify they're connecting to your instance of Rancher. 
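If you do not already have a certificate, one common way to produce a self-signed certificate and key pair is with `openssl` — a minimal sketch, where `rancher.example.com` is a placeholder for your own Rancher hostname:

```shell
# Generate a private key and a self-signed certificate valid for one year.
# The CN must match the hostname you will use to reach Rancher.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout rancher.key -out rancher.crt \
  -subj "/CN=rancher.example.com"

# Inspect the certificate subject to confirm the common name.
openssl x509 -in rancher.crt -noout -subject
```

The resulting `rancher.crt` and `rancher.key` files are what you mount into the Rancher container in the `docker run` command for this option.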
@@ -306,7 +323,10 @@ docker run -d --restart=unless-stopped \ ``` {{% /accordion %}} -{{% accordion id="option-c" label="Option C-Bring Your Own Certificate: Signed by Recognized CA" %}} + +### Option C: Bring Your Own Certificate: Signed by Recognized CA + +{{% accordion id="option-c" label="Click to expand" %}} In development or testing environments where you're exposing an app publicly, use a certificate signed by a recognized CA so that your user base doesn't encounter security warnings. diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/_index.md index 0e0390de878..5f1e867eb7b 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/launch-kubernetes/_index.md @@ -1,5 +1,5 @@ --- -title: '3. Install Kubernetes (RKE and K3s installs only)' +title: '3. Install Kubernetes (Skip for Docker Installs)' weight: 300 aliases: - /rancher/v2.x/en/installation/air-gap-high-availability/install-kube diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/_index.md index 6c0b68fceea..9231581f370 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/_index.md @@ -114,16 +114,14 @@ _Available as of v2.3.0_ For Rancher servers that will provision Linux and Windows clusters, there are distinctive steps to populate your private registry for the Windows images and the Linux images. 
Since a Windows cluster is a mix of Linux and Windows nodes, the Linux images pushed into the private registry are manifests. -### Windows Steps +# Windows Steps The Windows images need to be collected and pushed from a Windows server workstation. -A. Find the required assets for your Rancher version
-B. Save the images to your Windows Server workstation
-C. Prepare the Docker daemon
-D. Populate the private registry - -{{% accordion label="Collecting and Populating Windows Images into the Private Registry"%}} +1. Find the required assets for your Rancher version +2. Save the images to your Windows Server workstation +3. Prepare the Docker daemon +4. Populate the private registry ### Prerequisites @@ -133,7 +131,9 @@ The workstation must have Docker 18.02+ in order to support manifests, which are Your registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests. -### A. Find the required assets for your Rancher version + + +### 1. Find the required assets for your Rancher version 1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. @@ -145,7 +145,9 @@ Your registry must support manifests. As of April 2020, Amazon Elastic Container | `rancher-save-images.ps1` | This script pulls all the images in the `rancher-windows-images.txt` from Docker Hub and saves all of the images as `rancher-windows-images.tar.gz`. | | `rancher-load-images.ps1` | This script loads the images from the `rancher-windows-images.tar.gz` file and pushes them to your private registry. | -### B. Save the images to your Windows Server workstation + + +### 2. Save the images to your Windows Server workstation 1. Using `powershell`, go to the directory that has the files that were downloaded in the previous step. @@ -156,7 +158,9 @@ Your registry must support manifests. As of April 2020, Amazon Elastic Container **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-windows-images.tar.gz`. Check that the output is in the directory. -### C. Prepare the Docker daemon + + +### 3. 
Prepare the Docker daemon Append your private registry address to the `allow-nondistributable-artifacts` config field in the Docker daemon (`C:\ProgramData\Docker\config\daemon.json`). Since the base image of Windows images are maintained by the `mcr.microsoft.com` registry, this step is required as the layers in the Microsoft registry are missing from Docker Hub and need to be pulled into the private registry. @@ -171,7 +175,9 @@ Append your private registry address to the `allow-nondistributable-artifacts` c } ``` -### D. Populate the private registry + + +### 4. Populate the private registry Move the images in the `rancher-windows-images.tar.gz` to your private registry using the scripts to load the images. @@ -187,18 +193,14 @@ The `rancher-windows-images.txt` is expected to be on the workstation in the sam ./rancher-load-images.ps1 --registry ``` -{{% /accordion %}} - -### Linux Steps +# Linux Steps The Linux images need to be collected and pushed from a Linux host, but _must be done after_ populating the Windows images into the private registry. These steps are different from the Linux-only steps, as the Linux images that are pushed will actually be manifests that support Windows and Linux images. -A. Find the required assets for your Rancher version<br/>
-B. Collect all the required images
-C. Save the images to your Linux workstation
D. Populate the private registry - -{{% accordion label="Collecting and Populating Linux Images into the Private Registry" %}} +1. Find the required assets for your Rancher version +2. Collect all the required images +3. Save the images to your Linux workstation +4. Populate the private registry ### Prerequisites @@ -208,7 +210,9 @@ These steps expect you to use a Linux workstation that has internet access, acce The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters. -### A. Find the required assets for your Rancher version + + +### 1. Find the required assets for your Rancher version 1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. Click **Assets.** @@ -221,62 +225,72 @@ The workstation must have Docker 18.02+ in order to support manifests, which are | `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from Docker Hub and saves all of the images as `rancher-images.tar.gz`. | | `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. | -### B. Collect all the required images + + +### 2. Collect all the required images **For Kubernetes Installs using Rancher Generated Self-Signed Certificate:** In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://hub.helm.sh/charts/jetstack/cert-manager) image to `rancher-images.txt` as well. You can skip this step if you are using your own certificates. - 1. Fetch the latest `cert-manager` Helm chart and parse the template for image details: - > **Note:** Recent changes to cert-manager require an upgrade.
If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). - ```plain - helm repo add jetstack https://charts.jetstack.io - helm repo update - helm fetch jetstack/cert-manager --version v0.12.0 - helm template ./cert-manager-.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt - ``` +1. Fetch the latest `cert-manager` Helm chart and parse the template for image details: + > **Note:** Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/). + ```plain + helm repo add jetstack https://charts.jetstack.io + helm repo update + helm fetch jetstack/cert-manager --version v0.12.0 + helm template ./cert-manager-.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt + ``` - 2. Sort and unique the images list to remove any overlap between the sources: - ```plain - sort -u rancher-images.txt -o rancher-images.txt - ``` +2. Sort and unique the images list to remove any overlap between the sources: + ```plain + sort -u rancher-images.txt -o rancher-images.txt + ``` -### C. Save the images to your workstation + + +### 3. Save the images to your workstation 1. Make `rancher-save-images.sh` an executable: + ``` chmod +x rancher-save-images.sh ``` 1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images: + ```plain ./rancher-save-images.sh --image-list ./rancher-images.txt ``` - **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-images.tar.gz`. Check that the output is in the directory. 
+**Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, your current directory will output a tarball named `rancher-images.tar.gz`. Check that the output is in the directory. -### D. Populate the private registry + + +### 4. Populate the private registry Move the images in the `rancher-images.tar.gz` to your private registry using the `rancher-load-images.sh` script to load the images. The image list, `rancher-images.txt` or `rancher-windows-images.txt`, is expected to be on the workstation in the same directory that you are running the `rancher-load-images.sh` script. The `rancher-images.tar.gz` should also be in the same directory. 1. Log into your private registry if required: - ```plain - docker login - ``` + +```plain +docker login +``` 1. Make `rancher-load-images.sh` an executable: - ``` - chmod +x rancher-load-images.sh - ``` + +``` +chmod +x rancher-load-images.sh +``` 1. Use `rancher-load-images.sh` to extract, tag and push the images from `rancher-images.tar.gz` to your private registry: - ```plain - ./rancher-load-images.sh --image-list ./rancher-images.txt \ - --windows-image-list ./rancher-windows-images.txt \ - --registry - ``` -{{% /accordion %}} +```plain +./rancher-load-images.sh --image-list ./rancher-images.txt \ + --windows-image-list ./rancher-windows-images.txt \ + --registry +``` + {{% /tab %}} {{% /tabs %}} diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/_index.md index 7465fe5a2cd..02698e12901 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/air-gap/prepare-nodes/_index.md @@ -11,6 +11,8 @@ An air gapped environment is an environment where the Rancher server is installe The
infrastructure depends on whether you are installing Rancher on a K3s Kubernetes cluster, an RKE Kubernetes cluster, or a single Docker container. For more information on each installation option, refer to [this page.]({{}}/rancher/v2.x/en/installation/) +As of Rancher v2.5, Rancher can be installed on any Kubernetes cluster. The RKE and K3s Kubernetes infrastructure tutorials below are still included for convenience. + {{% tabs %}} {{% tab "K3s" %}} We recommend setting up the following infrastructure for a high-availability installation: diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md index 80ea46df0ec..f8a717013fc 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/_index.md @@ -46,6 +46,11 @@ For security purposes, SSL (Secure Sockets Layer) is required when using Rancher Choose from the following options: +- [Option A: Default Rancher-generated Self-signed Certificate](#option-a-default-rancher-generated-self-signed-certificate) +- [Option B: Bring Your Own Certificate, Self-signed](#option-b-bring-your-own-certificate-self-signed) +- [Option C: Bring Your Own Certificate, Signed by a Recognized CA](#option-c-bring-your-own-certificate-signed-by-a-recognized-ca) +- [Option D: Let's Encrypt Certificate](#option-d-let-s-encrypt-certificate) + ### Option A: Default Rancher-generated Self-signed Certificate If you are installing Rancher in a development or testing environment where identity verification isn't a concern, install Rancher using the self-signed certificate that it generates. This installation option omits the hassle of generating a certificate yourself. 
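The basic shape of the command for this option is sketched below. The `:latest` tag is a placeholder — pin a specific Rancher version for anything beyond a quick test — and `--privileged` is required for Docker installs as of Rancher v2.5:

```plain
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest
```

Rancher will generate its own CA and serve a self-signed certificate on port 443; your browser will warn about it, which is expected for this option.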
@@ -170,5 +175,5 @@ Refer to [this page](./troubleshooting) for frequently asked questions and troub ## What's Next? -- **Recommended:** Review [Single Node Backup and Restoration]({{}}/rancher/v2.x/en/installation/backups-and-restoration/single-node-backup-and-restoration/). Although you don't have any data you need to back up right now, we recommend creating backups after regular Rancher use. +- **Recommended:** Review [Single Node Backup and Restore]({{}}/rancher/v2.x/en/installation/backups-and-restoration/single-node-backup-and-restoration/). Although you don't have any data you need to back up right now, we recommend creating backups after regular Rancher use. - Create a Kubernetes cluster: [Provisioning Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-provisioning/). diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/_index.md index 2493ab4c99a..00db6582884 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-upgrades/_index.md @@ -13,7 +13,7 @@ The following instructions will guide you through upgrading a Rancher server tha # Prerequisites -- **Review the [known upgrade issues]({{}}/rancher/v2.x/en/upgrades/upgrades/#known-upgrade-issues) and [caveats]({{}}/rancher/v2.x/en/upgrades/upgrades/#caveats)** in the Rancher documentation for the most noteworthy issues to consider when upgrading Rancher. 
A more complete list of known issues for each Rancher version can be found in the release notes on [GitHub](https://github.com/rancher/rancher/releases) and on the [Rancher forums.](https://forums.rancher.com/c/announcements/12) +- **Review the [known upgrade issues]({{}}/rancher/v2.x/en/upgrades/upgrades/#known-upgrade-issues)** in the Rancher documentation for the most noteworthy issues to consider when upgrading Rancher. A more complete list of known issues for each Rancher version can be found in the release notes on [GitHub](https://github.com/rancher/rancher/releases) and on the [Rancher forums.](https://forums.rancher.com/c/announcements/12) Note that upgrades to or from any chart in the [rancher-alpha repository]({{}}/rancher/v2.x/en/installation/resources/chart-options/#helm-chart-repositories/) aren’t supported. - **For [air gap installs only,]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap) collect and populate images for the new Rancher server version.** Follow the guide to [populate your private registry]({{}}/rancher/v2.x/en/installation/other-installation-methods/air-gap/populate-private-registry/) with the images for the Rancher version that you want to upgrade to. # Placeholder Review @@ -28,7 +28,15 @@ docker stop In this command, `` is the name of your Rancher container. -Cross reference the image and reference table below to learn how to obtain this placeholder data. Write down or copy this information before starting the upgrade. +# Get Data for Upgrade Commands + +To obtain the data to replace the placeholders, run: + +``` +docker ps +``` + +Write down or copy this information before starting the upgrade. Terminal `docker ps` Command, Displaying Where to Find `` and `` ![Placeholder Reference]({{}}/img/rancher/placeholder-ref.png) @@ -47,14 +55,14 @@ You can obtain `` and `` by loggi During upgrade, you create a copy of the data from your current Rancher container and a backup in case something goes wrong.
Then you deploy the new version of Rancher in a new container using your existing data. Follow the steps to upgrade Rancher server: -- [A. Create a copy of the data from your Rancher server container](#a-create-a-copy-of-the-data-from-your-rancher-server-container) -- [B. Create a backup tarball](#b-create-a-backup-tarball) -- [C. Pull the new Docker image](#c-pull-the-new-docker-image) -- [D. Start the new Rancher server container](#d-start-the-new-rancher-server-container) -- [E. Verify the Upgrade](#e-verify-the-upgrade) -- [F. Clean up your old Rancher server container](#f-clean-up-your-old-rancher-server-container) +- [1. Create a copy of the data from your Rancher server container](#1-create-a-copy-of-the-data-from-your-rancher-server-container) +- [2. Create a backup tarball](#2-create-a-backup-tarball) +- [3. Pull the new Docker image](#3-pull-the-new-docker-image) +- [4. Start the new Rancher server container](#4-start-the-new-rancher-server-container) +- [5. Verify the Upgrade](#5-verify-the-upgrade) +- [6. Clean up your old Rancher server container](#6-clean-up-your-old-rancher-server-container) -### A. Create a copy of the data from your Rancher server container +# 1. Create a copy of the data from your Rancher server container 1. Using a remote Terminal connection, log into the node running your Rancher server. @@ -70,7 +78,7 @@ During upgrade, you create a copy of the data from your current Rancher containe docker create --volumes-from --name rancher-data rancher/rancher: ``` -### B. Create a backup tarball +# 2. Create a backup tarball 1. From the data container that you just created (`rancher-data`), create a backup tarball (`rancher-data-backup--.tar.gz`). @@ -92,7 +100,7 @@ During upgrade, you create a copy of the data from your current Rancher containe 1. Move your backup tarball to a safe location external from your Rancher server. -### C. Pull the New Docker Image +# 3. 
Pull the New Docker Image Pull the image of the Rancher version that you want to upgrade to. @@ -104,7 +112,7 @@ Placeholder | Description docker pull rancher/rancher: ``` -### D. Start the New Rancher Server Container +# 4. Start the New Rancher Server Container Start a new Rancher server container using the data from the `rancher-data` container. Remember to pass in all the environment variables that you had used when you started the original container. @@ -126,7 +134,9 @@ To see the command to use when starting the new Rancher server container, choose Select which option you had installed Rancher server -{{% accordion id="option-a" label="Option A-Default Self-Signed Certificate" %}} +### Option A: Default Self-Signed Certificate + +{{% accordion id="option-a" label="Click to expand" %}} If you have selected to use the Rancher generated self-signed certificate, you add the `--volumes-from rancher-data` to the command that you had started your original Rancher server container. @@ -146,7 +156,9 @@ As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-r {{% /accordion %}} -{{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Self-Signed" %}} +### Option B: Bring Your Own Certificate: Self-Signed + +{{% accordion id="option-b" label="Click to expand" %}} If you have selected to bring your own self-signed certificate, you add the `--volumes-from rancher-data` to the command that you had started your original Rancher server container and need to have access to the same certificate that you had originally installed with. 
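The Option B upgrade command can be sketched as follows. The certificate directory and version tag are assumptions for illustration; the in-container mount paths (`/etc/rancher/ssl/...`) follow the ones used for bring-your-own-certificate installs in this guide. The sketch only builds and prints the command:

```shell
# Hedged sketch of the Option B (bring your own self-signed certificate)
# upgrade command. CERT_DIR and NEW_VERSION are assumptions; replace them
# with your certificate directory and target release.
CERT_DIR="/opt/rancher/ssl"
NEW_VERSION="v2.5.1"
UPGRADE_CMD="docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v ${CERT_DIR}/cert.pem:/etc/rancher/ssl/cert.pem \
  -v ${CERT_DIR}/key.pem:/etc/rancher/ssl/key.pem \
  -v ${CERT_DIR}/cacerts.pem:/etc/rancher/ssl/cacerts.pem \
  --privileged \
  rancher/rancher:${NEW_VERSION}"
# Print the command for review before running it on the node.
echo "$UPGRADE_CMD"
```

`--volumes-from rancher-data` is what carries your existing data into the new container, so never omit it during an upgrade.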
@@ -174,7 +186,10 @@ docker run -d --volumes-from rancher-data \ As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) {{% /accordion %}} -{{% accordion id="option-c" label="Option C-Bring Your Own Certificate: Signed by Recognized CA" %}} + +### Option C: Bring Your Own Certificate: Signed by Recognized CA + +{{% accordion id="option-c" label="Click to expand" %}} If you have selected to use a certificate signed by a recognized CA, you add the `--volumes-from rancher-data` to the command that you had started your original Rancher server container and need to have access to the same certificates that you had originally installed with. Remember to include `--no-cacerts` as an argument to the container to disable the default CA certificate generated by Rancher. @@ -200,7 +215,10 @@ docker run -d --volumes-from rancher-data \ As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) {{% /accordion %}} -{{% accordion id="option-d" label="Option D-Let's Encrypt Certificate" %}} + +### Option D: Let's Encrypt Certificate + +{{% accordion id="option-d" label="Click to expand" %}} >**Remember:** Let's Encrypt provides rate limits for requesting new certificates. Therefore, limit how often you create or destroy the container. For more information, see [Let's Encrypt documentation on rate limits](https://letsencrypt.org/docs/rate-limits/). @@ -238,7 +256,9 @@ For security purposes, SSL (Secure Sockets Layer) is required when using Rancher When starting the new Rancher server container, choose from the following options: -{{% accordion id="option-a" label="Option A-Default Self-Signed Certificate" %}} +### Option A: Default Self-Signed Certificate + +{{% accordion id="option-a" label="Click to expand" %}} If you have selected to use the Rancher generated self-signed certificate, you add the `--volumes-from rancher-data` to the command that you had started your original Rancher server container. 
@@ -260,7 +280,9 @@ Placeholder | Description As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) {{% /accordion %}} -{{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Self-Signed" %}} +### Option B: Bring Your Own Certificate: Self-Signed + +{{% accordion id="option-b" label="Click to expand" %}} If you have selected to bring your own self-signed certificate, you add the `--volumes-from rancher-data` to the command that you had started your original Rancher server container and need to have access to the same certificate that you had originally installed with. @@ -289,7 +311,9 @@ docker run -d --restart=unless-stopped \ As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-rancher-v2-5) {{% /accordion %}} -{{% accordion id="option-c" label="Option C-Bring Your Own Certificate: Signed by Recognized CA" %}} +### Option C: Bring Your Own Certificate: Signed by Recognized CA + +{{% accordion id="option-c" label="Click to expand" %}} If you have selected to use a certificate signed by a recognized CA, you add the `--volumes-from rancher-data` to the command that you had started your original Rancher server container and need to have access to the same certificates that you had originally installed with. @@ -324,7 +348,7 @@ As of Rancher v2.5, privileged access is [required.](../#privileged-access-for-r **Result:** You have upgraded Rancher. Data from your upgraded server is now saved to the `rancher-data` container for use in future upgrades. -### E. Verify the Upgrade +# 5. Verify the Upgrade Log into Rancher. Confirm that the upgrade succeeded by checking the version displayed in the bottom-left corner of the browser window. @@ -333,10 +357,10 @@ Log into Rancher. Confirm that the upgrade succeeded by checking the version dis > See [Restoring Cluster Networking]({{}}/rancher/v2.x/en/upgrades/upgrades/namespace-migration/#restoring-cluster-networking). -### F. 
Clean up Your Old Rancher Server Container +# 6. Clean up Your Old Rancher Server Container Remove the previous Rancher server container. If you only stop the previous Rancher server container (and don't remove it), the container may restart after the next server reboot. -## Rolling Back +# Rolling Back If your upgrade does not complete successfully, you can roll back Rancher server and its data to its last healthy state. For more information, see [Docker Rollback]({{}}/rancher/v2.x/en/upgrades/rollbacks/single-node-rollbacks/). diff --git a/content/rancher/v2.x/en/installation/requirements/installing-docker/_index.md b/content/rancher/v2.x/en/installation/requirements/installing-docker/_index.md index b9f42e85179..4414cb08794 100644 --- a/content/rancher/v2.x/en/installation/requirements/installing-docker/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/installing-docker/_index.md @@ -3,7 +3,7 @@ title: Installing Docker weight: 1 --- -Docker is required to be installed on any node that runs the Rancher server. +For Helm CLI installs, Docker is required to be installed on any node that runs the Rancher server. There are a couple of options for installing Docker. One option is to refer to the [official Docker documentation](https://docs.docker.com/install/) about how to install Docker on Linux. The steps will vary based on the Linux distribution. diff --git a/content/rancher/v2.x/en/installation/resources/advanced/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/_index.md index 3c7b9e72cd6..f5e42195535 100644 --- a/content/rancher/v2.x/en/installation/resources/advanced/_index.md +++ b/content/rancher/v2.x/en/installation/resources/advanced/_index.md @@ -1,6 +1,6 @@ --- title: Advanced -weight: 5 +weight: 1000 --- The documents in this section contain resources for less common use cases.
\ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/resources/advanced/air-gap-helm2/install-rancher/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/air-gap-helm2/install-rancher/_index.md index 251d448b9c0..2e3fc143acc 100644 --- a/content/rancher/v2.x/en/installation/resources/advanced/air-gap-helm2/install-rancher/_index.md +++ b/content/rancher/v2.x/en/installation/resources/advanced/air-gap-helm2/install-rancher/_index.md @@ -174,7 +174,7 @@ Copy the rendered manifest directories to a system that has access to the Ranche Use `kubectl` to create namespaces and apply the rendered manifests. -If you chose to use self-signed certificates in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), install cert-manager. +If you choose to use self-signed certificates in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), install cert-manager. {{% accordion id="install-cert-manager" label="Self-Signed Certificate Installs - Install Cert-manager" %}} diff --git a/content/rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/layer-4-lb/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/layer-4-lb/_index.md index 4e1706da721..128ae1697ab 100644 --- a/content/rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/layer-4-lb/_index.md +++ b/content/rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/layer-4-lb/_index.md @@ -392,7 +392,7 @@ During installation, RKE automatically generates a config file named `kube_confi You have a couple of options: -- Create a backup of your Rancher Server in case of a disaster scenario: [High Availability Back Up and Restoration]({{}}/rancher/v2.x/en/installation/backups-and-restoration/ha-backup-and-restoration). 
+- Create a backup of your Rancher Server in case of a disaster scenario: [High Availability Back Up and Restore]({{}}/rancher/v2.x/en/installation/backups-and-restoration/ha-backup-and-restoration). - Create a Kubernetes cluster: [Provisioning Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-provisioning/).
diff --git a/content/rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/layer-7-lb/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/layer-7-lb/_index.md index d4f5bab4941..99126e52803 100644 --- a/content/rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/layer-7-lb/_index.md +++ b/content/rancher/v2.x/en/installation/resources/advanced/helm2/rke-add-on/layer-7-lb/_index.md @@ -279,7 +279,7 @@ During installation, RKE automatically generates a config file named `kube_confi ## What's Next? -- **Recommended:** Review [Creating Backups—High Availability Back Up and Restoration]({{}}/rancher/v2.x/en/backups/backups/ha-backups/) to learn how to backup your Rancher Server in case of a disaster scenario. +- **Recommended:** Review [Creating Backups—High Availability Back Up and Restore]({{}}/rancher/v2.x/en/backups/backups/ha-backups/) to learn how to backup your Rancher Server in case of a disaster scenario. - Create a Kubernetes cluster: [Creating a Cluster]({{}}/rancher/v2.x/en/tasks/clusters/creating-a-cluster/).
diff --git a/content/rancher/v2.x/en/installation/resources/advanced/rke-add-on/layer-4-lb/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/rke-add-on/layer-4-lb/_index.md index 0c5000fd3d6..423ba54fb6a 100644 --- a/content/rancher/v2.x/en/installation/resources/advanced/rke-add-on/layer-4-lb/_index.md +++ b/content/rancher/v2.x/en/installation/resources/advanced/rke-add-on/layer-4-lb/_index.md @@ -390,7 +390,7 @@ During installation, RKE automatically generates a config file named `kube_confi You have a couple of options: -- Create a backup of your Rancher Server in case of a disaster scenario: [High Availability Back Up and Restoration]({{}}/rancher/v2.x/en/installation/backups-and-restoration/ha-backup-and-restoration). +- Create a backup of your Rancher Server in case of a disaster scenario: [High Availability Back Up and Restore]({{}}/rancher/v2.x/en/installation/backups-and-restoration/ha-backup-and-restoration). - Create a Kubernetes cluster: [Provisioning Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-provisioning/).
diff --git a/content/rancher/v2.x/en/installation/resources/advanced/single-node-install-external-lb/_index.md b/content/rancher/v2.x/en/installation/resources/advanced/single-node-install-external-lb/_index.md index eeabd44485b..3f34c911832 100644 --- a/content/rancher/v2.x/en/installation/resources/advanced/single-node-install-external-lb/_index.md +++ b/content/rancher/v2.x/en/installation/resources/advanced/single-node-install-external-lb/_index.md @@ -33,7 +33,7 @@ Make sure that your node fulfills the general [installation requirements.]({{}}/rancher/v2.x/en/installation/requirements) to launch your {{< product >}} Server. +Provision a single Linux host according to our [Requirements]({{}}/rancher/v2.x/en/installation/requirements) to launch your Rancher Server. ## 2. Choose an SSL Option and Install Rancher @@ -166,7 +166,7 @@ http { ## What's Next? -- **Recommended:** Review [Single Node Backup and Restoration]({{}}/rancher/v2.x/en/installation/backups-and-restoration/single-node-backup-and-restoration/). Although you don't have any data you need to back up right now, we recommend creating backups after regular Rancher use. +- **Recommended:** Review [Single Node Backup and Restore]({{}}/rancher/v2.x/en/installation/backups-and-restoration/single-node-backup-and-restoration/). Although you don't have any data you need to back up right now, we recommend creating backups after regular Rancher use. - Create a Kubernetes cluster: [Provisioning Kubernetes Clusters]({{}}/rancher/v2.x/en/cluster-provisioning/).
diff --git a/content/rancher/v2.x/en/installation/resources/chart-options/_index.md b/content/rancher/v2.x/en/installation/resources/chart-options/_index.md new file mode 100644 index 00000000000..5a40f79404e --- /dev/null +++ b/content/rancher/v2.x/en/installation/resources/chart-options/_index.md @@ -0,0 +1,6 @@ +--- +title: Rancher Helm Chart Options +weight: 50 +--- + +The Rancher Helm chart options reference moved to [this page.]({{}}/rancher/v2.x/en/installation/install-rancher-on-k8s/chart-options/) \ No newline at end of file diff --git a/content/rancher/v2.x/en/installation/resources/choosing-version/_index.md b/content/rancher/v2.x/en/installation/resources/choosing-version/_index.md index 328f9066724..e05682c8ecd 100644 --- a/content/rancher/v2.x/en/installation/resources/choosing-version/_index.md +++ b/content/rancher/v2.x/en/installation/resources/choosing-version/_index.md @@ -11,6 +11,8 @@ For a high-availability installation of Rancher, which is recommended for produc For Docker installations of Rancher, which is used for development and testing, you will install Rancher as a **Docker image.** +The Helm chart version also applies to RancherD installs because RancherD installs the Rancher Helm chart on a Kubernetes cluster. + {{% tabs %}} {{% tab "Helm Charts" %}} diff --git a/content/rancher/v2.x/en/installation/resources/installing-docker/_index.md b/content/rancher/v2.x/en/installation/resources/installing-docker/_index.md deleted file mode 100644 index 9e040067e5d..00000000000 --- a/content/rancher/v2.x/en/installation/resources/installing-docker/_index.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Installing Docker -weight: 1 -aliases: - - /rancher/v2.x/en/installation/requirements/installing-docker ---- - -Docker is required to be installed on any node that runs the Rancher server. - -There are a couple of options for installing Docker. 
One option is to refer to the [official Docker documentation](https://docs.docker.com/install/) about how to install Docker on Linux. The steps will vary based on the Linux distribution. - -Another option is to use one of Rancher's Docker installation scripts, which are available for most recent versions of Docker. - -For example, this command could be used to install Docker 19.03 on Ubuntu: - -``` -curl https://releases.rancher.com/install-docker/19.03.sh | sh -``` - -Rancher has installation scripts for every version of upstream Docker that Kubernetes supports. To find out whether a script is available for installing a certain Docker version, refer to this [GitHub repository,](https://github.com/rancher/install-docker) which contains all of Rancher's Docker installation scripts. diff --git a/content/rancher/v2.x/en/installation/resources/local-system-charts/_index.md b/content/rancher/v2.x/en/installation/resources/local-system-charts/_index.md index 2819f2c00a1..e7f4eef10d8 100644 --- a/content/rancher/v2.x/en/installation/resources/local-system-charts/_index.md +++ b/content/rancher/v2.x/en/installation/resources/local-system-charts/_index.md @@ -1,6 +1,6 @@ --- title: Setting up Local System Charts for Air Gapped Installations -weight: 1120 +weight: 120 aliases: - /rancher/v2.x/en/installation/air-gap-single-node/config-rancher-system-charts/_index.md - /rancher/v2.x/en/installation/air-gap-high-availability/config-rancher-system-charts/_index.md diff --git a/content/rancher/v2.x/en/installation/resources/upgrading-cert-manager/_index.md b/content/rancher/v2.x/en/installation/resources/upgrading-cert-manager/_index.md index 00b29200ab3..a66d6a181d3 100644 --- a/content/rancher/v2.x/en/installation/resources/upgrading-cert-manager/_index.md +++ b/content/rancher/v2.x/en/installation/resources/upgrading-cert-manager/_index.md @@ -29,7 +29,7 @@ To address these changes, this guide will do two things: > For reinstalling Rancher with Helm, please check [Option B: 
Reinstalling Rancher Chart]({{}}/rancher/v2.x/en/installation/upgrades-rollbacks/upgrades/ha/#c-upgrade-rancher) under the upgrade Rancher section. -## Upgrade Cert-Manager +# Upgrade Cert-Manager The namespace used in these instructions depends on the namespace cert-manager is currently installed in. If it is in `kube-system`, use that in the instructions below. You can verify by running `kubectl get pods --all-namespaces` and checking which namespace the cert-manager-\* pods are listed in. Do not change the namespace cert-manager is running in, as this can cause issues. @@ -37,7 +37,9 @@ The namespace used in these instructions i In order to upgrade cert-manager, follow these instructions: -{{% accordion id="normal" label="Upgrading cert-manager with Internet access" %}} +### Option A: Upgrade cert-manager with Internet Access + +{{% accordion id="normal" label="Click to expand" %}} 1. [Back up existing resources](https://cert-manager.io/docs/tutorials/backup/) as a precaution ```plain @@ -104,7 +106,10 @@ In order to upgrade cert-manager, follow these instructions: {{% /accordion %}} -{{% accordion id="airgap" label="Upgrading cert-manager in an airgapped environment" %}} +### Option B: Upgrade cert-manager in an Air Gap Environment + +{{% accordion id="airgap" label="Click to expand" %}} + ### Prerequisites Before you can perform the upgrade, you must prepare your air gapped environment by adding the necessary container images to your private registry and downloading or rendering the required Kubernetes manifest files.
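The namespace check described at the start of this section can be scripted. The sketch below parses a sample line in place of live `kubectl get pods --all-namespaces` output; the pod name and namespace shown are illustrative only:

```shell
# Hedged sketch: find the namespace the cert-manager pods run in.
# The sample line stands in for real `kubectl get pods --all-namespaces`
# output; on a real cluster, pipe kubectl's output into awk instead.
sample='kube-system   cert-manager-7c5c945df9-x8r2p   1/1   Running   0   4d'
CM_NAMESPACE="$(echo "$sample" | awk '/cert-manager/ {print $1}')"
echo "Run the upgrade against namespace: $CM_NAMESPACE"
```

Whatever namespace this reports is the one to keep using; as noted above, do not move cert-manager to a different namespace.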
@@ -208,6 +213,7 @@ Before you can perform the upgrade, you must prepare your air gapped environment {{% /accordion %}} +### Verify the Deployment Once you’ve installed cert-manager, you can verify it is deployed correctly by checking the kube-system namespace for running pods: diff --git a/content/rancher/v2.x/en/istio/v2.5/_index.md b/content/rancher/v2.x/en/istio/v2.5/_index.md index a3151365d15..569d5ca3d4d 100644 --- a/content/rancher/v2.x/en/istio/v2.5/_index.md +++ b/content/rancher/v2.x/en/istio/v2.5/_index.md @@ -122,33 +122,4 @@ By default the Egress gateway is disabled, but can be enabled on install or upgr # Additional Steps for Installing Istio on an RKE2 Cluster -Through the **Cluster Explorer,** when installing or upgrading Istio through **Apps & Marketplace,** - -1. Click **Components.** -1. Check the box next to **Enabled CNI.** -1. Add a custom overlay file specifying `cniBinDir` and `cniConfDir`. For more information on these options, refer to the [Istio documentation.](https://istio.io/latest/docs/setup/additional-setup/cni/#helm-chart-parameters) An example is below: - - ```yaml - apiVersion: install.istio.io/v1alpha1 - kind: IstioOperator - spec: - components: - cni: - enabled: true - values: - cni: - image: rancher/istio-install-cni:1.7.3 - excludeNamespaces: - - istio-system - - kube-system - logLevel: info - cniBinDir: /opt/cni/bin - cniConfDir: /etc/cni/net.d - ``` -1. After installing Istio, you'll notice the cni-node pods in the istio-system namespace in a CrashLoopBackoff error. Manually edit the `istio-cni-node` daemonset to include the following on the `install-cni` container: - ```yaml - securityContext: - privileged: true - ``` - -**Result:** Now you should be able to utilize Istio as desired, including sidecar injection and monitoring via Kiali. 
+To install Istio on an RKE2 cluster, follow the steps in [this section.](./setup/enable-istio-in-cluster/rke2) diff --git a/content/rancher/v2.x/en/istio/v2.5/configuration-reference/_index.md b/content/rancher/v2.x/en/istio/v2.5/configuration-reference/_index.md new file mode 100644 index 00000000000..b4498667136 --- /dev/null +++ b/content/rancher/v2.x/en/istio/v2.5/configuration-reference/_index.md @@ -0,0 +1,48 @@ +--- +title: Configuration Options +weight: 3 +--- + +- [Egress Support](#egress-support) +- [Enabling Automatic Sidecar Injection](#enabling-automatic-sidecar-injection) +- [Overlay File](#overlay-file) +- [Selectors and Scrape Configs](#selectors-and-scrape-configs) +- [Enable Istio with Pod Security Policies](#enable-istio-with-pod-security-policies) +- [Additional Steps for Installing Istio on an RKE2 Cluster](#additional-steps-for-installing-istio-on-an-rke2-cluster) +- [Additional Steps for Canal Network Plug-in with Project Network Isolation](#additional-steps-for-canal-network-plug-in-with-project-network-isolation) + +### Egress Support + +By default the Egress gateway is disabled, but can be enabled on install or upgrade through the values.yaml or via the [overlay file]({{}}/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/#overlay-file). + +### Enabling Automatic Sidecar Injection + +Automatic sidecar injection is disabled by default. To enable this, set the `sidecarInjectorWebhook.enableNamespacesByDefault=true` in the values.yaml on install or upgrade. This automatically enables Istio sidecar injection into all new namespaces that are deployed. + +### Overlay File + +An Overlay File is designed to support extensive configuration of your Istio installation. It allows you to make changes to any values available in the [IstioOperator API](https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/). This will ensure you can customize the default installation to fit any scenario. 
+ +The Overlay File will add configuration on top of the default installation that is provided by the Istio chart installation. This means you do not need to redefine the components that are already defined for installation. + +For more information on Overlay Files, refer to the [Istio documentation.](https://istio.io/latest/docs/setup/install/istioctl/#configure-component-settings) + +### Selectors and Scrape Configs + +The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false`, which enables monitoring across all namespaces by default. This ensures you can view traffic, metrics, and graphs for resources deployed in a namespace with the `istio-injection=enabled` label. + +If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources. + +For details, refer to [this section.](./selectors-and-scrape) + +### Enable Istio with Pod Security Policies + +Refer to [this section.](./enable-istio-with-psp) + +### Additional Steps for Installing Istio on an RKE2 Cluster + +Refer to [this section.](./rke2) + +### Additional Steps for Canal Network Plug-in with Project Network Isolation + +Refer to [this section.](./canal-and-project-network) \ No newline at end of file diff --git a/content/rancher/v2.x/en/istio/v2.5/configuration-reference/canal-and-project-network/_index.md b/content/rancher/v2.x/en/istio/v2.5/configuration-reference/canal-and-project-network/_index.md new file mode 100644 index 00000000000..03fc9c11637 --- /dev/null +++ b/content/rancher/v2.x/en/istio/v2.5/configuration-reference/canal-and-project-network/_index.md @@ -0,0 +1,22 @@ +--- +title: Additional Steps for Canal Network Plug-in with Project Network Isolation +weight: 4 +--- + +In clusters where: + +- The [Canal network plug-in]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#canal) is in use.
+- The Project Network Isolation option is enabled. +- You install the Istio Ingress module + +The Istio Ingress Gateway pod won't be able to redirect ingress traffic to the workloads by default. This is because all the namespaces will be inaccessible from the namespace where Istio is installed. You have two options. + +The first option is to add a new Network Policy in each of the namespaces where you intend to have ingress controlled by Istio. Your policy should include the following lines: + +``` +- podSelector: + matchLabels: + app: istio-ingressgateway +``` + +The second option is to move the `istio-system` namespace to the `system` project, which by default is excluded from the network isolation. \ No newline at end of file diff --git a/content/rancher/v2.x/en/istio/v2.5/setup/enable-istio-in-cluster/enable-istio-with-psp/_index.md b/content/rancher/v2.x/en/istio/v2.5/configuration-reference/enable-istio-with-psp/_index.md similarity index 98% rename from content/rancher/v2.x/en/istio/v2.5/setup/enable-istio-in-cluster/enable-istio-with-psp/_index.md rename to content/rancher/v2.x/en/istio/v2.5/configuration-reference/enable-istio-with-psp/_index.md index 7c5f1df618a..247baf1a86e 100644 --- a/content/rancher/v2.x/en/istio/v2.5/setup/enable-istio-in-cluster/enable-istio-with-psp/_index.md +++ b/content/rancher/v2.x/en/istio/v2.5/configuration-reference/enable-istio-with-psp/_index.md @@ -4,6 +4,7 @@ weight: 1 aliases: - /rancher/v2.x/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/enable-istio-with-psp - /rancher/v2.x/en/istio/legacy/setup/enable-istio-in-cluster/enable-istio-with-psp + - /rancher/v2.x/en/istio/v2.5/setup/enable-istio-in-cluster/enable-istio-with-psp --- If you have restrictive Pod Security Policies enabled, then Istio may not be able to function correctly, because it needs certain permissions in order to install itself and manage pod infrastructure. 
In this section, we will configure a cluster with PSPs enabled for an Istio install, and also set up the Istio CNI plugin. diff --git a/content/rancher/v2.x/en/istio/v2.5/configuration-reference/rke2/_index.md b/content/rancher/v2.x/en/istio/v2.5/configuration-reference/rke2/_index.md new file mode 100644 index 00000000000..1999d64cc99 --- /dev/null +++ b/content/rancher/v2.x/en/istio/v2.5/configuration-reference/rke2/_index.md @@ -0,0 +1,35 @@ +--- +title: Additional Steps for Installing Istio on an RKE2 Cluster +weight: 3 +--- + +Through the **Cluster Explorer,** when installing or upgrading Istio through **Apps & Marketplace,** + +1. Click **Components.** +1. Check the box next to **Enabled CNI.** +1. Add a custom overlay file specifying `cniBinDir` and `cniConfDir`. For more information on these options, refer to the [Istio documentation.](https://istio.io/latest/docs/setup/additional-setup/cni/#helm-chart-parameters) An example is below: + + ```yaml + apiVersion: install.istio.io/v1alpha1 + kind: IstioOperator + spec: + components: + cni: + enabled: true + values: + cni: + image: rancher/istio-install-cni:1.7.3 + excludeNamespaces: + - istio-system + - kube-system + logLevel: info + cniBinDir: /opt/cni/bin + cniConfDir: /etc/cni/net.d + ``` +1. After installing Istio, you'll notice the cni-node pods in the istio-system namespace in a CrashLoopBackoff error. Manually edit the `istio-cni-node` daemonset to include the following on the `install-cni` container: + ```yaml + securityContext: + privileged: true + ``` + +**Result:** Now you should be able to utilize Istio as desired, including sidecar injection and monitoring via Kiali. 
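The manual `istio-cni-node` daemonset edit in the last step can also be expressed as a `kubectl patch`. This sketch only builds and prints the command; the container name (`install-cni`) and `istio-system` namespace follow the text above, but verify both against your cluster before running it:

```shell
# Hedged sketch: grant the install-cni container privileged access via a
# strategic merge patch (containers are merged by name), instead of
# hand-editing the daemonset.
PATCH='{"spec":{"template":{"spec":{"containers":[{"name":"install-cni","securityContext":{"privileged":true}}]}}}}'
PATCH_CMD="kubectl -n istio-system patch daemonset istio-cni-node --type=strategic -p '${PATCH}'"
# Print for review; run it manually once you have confirmed the names.
echo "$PATCH_CMD"
```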
diff --git a/content/rancher/v2.x/en/istio/v2.5/configuration-reference/selectors-and-scrape/_index.md b/content/rancher/v2.x/en/istio/v2.5/configuration-reference/selectors-and-scrape/_index.md
new file mode 100644
index 00000000000..f0f3d415e08
--- /dev/null
+++ b/content/rancher/v2.x/en/istio/v2.5/configuration-reference/selectors-and-scrape/_index.md
@@ -0,0 +1,126 @@
+---
+title: Selectors and Scrape Configs
+weight: 2
+---
+
+The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false`, which enables monitoring across all namespaces by default.
+
+This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with the `istio-injection=enabled` label.
+
+If you would like to limit Prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources.
+
+- [Limiting Monitoring to Specific Namespaces by Setting ignoreNamespaceSelectors to True](#limiting-monitoring-to-specific-namespaces-by-setting-ignorenamespaceselectors-to-true)
+- [Enabling Prometheus to Detect Resources in Other Namespaces](#enabling-prometheus-to-detect-resources-in-other-namespaces)
+- [Monitoring Specific Namespaces: Create a Service Monitor or Pod Monitor](#monitoring-specific-namespaces-create-a-service-monitor-or-pod-monitor)
+- [Monitoring Across Namespaces: Set ignoreNamespaceSelectors to False](#monitoring-across-namespaces-set-ignorenamespaceselectors-to-false)
+
+### Limiting Monitoring to Specific Namespaces by Setting ignoreNamespaceSelectors to True
+
+This limits monitoring to specific namespaces.
+
+1. From the **Cluster Explorer**, navigate to **Installed Apps** if Monitoring is already installed, or **Charts** in **Apps & Marketplace**.
+1. If starting a new install, click the **rancher-monitoring** chart, then in **Chart Options** click **Edit as Yaml**.
+1. If updating an existing installation, click on **Upgrade**, then in **Chart Options** click **Edit as Yaml**.
+1. Set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`.
+1. Complete the install or upgrade.
+
+**Result:** Prometheus will be limited to specific namespaces, which means one of the following configurations will need to be set up to continue to view data in the various dashboards.
+
+### Enabling Prometheus to Detect Resources in Other Namespaces
+
+There are two different ways to enable Prometheus to detect resources in other namespaces when `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`:
+
+- **Monitoring specific namespaces:** Add a Service Monitor or Pod Monitor in the namespace with the targets you want to scrape.
+- **Monitoring across namespaces:** Add an `additionalScrapeConfig` to your rancher-monitoring instance to scrape all targets in all namespaces.
+
+### Monitoring Specific Namespaces: Create a Service Monitor or Pod Monitor
+
+This option allows you to define which specific services or pods you would like monitored in a specific namespace.
+
+The usability tradeoff is that you have to create the service monitor or pod monitor per namespace since you cannot monitor across namespaces.
+
+> **Prerequisite:** Define a ServiceMonitor or PodMonitor for ``. An example ServiceMonitor is provided below.
+
+1. From the **Cluster Explorer**, open the kubectl shell.
+1. Run `kubectl create -f .yaml` if the file is stored locally in your cluster.
+1. Or run `cat << EOF | kubectl apply -f -`, paste the file contents into the terminal, then type `EOF` to complete the command.
+1. If starting a new install, click the **rancher-monitoring** chart and scroll down to **Preview Yaml**.
+1. Run `kubectl label namespace istio-injection=enabled` to enable Envoy sidecar injection.
+
+**Result:** `` can be scraped by Prometheus.
+
+
Example Service Monitor for Istio Proxies
+ +```yaml +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: envoy-stats-monitor + namespace: istio-system + labels: + monitoring: istio-proxies +spec: + selector: + matchExpressions: + - {key: istio-prometheus-ignore, operator: DoesNotExist} + namespaceSelector: + any: true + jobLabel: envoy-stats + endpoints: + - path: /stats/prometheus + targetPort: 15090 + interval: 15s + relabelings: + - sourceLabels: [__meta_kubernetes_pod_container_port_name] + action: keep + regex: '.*-envoy-prom' + - action: labeldrop + regex: "__meta_kubernetes_pod_label_(.+)" + - sourceLabels: [__meta_kubernetes_namespace] + action: replace + targetLabel: namespace + - sourceLabels: [__meta_kubernetes_pod_name] + action: replace + targetLabel: pod_name +``` + +### Monitoring across namespaces: Set ignoreNamespaceSelectors to False + +This enables monitoring across namespaces by giving Prometheus additional scrape configurations. + +The usability tradeoff is that all of Prometheus' `additionalScrapeConfigs` are maintained in a single Secret. This could make upgrading difficult if monitoring is already deployed with additionalScrapeConfigs prior to installing Istio. + +1. If starting a new install, **Click** the **rancher-monitoring** chart, then in **Chart Options** click **Edit as Yaml**. +1. If updating an existing installation, click on **Upgrade**, then in **Chart Options** click **Edit as Yaml**. +1. If updating an existing installation, click on **Upgrade** and then **Preview Yaml**. +1. Set`prometheus.prometheusSpec.additionalScrapeConfigs` array to the **Additional Scrape Config** provided below. +1. Complete install or upgrade + +**Result:** All namespaces with the `istio-injection=enabled` label will be scraped by prometheus. + +
Additional Scrape Config
+ +``` yaml +- job_name: 'istio/envoy-stats' + scrape_interval: 15s + metrics_path: /stats/prometheus + kubernetes_sd_configs: + - role: pod + relabel_configs: + - source_labels: [__meta_kubernetes_pod_container_port_name] + action: keep + regex: '.*-envoy-prom' + - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] + action: replace + regex: ([^:]+)(?::\d+)?;(\d+) + replacement: $1:15090 + target_label: __address__ + - action: labelmap + regex: __meta_kubernetes_pod_label_(.+) + - source_labels: [__meta_kubernetes_namespace] + action: replace + target_label: namespace + - source_labels: [__meta_kubernetes_pod_name] + action: replace + target_label: pod_name +``` diff --git a/content/rancher/v2.x/en/istio/v2.5/setup/enable-istio-in-cluster/_index.md b/content/rancher/v2.x/en/istio/v2.5/setup/enable-istio-in-cluster/_index.md index 8a79876fed9..cca2fbda815 100644 --- a/content/rancher/v2.x/en/istio/v2.5/setup/enable-istio-in-cluster/_index.md +++ b/content/rancher/v2.x/en/istio/v2.5/setup/enable-istio-in-cluster/_index.md @@ -9,162 +9,20 @@ aliases: >**Prerequisites:** > >- Only a user with the `cluster-admin` [Kubernetes default role](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) assigned can configure and install Istio in a Kubernetes cluster. ->- If you have pod security policies, you will need to install Istio with the CNI enabled. For details, see [this section.](./enable-istio-with-psp) ->- To install Istio on an RKE2 cluster, additional steps are enabled. For details, see [this section.](./rke2) +>- If you have pod security policies, you will need to install Istio with the CNI enabled. For details, see [this section.]({{}}/rancher/v2.x/en/istio/v2.5/configuration-reference/enable-istio-with-psp) +>- To install Istio on an RKE2 cluster, additional steps are required. 
For details, see [this section.]({{}}/rancher/v2.x/en/istio/v2.5/configuration-reference/rke2) +>- To install Istio in a cluster where the Canal network plug-in is in use and the Project Network isolation option is enabled, additional steps are required. For details, see [this section.]({{}}/rancher/v2.x/en/istio/v2.5/configuration-reference/canal-and-project-network) 1. From the **Cluster Explorer**, navigate to available **Charts** in **Apps & Marketplace** 1. Select the Istio chart from the rancher provided charts 1. If you have not already installed your own monitoring app, you will be prompted to install the rancher-monitoring app. Optional: Set your Selector or Scrape config options on rancher-monitoring app install. 1. Optional: Configure member access and [resource limits]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/resources/) for the Istio components. Ensure you have enough resources on your worker nodes to enable Istio. -1. Optional: Make additional configuration changes to values.yaml if needed -1. Optional: Add additional resources or configuration via the [overlay file](#overlay-file) +1. Optional: Make additional configuration changes to values.yaml if needed. +1. Optional: Add additional resources or configuration via the [overlay file.]({{}}/rancher/v2.x/en/istio/v2.5/configuration-reference/#overlay-file) 1. Click **Install**. **Result:** Istio is installed at the cluster level. -Automatic sidecar injection is disabled by default. To enable this, set the `sidecarInjectorWebhook.enableNamespacesByDefault=true` in the values.yaml on install or upgrade. This automatically enables Istio sidecar injection into all new namespaces that are deployed. +# Additional Config Options -**Note:** In clusters where: - - - The [Canal network plug-in]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#canal) is in use. - - The Project Network Isolation option is enabled. 
- - You install the Istio Ingress module - -The Istio Ingress Gateway pod won't be able to redirect ingress traffic to the workloads by default. This is because all the namespaces will be inaccessible from the namespace where Istio is installed. You have two options. - - -The first option is to add a new Network Policy in each of the namespaces where you intend to have ingress controlled by Istio. Your policy should include the following lines: - -``` -- podSelector: - matchLabels: - app: istio-ingressgateway -``` -The second option is to move the `istio-system` namespace to the `system` project, which by default is excluded from the network isolation - -## Additional Config Options - -### Overlay File - -An Overlay File is designed to support extensive configuration of your Istio installation. It allows you to make changes to any values available in the [IstioOperator API](https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/). This will ensure you can customize the default installation to fit any scenario. - -The Overlay File will add configuration on top of the default installation that is provided from the Istio chart installation. This means you do not need to redefine the components that already defined for installation. - -For more information on Overlay Files, refer to the [documentation](https://istio.io/latest/docs/setup/install/istioctl/#configure-component-settings) - -## Selectors & Scrape Configs - -The Monitoring app sets `prometheus.prometheusSpec.ignoreNamespaceSelectors=false` which enables monitoring across all namespaces by default. This ensures you can view traffic, metrics and graphs for resources deployed in a namespace with `istio-injection=enabled` label. - -If you would like to limit prometheus to specific namespaces, set `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`. Once you do this, you will need to add additional configuration to continue to monitor your resources. 
- -**Set ignoreNamespaceSelectors to True** - -This limits monitoring to specific namespaces. - - -1. From the **Cluster Explorer**, navigate to **Installed Apps** if Monitoring is already installed, or **Charts** in **Apps & Marketplace** -1. If starting a new install, **Click** the **rancher-monitoring** chart, then in **Chart Options** click **Edit as Yaml**. -1. If updating an existing installation, click on **Upgrade**, then in **Chart Options** click **Edit as Yaml**. -1. Set`prometheus.prometheusSpec.ignoreNamespaceSelectors=true` -1. Complete install or upgrade - -**Result:** Prometheus will be limited to specific namespaces which means one of the following configurations will need to be set up to continue to view data in various dashboards - -There are two different ways to enable prometheus to detect resources in other namespaces when `prometheus.prometheusSpec.ignoreNamespaceSelectors=true`: - -1. Add a Service Monitor or Pod Monitor in the namespace with the targets you want to scrape. -1. Add an `additionalScrapeConfig` to your rancher-monitoring instance to scrape all targets in all namespaces. - -**Option 1: Create a Service Monitor or Pod Monitor** - -This option allows you to define which specific services or pods you would like monitored in a specific namespace. - - >Usability tradeoff is that you have to create the service monitor / pod monitor per namespace since you cannot monitor across namespaces. - - **Prerequisite:** define a ServiceMonitor or PodMonitor for ``. An example ServiceMonitor is provided below. - -1. From the **Cluster Explorer**, open the kubectl shell -1. Run `kubectl create -f .yaml` if the file is stored locally in your cluster. -1. Or run `cat<< EOF | kubectl apply -f -`, paste the file contents into the terminal, then run `EOF` to complete the command. -1. If starting a new install, **Click** the **rancher-monitoring** chart and scroll down to **Preview Yaml**. -1. 
Run `kubectl label namespace istio-injection=enabled` to enable the envoy sidecar injection - -**Result:** `` can be scraped by prometheus. - -**Example Service Monitor for Istio Proxies** - -```yaml -apiVersion: monitoring.coreos.com/v1 -kind: ServiceMonitor -metadata: - name: envoy-stats-monitor - namespace: istio-system - labels: - monitoring: istio-proxies -spec: - selector: - matchExpressions: - - {key: istio-prometheus-ignore, operator: DoesNotExist} - namespaceSelector: - any: true - jobLabel: envoy-stats - endpoints: - - path: /stats/prometheus - targetPort: 15090 - interval: 15s - relabelings: - - sourceLabels: [__meta_kubernetes_pod_container_port_name] - action: keep - regex: '.*-envoy-prom' - - action: labeldrop - regex: "__meta_kubernetes_pod_label_(.+)" - - sourceLabels: [__meta_kubernetes_namespace] - action: replace - targetLabel: namespace - - sourceLabels: [__meta_kubernetes_pod_name] - action: replace - targetLabel: pod_name -``` - - - -**Option 3: Set ignoreNamespaceSelectors to False** - -This enables monitoring across namespaces by giving prometheus additional scrape configurations. - - >The usability tradeoff is that all of prometheus' `additionalScrapeConfigs` are maintained in a single Secret. This could make upgrading difficult if monitoring is already deployed with additionalScrapeConfigs prior to installing Istio. - -1. If starting a new install, **Click** the **rancher-monitoring** chart, then in **Chart Options** click **Edit as Yaml**. -1. If updating an existing installation, click on **Upgrade**, then in **Chart Options** click **Edit as Yaml**. -1. If updating an existing installation, click on **Upgrade** and then **Preview Yaml**. -1. Set`prometheus.prometheusSpec.additionalScrapeConfigs` array to the **Additional Scrape Config** provided below. -1. Complete install or upgrade - -**Result:** All namespaces with the `istio-injection=enabled` label will be scraped by prometheus. 
- -**Additional Scrape Config:** -``` yaml -- job_name: 'istio/envoy-stats' - scrape_interval: 15s - metrics_path: /stats/prometheus - kubernetes_sd_configs: - - role: pod - relabel_configs: - - source_labels: [__meta_kubernetes_pod_container_port_name] - action: keep - regex: '.*-envoy-prom' - - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] - action: replace - regex: ([^:]+)(?::\d+)?;(\d+) - replacement: $1:15090 - target_label: __address__ - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) - - source_labels: [__meta_kubernetes_namespace] - action: replace - target_label: namespace - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: pod_name -``` +For more information on configuring Istio, refer to the [configuration reference.]({{}}/rancher/v2.x/en/istio/v2.5/configuration-reference) diff --git a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md index fc75a74b6cc..7550691df6f 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress/_index.md @@ -10,74 +10,63 @@ aliases: Ingress can be added for workloads to provide load balancing, SSL termination and host/path based routing. When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a [Global DNS entry]({{}}/rancher/v2.x/en/catalog/globaldns/). 1. From the **Global** view, open the project that you want to add ingress to. - 1. Click **Resources** in the main navigation bar. Click the **Load Balancing** tab. (In versions prior to v2.3.0, just click the **Load Balancing** tab.) Then click **Add Ingress**. - 1. Enter a **Name** for the ingress. - 1. Select an existing **Namespace** from the drop-down list. 
Alternatively, you can create a new [namespace]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#namespaces) on the fly by clicking **Add to a new namespace**. - -1. Create ingress forwarding **Rules**. - - - **Automatically generate a xip.io hostname** - - If you choose this option, ingress routes requests to hostname to a DNS name that's automatically generated. Rancher uses [xip.io](http://xip.io/) to automatically generates the DNS name. This option is best used for testing, _not_ production environments. - - >**Note:** To use this option, you must be able to resolve to `xip.io` addresses. - - 1. Add a **Target Backend**. By default, a workload is added to the ingress, but you can add more targets by clicking either **Service** or **Workload**. - - 1. **Optional:** If you want specify a workload or service when a request is sent to a particular hostname path, add a **Path** for the target. For example, if you want requests for `www.mysite.com/contact-us` to be sent to a different service than `www.mysite.com`, enter `/contact-us` in the **Path** field. - - Typically, the first rule that you create does not include a path. - - 1. Select a workload or service from the **Target** drop-down list for each target you've added. - - 1. Enter the **Port** number that each target operates on. - - - **Specify a hostname to use** - - If you use this option, ingress routes requests for a hostname to the service or workload that you specify. - - 1. Enter the hostname that your ingress will handle request forwarding for. For example, `www.mysite.com`. - - 1. Add a **Target Backend**. By default, a workload is added to the ingress, but you can add more targets by clicking either **Service** or **Workload**. - - 1. **Optional:** If you want specify a workload or service when a request is sent to a particular hostname path, add a **Path** for the target. 
For example, if you want requests for `www.mysite.com/contact-us` to be sent to a different service than `www.mysite.com`, enter `/contact-us` in the **Path** field. - - Typically, the first rule that you create does not include a path. - - 1. Select a workload or service from the **Target** drop-down list for each target you've added. - - 1. Enter the **Port** number that each target operates on. - - - - **Use as the default backend** - - Use this option to set an ingress rule for handling requests that don't match any other ingress rules. For example, use this option to route requests that can't be found to a `404` page. - - >**Note:** If you deployed Rancher using RKE, a default backend for 404s and 202s is already configured. - - 1. Add a **Target Backend**. Click either **Service** or **Workload** to add the target. - - 1. Select a service or workload from the **Target** drop-down list. - -1. **Optional:** click **Add Rule** to create additional ingress rules. For example, after you create ingress rules to direct requests for your hostname, you'll likely want to create a default backend to handle 404s. - -1. If any of your ingress rules handle requests for encrypted ports, add a certificate to encrypt/decrypt communications. - - >**Note:** You must have an SSL certificate that the ingress can use to encrypt/decrypt communications. For more information see [Adding SSL Certificates]({{}}/rancher/v2.x/en/k8s-in-rancher/certificates/). - - 1. Click **Add Certificate**. - - 1. Select a **Certificate** from the drop-down list. - - 1. Enter the **Host** using encrypted communication. - - 1. To add additional hosts that use the certificate, click **Add Hosts**. - -1. **Optional:** Add [Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) and/or [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) to provide metadata for your ingress. 
-
-    For a list of annotations available for use, see the [Nginx Ingress Controller Documentation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/).
+1. Create ingress forwarding **Rules**. For help configuring the rules, refer to [this section.](#ingress-rule-configuration) If any of your ingress rules handle requests for encrypted ports, add a certificate to encrypt/decrypt communications.
+1. **Optional:** click **Add Rule** to create additional ingress rules. For example, after you create ingress rules to direct requests for your hostname, you'll likely want to create a default backend to handle 404s.
 
 **Result:** Your ingress is added to the project. The ingress begins enforcing your ingress rules.
+
+
+# Ingress Rule Configuration
+
+- [Automatically generate a xip.io hostname](#automatically-generate-a-xip-io-hostname)
+- [Specify a hostname to use](#specify-a-hostname-to-use)
+- [Use as the default backend](#use-as-the-default-backend)
+- [Certificates](#certificates)
+- [Labels and Annotations](#labels-and-annotations)
+
+### Automatically generate a xip.io hostname
+
+If you choose this option, ingress routes requests for your hostname to a DNS name that's automatically generated. Rancher uses [xip.io](http://xip.io/) to automatically generate the DNS name. This option is best used for testing, _not_ production environments.
+
+>**Note:** To use this option, you must be able to resolve to `xip.io` addresses.
+
+1. Add a **Target Backend**. By default, a workload is added to the ingress, but you can add more targets by clicking either **Service** or **Workload**.
+1. **Optional:** If you want to specify a workload or service when a request is sent to a particular hostname path, add a **Path** for the target. For example, if you want requests for `www.mysite.com/contact-us` to be sent to a different service than `www.mysite.com`, enter `/contact-us` in the **Path** field. Typically, the first rule that you create does not include a path.
+1. Select a workload or service from the **Target** drop-down list for each target you've added.
+1. Enter the **Port** number that each target operates on.
+
+### Specify a hostname to use
+
+If you use this option, ingress routes requests for a hostname to the service or workload that you specify.
+
+1. Enter the hostname that your ingress will handle request forwarding for. For example, `www.mysite.com`.
+1. Add a **Target Backend**. By default, a workload is added to the ingress, but you can add more targets by clicking either **Service** or **Workload**.
+1. **Optional:** If you want to specify a workload or service when a request is sent to a particular hostname path, add a **Path** for the target. For example, if you want requests for `www.mysite.com/contact-us` to be sent to a different service than `www.mysite.com`, enter `/contact-us` in the **Path** field. Typically, the first rule that you create does not include a path.
+1. Select a workload or service from the **Target** drop-down list for each target you've added.
+1. Enter the **Port** number that each target operates on.
+
+### Use as the default backend
+
+Use this option to set an ingress rule for handling requests that don't match any other ingress rules. For example, use this option to route requests that can't be found to a `404` page.
+
+>**Note:** If you deployed Rancher using RKE, a default backend for 404s and 202s is already configured.
+
+1. Add a **Target Backend**. Click either **Service** or **Workload** to add the target.
+1. Select a service or workload from the **Target** drop-down list.
+
+### Certificates
+
+>**Note:** You must have an SSL certificate that the ingress can use to encrypt/decrypt communications. For more information, see [Adding SSL Certificates]({{}}/rancher/v2.x/en/k8s-in-rancher/certificates/).
+
+1. Click **Add Certificate**.
+1. Select a **Certificate** from the drop-down list.
+1. Enter the **Host** that will use encrypted communication.
+1. To add additional hosts that use the certificate, click **Add Hosts**.
+
+### Labels and Annotations
+
+Add [Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) and/or [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) to provide metadata for your ingress.
+
+For a list of annotations available for use, see the [Nginx Ingress Controller Documentation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/).
\ No newline at end of file
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md
index af82746d90b..638cd0ccdec 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md
@@ -17,7 +17,7 @@ When a new version of an application image is released on Docker Hub, you can up
 
 1. Review and edit the workload's **Scaling/Upgrade** policy.
 
-    These options control how the upgrade rolls out to containers that are currently running. For example, for scalable deployments, you can chose whether you want to stop old pods before deploying new ones, or vice versa, as well as the upgrade batch size.
+    These options control how the upgrade rolls out to containers that are currently running. For example, for scalable deployments, you can choose whether you want to stop old pods before deploying new ones, or vice versa, as well as the upgrade batch size.
 
 1. Click **Upgrade**.
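For reference, the **Scaling/Upgrade** policy options described above map onto the `strategy` field of the underlying Kubernetes Deployment. A minimal sketch, with illustrative values rather than Rancher defaults:

```yaml
# Hypothetical Deployment excerpt; only the strategy-related fields are shown.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate      # replace pods gradually rather than all at once
    rollingUpdate:
      maxUnavailable: 1      # stop at most one old pod at a time
      maxSurge: 1            # start at most one extra new pod at a time
```

Setting `maxUnavailable: 0` forces new pods to start before old ones are stopped; setting `maxSurge: 0` does the opposite.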
diff --git a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-alerts/_index.md b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-alerts/_index.md index 81fbcc45dd5..aea40f6beee 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-alerts/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-alerts/_index.md @@ -10,6 +10,25 @@ aliases: To keep your clusters and applications healthy and driving your organizational productivity forward, you need to stay informed of events occurring in your clusters and projects, both planned and unplanned. When an event occurs, your alert is triggered, and you are sent a notification. You can then, if necessary, follow up with corrective actions. +This section covers the following topics: + +- [About Alerts](#about-alerts) + - [Alert Event Examples](#alert-event-examples) + - [Alerts Triggered by Prometheus Queries](#alerts-triggered-by-prometheus-queries) + - [Urgency Levels](#urgency-levels) + - [Scope of Alerts](#scope-of-alerts) + - [Managing Cluster Alerts](#managing-cluster-alerts) +- [Adding Cluster Alerts](#adding-cluster-alerts) +- [Cluster Alert Configuration](#cluster-alert-configuration) + - [System Service Alerts](#system-service-alerts) + - [Resource Event Alerts](#resource-event-alerts) + - [Node Alerts](#node-alerts) + - [Node Selector Alerts](#node-selector-alerts) + - [CIS Scan Alerts](#cis-scan-alerts) + - [Metric Expression Alerts](#metric-expression-alerts) + +# About Alerts + Notifiers and alerts are built on top of the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/alertmanager/). Leveraging these tools, Rancher can notify [cluster owners]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) and [project owners]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) of events they need to address. 
Before you can receive alerts, you must configure one or more notifiers in Rancher.
@@ -18,16 +37,7 @@ When you create a cluster, some alert rules are predefined. You can receive thes
 
 For details about what triggers the predefined alerts, refer to the [documentation on default alerts.]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts)
 
-This section covers the following topics:
-
-- [Alert event examples](#alert-event-examples)
-  - [Prometheus queries](#prometheus-queries)
-- [Urgency levels](#urgency-levels)
-- [Scope of alerts](#scope-of-alerts)
-- [Adding cluster alerts](#adding-cluster-alerts)
-- [Managing cluster alerts](#managing-cluster-alerts)
-
-# Alert Event Examples
+### Alert Event Examples
@@ -36,17 +46,17 @@ Some examples of alert events are:
 
 - A scheduled deployment taking place as planned.
 - A node's hardware resources becoming overstressed.
 
-### Prometheus Queries
+### Alerts Triggered by Prometheus Queries
 
-> **Prerequisite:** Monitoring must be [enabled]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring) before you can trigger alerts with custom Prometheus queries or expressions.
+When you edit an alert rule, you will have the opportunity to configure the alert to be triggered based on a Prometheus expression. For examples of expressions, refer to [this page.]({{}}/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/expression/)
 
-When you edit an alert rule, you will have the opportunity to configure the alert to be triggered based on a Prometheus expression. For examples of expressions, refer to [this page.]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression)
+Monitoring must be [enabled]({{}}/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-monitoring/#enabling-cluster-monitoring) before you can trigger alerts with custom Prometheus queries or expressions.
-# Urgency Levels +### Urgency Levels You can set an urgency level for each alert. This urgency appears in the notification you receive, helping you to prioritize your response actions. For example, if you have an alert configured to inform you of a routine deployment, no action is required. These alerts can be assigned a low priority level. However, if a deployment fails, it can critically impact your organization, and you need to react quickly. Assign these alerts a high priority level. -# Scope of Alerts +### Scope of Alerts The scope for alerts can be set at either the cluster level or [project level]({{}}/rancher/v2.x/en/project-admin/tools/alerts/). @@ -57,187 +67,7 @@ At the cluster level, Rancher monitors components in your Kubernetes cluster, an - The resource events from specific system services. - The Prometheus expression cross the thresholds -# Adding Cluster Alerts - -As a [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to send you alerts for cluster events. - ->**Prerequisite:** Before you can receive cluster alerts, you must [add a notifier]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/#adding-notifiers). - -1. From the **Global** view, navigate to the cluster that you want to configure cluster alerts for. Select **Tools > Alerts**. Then click **Add Alert Group**. - -1. Enter a **Name** for the alert that describes its purpose, you could group alert rules for the different purpose. - -1. Based on the type of alert you want to create, complete one of the instruction subsets below. -{{% accordion id="system-service" label="System Service Alerts" %}} -This alert type monitor for events that affect one of the Kubernetes master components, regardless of the node it occurs on. - -1. Select the **System Services** option, and then select an option from the drop-down. 
- - - [controller-manager](https://kubernetes.io/docs/concepts/overview/components/#kube-controller-manager) - - [etcd](https://kubernetes.io/docs/concepts/overview/components/#etcd) - - [scheduler](https://kubernetes.io/docs/concepts/overview/components/#kube-scheduler) - -1. Select the urgency level of the alert. The options are: - - - **Critical**: Most urgent - - **Warning**: Normal urgency - - **Info**: Least urgent -
-
- Select the urgency level based on the importance of the service and how many nodes fill the role within your cluster. For example, if you're making an alert for the `etcd` service, select **Critical**. If you're making an alert for redundant schedulers, **Warning** is more appropriate. - -1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule. - - - **Group Wait Time**: How long to wait to buffer alerts of the same group before sending initially, default to 30 seconds. - - **Group Interval Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 30 seconds. - - **Repeat Wait Time**: How long to wait before re-sending a given alert that has already been sent, default to 1 hour. - -{{% /accordion %}} -{{% accordion id="resource-event" label="Resource Event Alerts" %}} -This alert type monitors for specific events that are thrown from a resource type. - -1. Choose the type of resource event that triggers an alert. The options are: - - - **Normal**: triggers an alert when any standard resource event occurs. - - **Warning**: triggers an alert when unexpected resource events occur. - -1. Select a resource type from the **Choose a Resource** drop-down that you want to trigger an alert. - - - [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) - - [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) - - [Node](https://kubernetes.io/docs/concepts/architecture/nodes/) - - [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/) - - [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) - -1. Select the urgency level of the alert. - - - **Critical**: Most urgent - - **Warning**: Normal urgency - - **Info**: Least urgent -
-
- Select the urgency level of the alert by considering factors such as how often the event occurs or its importance. For example: - - - If you set a normal alert for pods, you're likely to receive alerts often, and individual pods usually self-heal, so select an urgency of **Info**. - - If you set a warning alert for StatefulSets, it's very likely to impact operations, so select an urgency of **Critical**. - -1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule. - - - **Group Wait Time**: How long to wait to buffer alerts of the same group before sending initially, default to 30 seconds. - - **Group Interval Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 30 seconds. - - **Repeat Wait Time**: How long to wait before re-sending a given alert that has already been sent, default to 1 hour. - -{{% /accordion %}} -{{% accordion id="node" label="Node Alerts" %}} -This alert type monitors for events that occur on a specific node. - -1. Select the **Node** option, and then make a selection from the **Choose a Node** drop-down. - -1. Choose an event to trigger the alert. - - - **Not Ready**: Sends you an alert when the node is unresponsive. - - **CPU usage over**: Sends you an alert when the node raises above an entered percentage of its processing allocation. - - **Mem usage over**: Sends you an alert when the node raises above an entered percentage of its memory allocation. - -1. Select the urgency level of the alert. - - - **Critical**: Most urgent - - **Warning**: Normal urgency - - **Info**: Least urgent -
-
- Select the urgency level of the alert based on its impact on operations. For example, an alert triggered when a node's CPU raises above 60% deems an urgency of **Info**, but a node that is **Not Ready** deems an urgency of **Critical**. - -1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule. - - - **Group Wait Time**: How long to wait to buffer alerts of the same group before sending initially, default to 30 seconds. - - **Group Interval Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 30 seconds. - - **Repeat Wait Time**: How long to wait before re-sending a given alert that has already been sent, default to 1 hour. - -{{% /accordion %}} -{{% accordion id="node-selector" label="Node Selector Alerts" %}} -This alert type monitors for events that occur on any node on marked with a label. For more information, see the Kubernetes documentation for [Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). - -1. Select the **Node Selector** option, and then click **Add Selector** to enter a key value pair for a label. This label should be applied to one or more of your nodes. Add as many selectors as you'd like. - -1. Choose an event to trigger the alert. - - - **Not Ready**: Sends you an alert when selected nodes are unresponsive. - - **CPU usage over**: Sends you an alert when selected nodes raise above an entered percentage of processing allocation. - - **Mem usage over**: Sends you an alert when selected nodes raise above an entered percentage of memory allocation. - -1. Select the urgency level of the alert. - - - **Critical**: Most urgent - - **Warning**: Normal urgency - - **Info**: Least urgent -
-
- Select the urgency level of the alert based on its impact on operations. For example, an alert triggered when a node's CPU raises above 60% deems an urgency of **Info**, but a node that is **Not Ready** deems an urgency of **Critical**. - -1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule. - - - **Group Wait Time**: How long to wait to buffer alerts of the same group before sending initially, default to 30 seconds. - - **Group Interval Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 30 seconds. - - **Repeat Wait Time**: How long to wait before re-sending a given alert that has already been sent, default to 1 hour. - -{{% /accordion %}} -{{% accordion id="cluster-expression" label="Metric Expression Alerts" %}} -This alert type monitors for the overload from Prometheus expression querying, it would be available after you enable monitoring. - -1. Input or select an **Expression**, the drop down shows the original metrics from Prometheus, including: - - - [**Node**](https://github.com/prometheus/node_exporter) - - [**Container**](https://github.com/google/cadvisor) - - [**ETCD**](https://etcd.io/docs/v3.4.0/op-guide/monitoring/) - - [**Kubernetes Components**](https://github.com/kubernetes/metrics) - - [**Kubernetes Resources**](https://github.com/kubernetes/kube-state-metrics) - - [**Fluentd**](https://docs.fluentd.org/v1.0/articles/monitoring-prometheus) (supported by [Logging]({{}}/rancher/v2.x//en/cluster-admin/tools/logging)) - - [**Cluster Level Grafana**](http://docs.grafana.org/administration/metrics/) - - **Cluster Level Prometheus** - -1. Choose a **Comparison**. - - - **Equal**: Trigger alert when expression value equal to the threshold. - - **Not Equal**: Trigger alert when expression value not equal to the threshold. 
- - **Greater Than**: Trigger alert when expression value greater than to threshold. - - **Less Than**: Trigger alert when expression value equal or less than the threshold. - - **Greater or Equal**: Trigger alert when expression value greater to equal to the threshold. - - **Less or Equal**: Trigger alert when expression value less or equal to the threshold. - -1. Input a **Threshold**, for trigger alert when the value of expression cross the threshold. - -1. Choose a **Comparison**. - -1. Select a duration, for trigger alert when expression value crosses the threshold longer than the configured duration. - -1. Select the urgency level of the alert. - - - **Critical**: Most urgent - - **Warning**: Normal urgency - - **Info**: Least urgent -
-
- Select the urgency level of the alert based on its impact on operations. For example, an alert triggered when a node's load expression ```sum(node_load5) / count(node_cpu_seconds_total{mode="system"})``` raises above 0.6 deems an urgency of **Info**, but 1 deems an urgency of **Critical**. - -1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule. - - - **Group Wait Time**: How long to wait to buffer alerts of the same group before sending initially, default to 30 seconds. - - **Group Interval Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 30 seconds. - - **Repeat Wait Time**: How long to wait before re-sending a given alert that has already been sent, default to 1 hour. - -{{% /accordion %}} - -1. Continue adding more **Alert Rule** to the group. - -1. Finally, choose the [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) to send the alerts to. - - - You can set up multiple notifiers. - - You can change notifier recipients on the fly. - -**Result:** Your alert is configured. A notification is sent when the alert is triggered. - -# Managing Cluster Alerts +### Managing Cluster Alerts After you set up cluster alerts, you can manage each alert object. To manage alerts, browse to the cluster containing the alerts, and then select **Tools > Alerts** that you want to manage. You can: @@ -246,3 +76,271 @@ After you set up cluster alerts, you can manage each alert object. To manage ale - Delete unnecessary alerts - Mute firing alerts - Unmute muted alerts + +# Adding Cluster Alerts + +As a [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to send you alerts for cluster events. 
+
+>**Prerequisite:** Before you can receive cluster alerts, you must [add a notifier]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/#adding-notifiers).
+
+1. From the **Global** view, navigate to the cluster that you want to configure cluster alerts for. Select **Tools > Alerts**. Then click **Add Alert Group**.
+1. Enter a **Name** for the alert group that describes its purpose. You can group alert rules with related purposes under a single group.
+1. Based on the type of alert you want to create, refer to the [cluster alert configuration section.](#cluster-alert-configuration)
+1. Continue adding more **Alert Rules** to the group.
+1. Finally, choose the [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) to send the alerts to.
+
+   - You can set up multiple notifiers.
+   - You can change notifier recipients on the fly.
+1. Click **Create.**
+
+**Result:** Your alert is configured. A notification is sent when the alert is triggered.
+
+
+# Cluster Alert Configuration
+
+ - [System Service Alerts](#system-service-alerts)
+ - [Resource Event Alerts](#resource-event-alerts)
+ - [Node Alerts](#node-alerts)
+ - [Node Selector Alerts](#node-selector-alerts)
+ - [CIS Scan Alerts](#cis-scan-alerts)
+ - [Metric Expression Alerts](#metric-expression-alerts)
+
+# System Service Alerts
+
+This alert type monitors for events that affect one of the Kubernetes master components, regardless of the node on which they occur.
+
+Each of the below sections corresponds to a part of the alert rule configuration section in the Rancher UI.
+
+### When a
+
+Select the **System Services** option, and then select an option from the dropdown:
+
+- [controller-manager](https://kubernetes.io/docs/concepts/overview/components/#kube-controller-manager)
+- [etcd](https://kubernetes.io/docs/concepts/overview/components/#etcd)
+- [scheduler](https://kubernetes.io/docs/concepts/overview/components/#kube-scheduler)
+
+### Is
+
+The alert will be triggered when the selected Kubernetes master component is unhealthy.
+
+### Send a
+
+Select the urgency level of the alert. The options are:
+
+- **Critical**: Most urgent
+- **Warning**: Normal urgency
+- **Info**: Least urgent
+
+  Select the urgency level based on the importance of the service and how many nodes fill the role within your cluster. For example, if you're making an alert for the `etcd` service, select **Critical**. If you're making an alert for redundant schedulers, **Warning** is more appropriate.
+
+### Advanced Options
+
+By default, the below options apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule.
+
+- **Group Wait Time**: How long to buffer alerts of the same group before sending the initial notification. Defaults to 30 seconds.
+- **Group Interval Time**: How long to wait before sending an alert that has been added to a group that already contains fired alerts. Defaults to 30 seconds.
+- **Repeat Wait Time**: How long to wait before re-sending an alert that has already been sent. Defaults to 1 hour.
+
+# Resource Event Alerts
+
+This alert type monitors for specific events that are thrown by a resource type.
+
+Each of the below sections corresponds to a part of the alert rule configuration section in the Rancher UI.
+
+### When a
+
+Choose the type of resource event that triggers an alert. The options are:
+
+- **Normal**: triggers an alert when any standard resource event occurs.
+- **Warning**: triggers an alert when unexpected resource events occur.
+
+From the **Choose a Resource** drop-down, select the resource type that you want to trigger an alert.
+
+- [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)
+- [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
+- [Node](https://kubernetes.io/docs/concepts/architecture/nodes/)
+- [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
+- [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
+
+### Send a
+
+Select the urgency level of the alert.
+
+- **Critical**: Most urgent
+- **Warning**: Normal urgency
+- **Info**: Least urgent
+
+Select the urgency level of the alert by considering factors such as how often the event occurs and its importance. For example:
+
+- If you set a normal alert for pods, you're likely to receive alerts often, and individual pods usually self-heal, so select an urgency of **Info**.
+- If you set a warning alert for StatefulSets, it's very likely to impact operations, so select an urgency of **Critical**.
+
+### Advanced Options
+
+By default, the below options apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule.
+
+- **Group Wait Time**: How long to buffer alerts of the same group before sending the initial notification. Defaults to 30 seconds.
+- **Group Interval Time**: How long to wait before sending an alert that has been added to a group that already contains fired alerts. Defaults to 30 seconds.
+- **Repeat Wait Time**: How long to wait before re-sending an alert that has already been sent. Defaults to 1 hour.
+
+# Node Alerts
+
+This alert type monitors for events that occur on a specific node.
+
+Each of the below sections corresponds to a part of the alert rule configuration section in the Rancher UI.
+
+### When a
+
+Select the **Node** option, and then make a selection from the **Choose a Node** drop-down.
+
+### Is
+
+Choose an event to trigger the alert.
+
+- **Not Ready**: Sends you an alert when the node is unresponsive.
+- **CPU usage over**: Sends you an alert when the node's CPU usage rises above the entered percentage of its processing allocation.
+- **Mem usage over**: Sends you an alert when the node's memory usage rises above the entered percentage of its memory allocation.
+
+### Send a
+
+Select the urgency level of the alert.
+
+- **Critical**: Most urgent
+- **Warning**: Normal urgency
+- **Info**: Least urgent
+
+Select the urgency level of the alert based on its impact on operations. For example, an alert triggered when a node's CPU usage rises above 60% warrants an urgency of **Info**, but a node that is **Not Ready** warrants an urgency of **Critical**.
+
+### Advanced Options
+
+By default, the below options apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule.
+
+- **Group Wait Time**: How long to buffer alerts of the same group before sending the initial notification. Defaults to 30 seconds.
+- **Group Interval Time**: How long to wait before sending an alert that has been added to a group that already contains fired alerts. Defaults to 30 seconds.
+- **Repeat Wait Time**: How long to wait before re-sending an alert that has already been sent. Defaults to 1 hour.
+
+# Node Selector Alerts
+
+This alert type monitors for events that occur on any node marked with a label. For more information, see the Kubernetes documentation for [Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/).
+
+Each of the below sections corresponds to a part of the alert rule configuration section in the Rancher UI.
+
+### When a
+
+Select the **Node Selector** option, and then click **Add Selector** to enter a key-value pair for a label. This label should be applied to one or more of your nodes. Add as many selectors as you'd like.
+
+### Is
+
+Choose an event to trigger the alert.
+
+- **Not Ready**: Sends you an alert when selected nodes are unresponsive.
+- **CPU usage over**: Sends you an alert when a selected node's CPU usage rises above the entered percentage of its processing allocation.
+- **Mem usage over**: Sends you an alert when a selected node's memory usage rises above the entered percentage of its memory allocation.
+
+### Send a
+
+Select the urgency level of the alert.
+
+- **Critical**: Most urgent
+- **Warning**: Normal urgency
+- **Info**: Least urgent
+
+Select the urgency level of the alert based on its impact on operations. For example, an alert triggered when a node's CPU usage rises above 60% warrants an urgency of **Info**, but a node that is **Not Ready** warrants an urgency of **Critical**.
+
+### Advanced Options
+
+By default, the below options apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule.
+
+- **Group Wait Time**: How long to buffer alerts of the same group before sending the initial notification. Defaults to 30 seconds.
+- **Group Interval Time**: How long to wait before sending an alert that has been added to a group that already contains fired alerts. Defaults to 30 seconds.
+- **Repeat Wait Time**: How long to wait before re-sending an alert that has already been sent. Defaults to 1 hour.
+
+# CIS Scan Alerts
+_Available as of v2.4.0_
+
+This alert type is triggered based on the results of a CIS scan.
+
+Each of the below sections corresponds to a part of the alert rule configuration section in the Rancher UI.
+
+### When a
+
+Select **CIS Scan.**
+
+### Is
+
+Choose an event to trigger the alert:
+
+- Completed Scan
+- Has Failure
+
+### Send a
+
+Select the urgency level of the alert.
+
+- **Critical**: Most urgent
+- **Warning**: Normal urgency
+- **Info**: Least urgent
+
+Select the urgency level of the alert based on its impact on operations. For example, an alert triggered when a scan completes warrants an urgency of **Info**, but a scan with failures may warrant an urgency of **Critical**.
+
+### Advanced Options
+
+By default, the below options apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule.
+
+- **Group Wait Time**: How long to buffer alerts of the same group before sending the initial notification. Defaults to 30 seconds.
+- **Group Interval Time**: How long to wait before sending an alert that has been added to a group that already contains fired alerts. Defaults to 30 seconds.
+- **Repeat Wait Time**: How long to wait before re-sending an alert that has already been sent. Defaults to 1 hour.
+
+# Metric Expression Alerts
+
+This alert type is triggered based on a Prometheus expression query. It becomes available after you enable monitoring.
+
+Each of the below sections corresponds to a part of the alert rule configuration section in the Rancher UI.
+
+### When a
+
+Enter or select an **Expression**. The dropdown shows the original metrics from Prometheus, including:
+
+- [**Node**](https://github.com/prometheus/node_exporter)
+- [**Container**](https://github.com/google/cadvisor)
+- [**ETCD**](https://etcd.io/docs/v3.4.0/op-guide/monitoring/)
+- [**Kubernetes Components**](https://github.com/kubernetes/metrics)
+- [**Kubernetes Resources**](https://github.com/kubernetes/kube-state-metrics)
+- [**Fluentd**](https://docs.fluentd.org/v1.0/articles/monitoring-prometheus) (supported by [Logging]({{}}/rancher/v2.x//en/cluster-admin/tools/logging))
+- [**Cluster Level Grafana**](http://docs.grafana.org/administration/metrics/)
+- **Cluster Level Prometheus**
+
+### Is
+
+Choose a comparison:
+
+- **Equal**: Triggers the alert when the expression value is equal to the threshold.
+- **Not Equal**: Triggers the alert when the expression value is not equal to the threshold.
+- **Greater Than**: Triggers the alert when the expression value is greater than the threshold.
+- **Less Than**: Triggers the alert when the expression value is less than the threshold.
+- **Greater or Equal**: Triggers the alert when the expression value is greater than or equal to the threshold.
+- **Less or Equal**: Triggers the alert when the expression value is less than or equal to the threshold.
+
+If applicable, choose a comparison value or a threshold for the alert to be triggered.
+
+### For
+
+Select a duration. The alert is triggered when the expression value crosses the threshold for longer than the configured duration.
+
+### Send a
+
+Select the urgency level of the alert.
+
+- **Critical**: Most urgent
+- **Warning**: Normal urgency
+- **Info**: Least urgent
+
+Select the urgency level of the alert based on its impact on operations. For example, if a node's load expression ```sum(node_load5) / count(node_cpu_seconds_total{mode="system"})``` rises above 0.6, an urgency of **Info** is appropriate, but if it rises above 1, an urgency of **Critical** is warranted.
+
+### Advanced Options
+
+By default, the below options apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule.
+
+- **Group Wait Time**: How long to buffer alerts of the same group before sending the initial notification. Defaults to 30 seconds.
+- **Group Interval Time**: How long to wait before sending an alert that has been added to a group that already contains fired alerts. Defaults to 30 seconds.
+- **Repeat Wait Time**: How long to wait before re-sending an alert that has already been sent. Defaults to 1 hour.
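The comparison-plus-duration behavior described above can be sketched in a few lines. This is a hypothetical illustration of the semantics, not Rancher's implementation: the alert fires only once the expression value has stayed past the threshold continuously for at least the configured duration, and the clock resets whenever the breach ends.

```python
# Sketch of threshold comparisons plus a "for" duration, as used by
# metric expression alert rules. Operator names and sample data are
# illustrative assumptions.

OPS = {
    "equal": lambda v, t: v == t,
    "not-equal": lambda v, t: v != t,
    "greater-than": lambda v, t: v > t,
    "less-than": lambda v, t: v < t,
    "greater-or-equal": lambda v, t: v >= t,
    "less-or-equal": lambda v, t: v <= t,
}

def alert_fires(samples, comparison, threshold, duration):
    """samples: list of (timestamp_seconds, value) pairs in time order."""
    breach_start = None
    for ts, value in samples:
        if OPS[comparison](value, threshold):
            breach_start = ts if breach_start is None else breach_start
            if ts - breach_start >= duration:
                return True  # breach has lasted at least `duration` seconds
        else:
            breach_start = None  # breach ended; the clock resets
    return False

# Load stays above 0.6 from t=60s onward: fires with a 60s duration,
# but a 300s duration is never sustained within this window.
series = [(0, 0.4), (60, 0.7), (120, 0.8), (180, 0.9)]
print(alert_fires(series, "greater-than", 0.6, 60))   # True
print(alert_fires(series, "greater-than", 0.6, 300))  # False
```

This is why a short duration catches brief spikes while a longer duration suppresses them, which is the usual way to reduce noisy alerts.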
\ No newline at end of file diff --git a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-alerts/project-alerts/_index.md b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-alerts/project-alerts/_index.md index 0bb9c6a6f4c..d775555110a 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-alerts/project-alerts/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/v2.0.x-v2.4.x/cluster-alerts/project-alerts/_index.md @@ -20,8 +20,14 @@ This section covers the following topics: - [Default project-level alerts](#default-project-level-alerts) - [Adding project alerts](#adding-project-alerts) - [Managing project alerts](#managing-project-alerts) +- [Project Alert Rule Configuration](#project-alert-rule-configuration) + - [Pod Alerts](#pod-alerts) + - [Workload Alerts](#workload-alerts) + - [Workload Selector Alerts](#workload-selector-alerts) + - [Metric Expression Alerts](#metric-expression-alerts) -## Alerts Scope + +# Alerts Scope The scope for alerts can be set at either the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/) or project level. @@ -32,7 +38,7 @@ At the project level, Rancher monitors specific deployments and sends alerts for * Pod status * The Prometheus expression cross the thresholds -## Default Project-level Alerts +# Default Project-level Alerts When you enable monitoring for the project, some project-level alerts are provided. You can receive these alerts if a [notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers) for them is configured at the cluster level. @@ -43,7 +49,7 @@ When you enable monitoring for the project, some project-level alerts are provid For information on other default alerts, refer to the section on [cluster-level alerts.]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/default-alerts) -## Adding Project Alerts +# Adding Project Alerts >**Prerequisite:** Before you can receive project alerts, you must add a notifier. 
@@ -53,131 +59,21 @@ For information on other default alerts, refer to the section on [cluster-level 1. Enter a **Name** for the alert that describes its purpose, you could group alert rules for the different purpose. -1. Based on the type of alert you want to create, complete one of the instruction subsets below. +1. Based on the type of alert you want to create, fill out the form. For help, refer to the [configuration](#project-alert-rule-configuration) section below. -{{% accordion id="pod" label="Pod Alerts" %}} -This alert type monitors for the status of a specific pod. - -1. Select the **Pod** option, and then select a pod from the drop-down. -1. Select a pod status that triggers an alert: - - - **Not Running** - - **Not Scheduled** - - **Restarted `` times with the last `` Minutes** - -1. Select the urgency level of the alert. The options are: - - - **Critical**: Most urgent - - **Warning**: Normal urgency - - **Info**: Least urgent - - Select the urgency level of the alert based on pod state. For example, select **Info** for Job pod which stop running after job finished. However, if an important pod isn't scheduled, it may affect operations, so choose **Critical**. - -1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule. - - - **Group Wait Time**: How long to wait to buffer alerts of the same group before sending initially, default to 30 seconds. - - **Group Interval Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 30 seconds. - - **Repeat Wait Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 1 hour. - -{{% /accordion %}} -{{% accordion id="workload" label="Workload Alerts" %}} -This alert type monitors for the availability of a workload. - -1. Choose the **Workload** option. 
Then choose a workload from the drop-down. - -1. Choose an availability percentage using the slider. The alert is triggered when the workload's availability on your cluster nodes drops below the set percentage. - -1. Select the urgency level of the alert. - - - **Critical**: Most urgent - - **Warning**: Normal urgency - - **Info**: Least urgent - - Select the urgency level of the alert based on the percentage you choose and the importance of the workload. - -1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule. - - - **Group Wait Time**: How long to wait to buffer alerts of the same group before sending initially, default to 30 seconds. - - **Group Interval Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 30 seconds. - - **Repeat Wait Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 1 hour. - -{{% /accordion %}} -{{% accordion id="workload-selector" label="Workload Selector Alerts" %}} -This alert type monitors for the availability of all workloads marked with tags that you've specified. - -1. Select the **Workload Selector** option, and then click **Add Selector** to enter the key value pair for a label. If one of the workloads drops below your specifications, an alert is triggered. This label should be applied to one or more of your workloads. - -1. Select the urgency level of the alert. - - - **Critical**: Most urgent - - **Warning**: Normal urgency - - **Info**: Least urgent - - Select the urgency level of the alert based on the percentage you choose and the importance of the workload. - -1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule. 
- - - **Group Wait Time**: How long to wait to buffer alerts of the same group before sending initially, default to 30 seconds. - - **Group Interval Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 30 seconds. - - **Repeat Wait Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 1 hour. - -{{% /accordion %}} -{{% accordion id="project-expression" label="Metric Expression Alerts" %}} -
-_Available as of v2.2.4_ - -If you enable [project monitoring]({{}}/rancher/v2.x/en/project-admin/tools/#monitoring), this alert type monitors for the overload from Prometheus expression querying. - -1. Input or select an **Expression**, the drop down shows the original metrics from Prometheus, including: - - - [**Container**](https://github.com/google/cadvisor) - - [**Kubernetes Resources**](https://github.com/kubernetes/kube-state-metrics) - - [**Customize**]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#project-metrics) - - [**Project Level Grafana**](http://docs.grafana.org/administration/metrics/) - - **Project Level Prometheus** - -1. Choose a comparison. - - - **Equal**: Trigger alert when expression value equal to the threshold. - - **Not Equal**: Trigger alert when expression value not equal to the threshold. - - **Greater Than**: Trigger alert when expression value greater than to threshold. - - **Less Than**: Trigger alert when expression value equal or less than the threshold. - - **Greater or Equal**: Trigger alert when expression value greater to equal to the threshold. - - **Less or Equal**: Trigger alert when expression value less or equal to the threshold. - -1. Input a **Threshold**, for trigger alert when the value of expression cross the threshold. - -1. Choose a **Comparison**. - -1. Select a **Duration**, for trigger alert when expression value crosses the threshold longer than the configured duration. - -1. Select the urgency level of the alert. - - - **Critical**: Most urgent - - **Warning**: Normal urgency - - **Info**: Least urgent -
-
- Select the urgency level of the alert based on its impact on operations. For example, an alert triggered when a expression for container memory close to the limit raises above 60% deems an urgency of **Info**, but raised about 95% deems an urgency of **Critical**. - -1. Configure advanced options. By default, the below options will apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule. - - - **Group Wait Time**: How long to wait to buffer alerts of the same group before sending initially, default to 30 seconds. - - **Group Interval Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 30 seconds. - - **Repeat Wait Time**: How long to wait before sending an alert that has been added to a group which contains already fired alerts, default to 1 hour. -
-{{% /accordion %}}
-
-1. Continue adding more **Alert Rule** to the group.
+1. Continue adding more alert rules to the group.

1. Finally, choose the [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) that send you alerts.

    - You can set up multiple notifiers.
    - You can change notifier recipients on the fly.

+1. Click **Create.**
+
+**Result:** Your alert is configured. A notification is sent when the alert is triggered.

-## Managing Project Alerts
+
+# Managing Project Alerts

To manage project alerts, browse to the project whose alerts you want to manage. Then select **Tools > Alerts**. In versions prior to v2.2.0, you can choose **Resources > Alerts**.

You can:

@@ -186,3 +82,169 @@ To manage project alerts, browse to the project whose alerts you want to manage.
- Delete unnecessary alerts
- Mute firing alerts
- Unmute muted alerts
+
+
+# Project Alert Rule Configuration
+
+- [Pod Alerts](#pod-alerts)
+- [Workload Alerts](#workload-alerts)
+- [Workload Selector Alerts](#workload-selector-alerts)
+- [Metric Expression Alerts](#metric-expression-alerts)
+
+# Pod Alerts
+
+This alert type monitors the status of a specific pod.
+
+Each of the below sections corresponds to a part of the alert rule configuration in the Rancher UI.
+
+### When a
+
+Select the **Pod** option, and then select a pod from the drop-down.
+
+### Is
+
+Select a pod status that triggers an alert:
+
+- **Not Running**
+- **Not Scheduled**
+- **Restarted times within the last Minutes**
+
+### Send a
+
+Select the urgency level of the alert. The options are:
+
+- **Critical**: Most urgent
+- **Warning**: Normal urgency
+- **Info**: Least urgent
+
+Select the urgency level based on the pod's state. For example, select **Info** for a Job pod that stops running after the job finishes. However, if an important pod isn't scheduled, it may affect operations, so choose **Critical**.
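The **Restarted** condition above compares a pod's restart count against a configured threshold. A minimal shell sketch of that check, using a hypothetical `restarts` value (on a live cluster you would read it from the pod status, e.g. `kubectl get pod <pod> -o jsonpath='{.status.containerStatuses[0].restartCount}'`):

```shell
# Hypothetical restart count; in practice this comes from the pod status.
restarts=4
# Example rule: alert when the pod restarted more than 3 times.
threshold=3

# Fire the alert only when the count exceeds the threshold.
if [ "$restarts" -gt "$threshold" ]; then
  echo "alert: pod restarted ${restarts} times"
fi
```

Running this sketch prints `alert: pod restarted 4 times`.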
+
+### Advanced Options
+
+By default, the below options apply to all alert rules within the group.
+
+You can disable these advanced options when configuring a specific rule.
+
+- **Group Wait Time**: How long to buffer alerts of the same group before sending them initially; defaults to 30 seconds.
+- **Group Interval Time**: How long to wait before sending an alert that has been added to a group in which other alerts have already fired; defaults to 30 seconds.
+- **Repeat Wait Time**: How long to wait before re-sending a given alert after it has been sent; defaults to 1 hour.
+
+# Workload Alerts
+
+This alert type monitors the availability of a workload.
+
+Each of the below sections corresponds to a part of the alert rule configuration in the Rancher UI.
+
+### When a
+
+Choose the **Workload** option. Then choose a workload from the drop-down.
+
+### Is
+
+Choose an availability percentage using the slider. The alert is triggered when the workload's availability on your cluster nodes drops below the set percentage.
+
+### Send a
+
+Select the urgency level of the alert.
+
+- **Critical**: Most urgent
+- **Warning**: Normal urgency
+- **Info**: Least urgent
+
+Select the urgency level based on the percentage you choose and the importance of the workload.
+
+### Advanced Options
+
+By default, the below options apply to all alert rules within the group.
+
+You can disable these advanced options when configuring a specific rule.
+
+- **Group Wait Time**: How long to buffer alerts of the same group before sending them initially; defaults to 30 seconds.
+- **Group Interval Time**: How long to wait before sending an alert that has been added to a group in which other alerts have already fired; defaults to 30 seconds.
+- **Repeat Wait Time**: How long to wait before re-sending a given alert after it has been sent; defaults to 1 hour.
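Rancher's alerting is built on Alertmanager, and the three grouping options above map to Alertmanager's route timing settings. As a rough illustration only (a sketch of the correspondence, not configuration that Rancher generates verbatim), the defaults look like this in Alertmanager terms:

```yaml
route:
  group_wait: 30s      # Group Wait Time
  group_interval: 30s  # Group Interval Time
  repeat_interval: 1h  # Repeat Wait Time
```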
+
+# Workload Selector Alerts
+
+This alert type monitors the availability of all workloads marked with labels that you've specified.
+
+Each of the below sections corresponds to a part of the alert rule configuration in the Rancher UI.
+
+### When a
+
+Select the **Workload Selector** option, and then click **Add Selector** to enter the key-value pair for a label. This label should be applied to one or more of your workloads. If one of the matching workloads drops below your specifications, an alert is triggered.
+
+### Is
+
+Choose an availability percentage using the slider. The alert is triggered when the workload's availability on your cluster nodes drops below the set percentage.
+
+### Send a
+
+Select the urgency level of the alert.
+
+- **Critical**: Most urgent
+- **Warning**: Normal urgency
+- **Info**: Least urgent
+
+Select the urgency level based on the percentage you choose and the importance of the workload.
+
+### Advanced Options
+
+By default, the below options apply to all alert rules within the group.
+
+You can disable these advanced options when configuring a specific rule.
+
+- **Group Wait Time**: How long to buffer alerts of the same group before sending them initially; defaults to 30 seconds.
+- **Group Interval Time**: How long to wait before sending an alert that has been added to a group in which other alerts have already fired; defaults to 30 seconds.
+- **Repeat Wait Time**: How long to wait before re-sending a given alert after it has been sent; defaults to 1 hour.
+
+# Metric Expression Alerts
+_Available as of v2.2.4_
+
+If you enable [project monitoring]({{}}/rancher/v2.x/en/project-admin/tools/#monitoring), this alert type lets you trigger alerts based on the result of a Prometheus expression query.
+
+Each of the below sections corresponds to a part of the alert rule configuration in the Rancher UI.
+
+### When a
+
+Input or select an **Expression**.
The dropdown shows the original metrics from Prometheus, including:
+
+- [**Container**](https://github.com/google/cadvisor)
+- [**Kubernetes Resources**](https://github.com/kubernetes/kube-state-metrics)
+- [**Customize**]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#project-metrics)
+- [**Project Level Grafana**](http://docs.grafana.org/administration/metrics/)
+- **Project Level Prometheus**
+
+### Is
+
+Choose a comparison:
+
+- **Equal**: Triggers an alert when the expression value is equal to the threshold.
+- **Not Equal**: Triggers an alert when the expression value is not equal to the threshold.
+- **Greater Than**: Triggers an alert when the expression value is greater than the threshold.
+- **Less Than**: Triggers an alert when the expression value is less than the threshold.
+- **Greater or Equal**: Triggers an alert when the expression value is greater than or equal to the threshold.
+- **Less or Equal**: Triggers an alert when the expression value is less than or equal to the threshold.
+
+If applicable, choose a comparison value or a threshold for the alert to be triggered.
+
+### For
+
+Select a duration. The alert is triggered when the expression value crosses the threshold for longer than the configured duration.
+
+### Send a
+
+Select the urgency level of the alert.
+
+- **Critical**: Most urgent
+- **Warning**: Normal urgency
+- **Info**: Least urgent
+
+Select the urgency level based on the alert's impact on operations. For example, if an expression tracking container memory usage rises above 60% of the limit, an urgency of **Info** may be appropriate; above 95%, **Critical** is warranted.
+
+### Advanced Options
+
+By default, the below options apply to all alert rules within the group. You can disable these advanced options when configuring a specific rule.
+
+- **Group Wait Time**: How long to buffer alerts of the same group before sending them initially; defaults to 30 seconds.
+- **Group Interval Time**: How long to wait before sending an alert that has been added to a group in which other alerts have already fired; defaults to 30 seconds.
+- **Repeat Wait Time**: How long to wait before re-sending a given alert after it has been sent; defaults to 1 hour.
\ No newline at end of file
diff --git a/content/rancher/v2.x/en/monitoring-alerting/v2.5/_index.md b/content/rancher/v2.x/en/monitoring-alerting/v2.5/_index.md
index 5e1f52acf6b..ee9a33060da 100644
--- a/content/rancher/v2.x/en/monitoring-alerting/v2.5/_index.md
+++ b/content/rancher/v2.x/en/monitoring-alerting/v2.5/_index.md
@@ -4,34 +4,26 @@
shortTitle: Rancher v2.5
weight: 1
---

-Using Rancher, you can quickly deploy leading open-source monitoring & alerting solutions such as [Prometheus](https://prometheus.io/), [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), and [Grafana](https://grafana.com/docs/grafana/latest/getting-started/what-is-grafana/) onto your cluster.
+Using Rancher, you can quickly deploy leading open-source monitoring and alerting solutions onto your cluster.

-Rancher's solution (powered by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)) allows users to:
+The `rancher-monitoring` operator, introduced in Rancher v2.5, is powered by [Prometheus](https://prometheus.io/), [Grafana](https://grafana.com/grafana/), [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator), and the [Prometheus adapter](https://github.com/DirectXMan12/k8s-prometheus-adapter). This page describes how to enable monitoring and alerting within a cluster using the new monitoring application.

-- Monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments via [Prometheus](https://prometheus.io/), a leading open-source monitoring solution.
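The six comparison operators in the Metric Expression Alerts section above boil down to simple numeric tests against the threshold. A minimal shell sketch of those semantics, using hypothetical `value` and `threshold` numbers (not tied to any Rancher API):

```shell
value=95       # hypothetical expression result, e.g. container memory %
threshold=60   # hypothetical alert threshold

# Returns success (exit 0) when the alert should trigger for the comparison.
compare() {
  case "$1" in
    equal)            [ "$value" -eq "$threshold" ] ;;
    not-equal)        [ "$value" -ne "$threshold" ] ;;
    greater-than)     [ "$value" -gt "$threshold" ] ;;
    less-than)        [ "$value" -lt "$threshold" ] ;;
    greater-or-equal) [ "$value" -ge "$threshold" ] ;;
    less-or-equal)    [ "$value" -le "$threshold" ] ;;
  esac
}

compare greater-than && echo "alert fires: ${value} > ${threshold}"
```

With the values above, this prints `alert fires: 95 > 60`.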
+Rancher's solution allows users to: -- Defines alerts based on metrics collected via [Prometheus](https://prometheus.io/) -- Creates custom dashboards to make it easy to visualize collected metrics via [Grafana](https://grafana.com/docs/grafana/latest/getting-started/what-is-grafana/) -- Configures alert-based notifications via Email, Slack, PagerDuty, etc. using [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/) -- Defines precomputed frequently needed / computationally expensive expressions as new time series based on metrics collected via [Prometheus](https://prometheus.io/) (only available in 2.5.x) -- Exposes collected metrics from Prometheus to the Kubernetes Custom Metrics API via [Prometheus Adapter](https://github.com/DirectXMan12/k8s-prometheus-adapter) for use in HPA (only available in 2.5) +- Monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments via Prometheus, a leading open-source monitoring solution. +- Define alerts based on metrics collected via Prometheus +- Create custom dashboards to make it easy to visualize collected metrics via Grafana +- Configure alert-based notifications via Email, Slack, PagerDuty, etc. 
using Prometheus Alertmanager
+- Define precomputed, frequently needed, or computationally expensive expressions as new time series based on metrics collected via Prometheus (only available in 2.5)
+- Expose collected metrics from Prometheus to the Kubernetes Custom Metrics API via Prometheus Adapter for use in HPA (only available in 2.5)

More information about the resources that get deployed onto your cluster to support this solution can be found in the [`rancher-monitoring`](https://github.com/rancher/charts/tree/main/charts/rancher-monitoring) Helm chart, which closely tracks the upstream [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) Helm chart maintained by the Prometheus community, with certain changes tracked in the [CHANGELOG.md](https://github.com/rancher/charts/blob/main/charts/rancher-monitoring/CHANGELOG.md).

-This page describes how to enable monitoring & alerting within a cluster using Rancher's new monitoring application, which was introduced in Rancher v2.5.
-
-If you previously enabled Monitoring, Alerting, or Notifiers in Rancher prior to v2.5, there is no upgrade path for switching to the new monitoring/ alerting solution. You will need to disable monitoring/ alerting/notifiers in Cluster Manager before deploying the new monitoring solution via Cluster Explorer.
+> If you previously enabled Monitoring, Alerting, or Notifiers in Rancher prior to v2.5, there is no upgrade path for switching to the new monitoring/alerting solution. You will need to disable monitoring, alerting, and notifiers in Cluster Manager before deploying the new monitoring solution via Cluster Explorer.

For more information about upgrading the Monitoring app in Rancher 2.5, please refer to the [migration docs](./migrating).

-> Before enabling monitoring, be sure to review the resource requirements.
The default values in [this section](#setting-resource-limits-and-requests) are the minimum required resource limits and requests. - -- [Monitoring Components](#monitoring-components) - - [Prometheus](#about-prometheus) - - [Grafana](#about-grafana) - - [Alertmanager](#about-alertmanager) - - [Prometheus Operator](#about-prometheus-operator) - - [Prometheus Adapter](#about-prometheus-adapter) +- [About Prometheus](#about-prometheus) - [Enable Monitoring](#enable-monitoring) - [Default Alerts, Targets, and Grafana Dashboards](#default-alerts-targets-and-grafana-dashboards) - [Using Monitoring](#using-monitoring) @@ -44,11 +36,7 @@ For more information about upgrading the Monitoring app in Rancher 2.5, please r - [Setting Resource Limits and Requests](#setting-resource-limits-and-requests) - [Known Issues](#known-issues) -# Monitoring Components - -The `rancher-monitoring` operator is powered by Prometheus, Grafana, Alertmanager, the Prometheus Operator, and the Prometheus adapter. - -### About Prometheus +# About Prometheus Prometheus provides a time series of your data, which is, according to the [Prometheus documentation:](https://prometheus.io/docs/concepts/data_model/) @@ -58,21 +46,16 @@ In other words, Prometheus lets you view metrics from your different Rancher and By viewing data that Prometheus scrapes from your cluster control plane, nodes, and deployments, you can stay on top of everything happening in your cluster. You can then use these analytics to better run your organization: stop system emergencies before they start, develop maintenance strategies, restore crashed servers, etc. -### About Grafana - -[Grafana](https://grafana.com/grafana/) allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data driven culture. 
-
# Enable Monitoring

As an [administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Prometheus to monitor your Kubernetes cluster.

-> If you want to set up Alertmanager, Grafana or Ingress, it has to be done with the settings on the Helm chart deployment. It's problematic to create Ingress outside the deployment.
-
-> **Prerequisites:**
+> **Requirements:**
>
> - Make sure that you are allowing traffic on port 9796 for each of your nodes because Prometheus will scrape metrics from here.
> - Make sure your cluster fulfills the resource requirements. The cluster should have at least 1950Mi memory available, 2700m CPU, and 50Gi storage. A breakdown of the resource limits and requests is [here.](#resource-requirements)
+
1. In the Rancher UI, go to the cluster where you want to install monitoring and click **Cluster Explorer.**
1. Click **Apps.**
1. Click the `rancher-monitoring` app.
@@ -99,8 +82,12 @@ To configure Prometheus resources from the Rancher UI, click **Apps & Marketplac
Installing `rancher-monitoring` makes the following dashboards available from the Rancher UI.

+> **Note:** If you want to set up Alertmanager, Grafana, or Ingress, configure them through the settings on the Helm chart deployment; creating an Ingress outside the deployment is problematic.
+
### Grafana UI

+[Grafana](https://grafana.com/grafana/) allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data-driven culture.
+
Rancher allows any users who are authenticated by Kubernetes and have access to the Grafana service deployed by the Rancher Monitoring chart to access Grafana via the Rancher Dashboard UI.
By default, all users who are able to access Grafana are given the [Viewer](https://grafana.com/docs/grafana/latest/permissions/organization_roles/#viewer-role) role, which allows them to view any of the default dashboards deployed by Rancher. However, users can choose to log in to Grafana as an [Admin](https://grafana.com/docs/grafana/latest/permissions/organization_roles/#admin-role) if necessary. The default Admin username and password for the Grafana instance will be `admin`/`prom-operator`, but alternative credentials can also be supplied on deploying or upgrading the chart. diff --git a/content/rancher/v2.x/en/project-admin/resource-quotas/_index.md b/content/rancher/v2.x/en/project-admin/resource-quotas/_index.md index e4df1e5fdee..5613dce83e7 100644 --- a/content/rancher/v2.x/en/project-admin/resource-quotas/_index.md +++ b/content/rancher/v2.x/en/project-admin/resource-quotas/_index.md @@ -11,7 +11,7 @@ This page is a how-to guide for creating resource quotas in existing projects. Resource quotas can also be set when a new project is created. For details, refer to the section on [creating new projects.]({{}}/rancher/v2.x/en/cluster-admin/projects-and-namespaces/#creating-projects) -> Resource quotas in Rancher include the same functionality as the [native version of Kubernetes](https://kubernetes.io/docs/concepts/policy/resource-quotas/). However, in Rancher, resource quotas have been extended so that you can apply them to [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects). For details on how resource quotas work with projects in Rancher, refer to [this page.](./quotas-for-projects) +Resource quotas in Rancher include the same functionality as the [native version of Kubernetes](https://kubernetes.io/docs/concepts/policy/resource-quotas/). In Rancher, resource quotas have been extended so that you can apply them to [projects]({{}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects). 
For details on how resource quotas work with projects in Rancher, refer to [this page.](./quotas-for-projects) ### Applying Resource Quotas to Existing Projects diff --git a/content/rancher/v2.x/en/quick-start-guide/_index.md b/content/rancher/v2.x/en/quick-start-guide/_index.md index 1aae6c15c02..28783b31768 100644 --- a/content/rancher/v2.x/en/quick-start-guide/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/_index.md @@ -12,6 +12,6 @@ We have Quick Start Guides for: - [Deploying Rancher Server]({{}}/rancher/v2.x/en/quick-start-guide/deployment/): Get started running Rancher using the method most convenient for you. -- [Deploying Workloads]({{}}/rancher/v2.x/en/quick-start-guide/workload/): Deploy a simple workload and expose it, letting you access it from outside the cluster. +- [Deploying Workloads]({{}}/rancher/v2.x/en/quick-start-guide/workload/): Deploy a simple [workload](https://kubernetes.io/docs/concepts/workloads/) and expose it, letting you access it from outside the cluster. - [Using the CLI]({{}}/rancher/v2.x/en/quick-start-guide/cli/): Use `kubectl` or Rancher command line interface (CLI) to interact with your Rancher instance. diff --git a/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md b/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md index f0ee9913026..a55a0b89f25 100644 --- a/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md @@ -4,7 +4,7 @@ weight: 300 --- Howdy Partner! This tutorial walks you through: -- Installation of {{< product >}} 2.x +- Installation of Rancher 2.x - Creation of your first cluster - Deployment of an application, Nginx @@ -30,7 +30,7 @@ This Quick Start Guide is divided into different tasks for easier consumption. Begin creation of a custom cluster by provisioning a Linux host. 
Your host can be:

- A cloud-hosted virtual machine (VM)
-- An on-premise VM
+- An on-prem VM
- A bare-metal server

>**Note:**
@@ -49,8 +49,8 @@ To install Rancher on your host, connect to it and then use a shell to install.

2. From your shell, enter the following command:

 ```
-sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher
-  ```
+  sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher
+  ```

**Result:** Rancher is installed.

@@ -72,7 +72,7 @@ Log in to Rancher to begin using the application. After you log in, you'll make

Welcome to Rancher! You are now able to create your first Kubernetes cluster.

-In this task, you can use the versatile **Custom** option. This option lets you add _any_ Linux host (cloud-hosted VM, on-premise VM, or bare-metal) to be used in a cluster.
+In this task, you can use the versatile **Custom** option. This option lets you add _any_ Linux host (cloud-hosted VM, on-prem VM, or bare-metal) to be used in a cluster.

1. From the **Clusters** page, click **Add Cluster**.

@@ -96,9 +96,17 @@ In this task, you can use the versatile **Custom** option. This option lets you

11. When you finish running the command on your Linux host, click **Done**.

-{{< result_create-cluster >}}
-
-
+**Result:** + +Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster. + +You can access your cluster after its state is updated to **Active.** + +**Active** clusters are assigned two Projects: + +- `Default`, containing the `default` namespace +- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces + #### Finished Congratulations! You have created your first cluster. diff --git a/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-ingress/_index.md b/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-ingress/_index.md index df4b32406cc..3580b314f71 100644 --- a/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-ingress/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-ingress/_index.md @@ -9,7 +9,7 @@ You have a running cluster with at least 1 node. ### 1. Deploying a Workload -You're ready to create your first _workload_. A workload is an object that includes pods along with other files and info needed to deploy your application. +You're ready to create your first Kubernetes [workload](https://kubernetes.io/docs/concepts/workloads/). A workload is an object that includes pods along with other files and info needed to deploy your application. For this workload, you'll be deploying the application Rancher Hello-World. diff --git a/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-nodeport/_index.md b/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-nodeport/_index.md index 71d79215dd9..fbe0f995ce6 100644 --- a/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-nodeport/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/workload/quickstart-deploy-workload-nodeport/_index.md @@ -9,7 +9,7 @@ You have a running cluster with at least 1 node. ### 1. 
Deploying a Workload -You're ready to create your first _workload_. A workload is an object that includes pods along with other files and info needed to deploy your application. +You're ready to create your first Kubernetes [workload](https://kubernetes.io/docs/concepts/workloads/). A workload is an object that includes pods along with other files and info needed to deploy your application. For this workload, you'll be deploying the application Rancher Hello-World. diff --git a/layouts/shortcodes/beta-note_azure.html b/layouts/shortcodes/beta-note_azure.html deleted file mode 100644 index 3ad381dde2f..00000000000 --- a/layouts/shortcodes/beta-note_azure.html +++ /dev/null @@ -1,4 +0,0 @@ -
- Note: -

As of Rancher v2.0 GA, the Azure Kubernetes Service option is still in beta.

-
diff --git a/layouts/shortcodes/note_server-tags.html b/layouts/shortcodes/note_server-tags.html deleted file mode 100644 index 3911c33106a..00000000000 --- a/layouts/shortcodes/note_server-tags.html +++ /dev/null @@ -1,8 +0,0 @@ -
-

Notes:

-
    -
  • If you are using RancherOS, make sure you switch the Docker engine to a supported version using sudo ros engine switch docker-17.03.2-ce -
  • -
  • The rancher/rancher container is hosted on DockerHub. If you don't have access to DockerHub, or you are installing Rancher without an Internet connection, refer to how to prepare for an Air Gap Installation.
  • -
-
diff --git a/layouts/shortcodes/ports-rancher-nodes.html b/layouts/shortcodes/ports-rancher-nodes.html deleted file mode 100644 index 30f7bda69ac..00000000000 --- a/layouts/shortcodes/ports-rancher-nodes.html +++ /dev/null @@ -1,55 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ProtocolPortSourceDestinationDescription
TCP80Load Balancer / Reverse ProxyHTTP traffic to Rancher UI / API.
TCP443Load Balancer / Reverse Proxy

Otherwise IPs of all cluster nodes and other Rancher API / UI clients.
HTTPS traffic to Rancher UI / API.
TCP44335.160.43.145
35.167.242.46
52.33.59.17
Rancher catalog (git.rancher.io).
TCP22Any node created using node driver.SSH provisioning of node by node driver.
TCP2376Any node created using node driver.Docker daemon TLS port used by node driver.
TCPProvider DependentPort of the Kubernetes API endpoint in hosted clusters.Kubernetes API.
\ No newline at end of file diff --git a/layouts/shortcodes/ports_aws_securitygroup_nodedriver.html b/layouts/shortcodes/ports_aws_securitygroup_nodedriver.html deleted file mode 100644 index 5de8799c280..00000000000 --- a/layouts/shortcodes/ports_aws_securitygroup_nodedriver.html +++ /dev/null @@ -1,103 +0,0 @@ -

Amazon EC2 security group when using Node Driver

-

If you are Creating an Amazon EC2 Cluster, you can choose to let Rancher create a Security Group called rancher-nodes. The following rules are automatically added to this Security Group. -

-
-

Security group: rancher-nodes

-

Inbound rules

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
TypeProtocolPort RangeSource
SSHTCP220.0.0.0/0
HTTPTCP800.0.0.0/0
Custom TCP RuleTCP4430.0.0.0/0
Custom TCP RuleTCP23760.0.0.0/0
Custom TCP RuleTCP2379-2380sg-xxx (rancher-nodes)
Custom UDP RuleUDP4789sg-xxx (rancher-nodes)
Custom TCP RuleTCP64430.0.0.0/0
Custom UDP RuleUDP8472sg-xxx (rancher-nodes)
Custom TCP RuleTCP10250-10252sg-xxx (rancher-nodes)
Custom TCP RuleTCP10256sg-xxx (rancher-nodes)
Custom TCP RuleTCP30000-327670.0.0.0/0
Custom UDP RuleUDP30000-327670.0.0.0/0
-

Outbound rules

- - - - - - - - - - - - - -
TypeProtocolPort RangeDestination
All trafficAllAll0.0.0.0/0
-
-
diff --git a/layouts/shortcodes/prereq_cluster.html b/layouts/shortcodes/prereq_cluster.html deleted file mode 100644 index cc9da6071ac..00000000000 --- a/layouts/shortcodes/prereq_cluster.html +++ /dev/null @@ -1,5 +0,0 @@ -
-

- Prerequisites: Review the Requirements for your Linux host. -

-
diff --git a/layouts/shortcodes/prereq_install.html b/layouts/shortcodes/prereq_install.html deleted file mode 100644 index 9941f520b66..00000000000 --- a/layouts/shortcodes/prereq_install.html +++ /dev/null @@ -1,4 +0,0 @@ -
-

Before You Start

-

Provision a Linux host according to our Requirements.

-
diff --git a/layouts/shortcodes/product.html b/layouts/shortcodes/product.html deleted file mode 100644 index e56e7808e0c..00000000000 --- a/layouts/shortcodes/product.html +++ /dev/null @@ -1 +0,0 @@ -Rancher diff --git a/layouts/shortcodes/requirements_ha.html b/layouts/shortcodes/requirements_ha.html deleted file mode 100644 index 4f21347e035..00000000000 --- a/layouts/shortcodes/requirements_ha.html +++ /dev/null @@ -1,11 +0,0 @@ -
-
    -
  • RKE Cluster
  • -
      -
    • 3 nodes total minimum
    • -
    • 3+ nodes for etcd role
    • -
    • 2+ nodes for controlplane role
    • -
    • 1+ node for worker role
    • -
    -
-
diff --git a/layouts/shortcodes/requirements_os.html b/layouts/shortcodes/requirements_os.html deleted file mode 100644 index 47f0b8c8339..00000000000 --- a/layouts/shortcodes/requirements_os.html +++ /dev/null @@ -1,7 +0,0 @@ -
-
    -
  • Ubuntu 16.04 (64-bit)
  • -
  • Red Hat Enterprise Linux 7.5 (64-bit)
  • -
  • RancherOS 1.4 (64-bit)
  • -
-
diff --git a/layouts/shortcodes/result_create-cluster.html b/layouts/shortcodes/result_create-cluster.html deleted file mode 100644 index d49bc3d334a..00000000000 --- a/layouts/shortcodes/result_create-cluster.html +++ /dev/null @@ -1,8 +0,0 @@ -
-

Result:

-
    -
  • Your cluster is created and assigned a state of Provisioning. Rancher is standing up your cluster.
  • -
  • You can access your cluster after its state is updated to Active.
  • -
  • Active clusters are assigned two Projects, Default (containing the namespace default) and System (containing the namespaces cattle-system,ingress-nginx,kube-public and kube-system, if present). -
-
diff --git a/layouts/shortcodes/step_create-cluster_cluster-options.html b/layouts/shortcodes/step_create-cluster_cluster-options.html deleted file mode 100644 index 92485db7912..00000000000 --- a/layouts/shortcodes/step_create-cluster_cluster-options.html +++ /dev/null @@ -1 +0,0 @@ -

Use Cluster Options to choose the version of Kubernetes, what network provider will be used and if you want to enable project network isolation. To see more cluster options, click on Show advanced options. diff --git a/layouts/shortcodes/step_create-cluster_member-roles.html b/layouts/shortcodes/step_create-cluster_member-roles.html deleted file mode 100644 index bd01144a6f5..00000000000 --- a/layouts/shortcodes/step_create-cluster_member-roles.html +++ /dev/null @@ -1,8 +0,0 @@ -

-

Use Member Roles to configure user authorization for the cluster.

-
    -
  • Click Add Member to add users that can access the cluster.
  • -
  • Use the Role drop-down to set permissions for each user.
  • -
-
-
diff --git a/layouts/shortcodes/step_create-cluster_node-pools.html b/layouts/shortcodes/step_create-cluster_node-pools.html deleted file mode 100644 index ada69c017f6..00000000000 --- a/layouts/shortcodes/step_create-cluster_node-pools.html +++ /dev/null @@ -1,9 +0,0 @@ -

Add one or more node pools to your cluster.

A node pool is a collection of nodes based on a node template. A node template defines the configuration of a node, like what operating system to use, number of CPUs and amount of memory. Each node pool must have one or more nodes roles assigned.

- -
-

Notes:

-
    -
  • Each node role (i.e. etcd, Control Plane, and Worker) should be assigned to a distinct node pool. Although it is possible to assign multiple node roles to a node pool, this should not be done for production clusters.
  • -
  • The recommended setup is to have a node pool with the etcd node role and a count of three, a node pool with the Control Plane node role and a count of at least two, and a node pool with the Worker node role and a count of at least two. Regarding the etcd node role, refer to the etcd Admin Guide.
  • -
-
diff --git a/layouts/shortcodes/step_rancher-template.html b/layouts/shortcodes/step_rancher-template.html deleted file mode 100644 index 96a1584194c..00000000000 --- a/layouts/shortcodes/step_rancher-template.html +++ /dev/null @@ -1,24 +0,0 @@ -

The Docker daemon configuration options include:

-
    -
  • -

    - Labels: For information on labels, refer to the Docker - object label documentation. -

    -
  • -
  • -

    - Docker Engine Install URL: Determines what Docker version will be installed on the instance. Note: If you are using RancherOS, please check what Docker versions are available using sudo ros engine list on the RancherOS version you want to use, as the default Docker version configured might not be available. If you experience issues installing Docker on other operating systems, please try to install Docker manually using the configured Docker Engine Install URL to troubleshoot. -

    -
  • -
  • -

    - Registry mirrors: Docker Registry mirror to be used by the Docker daemon -

    -
  • -
  • -

    Other advanced options: Refer to the Docker daemon option reference - -

    -
  • -
diff --git a/layouts/shortcodes/tag_latest.html b/layouts/shortcodes/tag_latest.html deleted file mode 100644 index e87b233790f..00000000000 --- a/layouts/shortcodes/tag_latest.html +++ /dev/null @@ -1 +0,0 @@ -v2.0.1 diff --git a/layouts/shortcodes/version.html b/layouts/shortcodes/version.html deleted file mode 100644 index 4149d033b05..00000000000 --- a/layouts/shortcodes/version.html +++ /dev/null @@ -1 +0,0 @@ -v2.0