Revise and update vSphere docs for Rancher v2.3.3 changes (#2016)

* Revise and update vSphere docs for Rancher v2.3.3 changes

* Edit vSphere docs

* Update _index.md

* Update _index.md

* Update _index.md
This commit is contained in:
Catherine Luse
2019-11-27 08:06:59 -07:00
committed by Denise
parent 8810b6d056
commit 08b6cceabc
10 changed files with 708 additions and 397 deletions
@@ -45,6 +45,8 @@ _Available as of Rancher v2.3.0_
If a node is in a node pool, Rancher can automatically replace unreachable nodes. Rancher will use the existing node template for the given node pool to recreate the node if it becomes inactive for a specified number of minutes.
> **Important** Self-healing node pools are designed to help you replace worker nodes for stateless applications. It is not recommended to enable node auto-replace on a node pool of master nodes or nodes with persistent volumes attached, because VMs are treated as ephemeral. When a node in a node pool loses connectivity with the cluster, its persistent volumes are destroyed, resulting in data loss for stateful applications.
{{% accordion id="how-does-node-auto-replace-work" label="How does Node Auto-replace Work?" %}}
Node auto-replace works on top of the Kubernetes node controller. The node controller periodically checks the status of all the nodes (configurable via the `--node-monitor-period` flag of the `kube-controller`). When a node is unreachable, the node controller will taint that node. When this occurs, Rancher will begin its deletion countdown. You can configure the amount of time Rancher waits to delete the node. If the taint is not removed before the deletion countdown ends, Rancher will proceed to delete the node object. Rancher will then provision a node in accordance with the set quantity of the node pool.
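As an illustrative sketch only (assuming an RKE-provisioned cluster, where the controller service is named `kube-controller` in `cluster.yml`; the interval value here is a placeholder, not a recommendation), the node controller's check interval could be tuned through `extra_args`:

```yaml
# Hypothetical cluster.yml excerpt: adjusts how often the node
# controller checks node status via --node-monitor-period.
services:
  kube-controller:
    extra_args:
      node-monitor-period: "5s"
```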
{{% /accordion %}}
@@ -5,170 +5,39 @@ weight: 2225
aliases:
- /rancher/v2.x/en/tasks/clusters/creating-a-cluster/create-cluster-vsphere/
---
Use {{< product >}} to create a Kubernetes cluster in vSphere.
## Introduction
By using Rancher with vSphere, you can bring cloud operations on-premises.
When creating a vSphere cluster, Rancher first provisions the specified amount of virtual machines by communicating with the vCenter API. Then it installs Kubernetes on top of them. A vSphere cluster may consist of multiple groups of VMs with distinct properties, such as the amount of memory or the number of vCPUs. This grouping allows for fine-grained control over the sizing of nodes for the data, control, and worker plane respectively.
Rancher can provision nodes in vSphere and install Kubernetes on them. When creating a Kubernetes cluster in vSphere, Rancher first provisions the specified number of virtual machines by communicating with the vCenter API. Then it installs Kubernetes on top of them.
>**Note:**
>The vSphere node driver included in Rancher currently only supports the provisioning of VMs with [RancherOS]({{< baseurl >}}/os/v1.x/en/) as the guest operating system.
A vSphere cluster may consist of multiple groups of VMs with distinct properties, such as the amount of memory or the number of vCPUs. This grouping allows for fine-grained control over the sizing of nodes for each Kubernetes role.
## Prerequisites
# vSphere Enhancements
### vSphere API permissions
The vSphere node templates have been updated, allowing you to bring cloud operations on-premises with the following enhancements:
Before proceeding to create a cluster, you must ensure that you have a vSphere user with sufficient permissions. If you are planning to make use of vSphere volumes for persistent storage in the cluster, there are [additional requirements]({{< baseurl >}}/rke/latest/en/config-options/cloud-providers/vsphere/) that must be met.
### Self-healing Node Pools
### Network permissions
_Available as of v2.3.0_
You must ensure that the hosts running Rancher servers are able to establish network connections to the following network endpoints:
One of the biggest advantages of provisioning vSphere nodes with Rancher is that it allows you to take advantage of Rancher's self-healing node pools, also called the [node auto-replace feature,]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-auto-replace) in your on-premises clusters. Self-healing node pools are designed to help you replace worker nodes for stateless applications. When Rancher provisions nodes from a node template, Rancher can automatically replace unreachable nodes.
- vCenter server (usually port 443/TCP)
- Every ESXi host that is part of the datacenter to be used to provision virtual machines for your clusters (port 443/TCP).
> **Important:** It is not recommended to enable node auto-replace on a node pool of master nodes or nodes with persistent volumes attached, because VMs are treated as ephemeral. When a node in a node pool loses connectivity with the cluster, its persistent volumes are destroyed, resulting in data loss for stateful applications.
### Dynamically Populated Options for Instances and Scheduling
## Provisioning a vSphere Cluster
_Available as of v2.3.3_
The following steps create a role with the required privileges and then assign it to a new user in the vSphere console:
Node templates for vSphere have been updated so that when you create a node template with your vSphere credentials, the template is automatically populated with the same options for provisioning VMs that you have access to in the vSphere console.
1. From the **vSphere** console, go to the **Administration** page.
For the fields to be populated, your setup needs to fulfill the [prerequisites.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/#prerequisites)
2. Go to the **Roles** tab.
### More Supported Operating Systems
3. Create a new role. Give it a name and select the privileges listed in the [permissions table](#annex-vsphere-permissions).
As of Rancher v2.3.3, you can provision VMs with any operating system that supports `cloud-init`.
![image]({{< baseurl >}}/img/rancher/rancherroles1.png)
In Rancher prior to v2.3.3, the vSphere node driver included in Rancher only supported the provisioning of VMs with [RancherOS]({{<baseurl>}}/os/v1.x/en/) as the guest operating system.
4. Go to the **Users and Groups** tab.
# Video Walkthrough of v2.3.3 Node Template Features
5. Create a new user. Fill out the form and then click **OK**. Make sure to note the username and password, as you will need it when configuring node templates in Rancher.
![image]({{< baseurl >}}/img/rancher/rancheruser.png)
6. Go to the **Global Permissions** tab.
7. Create a new Global Permission. Add the user you created earlier and assign it the role you created earlier. Click **OK**.
![image]({{< baseurl >}}/img/rancher/globalpermissionuser.png)
![image]({{< baseurl >}}/img/rancher/globalpermissionrole.png)
## Creating vSphere Clusters
### Create a vSphere Node Template
To create a cluster, you need to create at least one vSphere [node template]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) that specifies how VMs are created in vSphere.
>**Note:**
>Once you create a node template, it is saved, and you can re-use it whenever you create additional vSphere clusters.
1. Log in with an admin account to the Rancher UI.
2. From the user settings menu, select **Node Templates**.
3. Click **Add Template** and then click on the **vSphere** icon.
4. Under [Account Access](#account-access) enter the vCenter FQDN or IP address and the credentials for the vSphere user account (see [Prerequisites](#prerequisites)).
{{< step_create-cloud-credential >}}
5. Under [Instance Options](#instance-options), configure the number of vCPUs, memory, and disk size for the VMs created by this template.
6. **Optional:** Enter the URL pointing to a [RancherOS]({{< baseurl >}}/os/v1.x/en/) cloud-config file in the [Cloud Init](#instance-options) field.
7. Ensure that the [OS ISO URL](#instance-options) contains the URL of a VMware ISO release for RancherOS (`rancheros-vmware.iso`).
![image]({{< baseurl >}}/img/rancher/vsphere-node-template-1.png)
8. **Optional:** Provide a set of [Configuration Parameters](#instance-options) for the VMs.
9. Under **Scheduling**, enter the name/path of the **Data Center** to create the VMs in, the name of the **VM Network** to attach to, and the name/path of the **Datastore** to store the disks in.
![image]({{< baseurl >}}/img/rancher/vsphere-node-template-2.png)
10. **Optional:** Assign labels to the VMs that can be used as a base for scheduling rules in the cluster.
11. **Optional:** Customize the configuration of the Docker daemon on the VMs that will be created.
12. Assign a descriptive **Name** for this template and click **Create**.
___
### Create a vSphere Cluster
After you've created a template, you can use it to stand up the vSphere cluster itself.
1. From the **Global** view, click **Add Cluster**.
2. Choose **vSphere**.
3. Enter a **Cluster Name**.
4. {{< step_create-cluster_member-roles >}}
5. {{< step_create-cluster_cluster-options >}}
6. {{< step_create-cluster_node-pools >}}
![image]({{< baseurl >}}/img/rancher/vsphere-cluster-create-1.png)
7. Review your configuration, then click **Create**.
> **Note:**
>
> If you have a cluster with DRS enabled, setting up [VM-VM Affinity Rules](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.resmgmt.doc/GUID-7297C302-378F-4AF2-9BD6-6EDB1E0A850A.html) is recommended. These rules allow VMs assigned the etcd and control-plane roles to operate on separate ESXi hosts when they are assigned to different node pools. This practice ensures that the failure of a single physical machine does not affect the availability of those planes.
{{< result_create-cluster >}}
## Annex - Node Template Configuration Reference
The tables below describe the configuration options available in the vSphere node template.
### Account Access
| Parameter | Required | Description |
|:------------------------:|:--------:|:------------------------------------------------------------:|
| vCenter or ESXi Server | * | IP or FQDN of the vCenter or ESXi server used for managing VMs. |
| Port | * | Port to use when connecting to the server. Defaults to `443`. |
| Username | * | vCenter/ESXi user to authenticate with the server. |
| Password | * | User's password. |
___
### Instance Options
| Parameter | Required | Description |
|:------------------------:|:--------:|:------------------------------------------------------------:|
| CPUs | * | Number of vCPUs to assign to VMs. |
| Memory | * | Amount of memory to assign to VMs. |
| Disk | * | Size of the disk (in MB) to attach to the VMs. |
| Cloud Init | | URL of a [RancherOS cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/) file to provision VMs with. This file allows further customization of the RancherOS operating system, such as network configuration, DNS servers, or system daemons.|
| OS ISO URL | * | URL of a RancherOS vSphere ISO file to boot the VMs from. You can find URLs for specific versions in the [Rancher OS GitHub Repo](https://github.com/rancher/os). |
| Configuration Parameters | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). |
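For illustration, configuration parameters are supplied as `key=value` pairs. A sketch (the `guestinfo` keys follow the RancherOS VMware conventions linked above; the values here are placeholders):

```
disk.EnableUUID=TRUE
guestinfo.cloud-init.config.data=<base64-encoded cloud-config>
guestinfo.cloud-init.data.encoding=base64
```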
___
### Scheduling Options
| Parameter | Required | Description |
|:------------------------:|:--------:|:------------------------------------------------------------:|
| Data Center | * | Name/path of the datacenter to create VMs in. |
| Pool | | Name/path of the resource pool to schedule the VMs in. If not specified, the default resource pool is used. |
| Host | | Name/path of the host system to schedule VMs in. If specified, the host system's pool will be used and the *Pool* parameter will be ignored. |
| Network | * | Name of the VM network to attach VMs to. |
| Data Store | * | Datastore to store the VM disks. |
| Folder | | Name/path of folder in the datastore to create the VMs in. Must already exist. |
___
## Annex - vSphere Permissions
The following table lists the permissions required for the vSphere user account configured in the node templates:
| Privilege Group | Operations |
|:----------------------|:-----------------------------------------------------------------------|
| Datastore | AllocateSpace <br/> Browse <br/> FileManagement (Low level file operations) <br/> UpdateVirtualMachineFiles <br/> UpdateVirtualMachineMetadata |
| Network | Assign |
| Resource | AssignVMToPool |
| Virtual Machine | Config (All) <br/> GuestOperations (All) <br/> Interact (All) <br/> Inventory (All) <br/> Provisioning (All) |
In [this YouTube video,](https://www.youtube.com/watch?v=dPIwg6x1AlU) we demonstrate how to set up a node template with the new features designed to help you bring cloud operations to on-premises clusters.
@@ -0,0 +1,305 @@
---
title: Provisioning Kubernetes Clusters in vSphere
weight: 1
---
This section explains how to configure Rancher with vSphere credentials, provision nodes in vSphere, and set up Kubernetes clusters on those nodes.
# Prerequisites
This section describes the requirements for setting up vSphere so that Rancher can provision VMs and clusters.
The node templates are documented and tested with the vSphere Web Services API version 6.5.
- [Create credentials in vSphere](#create-credentials-in-vsphere)
- [Network permissions](#network-permissions)
- [Valid ESXi License for vSphere API Access](#valid-esxi-license-for-vsphere-api-access)
### Create Credentials in vSphere
Before proceeding to create a cluster, you must ensure that you have a vSphere user with sufficient permissions. When you set up a node template, the template will need to use these vSphere credentials.
Refer to this [how-to guide]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/creating-credentials) for instructions on how to create a user in vSphere with the required permissions. These steps result in a username and password that you will need to provide to Rancher, which allows Rancher to provision resources in vSphere.
### Network Permissions
There needs to be two-way communication between Rancher and the vSphere API.
You must ensure that the hosts running Rancher servers are able to establish network connections to the following network endpoints:
- vCenter server (usually port 443/TCP)
- Every ESXi host that is part of the datacenter to be used to provision virtual machines for your clusters (port 443/TCP).
By default, Rancher uses port 443 to communicate with vSphere. The vSphere API websocket port defaults to 8443.
### Valid ESXi License for vSphere API Access
The free ESXi license does not support API access. The vSphere servers must have a valid or evaluation ESXi license.
# Creating Clusters in vSphere with Rancher
This section describes how to set up vSphere credentials, node templates, and vSphere clusters using the Rancher UI.
You will need to do the following:
1. [Create a node template using vSphere credentials](#1-create-a-node-template-using-vsphere-credentials)
2. [Create a Kubernetes cluster using the node template](#2-create-a-kubernetes-cluster-using-the-node-template)
3. [Optional: Provision storage](#3-optional-provision-storage)
- [Enable the vSphere cloud provider for the cluster](#enable-the-vsphere-cloud-provider-for-the-cluster)
### Configuration References
For details on configuring the node template, refer to the [node template configuration reference.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference/)
Rancher uses the RKE library to provision Kubernetes clusters. For details on configuring clusters in vSphere, refer to the [cluster configuration reference in the RKE documentation.]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/)
Note that the vSphere cloud provider must be [enabled](#enable-the-vsphere-cloud-provider-for-the-cluster) to allow dynamic provisioning of volumes.
# 1. Create a Node Template Using vSphere Credentials
To create a cluster, you need to create at least one vSphere [node template]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-templates) that specifies how VMs are created in vSphere.
After you create a node template, it is saved, and you can re-use it whenever you create additional vSphere clusters.
To create a node template,
1. Log in with an admin account to the Rancher UI.
1. From the user settings menu, select **Node Templates.**
1. Click **Add Template** and then click on the **vSphere** icon.
Then, configure your template:
- [A. Configure the vSphere credential](#a-configure-the-vsphere-credential)
- [B. Configure node scheduling](#b-configure-node-scheduling)
- [C. Configure instances and operating systems](#c-configure-instances-and-operating-systems)
- [D. Add networks](#d-add-networks)
- [E. If not already enabled, enable disk UUIDs](#e-if-not-already-enabled-enable-disk-uuids)
- [F. Optional: Configure node tags and custom attributes](#f-optional-configure-node-tags-and-custom-attributes)
- [G. Optional: Configure cloud-init](#g-optional-configure-cloud-init)
- [H. Saving the node template](#h-saving-the-node-template)
### A. Configure the vSphere Credential
The steps for configuring your vSphere credentials for the cluster are different depending on your version of Rancher.
{{% tabs %}}
{{% tab "Rancher v2.2.0+" %}}
Your account access information is in a [cloud credential.]({{<baseurl>}}/rancher/v2.x/en/user-settings/cloud-credentials/) Cloud credentials are stored as Kubernetes secrets.
You can use an existing cloud credential or create a new one. To create a new cloud credential,
1. Click **Add New.**
1. In the **Name** field, enter a name for your vSphere credentials.
1. In the **vCenter or ESXi Server** field, enter the vCenter or ESXi hostname/IP. ESXi is the virtualization platform where you create and run virtual machines and virtual appliances. vCenter Server is the service through which you manage multiple hosts connected in a network and pool host resources.
1. Optional: In the **Port** field, configure the port of the vCenter or ESXi server.
1. In the **Username** and **Password** fields, enter your vSphere login username and password.
1. Click **Create.**
**Result:** The node template has the credentials required to provision nodes in vSphere.
{{% /tab %}}
{{% tab "Rancher prior to v2.2.0" %}}
In the **Account Access** section, enter the vCenter FQDN or IP address and the credentials for the vSphere user account.
{{% /tab %}}
{{% /tabs %}}
### B. Configure Node Scheduling
Choose the hypervisor that the virtual machine will be scheduled on. The configuration options depend on your version of Rancher.
{{% tabs %}}
{{% tab "Rancher v2.3.3+" %}}
The fields in the **Scheduling** section should auto-populate with the data center and other scheduling options that are available to you in vSphere.
1. In the **Data Center** field, choose the data center where the VM will be scheduled.
1. Optional: Select a **Resource Pool.** Resource pools can be used to partition available CPU and memory resources of a standalone host or cluster, and they can also be nested.
1. If you have a data store cluster, you can toggle the **Data Store** field. This lets you select a data store cluster where your VM will be scheduled. If the field is not toggled, you can select an individual datastore.
1. Optional: Select a folder where the VM will be placed. The VM folders in this dropdown menu directly correspond to your VM folders in vSphere. Note: The folder name should be prefaced with `vm/` in your vSphere config file.
1. Optional: Choose a specific host to create the VM on. Leave this field blank for a standalone ESXi or for a cluster with DRS (Distributed Resource Scheduler). If specified, the host system's pool will be used and the **Resource Pool** parameter will be ignored.
{{% /tab %}}
{{% tab "Rancher prior to v2.3.3" %}}
In the **Scheduling** section, enter:
- The name/path of the **Data Center** to create the VMs in
- The name of the **VM Network** to attach to
- The name/path of the **Datastore** to store the disks in
![image]({{< baseurl >}}/img/rancher/vsphere-node-template-2.png)
{{% /tab %}}
{{% /tabs %}}
### C. Configure Instances and Operating Systems
The instances are configured differently depending on your Rancher version.
{{% tabs %}}
{{% tab "Rancher v2.3.3+" %}}
In this section, configure the number of vCPUs, memory, and disk size for the VMs created by this template.
In the **Creation method** field, you will configure the method for setting up an operating system on the node. The operating system can be installed from an ISO or from a VM template.
[VM templates](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-F7BF0E6B-7C4F-4E46-8BBF-76229AEA7220.html) are useful for setting up the operating system and other software because they allow you to save time. For example, you could use a VM template to automatically install Kubernetes and Docker on each node. You can choose VM templates defined in a vSphere data center or in a content library.
The node can be created with any operating system that supports `cloud-init`.
Choose the way that the VM will be created:
- **Deploy from template: Data Center:** Choose a template that exists in the data center that you selected.
- **Deploy from template: Content Library:** In the two fields that appear when you select this option, choose the [content library](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-254B2CE8-20A8-43F0-90E8-3F6776C2C896.html). Then select the [VM template](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-F7BF0E6B-7C4F-4E46-8BBF-76229AEA7220.html) from the list of templates within the content library. This template will be used to create the new VM.
- **Clone an existing virtual machine:** In the **Virtual machine** field, choose an existing VM that the new VM will be cloned from.
- **Install from boot2docker ISO:** Ensure that the OS ISO URL contains the URL of a VMware ISO release for RancherOS (`rancheros-vmware.iso`).
{{% /tab %}}
{{% tab "Rancher prior to v2.3.3" %}}
In the **Instance Options** section, configure the number of vCPUs, memory, and disk size for the VMs created by this template.
Only RancherOS VMs are supported.
Ensure that the [OS ISO URL](#instance-options) contains the URL of the VMware ISO release for RancherOS: `rancheros-vmware.iso`.
![image]({{< baseurl >}}/img/rancher/vsphere-node-template-1.png)
{{% /tab %}}
{{% /tabs %}}
### D. Add Networks
_Available as of v2.3.3_
The node template now allows a VM to be provisioned with multiple networks. In the **Networks** field, click **Add Network** to add any networks available to you in vSphere.
### E. If Not Already Enabled, Enable Disk UUIDs
In order to provision nodes with RKE, all nodes must be configured with disk UUIDs.
As of Rancher v2.0.4, disk UUIDs are enabled in vSphere node templates by default.
If you are using Rancher prior to v2.0.4, refer to these [instructions]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/#enabling-disk-uuids-with-a-node-template) for details on how to enable a UUID with a Rancher node template.
### F. Optional: Configure Node Tags and Custom Attributes
The way to attach metadata to the VM is different depending on your Rancher version.
{{% tabs %}}
{{% tab "Rancher v2.3.3+" %}}
**Optional:** Add vSphere tags and custom attributes. Tags allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects.
All of your vSphere tags show up as selectable options in the node template.
For custom attributes, Rancher lets you select any custom attributes that you have already set up in vSphere. Each custom attribute is a key, and you can enter a value for it.
> **Note:** Custom attributes are a legacy feature that will eventually be removed from vSphere. These attributes allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects.
{{% /tab %}}
{{% tab "Rancher prior to v2.3.3" %}}
**Optional:**
- Provide a set of configuration parameters (instance-options) for the VMs.
- Assign labels to the VMs that can be used as a base for scheduling rules in the cluster.
- Customize the configuration of the Docker daemon on the VMs that will be created.
> **Note:** Custom attributes are a legacy feature that will eventually be removed from vSphere. These attributes allow you to attach metadata to objects in the vSphere inventory to make it easier to sort and search for these objects.
{{% /tab %}}
{{% /tabs %}}
### G. Optional: Configure Cloud Init
[Cloud-init](https://cloud-init.io/) is a tool that applies user data to your nodes when they boot for the first time.
The configuration file for `cloud-init` is named `cloud-config.yml`. In the **Cloud Init** field, you can optionally enter a file name or URL pointing to a `cloud-config.yml` file.
You can use `cloud-init` to automate tasks that should happen when the instance boots, such as creating users, running shell commands, adding a load balancer, or preinstalling Kubernetes on the VM.
For examples of how to write a `cloud-config` file, refer to the [cloud-init documentation.](https://cloudinit.readthedocs.io/en/latest/topics/examples.html)
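As a minimal sketch (the user name, SSH key, and command are placeholders; the directives your guest OS supports may differ), a `cloud-config.yml` might look like:

```yaml
#cloud-config
# Create a user and run a command on first boot (placeholder values).
users:
  - name: example-user
    groups: [docker]
    ssh_authorized_keys:
      - ssh-rsa AAAA... admin@example.com
runcmd:
  - echo "node provisioned" >> /var/log/provision.log
```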
### H. Saving the Node Template
Assign a descriptive **Name** for this template and click **Create.**
### Node Template Configuration Reference
Refer to [this section]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/node-template-reference/) for a reference on the configuration options available for vSphere node templates.
# 2. Create a Kubernetes Cluster Using the Node Template
After you've created a template, you can use it to stand up the vSphere cluster itself.
To install Kubernetes on vSphere nodes, you will need to enable the vSphere cloud provider by modifying the cluster YAML file. This requirement applies both to pre-created [custom nodes]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/) and to nodes created in Rancher using the vSphere node driver.
To create the cluster and enable the vSphere provider for the cluster, follow these steps:
- [A. Set up the cluster name and member roles](#a-set-up-the-cluster-name-and-member-roles)
- [B. Configure Kubernetes options](#b-configure-kubernetes-options)
- [C. Add node pools to the cluster](#c-add-node-pools-to-the-cluster)
- [D. Optional: Add a self-healing node pool](#d-optional-add-a-self-healing-node-pool)
- [E. Create the cluster](#e-create-the-cluster)
### A. Set up the Cluster Name and Member Roles
1. Log in to the Rancher UI as an admin user.
2. Navigate to **Clusters** in the **Global** view.
3. Click **Add Cluster** and select the **vSphere** infrastructure provider.
4. Assign a **Cluster Name.**
5. Assign **Member Roles** as required. {{< step_create-cluster_member-roles >}}
> **Note:**
>
> If you have a cluster with DRS enabled, setting up [VM-VM Affinity Rules](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.resmgmt.doc/GUID-7297C302-378F-4AF2-9BD6-6EDB1E0A850A.html) is recommended. These rules allow VMs assigned the etcd and control-plane roles to operate on separate ESXi hosts when they are assigned to different node pools. This practice ensures that the failure of a single physical machine does not affect the availability of those planes.
### B. Configure Kubernetes Options
{{<step_create-cluster_cluster-options>}}
### C. Add Node Pools to the Cluster
{{<step_create-cluster_node-pools>}}
### D. Optional: Add a Self-Healing Node Pool
To make a node pool self-healing, enter a number greater than zero in the **Auto Replace** column. Rancher will use the node template for the given node pool to recreate the node if it becomes inactive for that number of minutes.
> **Note:** Self-healing node pools are designed to help you replace worker nodes for stateless applications. It is not recommended to enable node auto-replace on a node pool of master nodes or nodes with persistent volumes attached, because VMs are treated as ephemeral. When a node in a node pool loses connectivity with the cluster, its persistent volumes are destroyed, resulting in data loss for stateful applications.
### E. Create the Cluster
Click **Create** to start provisioning the VMs and Kubernetes services.
{{< result_create-cluster >}}
# 3. Optional: Provision Storage
For an example of how to provision storage in vSphere using Rancher, refer to the
[cluster administration section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere)
In order to provision storage in vSphere, the vSphere provider must be enabled.
### Enable the vSphere Cloud Provider for the Cluster
1. Set the **Cloud Provider** option to `Custom`.
![vsphere-node-driver-cloudprovider]({{< baseurl >}}/img/rancher/vsphere-node-driver-cloudprovider.png)
1. Click on **Edit as YAML**.
1. Insert the following structure to the pre-populated cluster YAML. As of Rancher v2.3+, this structure must be placed under `rancher_kubernetes_engine_config`. In versions prior to v2.3, it has to be defined as a top-level field. Note that the `name` *must* be set to `vsphere`.
```yaml
rancher_kubernetes_engine_config: # Required as of Rancher v2.3+
cloud_provider:
name: vsphere
vsphereCloudProvider:
[Insert provider configuration]
```
Rancher uses RKE (the Rancher Kubernetes Engine) to provision Kubernetes clusters. Refer to the [vSphere configuration reference in the RKE documentation]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/vsphere/config-reference/) for details about the properties of the `vsphereCloudProvider` directive.
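Expanding the placeholder, a populated configuration might look like the following sketch (the hostnames, credentials, and datacenter/datastore names are illustrative; refer to the RKE reference above for the authoritative schema):

```yaml
rancher_kubernetes_engine_config:
  cloud_provider:
    name: vsphere
    vsphereCloudProvider:
      virtual_center:
        vcenter.example.com:          # placeholder vCenter hostname
          user: vsphere-user
          password: vsphere-password
          port: 443
          datacenters: /dc-example
      workspace:
        server: vcenter.example.com
        datacenter: /dc-example
        default-datastore: datastore-example
        folder: vm-folder-example
```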
@@ -0,0 +1,41 @@
---
title: Creating Credentials in the vSphere Console
weight: 1
---
This section describes how to create a vSphere username and password. You will need to provide these vSphere credentials to Rancher, which allows Rancher to provision resources in vSphere.
The following table lists the permissions required for the vSphere user account:
| Privilege Group | Operations |
|:----------------------|:-----------------------------------------------------------------------|
| Datastore | AllocateSpace <br/> Browse <br/> FileManagement (Low level file operations) <br/> UpdateVirtualMachineFiles <br/> UpdateVirtualMachineMetadata |
| Network | Assign |
| Resource | AssignVMToPool |
| Virtual Machine | Config (All) <br/> GuestOperations (All) <br/> Interact (All) <br/> Inventory (All) <br/> Provisioning (All) |
The following steps create a role with the required privileges and then assign it to a new user in the vSphere console:
1. From the **vSphere** console, go to the **Administration** page.
2. Go to the **Roles** tab.
3. Create a new role. Give it a name and select the privileges listed in the permissions table above.
![image]({{< baseurl >}}/img/rancher/rancherroles1.png)
4. Go to the **Users and Groups** tab.
5. Create a new user. Fill out the form and then click **OK**. Make sure to note the username and password, because you will need it when configuring node templates in Rancher.
![image]({{< baseurl >}}/img/rancher/rancheruser.png)
6. Go to the **Global Permissions** tab.
7. Create a new Global Permission. Add the user you created earlier and assign it the role you created earlier. Click **OK**.
![image]({{< baseurl >}}/img/rancher/globalpermissionuser.png)
![image]({{< baseurl >}}/img/rancher/globalpermissionrole.png)
**Result:** You now have credentials that Rancher can use to manipulate vSphere resources.
---
title: Enabling Disk UUIDs in Node Templates
weight: 3
---
As of Rancher v2.0.4, disk UUIDs are enabled in vSphere node templates by default.
For Rancher prior to v2.0.4, we recommend configuring vSphere node templates to automatically enable disk UUIDs, because attached VMDKs must present a consistent UUID to the VM for disks to be mounted properly.
To enable disk UUIDs for all VMs created for a cluster,
1. Navigate to **Node Templates** in the Rancher UI while logged in as an admin user.
2. Add a new vSphere node template, or edit an existing one.
3. Under **Instance Options**, click **Add Parameter**.
4. Enter `disk.enableUUID` as the key, with a value of **TRUE**.
![vsphere-nodedriver-enable-uuid]({{< baseurl >}}/img/rke/vsphere-nodedriver-enable-uuid.png)
5. Click **Create** or **Save**.
**Result:** The disk UUID is enabled in the vSphere node template.
---
title: vSphere Node Template Configuration Reference
weight: 4
---
The tables below describe the configuration options available in the vSphere node template:
- [Account access](#account-access)
- [Instance options](#instance-options)
- [Scheduling options](#scheduling-options)
# Account Access
The account access parameters are different based on the Rancher version.
{{% tabs %}}
{{% tab "Rancher v2.2.0+" %}}
| Parameter | Required | Description |
|:----------------------|:--------:|:-----|
| Cloud Credentials | * | Your vSphere account access information, stored in a [cloud credential.]({{<baseurl>}}/rancher/v2.x/en/user-settings/cloud-credentials/) |
{{% /tab %}}
{{% tab "Rancher prior to v2.2.0" %}}
| Parameter | Required | Description |
|:------------------------|:--------:|:------------------------------------------------------------|
| vCenter or ESXi Server | * | IP or FQDN of the vCenter or ESXi server used for managing VMs. |
| Port | * | Port to use when connecting to the server. Defaults to `443`. |
| Username | * | vCenter/ESXi user to authenticate with the server. |
| Password | * | User's password. |
{{% /tab %}}
{{% /tabs %}}
# Instance Options
The options for creating and configuring an instance are different depending on your Rancher version.
{{% tabs %}}
{{% tab "Rancher v2.3.3+" %}}
| Parameter | Required | Description |
|:----------------|:--------:|:-----------|
| CPUs | * | Number of vCPUs to assign to VMs. |
| Memory | * | Amount of memory to assign to VMs. |
| Disk | * | Size of the disk (in MB) to attach to the VMs. |
| Creation method | * | The method for setting up an operating system on the node. The operating system can be installed from an ISO or from a VM template. Depending on the creation method, you will also have to specify a VM template, content library, existing VM, or ISO. For more information on creation methods, refer to the section on [configuring instances.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/#c-configure-instances-and-operating-systems) |
| Cloud Init | | URL of a `cloud-config.yml` file to provision VMs with. This file allows further customization of the operating system, such as network configuration, DNS servers, or system daemons. The operating system must support `cloud-init`. |
| Networks | | Name(s) of the network to attach the VM to. |
| Configuration Parameters used for guestinfo | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). |
{{% /tab %}}
{{% tab "Rancher prior to v2.3.3" %}}
| Parameter | Required | Description |
|:------------------------|:--------:|:------------------------------------------------------------|
| CPUs | * | Number of vCPUs to assign to VMs. |
| Memory | * | Amount of memory to assign to VMs. |
| Disk | * | Size of the disk (in MB) to attach to the VMs. |
| Cloud Init | | URL of a [RancherOS cloud-config]({{< baseurl >}}/os/v1.x/en/installation/configuration/) file to provision VMs with. This file allows further customization of the RancherOS operating system, such as network configuration, DNS servers, or system daemons.|
| OS ISO URL | * | URL of a RancherOS vSphere ISO file to boot the VMs from. You can find URLs for specific versions in the [RancherOS GitHub repo](https://github.com/rancher/os). |
| Configuration Parameters | | Additional configuration parameters for the VMs. These correspond to the [Advanced Settings](https://kb.vmware.com/s/article/1016098) in the vSphere console. Example use cases include providing RancherOS [guestinfo]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/cloud/vmware-esxi/#vmware-guestinfo) parameters or enabling disk UUIDs for the VMs (`disk.EnableUUID=TRUE`). |
{{% /tab %}}
{{% /tabs %}}
# Scheduling Options
The options for scheduling VMs to a hypervisor are different depending on your Rancher version.
{{% tabs %}}
{{% tab "Rancher v2.3.3+" %}}
| Parameter | Required | Description |
|:------------------------|:--------:|:-------|
| Data Center | * | Name/path of the datacenter to create VMs in. |
| Resource Pool | | Name of the resource pool to schedule the VMs in. Leave blank for standalone ESXi. If not specified, the default resource pool is used. |
| Data Store | * | If you have a data store cluster, you can toggle the **Data Store** field to select the data store cluster in which your VMs will be scheduled. If the field is not toggled, you can select an individual disk. |
| Folder | | Name of a folder in the datacenter to create the VMs in. Must already exist. The folder name should be prefaced with `vm/` in your vSphere config file. |
| Host | | The IP of the host system to schedule VMs in. If specified, the host system's pool will be used and the *Pool* parameter will be ignored. |
{{% /tab %}}
{{% tab "Rancher prior to v2.3.3" %}}
| Parameter | Required | Description |
|:------------------------|:--------:|:------------------------------------------------------------|
| Data Center | * | Name/path of the datacenter to create VMs in. |
| Pool | | Name/path of the resource pool to schedule the VMs in. If not specified, the default resource pool is used. |
| Host | | Name/path of the host system to schedule VMs in. If specified, the host system's pool will be used and the *Pool* parameter will be ignored. |
| Network | * | Name of the VM network to attach VMs to. |
| Data Store | * | Datastore to store the VM disks. |
| Folder | | Name of a folder in the datacenter to create the VMs in. Must already exist. The folder name should be prefaced with `vm/` in your vSphere config file. |
{{% /tab %}}
{{% /tabs %}}
---
title: vSphere Cloud Provider
weight: 254
---
In order to provision Kubernetes clusters in vSphere with the RKE CLI, you must enable the vSphere cloud provider.
The vSphere cloud provider must also be enabled in order to provision clusters with Rancher, which uses RKE as a library when provisioning [RKE clusters.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/)
The [vSphere Cloud Provider](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) interacts with VMware infrastructure (vCenter or standalone ESXi server) to provision and manage storage for persistent volumes in a Kubernetes cluster.
When provisioning Kubernetes using RKE CLI or using [RKE clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) in Rancher, the vSphere Cloud Provider can be enabled by configuring the `cloud_provider` directive in the cluster YAML file.
### Prerequisites
1. You'll need to have credentials of a vCenter/ESXi user account with privileges allowing the cloud provider to interact with the vSphere infrastructure to provision storage. Refer to [this document](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/vcp-roles.html) to create and assign a role with the required permissions in vCenter.
2. VMware Tools must be running in the Guest OS for all nodes in the cluster.
3. All nodes must be configured with disk UUIDs. This is required so that attached VMDKs present a consistent UUID to the VM, allowing the disk to be mounted properly. See [Enabling Disk UUIDs](#enabling-disk-uuids-for-vsphere-vms).
## Clusters provisioned with RKE CLI
To enable the vSphere Cloud Provider in the cluster, you must add the top-level `cloud_provider` directive to the cluster configuration file, set the `name` property to `vsphere` and add the `vsphereCloudProvider` directive containing the configuration matching your infrastructure. See the [configuration reference](#configuration-reference) for details on the available options.
## Clusters provisioned with Rancher
When provisioning clusters in Rancher using the [vSphere node driver]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/) or on pre-created [custom nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/), the cluster YAML file must be modified to enable the cloud provider.
1. Log in to the Rancher UI as admin user.
2. Navigate to **Clusters** in the **Global** view.
3. Click **Add Cluster** and select the **vSphere** infrastructure provider.
4. Assign a **Cluster Name**.
5. Assign **Member Roles** as required.
6. Expand **Cluster Options** and configure as required.
7. Set **Cloud Provider** option to `Custom`.
![vsphere-node-driver-cloudprovider]({{< baseurl >}}/img/rancher/vsphere-node-driver-cloudprovider.png)
8. Click on **Edit as YAML**
9. Insert the following top-level structure to the pre-populated cluster YAML. Note that the `name` *must* be set to `vsphere`. Refer to the [configuration reference](#configuration-reference) to learn about the properties of the `vsphereCloudProvider` directive.
```yaml
cloud_provider:
name: vsphere
vsphereCloudProvider:
[Insert provider configuration]
```
10. Configure the **Node Pools** per your requirements while ensuring to use a node template that enables disk UUIDs for the VMs (see [Enabling disk UUIDs for vSphere VMs](#enabling-disk-uuids-for-vsphere-vms) in the Annex).
11. Click on **Create** to start provisioning the VMs and Kubernetes services.
## Configuration Reference
> **Note:** This documentation reflects the new vSphere Cloud Provider configuration schema introduced in Kubernetes v1.9 which differs from previous versions.
The vSphere configuration options are divided into 5 groups:
* global
* virtual_center
* workspace
* disk
* network
### global
The `global` options define a common set of configuration parameters that are inherited by all vCenters specified under the `virtual_center` directive, unless they are explicitly overridden there.
Accordingly, the `global` directive accepts the same configuration options that are available under the `virtual_center` directive. It also accepts one parameter that can only be specified here:
| global Options | Type | Required | Description |
|:---------------:|:-------:|:---------:|:-----------------------------------------------------------------------------:|
| insecure-flag | boolean | | Set to **true** if the vCenter/ESXi uses a self-signed certificate. |
___
**Example:**
```yaml
(...)
global:
insecure-flag: true
```
### virtual_center
This configuration directive specifies the vCenters that are managing the nodes in the cluster. You must define at least one vCenter/ESXi server. If the nodes span multiple vCenters then all must be defined.
Each vCenter is defined by adding a new entry under the `virtual_center` directive with the vCenter IP or FQDN as the name. All required parameters must be provided for each vCenter unless they are already defined under the `global` directive.
| virtual_center Options | Type | Required | Description |
|:----------------------:|:--------:|:---------:|:-----------------------------------------------------------------------------:|
| user | string | * | vCenter/ESXi user used to authenticate with this server. |
| password | string | * | User's password. |
| port | string | | Port to use to connect to this server. Defaults to 443. |
| datacenters | string | * | Comma-separated list of all datacenters in which cluster nodes are running.|
| soap-roundtrip-count | uint | | Round tripper count for API requests to the vCenter (num retries = value - 1).|
> The following additional options (introduced in Kubernetes v1.11) are not yet supported in RKE.
| virtual_center Options | Type | Required | Description |
|:----------------------:|:--------:|:---------:|:-----------------------------------------------------------------------------:|
| secret-name | string | | Name of secret resource containing credential key/value pairs. Can be specified in lieu of user/password parameters.|
| secret-namespace | string | | Namespace in which the secret resource was created in. |
| ca-file | string | | Path to CA cert file used to verify the vCenter certificate. |
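For reference, under the upstream Kubernetes schema these secret-based options would look as follows. This is a sketch only, since RKE does not yet support them; the secret name and namespace are illustrative:

```yaml
(...)
virtual_center:
  vc.example.com:
    secret-name: vsphere-credentials   # illustrative secret resource name
    secret-namespace: kube-system      # namespace where the secret was created
    datacenters: eu-west-1
```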
___
**Example:**
```yaml
(...)
virtual_center:
172.158.111.1: {} # This vCenter inherits all its properties from the global options
172.158.110.2: # All required options are set explicitly
user: vc-user
password: othersecret
datacenters: eu-west-2
```
### workspace
This configuration group specifies how storage for volumes is created in vSphere.
The following configuration options are available:
| workspace Options | Type | Required | Description |
|:----------------------:|:--------:|:---------:|:-----------------------------------------------------------------------------:|
| server | string | * | IP or FQDN of the vCenter/ESXi that should be used for creating the volumes. Must match one of the vCenters defined under the `virtual_center` directive.|
| datacenter | string | * | Name of the datacenter that should be used for creating volumes. For ESXi enter *ha-datacenter*.|
| folder | string | * | Path of folder in which to create dummy VMs used for volume provisioning (relative from the root folder in vCenter), e.g. "kubernetes".|
| default-datastore | string | | Name of the default datastore to place VMDKs in if neither a datastore nor a storage policy is specified in the volume options of a PVC. If the datastore is located in a storage folder or is a member of a datastore cluster, specify the full path. |
| resourcepool-path | string | | Absolute or relative path to the resource pool where the dummy VMs for [Storage policy based provisioning](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/policy-based-mgmt.html) should be created. If a relative path is specified, it is resolved with respect to the datacenter's *host* folder. Examples: `/<dataCenter>/host/<hostOrClusterName>/Resources/<poolName>`, `Resources/<poolName>`. For standalone ESXi specify `Resources`.|
___
**Example:**
```yaml
(...)
workspace:
server: 172.158.111.1 # matches IP of vCenter defined in the virtual_center block
datacenter: eu-west-1
folder: kubernetes
default-datastore: ds-1
```
### disk
The following configuration options are available under the disk directive:
| disk Options | Type | Required | Description |
|:--------------------:|:--------:|:---------:|:-----------------------------------------------------------------------------:|
| scsicontrollertype | string | | SCSI controller type to use when attaching block storage to VMs. Must be one of: *lsilogic-sas* or *pvscsi*. Default: *pvscsi*.|
___
### network
The following configuration options are available under the network directive:
| network Options | Type | Required | Description |
|:-------------------:|:--------:|:---------:|:-----------------------------------------------------------------------------:|
| public-network | string | | Name of public **VM Network** to which the VMs in the cluster are connected. Used to determine public IP addresses of VMs.|
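Neither directive is required. A combined sketch of both, where the network name `VM Network` is vSphere's default port group name and is used here purely for illustration:

```yaml
(...)
disk:
  scsicontrollertype: pvscsi   # default; use lsilogic-sas for older guest OSes
network:
  public-network: VM Network   # illustrative port group name
```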
## Configuration Example
Given the following:
- VMs in the cluster are running in the same datacenter `eu-west-1` managed by the vCenter `vc.example.com`.
- The vCenter has a user `provisioner` with the password `secret` and the required roles assigned (see [Prerequisites](#prerequisites)).
- The vCenter has a datastore named `ds-1` which should be used to store the VMDKs for volumes.
- A `kubernetes` folder exists in vCenter.
The corresponding configuration for the provider would then be as follows:
```yaml
(...)
cloud_provider:
name: vsphere
vsphereCloudProvider:
virtual_center:
vc.example.com:
user: provisioner
password: secret
datacenters: eu-west-1
workspace:
server: vc.example.com
folder: kubernetes
default-datastore: ds-1
datacenter: eu-west-1
```
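With the cloud provider enabled, volumes can be provisioned dynamically through a StorageClass that uses the in-tree vSphere provisioner. A minimal sketch, where the class name is illustrative and `ds-1` matches the datastore in the example above:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-thin          # illustrative class name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin            # thin-provisioned VMDKs
  datastore: ds-1             # datastore where VMDKs are created
```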
## Annex
### Enabling disk UUIDs for vSphere VMs
Depending on whether you are provisioning the VMs using the [vSphere node driver]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere) in Rancher or using your own scripts or third-party tools, there are different methods available to enable disk UUIDs for VMs.
#### Using the vSphere Console
The required property can be set while creating or modifying VMs in the vSphere Console:
1. For each VM, navigate to the **VM Options** tab and click on **Edit Configuration**.
2. Add the parameter `disk.EnableUUID` with a value of **TRUE**.
![vsphere-advanced-parameters]({{< baseurl >}}/img/rke/vsphere-advanced-parameters.png)
#### Using the GOVC CLI tool
You can also modify properties of VMs with the [govc](https://github.com/vmware/govmomi/tree/master/govc) command-line tool to enable disk UUIDs:
```sh
$ govc vm.change -vm <vm-path> -e disk.enableUUID=TRUE
```
#### Using Rancher node template
When creating new clusters in Rancher using vSphere node templates, you can configure the template to automatically enable disk UUIDs for all VMs created for a cluster:
1. Navigate to **Node Templates** in the Rancher UI while logged in as an admin user.
2. Add a new vSphere node template, or edit an existing one.
3. Under **Instance Options**, click **Add Parameter**.
4. Enter `disk.enableUUID` as the key, with a value of **TRUE**.
![vsphere-nodedriver-enable-uuid]({{< baseurl >}}/img/rke/vsphere-nodedriver-enable-uuid.png)
5. Click **Create** or **Save**.
### Troubleshooting
If you are experiencing issues while provisioning a cluster with the vSphere Cloud Provider enabled, or while creating vSphere volumes for your workloads, you should inspect the logs of the following Kubernetes services:
- controller-manager (manages volumes in vCenter)
- kubelet (mounts vSphere volumes to pods)
If your cluster is not configured with external [Cluster Logging]({{< baseurl >}}/rancher/v2.x/en/tools/logging/), you will need to SSH into nodes to get the logs of the `kube-controller-manager` (running on one of the control plane nodes) and the `kubelet` (pertaining to the node where the stateful pod has been scheduled).
The easiest way to create an SSH session with a node is with the Rancher CLI tool.
1. [Configure the Rancher CLI]({{< baseurl >}}/rancher/v2.x/en/cli/) for your cluster.
2. Run the following command to get a shell to the corresponding nodes:
```sh
$ rancher ssh <nodeName>
```
3. Inspect the logs of the controller-manager and kubelet containers, looking for errors related to the vSphere cloud provider:
```sh
$ docker logs --since 15m kube-controller-manager
$ docker logs --since 15m kubelet
```
This section describes how to enable the vSphere cloud provider. You will need to use the `cloud_provider` directive in the cluster YAML file.
### Related Links
- [vSphere Storage for Kubernetes](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/)
- [Kubernetes Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
- **Configuration:** For details on vSphere configuration in RKE, refer to the [configuration reference.]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/vsphere/config-reference)
- **Troubleshooting:** For guidance on troubleshooting a cluster with the vSphere cloud provider enabled, refer to the [troubleshooting section.]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/vsphere/troubleshooting)
- **Storage:** If you are setting up storage, see the [official vSphere documentation on storage for Kubernetes,](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) or the [official Kubernetes documentation on persistent volumes.](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) If you are using Rancher, refer to the [Rancher documentation on provisioning storage in vSphere.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere)
- **For Rancher users:** Refer to the Rancher documentation on [creating vSphere Kubernetes clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere) and [provisioning storage.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere)
# Prerequisites
- **Credentials:** You'll need to have credentials of a vCenter/ESXi user account with privileges allowing the cloud provider to interact with the vSphere infrastructure to provision storage. Refer to [this document](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/vcp-roles.html) to create and assign a role with the required permissions in vCenter.
- **VMware Tools** must be running in the Guest OS for all nodes in the cluster.
- **Disk UUIDs:** All nodes must be configured with disk UUIDs. This is required so that attached VMDKs present a consistent UUID to the VM, allowing the disk to be mounted properly. See the section on [enabling disk UUIDs.]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/vsphere/enabling-uuid)
# Enabling the vSphere Provider with the RKE CLI
To enable the vSphere Cloud Provider in the cluster, you must add the top-level `cloud_provider` directive to the cluster configuration file, set the `name` property to `vsphere` and add the `vsphereCloudProvider` directive containing the configuration matching your infrastructure. See the [configuration reference]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/vsphere/config-reference) for details on the available options.
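A minimal sketch of the resulting cluster YAML, where the server name, credentials, and paths are placeholders to be replaced with values matching your infrastructure:

```yaml
# cluster.yml (sketch) - fill in the vsphereCloudProvider body
# per the configuration reference; all values below are placeholders.
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    virtual_center:
      vc.example.com:
        user: provisioner
        password: secret
        datacenters: eu-west-1
    workspace:
      server: vc.example.com
      datacenter: eu-west-1
      folder: vm/kubernetes
```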
---
title: vSphere Configuration Reference
weight: 3
---
This section shows an example of how to configure the vSphere cloud provider.
The vSphere cloud provider must be enabled to allow dynamic provisioning of volumes.
For more details on deploying a Kubernetes cluster on vSphere, refer to the [official cloud provider documentation.](https://cloud-provider-vsphere.sigs.k8s.io/tutorials/kubernetes-on-vsphere-with-kubeadm.html)
> **Note:** This documentation reflects the new vSphere Cloud Provider configuration schema introduced in Kubernetes v1.9 which differs from previous versions.
# vSphere Configuration Example
Given the following:
- VMs in the cluster are running in the same datacenter `eu-west-1` managed by the vCenter `vc.example.com`.
- The vCenter has a user `provisioner` with the password `secret` and the required roles assigned (see [Prerequisites](#prerequisites)).
- The vCenter has a datastore named `ds-1` which should be used to store the VMDKs for volumes.
- A `vm/kubernetes` folder exists in vCenter.
The corresponding configuration for the provider would then be as follows:
```yaml
(...)
cloud_provider:
name: vsphere
vsphereCloudProvider:
virtual_center:
vc.example.com:
user: provisioner
password: secret
port: 443
        datacenters: /eu-west-1
    workspace:
      server: vc.example.com
      folder: /eu-west-1/vm/kubernetes
      default-datastore: /eu-west-1/datastore/ds-1
      datacenter: /eu-west-1
      resourcepool-path: /eu-west-1/host/hn1/Resources/myresourcepool
```
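With the cloud provider enabled, volumes can be provisioned dynamically through a StorageClass that uses the in-tree vSphere provisioner. A minimal sketch, where the class name is illustrative and `ds-1` matches the datastore in the example above:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-thin          # illustrative class name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin            # thin-provisioned VMDKs
  datastore: ds-1             # datastore where VMDKs are created
```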
# Configuration Options
The vSphere configuration options are divided into 5 groups:
* [global](#global)
* [virtual_center](#virtual_center)
* [workspace](#workspace)
* [disk](#disk)
* [network](#network)
### global
The `global` options define a common set of configuration parameters that are inherited by all vCenters specified under the `virtual_center` directive, unless they are explicitly overridden there.
Accordingly, the `global` directive accepts the same configuration options that are available under the `virtual_center` directive. It also accepts one parameter that can only be specified here:
| global Options | Type | Required | Description |
|:---------------:|:-------:|:---------:|:---------|
| insecure-flag | boolean | | Set to **true** if the vCenter/ESXi uses a self-signed certificate. |
**Example:**
```yaml
(...)
global:
insecure-flag: true
```
### virtual_center
This configuration directive specifies the vCenters that are managing the nodes in the cluster. You must define at least one vCenter/ESXi server. If the nodes span multiple vCenters then all must be defined.
Each vCenter is defined by adding a new entry under the `virtual_center` directive with the vCenter IP or FQDN as the name. All required parameters must be provided for each vCenter unless they are already defined under the `global` directive.
| virtual_center Options | Type | Required | Description |
|:----------------------:|:--------:|:---------:|:-----------|
| user | string | * | vCenter/ESXi user used to authenticate with this server. |
| password | string | * | User's password. |
| port | string | | Port to use to connect to this server. Defaults to 443. |
| datacenters | string | * | Comma-separated list of all datacenters in which cluster nodes are running. |
| soap-roundtrip-count | uint | | Round tripper count for API requests to the vCenter (num retries = value - 1). |
> The following additional options (introduced in Kubernetes v1.11) are not yet supported in RKE.
| virtual_center Options | Type | Required | Description |
|:----------------------:|:--------:|:---------:|:-------|
| secret-name | string | | Name of secret resource containing credential key/value pairs. Can be specified in lieu of user/password parameters.|
| secret-namespace | string | | Namespace in which the secret resource was created in. |
| ca-file | string | | Path to CA cert file used to verify the vCenter certificate. |
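For reference, under the upstream Kubernetes schema these secret-based options would look as follows. This is a sketch only, since RKE does not yet support them; the secret name and namespace are illustrative:

```yaml
(...)
virtual_center:
  vc.example.com:
    secret-name: vsphere-credentials   # illustrative secret resource name
    secret-namespace: kube-system      # namespace where the secret was created
    datacenters: eu-west-1
```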
**Example:**
```yaml
(...)
virtual_center:
172.158.111.1: {} # This vCenter inherits all its properties from the global options
172.158.110.2: # All required options are set explicitly
user: vc-user
password: othersecret
datacenters: eu-west-2
```
### workspace
This configuration group specifies how storage for volumes is created in vSphere.
The following configuration options are available:
| workspace Options | Type | Required | Description |
|:----------------------:|:--------:|:---------:|:---------|
| server | string | * | IP or FQDN of the vCenter/ESXi that should be used for creating the volumes. Must match one of the vCenters defined under the `virtual_center` directive.|
| datacenter | string | * | Name of the datacenter that should be used for creating volumes. For ESXi enter *ha-datacenter*.|
| folder | string | * | Path of folder in which to create dummy VMs used for volume provisioning (relative from the root folder in vCenter), e.g. "vm/kubernetes".|
| default-datastore | string | | Name of the default datastore to place VMDKs in if neither a datastore nor a storage policy is specified in the volume options of a PVC. If the datastore is located in a storage folder or is a member of a datastore cluster, specify the full path. |
| resourcepool-path | string | | Absolute or relative path to the resource pool where the dummy VMs for [Storage policy based provisioning](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/policy-based-mgmt.html) should be created. If a relative path is specified, it is resolved with respect to the datacenter's *host* folder. Examples: `/<dataCenter>/host/<hostOrClusterName>/Resources/<poolName>`, `Resources/<poolName>`. For standalone ESXi specify `Resources`. |
**Example:**
```yaml
(...)
workspace:
server: 172.158.111.1 # matches IP of vCenter defined in the virtual_center block
datacenter: eu-west-1
folder: vm/kubernetes
default-datastore: ds-1
```
### disk
The following configuration options are available under the disk directive:
| disk Options | Type | Required | Description |
|:--------------------:|:--------:|:---------:|:----------------|
| scsicontrollertype | string | | SCSI controller type to use when attaching block storage to VMs. Must be one of: *lsilogic-sas* or *pvscsi*. Default: *pvscsi*. |
### network
The following configuration options are available under the network directive:
| network Options | Type | Required | Description |
|:-------------------:|:--------:|:---------:|:-----------------------------------------------------------------------------|
| public-network | string | | Name of public **VM Network** to which the VMs in the cluster are connected. Used to determine public IP addresses of VMs.|
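Neither directive is required. A combined sketch of both, where the network name `VM Network` is vSphere's default port group name and is used here purely for illustration:

```yaml
(...)
disk:
  scsicontrollertype: pvscsi   # default; use lsilogic-sas for older guest OSes
network:
  public-network: VM Network   # illustrative port group name
```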
---
title: Enabling Disk UUIDs for vSphere VMs
weight: 2
---
In order to provision nodes with RKE, all nodes must be configured with disk UUIDs. This is required so that attached VMDKs present a consistent UUID to the VM, allowing the disk to be mounted properly.
Depending on whether you are provisioning the VMs using the [vSphere node driver]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere) in Rancher or using your own scripts or third-party tools, there are different methods available to enable disk UUIDs for VMs:
- [Using the vSphere console](#using-the-vsphere-console)
- [Using the GOVC CLI tool](#using-the-govc-cli-tool)
- [Using a Rancher node template](#using-a-rancher-node-template)
### Using the vSphere Console
The required property can be set while creating or modifying VMs in the vSphere Console:
1. For each VM, navigate to the **VM Options** tab and click on **Edit Configuration**.
2. Add the parameter `disk.EnableUUID` with a value of **TRUE**.
![vsphere-advanced-parameters]({{< baseurl >}}/img/rke/vsphere-advanced-parameters.png)
### Using the GOVC CLI tool
You can also modify properties of VMs with the [govc](https://github.com/vmware/govmomi/tree/master/govc) command-line tool to enable disk UUIDs:
```sh
$ govc vm.change -vm <vm-path> -e disk.enableUUID=TRUE
```
### Using a Rancher Node Template
In Rancher v2.0.4+, disk UUIDs are enabled in vSphere node templates by default.
If you are using Rancher prior to v2.0.4, refer to the [Rancher documentation]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/#enabling-disk-uuids-with-a-node-template) for details on how to enable disk UUIDs with a Rancher node template.
---
title: Troubleshooting vSphere Clusters
weight: 4
---
If you are experiencing issues while provisioning a cluster with the vSphere Cloud Provider enabled, or while creating vSphere volumes for your workloads, you should inspect the logs of the following Kubernetes services:
- controller-manager (manages volumes in vCenter)
- kubelet (mounts vSphere volumes to pods)
If your cluster is not configured with external [Cluster Logging]({{< baseurl >}}/rancher/v2.x/en/tools/logging/), you will need to SSH into nodes to get the logs of the `kube-controller-manager` (running on one of the control plane nodes) and the `kubelet` (pertaining to the node where the stateful pod has been scheduled).
The easiest way to create an SSH session with a node is with the Rancher CLI tool.
1. [Configure the Rancher CLI]({{< baseurl >}}/rancher/v2.x/en/cli/) for your cluster.
2. Run the following command to get a shell to the corresponding nodes:
```sh
$ rancher ssh <nodeName>
```
3. Inspect the logs of the controller-manager and kubelet containers, looking for errors related to the vSphere cloud provider:
```sh
$ docker logs --since 15m kube-controller-manager
$ docker logs --since 15m kubelet
```