mirror of
https://github.com/rancher/rancher-docs.git
synced 2026-04-17 03:45:39 +00:00
updating storage docs
@@ -2,6 +2,28 @@
title: Users, Global Permissions, and Roles
weight: 15
---
In This Document:

<!-- TOC -->

- [Users and Roles](#users-and-users)
- [Global Permissions](#global-permissions)
  - [Global Permission Assignment](#global-permission-assignment)
  - [Custom Global Permissions](#custom-global-permissions)
  - [Global Permissions Reference](#global-permissions-reference)
- [Cluster and Project Roles](#cluster-and-project-roles)
  - [Membership and Role Assignment](#membership-and-role-assignment)
  - [Cluster Roles](#cluster-roles)
    - [Custom Cluster Roles](#custom-cluster-roles)
    - [Cluster Role Reference](#cluster-role-reference)
  - [Project Roles](#project-roles)
    - [Custom Project Roles](#custom-project-roles)
    - [Project Role Reference](#project-role-reference)
- [Defining Custom Roles](#defining-custom-roles)
  - [Locked Roles](#locked-roles)

<!-- /TOC -->

Within Rancher, each person authenticates as a _user_, which is a login that grants you access to Rancher. As mentioned in [Authentication]({{< baseurl >}}/rancher/v2.x/en/concepts/global-configuration/authentication), users can either be local or external.
@@ -4,13 +4,28 @@ weight: 3500
draft: true
---
In This Document:

<!-- TOC -->

- [Adding a Persistent Volume](#adding-a-persistent-volume)
- [Adding Storage Classes](#adding-storage-classes)
- [What's Next?](#whats-next)

<!-- /TOC -->

>**Prerequisite:** Completing the tasks on this page requires the `Manage Volumes` [role](../../../concepts/global-configuration/users-permissions-roles/#project-role-reference).

## Adding a Persistent Volume

Your containers can store data on themselves, but if a container fails, that data is lost. To solve this issue, Kubernetes offers _persistent volumes_, which are external storage disks or file systems that your containers can access. If a container crashes, its replacement container can access the data in a persistent volume without any data loss.

Persistent volumes can either be a disk or file system that you host on premises, or they can be hosted by a vendor, such as Amazon EBS or Azure Disk.

>**Prerequisites:**
>
>- Create a storage volume either on premises or in the cloud, using one of the vendor services listed in [Types of Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes).
>- Gather metadata about your storage volume after you create it. You'll need to enter this information into Rancher.

1. From the **Global** view, open the cluster running the containers that you want to add persistent volume storage to.
@@ -22,29 +37,34 @@ Your containers can store data on themselves, but if a container fails, that dat

1. Select the **Volume Plugin** for the disk type or service that you're using.

   >**Note:** You can only use the `Amazon EBS Disk` volume plugin in an Amazon EKS or Amazon EC2 cluster.

1. Enter the **Capacity** of your volume in gigabytes.

1. Complete the **Plugin Configuration** form. Each plugin type requires information specific to the vendor or disk type. For more information about each plugin's form, refer to the reference table below.

1. **Optional:** Complete the **Customize** form. This form features:

   - [Access Modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes):

     This option sets how many nodes can access the volume, along with the node read/write permissions. The [Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) includes a table that lists which access modes are supported by the available plugins.

   - [Mount Options](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options):

     Each volume plugin allows you to specify additional command line options during the mounting process. You can enter these options in the **Mount Option** fields. Consult each plugin's vendor documentation for the available mount options.

   - **Assign to Storage Class:**

     If you later want to automatically provision persistent volumes identical to the volume that you've specified here, assign it a storage class. Later, when you create a workload that includes persistent volume claims, Rancher automatically provisions a persistent volume for each container with a claim.

     >**Note:** You must [add a storage class](#adding-storage-classes) before you can assign it to a persistent volume.

1. Click **Save**.

**Result:** Your new persistent volume is created.

For example, this volume is a _hostPath_ volume in Kubernetes:

```
> kubectl get pv
@@ -71,62 +91,25 @@ status:

## Adding Storage Classes

_Storage Classes_ allow you to dynamically provision persistent volumes on demand. Think of storage classes as storage profiles that are created automatically upon a request (which is known as a _persistent volume claim_).

To add a storage class in Rancher:

1. From the **Global** view, open the cluster for which you want to dynamically provision persistent storage volumes.

1. From the main menu, select `Storage > Storage Classes`. Click `Add Class`.

1. Enter a `Name` for your storage class.

1. From the `Provisioner` drop-down, select the service that you want to use to dynamically provision storage volumes.

1. From the `Parameters` section, fill out the information required for the service to dynamically provision storage volumes. Each provisioner requires different information. Consult the service's documentation for help on how to obtain this information.

1. Click `Save`.

## What's Next?

Mount Persistent Volumes to workloads so that your applications can store their data. You can mount either a manually created Persistent Volume or a dynamically created Persistent Volume, which is created from a Storage Class.

You can mount Persistent Volumes in one of two contexts:

- During deployment of a workload (recommended if possible). For more information, see [Deploying Workloads](../../workloads/deploy-workloads/).
- Following workload creation. For more information, see [Adding Persistent Volume Claims](../../workloads/add-persistent-volume-claim/).
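Behind the form, the storage class corresponds to a Kubernetes `StorageClass` object. A minimal sketch of what the steps above produce for the Amazon EBS provisioner follows; the class name and parameter values here are illustrative assumptions, not required values:

```shell
# Sketch of the StorageClass behind the "Add Class" form, assuming the
# Amazon EBS provisioner; the name and parameters are example values.
cat > storageclass.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2                      # the Name entered in the form
provisioner: kubernetes.io/aws-ebs   # the selected Provisioner
parameters:
  type: gp2                          # EBS volume type (gp2, io1, st1, sc1)
  encrypted: "false"                 # whether the volume is encrypted
EOF

# On a live cluster you would then apply and list it:
# kubectl apply -f storageclass.yaml
# kubectl get storageclass
```

Once applied, any persistent volume claim that names this storage class triggers dynamic provisioning of a matching volume.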
@@ -0,0 +1,74 @@
---
title: Provisioning NFS Storage
weight: 3500
draft: true
---

Before you can use the NFS storage volume plug-in with Rancher deployments, you need to provision an NFS server.

>**Note:**
>
>- If you already have an NFS share, you don't need to provision a new NFS server to use the NFS volume plugin within Rancher. Instead, skip the rest of this procedure and complete [Adding Storage](../..).
>
>- This procedure demonstrates how to set up an NFS server using Ubuntu, although you should be able to use these instructions for other Linux distros (e.g. Debian, RHEL, Arch Linux, etc.). For official instructions on how to create an NFS server using another Linux distro, consult the distro's documentation.

>**Prerequisites:**
>
>- To simplify the process of managing firewall rules, use NFSv4.
>
>- If using a firewall and NFSv4, open port 2049. Other versions of NFS will most likely need ports 111 and 2049, among others, to be open.

To examine the ports being used by NFS, execute the following command:

```
rpcinfo -p | grep nfs
```

To install the NFS server, execute the following command:

```
sudo apt-get install nfs-kernel-server
```

As a simple example, a **/nfs** directory will be created in the **root** of the host. To permit access to the directory, the **nobody:nogroup** owner and group will be used.

```
mkdir -p /nfs && chown nobody:nogroup /nfs
```

The final step is to create the NFS exports table. This is where you specify the paths on the host that you would like to expose to NFS clients.

Edit the **/etc/exports** file and add the **/nfs** directory. This example allows three nodes to connect to the share.

>**Note:** You can replace the individual nodes with a subnet such as **10.212.50.12/24**.

```
/nfs 159.89.139.111(rw,sync,no_subtree_check) \
     159.65.102.218(rw,sync,no_subtree_check) \
     159.65.102.232(rw,sync,no_subtree_check)
```

Make sure that all entries are on one line, separated by spaces. The example above is split across multiple lines for readability, but this will not work in the actual **/etc/exports** file.

Update the NFS exports table by issuing the following command:

```
exportfs -ra
```
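Since the exports entry must end up on a single line, it can help to build it with a small script rather than typing it by hand. A sketch using the three example node IPs from this page (written to a scratch file here instead of the real **/etc/exports**):

```shell
# Build the single-line /etc/exports entry from a list of client IPs.
# The IPs are the three example nodes used on this page.
OPTS='(rw,sync,no_subtree_check)'
NODES='159.89.139.111 159.65.102.218 159.65.102.232'
ENTRY='/nfs'
for ip in $NODES; do
  ENTRY="$ENTRY ${ip}${OPTS}"
done

# Write to a scratch file for inspection; on the real server you would
# append this line to /etc/exports and then run: sudo exportfs -ra
echo "$ENTRY" > exports.out
cat exports.out
```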
@@ -3,4 +3,41 @@ title: Adding a Persistent Volume Claim
weight:
draft: true
---

_Persistent Volume Claims_ (or PVCs) are objects that request storage resources from your cluster. They're similar to a voucher that your deployment can redeem for storage access. When you create a deployment, you should usually attach a PVC so that your application can lay claim to persistent storage. This claim lets your application store its data in an external location, so that if one of the application's containers fails, it can be replaced with a new container and continue accessing its externally stored data, as though an outage never occurred.

- Rancher lets you create as many PVCs within a project as you'd like.
- You can mount PVCs to a deployment as you create it, or later, after it's running.
- Each Rancher project contains a list of the PVCs that you've created, available from the **Volumes** tab. You can reuse these PVCs when creating deployments in the future.

>**Prerequisite:**
> You must have a pre-provisioned [persistent volume](../../clusters/adding-storage/#adding-a-persistent-volume) available for use, or you must have a [storage class created](../../clusters/adding-storage/adding-storage-classes) that dynamically creates a volume upon request from the workload.

1. From the **Global** view, open the project containing a workload that you want to add a PVC to.

1. From the main menu, make sure that **Workloads** is selected. Then select the **Volumes** tab. Click **Add Volume**.

1. Enter a **Name** for the volume claim.

1. Select the **Namespace** of the volume claim.

1. Select a **Source** option:

   - **To dynamically provision a storage volume for the deployment:**

     1. Choose **Use a Storage Class to provision a new persistent volume**.

     1. From the **Storage Class** drop-down, choose a pre-created storage class.

     1. Enter a volume **Capacity**.

   - **To use an existing persistent volume:**

     1. Choose **Use an existing persistent volume**.

     1. From the **Persistent Volume** drop-down, choose a pre-created persistent volume.

1. **Optional:** From **Customize**, select the [Access Modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) that you want to use.

**Result:** Your PVC is created. You can now attach it to any workload in the project.
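Under the hood, the form above produces a Kubernetes `PersistentVolumeClaim` object. A sketch for the "use an existing persistent volume" case; all names and sizes here (`ebsvolumeclaim`, `ebsdisk`, `10Gi`) are illustrative assumptions:

```shell
# Sketch of the PersistentVolumeClaim created by the "Add Volume" form,
# assuming an existing persistent volume named "ebsdisk"; every name and
# value below is an example, not a required setting.
cat > pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebsvolumeclaim       # the Name entered in the form
  namespace: default         # the selected Namespace
spec:
  accessModes:
    - ReadWriteOnce          # chosen under Customize > Access Modes
  resources:
    requests:
      storage: 10Gi          # must fit within the volume's capacity
  volumeName: ebsdisk        # binds the claim to the existing volume
EOF

# On a live cluster: kubectl apply -f pvc.yaml && kubectl get pvc
```

Omitting `volumeName` and setting `storageClassName` instead corresponds to the dynamic-provisioning path of the form.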
@@ -33,7 +33,16 @@ Deploy a workload to run an application in one or more containers.

- **Scaling/Upgrade Policy**

>**Amazon Note for Volumes:**
>
> To mount an Amazon EBS volume:
>
>- In [Amazon AWS](https://aws.amazon.com/), the nodes must be in the same Availability Zone and possess IAM permissions to attach and detach volumes.
>- The cluster must be using the AWS cloud provider.

1. Click **Show Advanced Options** and configure:

   - **Command**
   - **Networking**
   - **Labels & Annotations**
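The IAM requirement in the Amazon note above can be met with a policy along these lines. This is a hedged sketch: the action list is a typical minimum for attaching and detaching EBS volumes, not an official Rancher-prescribed policy, and your environment may need a broader or narrower set:

```shell
# Hypothetical IAM policy granting the nodes permission to attach and
# detach EBS volumes; the action list is an assumption, adjust as needed.
cat > ebs-attach-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DescribeVolumes",
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Validate the JSON locally before creating the policy with the AWS CLI:
python3 -m json.tool ebs-attach-policy.json > /dev/null && echo OK
```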