Merge pull request #825 from rancher/staging

2.1 Features
This commit is contained in:
Denise
2018-10-05 16:12:47 -07:00
committed by GitHub
69 changed files with 2559 additions and 647 deletions
@@ -1,63 +1,85 @@
---
title: API Audit Log
weight: 10000
aliases:
- /rancher/v2.x/en/installation/api-auditing
---
You can enable the API audit log to record the sequence of system events initiated by individual users. You can know what happened, when it happened, who initiated it, and what cluster it affected. When you enable this feature, all requests to the Rancher API and all responses from it are written to a log.
## Enabling API Audit Log
The audit log is enabled and configured by passing environment variables to the Rancher server container, either during Rancher installation or upgrade. See the following for instructions on enabling it for your installation type:
- [Single Node Install - Enable API Audit Log]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#enable-api-audit-log)
- [HA Install - Enable API Audit Log]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#enable-api-audit-log)
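For an HA install, the chart exposes the audit settings as Helm values. A minimal sketch, assuming the `auditLog.*` value names documented on the chart options page linked above (verify the keys against your chart version):
```
helm install rancher-stable/rancher --name rancher --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set auditLog.level=1 \
  --set auditLog.maxAge=20
```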
## API Audit Log Options
The following environment variables define what the audit log records and what data it includes:
Parameter | Description |
---------|----------|
<a id="audit-level"></a>`AUDIT_LEVEL` | `0` - Disable audit log (default setting).<br/>`1` - Log event metadata.<br/>`2` - Log event metadata and request body.</br>`3` - Log event metadata, request body, and response body. Each log transaction for a request/response pair uses the same `auditID` value.<br/><br/>See [Audit Level Logging](#audit-level-logging) for a table that displays what each setting logs. |
`AUDIT_LOG_PATH` | Log path for Rancher Server API. Default path is `/var/log/auditlog/rancher-api-audit.log`. You can mount the log directory to host. <br/><br/>Usage Example: `AUDIT_LOG_PATH=/my/custom/path/`<br/> |
`AUDIT_LOG_MAXAGE` | Defines the maximum number of days to retain old audit log files. Default is 10 days. |
`AUDIT_LOG_MAXBACKUP` | Defines the maximum number of audit log files to retain. Default is 10. |
`AUDIT_LOG_MAXSIZE` | Defines the maximum size in megabytes of the audit log file before it gets rotated. Default size is 100M. |
<br/>
### Audit Log Levels
The following table displays what parts of API transactions are logged for each [`AUDIT_LEVEL`](#audit-level) setting.
| `AUDIT_LEVEL` Setting | Request Metadata | Request Body | Response Metadata | Response Body |
| --------------------- | ------------------ | ------------ | ------------------- | ------------------- |
| `0` | | | | |
| `1` | ✓ | | | |
| `2` | ✓ | ✓ | | |
| `3` | ✓ | ✓ | ✓ | ✓ |
### Single Node Example
To enable the API audit log on a single node install, stop the Docker container that's running Rancher, and then restart it using the following command. The command includes environment variables that turn on API auditing; see [API Audit Log Options](#api-audit-log-options) for details on each one.
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
-v /root/var/log/auditlog:/var/log/auditlog \
-e AUDIT_LEVEL=1 \
-e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
-e AUDIT_LOG_MAXAGE=20 \
-e AUDIT_LOG_MAXBACKUP=20 \
-e AUDIT_LOG_MAXSIZE=100 \
rancher/rancher:latest
```
## Viewing API Audit Logs
### Single Node Install
Share the `AUDIT_LOG_PATH` directory (default: `/var/log/auditlog`) with the host system. The log can be parsed by standard CLI tools or forwarded to a log collection tool like Fluentd, Filebeat, or Logstash. For example, to page through the log:
```
less /var/log/auditlog/rancher-api-audit.log
```
### HA Install
Enabling the API Audit Log with the Helm chart install will create a `rancher-audit-log` sidecar container in the Rancher pod. This container will stream the log to standard output (stdout). You can view the log as you would any container log.
The `rancher-audit-log` container is part of the `rancher` pod in the `cattle-system` namespace.
#### CLI
```bash
kubectl -n cattle-system logs -f rancher-84d886bdbb-s4s69 rancher-audit-log
```
If you changed the `AUDIT_LOG_PATH` parameter, look in that location for `rancher-api-audit.log` instead.
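If you don't know the exact pod name, you can select the pod by label instead. This sketch assumes the chart's default `app: rancher` label; adjust the selector if your deployment is labeled differently:
```bash
kubectl -n cattle-system logs -l app=rancher -c rancher-audit-log
```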
#### Rancher Web GUI
1. From the context menu, select **Cluster: local > System**.
![Local Cluster: System Project]({{< baseurl >}}/img/rancher/audit_logs_gui/context_local_system.png)
1. From the **Workloads** tab, find the `cattle-system` namespace. Open the `rancher` workload by clicking its link.
![Rancher Workload]({{< baseurl >}}/img/rancher/audit_logs_gui/rancher_workload.png)
1. Pick one of the `rancher` pods and select **Ellipsis (...) > View Logs**.
![View Logs]({{< baseurl >}}/img/rancher/audit_logs_gui/view_logs.png)
1. From the **Logs** drop-down, select `rancher-audit-log`.
![Select Audit Log]({{< baseurl >}}/img/rancher/audit_logs_gui/rancher_audit_log_container.png)
#### Shipping the Audit Log
You can enable Rancher's built-in log collection and shipping for the cluster to ship the audit log, along with other services' logs, to a supported collection endpoint. See [Rancher Tools - Logging]({{< baseurl >}}/rancher/v2.x/en/tools/logging) for details.
## Audit Log Samples
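A hypothetical metadata-level (`AUDIT_LEVEL=1`) entry is sketched below; the field names follow the pattern used by Rancher's audit logger, but treat the exact schema as illustrative:
```
{
  "auditID": "30022177-9e2e-43d1-b0d0-06ef9d3db183",
  "requestURI": "/v3/schemas",
  "sourceIPs": ["10.10.0.10"],
  "user": {"name": "user-f4tt2", "group": ["system:authenticated"]},
  "verb": "GET",
  "stage": "RequestReceived",
  "stageTimestamp": "2018-10-05 16:12:47 -0700"
}
```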
@@ -16,17 +16,16 @@ This centralized user authentication is accomplished using the Rancher authentic
The Rancher authentication proxy integrates with the following external authentication services. The table below lists the version of Rancher in which each service first became available.
| Auth Service | Available as of |
| ------------------------------------------------------------------------------------------------ | ---------------- |
| [Microsoft Active Directory]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/ad/) | v2.0.0 |
| [GitHub]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/github/) | v2.0.0 |
| [Microsoft Azure AD]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/azure-ad/) | v2.0.3 |
| [FreeIPA]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/freeipa/) | v2.0.5 |
| [OpenLDAP]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/openldap/) | v2.0.5 |
| [Microsoft AD FS]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/) | v2.0.7 |
| [PingIdentity]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/ping-federate/) | v2.0.7 |
| [Keycloak]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/keycloak/) | v2.1.0 |
<br/>
However, Rancher also provides local authentication.
@@ -1,22 +1,21 @@
---
title: Configuring Keycloak (SAML)
weight: 1200
draft: true
---
_Available as of v2.1.0_
If your organization uses Keycloak Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.
>**Prerequisites:**
>
>- You must have a [Keycloak IdP Server](https://www.keycloak.org/docs/3.2/server_installation/index.html) configured.
>- Export a `metadata.xml` file from your IdP server. For instructions on creating a SAML client and finding your metadata (under the client's **Installation** tab), see the [Keycloak documentation](https://www.keycloak.org/docs/3.2/server_admin/topics/clients/client-saml.html).
1. From the **Global** view, select **Security > Authentication** from the main menu.
1. Select **Keycloak**.
1. Complete the **Configure Keycloak Account** form. Keycloak IdP lets you specify what data store you want to use. You can either add a database or use an existing LDAP server. For example, if you select your Active Directory (AD) server, the examples below describe how you can map AD attributes to fields within Rancher.
| Field | Description |
@@ -28,22 +27,23 @@ If your organization uses KeyCloak Identity Provider (IdP) for user authenticati
| Rancher API Host | The URL for your Rancher Server. |
| Private Key / Certificate | A key/certificate pair to create a secure shell between Rancher and your IdP. |
| IDP-metadata | The `metadata.xml` file that you exported from your IdP server. |
>**Tip:** You can generate a key/certificate pair using an openssl command. For example:
>
>     openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout myservice.key -out myservice.cert
1. After you complete the **Configure Keycloak Account** form, click **Authenticate with Keycloak**, which is at the bottom of the page.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Keycloak IdP to validate your Rancher Keycloak configuration.
>**Note:** You may have to disable your popup blocker to see the IdP login page.
**Result:** Rancher is configured to work with Keycloak. Your users can now sign into Rancher using their Keycloak logins.
>**Keycloak Identity Provider Caveats:**
>
>- SAML Protocol does not support search or lookup for users or groups. Therefore, there is no validation on users or groups when adding them to Rancher.
>- When adding users, the exact user IDs (i.e. `UID Field`) must be entered correctly. As you type the user ID, there will be no search for other user IDs that may match.
>- When adding groups, you *must* select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user.
> - The group drop-down shows *only* the groups that you are a member of. You will not be able to add groups that you are not a member of.
@@ -32,8 +32,10 @@ Setting up Microsoft AD FS with Rancher Server requires configuring AD FS on you
>**Active Directory Federation Service Caveats:**
>
>- SAML Protocol does not support search or lookup for users or groups. Therefore, there is no validation on users or groups when adding them to Rancher.
>- When adding users, the exact user IDs (i.e. `UID Field`) must be entered correctly. As you type the user ID, there will be no search for other user IDs that may match.
>- When adding groups, you *must* select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user.
> - The group drop-down shows *only* the groups that you are a member of. You will not be able to add groups that you are not a member of.
### [Next: Configuring Microsoft AD FS for Rancher]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup)
@@ -16,28 +16,28 @@ If your organization uses Ping Identity Provider (IdP) for user authentication,
1. Select **PingIdentity**.
1. Complete the **Configure Ping Account** form. Ping IdP lets you specify what data store you want to use. You can either add a database or use an existing LDAP server. For example, if you select your Active Directory (AD) server, the examples below describe how you can map AD attributes to fields within Rancher.
1. **Display Name Field**: Enter the AD attribute that contains the display name of users (example: `displayName`).
1. **User Name Field**: Enter the AD attribute that contains the user name/given name (example: `givenName`).
1. **UID Field**: Enter an AD attribute that is unique to every user (example: `sAMAccountName`, `distinguishedName`).
1. **Groups Field**: Make entries for managing group memberships (example: `memberOf`).
1. **Rancher API Host**: Enter the URL for your Rancher Server.
1. **Private Key** and **Certificate**: This is a key-certificate pair to create a secure shell between Rancher and your IdP.
You can generate one using an openssl command. For example:
```
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
```
1. **IDP-metadata**: The `metadata.xml` file that you [exported from your IdP server](https://documentation.pingidentity.com/pingfederate/pf83/index.shtml#concept_exportingMetadata.html).
1. After you complete the **Configure Ping Account** form, click **Authenticate with Ping**, which is at the bottom of the page.
Rancher redirects you to the IdP login page. Enter credentials that authenticate with Ping IdP to validate your Rancher PingIdentity configuration.
@@ -45,8 +45,9 @@ If your organization uses Ping Identity Provider (IdP) for user authentication,
**Result:** Rancher is configured to work with PingIdentity. Your users can now sign into Rancher using their PingIdentity logins.
>**Ping Identity Provider Caveats:**
>
>- SAML Protocol does not support search or lookup for users or groups. Therefore, there is no validation on users or groups when adding them to Rancher.
>- When adding users, the exact user IDs (i.e. `UID Field`) must be entered correctly. As you type the user ID, there will be no search for other user IDs that may match.
>- When adding groups, you *must* select the group from the drop-down that is next to the text box. Rancher assumes that any input from the text box is a user.
> - The group drop-down shows *only* the groups that you are a member of. You will not be able to add groups that you are not a member of.
@@ -60,6 +60,11 @@ _Project roles_ are roles that can be used to grant users access to a project. T
- **Read Only:**
These users can view everything in the project but cannot create, update, or delete anything.
><a id="caveat">**Caveat:**
>
>Users assigned the `Owner` or `Member` role for a project automatically inherit the `namespace creation` role. However, this role is a [Kubernetes ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole), meaning its scope extends to all projects in the cluster. Therefore, users explicitly assigned the `owner` or `member` role for a project can create namespaces in other projects they're assigned to, even with only the `Read Only` role assigned.
#### Custom Project Roles
@@ -142,4 +147,4 @@ When you revoke the cluster membership for a user that's explicitly assigned mem
- Access the projects they hold membership in.
- Exercise any [individual project roles](#project-role-reference) they are assigned.
If you want to completely revoke a user's access within a cluster, revoke both their cluster and project memberships.
@@ -0,0 +1,16 @@
---
title: Removing Rancher
weight: 5000
---
When you deploy Rancher and use it to provision clusters, Rancher installs its components on the nodes you use. This section describes how to remove Rancher components from nodes that you no longer want to use with Rancher.
There are two contexts in which you'd remove Rancher from a Kubernetes cluster node.
- [Removing Rancher Components from Rancher Launched Kubernetes Clusters]({{< baseurl >}}/rancher/v2.x/en/admin-settings/removing-rancher/user-cluster-nodes/)
In this context, you are removing Rancher components from Kubernetes clusters that you [launched using Rancher]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/).
- [Removing Rancher from Your Rancher Server Nodes]({{< baseurl >}}/rancher/v2.x/en/admin-settings/removing-rancher/rancher-cluster-nodes/)
In this context, you are removing Rancher from the Kubernetes cluster that you configured for your [Rancher installation]({{< baseurl >}}/rancher/v2.x/en/installation/ha/).
@@ -0,0 +1,81 @@
---
title: Removing Rancher from Your Rancher Server Nodes
weight: 2000
---
When you want to remove Rancher from your [installation cluster]({{< baseurl >}}/rancher/v2.x/en/installation/ha/) as part of a Rancher reinstall (or uninstall), follow the instructions below to download and run _system-tools_, a utility that removes all Rancher components from Rancher Server nodes provisioned by RKE.
### Download and Configuration
You can download the latest version of Rancher system-tools from its GitHub [releases page](https://github.com/rancher/system-tools/releases). Download the version of system-tools for the OS that you're running the tool from.
Operating System | File
-----------------|-----
MacOS | `system-tools_darwin-amd64`
Linux | `system-tools_linux-amd64`
Windows | `system-tools_windows-amd64.exe`
<br>
After you download the tools, complete the following actions:
1. Rename the file to `system-tools`.
1. Give the file executable permissions by running the following command:
```
chmod +x system-tools
```
1. Find the kubeconfig file that was generated during your Rancher installation, `kube_config_rancher-cluster.yml`. Move it to the `~/.kube` directory on your workstation if it isn't already there, creating the directory if it doesn't exist.
System-tools uses this file to access your installation cluster.
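For example, assuming the kubeconfig is in your current working directory:
```
mkdir -p ~/.kube
mv kube_config_rancher-cluster.yml ~/.kube/
```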
### Using the System-Tool
System-tools is a utility for running operational tasks on Rancher clusters. In this use case, it will help you remove Rancher from your installation nodes.
#### Usage
After you move the `system-tools` and kubeconfig file to your workstation's `~/.kube` directory, you can run system-tools by changing to the `~/.kube` directory and entering the following command.
>**Warning:** This command will remove data from your etcd nodes. Make sure you have created a [backup of etcd]({{< baseurl >}}/rancher/v2.x/en/backups/backups) before executing the command.
```
./system-tools remove --kubeconfig <$KUBECONFIG> --namespace <NAMESPACE>
```
<br/>
When you run this command, the components listed in [What Gets Removed?](#what-gets-removed) are deleted.
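For example, with the kubeconfig in its conventional location and the default namespace:
```
./system-tools remove --kubeconfig ~/.kube/kube_config_rancher-cluster.yml --namespace cattle-system
```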
##### Options
| Option | Description |
| ---------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- |
| `--kubeconfig <KUBECONFIG_PATH>, -c <KUBECONFIG_PATH>` | Absolute path to the cluster's kubeconfig file, usually `~/.kube/kube_config_rancher-cluster.yml`.<sup>1</sup> |
| `--namespace <NAMESPACE>, -n <NAMESPACE>` | Rancher 2.x deployment namespace. If no namespace is defined, the option defaults to `cattle-system`. |
| `--force` | Skips the interactive removal confirmation and removes the Rancher deployment without prompting. |
> <sup>1</sup> If you are working with multiple Kubernetes clusters, you can place `kube_config_rancher-cluster.yml` in another directory path and then set the `KUBECONFIG` environment variable to its path.
>```
>export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
>```
## What Gets Removed?
When removing Rancher from server nodes launched using RKE, the following components are deleted.
- The Rancher deployment namespace (`cattle-system` by default).
- Any `serviceAccount`, `clusterRoles`, and `clusterRoleBindings` that Rancher applied the `cattle.io/creator:norman` label to. Rancher applies this label to any resource that it creates as of v2.1.0.
- Labels, annotations, and finalizers.
- Rancher Deployment.
- Machines, clusters, projects, and user custom resource definitions (CRDs).
- All resources created under the `management.cattle.io` API group.
- All CRDs created by Rancher v2.0.x.
>**Using 2.0.8 or Earlier?**
>
>These versions of Rancher do not automatically delete the `serviceAccount`, `clusterRole`, and `clusterRoleBindings` resources after the job runs. You'll have to delete them yourself.
@@ -0,0 +1,265 @@
---
title: Removing Rancher Components from Rancher Launched Kubernetes Nodes
weight: 375
aliases:
- /rancher/v2.x/en/installation/removing-rancher/cleaning-cluster-nodes/
- /rancher/v2.x/en/installation/removing-rancher/
- /rancher/v2.x/en/faq/cleaning-cluster-nodes/
---
When you use Rancher to [launch nodes for a cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher), resources (containers/virtual network interfaces) and configuration items (certificates/configuration files) are created.
When you remove nodes from your Rancher-launched cluster (provided they are in the `Active` state), those resources are automatically cleaned up, and the only action needed is to restart the node. When a node has become unreachable and the automatic cleanup process cannot be used, this page describes the steps that must be executed before the node can be added to a cluster again.
## What Gets Removed?
When cleaning nodes provisioned using Rancher, the following components are deleted based on the type of cluster node you're removing.
| Removed Component | [IaaS Nodes][1] | [Custom Nodes][2] | [Hosted Cluster][3] | [Imported Nodes][4] |
| ------------------------------------------------------------------------------ | --------------- | ----------------- | ------------------- | ------------------- |
| The Rancher deployment namespace (`cattle-system` by default) | ✓ | ✓ | ✓ | ✓ |
| `serviceAccount`, `clusterRoles`, and `clusterRoleBindings` labeled by Rancher | ✓ | ✓ | ✓ | ✓ |
| Labels, Annotations, and Finalizers | ✓ | ✓ | ✓ | ✓ |
| Rancher Deployment | ✓ | ✓ | ✓ | |
| Machines, clusters, projects, and user custom resource definitions (CRDs)     | ✓               | ✓                 | ✓                   |                     |
| All resources created under the `management.cattle.io` API group               | ✓               | ✓                 | ✓                   |                     |
| All CRDs created by Rancher v2.0.x | ✓ | ✓ | ✓ | |
[1]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/
[2]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/
[3]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/
[4]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/
## Removing a Node from a Cluster Using the Rancher UI
When the node is in `Active` state, removing the node from a cluster will trigger a process to clean up the node. Please restart the node after the automatic cleanup process is done to make sure any non-persistent data is properly removed.
**To restart a node:**
```
# using reboot
reboot
# using shutdown
shutdown -r now
```
## Cleaning a Node Manually
When an unreachable node is removed from the cluster, the automatic cleanup process can't be triggered. Follow the steps below to manually clean the node.
>**Warning:** The commands listed below will remove data from the node. Make sure you have created a backup of files you want to keep before executing any of the commands as data will be lost.
## Imported Cluster Nodes
For imported clusters, the process for removing Rancher from its nodes is a little different. You have the option of simply deleting the cluster in the Rancher UI, or you can run a script that removes Rancher components from the nodes. Both options make the same deletions.
{{% tabs %}}
{{% tab "By UI / API" %}}
>**Warning:** This process will remove data from your nodes. Make sure you have created a backup of files you want to keep before executing the command, as data will be lost.
After you initiate the removal of an [imported cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#import-existing-cluster) using the Rancher UI (or API), the following events occur.
1. Rancher creates a `serviceAccount` that it uses to remove the cluster. This account is assigned the [clusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) and [clusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) permissions, which are required to remove the cluster.
1. Using the `serviceAccount`, Rancher schedules and runs a [job](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) that cleans the Rancher and Kubernetes components off of the node. This job also references the `serviceAccount` and its roles as dependencies, so the job deletes them before its completion.
1. Rancher is removed from the cluster nodes. However, the cluster persists, running the native version of Kubernetes.
**Result:** All components listed for imported clusters in [What Gets Removed?](#what-gets-removed) are deleted.
{{% /tab %}}
{{% tab "By Script" %}}
Rather than cleaning imported cluster nodes using the Rancher UI, you can run a script instead.
>**Prerequisite:**
>
>Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
1. Open a web browser, navigate to [GitHub](https://github.com/rancher/rancher/blob/master/cleanup/user-cluster.sh), and download `user-cluster.sh`.
1. Make the script executable by running the following command from the same directory as `user-cluster.sh`:
```
chmod +x user-cluster.sh
```
1. **Air Gap Users Only:** Open `user-cluster.sh` and replace `yaml_url` with the URL in `user-cluster.yml`.
If you aren't an air gap user, skip this step.
1. From the same directory, run the script:
>**Tip:**
>
>Add the `-dry-run` flag to preview the script's outcome without making changes.
```
./user-cluster.sh rancher/agent:latest
```
**Result:** The script runs. All components listed for imported clusters in [What Gets Removed?](#what-gets-removed) are deleted.
{{% /tab %}}
{{% /tabs %}}
### Docker Containers, Images, and Volumes
Based on the role you assigned to the node, it runs containers for Kubernetes components, overlay networking, DNS, the ingress controller, and the Rancher agent, as well as any pods you created that were scheduled to the node. Remove these containers, along with their images and volumes, as follows.
**To clean all Docker containers, images and volumes:**
```
docker rm -f $(docker ps -qa)
docker rmi -f $(docker images -q)
docker volume rm $(docker volume ls -q)
```
### Mounts
Kubernetes components and secrets leave behind mounts on the system that need to be unmounted.
Mounts |
--------|
`/var/lib/kubelet/pods/XXX` (miscellaneous mounts) |
`/var/lib/kubelet` |
`/var/lib/rancher` |
**To unmount all mounts:**
```
for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher; do umount $mount; done
```
### Directories and Files
The following directories are used when adding a node to a cluster, and should be removed. You can remove a directory using `rm -rf /directory_name`.
>**Note:** Depending on the role you assigned to the node, some of the directories will or won't be present on the node.
Directories |
--------|
`/etc/ceph` |
`/etc/cni` |
`/etc/kubernetes` |
`/opt/cni` |
`/opt/rke` |
`/run/secrets/kubernetes.io` |
`/run/calico` |
`/run/flannel` |
`/var/lib/calico` |
`/var/lib/etcd` |
`/var/lib/cni` |
`/var/lib/kubelet` |
`/var/lib/rancher/rke/log` |
`/var/log/containers` |
`/var/log/pods` |
`/var/run/calico` |
**To clean the directories:**
```
rm -rf /etc/ceph \
/etc/cni \
/etc/kubernetes \
/opt/cni \
/opt/rke \
/run/secrets/kubernetes.io \
/run/calico \
/run/flannel \
/var/lib/calico \
/var/lib/etcd \
/var/lib/cni \
/var/lib/kubelet \
/var/lib/rancher/rke/log \
/var/log/containers \
/var/log/pods \
/var/run/calico
```
### Network Interfaces and Iptables
The remaining two components that are changed/configured are (virtual) network interfaces and iptables rules. Both are non-persistent, meaning they are cleared when the node restarts. Restarting the node is therefore the recommended cleanup method.
**To restart a node:**
```
# using reboot
reboot
# using shutdown
shutdown -r now
```
If you want to know more about (virtual) network interfaces or iptables rules, see the specific subjects below.
### Network Interfaces
>**Note:** Depending on the network provider configured for the cluster the node was part of, some of the interfaces will or won't be present on the node.
Interfaces |
--------|
`flannel.1` |
`cni0` |
`tunl0` |
`caliXXXXXXXXXXX` (random interface names) |
`vethXXXXXXXX` (random interface names) |
**To list all interfaces:**
```
# Using ip
ip address show
# Using ifconfig
ifconfig -a
```
**To remove an interface:**
```
ip link delete interface_name
```
### Iptables
>**Note:** Depending on the network provider configured for the cluster the node was part of, some of the chains will or won't be present on the node.
Iptables rules are used to route traffic to and from containers. The created rules are not persistent, so restarting the node restores iptables to its original state.
Chains |
--------|
`cali-failsafe-in` |
`cali-failsafe-out` |
`cali-fip-dnat` |
`cali-fip-snat` |
`cali-from-hep-forward` |
`cali-from-host-endpoint` |
`cali-from-wl-dispatch` |
`cali-fw-caliXXXXXXXXXXX` (random chain names) |
`cali-nat-outgoing` |
`cali-pri-kns.NAMESPACE` (chain per namespace) |
`cali-pro-kns.NAMESPACE` (chain per namespace) |
`cali-to-hep-forward` |
`cali-to-host-endpoint` |
`cali-to-wl-dispatch` |
`cali-tw-caliXXXXXXXXXXX` (random chain names) |
`cali-wl-to-host` |
`KUBE-EXTERNAL-SERVICES` |
`KUBE-FIREWALL` |
`KUBE-MARK-DROP` |
`KUBE-MARK-MASQ` |
`KUBE-NODEPORTS` |
`KUBE-SEP-XXXXXXXXXXXXXXXX` (random chain names) |
`KUBE-SERVICES` |
`KUBE-SVC-XXXXXXXXXXXXXXXX` (random chain names) |
**To list all iptables rules:**
```
iptables -L -t nat
iptables -L -t mangle
iptables -L
```
@@ -88,6 +88,9 @@ After you've either enabled the built-in catalogs or added your own custom catal
* For native Helm charts (i.e., charts from the **Helm Stable** or **Helm Incubator** catalogs), answers are provided as key value pairs in the **Answers** section.
* Keys and values are available within **Detailed Descriptions**.
* When entering answers, you must format them using the syntax rules found in [Using Helm: The format and limitations of --set](https://github.com/helm/helm/blob/master/docs/using_helm.md#the-format-and-limitations-of---set), as Rancher passes them as `--set` flags to Helm.
For example, when entering an answer that includes two values separated by a comma (i.e., `abc, bcd`), wrap the values with double quotes (i.e., `"abc, bcd"`).
7. Review the files in **Preview**. When you're satisfied, click **Launch**.
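The quoting rule above mirrors Helm's own `--set` escaping. A minimal sketch with a hypothetical chart and key name:
```
# Unescaped, Helm treats the comma as a separator between key=value pairs,
# so this fails to parse as a single answer:
helm install stable/example --set servers=abc,bcd
# Wrapping the answer in double quotes corresponds to escaping the comma,
# keeping the whole string as one value:
helm install stable/example --set servers=abc\,bcd
```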
@@ -8,25 +8,25 @@ aliases:
## Custom Nodes
Use Rancher to create a Kubernetes cluster on your on-premise bare metal servers. This option creates a cluster using a combination of [Docker Machine](https://docs.docker.com/machine/) and RKE, which is Rancher's own lightweight Kubernetes installer. In addition to bare metal servers, RKE can also create clusters on _any_ IaaS provider by integrating with node drivers.
To use this option you'll need access to servers you intend to use as your Kubernetes cluster. Provision each server according to Rancher [requirements](#requirements), which includes some hardware specifications and Docker. After you install Docker on each server, run the command provided in the Rancher UI to turn each server into a Kubernetes node.
## Objectives for Creating Cluster with Custom Nodes

>**Want to use Windows hosts as Kubernetes workers?**
>
>See [Configuring Custom Clusters for Windows]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/windows-clusters/) before you start.

<!-- TOC -->
- [1. Provision a Linux Host](#1-provision-a-linux-host)
- [2. Create the Custom Cluster](#2-create-the-custom-cluster)
- [3. Amazon Only: Tag Resources](#3-amazon-only-tag-resources)
<!-- /TOC -->

## 1. Provision a Linux Host
Begin creation of a custom cluster by provisioning a Linux host. Your host can be:
@@ -44,7 +44,7 @@ Provision the host according to the requirements below.
Each node in your cluster must meet our [Requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements).
## 2. Create the Custom Cluster
Use {{< product >}} to clone your Linux host and configure the clones as Kubernetes nodes.
@@ -58,15 +58,20 @@ Use {{< product >}} to clone your Linux host and configure them as Kubernetes no
5. {{< step_create-cluster_cluster-options >}}
>**Using Windows nodes as Kubernetes workers?**
>
>- See [Enable the Windows Support Option]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/windows-clusters/#enable-the-windows-support-option).
>- The only Network Provider available for clusters with Windows support is Flannel. See [Networking Option]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/windows-clusters/#networking-option).

6. <a id="step-6"></a>Click **Next**.
7. From **Node Role**, choose the roles that you want filled by a cluster node.
>**Notes:**
>
>- Using Windows nodes as Kubernetes workers? See [Node Configuration]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/windows-clusters/#node-configuration).
>- Bare-Metal Server Reminder: If you plan on dedicating bare-metal servers to each role, you must provision a bare-metal server for each role (i.e., provision multiple bare-metal servers).
8. <a id="step-8"></a>**Optional**: Add **Labels** to your cluster nodes to help schedule workloads later.
[Kubernetes Documentation: Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
@@ -80,7 +85,7 @@ Use {{< product >}} to clone your Linux host and configure them as Kubernetes no
{{< result_create-cluster >}}
## 3. Amazon Only: Tag Resources
If you have configured your cluster to use Amazon as **Cloud Provider**, tag your AWS resources with a cluster ID.
@@ -0,0 +1,179 @@
---
title: Configuring Custom Clusters for Windows (Experimental)
weight: 2240
---
_Available as of v2.1.0_
>**Important:**
>
>Support for Windows nodes is currently an experimental feature and is not yet officially supported in Rancher. Therefore, we do not recommend using Windows nodes in a production environment.
When provisioning a [custom cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/custom-clusters/) using Rancher, you can use a mix of Linux and Windows hosts as your cluster nodes.
This guide walks you through creating a custom cluster that includes three nodes: a Linux node, which serves as a Kubernetes control plane node; another Linux node, which serves as a Kubernetes worker used to support Ingress for the cluster; and a Windows node, which is assigned the Kubernetes worker role and runs your Windows containers.
>**Notes:**
>
>- For a summary of Kubernetes features supported in Windows, see [Using Windows Server Containers in Kubernetes](https://kubernetes.io/docs/getting-started-guides/windows/#supported-features).
>- Windows containers must run on Windows Server 1803 nodes. Windows Server 1709 and earlier versions do not support Kubernetes properly.
>- Containers built for Windows Server 1709 or earlier do not run on Windows Server 1803. You must build containers on Windows Server 1803 to run these containers on Windows Server 1803.
## Objectives for Creating Cluster with Windows Support
When setting up a custom cluster with support for Windows nodes and containers, complete the series of tasks below.
<!-- TOC -->
- [1. Provision Hosts](#1-provision-hosts)
- [2. Cloud-hosted VM Networking Configuration](#2-cloud-hosted-vm-networking-configuration)
- [3. Create the Custom Cluster](#3-create-the-custom-cluster)
- [4. Add Linux Host for Ingress Support](#4-add-linux-host-for-ingress-support)
- [5. Adding Windows Workers](#5-adding-windows-workers)
- [6. Cloud-hosted VM Routes Configuration](#6-cloud-hosted-vm-routes-configuration)
- [Troubleshooting](#troubleshooting)
<!-- /TOC -->
## 1. Provision Hosts
To begin provisioning a custom cluster with Windows support, prepare your host servers. Provision three nodes according to our [requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/)—two Linux, one Windows. Your hosts can be:
- Cloud-hosted VMs
- VMs from virtualization clusters
- Bare-metal servers
The table below lists the [Kubernetes roles]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#kubernetes-cluster-node-components) you'll assign to each host, although you won't enable these roles until further along in the configuration process—we're just informing you of each node's purpose. The first node, a Linux host, is primarily responsible for managing the Kubernetes control plane, although, in this use case, we're installing all three roles on this node. Node 2 is also a Linux worker, which is responsible for Ingress support. Finally, the third node is your Windows worker, which will run your Windows applications.
Node | Operating System | Future Cluster Role(s)
--------|------------------|------
Node 1 | Linux (Ubuntu Server 16.04 recommended) | [Control Plane]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#control-plane-nodes), [etcd]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#etcd), [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes)
Node 2 | Linux (Ubuntu Server 16.04 recommended) | [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes) (This node is used for Ingress support)
Node 3 | Windows (*Windows Server 1803 required*) | [Worker]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#worker-nodes)
### Requirements
- You can view node requirements for Linux and Windows nodes in the [installation section]({{< baseurl >}}/rancher/v2.x/en/installation/requirements/).
- All nodes in a virtualization cluster or a bare metal cluster must be connected using a layer 2 network.
- To support [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/), your cluster must include at least one Linux node dedicated to the worker role.
- Although we recommend the three node architecture listed in the table above, you can add additional Linux and Windows workers to scale up your cluster for redundancy.
## 2. Cloud-hosted VM Networking Configuration
>**Note:** This step only applies to nodes hosted on cloud-hosted virtual machines. If you're using virtualization clusters or bare-metal servers, skip ahead to [Create the Custom Cluster](#3-create-the-custom-cluster).
If you're hosting your nodes on any of the cloud services listed below, you must disable the private IP address checks for both your Linux and Windows hosts on startup. To disable this check for each node, follow the directions provided by each service below.
Service | Directions to disable private IP address checks
--------|------------------------------------------------
Amazon EC2 | [Disabling Source/Destination Checks](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html#EIP_Disable_SrcDestCheck)
Google GCE | [Enabling IP Forwarding for Instances](https://cloud.google.com/vpc/docs/using-routes#canipforward)
Azure VM | [Enable or Disable IP Forwarding](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-network-interface#enable-or-disable-ip-forwarding)
## 3. Create the Custom Cluster
To create a custom cluster that supports Windows nodes, follow the instructions in [Creating a Cluster with Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster), starting from [2. Create the Custom Cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster). While completing the linked instructions, look for steps that require special actions for Windows nodes, which are flagged with a note. These notes link back here, to the special Windows instructions listed in the subheadings below.
### Enable the Windows Support Option
While choosing **Cluster Options**, set **Windows Support (Experimental)** to **Enabled**.
![Enable Windows Support]({{< baseurl >}}/img/rancher/enable-windows-support.png)
After you select this option, resume [Creating a Cluster with Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster) from [step 6]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#step-6).
### Networking Option
When choosing a network provider for a cluster that supports Windows, the only option available is Flannel, as [host-gw](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw) is needed for IP routing.
![Flannel]({{< baseurl >}}/img/rancher/flannel.png)
If your nodes are hosted by a cloud provider and you want automation support such as load balancers or persistent storage devices, see [Selecting Cloud Providers]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers) for configuration info.
### Node Configuration
The first node in your cluster should be a Linux host that fills the Control Plane role. This role must be fulfilled before you can add Windows hosts to your cluster. At minimum, the node must have this role enabled, but we recommend enabling all three. The following table lists our recommended settings (we'll provide the recommended settings for nodes 2 and 3 later).
Option | Setting
-------|--------
Node Operating System | Linux
Node Roles | etcd <br/> Control Plane <br/> Worker
![Recommended Linux Control Plane Configuration]({{< baseurl >}}/img/rancher/linux-control-plane.png)
When you're done with these configurations, resume [Creating a Cluster with Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#2-create-the-custom-cluster) from [step 8]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#step-8).
## 4. Add Linux Host for Ingress Support
After the initial provisioning of your custom cluster, your cluster only has a single Linux host. Add another Linux host, which will be used to support Ingress for your cluster.
1. Using the context menu, open the custom cluster you created in [3. Create the Custom Cluster](#3-create-the-custom-cluster).
1. From the main menu, select **Nodes**.
1. Click **Edit Cluster**.
1. Scroll down to **Node Operating System**. Choose **Linux**.
1. Select the **Worker** role.
1. Copy the command displayed on screen to your clipboard.
1. Log in to your Linux host using a remote Terminal connection. Run the command copied to your clipboard.
1. From **Rancher**, click **Save**.
**Result:** The worker role is installed on your Linux host, and the node registers with Rancher.
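The registration command you copy in these steps looks roughly like the following. The image tag, server URL, token, and checksum are placeholders, so always run the exact command shown in the Rancher UI:
```
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.1.0 \
  --server https://rancher.example.com \
  --token <REGISTRATION_TOKEN> --ca-checksum <CA_CHECKSUM> --worker
```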
## 5. Adding Windows Workers
You can add Windows hosts to a custom cluster by editing the cluster and choosing the **Windows** option.
1. From the main menu, select **Nodes**.
1. Click **Edit Cluster**.
1. Scroll down to **Node Operating System**. Choose **Windows**.
1. Select the **Worker** role.
1. Copy the command displayed on screen to your clipboard.
1. Log in to your Windows host using your preferred tool, such as [Microsoft Remote Desktop](https://docs.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients). Run the command copied to your clipboard in the **Command Prompt (CMD)**.
1. From Rancher, click **Save**.
1. **Optional:** Repeat these instructions if you want to add more Windows nodes to your cluster.
**Result:** The worker role is installed on your Windows host, and the node registers with Rancher.
## 6. Cloud-hosted VM Routes Configuration
In Windows clusters, containers communicate with each other using the `host-gw` mode of Flannel. In `host-gw` mode, all containers on the same node belong to a private subnet, and traffic routes from a subnet on one node to a subnet on another node through the host network.
- When worker nodes are provisioned on AWS, virtualization clusters, or bare metal servers, make sure they belong to the same layer 2 subnet. If the nodes don't belong to the same layer 2 subnet, `host-gw` networking will not work. Please contact [Rancher support](https://rancher.com/support/) if your worker nodes on AWS, virtualization clusters, or bare metal servers don't belong to the same layer 2 network.
- When worker nodes are provisioned on GCE or Azure, they are not on the same layer 2 subnet. Nodes on GCE and Azure belong to a routable layer 3 network. Follow the instructions below to configure GCE and Azure so that the cloud network knows how to route the host subnets on each node.
To configure host subnet routing on GCE or Azure, first run the following command to find out the host subnets on each worker node:
```bash
kubectl get nodes -o custom-columns=nodeName:.metadata.name,nodeIP:status.addresses[0].address,routeDestination:.spec.podCIDR
```
Then follow the instructions for each cloud provider to configure routing rules for each node:
Service | Instructions
--------|-------------
Google GCE | For GCE, add a static route for each node: [Adding a Static Route](https://cloud.google.com/vpc/docs/using-routes#addingroute).
Azure VM | For Azure, create a routing table: [Custom Routes: User-defined](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview#user-defined).
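For example, on GCE you could add one static route per worker node, filling in the `routeDestination` and `nodeIP` values from the `kubectl` output above. The route name, CIDR, and IP below are illustrative:
```
gcloud compute routes create rancher-win-node-1 \
  --destination-range=10.42.1.0/24 \
  --next-hop-address=10.128.0.2 \
  --network=default
```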
@@ -20,6 +20,24 @@ New password for default admin user (user-xxxxx):
<new_password>
```
### I deleted/deactivated the last admin, how can I fix it?
Single node install:
```
$ docker exec -ti <container_id> ensure-default-admin
New default admin user (user-xxxxx)
New password for default admin user (user-xxxxx):
<new_password>
```
High Availability install:
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG exec -n cattle-system $(kubectl --kubeconfig $KUBECONFIG get pods -n cattle-system -o json | jq -r '.items[] | select(.spec.containers[].name=="cattle-server") | .metadata.name') -- ensure-default-admin
New password for default admin user (user-xxxxx):
<new_password>
```
### How can I enable debug logging?
* Single node install
@@ -1,98 +1,30 @@
---
title: Preparing for Air Gap Install
title: Air Gap Install
weight: 300
---
Rancher supports installing from a private registry. In every [release](https://github.com/rancher/rancher/releases), we provide you with the needed Docker images and scripts to mirror those images to your own registry. The Docker images are used when nodes are added to a cluster, or when you enable features like pipelines or logging.
In environments where security is high priority, you can set up Rancher in an air gap configuration. Air gap installs are more secure than standard single-node or HA deployments because the network that runs Rancher is disconnected from the Internet, reducing your security surface area.
>**Prerequisite:** It is assumed you either have your own private registry or other means of distributing Docker images to your machine. If you need help with creating a private registry, please refer to the [Docker documentation for private registries](https://docs.docker.com/registry/).
## Prerequisites
>**Note:** In Rancher v2.0.x, registries with authentication are not supported for installing from a private registry. The Docker images can only be pulled from a registry without authentication enabled. This limitation only applies to Docker images.
- Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machine. If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/).
## Release Files
For each Rancher [release](https://github.com/rancher/rancher/releases), we provide the Docker images and scripts needed to mirror these images to your own registry. The Docker images are used when installing Rancher in a HA setup, when provisioning a cluster where Rancher is launching Kubernetes, or when you enable features like pipelines or logging.
* **rancher-images.txt**: Contains all images needed for that release.
* **rancher-save-images.sh**: This script will pull all needed images from DockerHub, and save all of the images as a compressed file called `rancher-images.tar.gz`. This file can be transferred to your on-premise host that can access your private registry.
* **rancher-load-images.sh**: This script will load images from rancher-images.tar.gz and push them to your private registry. You have to supply the hostname of your private registry as first argument to the script.<br/>`rancher-load-images.sh registry.yourdomain.com:5000`
- **Installation Option:** Before beginning your air gap installation, choose whether you want a [single-node install]({{< baseurl >}}/rancher/v2.x/en/installation/single-node) or a [high availability install]({{< baseurl >}}/rancher/v2.x/en/installation/ha). View your chosen configuration's introduction notes along with Rancher's [node requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements).
## Making the Rancher Images Available
## Caveats
We will cover two scenarios:
For any Rancher version prior to v2.1.0, registries with authentication are not supported when installing Rancher in HA or provisioning clusters. After clusters are provisioned, registries with authentication can be used within those Kubernetes clusters.
* **Scenario 1**: You have a node that can access DockerHub to pull and save the images, and a separate node(s) that access your private registry to push the images.
* **Scenario 2**: You have node(s) that can access both DockerHub and your private registry.
As of v2.1.0, registries with authentication work for installing Rancher as well as provisioning clusters.
### Scenario 1: A Node that Can Access DockerHub, Separate Node(s) That Can Access the Private Registry
## Air Gap Installation Outline
![Scenario1]({{< baseurl >}}/img/rancher/airgap/privateregistry.svg)
While installing Rancher in an air gap configuration, you'll complete several different tasks.
1. Browse to the release page of your version of Rancher (e.g. `https://github.com/rancher/rancher/releases/tag/v2.0.0`) and download `rancher-save-images.sh` and `rancher-load-images.sh`.
2. Transfer and run `rancher-save-images.sh` on the host that can access DockerHub (see the sketch after this list). This will require at least 20GB of disk space.
3. Transfer the output file from step 2 (`rancher-images.tar.gz`) to the host that can access the private registry.
4. Transfer and run `rancher-load-images.sh` on the host that can access the private registry. It should be run in the same directory as `rancher-images.tar.gz`.
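For example, a sketch of the two script invocations from steps 2 and 4, assuming the scripts are executable and sit in the working directory:
```
# on the host with DockerHub access
./rancher-save-images.sh

# on the host with private registry access,
# in the same directory as rancher-images.tar.gz
./rancher-load-images.sh registry.yourdomain.com:5000
```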
### Scenario 2: Node(s) That Can Access Both DockerHub and the Private Registry
![Scenario2]({{< baseurl >}}/img/rancher/airgap/privateregistrypushpull.svg)
1. Browse to the release page of your version of Rancher (e.g. `https://github.com/rancher/rancher/releases/tag/v2.0.0`) and download `rancher-images.txt`.
2. Pull all the images present in `rancher-images.txt`, re-tag each image with the location of your registry, and push the image to the registry. This will require at least 20GB of disk space. See an example script below:
```
#!/bin/sh
# Fetch the image list for this release, then mirror each image to the private registry.
IMAGES=$(curl -s -L https://github.com/rancher/rancher/releases/download/v2.0.0/rancher-images.txt)
for IMAGE in $IMAGES; do
  # Retry the pull until the image exists locally.
  until docker inspect $IMAGE > /dev/null 2>&1; do
    docker pull $IMAGE
  done
  docker tag $IMAGE <registry.yourdomain.com:port>/$IMAGE
  docker push <registry.yourdomain.com:port>/$IMAGE
done
```
## Completing the Rancher Installation
After your private registry is set up on all node(s) for your Rancher installation, complete your Rancher installation.
### Single Node Install
Complete installation of Rancher using the instructions in [Single Node Install]({{< baseurl >}}/rancher/v2.x/en/installation/single-node-install/).
>**Note:**
> When completing [Single Node Install]({{< baseurl >}}/rancher/v2.x/en/installation/single-node-install/), prepend your private registry URL to the image when running the `docker run` command.
>
> Example:
> ```
> docker run -d --restart=unless-stopped \
> -p 80:80 -p 443:443 \
> <registry.yourdomain.com:port>/rancher/rancher:latest
> ```
## Configuring Rancher to Use the Private Registry
Rancher needs to be configured to use the private registry as the source for the images it needs.
1. Go into the **Settings** view.
![Settings]({{< baseurl >}}/img/rancher/airgap/settings.png)
2. Look for the setting called `system-default-registry` and choose **Edit**.
![Edit]({{< baseurl >}}/img/rancher/airgap/edit-system-default-registry.png)
3. Change the value to your registry (e.g. `registry.yourdomain.com:port`). Do not prefix the registry with `http://` or `https://`.
![Save]({{< baseurl >}}/img/rancher/airgap/enter-system-default-registry.png)
- [1—Preparing the Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/prepare-private-reg/)
- [2—Installing Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/)
- [3—Configuring Rancher to default to the Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/private-registry/)
>**Note:** If you want to configure the setting when starting the rancher/rancher container, you can use the environment variable `CATTLE_SYSTEM_DEFAULT_REGISTRY`.
>
> Example:
> ```
> docker run -d --restart=unless-stopped \
>   -p 80:80 -p 443:443 \
>   -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<registry.yourdomain.com:port> \
>   <registry.yourdomain.com:port>/rancher/rancher:v2.0.0
> ```
### [Next: Prepare the Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/prepare-private-reg/)
@@ -0,0 +1,22 @@
---
title: 3—Configuring Rancher for the Private Registry
weight: 75
---
Rancher needs to be configured to use the private registry in order to provision any [Rancher launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) or [Rancher tools]({{< baseurl >}}/rancher/v2.x/en/tools/).
1. Log into Rancher and configure the default admin password.
1. Go into the **Settings** view.
![Settings]({{< baseurl >}}/img/rancher/airgap/settings.png)
1. Look for the setting called `system-default-registry` and choose **Edit**.
![Edit]({{< baseurl >}}/img/rancher/airgap/edit-system-default-registry.png)
1. Change the value to your registry (e.g. `registry.yourdomain.com:port`). Do not prefix the registry with `http://` or `https://`.
![Save]({{< baseurl >}}/img/rancher/airgap/enter-system-default-registry.png)
>**Note:** If you want to configure the setting when starting the rancher/rancher container, you can use the environment variable `CATTLE_SYSTEM_DEFAULT_REGISTRY`.
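For example, a minimal sketch based on the single node install command (the placeholders are yours to fill in):
```plain
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```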
@@ -0,0 +1,156 @@
---
title: 2—Installing Rancher
weight: 50
---
After your private registry is set up for your Rancher installation, complete your installation. Follow one of the procedures below based on the configuration in which you want to run Rancher.
{{% tabs %}}
{{% tab "HA Install" %}}
This guide will take you through the basic process of installing Rancher Server HA in an air gap environment. Please see the [High Availability Install]({{< baseurl >}}/rancher/v2.x/en/installation/ha) guide for additional options and troubleshooting.
## RKE
On a system that has access (22/tcp and 6443/tcp) to the nodes you have built to host the Rancher server cluster, use the sample below to create the `rancher-cluster.yml` file. Define your nodes and fill in the details for the private registry.
See [Install Kubernetes with RKE]({{< baseurl >}}/rancher/v2.x/en/installation/ha/kubernetes-rke/) for more details on the options available.
Replace values in the code sample according to the table below.
| Directive Replacement | Description |
| ----------------------- | --------------------------------------------------------------------- |
| `address` | The IP address for each of your air gap nodes outside of the cluster. |
| `internal_address` | The IP address for each of your air gap nodes within the cluster. |
| `url` | The URL for your private registry. |
```yaml
nodes:
- address: 18.222.121.187 # air gap node external IP
internal_address: 172.31.7.22 # air gap node internal IP
user: rancher
role: [ "controlplane", "etcd", "worker" ]
ssh_key_file: /home/user/.ssh/id_rsa
- address: 18.220.193.254 # air gap node external IP
internal_address: 172.31.13.132 # air gap node internal IP
user: rancher
role: [ "controlplane", "etcd", "worker" ]
ssh_key_file: /home/user/.ssh/id_rsa
- address: 13.59.83.89 # air gap node external IP
internal_address: 172.31.3.216 # air gap node internal IP
user: rancher
role: [ "controlplane", "etcd", "worker" ]
ssh_key_file: /home/user/.ssh/id_rsa
private_registries:
- url: <REGISTRY.YOURDOMAIN.COM:PORT> # private registry url
user: rancher
password: "*********"
is_default: true
```
### Run RKE
```plain
rke up --config ./rancher-cluster.yml
```
### Testing the Cluster
Follow the rest of the [Install Kubernetes with RKE]({{< baseurl >}}/rancher/v2.x/en/installation/ha/kubernetes-rke/) guide to test your cluster and verify the health of your pods before continuing.
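As a quick sanity check, a minimal sketch using the kubeconfig that RKE writes out:
```plain
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
# all three nodes should report Ready
kubectl get nodes
# all pods should be Running or Completed
kubectl get pods --all-namespaces
```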
## Helm
Instead of installing the `tiller` agent on the cluster, render the installs on a system that has internet access and copy the resulting manifests to a system that has access to the Rancher server cluster.
### Initialize Helm Locally
Skip the [Initialize Helm (Install Tiller)]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/#helm-init) step and instead initialize `helm` locally on a system that has internet access.
```plain
helm init -c
```
## Installing Rancher
If you set up a default private registry with credentials in RKE, the Kubernetes `kubelet` will have the credentials for your private registry configured.
### Render Templates
Fetch and render the `helm` charts on a system that has internet access.
#### Cert-Manager
If you are installing Rancher with Rancher self-signed certificates, you will need to install `cert-manager` on your cluster. If you are installing your own certificates, you may skip this section.
Fetch the latest `stable/cert-manager` chart. This will pull down the chart and save it in the current directory as a `.tgz` file.
```plain
helm fetch stable/cert-manager
```
Render the template with the options you would use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.
```plain
helm template ./cert-manager-<version>.tgz --output-dir . \
--name cert-manager --namespace kube-system \
--set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller
```
#### Rancher
Install the Rancher chart repo.
```plain
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
```
Fetch the latest `rancher-stable/rancher` chart. This will pull down the chart and save it in the current directory as a `.tgz` file.
```plain
helm fetch rancher-stable/rancher
```
Render the template with the options you would use to install the chart. See [Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/) for details on the various options. Remember to set the `rancherImage` option to pull the image from your private registry. This will create a `rancher` directory with the Kubernetes manifest files.
```plain
helm template ./rancher-<version>.tgz --output-dir . \
--name rancher --namespace cattle-system \
--set hostname=<RANCHER.YOURDOMAIN.COM> \
--set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher
```
### Copy Manifests
Copy the rendered manifest directories to a system that has access to the Rancher server cluster.
### Apply the Manifests
Use `kubectl` to create namespaces and apply the rendered manifests.
```plain
kubectl -n kube-system apply -R -f ./cert-manager
kubectl create namespace cattle-system
kubectl -n cattle-system apply -R -f ./rancher
```
Make sure you follow any additional instructions required by SSL install options. See [Choose your SSL Configuration]({{< baseurl >}}rancher/v2.x/en/installation/ha/helm-rancher/#choose-your-ssl-configuration) for details.
{{% /tab %}}
{{% tab "Single Node" %}}
To deploy Rancher on a single node in an air gap environment, follow the instructions in the standard [Single Node Install]({{< baseurl >}}/rancher/v2.x/en/installation/single-node-install/). Parts of the install where you must complete a special action for air gap are flagged with a substitute step, which is listed in the subheading below.
### Add Private Registry URL to Run Command
When you get to the section [Choose an SSL Option and Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/single-node/#2-choose-an-ssl-option-and-install-rancher), regardless of which install option you choose, prepend your Rancher image tag with your private registry URL (`<REGISTRY.YOURDOMAIN.COM:PORT>`), as shown in the example below.
```plain
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```
{{% /tab %}}
{{% /tabs %}}
### [Next: Configuring Rancher for the Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/config-rancher-for-private-reg/)
@@ -0,0 +1,81 @@
---
title: 1—Preparing the Private Registry
weight: 25
---
For the first part of your air gap install, you'll prepare your private registry so that you can install and start using Rancher.
<a id="step-1"></a>
## Image Sources
Collect the list of images required for Rancher. These steps will require internet access.
{{% tabs %}}
{{% tab "HA Install" %}}
The Rancher HA install uses images from 3 sources. Combine the 3 sources into a file named `rancher-images.txt`.
* **Rancher** - Images required by Rancher. Download the `rancher-images.txt` file from [Rancher releases](https://github.com/rancher/rancher/releases) page for the version of Rancher you are installing.
* **RKE** - Images required by `rke` to install Kubernetes. Run `rke` and add the images to the end of `rancher-images.txt`.
```plain
rke config --system-images >> ./rancher-images.txt
```
* **Cert-Manager** - (Optional) If you choose to install with Rancher Self-Signed TLS certificates, you will need the [`cert-manager`](https://github.com/helm/charts/tree/master/stable/cert-manager) image. You may skip this image if you are using your own certificates.
Fetch the latest `cert-manager` Helm chart and parse the template for image details.
```plain
helm fetch stable/cert-manager
helm template ./cert-manager-<version>.tgz | grep -oP '(?<=image: ").*(?=")' >> ./rancher-images.txt
```
Sort and de-duplicate the image list to remove any overlap between the sources.
```plain
sort -u rancher-images.txt -o rancher-images.txt
```
{{% /tab %}}
{{% tab "Single Node" %}}
All the required images for a Single Node install can be found in the `rancher-images.txt` included with the release of Rancher you are installing.
Download the `rancher-images.txt` from the [Rancher releases](https://github.com/rancher/rancher/releases) page.
{{% /tab %}}
{{% /tabs %}}
## Publish Images
Once you have the `rancher-images.txt` file populated, publish the images from the list to your private registry.
> **NOTE** This may require up to 20GB of disk space.
1. Browse to the [Rancher releases page](https://github.com/rancher/rancher/releases) and download the following tools for saving and publishing the images.
| Release File | Description |
| --- | --- |
| `rancher-save-images.sh` | This script pulls all the images in the `rancher-images.txt` from various public registries and saves all of the images as `rancher-images.tar.gz`. |
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |
1. From a system with internet access, use the `rancher-save-images.sh` script with the `rancher-images.txt` image list to create a tarball of all the required images.
```plain
./rancher-save-images.sh --image-list ./rancher-images.txt
```
1. Copy the `rancher-load-images.sh`, `rancher-images.txt`, and `rancher-images.tar.gz` files to a system that can reach your private registry.
Log into your registry if required.
```plain
docker login <REGISTRY.YOURDOMAIN.COM:PORT>
```
Use `rancher-load-images.sh` to extract, tag and push the images to your private registry.
```plain
./rancher-load-images.sh --image-list ./rancher-images.txt --registry <REGISTRY.YOURDOMAIN.COM:PORT>
```
### [Next: Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/)
@@ -1,5 +1,5 @@
---
title: 1 - Create Nodes and Load Balancer
title: 1—Create Nodes and Load Balancer
weight: 185
---
@@ -5,7 +5,9 @@ weight: 195
Helm is the package management tool of choice for Kubernetes. Helm "charts" provide templating syntax for Kubernetes YAML manifest documents. With Helm we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at [https://helm.sh/](https://helm.sh/).
### Initialize Helm on the cluster
> **Note:** For systems without direct internet access see [Helm - Air Gap]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#helm) for install details.
### Initialize Helm on the Cluster
Helm installs the `tiller` service on your cluster to manage charts. Since RKE enables RBAC by default we will need to use `kubectl` to create a `serviceaccount` and `clusterrolebinding` so `tiller` has permission to deploy to the cluster.
@@ -13,18 +15,13 @@ Helm installs the `tiller` service on your cluster to manage charts. Since RKE e
* Create the `ClusterRoleBinding` to give the `tiller` account access to the cluster.
* Finally use `helm` to initialize the `tiller` service
```
```plain
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
--clusterrole cluster-admin \
--serviceaccount=kube-system:tiller
```
##### Helm init
`helm init` installs the `tiller` service in the `kube-system` namespace on your cluster.
```
helm init --service-account tiller
```
@@ -5,6 +5,8 @@ weight: 200
Rancher installation is now managed using the Helm package manager for Kubernetes. Use `helm` to install the prerequisite and Rancher charts.
> **Note:** For systems without direct internet access see [Installing Rancher - Air Gap]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/) for install details.
### Add the Chart Repo
Use `helm repo add` to add the Rancher chart repository.
@@ -39,7 +41,7 @@ There are three options for the source of the certificate.
2. `letsEncrypt` - Use [LetsEncrypt](https://letsencrypt.org/) to issue a cert.
3. `secret` - Configure a Kubernetes Secret with your certificate files.
<br\>
<br/>
#### (Default) Rancher Generated Certificates
@@ -47,6 +49,8 @@ The default is for Rancher to generate a CA and use the `cert-manager` to issue
The only requirement is to set the `hostname` to the DNS name you pointed at your Load Balancer.
>**Using Air Gap?** [Set the `rancherImage` option]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#install-rancher-using-private-registry) in your command, pointing toward your private registry.
```
helm install rancher-stable/rancher \
--name rancher \
@@ -60,6 +64,8 @@ Use [LetsEncrypt](https://letsencrypt.org/)'s free service to issue trusted SSL
Set `hostname`, `ingress.tls.source=letsEncrypt` and LetsEncrypt options.
>**Using Air Gap?** [Set the `rancherImage` option]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#install-rancher-using-private-registry) in your command, pointing toward your private registry.
```
helm install rancher-stable/rancher \
--name rancher \
@@ -94,7 +100,7 @@ Now that Rancher is running, see [Adding TLS Secrets]({{< baseurl >}}/rancher/v2
The Rancher chart configuration has many options for customizing the install to suit your specific environment. Here are some common advanced scenarios.
* [HTTP Proxy]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#http-proxy)
* [Private Docker Image Registry]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#private-registry)
* [Private Docker Image Registry]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#private-registry-and-air-gap-installs)
* [TLS Termination on an External Load Balancer]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#external-tls-termination)
See the [Chart Options]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/) for the full list of options.
@@ -19,17 +19,28 @@ weight: 276
| Option | Default Value | Description |
| --- | --- | --- |
| `auditLog.level` | 0 | `int` - set the [API Audit Log]({{< baseurl >}}/rancher/v2.x/en/installation/api-auditing) level. 0 is off. [0-3] |
| `debug` | false | `bool` - set debug flag on rancher server |
| `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials |
| `proxy` | "" | `string` - HTTP[S] proxy server for Rancher |
| `noProxy` | "localhost,127.0.0.1" | `string` - comma seperated list of hostnames or ip address not to use the proxy |
| `noProxy` | "localhost,127.0.0.1" | `string` - comma separated list of hostnames or ip address not to use the proxy |
| `resources` | {} | `map` - rancher pod resource requests & limits |
| `rancherImage` | "rancher/rancher" | `string` - rancher image source |
| `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag |
| `tls` | "ingress" | `string` - Where to terminate SSL. - "ingress, external"
| `tls` | "ingress" | `string` - Where to terminate SSL. - "ingress, external" |
<br/>
### API Audit Log
Enabling the [API Audit Log](https://rancher.com/docs/rancher/v2.x/en/installation/api-auditing/) will create a sidecar container in the Rancher pod. This container (`rancher-audit-log`) will stream the log to `stdout`.
You can collect this log as you would any container log. Enable the [Logging service under Rancher Tools](https://rancher.com/docs/rancher/v2.x/en/tools/logging/) for the `System` Project on the Rancher server cluster.
```
--set auditLog.level=1
```
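To spot-check that the sidecar is emitting entries, a hedged example (the `app=rancher` label is an assumption about the chart's pod labels):
```
kubectl -n cattle-system logs -l app=rancher -c rancher-audit-log
```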
### HTTP Proxy
Rancher requires internet access for some functionality (helm charts). Use `proxy` to set your proxy server.
@@ -41,34 +52,9 @@ Add your IP exceptions to the `noProxy` list. Make sure you add the Service clus
--set noProxy="127.0.0.1,localhost,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
```
### Private Registry
### Private Registry and Air Gap Installs
You can point to a private registry for the rancher image.
#### Images
Populate your private registry with Rancher images.
You can get the list of images required for rancher and worker cluster installs from the [Releases](https://github.com/rancher/rancher/releases/latest) page.
#### Create Registry Secret
Use `kubectl` to create a docker-registry secret in the `cattle-system` namespace.
```
kubectl -n cattle-system create secret docker-registry regcred \
  --docker-server="reg.example.com:5000" \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>
```
#### Registry Options
Add the `rancherImage` to point to your private registry image and `imagePullSecrets` to your install command.
```
--set rancherImage=reg.example.com:5000/rancher/rancher \
--set imagePullSecrets[0].name=regcred
```
See [Installing Rancher - Air Gap]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/) for details on installing Rancher with a private registry.
### External TLS Termination
@@ -1,15 +1,18 @@
---
title: 2 - Install Kubernetes with RKE
title: 2—Install Kubernetes with RKE
weight: 190
---
Use RKE to install Kubernetes with a high-availability etcd configuration.
Use RKE to install Kubernetes with a high availability etcd configuration.
### Create the rancher-cluster.yml file
> **Note:** For systems without direct internet access see [RKE - Air Gap]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#rke) for install details.
### Create the `rancher-cluster.yml` File
Using the sample below, create the `rancher-cluster.yml` file. Replace the IP addresses in the `nodes` list with the IP addresses or DNS names of the 3 nodes you created.
> **Note:** If your node has public and internal addresses, it is recommended to set the `internal_address:` so Kubernetes will use it for intra-cluster communication. Some services like AWS EC2 require setting the `internal_address:` if you want to use self-referencing security groups or firewalls.
```yaml
nodes:
@@ -27,7 +30,7 @@ nodes:
role: [controlplane,worker,etcd]
```
#### Common RKE nodes: options
#### Common RKE Nodes: Options
| Option | Description |
| --- | --- |
@@ -37,9 +40,7 @@ nodes:
| `ssh_key_path` | (optional) Path to SSH private key used to authenticate to the node |
| `user` | (required) A user that can run docker commands |
<br/>
#### Advanced configurations
#### Advanced Configurations
RKE has many configuration options for customizing the install to suit your specific environment.
@@ -51,7 +52,7 @@ Please see the [RKE Documentation]({{< baseurl >}}/rke/v0.1.x/en/) for the full
rke up --config ./rancher-cluster.yml
```
### Testing your cluster
### Testing Your Cluster
RKE should have created a file `kube_config_rancher-cluster.yml`. This file has the credentials for `kubectl` and `helm`.
@@ -74,7 +75,7 @@ NAME STATUS ROLES AGE VER
165.227.127.226 Ready controlplane,etcd,worker 11m v1.10.1
```
### Check the health of your cluster pods
### Check the Health of Your Cluster Pods
Check that all the required pods and containers are healthy and ready before you continue.
@@ -101,7 +102,7 @@ kube-system rke-metrics-addon-deploy-job-7ljkc 0/1 Completed
kube-system rke-network-plugin-deploy-job-6pbgj 0/1 Completed 0 30s
```
### Save your files
### Save Your Files
Save a copy of the `kube_config_rancher-cluster.yml` and `rancher-cluster.yml` files. You will need these files to maintain and upgrade your Rancher instance.
@@ -13,6 +13,7 @@ Rancher is supported on the following operating systems and their subsequent rel
* Ubuntu 16.04 (64-bit)
* Red Hat Enterprise Linux 7.5 (64-bit)
* RancherOS 1.4 (64-bit)
* Windows Server version 1803 (64-bit)
If you are using RancherOS, make sure you switch the Docker engine to a supported version using:<br>
`sudo ros engine switch docker-17.03.2-ce`
@@ -61,6 +62,7 @@ Supported Versions:
* `1.12.6`
* `1.13.1`
* `17.03.2`
* `17.06` (for Windows)
If you are using RancherOS, make sure you switch the Docker engine to a supported version using:<br>
`sudo ros engine switch docker-17.03.2-ce`
@@ -33,10 +33,13 @@ If you are installing Rancher in a development or testing environment where iden
Log into your Linux host, and then run the minimum installation command below.
>**Air Gap User?** [Add your private registry URL]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#add-private-registry-url-to-run-command) before the `rancher/rancher` image.
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
rancher/rancher:latest
{{% /accordion %}}
{{% accordion id="option-b" label="Option B-Bring Your Own Certificate: Self-Signed" %}}
In development or testing environments where your team will access your Rancher server, create a self-signed certificate for use with your install so that your team can verify they're connecting to your instance of Rancher.
@@ -52,6 +55,8 @@ After creating your certificate, run the Docker command below to install Rancher
- Replace `<CERT_DIRECTORY>` with the directory path to your certificate file.
- Replace `<FULL_CHAIN.pem>`,`<PRIVATE_KEY.pem>`, and `<CA_CERTS>` with your certificate names.
>**Air Gap User?** [Add your private registry URL]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#add-private-registry-url-to-run-command) before the `rancher/rancher` image.
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
@@ -76,6 +81,8 @@ After obtaining your certificate, run the Docker command below.
- Use the `--no-cacerts` as argument to the container to disable the default CA certificate generated by Rancher.
>**Air Gap User?** [Add your private registry URL]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#add-private-registry-url-to-run-command) before the `rancher/rancher` image.
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
@@ -97,6 +104,7 @@ For production environments, you also have the options of using [Let's Encrypt](
After you fulfill the prerequisites, you can install Rancher using a Let's Encrypt certificate by running the following command. Replace `<YOUR.DNS.NAME>` with your domain.
>**Air Gap User?** [Add your private registry URL]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#add-private-registry-url-to-run-command) before the `rancher/rancher` image.
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
@@ -119,15 +127,22 @@ After you fulfill the prerequisites, you can install Rancher using a Let's Encry
## Advanced Options
### API Auditing
### Enable API Audit Log
If you want to record all transactions with the Rancher API, enable the [API Auditing]({{< baseurl >}}/rancher/v2.x/en/installation/api-auditing) feature by adding the flags below to your install command.
The API Audit Log records all the user and system transactions made through Rancher server.
The API Audit Log writes to `/var/log/auditlog` inside the rancher container by default. Share that directory as a volume and set your `AUDIT_LEVEL` to enable the log.
See [API Audit Log]({{< baseurl >}}/rancher/v2.x/en/installation/api-auditing) for more information and options.
```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /var/log/rancher/auditlog:/var/log/auditlog \
  -e AUDIT_LEVEL=1 \
  -e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
  -e AUDIT_LOG_MAXAGE=20 \
  -e AUDIT_LOG_MAXBACKUP=20 \
  -e AUDIT_LOG_MAXSIZE=100 \
  rancher/rancher:latest
```
### Air Gap
@@ -47,6 +47,9 @@ If you elect to use a self-signed certificate to encrypt communication, you must
1. While running the Docker command to deploy Rancher, point Docker toward your CA certificate file.
>**Air Gap User?** [Add your private registry URL]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#add-private-registry-url-to-run-command) before the `rancher/rancher` image tag.
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
@@ -68,6 +71,8 @@ If you use a certificate signed by a recognized CA, installing your certificate
1. Enter the following command.
>**Air Gap User?** [Add your private registry URL]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#add-private-registry-url-to-run-command) before the `rancher/rancher` image tag.
```
docker run -d --restart=unless-stopped \
-p 80:80 -p 443:443 \
@@ -6,67 +6,138 @@ aliases:
After you launch a Kubernetes cluster in Rancher, you can manage individual nodes from the cluster's **Node** tab. Depending on the [option used]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) to provision the cluster, there are different node options available.
To manage individual nodes, browse to the cluster that you want to manage and then select **Nodes** from the main menu. You can open the options menu for a node by clicking its **Ellipsis** icon (**...**).
![Node Options]({{< baseurl >}}/img/rancher/node-edit.png)
>**Note:** If you want to manage the _cluster_ and not individual nodes, see [Editing Clusters]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/editing-clusters).
To manage individual nodes, browse to the cluster that you want to manage and then select **Nodes** from the main menu. The following sections list what node management options are available for each cluster type.
The following table lists which node options are available for each [type of cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-options) in Rancher. Click the links in the **Option** column for more detailed information about each feature.
<!-- TOC -->
| Option | [Node Pool][1] | [Custom Node][2] | [Hosted Cluster][3] | [Imported Nodes][4] | Description |
| ------------------------------------------------ | ------------------------------------------------ | ---------------- | ------------------- | ------------------- | ------------------------------------------------------------------ |
| [Cordon](#cordoning-a-node) | ✓ | ✓ | ✓ | | Marks the node as unschedulable. |
| [Drain](#draining-a-node) | ✓ | ✓ | ✓ | | Marks the node as unschedulable _and_ evicts all pods. |
| [Edit](#editing-a-node) | ✓ | ✓ | ✓ | | Enter a custom name, description, or label for a node. |
| [View API](#viewing-a-node-api) | ✓ | ✓ | ✓ | | View API data. |
| [Delete](#deleting-a-node) | ✓ | ✓ | | | Deletes defective nodes from the cluster. |
| [Download Keys](#remoting-into-a-node-pool-node) | ✓ | | | | Download SSH key for remoting into the node. |
| [Node Scaling](#scaling-nodes) | ✓ | | | | Scale the number of nodes in the node pool up or down. |
- [Nodes Provisioned by Node Pool](#nodes-provisioned-by-node-pool)
- [Nodes Provisioned with the Custom Nodes Option](#nodes-provisioned-with-the-custom-nodes-option)
- [Nodes Provisioned by Hosted Kubernetes Providers](#nodes-provisioned-by-hosted-kubernetes-providers)
- [Imported Nodes](#imported-nodes)
[1]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/
[2]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/
[3]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/
[4]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/
<!-- /TOC -->
## Cordoning a Node
_Cordoning_ a node marks it as unschedulable. This feature is useful for performing short tasks on the node during small maintenance windows, like reboots, upgrades, or decommissions. When you're done, power back on and make the node schedulable again by uncordoning it.
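If you prefer the command line, the equivalent native Kubernetes commands are:
```
kubectl cordon <NODE_NAME>     # mark the node unschedulable
kubectl uncordon <NODE_NAME>   # make the node schedulable again
```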
## Nodes Provisioned by Node Pool
## Draining a Node
_Draining_ is the process of first cordoning the node, and then evicting all its pods. This feature is useful for performing node maintenance (like kernel upgrades or hardware maintenance). It prevents new pods from deploying to the node while redistributing existing pods so that users don't experience service interruption.
- For pods with a replica set, the pod is replaced by a new pod that will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
- For pods with no replica set, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
Clusters provisioned using [one of the node pool options]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-pools) automatically maintain the node scale that's set during the initial cluster provisioning. This scale determines the number of active nodes that Rancher maintains for the cluster.
You can drain nodes that are in either a `cordoned` or `active` state. When you drain a node, it is cordoned, evaluated for the conditions it must meet to be drained, and then (if it meets those conditions) its pods are evicted.
- Mark nodes as unschedulable (i.e., **Cordon**). When a node is cordoned, no new pods are scheduled for the node, but the existing pods continue to run.
- Delete defective nodes from the cloud provider. When you delete a defective node, Rancher automatically replaces it with an identically provisioned node.
>**Note:** If you want to scale down the number of nodes, use the scaling controls rather than deleting the node.
- Scale the number of nodes in the cluster up or down.
- Enter a **Custom Name**, **Description**, or **Label** for a node.
- Download the SSH key pair for a node. You can use this key pair to remote into the node using an SSH connection from your workstation. For more instructions on how to remote into the node, see [Remoting into a Node Pool Node](#remoting-into-a-node-pool-node).
- View API Data.
However, you can override the drain conditions when you initiate the drain (see [below](#below)). You're also given an opportunity to set a grace period and timeout value.
![Drain]({{< baseurl >}}/img/rancher/node-drain.png)
<a id="below"></a>
The following list describes each drain option:
- **Even if there are pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet**
These types of pods won't get rescheduled to a new node, since they do not have a controller. Kubernetes expects you to have your own logic that handles the deletion of these pods. Kubernetes forces you to choose this option (which will delete/evict these pods) or drain won't proceed.
- **Even if there are DaemonSet-managed pods**
Similar to the option above, if you have any DaemonSets, the drain proceeds only if this option is selected. Even when this option is on, the DaemonSet pods won't be deleted, since they would immediately be replaced. On startup, Rancher currently has a few DaemonSets running by default in the system, so this option is turned on by default.
- **Even if there are pods using emptyDir**
If a pod uses emptyDir to store local data, you might not be able to safely delete it, since the data in the emptyDir will be deleted once the pod is removed from the node. Similar to the first option, Kubernetes expects the implementation to decide what to do with these pods. Choosing this option will delete these pods.
- **Grace Period**
The timeout given to each pod for cleaning up, so it has a chance to exit gracefully. For example, a pod might need to finish outstanding requests, roll back transactions, or save state to external storage. If negative, the default value specified in the pod is used.
- **Timeout**
The amount of time drain should continue to wait before giving up.
>**Kubernetes Known Issue:** Currently, the [timeout setting](https://github.com/kubernetes/kubernetes/pull/64378) is not enforced while draining a node. This issue will be corrected as of Kubernetes 1.12.
If there's any error related to user input, the node enters a `cordoned` state because the drain failed. You can either correct the input and attempt to drain the node again, or you can abort by uncordoning the node.
If the drain continues without error, the node enters a `draining` state. You'll have the option to stop the drain when the node is in this state, which will stop the drain process and change the node's state to `cordoned`.
Once drain successfully completes, the node will be in a state of `drained`. You can then power off or delete the node.
>**Want to know more about cordon and drain?** See the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node).
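For reference, the drain options above map roughly onto the native `kubectl drain` flags; a sketch with illustrative values:
```
# --force: proceed even with pods not managed by a controller
# --ignore-daemonsets: proceed even with DaemonSet-managed pods
# --delete-local-data: proceed even with pods using emptyDir
kubectl drain <NODE_NAME> --force --ignore-daemonsets \
  --delete-local-data --grace-period=60 --timeout=5m
```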
## Editing a Node
Editing a node lets you change its name, add a description of the node, or add [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/).
## Viewing a Node API
Select this option to view the node's [API endpoints]({{< baseurl >}}/rancher/v2.x/en/api/).
## Deleting a Node
Use **Delete** to remove defective nodes from the cloud provider. When you delete a defective node, Rancher automatically replaces it with an identically provisioned node.
>**Tip:** If your cluster is hosted on IaaS nodes, and you want to scale your cluster down instead of deleting a defective node, [scale down](#scaling-nodes) rather than delete.
## Scaling Nodes
For nodes hosted by an IaaS, you can scale the number of nodes in each node pool by using the scale controls. This option isn't available for other cluster types.
![Scaling Nodes]({{< baseurl >}}/img/rancher/iaas-scale-nodes.png)
## Remoting into a Node Pool Node
For [nodes hosted by an IaaS]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/), you have the option of downloading its SSH key so that you can connect to it remotely from your desktop.
### Remoting into a Node Pool Node
1. From the Node Pool cluster, select **Nodes** from the main menu.
1. Find the node that you want to remote into. Select **Ellipsis (...) > Download Keys**.
**Step Result:** A ZIP file containing files used for SSH is downloaded.
1. Extract the ZIP file to any location.
1. Open Terminal. Change your location to the extracted ZIP file.
1. Enter the following command:
```
ssh -i id_rsa root@<IP_OF_HOST>
```
## Nodes Provisioned with the Custom Nodes Option
For nodes provisioned using the [custom nodes option]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/#custom-nodes), you can use the following options from the Rancher UI:
- Mark nodes as unschedulable (i.e., **Cordon**). When a node is cordoned, no new pods are scheduled for the node, but the existing pods continue to run.
- Delete node objects from the **Nodes** list. When you delete a custom node, you still have to clean up the node itself.
- Enter a **Custom Name**, **Description**, or **Label** for a node.
- View API Data.
## Notes for Node Pool Nodes
## Nodes Provisioned by Hosted Kubernetes Providers
Clusters provisioned using [one of the node pool options]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-pools) automatically maintain the node scale that's set during the initial cluster provisioning. This scale determines the number of active nodes that Rancher maintains for the cluster.
## Notes for Nodes Provisioned by Hosted Kubernetes Providers
Options for managing nodes [hosted by a Kubernetes provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/) are somewhat limited in Rancher. Rather than using the Rancher UI to make edits such as scaling the number of nodes up or down, edit the cluster directly.
From the Rancher UI, you can:
- Mark nodes as unschedulable (i.e., **Cordon**). When a node is cordoned, no new pods are scheduled for the node, but the existing pods continue to run.
- Enter a **Custom Name**, **Description**, or **Label** for a node.
- View node API Data.
## Imported Nodes
## Notes for Imported Nodes
Although you can deploy workloads to an [imported cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/) using Rancher, you cannot manage individual cluster nodes. All management of imported cluster nodes must take place outside of Rancher.
@@ -89,7 +89,7 @@ Rancher extends Kubernetes to allow the application of [Pod Security Policies](h
1. **Recommended:** Add project members.
Use the **Members** accordion to provide other users with project access and roles.
Use the **Members** section to provide other users with project access and roles.
By default, your user is added as the project `Owner`.
@@ -100,12 +100,34 @@ Rancher extends Kubernetes to allow the application of [Pod Security Policies](h
>**Note:** You can only search for groups if external authentication is enabled.
1. From the **Role** drop-down, choose a role.
[What are Roles?]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/)
>**Tip:** Choose Custom to create a custom role on the fly: [Custom Project Roles]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#custom-project-roles).
>**Notes:**
>
>- Users assigned the `Owner` or `Member` role for a project automatically inherit the `namespace creation` role. However, this role is a [Kubernetes ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole), meaning its scope extends to all projects in the cluster. Therefore, users explicitly assigned the `Owner` or `Member` role for a project can create namespaces in other projects they're assigned to, even with only the `Read Only` role assigned.
>
>- Choose `Custom` to create a custom role on the fly: [Custom Project Roles]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#custom-project-roles).
1. To add more members, repeat substeps a—c.
1. **Optional:** Add **Resource Quotas**, which limit the resources that a project (and its namespaces) can consume. For more information, see [Resource Quotas]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas).
>**Note:** This option is only available in v2.1.0 and later.
1. Click **Add Quota**.
1. Select a [Resource Type]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/#resource-quota-types).
1. Enter values for the **Project Limit** and the **Namespace Default Limit**.
| Field | Description |
| ----------------------- | -------------------------------------------------------------------------------------------------------- |
| Project Limit | The overall resource limit for the project. |
| Namespace Default Limit | The default resource limit available for each namespace. This limit is propagated to each namespace in the project. The combined limit of all project namespaces shouldn't exceed the project limit. |
1. **Optional:** Repeat these substeps to add more quotas.
1. Click **Create**.
@@ -153,6 +175,8 @@ Create a new namespace to isolate apps and resources in a project.
1. From the main menu, select **Namespace**. Then click **Add Namespace**.
1. **Optional:** If your project has [Resource Quotas]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) in effect, you can override the default resource **Limits** (which places a cap on the resources that the namespace can consume).
1. Enter a **Name** and then click **Create**.
**Result:** Your namespace is added to the project. You can begin assigning cluster resources to the namespace.
@@ -167,8 +191,33 @@ Cluster admins and members may occasionally need to move a namespace to another
1. Select the namespace(s) that you want to move to a different project. Then click **Move**. You can move multiple namespaces at once.
>**Note:** Don't move the namespaces in the `System` project. Moving these namespaces can adversely affect cluster networking.
>**Notes:**
>
>- Don't move the namespaces in the `System` project. Moving these namespaces can adversely affect cluster networking.
>- You cannot move a namespace into a project that already has a [resource quota]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/) configured.
>- If you move a namespace from a project that has a quota set to a project with no quota set, the quota is removed from the namespace.
1. Choose a new project for the new namespace and then click **Move**. Alternatively, you can remove the namespace from all projects by selecting **None**.
**Result:** Your namespace is moved to a different project (or is unattached from all projects). If any project resources are attached to the namespace, the namespace releases them and then attaches the resources from the new project.
### Editing Namespace Resource Quotas
If there is a [resource quota]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) configured for a project, you can override the namespace default limit to provide a specific namespace with access to more (or less) project resources.
1. From the **Global** view, open the cluster that contains the namespace for which you want to edit the resource quota.
1. From the main menu, select **Projects/Namespaces**.
1. Find the namespace for which you want to edit the resource quota. Select **Ellipsis (...) > Edit**.
1. Edit the Resource Quota **Limits**. These limits determine the resources available to the namespace. The limits must be set within the configured [project limits]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/#project-limits).
For more information about each **Resource Type**, see [Resource Quota Types]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/#resource-quota-types).
>**Note:**
>
>- If a resource quota is not configured for the project, these options will not be available.
>- If you enter limits that exceed the configured project limits, Rancher will not let you save your edits.
**Result:** The namespace's default resource quota is overwritten with your override.
@@ -35,11 +35,15 @@ Following project creation, you can add users as project members so that they ca
[What are Project Roles?]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/)
>**Tip:** For Custom Roles, you can modify the list of individual roles available for assignment.
>
> - To add roles to the list, [Add a Custom Role]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles).
> - To remove roles from the list, [Lock/Unlock Roles]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/).
>**Notes:**
>
>- Users assigned the `Owner` or `Member` role for a project automatically inherit the `namespace creation` role. However, this role is a [Kubernetes ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole), meaning its scope extends to all projects in the cluster. Therefore, users explicitly assigned the `Owner` or `Member` role for a project can create namespaces in other projects they're assigned to, even with only the `Read Only` role assigned.
>
>- For `Custom` roles, you can modify the list of individual roles available for assignment.
>
> - To add roles to the list, [Add a Custom Role]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/default-custom-roles).
> - To remove roles from the list, [Lock/Unlock Roles]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/locked-roles/).
**Result:** The chosen users are added to the project.
- To revoke project membership, select the user and click **Delete**. This action deletes membership, not the user.
@@ -69,13 +73,40 @@ You can always assign a PSP to an existing project if you didn't assign one duri
- Apply the PSP to the project.
- Apply the PSP to any namespaces you add to the project later.
>**Prerequisites:**
>
> - Create a Pod Security Policy within Rancher. Before you can assign a default PSP to a new project, you must have a PSP available for assignment. For instruction, see [Creating Pod Security Policies]({{< baseurl >}}/rancher/v2.x/en/admin-settings/pod-security-policies/).
> - Assign a default Pod Security Policy to the project's cluster. You can't assign a PSP to a project until one is already applied to the cluster.
5. Click **Save**.
**Result:** The PSP is applied to the project and any namespaces added to the project.
>**Note:** Any workloads that are already running in a cluster or project before a PSP is assigned are not checked for compliance with the PSP. Workloads must be cloned or upgraded to check whether they pass the PSP.
## Editing Resource Quotas
_Available as of v2.0.1_
Edit [resource quotas]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) when:
- You want to limit the resources that a project and its namespaces can use.
- You want to scale the resources available to a project up or down when a resource quota is already in effect.
1. From the **Global** view, open the cluster containing the project to which you want to apply a resource quota.
1. From the main menu, select **Projects/Namespaces**.
1. Find the project that you want to add a resource quota to. From that project, select **Ellipsis (...) > Edit**.
1. Expand **Resource Quotas** and click **Add Quota**. Alternatively, you can edit existing quotas.
1. Select a [Resource Type]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/#resource-quota-types).
1. Enter values for the **Project Limit** and the **Namespace Default Limit**.
| Field | Description |
| ----------------------- | -------------------------------------------------------------------------------------------------------- |
| Project Limit | The overall resource limit for the project. |
| Namespace Default Limit | The default resource limit available for each namespace. This limit is propagated to each namespace in the project. The combined limit of all project namespaces shouldn't exceed the project limit. |
1. **Optional:** Add more quotas.
1. Click **Create**.
**Result:** The resource quota is applied to your project and namespaces. When you add more namespaces in the future, Rancher validates that the project can accommodate the namespace. If the project can't allocate the resources, Rancher won't let you save your changes.
@@ -1,62 +0,0 @@
---
title: Project Quotas
weight: 5000
draft: true
---
_Available as of v2.1.0_
When you are creating or editing a project, you can configure a _resource quotas_, which is a Rancher feature that limits the resources available to a project and the namespaces within it.
In situations where several teams share a cluster, one team may overconsume the resources available. To prevent this overconsumption, you can apply a _project quota_, which creates a pool of resources, such as memory or processing power, that the project's namespaces can use.
## Rancher Resource Quotas vs. Native Kubernetes Resource Quotas
Resource quotas in Rancher work similarly to how they do in the [native version of Kubernetes](https://kubernetes.io/docs/concepts/policy/resource-quotas/). However, Rancher's version of resource quotas have a few key differences from the Kubernetes version.
In a standard Kubernetes deployment, resource quotas are applied to individual namespaces. However, you cannot apply a quota to multiple namespaces with a single action. Instead, the resource quota must be applied to each namespace, which can be tedious. The following diagram depicts resource quotas in a native Kubernetes deployment. Notice that:
- Resource quotas apply only to namespaces they are directly assigned to.
- Quotas are applied to individual namespaces, rather than collectively. Even though each quota sets the same limits, a unique quota is applied to each namespace.
![Native Kubernetes Resource Quota Implementation]({{< baseurl >}}/img/rancher/kubernetes-resource-quota.svg)
<sup>Native Kubernetes Resource Quota Implementation Example</sup>
In Rancher's implementation of resource quotas, the quota is applied to a [project]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects) instead. The resource quota includes two limits:
- **Project Limits:**
This set of values is the overall limit for the project. When the overall limit for the project is exceeded, Kubernetes determines which namespaces to stop in order to get back under the quota.
- **Namespace Default Limits:**
This value is the default resource limit that an individual namespace inherits from the project. If an individual namespace exceeds its namespace limit, Kubernetes stops objects in the namespace from operating.
Each namespace inherits this default limit unless you [override it](#namespace-default-limit-overrides).
The following diagram depicts resource quotas in a Rancher deployment. Notice that:
- The resource quota is applied to the entire project.
- The project limit sets what resources are available for the entire project.
- Each namespace in the project inherits the namespace default limit, which sets the cap for resources available for each individual namespace. The same namespace default limit is automatically applied to each namespace.
![Rancher Resource Quota Implementation]({{< baseurl >}}/img/rancher/rancher-resource-quota.svg)
<sup>Rancher Resource Quota Implementation Example</sup>
The following table explains the key differences between the two quota types.
Rancher Resource Quotas | Native Kubernetes Resource Quotas
---------|----------
Applied to projects. | Applied to namespaces.
Applies resource limits to the project and all its namespaces. | Applies resource limits to individual namespaces.
Applies resource quotas to namespaces through inheritance. | Applies only to the assigned namespace.
## Resource Quota Types
When you create a resource quota, you are configuring the pool of resources available to the project. You can set limits for a variety of different resources, for both your project and your namespaces.
### Namespace Default Limit Overrides
Although each namespace in a project inherits the **Namespace Default Limit**, you can also override this setting for specific namespaces that require additional (or fewer) resources.
@@ -0,0 +1,88 @@
---
title: Resource Quotas
weight: 5000
---
_Available as of v2.1.0_
In situations where several teams share a cluster, one team may overconsume the resources available: CPU, memory, storage, services, Kubernetes objects like pods or secrets, and so on. To prevent this overconsumption, you can apply a _resource quota_, which is a Rancher feature that limits the resources available to a project or namespace.
## Resource Quotas in Rancher
Resource quotas in Rancher include the same functionality as the [native version of Kubernetes](https://kubernetes.io/docs/concepts/policy/resource-quotas/). However, in Rancher, resource quotas have been extended so that you can apply them to [projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects).
In a standard Kubernetes deployment, resource quotas are applied to individual namespaces. However, you cannot apply the quota to your namespaces simultaneously with a single action. Instead, the resource quota must be applied multiple times.
In the following diagram, a Kubernetes admin is trying to enforce a resource quota without Rancher. The admin wants to apply a resource quota that sets the same CPU and memory limit to every namespace in their cluster (`Namespace 1-4`). However, in the base version of Kubernetes, each namespace requires a unique resource quota. The admin has to create four different resource quotas that have the same specs configured (`Resource Quota 1-4`) and apply them individually.
<sup>Base Kubernetes: Unique Resource Quotas Being Applied to Each Namespace</sup>
![Native Kubernetes Resource Quota Implementation]({{< baseurl >}}/img/rancher/kubernetes-resource-quota.svg)
Resource quotas are a little different in Rancher. In Rancher, you apply a resource quota to the [project]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects), and then the quota propagates to each namespace, after which Kubernetes enforces your limits using the native version of resource quotas. If you want to change the quota for a specific namespace, you can [override it](#overriding-the-default-limit-for-a-namespace).
The resource quota includes two limits, which you set while creating or editing a project:
<a id="project-limits"></a>
- **Project Limits:**
This set of values configures an overall resource limit for the project. If you try to add a new namespace to the project, Rancher uses the limits you've set to validate that the project has enough resources to accommodate the namespace. In other words, if you try to add a namespace to a project near its resource quota, Rancher blocks you from adding the namespace.
- **Namespace Default Limits:**
This value is the default resource limit available for each namespace. The project propagates the limit to each namespace. Each namespace is bound to this default limit unless you [override it](#overriding-the-default-limit-for-a-namespace).
In the following diagram, a Rancher admin wants to apply a resource quota that sets the same CPU and memory limit for every namespace in their project (`Namespace 1-4`). However, in Rancher, the admin can set a resource quota for the project (`Project Resource Quota`) rather than individual namespaces. This quota includes resource limits for both the entire project (`Project Limit`) and individual namespaces (`Namespace Default Limit`). Rancher then propagates this quota to each namespace (`Namespace Resource Quota`).
<sup>Rancher: Resource Quotas Propagating to Each Namespace</sup>
![Rancher Resource Quota Implementation]({{< baseurl >}}/img/rancher/rancher-resource-quota.svg)
The following table explains the key differences between the two quota types.
| Rancher Resource Quotas | Kubernetes Resource Quotas |
| ---------------------------------------------------------- | -------------------------------------------------------- |
| Applies to projects and namespaces. | Applies to namespaces only. |
| Creates a resource pool for all namespaces in a project. | Applies static resource limits to individual namespaces. |
| Applies resource quotas to namespaces through propagation. | Applies only to the assigned namespace. |
## Creating Resource Quotas
You can create resource quotas in the following contexts:
- [While creating projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#creating-projects)
- [While editing projects]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/editing-projects/#editing-resource-quotas)
## Resource Quota Types
When you create a resource quota, you are configuring the pool of resources available to the project. You can set limits for the following resource types.
| Resource Type | Description |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| CPU Limit | The maximum amount of CPU (in [millicores](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu)) allocated to the project/namespace.<sup>1</sup> |
| CPU Reservation | The minimum amount of CPU (in millicores) guaranteed to the project/namespace.<sup>1</sup> |
| Memory Limit | The maximum amount of memory (in bytes) allocated to the project/namespace.<sup>1</sup> |
| Memory Reservation | The minimum amount of memory (in bytes) guaranteed to the project/namespace.<sup>1</sup> |
| Storage Reservation | The minimum amount of storage (in gigabytes) guaranteed to the project/namespace. |
| Services Load Balancers  | The maximum number of load balancer services that can exist in the project/namespace. |
| Services Node Ports | The maximum number of node port services that can exist in the project/namespace. |
| Pods                     | The maximum number of pods that can exist in the project/namespace in a non-terminal state (i.e., pods where `.status.phase not in (Failed, Succeeded)`). |
| Services | The maximum number of services that can exist in the project/namespace. |
| ConfigMaps | The maximum number of ConfigMaps that can exist in the project/namespace. |
| Persistent Volume Claims | The maximum number of persistent volume claims that can exist in the project/namespace. |
| Replication Controllers  | The maximum number of replication controllers that can exist in the project/namespace. |
| Secrets | The maximum number of secrets that can exist in the project/namespace. |
>**<sup>1</sup>** In the quota, if you set CPU or memory limits, all containers you create in the project/namespace must explicitly satisfy the quota. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits) for more details.
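Because Rancher enforces these limits by propagating native Kubernetes resource quotas to each namespace, a namespace default limit roughly corresponds to a standard `ResourceQuota` object. The sketch below illustrates this mapping; the name, namespace, and values are illustrative.

```yaml
# A sketch of the native ResourceQuota that a namespace default limit
# roughly maps to (illustrative name, namespace, and values).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota
  namespace: my-namespace
spec:
  hard:
    limits.cpu: 500m      # CPU Limit
    requests.cpu: 250m    # CPU Reservation
    limits.memory: 128Mi  # Memory Limit
    pods: "20"            # Pods
```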
### Overriding the Default Limit for a Namespace
Although the **Namespace Default Limit** propagates from the project to each namespace, in some cases you may need to increase (or decrease) the resources available to a specific namespace. In this situation, you can override the default limits by editing the namespace.
In the diagram below, the Rancher admin has a resource quota in effect for their project. However, the admin wants to override the namespace limits for `Namespace 3` so that it performs better. Therefore, the admin [raises the namespace limits]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#editing-namespace-resource-quotas) for `Namespace 3` so that the namespace can access more resources.
<sup>Namespace Default Limit Override</sup>
![Namespace Default Limit Override]({{< baseurl >}}/img/rancher/rancher-resource-quota-override.svg)
How to: [Editing Namespace Resource Quotas]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#editing-namespace-resource-quotas)
@@ -5,131 +5,88 @@ aliases:
- /rancher/v2.x/en/concepts/ci-cd-pipelines/
- /rancher/v2.x/en/tasks/pipelines/
---
>**Notes:**
>
>- Pipelines are new and improved for Rancher v2.1! Therefore, if you configured pipelines while using v2.0.x, you'll have to reconfigure them after upgrading to v2.1.
>- Still using v2.0.x? See the pipeline documentation for [previous versions](/rancher/v2.x/en/tools/pipelines/docs-for-v2.0.x).
Pipelines help you automate the software delivery process. You can integrate Rancher with GitHub to create a pipeline.
A _pipeline_ is a software delivery process that is broken into different stages, allowing developers to deliver new software as quickly and efficiently as possible. Within Rancher, you can configure a pipeline for each of your Rancher projects.
You can set up your pipeline to run a series of stages and steps to test your code and deploy it.
The pipeline stages are:
<dl>
<dt>Pipelines</dt>
<dd>Contain a series of stages and steps. Out-of-the-box, the pipelines feature supports fan-out and fan-in capabilities.</dd>
<dt>Stages</dt>
<dd>Executed sequentially. The next stage will not execute until all of the steps within the stage execute.</dd>
<dt>Steps</dt>
<dd>Are executed in parallel within a stage. </dd>
</dl>
- **Build:**
## Enabling CI Pipelines
Each time code is checked into your repository, the pipeline automatically clones the repo and builds a new iteration of your software. Throughout this process, the software is typically reviewed by automated tests.
1. Select cluster from drop down.
- **Publish:**
2. Under tools menu select pipelines.
After each build is completed, it's automatically published to a Docker registry, where it can be pulled for manual testing.
3. Follow instructions for setting up github auth on page.
- **Deploy:**
A natural extension of the publish stage, the deploy stage lets you release your software to customers with the click of a button.
## Creating CI Pipelines
## Overview
1. Go to the project you want this pipeline to run in.
Rancher Pipeline provides a simple CI/CD experience. Use it to automatically check out code, run builds, perform tests, publish Docker images, and deploy Kubernetes resources to your clusters.
2. Select workloads from the top level Nav bar
You can configure a pipeline for each project in Rancher. Every project can have individual configurations and setups.
3. Select pipelines from the secondary Nav bar
Pipelines are represented as pipeline files that are checked into source code repositories. Users can read and edit the pipeline configuration by either:
4. Click Add pipeline button.
- Using the Rancher UI.
- Updating the configuration in the repository, using tools like Git CLI to trigger a build with the latest CI definition.
5. Enter in your repository name (Autocomplete should help zero in on it quickly).
>**Note:** Rancher Pipeline provides a simple CI/CD experience, but it does not offer the full power and flexibility of, and is not a replacement for, enterprise-grade Jenkins or other CI tools your team uses.
6. Select Branch options.
## Supported Version Control Platforms
- Only the branch {BRANCH NAME}: Only events triggered by changes to this branch will be built.
Rancher pipelines currently supports GitHub and GitLab (available as of Rancher v2.1.0).
- Everything but {BRANCH NAME}: Build any branch that triggered an event EXCEPT events from this branch.
>**Note:** Additions to pipelines are scoped for future releases of Rancher, such as:
>
>- Additional version control systems such as BitBucket
>- Deployment via Helm charts
>- Deployment via Rancher catalog
- All branches: Regardless of the branch that triggered the event always build.
## How Pipelines Work
>**Note:** If you want one path for master, but another for PRs or development/test/feature branches, create two separate pipelines.
When you configure a pipeline in one of your projects, a namespace specifically for the pipeline is automatically created. The following components are deployed to it:
7. Select the build trigger events. By default, builds will only happen by manually clicking build now in Rancher UI.
- **Jenkins:**
- Automatically build this pipeline whenever there is a git commit. (This respects the branch selection above)
The pipeline's build engine. Because project users do not directly interact with Jenkins, it's managed and locked.
- Automatically build this pipeline whenever there is a new PR.
>**Note:** There is no option to use existing Jenkins deployments as the pipeline engine.
<a id="reg"></a>
- Automatically build the pipeline. (Allows you to configure scheduled builds similar to Cron)
- **Docker Registry:**
8. Click Add button.
Out-of-the-box, the default target for your build-publish step is an internal Docker Registry. However, you can make configurations to push to a remote registry instead. The internal Docker Registry is only accessible from cluster nodes and cannot be directly accessed by users. Images are not persisted beyond the lifetime of the pipeline and should only be used in pipeline runs. If you need to access your images outside of pipeline runs, please push to an external registry.
By default, Rancher provides a three stage pipeline for you. It consists of a build stage where you would compile, unit test, and scan code. The publish stage has a single step to publish a docker image.
<a id="minio"></a>
- **Minio:**
Minio storage is used to store the logs for pipeline executions.
>**Note:** The managed Jenkins instance works statelessly, so don't worry about its data persistency. The Docker Registry and Minio instances use ephemeral volumes by default, which is fine for most use cases. If you want to make sure pipeline logs can survive node failures, you can configure persistent volumes for them, as described in [data persistency for pipeline components](/rancher/v2.x/en/tools/pipelines/configurations/#data-persistency-for-pipeline-components).
9. Add a name to the pipeline in order to complete adding a pipeline.
## Pipeline Triggers
10. Click on the “run a script” box under the Build stage.
Here you can set the image, or select from pre-packaged envs.
11. Configure a shell script to run inside the container when building.
12. Click Save to persist the changes.
13. Click the “publish an image” box under the “Publish” stage.
14. Set the location of the Dockerfile. By default it looks in the root of the workspace. Instead, set the build context for building the image relative to the root of the workspace.
15. Set the image information.
The registry is the remote registry URL. It is defaulted to Docker Hub.
Repository is the `<org>/<repo>` in the repository.
16. Select the Tag. You can hard code a tag like latest or select from a list of available variables.
17. If this is the first time using this registry, you can add the username/password for pushing the image. You must click save for the registry credentials AND also save for the modal.
After you configure a pipeline, you can trigger it using different methods:
- **Manually:**
After you configure a pipeline, you can trigger a build using the latest CI definition from either the Rancher UI or Git CLI. When a pipeline execution is triggered, Rancher dynamically provisions a Kubernetes pod to run your CI tasks and then removes it upon completion.
## Creating a New Stage
- **Automatically:**
1. To add a new stage, click the add a new stage link in either create or edit mode of the pipeline view.
When you enable a repository for a pipeline, webhooks are automatically added to the version control system. When project users interact with the repo—push code, open pull requests, or create a tag—the version control system sends a webhook to Rancher Server, triggering a pipeline execution.
2. Provide a name for the stage.
3. Click save.
## Creating a New Step
1. Go to create / edit mode of the pipeline.
2. Click the “Add Step” button in the stage that you would like to add a step in.
3. Fill out the form as detailed above.
## Environment Variables
For your convenience the following environment variables are available in your build steps:
Variable Name | Description
------------------------|------------------------------------------------------------
CICD_GIT_REPO_NAME | Repository Name (Stripped of Github Organization)
CICD_PIPELINE_NAME | Name of the pipeline
CICD_GIT_BRANCH | Git branch of this event
CICD_TRIGGER_TYPE | Event that triggered the build
CICD_PIPELINE_ID | Rancher ID for the pipeline
CICD_GIT_URL | URL of the Git repository
CICD_EXECUTION_SEQUENCE | Build number of the pipeline
CICD_EXECUTION_ID | Combination of {CICD_PIPELINE_ID}-{CICD_EXECUTION_SEQUENCE}
CICD_GIT_COMMIT | Git commit ID being executed.
## Importing a Pipeline From YAML
If there is a pipeline YAML file already checked into the GitHub repository, click import.
To use this automation, webhook management permission is required for the repo. Therefore, when users authenticate and fetch their repositories, only those on which they have admin permission will be shown.
@@ -0,0 +1,22 @@
---
title: Pipeline Terminology
weight: 1000
---
When setting up a pipeline, it's helpful to know a few related terms.
- **Pipeline:**
A pipeline consists of stages and steps. It defines the process to build, test, and deploy your code. Rancher pipeline uses the [pipeline as code](https://jenkins.io/doc/book/pipeline-as-code/) model—pipeline configuration is represented as a pipeline file in the source code repository, using the file name `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`.
- **Stages:**
A pipeline stage consists of multiple steps. Stages are executed in the order defined in the pipeline file. The steps in a stage are executed concurrently. A stage starts when all steps in the previous stage finish without failure.
- **Steps:**
A pipeline step is executed inside a specified stage. A step fails if it exits with a code other than `0`. If a step exits with this failure code, the entire pipeline fails and terminates.
- **Workspace:**
The workspace is the working directory shared by all pipeline steps. At the beginning of a pipeline, source code is checked out to the workspace. The command for every step runs in the workspace. During a pipeline execution, the artifacts from a previous step are available to future steps. The working directory is an ephemeral volume and is cleaned out with the executor pod when a pipeline execution finishes.
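Putting these terms together, a minimal pipeline file with a single stage and a single step might look like the sketch below; the stage name, image, and command are placeholders.

```yaml
# .rancher-pipeline.yml: one stage with one step (placeholder values)
stages:
- name: Build                     # stages run sequentially
  steps:                          # steps in a stage run concurrently
  - runScriptConfig:
      image: golang               # container image the step runs in
      shellScript: go test ./...  # a non-zero exit code fails the step
```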
@@ -0,0 +1,593 @@
---
title: Configuring Pipelines
weight: 3725
---
Configuring a pipeline automates the process of triggering and publishing builds. This section describes how to set up a pipeline in a production environment.
- The [Basic Configuration](#basic-configuration) section provides sequential instruction on how to configure a functional pipeline.
- The [Advanced Configuration](#advanced-configuration) section provides instructions for configuring pipeline options.
## Basic Configuration
To configure a functional pipeline for your project, begin by completing the mandatory basic configuration steps.
### Pipeline Configuration Outline
Initial configuration of a pipeline in a production environment involves completion of several mandatory procedures.
>**Note:** Before setting up a pipeline for a production environment, we recommend trying the [Pipeline Quick Start Guide]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/quick-start-guide).
<!-- TOC -->
- [1. Configuring Version Control Providers](#1-configuring-version-control-providers)
- [2. Configuring Pipeline Stages and Steps](#2-configuring-pipeline-stages-and-steps)
- [3. Running the Pipeline](#3-running-the-pipeline)
- [4. Configuring Persistent Data for Pipeline Components](#4-configuring-persistent-data-for-pipeline-components)
- [Advanced Configuration](#advanced-configuration)
<!-- /TOC -->
### 1. Configuring Version Control Providers
Begin configuration of your pipeline by enabling authentication with your version control provider. Rancher Pipeline supports integration with GitHub and GitLab.
Select your provider's tab below and follow the directions.
{{% tabs %}}
{{% tab "GitHub" %}}
1. From the context menu, open the project for which you're configuring a pipeline.
1. From the main menu, select **Resources > Pipelines**.
1. Follow the directions displayed to set up an OAuth application in GitHub.
![GitHub Pipeline Instructions]({{< baseurl >}}/img/rancher/github-pipeline.png)
1. From GitHub, copy the **Client ID** and **Client Secret**. Paste them into Rancher.
1. If you're using GitHub Enterprise, select **Use a private github enterprise installation**. Enter the host address of your GitHub installation.
1. Click **Authenticate**.
1. Enable the repository for which you want to run a pipeline. Then click **Done**.
{{% /tab %}}
{{% tab "GitLab" %}}
1. From the context menu, open the project for which you're configuring a pipeline.
1. From the main menu, select **Resources > Pipelines**.
1. Follow the directions displayed to set up a GitLab application.
![GitLab Pipeline Instructions]({{< baseurl >}}/img/rancher/gitlab-pipeline.png)
1. From GitLab, copy the **Application ID** and **Secret**. Paste them into Rancher.
1. If you're using a GitLab Enterprise setup, select **Use a private gitlab enterprise installation**. Enter the host address of your GitLab installation.
1. Click **Authenticate**.
1. Enable the repository for which you want to run a pipeline. Then click **Done**.
>**Note:** If you use GitLab and your Rancher setup is in a local network, enable the **Allow requests to the local network from hooks and services** option in GitLab admin settings.
{{% /tab %}}
{{% /tabs %}}
**Result:** A pipeline is added to the project.
### 2. Configuring Pipeline Stages and Steps
Now that the pipeline is added to your project, you need to configure its automated stages and steps. For your convenience, there are multiple built-in step types for dedicated tasks.
1. From your project's **Pipeline** tab, find your new pipeline, and select **Ellipsis (...) > Edit Config**.
>**Note:** When configuring a pipeline, it takes a few moments for Rancher to check for an existing pipeline configuration.
1. Click **Configure pipeline for this branch**.
1. Add stages to your pipeline execution by clicking **Add Stage**.
1. Add steps to each stage by clicking **Add a Step**. You can add multiple steps to each stage.
>**Note:** As you build out each stage and step, click `Show advanced options` to make [Advanced Configurations](#advanced-configuration), such as adding rules to trigger or skip pipeline actions, adding environment variables, or injecting secrets as environment variables. Advanced options are available for the pipeline, each stage, and each individual step.
**Step types available include:**
{{% accordion id="clone" label="Clone" %}}
The first stage is reserved as a cloning step that checks out source code from your repo. Rancher handles the cloning of the git repository. This action is equivalent to `git clone <repository_link> <workspace_dir>`.
{{% /accordion %}}
{{% accordion id="run-script" label="Run Script" %}}
The **Run Script** step executes arbitrary commands in the workspace inside a specified container. You can use it to build, test, and do more, given whatever utilities the base image provides. For your convenience, you can use variables to refer to metadata of a pipeline execution. Please go to the [reference page](/rancher/v2.x/en/tools/pipelines/reference/#variable-substitution) for the list of available variables.
{{% tabs %}}
{{% tab "By UI" %}}
<br/>
1. From the **Step Type** drop-down, choose **Run Script** and fill in the form.
1. Click **Add**.
{{% /tab %}}
{{% tab "By YAML" %}}
```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: golang
      shellScript: go build
```
{{% /tab %}}
{{% /tabs %}}
{{% /accordion %}}
{{% accordion id="build-publish-image" label="Build and Publish Images" %}}
The **Build and Publish Image** step builds and publishes a Docker image. This process requires a Dockerfile in your source code's repository to complete successfully.
{{% tabs %}}
{{% tab "By UI" %}}
1. From the **Step Type** drop-down, choose **Build and Publish**.
1. Fill in the rest of the form. Descriptions for each field are listed below. When you're done, click **Add**.
Field | Description |
---------|----------|
Dockerfile Path | The relative path to the Dockerfile in the source code repo. By default, this path is `./Dockerfile`, which assumes the Dockerfile is in the root directory. You can set it to other paths in different use cases (`./path/to/myDockerfile` for example). |
Image Name | The image name in `name:tag` format. The registry address is not required. For example, to build `example.com/repo/my-image:dev`, enter `repo/my-image:dev`. |
Push image to remote repository | An option to set the registry that publishes the image that's built. To use this option, enable it and choose a registry from the drop-down. If this option is disabled, the image is pushed to the internal registry. |
Build Context <br/><br/> (**Show advanced options**)| By default, the root directory of the source code (`.`). For more details, see the Docker [build command documentation](https://docs.docker.com/engine/reference/commandline/build/). |
{{% /tab %}}
{{% tab "By YAML" %}}
```yaml
# example
stages:
- name: Publish Image
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: repo/app:v1
      pushRemote: true
      registry: example.com
```
You can use specific arguments for the Docker daemon and the build. They are not exposed in the UI, but they are available in pipeline YAML format, as indicated in the example above. Available variables include:
Variable Name | Description
------------------------|------------------------------------------------------------
PLUGIN_DRY_RUN | Disable docker push
PLUGIN_DEBUG | Docker daemon executes in debug mode
PLUGIN_MIRROR | Docker daemon registry mirror
PLUGIN_INSECURE | Docker daemon allows insecure registries
PLUGIN_BUILD_ARGS | Docker build args, a comma separated list
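As a sketch, these variables can be set through a step's `env` section; the values shown here are illustrative.

```yaml
# example (illustrative values)
stages:
- name: Publish Image
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: repo/app:v1
    env:
      PLUGIN_BUILD_ARGS: VERSION=v1  # passed to docker build as build args
```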
{{% /tab %}}
{{% /tabs %}}
{{% /accordion %}}
{{% accordion id="deploy-yaml" label="Deploy YAML" %}}
This step deploys arbitrary Kubernetes resources to the project. This deployment requires a Kubernetes manifest file to be present in the source code repository. Pipeline variable substitution is supported in the manifest file. You can view an example file at [GitHub](https://github.com/rancher/pipeline-example-go/blob/master/deployment.yaml). For available variables, refer to [Pipeline Variable Reference]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/reference/).
{{% tabs %}}
{{% tab "By UI" %}}
1. From the **Step Type** drop-down, choose **Deploy YAML** and fill in the form.
1. Enter the **YAML Path**, which is the path to the manifest file in the source code.
1. Click **Add**.
{{% /tab %}}
{{% tab "By YAML" %}}
```yaml
# example
stages:
- name: Deploy
  steps:
  - applyYamlConfig:
      path: ./deployment.yaml
```
{{% /tab %}}
{{% /tabs %}}
{{% /accordion %}}
1. When you're finished adding stages and steps, click **Done.**
### 3. Running the Pipeline
Run your pipeline for the first time. From the **Pipeline** tab, find your pipeline and select **Ellipsis (...) > Run**.
During this initial run, your pipeline is tested, and the following [pipeline components]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/#how-pipelines-work) are deployed to your project as workloads in a new namespace dedicated to the pipeline:
- `docker-registry`
- `jenkins`
- `minio`
This process takes several minutes. When it completes, you can view each pipeline component from the project **Workloads** tab.
### 4. Configuring Persistent Data for Pipeline Components
The internal [Docker registry]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/#reg) and the [Minio]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/#minio) workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.
Complete both [A. Configuring Persistent Data for Docker Registry](#a-configuring-persistent-data-for-docker-registry) _and_ [B. Configuring Persistent Data for Minio](#b-configuring-persistent-data-for-minio).
>**Prerequisites (for both parts A and B):**
>
>[Persistent volumes]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#persistent-volumes) must be available for the cluster.
#### A. Configuring Persistent Data for Docker Registry
1. From the project that you're configuring a pipeline for, select the **Workloads** tab.
1. Find the `docker-registry` workload and select **Ellipsis (...) > Edit**.
1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:
- **Add Volume > Add a new persistent volume (claim)**
- **Add Volume > Use an existing persistent volume (claim)**
1. Complete the form that displays to choose a persistent volume for the internal Docker registry.
{{% tabs %}}
{{% tab "Add a new persistent volume" %}}
<br/>
1. Enter a **Name** for the volume claim.
1. Select a volume claim **Source**:
- If you select **Use a Storage Class to provision a new persistent volume**, select a [Storage Class]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#storage-classes) and enter a **Capacity**.
- If you select **Use an existing persistent volume**, choose a **Persistent Volume** from the drop-down.
1. From the **Customize** section, choose the read/write access for the volume.
1. Click **Define**.
{{% /tab %}}
{{% tab "Use an existing persistent volume" %}}
<br/>
1. Enter a **Name** for the volume claim.
1. Choose a **Persistent Volume Claim** from the drop-down.
1. From the **Customize** section, choose the read/write access for the volume.
1. Click **Define**.
{{% /tab %}}
{{% /tabs %}}
1. From the **Mount Point** field, enter `/var/lib/registry`, which is the data storage path inside the Docker registry container.
1. Click **Upgrade**.
#### B. Configuring Persistent Data for Minio
1. From the **Workloads** tab, find the `minio` workload and select **Ellipsis (...) > Edit**.
1. Scroll to the **Volumes** section and expand it. Make one of the following selections from the **Add Volume** menu, which is near the bottom of the section:
- **Add Volume > Add a new persistent volume (claim)**
- **Add Volume > Use an existing persistent volume (claim)**
1. Complete the form that displays to choose a persistent volume for Minio.
{{% tabs %}}
{{% tab "Add a new persistent volume" %}}
<br/>
1. Enter a **Name** for the volume claim.
1. Select a volume claim **Source**:
- If you select **Use a Storage Class to provision a new persistent volume**, select a [Storage Class]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#storage-classes) and enter a **Capacity**.
- If you select **Use an existing persistent volume**, choose a **Persistent Volume** from the drop-down.
1. From the **Customize** section, choose the read/write access for the volume.
1. Click **Define**.
{{% /tab %}}
{{% tab "Use an existing persistent volume" %}}
<br/>
1. Enter a **Name** for the volume claim.
1. Choose a **Persistent Volume Claim** from the drop-down.
1. From the **Customize** section, choose the read/write access for the volume.
1. Click **Define**.
{{% /tab %}}
{{% /tabs %}}
1. From the **Mount Point** field, enter `/data`, which is the data storage path inside the Minio container.
1. Click **Upgrade**.
**Result:** Persistent storage is configured for your pipeline components.
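For reference, the claims created through the forms above are ordinary Kubernetes `PersistentVolumeClaim` objects. The sketch below shows roughly what a claim backing the internal Docker registry might look like; the name, storage class, and capacity are hypothetical.

```yaml
# Sketch of a PVC backing the internal Docker registry
# (hypothetical name, storage class, and capacity).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: my-storage-class
  resources:
    requests:
      storage: 10Gi
# Mount at /var/lib/registry for docker-registry, /data for minio.
```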
## Advanced Configuration
During the process of configuring a pipeline, you can configure advanced options for triggering the pipeline or configuring environment variables.
- [Configuring Pipeline Trigger Rules](#configuring-pipeline-trigger-rules)
- [Configuring Timeouts](#configuring-timeouts)
- [Configuring Environment Variables](#configuring-environment-variables)
- [Configuring Pipeline Secrets](#configuring-pipeline-secrets)
- [Configuring the Executor Quota](#configuring-the-executor-quota)
### Configuring Pipeline Trigger Rules
When a repository is enabled, a webhook for it is automatically set in the version control system. By default, the project pipeline is triggered by a push event to a specific repository, but you can add (or change) events that trigger a build, such as a pull request or a tag.
Trigger rules come in two types:
- **Run this when:**
This type of rule starts the pipeline, stage, or step when a trigger explicitly occurs.
- **Do Not Run this when:**
This type of rule skips the pipeline, stage, or step when a trigger explicitly occurs.

If all conditions evaluate to true, the pipeline/stage/step is executed. Otherwise it is skipped. When a stage/step is skipped, it is considered successful, and follow-up stages/steps continue to run. Wildcard character (`*`) expansion is supported in conditions.
{{% tabs %}}
{{% tab "Pipeline Trigger" %}}
You can configure trigger rules for the entire pipeline in two different contexts:
{{% accordion id="pipeline-creation" label="During Initial Pipeline Configuration" %}}
1. From the context menu, open the project for which you've configured a pipeline. Then select the **Pipelines** tab.
1. From the pipeline for which you want to edit build triggers, select **Ellipsis (...) > Edit Config**.
1. Click **Show advanced options**.
1. From **Trigger Rules**, configure rules to run or skip the pipeline.
1. Click **Add Rule**. In the **Value** field, enter the name of the branch that triggers the pipeline.
1. **Optional:** Add more branches that trigger a build.
{{% /accordion %}}
{{% accordion id="pipeline-settings" label="While Editing Pipeline Settings" %}}
After you've configured a pipeline, you can go back and choose the events that trigger a pipeline execution.
>**Note:** This option is not available for example repositories.
1. From the context menu, open the project for which you've configured a pipeline. Then select the **Pipelines** tab.
1. From the pipeline for which you want to edit build triggers, select **Ellipsis (...) > Setting**.
1. Select (or clear) the events that you want to trigger a pipeline execution.
1. Click **Save**.
{{% /accordion %}}
{{% /tab %}}
{{% tab "Stage Trigger" %}}
1. From the context menu, open the project for which you've configured a pipeline. Then select the **Pipelines** tab.
1. From the pipeline for which you want to edit triggers, select **Ellipsis (...) > Edit Config**.
1. From the pipeline stage that you want to configure a trigger for, click the **Edit** icon.
1. Click **Show advanced options**.
1. Add one or more trigger rules.
1. Click **Add Rule**.
1. Choose the **Type** that triggers the stage.
| Type | Value |
| ------ | -------------------------------------------------------------------- |
| Branch | The name of the branch that triggers the stage. |
| Event | The type of event that triggers the stage (Push, Pull Request, Tag). |
1. Click **Save**.
{{% /tab %}}
{{% tab "Step Trigger" %}}
1. From the context menu, open the project for which you've configured a pipeline. Then select the **Pipelines** tab.
1. From the pipeline for which you want to edit triggers, select **Ellipsis (...) > Edit Config**.
1. From the pipeline step that you want to configure a trigger for, click the **Edit** icon.
1. Click **Show advanced options**.
1. Add one or more trigger rules.
1. Click **Add Rule**.
1. Choose the **Type** that triggers the step.
| Type | Value |
| ------ | -------------------------------------------------------------------- |
| Branch | The name of the branch that triggers the step. |
| Event | The type of event that triggers the step (Push, Pull Request, Tag). |
1. Click **Save**.
{{% /tab %}}
{{% tab "Do Not Run YAML" %}}
```yaml
# example
stages:
- name: Build something
  # Conditions for stages
  when:
    branch: master
    event: [ push, pull_request ]
  # Multiple steps run concurrently
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: date -R
    # Conditions for steps
    when:
      branch: [ master, dev ]
      event: push
# branch conditions for the pipeline
branch:
  include: [ master, feature/* ]
  exclude: [ dev ]
```
{{% /tab %}}
{{% /tabs %}}
### Configuring Timeouts
Each pipeline execution has a default timeout of 60 minutes. If the pipeline execution cannot complete within its timeout period, the pipeline is aborted. If your pipeline executions routinely need more than 60 minutes, you can change the timeout period.
{{% tabs %}}
{{% tab "By UI" %}}
1. From the context menu, open the project for which you've configured a pipeline. Then select the **Pipelines** tab.
1. From the pipeline for which you want to edit the timeout, select **Ellipsis (...) > Edit Config**.
1. Click **Show advanced options**.
1. Enter a new value in the **Timeout** field.
{{% /tab %}}
{{% tab "By YAML" %}}
```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: ls
timeout: 30
```
{{% /tab %}}
{{% /tabs %}}
### Configuring Environment Variables
When configuring a pipeline, you can use environment variables to configure the step's script.
{{% tabs %}}
{{% tab "By UI" %}}
1. From the context menu, open the project for which you've configured a pipeline. Then select the **Pipelines** tab.
1. From the pipeline in which you want to use environment variables, select **Ellipsis (...) > Edit Config**.
1. Click the **Edit** icon for the step in which you want to use environment variables.
1. Click **Show advanced options**.
1. Click **Add Variable**, and then enter a key and value in the fields that appear. Add more variables if needed.
1. Edit the script, adding your environment variable(s).
1. Click **Save**.
{{% /tab %}}
{{% tab "By YAML" %}}
```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo ${FIRST_KEY} && echo ${SECOND_KEY}
    env:
      FIRST_KEY: VALUE
      SECOND_KEY: VALUE2
```
{{% /tab %}}
{{% /tabs %}}
### Configuring Pipeline Secrets
If you need to use security-sensitive information in your pipeline scripts (like a password), you can pass them in using Kubernetes [secrets]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/secrets/).
>**Prerequisite:** Create a secret for your project for use in pipelines.
>**Note:** Secret injection is disabled on pull request events.
{{% tabs %}}
{{% tab "By UI" %}}
1. From the context menu, open the project for which you've configured a pipeline. Then select the **Pipelines** tab.
1. From the pipeline in which you want to use environment variables, select **Ellipsis (...) > Edit Config**.
1. Click the **Edit** icon for the step in which you want to use environment variables.
1. Click **Show advanced options**.
1. Click **Add From Secret**. Select the secret file that you want to use. Then choose a key. Optionally, you can enter an alias for the key.
1. Click **Save**.
{{% /tab %}}
{{% tab "By YAML" %}}
```yaml
# example
stages:
- name: Build something
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo ${ALIAS_ENV}
    # environment variables from project secrets
    envFrom:
    - sourceName: my-secret
      sourceKey: secret-key
      targetKey: ALIAS_ENV
```
{{% /tab %}}
{{% /tabs %}}
### Configuring the Executor Quota
The _executor quota_ decides how many builds can run simultaneously in the project. If the number of triggered builds exceeds the quota, subsequent builds will queue until a vacancy opens. By default, the quota is `2`, but you can change it.
1. From the context menu, open the project for which you've configured a pipeline.
1. From the main menu, select **Resources > Pipelines**.
1. From **The maximum number of pipeline executors**, increment the **Scale** up or down to change the quota. A value of `0` or less removes the quota limit.
@@ -0,0 +1,125 @@
---
title: v2.0.x Pipeline Documentation
weight: 9000
---
>**Note:** This section describes the pipeline feature as implemented in Rancher v2.0.x. If you are using Rancher v2.1 or later, where pipelines have been significantly improved, please refer to the new documentation for [v2.1 or later](/rancher/v2.x/en/tools/pipelines).
Pipelines help you automate the software delivery process. You can integrate Rancher with GitHub to create a pipeline.
You can set up your pipeline to run a series of stages and steps to test your code and deploy it.
<dl>
<dt>Pipelines</dt>
<dd>Contain a series of stages and steps. Out-of-the-box, the pipelines feature supports fan-out and fan-in capabilities.</dd>
<dt>Stages</dt>
<dd>Executed sequentially. The next stage will not execute until all of the steps within the stage execute.</dd>
<dt>Steps</dt>
<dd>Are executed in parallel within a stage. </dd>
</dl>
## Enabling CI Pipelines
1. Select cluster from drop down.
2. Under tools menu select pipelines.
3. Follow instructions for setting up github auth on page.
## Creating CI Pipelines
1. Go to the project you want this pipeline to run in.
2. Select workloads from the top level Nav bar
3. Select pipelines from the secondary Nav bar
4. Click Add pipeline button.
5. Enter in your repository name (Autocomplete should help zero in on it quickly).
6. Select Branch options.
- Only the branch {BRANCH NAME}: Only events triggered by changes to this branch will be built.
- Everything but {BRANCH NAME}: Build any branch that triggered an event EXCEPT events from this branch.
- All branches: Regardless of the branch that triggered the event always build.
>**Note:** If you want one path for master, but another for PRs or development/test/feature branches, create two separate pipelines.
7. Select the build trigger events. By default, builds will only happen by manually clicking build now in Rancher UI.
- Automatically build this pipeline whenever there is a git commit. (This respects the branch selection above)
- Automatically build this pipeline whenever there is a new PR.
- Automatically build the pipeline. (Allows you to configure scheduled builds similar to Cron)
8. Click Add button.
By default, Rancher provides a three stage pipeline for you. It consists of a build stage where you would compile, unit test, and scan code. The publish stage has a single step to publish a docker image.
9. Add a name to the pipeline in order to complete adding a pipeline.
10. Click on the “run a script” box under the Build stage.
Here you can set the image, or select from pre-packaged envs.
11. Configure a shell script to run inside the container when building.
12. Click Save to persist the changes.
13. Click the “publish an image” box under the “Publish” stage.
14. Set the location of the Dockerfile. By default it looks in the root of the workspace. Instead, set the build context for building the image relative to the root of the workspace.
15. Set the image information.
The registry is the remote registry URL. It is defaulted to Docker Hub.
Repository is the `<org>/<repo>` in the repository.
16. Select the Tag. You can hard code a tag like latest or select from a list of available variables.
17. If this is the first time using this registry, you can add the username/password for pushing the image. You must click save for the registry credentials AND also save for the modal.
## Creating a New Stage
1. To add a new stage, click the add a new stage link in either create or edit mode of the pipeline view.
2. Provide a name for the stage.
3. Click save.
## Creating a New Step
1. Go to create / edit mode of the pipeline.
2. Click the “Add Step” button in the stage that you would like to add a step in.
3. Fill out the form as detailed above.
## Environment Variables
For your convenience the following environment variables are available in your build steps:
Variable Name | Description
------------------------|------------------------------------------------------------
CICD_GIT_REPO_NAME | Repository Name (Stripped of Github Organization)
CICD_PIPELINE_NAME | Name of the pipeline
CICD_GIT_BRANCH | Git branch of this event
CICD_TRIGGER_TYPE | Event that triggered the build
CICD_PIPELINE_ID | Rancher ID for the pipeline
CICD_GIT_URL | URL of the Git repository
CICD_EXECUTION_SEQUENCE | Build number of the pipeline
CICD_EXECUTION_ID | Combination of {CICD_PIPELINE_ID}-{CICD_EXECUTION_SEQUENCE}
CICD_GIT_COMMIT | Git commit ID being executed.
@@ -0,0 +1,50 @@
---
title: Pipelines Quick Start Guide
weight: 500
---
Rancher ships with several example repositories that you can use to familiarize yourself with pipelines. We recommend configuring and testing the example repository that most resembles your environment before using pipelines with your own repositories in a production environment. Use this example repository as a sandbox for repo configuration, build demonstration, etc. Rancher includes example repositories for:
- Go
- Maven
- PHP
## 1. Configure Repositories
By default, the example pipeline repositories are disabled. Enable one (or more) to test out the pipeline feature and see how it works.
1. From the context menu, open the project for which you want to run a pipeline.
1. From the main menu, select **Workloads**. Then select the **Pipelines** tab.
1. Click **Configure Repositories**.
**Step Result:** A list of example repositories displays.
>**Note:** Example repositories only display if you haven't fetched your own repos.
1. Click **Enable** for one of the example repos (e.g., `https://github.com/rancher/pipeline-example-go.git`). Then click **Done**.
**Results:**
- A pipeline is configured for the example repository, and it's added to the **Pipeline** tab.
- The following workloads are deployed to a new namespace:
- `docker-registry`
- `jenkins`
- `minio`
## 2. Run Example Pipeline
After configuring an example repository, run the pipeline to see how it works.
1. From the **Pipelines** tab, select **Ellipsis (...) > Run**.
>**Note:** When you run a pipeline the first time, it takes a few minutes to pull relevant images and provision necessary pipeline components.
To understand what the example pipeline is doing, select `Ellipsis (...) > Edit Config` for your repo. Alternatively, view the `.rancher-pipeline.yml` file in the example repositories.
**Result:** The pipeline runs. You can see the results in the logs.
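The example repositories' pipeline files use the same format covered in the configuration documentation. The sketch below suggests the general shape of such a file; the stage names, image, and commands are illustrative and not the exact contents of any example repo.

```yaml
# Illustrative shape of an example repo's .rancher-pipeline.yml
# (not the exact file contents).
stages:
- name: Build
  steps:
  - runScriptConfig:
      image: golang
      shellScript: go build && go test ./...
- name: Publish
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: pipeline-example-go:${CICD_EXECUTION_SEQUENCE}
```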
## What's Next?
For detailed information about setting up a pipeline in production, see the [Configuring Pipelines]({{< baseurl >}}/rancher/v2.x/en/tools/pipelines/configurations/).
@@ -0,0 +1,72 @@
---
title: Pipeline Variable Reference
weight: 8000
---
For your convenience, the following variables are available for your pipeline configuration scripts. During pipeline executions, these variables are replaced by metadata. You can reference them in the form of `${VAR_NAME}`.
Variable Name | Description
------------------------|------------------------------------------------------------
`CICD_GIT_REPO_NAME` | Repository name (Github organization omitted).
`CICD_GIT_URL` | URL of the Git repository.
`CICD_GIT_COMMIT` | Git commit ID being executed.
`CICD_GIT_BRANCH` | Git branch of this event.
`CICD_GIT_REF` | Git reference specification of this event.
`CICD_GIT_TAG` | Git tag name, set on tag event.
`CICD_EVENT` | Event that triggered the build (`push`, `pull_request` or `tag`).
`CICD_PIPELINE_ID` | Rancher ID for the pipeline.
`CICD_EXECUTION_SEQUENCE` | Build number of the pipeline.
`CICD_EXECUTION_ID` | Combination of `{CICD_PIPELINE_ID}-{CICD_EXECUTION_SEQUENCE}`.
`CICD_REGISTRY` | Address for the Docker registry for the previous publish image step, available in the Kubernetes manifest file of a `Deploy YAML` step.
`CICD_IMAGE` | Name of the image built from the previous publish image step, available in the Kubernetes manifest file of a `Deploy YAML` step. It does not contain the image tag.<br/><br/> [Example](https://github.com/rancher/pipeline-example-go/blob/master/deployment.yaml)
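For example, a manifest consumed by a `Deploy YAML` step can reference these variables. The sketch below is illustrative; see the linked example for a real file, and note that `CICD_IMAGE` carries no tag, so a tag variable is appended here by way of example.

```yaml
# Sketch of a Deploy YAML manifest using variable substitution
# (illustrative; CICD_IMAGE contains no tag, so one is appended).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: ${CICD_IMAGE}:${CICD_GIT_COMMIT}
```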
## Full `.rancher-pipeline.yml` Example
```yaml
# example
stages:
- name: Build something
  # Conditions for stages
  when:
    branch: master
    event: [ push, pull_request ]
  # Multiple steps run concurrently
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo ${FIRST_KEY} && echo ${ALIAS_ENV}
    # Set environment variables in container for the step
    env:
      FIRST_KEY: VALUE
      SECOND_KEY: VALUE2
    # Set environment variables from project secrets
    envFrom:
    - sourceName: my-secret
      sourceKey: secret-key
      targetKey: ALIAS_ENV
  - runScriptConfig:
      image: busybox
      shellScript: date -R
    # Conditions for steps
    when:
      branch: [ master, dev ]
      event: push
- name: Publish my image
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: rancher/rancher:v2.0.0
      # Optionally push to remote registry
      pushRemote: true
      registry: reg.example.com
- name: Deploy some workloads
  steps:
  - applyYamlConfig:
      path: ./deployment.yaml
# branch conditions for the pipeline
branch:
  include: [ master, feature/* ]
  exclude: [ dev ]
```
@@ -5,15 +5,9 @@ aliases:
- /rancher/v2.x/en/backups/rollbacks/
---
### Upgrading from Rancher 2.x.x
### Upgrading Rancher
Each new version of Rancher 2.x.x supports upgrades from previous versions of Rancher 2.x.x. This section will be updated as soon as the first release post 2.0 is available.
Complete one of the upgrade procedures below based on your Rancher installation:
- [Single Node Upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/single-node-upgrade)
- [High Availability Upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/ha-server-upgrade)
- [Air Gap Upgrade]({{< baseurl >}}/rancher/v2.x/en/upgrades/air-gap-upgrade)
- [Upgrades]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/)
### Rolling Back Unsuccessful Upgrades
@@ -21,11 +15,3 @@ In the event that your Rancher Server does not upgrade successfully, you can rol
- [Single-Node Rollbacks]({{< baseurl >}}/rancher/v2.x/en/upgrades/single-node-rollbacks)
- [High-Availability Rollbacks]({{< baseurl >}}/rancher/v2.x/en/upgrades/ha-server-rollbacks)
### Migrating from Rancher 1.6.x
Until Rancher 2.1 is released, migrating from Rancher 1.6.x to 2.x.x is not supported due to major code rewrites.
For the 2.1 release, we plan to release a tool that converts Rancher Compose to Kubernetes YAML. This tool will help our Cattle users migrate from Rancher 1.6.x to 2.x.x. However, we understand that there is a learning curve switching from Cattle to Kubernetes as you deploy new workloads. Therefore, this release will include a cheatsheet for those that enjoy Cattle's simplicity but want to quickly create those workloads in Kubernetes.
We will continue support for Rancher 1.6.x for a minimum of one year after the 2.1 release so that 1.6.x users can plan and complete migration.
@@ -11,7 +11,8 @@ This section contains information about how to upgrade your Rancher server to a
### Upgrading to an HA Helm Chart
- [Upgrading from an HA Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/)
- [Upgrade an HA Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/)
- [Upgrade an Air Gap HA Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm-airgap/)
- [Migrating from a RKE Add-On Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/)
### Upgrading an RKE Add-on Install
@@ -0,0 +1,64 @@
---
title: High Availability (HA) Upgrade - Air Gap
weight: 1021
---
The following instructions will guide you through upgrading a high-availability Rancher Server installed in an air gap environment.
## Prerequisites
- **Populate Images**
Follow the guide to [Prepare the Private Registry]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/prepare-private-reg/) with the images for the Rancher release you are upgrading to.
- **Backup your Rancher Cluster**
[Take a one-time snapshot]({{< baseurl >}}/rancher/v2.x/en/backups/backups/ha-backups/#option-b-one-time-snapshots)
of your Rancher Server cluster. You'll use the snapshot as a restoration point if something goes wrong during upgrade.
- **kubectl**
Follow the kubectl [configuration instructions]({{< baseurl >}}/rancher/v2.x/en/faq/kubectl) and confirm that you can connect to the Kubernetes cluster running Rancher server.
- **helm**
[Install or update](https://docs.helm.sh/using_helm/#installing-helm) Helm to the latest version.
## Upgrade Rancher
1. Update your local helm repo cache.
```
helm repo update
```
1. Fetch the latest `rancher-stable/rancher` chart.
This will pull down the chart and save it in the current directory as a `.tgz` file.
```plain
helm fetch rancher-stable/rancher
```
1. Render the upgrade template.
Use the same `--set` values you used for the install. Remember to set the `--is-upgrade` flag for `helm`. This will create a `rancher` directory with the Kubernetes manifest files.
```plain
helm template ./rancher-<version>.tgz --output-dir . --is-upgrade \
--name rancher --namespace cattle-system \
--set hostname=<RANCHER.YOURDOMAIN.COM> \
--set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher
```
1. Copy and apply the rendered manifests.
Copy the files to a server with access to the Rancher server cluster and apply the rendered templates.
```plain
kubectl -n cattle-system apply -R -f ./rancher
```
## Rolling Back
Should something go wrong, follow the [HA Rollback]({{< baseurl >}}/rancher/v2.x/en/upgrades/rollbacks/ha-server-rollbacks/) instructions to restore the snapshot you took before you performed the upgrade.
@@ -40,6 +40,8 @@ The following instructions will guide you through upgrading a high-availability
## Upgrade Rancher
> **Note:** For Air Gap installs see [Upgrading HA Rancher - Air Gap]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#upgrading-rancher)
1. Update your local helm repo cache.
```
@@ -1,5 +1,5 @@
---
title: Single Node Air Gap Upgrade
title: Single Node Upgrade - Air Gap
weight: 1011
aliases:
- /rancher/v2.x/en/upgrades/air-gap-upgrade/
@@ -0,0 +1,43 @@
When setting up an air gap environment, it may be useful to run RKE through a bastion server. This can be helpful if you want to keep your RKE config or SSH keys on your local machine. It requires that the bastion server is accessible from the outside world and has access to port 22 (SSH) on your air-gapped nodes:
    local RKE (via port 22 over internet) -> bastion
    bastion (via port 22 over internal network) -> airgap_node_1, airgap_node_2, airgap_node_3
To enable running RKE through a bastion server, add the following to your RKE YAML config:
```yaml
bastion_host:
address: 18.224.54.35 # public IP of the bastion server
user: ubuntu
port: 22
ssh_key_path: /path/to/ssh/key
```
Full example:
```yaml
bastion_host:
address: 18.224.54.35 # public IP of the bastion server
user: ubuntu
port: 22
ssh_key_path: /path/to/ssh/key
nodes:
- address: 172.31.6.15 # private IP of airgapped node
user: ubuntu
role: [ "controlplane", "etcd", "worker" ]
ssh_key_path: /path/to/ssh/key
- address: 172.31.12.84 # private IP of airgapped node
user: ubuntu
role: [ "controlplane", "etcd", "worker" ]
ssh_key_path: /path/to/ssh/key
- address: 172.31.15.78 # private IP of airgapped node
user: ubuntu
role: [ "controlplane", "etcd", "worker" ]
ssh_key_path: /path/to/ssh/key
private_registries:
- url: <registry url>
user: <username>
password: <password>
is_default: true
```
Running `rke up` will provision the Kubernetes cluster through the bastion server and produce the resulting kube_config file. Note, however, that because your nodes are not accessible via a public IP, the machine you run `kubectl` from in later steps must be able to reach your air-gapped nodes at the addresses provided. This may require moving the resulting kube_config file after it is created.
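A minimal sketch of that flow, assuming the RKE config above is saved as `rancher-cluster.yml` (RKE names the generated kubeconfig after the config file):
```plain
rke up --config rancher-cluster.yml

# Copy the generated kubeconfig to a machine that can reach the air-gapped nodes
scp kube_config_rancher-cluster.yml <USER>@<INTERNAL_HOST>:~/
kubectl --kubeconfig kube_config_rancher-cluster.yml get nodes
```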
@@ -0,0 +1,30 @@
#!/bin/bash
set -e
# Collect images for Air Gap/Private Registry install
# Requires:
# rke - https://rancher.com/docs/rke/v0.1.x/en/installation/
# helm - https://docs.helm.sh/using_helm/#installing-helm
# curl
# jq
echo "RKE Images"
rke config --system-images 2>/dev/null > tmp-images.txt
echo "Helm Tiller Image"
helm init --dry-run --debug | grep image: | awk '{print $2}' >> tmp-images.txt
echo "Rancher Images"
latest_url=$(curl -sS "https://api.github.com/repos/rancher/rancher/releases/latest" | jq -r '.assets[]|select(.name=="rancher-images.txt")|.browser_download_url')
curl -sSL "${latest_url}" >> tmp-images.txt
echo "Cert-Manager Image"
cm_repo=$(helm inspect values stable/cert-manager | grep repository: | awk '{print $2}')
cm_tag=$(helm inspect values stable/cert-manager | grep tag: | awk '{print $2}')
echo "${cm_repo}:${cm_tag}" >> tmp-images.txt
echo "Sort and uniq the images list"
cat tmp-images.txt | sort -u | uniq > images.txt
# cleanup tmp file
rm tmp-images.txt
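A usage sketch, assuming the script above is saved as `collect-images.sh` and that `rke`, `helm`, `curl`, and `jq` are installed:
```plain
chmod +x collect-images.sh
./collect-images.sh
wc -l images.txt   # sanity-check the resulting image list
```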
@@ -0,0 +1,40 @@
busybox
rancher/alertmanager-helper:v0.0.2
rancher/alpine-git:1.0.4
rancher/calico-cni:v3.1.1
rancher/calico-ctl:v2.0.0
rancher/calico-node:v3.1.1
rancher/cluster-proportional-autoscaler-amd64:1.0.0
rancher/coreos-etcd:v3.1.12
rancher/coreos-etcd:v3.2.18
rancher/coreos-flannel-cni:v0.2.0
rancher/coreos-flannel:v0.9.1
rancher/docker-elasticsearch-kubernetes:5.6.2
rancher/fluentd-helper:v0.1.2
rancher/fluentd:v0.1.10
rancher/hyperkube:v1.10.5-rancher1
rancher/hyperkube:v1.11.2-rancher1
rancher/hyperkube:v1.9.7-rancher2
rancher/jenkins-jenkins:2.107-slim
rancher/jenkins-jnlp-slave:3.10-1-alpine
rancher/jenkins-plugins-docker:17.12
rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.10
rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.7
rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.8
rancher/k8s-dns-kube-dns-amd64:1.14.10
rancher/k8s-dns-kube-dns-amd64:1.14.7
rancher/k8s-dns-kube-dns-amd64:1.14.8
rancher/k8s-dns-sidecar-amd64:1.14.10
rancher/k8s-dns-sidecar-amd64:1.14.7
rancher/k8s-dns-sidecar-amd64:1.14.8
rancher/kibana:5.6.4
rancher/log-aggregator:v0.1.3
rancher/metrics-server-amd64:v0.2.1
rancher/nginx-ingress-controller-defaultbackend:1.4
rancher/nginx-ingress-controller:0.16.2-rancher1
rancher/pause-amd64:3.0
rancher/pause-amd64:3.1
rancher/prom-alertmanager:v0.11.0
rancher/rke-tools:v0.1.13
rancher/rancher:v2.0.8
rancher/rancher-agent:v2.0.8
@@ -0,0 +1,93 @@
#!/bin/sh
if [ -z "$1" ]; then
echo "Usage: $0 [REGISTRY]"
exit 1
fi
set -e -x
REGISTRY=$1
docker load --input rancher-images.tar.gz
docker tag busybox ${REGISTRY}/busybox
docker push ${REGISTRY}/busybox
docker tag rancher/alertmanager-helper:v0.0.2 ${REGISTRY}/rancher/alertmanager-helper:v0.0.2
docker push ${REGISTRY}/rancher/alertmanager-helper:v0.0.2
docker tag rancher/alpine-git:1.0.4 ${REGISTRY}/rancher/alpine-git:1.0.4
docker push ${REGISTRY}/rancher/alpine-git:1.0.4
docker tag rancher/calico-cni:v3.1.1 ${REGISTRY}/rancher/calico-cni:v3.1.1
docker push ${REGISTRY}/rancher/calico-cni:v3.1.1
docker tag rancher/calico-ctl:v2.0.0 ${REGISTRY}/rancher/calico-ctl:v2.0.0
docker push ${REGISTRY}/rancher/calico-ctl:v2.0.0
docker tag rancher/calico-node:v3.1.1 ${REGISTRY}/rancher/calico-node:v3.1.1
docker push ${REGISTRY}/rancher/calico-node:v3.1.1
docker tag rancher/cluster-proportional-autoscaler-amd64:1.0.0 ${REGISTRY}/rancher/cluster-proportional-autoscaler-amd64:1.0.0
docker push ${REGISTRY}/rancher/cluster-proportional-autoscaler-amd64:1.0.0
docker tag rancher/coreos-etcd:v3.1.12 ${REGISTRY}/rancher/coreos-etcd:v3.1.12
docker push ${REGISTRY}/rancher/coreos-etcd:v3.1.12
docker tag rancher/coreos-etcd:v3.2.18 ${REGISTRY}/rancher/coreos-etcd:v3.2.18
docker push ${REGISTRY}/rancher/coreos-etcd:v3.2.18
docker tag rancher/coreos-flannel-cni:v0.2.0 ${REGISTRY}/rancher/coreos-flannel-cni:v0.2.0
docker push ${REGISTRY}/rancher/coreos-flannel-cni:v0.2.0
docker tag rancher/coreos-flannel:v0.9.1 ${REGISTRY}/rancher/coreos-flannel:v0.9.1
docker push ${REGISTRY}/rancher/coreos-flannel:v0.9.1
docker tag rancher/docker-elasticsearch-kubernetes:5.6.2 ${REGISTRY}/rancher/docker-elasticsearch-kubernetes:5.6.2
docker push ${REGISTRY}/rancher/docker-elasticsearch-kubernetes:5.6.2
docker tag rancher/fluentd-helper:v0.1.2 ${REGISTRY}/rancher/fluentd-helper:v0.1.2
docker push ${REGISTRY}/rancher/fluentd-helper:v0.1.2
docker tag rancher/fluentd:v0.1.10 ${REGISTRY}/rancher/fluentd:v0.1.10
docker push ${REGISTRY}/rancher/fluentd:v0.1.10
docker tag rancher/hyperkube:v1.10.5-rancher1 ${REGISTRY}/rancher/hyperkube:v1.10.5-rancher1
docker push ${REGISTRY}/rancher/hyperkube:v1.10.5-rancher1
docker tag rancher/hyperkube:v1.11.2-rancher1 ${REGISTRY}/rancher/hyperkube:v1.11.2-rancher1
docker push ${REGISTRY}/rancher/hyperkube:v1.11.2-rancher1
docker tag rancher/hyperkube:v1.9.7-rancher2 ${REGISTRY}/rancher/hyperkube:v1.9.7-rancher2
docker push ${REGISTRY}/rancher/hyperkube:v1.9.7-rancher2
docker tag rancher/jenkins-jenkins:2.107-slim ${REGISTRY}/rancher/jenkins-jenkins:2.107-slim
docker push ${REGISTRY}/rancher/jenkins-jenkins:2.107-slim
docker tag rancher/jenkins-jnlp-slave:3.10-1-alpine ${REGISTRY}/rancher/jenkins-jnlp-slave:3.10-1-alpine
docker push ${REGISTRY}/rancher/jenkins-jnlp-slave:3.10-1-alpine
docker tag rancher/jenkins-plugins-docker:17.12 ${REGISTRY}/rancher/jenkins-plugins-docker:17.12
docker push ${REGISTRY}/rancher/jenkins-plugins-docker:17.12
docker tag rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.10 ${REGISTRY}/rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.10
docker push ${REGISTRY}/rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.10
docker tag rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.7 ${REGISTRY}/rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.7
docker push ${REGISTRY}/rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.7
docker tag rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.8 ${REGISTRY}/rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker push ${REGISTRY}/rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag rancher/k8s-dns-kube-dns-amd64:1.14.10 ${REGISTRY}/rancher/k8s-dns-kube-dns-amd64:1.14.10
docker push ${REGISTRY}/rancher/k8s-dns-kube-dns-amd64:1.14.10
docker tag rancher/k8s-dns-kube-dns-amd64:1.14.7 ${REGISTRY}/rancher/k8s-dns-kube-dns-amd64:1.14.7
docker push ${REGISTRY}/rancher/k8s-dns-kube-dns-amd64:1.14.7
docker tag rancher/k8s-dns-kube-dns-amd64:1.14.8 ${REGISTRY}/rancher/k8s-dns-kube-dns-amd64:1.14.8
docker push ${REGISTRY}/rancher/k8s-dns-kube-dns-amd64:1.14.8
docker tag rancher/k8s-dns-sidecar-amd64:1.14.10 ${REGISTRY}/rancher/k8s-dns-sidecar-amd64:1.14.10
docker push ${REGISTRY}/rancher/k8s-dns-sidecar-amd64:1.14.10
docker tag rancher/k8s-dns-sidecar-amd64:1.14.7 ${REGISTRY}/rancher/k8s-dns-sidecar-amd64:1.14.7
docker push ${REGISTRY}/rancher/k8s-dns-sidecar-amd64:1.14.7
docker tag rancher/k8s-dns-sidecar-amd64:1.14.8 ${REGISTRY}/rancher/k8s-dns-sidecar-amd64:1.14.8
docker push ${REGISTRY}/rancher/k8s-dns-sidecar-amd64:1.14.8
docker tag rancher/kibana:5.6.4 ${REGISTRY}/rancher/kibana:5.6.4
docker push ${REGISTRY}/rancher/kibana:5.6.4
docker tag rancher/log-aggregator:v0.1.3 ${REGISTRY}/rancher/log-aggregator:v0.1.3
docker push ${REGISTRY}/rancher/log-aggregator:v0.1.3
docker tag rancher/metrics-server-amd64:v0.2.1 ${REGISTRY}/rancher/metrics-server-amd64:v0.2.1
docker push ${REGISTRY}/rancher/metrics-server-amd64:v0.2.1
docker tag rancher/nginx-ingress-controller-defaultbackend:1.4 ${REGISTRY}/rancher/nginx-ingress-controller-defaultbackend:1.4
docker push ${REGISTRY}/rancher/nginx-ingress-controller-defaultbackend:1.4
docker tag rancher/nginx-ingress-controller:0.16.2-rancher1 ${REGISTRY}/rancher/nginx-ingress-controller:0.16.2-rancher1
docker push ${REGISTRY}/rancher/nginx-ingress-controller:0.16.2-rancher1
docker tag rancher/pause-amd64:3.0 ${REGISTRY}/rancher/pause-amd64:3.0
docker push ${REGISTRY}/rancher/pause-amd64:3.0
docker tag rancher/pause-amd64:3.1 ${REGISTRY}/rancher/pause-amd64:3.1
docker push ${REGISTRY}/rancher/pause-amd64:3.1
docker tag rancher/prom-alertmanager:v0.11.0 ${REGISTRY}/rancher/prom-alertmanager:v0.11.0
docker push ${REGISTRY}/rancher/prom-alertmanager:v0.11.0
docker tag rancher/rke-tools:v0.1.13 ${REGISTRY}/rancher/rke-tools:v0.1.13
docker push ${REGISTRY}/rancher/rke-tools:v0.1.13
docker tag rancher/rancher:v2.0.8 ${REGISTRY}/rancher/rancher:v2.0.8
docker push ${REGISTRY}/rancher/rancher:v2.0.8
docker tag rancher/rancher-agent:v2.0.8 ${REGISTRY}/rancher/rancher-agent:v2.0.8
docker push ${REGISTRY}/rancher/rancher-agent:v2.0.8
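A usage sketch, assuming this script is saved as `rancher-load-images.sh` and run on a host that has `rancher-images.tar.gz` in the working directory and is authenticated to the private registry:
```plain
./rancher-load-images.sh <REGISTRY.YOURDOMAIN.COM:PORT>
```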
@@ -0,0 +1,44 @@
#!/bin/sh
set -e -x
docker pull busybox
docker pull rancher/alertmanager-helper:v0.0.2
docker pull rancher/alpine-git:1.0.4
docker pull rancher/calico-cni:v3.1.1
docker pull rancher/calico-ctl:v2.0.0
docker pull rancher/calico-node:v3.1.1
docker pull rancher/cluster-proportional-autoscaler-amd64:1.0.0
docker pull rancher/coreos-etcd:v3.1.12
docker pull rancher/coreos-etcd:v3.2.18
docker pull rancher/coreos-flannel-cni:v0.2.0
docker pull rancher/coreos-flannel:v0.9.1
docker pull rancher/docker-elasticsearch-kubernetes:5.6.2
docker pull rancher/fluentd-helper:v0.1.2
docker pull rancher/fluentd:v0.1.10
docker pull rancher/hyperkube:v1.10.5-rancher1
docker pull rancher/hyperkube:v1.11.2-rancher1
docker pull rancher/hyperkube:v1.9.7-rancher2
docker pull rancher/jenkins-jenkins:2.107-slim
docker pull rancher/jenkins-jnlp-slave:3.10-1-alpine
docker pull rancher/jenkins-plugins-docker:17.12
docker pull rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.10
docker pull rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.7
docker pull rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull rancher/k8s-dns-kube-dns-amd64:1.14.10
docker pull rancher/k8s-dns-kube-dns-amd64:1.14.7
docker pull rancher/k8s-dns-kube-dns-amd64:1.14.8
docker pull rancher/k8s-dns-sidecar-amd64:1.14.10
docker pull rancher/k8s-dns-sidecar-amd64:1.14.7
docker pull rancher/k8s-dns-sidecar-amd64:1.14.8
docker pull rancher/kibana:5.6.4
docker pull rancher/log-aggregator:v0.1.3
docker pull rancher/metrics-server-amd64:v0.2.1
docker pull rancher/nginx-ingress-controller-defaultbackend:1.4
docker pull rancher/nginx-ingress-controller:0.16.2-rancher1
docker pull rancher/pause-amd64:3.0
docker pull rancher/pause-amd64:3.1
docker pull rancher/prom-alertmanager:v0.11.0
docker pull rancher/rke-tools:v0.1.13
docker pull rancher/rancher:v2.0.8
docker pull rancher/rancher-agent:v2.0.8
docker save busybox rancher/alertmanager-helper:v0.0.2 rancher/alpine-git:1.0.4 rancher/calico-cni:v3.1.1 rancher/calico-ctl:v2.0.0 rancher/calico-node:v3.1.1 rancher/cluster-proportional-autoscaler-amd64:1.0.0 rancher/coreos-etcd:v3.1.12 rancher/coreos-etcd:v3.2.18 rancher/coreos-flannel-cni:v0.2.0 rancher/coreos-flannel:v0.9.1 rancher/docker-elasticsearch-kubernetes:5.6.2 rancher/fluentd-helper:v0.1.2 rancher/fluentd:v0.1.10 rancher/hyperkube:v1.10.5-rancher1 rancher/hyperkube:v1.11.2-rancher1 rancher/hyperkube:v1.9.7-rancher2 rancher/jenkins-jenkins:2.107-slim rancher/jenkins-jnlp-slave:3.10-1-alpine rancher/jenkins-plugins-docker:17.12 rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.10 rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.7 rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.8 rancher/k8s-dns-kube-dns-amd64:1.14.10 rancher/k8s-dns-kube-dns-amd64:1.14.7 rancher/k8s-dns-kube-dns-amd64:1.14.8 rancher/k8s-dns-sidecar-amd64:1.14.10 rancher/k8s-dns-sidecar-amd64:1.14.7 rancher/k8s-dns-sidecar-amd64:1.14.8 rancher/kibana:5.6.4 rancher/log-aggregator:v0.1.3 rancher/metrics-server-amd64:v0.2.1 rancher/nginx-ingress-controller-defaultbackend:1.4 rancher/nginx-ingress-controller:0.16.2-rancher1 rancher/pause-amd64:3.0 rancher/pause-amd64:3.1 rancher/prom-alertmanager:v0.11.0 rancher/rke-tools:v0.1.13 rancher/rancher:v2.0.8 rancher/rancher-agent:v2.0.8 | gzip -c > rancher-images.tar.gz
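And the matching sketch for the save side, assuming this script is saved as `rancher-save-images.sh` on a machine with internet access:
```plain
./rancher-save-images.sh
# Then transfer rancher-images.tar.gz into the air gap environment
```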
(The remaining changes in this commit update draw.io diagram sources and add, replace, or remove several binary image files; their contents are not shown.)