Merge branch 'staging'

This commit is contained in:
Denise Schannon
2018-10-18 13:46:55 -07:00
55 changed files with 622 additions and 431 deletions
@@ -35,3 +35,4 @@ weight: 303
| [CVE-2018-8897](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-8897) | A statement in the System Programming Guide of the Intel 64 and IA-32 Architectures Software Developer's Manual (SDM) was mishandled in the development of some or all operating-system kernels, resulting in unexpected behavior for #DB exceptions that are deferred by MOV SS or POP SS, as demonstrated by (for example) privilege escalation in Windows, macOS, some Xen configurations, or FreeBSD, or a Linux kernel crash. | 31 May 2018 | [RancherOS v1.4.0](https://github.com/rancher/os/releases/tag/v1.4.0) using Linux v4.14.32 |
| [L1 Terminal Fault](https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html) | L1 Terminal Fault is a hardware vulnerability which allows unprivileged speculative access to data which is available in the Level 1 Data Cache when the page table entry controlling the virtual address, which is used for the access, has the Present bit cleared or other reserved bits set. | 19 Sep 2018 | [RancherOS v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) using Linux v4.14.67 |
| [CVE-2018-3639](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3639) | Systems with microprocessors utilizing speculative execution and speculative execution of memory reads before the addresses of all prior memory writes are known may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis, aka Speculative Store Bypass (SSB), Variant 4. | 19 Sep 2018 | [RancherOS v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) using Linux v4.14.67 |
| [CVE-2018-17182](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-17182) | The vmacache_flush_all function in mm/vmacache.c mishandles sequence number overflows. An attacker can trigger a use-after-free (and possibly gain privileges) via certain thread creation, map, unmap, invalidation, and dereference operations. | 18 Oct 2018 | [RancherOS v1.4.2](https://github.com/rancher/os/releases/tag/v1.4.2) using Linux v4.14.73 |
@@ -58,22 +58,22 @@ rancher:
### Amazon ECS enabled AMIs
Latest Release: [v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1)
Latest Release: [v1.4.2](https://github.com/rancher/os/releases/tag/v1.4.2)
Region | Type | AMI
---|--- | ---
ap-south-1 | HVM - ECS enabled | [ami-0c095bd65873104ea](https://ap-south-1.console.aws.amazon.com/ec2/home?region=ap-south-1#launchInstanceWizard:ami=ami-0c095bd65873104ea)
eu-west-3 | HVM - ECS enabled | [ami-0a9420a7b9a46517b](https://eu-west-3.console.aws.amazon.com/ec2/home?region=eu-west-3#launchInstanceWizard:ami=ami-0a9420a7b9a46517b)
eu-west-2 | HVM - ECS enabled | [ami-09f7882ec876661f9](https://eu-west-2.console.aws.amazon.com/ec2/home?region=eu-west-2#launchInstanceWizard:ami=ami-09f7882ec876661f9)
eu-west-1 | HVM - ECS enabled | [ami-0dd35c5333b908688](https://eu-west-1.console.aws.amazon.com/ec2/home?region=eu-west-1#launchInstanceWizard:ami=ami-0dd35c5333b908688)
ap-northeast-2 | HVM - ECS enabled | [ami-0272129f9db7717d1](https://ap-northeast-2.console.aws.amazon.com/ec2/home?region=ap-northeast-2#launchInstanceWizard:ami=ami-0272129f9db7717d1)
ap-northeast-1 | HVM - ECS enabled | [ami-0cc3f7df2e7cac07a](https://ap-northeast-1.console.aws.amazon.com/ec2/home?region=ap-northeast-1#launchInstanceWizard:ami=ami-0cc3f7df2e7cac07a)
sa-east-1 | HVM - ECS enabled | [ami-0b8bc2a235e2ba0b8](https://sa-east-1.console.aws.amazon.com/ec2/home?region=sa-east-1#launchInstanceWizard:ami=ami-0b8bc2a235e2ba0b8)
ca-central-1 | HVM - ECS enabled | [ami-0834633a15bc44f0c](https://ca-central-1.console.aws.amazon.com/ec2/home?region=ca-central-1#launchInstanceWizard:ami=ami-0834633a15bc44f0c)
ap-southeast-1 | HVM - ECS enabled | [ami-076072ffb77b9e9c7](https://ap-southeast-1.console.aws.amazon.com/ec2/home?region=ap-southeast-1#launchInstanceWizard:ami=ami-076072ffb77b9e9c7)
ap-southeast-2 | HVM - ECS enabled | [ami-0b39a6595e83e016d](https://ap-southeast-2.console.aws.amazon.com/ec2/home?region=ap-southeast-2#launchInstanceWizard:ami=ami-0b39a6595e83e016d)
eu-central-1 | HVM - ECS enabled | [ami-0a8b8e376349bd511](https://eu-central-1.console.aws.amazon.com/ec2/home?region=eu-central-1#launchInstanceWizard:ami=ami-0a8b8e376349bd511)
us-east-1 | HVM - ECS enabled | [ami-0683608046ab95a13](https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#launchInstanceWizard:ami=ami-0683608046ab95a13)
us-east-2 | HVM - ECS enabled | [ami-0d6a98791e2f98a13](https://us-east-2.console.aws.amazon.com/ec2/home?region=us-east-2#launchInstanceWizard:ami=ami-0d6a98791e2f98a13)
us-west-1 | HVM - ECS enabled | [ami-0880d73d3ea92c89c](https://us-west-1.console.aws.amazon.com/ec2/home?region=us-west-1#launchInstanceWizard:ami=ami-0880d73d3ea92c89c)
us-west-2 | HVM - ECS enabled | [ami-0626403624bc30288](https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#launchInstanceWizard:ami=ami-0626403624bc30288)
ap-south-1 | HVM - ECS enabled | [ami-0721722dd0f0a6b54](https://ap-south-1.console.aws.amazon.com/ec2/home?region=ap-south-1#launchInstanceWizard:ami=ami-0721722dd0f0a6b54)
eu-west-3 | HVM - ECS enabled | [ami-017eb997502d38415](https://eu-west-3.console.aws.amazon.com/ec2/home?region=eu-west-3#launchInstanceWizard:ami=ami-017eb997502d38415)
eu-west-2 | HVM - ECS enabled | [ami-08772e5a96934e3e5](https://eu-west-2.console.aws.amazon.com/ec2/home?region=eu-west-2#launchInstanceWizard:ami=ami-08772e5a96934e3e5)
eu-west-1 | HVM - ECS enabled | [ami-089bd570fab84ab89](https://eu-west-1.console.aws.amazon.com/ec2/home?region=eu-west-1#launchInstanceWizard:ami=ami-089bd570fab84ab89)
ap-northeast-2 | HVM - ECS enabled | [ami-0420afe0617d4f723](https://ap-northeast-2.console.aws.amazon.com/ec2/home?region=ap-northeast-2#launchInstanceWizard:ami=ami-0420afe0617d4f723)
ap-northeast-1 | HVM - ECS enabled | [ami-05bee9d87b6af1f5c](https://ap-northeast-1.console.aws.amazon.com/ec2/home?region=ap-northeast-1#launchInstanceWizard:ami=ami-05bee9d87b6af1f5c)
sa-east-1 | HVM - ECS enabled | [ami-0bc2d9e3a0c98158c](https://sa-east-1.console.aws.amazon.com/ec2/home?region=sa-east-1#launchInstanceWizard:ami=ami-0bc2d9e3a0c98158c)
ca-central-1 | HVM - ECS enabled | [ami-0c09398512d4ba6b9](https://ca-central-1.console.aws.amazon.com/ec2/home?region=ca-central-1#launchInstanceWizard:ami=ami-0c09398512d4ba6b9)
ap-southeast-1 | HVM - ECS enabled | [ami-0ffa715a6bb9373de](https://ap-southeast-1.console.aws.amazon.com/ec2/home?region=ap-southeast-1#launchInstanceWizard:ami=ami-0ffa715a6bb9373de)
ap-southeast-2 | HVM - ECS enabled | [ami-03cb7478f257c6490](https://ap-southeast-2.console.aws.amazon.com/ec2/home?region=ap-southeast-2#launchInstanceWizard:ami=ami-03cb7478f257c6490)
eu-central-1 | HVM - ECS enabled | [ami-029b85c9d234c4f43](https://eu-central-1.console.aws.amazon.com/ec2/home?region=eu-central-1#launchInstanceWizard:ami=ami-029b85c9d234c4f43)
us-east-1 | HVM - ECS enabled | [ami-0f274b6c9410c73ed](https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#launchInstanceWizard:ami=ami-0f274b6c9410c73ed)
us-east-2 | HVM - ECS enabled | [ami-0cae94276614142ef](https://us-east-2.console.aws.amazon.com/ec2/home?region=us-east-2#launchInstanceWizard:ami=ami-0cae94276614142ef)
us-west-1 | HVM - ECS enabled | [ami-03f86e5bb88269702](https://us-west-1.console.aws.amazon.com/ec2/home?region=us-west-1#launchInstanceWizard:ami=ami-03f86e5bb88269702)
us-west-2 | HVM - ECS enabled | [ami-01bde5d57c4d043ad](https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#launchInstanceWizard:ami=ami-01bde5d57c4d043ad)
@@ -37,7 +37,7 @@ $ sudo ros install -d /dev/sda --append "rancheros.autologin=tty1"
_Available as of v1.1_
RancherOS v1.1.0 added a Syslinux boot menu, which allows you to temporarily edit the boot paramters, or to select "Debug logging", "Autologin", both "Debug logging & Autologin" and "Recovery Console".
RancherOS v1.1.0 added a Syslinux boot menu, which allows you to temporarily edit the boot parameters, or to select "Debug logging", "Autologin", both "Debug logging & Autologin" and "Recovery Console".
On desktop systems the Syslinux boot menu can be switched to graphical mode by adding `UI vesamenu.c32` to a new line in `global.cfg` (use `sudo ros config syslinux` to edit the file).
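A rough sketch of that change (the rest of `global.cfg` varies by RancherOS version, so only the added line is shown):

```
# open the Syslinux configuration in an editor
$ sudo ros config syslinux

# add this line to global.cfg to switch the boot menu to graphical mode
UI vesamenu.c32
```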
@@ -39,3 +39,5 @@ write_files:
restrict 127.0.0.1
restrict [::1]
```
> **Note:** Currently, writing files to a specific system service is only supported for RancherOS's built-in services. You are unable to write files to any custom system services.
@@ -11,7 +11,7 @@ mounts:
- ["/dev/vdb", "/mnt/s", "ext4", ""]
```
**Important**: Be aware, the 4th parameter is mandatory and cannot be ommited (server crashes). It also yet cannot be `defaults`
**Important**: Be aware that the 4th parameter is mandatory and cannot be omitted (the server crashes otherwise). It also cannot yet be `defaults`
As you will most likely use the `ros` CLI, it would look like this:
@@ -38,7 +38,7 @@ Rancher ships with two default Pod Security Policies (PSPs): the `restricted` an
- `unrestricted`
This policy is equivilent to running Kubernetes with the PSP controller disabled. It has no restrictions on what pods can be deployed into a cluster or project.
This policy is equivalent to running Kubernetes with the PSP controller disabled. It has no restrictions on what pods can be deployed into a cluster or project.
## Creating Pod Security Policies
@@ -9,26 +9,16 @@ This procedure describes how to use RKE to restore a snapshot of the Rancher Kub
## Restore Outline
1. [Preparation](#1-preparation)
<!-- TOC -->
Install utilities and create new or clean existing nodes to prepare for restore.
2. [Place Snapshot and PKI Bundle](#2-place-snapshot-and-pki-bundle)
Pick a node and place snapshot `.db` and `pki.bundle.tar.gz` files.
3. [Configure RKE](#3-configure-rke)
Configure RKE `cluster.yml`. Remove `addons:` section and point configuration to the clean nodes.
4. [Restore Database](#4-restore-database)
Run RKE command to restore the `etcd` database to a single node.
5. [Bring Up the Cluster](#5-bring-up-the-cluster)
Run RKE commands to bring up cluster one a single node. Clean up old nodes. Verify and add additional nodes.
- [1. Preparation](#1-preparation)
- [2. Place Snapshot and PKI Bundle](#2-place-snapshot-and-pki-bundle)
- [3. Configure RKE](#3-configure-rke)
- [4. Restore Database](#4-restore-database)
- [5. Bring Up the Cluster](#5-bring-up-the-cluster)
<!-- /TOC -->
<br/>
### 1. Preparation
@@ -17,10 +17,12 @@ RKE launched clusters are separated into two categories:
- [Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/):
For use cases where you want to provision bare-metal servers, on-premise virtual machines, or bring virtual machines that are already exist in a cloud provider. With this option, you will run a Rancher agent Docker container on the machine.
For use cases where you want to provision bare-metal servers, on-premise virtual machines, or bring virtual machines that already exist in a cloud provider. With this option, you will run a Rancher agent Docker container on the machine.
>**Note:** If you want to reuse a node from a previous custom cluster, [clean the node]({{< baseurl >}}/rancher/v2.x/en/admin-settings/removing-rancher/rancher-cluster-nodes/) before using it in a cluster again. If you reuse a node that hasn't been cleaned, cluster provisioning may fail.
<br/>
### Requirements
If you use RKE to set up a cluster, your cluster nodes must meet our [Requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements).
If you use RKE to set up a cluster, your cluster nodes must meet our [Requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements).
@@ -34,9 +34,10 @@ Begin creation of a custom cluster by provisioning a Linux host. Your host can b
- An on-premise VM
- A bare-metal server
>**Bare-Metal Server Note:**
>**Notes:**
>
While creating your cluster, you must assign Kubernetes roles to your cluster nodes. If you plan on dedicating bare-metal servers to each role, you must provision a bare-metal server for each role (i.e. provision multiple bare-metal servers).
>- While creating your cluster, you must assign Kubernetes roles to your cluster nodes. If you plan on dedicating bare-metal servers to each role, you must provision a bare-metal server for each role (i.e. provision multiple bare-metal servers).
>- If you want to reuse a node from a previous custom cluster, [clean the node]({{< baseurl >}}/rancher/v2.x/en/admin-settings/removing-rancher/rancher-cluster-nodes/) before using it in a cluster again. If you reuse a node that hasn't been cleaned, cluster provisioning may fail.
Provision the host according to the requirements below.
@@ -15,6 +15,7 @@ Use {{< product >}} to create a Kubernetes cluster in Amazon EC2.
- [Example IAM Policy with PassRole](#example-iam-policy-with-passrole) (needed if you want to use [Kubernetes Cloud Provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/cloud-providers) or want to pass an IAM Profile to an instance)
- IAM Policy added as Permission to the user. See [Amazon Documentation: Adding Permissions to a User (Console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) for how to attach it to a user.
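If you prefer the AWS CLI over the console for this step, attaching a managed policy to a user looks roughly like the following sketch (the account ID, policy name, and user name are placeholders, not values from this guide):

```
aws iam attach-user-policy \
  --user-name <YOUR_IAM_USER> \
  --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/<YOUR_RANCHER_POLICY>
```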
## Create the cluster
1. From the **Clusters** page, click **Add Cluster**.
@@ -17,6 +17,9 @@ When creating a vSphere cluster, Rancher first provisions the specified amount o
## Prerequisites
Before proceeding to create a cluster, you must ensure that you have a vSphere user with sufficient permissions. If you are planning to make use of vSphere volumes for persistent storage in the cluster, there are [additional requirements]({{< baseurl >}}/rke/v0.1.x/en/config-options/cloud-providers/vsphere/) that must be met.
## Provisioning a vSphere Cluster
The following steps create a role with the required privileges and then assign it to a new user in the vSphere console:
1. From the **vSphere** console, go to the **Administration** page.
@@ -79,7 +79,7 @@ Please follow this checklist when filing an issue which will helps us investigat
- Docker daemon logging (these might not all exist, depending on operating system)
- `/var/log/docker.log`
If you are experiencing performance issues, please provide as much of data (files or screenshots) of metrics which can help determing what is going on. If you have an issue related to a machine, it helps to supply output of `top`, `free -m`, `df` which shows processes/memory/disk usage.
If you are experiencing performance issues, please provide as much data (files or screenshots) of metrics as possible to help determine what is going on. If you have an issue related to a machine, it helps to supply the output of `top`, `free -m`, and `df`, which shows process/memory/disk usage.
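For example, you could capture those outputs to files and attach them to the issue (a simple sketch; adjust as needed):

```
top -b -n 1 > top.txt
free -m > free.txt
df -h > df.txt
```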
### Docs
@@ -13,7 +13,7 @@ See [Technical FAQ]({{< baseurl >}}/rancher/v2.x/en/faq/technical/), for frequen
#### What does it mean when you say Rancher v2.0 is built on Kubernetes?
Rancher v2.0 is a complete container management platform built on 100% on Kubernetes leveraging its Custom Resource and Controller framework. All features are written as a CustomResourceDefinition (CRD) which extends the existing Kubernetes API and can leverage native features such as RBAC.
Rancher v2.0 is a complete container management platform built 100% on Kubernetes leveraging its Custom Resource and Controller framework. All features are written as a CustomResourceDefinition (CRD) which extends the existing Kubernetes API and can leverage native features such as RBAC.
#### Do you plan to implement upstream Kubernetes, or continue to work on your own fork?
@@ -119,3 +119,7 @@ A node is required to have a static IP configured (or a reserved IP via DHCP). I
When the IP address of the node changes, Rancher loses connection to the node and will be unable to clean the node properly. See [Cleaning cluster nodes]({{< baseurl >}}/rancher/v2.x/en/faq/cleaning-cluster-nodes/) to clean the node.
When the node is removed from the cluster and the node is cleaned, you can re-add the node to the cluster.
### How can I add additional arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster?
You can add additional arguments/binds/environment variables via the [Config File]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#config-file) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables]({{< baseurl >}}/rke/v0.1.x/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls]({{< baseurl >}}/rke/v0.1.x/en/example-yamls/).
@@ -8,7 +8,7 @@ In environments where security is high priority, you can set up Rancher in an ai
- Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing Docker images to your machine. If you need help with creating a private registry, please refer to the [Docker documentation](https://docs.docker.com/registry/).
For each Rancher [release](https://github.com/rancher/rancher/releases), we provide the Docker images and scripts needed to mirror these images to your own registry. The Docker images are used when installing Rancher in a HA setup, when provisioning a cluster where Rancher is launching Kubernetes, or when you enable features like pipelines or logging.
For each Rancher [release](https://github.com/rancher/rancher/releases), we provide the Docker images and scripts needed to mirror these images to your own registry. The Docker images are used when installing Rancher in an HA setup, when provisioning a cluster where Rancher is launching Kubernetes, or when you enable features like pipelines or logging.
- **Installation Option:** Before beginning your air gap installation, choose whether you want a [single-node install]({{< baseurl >}}/rancher/v2.x/en/installation/single-node) or a [high availability install]({{< baseurl >}}/rancher/v2.x/en/installation/ha). View your chosen configuration's introduction notes along with Rancher's [node requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements).
@@ -64,7 +64,7 @@ Instead of installing the `tiller` agent on the cluster, render the installs on
### Initialize Helm Locally
Skip the [Initialize Helm (Install Tiller)]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/#helm-init) and initialize `helm` locally on a system that has internet access.
Skip the [Initialize Helm (Install Tiller)]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/) and initialize `helm` locally on a system that has internet access.
```plain
helm init -c
@@ -80,9 +80,9 @@ Fetch and render the `helm` charts on a system that has internet access.
#### Cert-Manager
If you are installing Rancher with Rancher Self-Signed certificates you will need to install 'cert-manager' on your cluster. If you are installing your own certificates you may skip this section.
If you are installing Rancher with Rancher self-signed certificates you will need to install 'cert-manager' on your cluster. If you are installing your own certificates you may skip this section.
Fetch the latest `stable/cert-manager` chart. This will pull down the chart and save it in the current directory as a `.tgz` file.
Fetch the latest `cert-manager` chart from the [official Helm catalog](https://github.com/helm/charts/tree/master/stable).
```plain
helm fetch stable/cert-manager
@@ -98,16 +98,16 @@ helm template ./cert-manager-<version>.tgz --output-dir . \
#### Rancher
Install the Rancher chart repo.
Install the Rancher chart repo. Replace `<CHART_REPO>` with the [repository that you're using]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#rancher-chart-repositories) ('latest' or 'stable').
```plain
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
```
Fetch the latest `rancher-stable/rancher` chart. This will pull down the chart and save it in the current directory as a `.tgz` file.
Fetch the latest Rancher chart. This will pull down the chart and save it in the current directory as a `.tgz` file. Replace `<CHART_REPO>` with the repo you're using (`latest` or `stable`).
```plain
helm fetch rancher-stable/rancher
helm fetch rancher-<CHART_REPO>/rancher
```
Render the template with the options you would use to install the chart. See [Install Rancher]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/) for details on the various options. Remember to set the `rancherImage` option to pull the image from your private registry. This will create a `rancher` directory with the Kubernetes manifest files.
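As a sketch, a render might look like the following (the chart file name, hostname, and registry below are placeholders; substitute the options that match your environment):

```plain
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set rancherImage=<REGISTRY.DOMAIN:PORT>/rancher/rancher
```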
@@ -7,14 +7,14 @@ For production environments, we recommend installing Rancher in a high-availabil
This procedure walks you through setting up a 3-node cluster with RKE and installing the Rancher chart with the Helm package manager.
> **Note:** For the best performance, we recommend this Kubernetes cluster be dedicated only to the Rancher workload.
> **Important:** For the best performance, we recommend that this Kubernetes cluster be dedicated only to running Rancher.
## Recommended Architecture
* DNS for Rancher should resolve to a layer 4 load balancer
* The Load Balancer should forward ports 80 and 443 TCP to all 3 nodes in the Kubernetes cluster.
* The Ingress controller will redirect http port 80 to https and terminate SSL/TLS on port 443.
* The Ingress controller will forward traffic to port 80 on the pod in the Rancher deployment.
* The Load Balancer should forward ports TCP/80 and TCP/443 to all 3 nodes in the Kubernetes cluster.
* The Ingress controller will redirect HTTP to HTTPS and terminate SSL/TLS on port TCP/443.
* The Ingress controller will forward traffic to port TCP/80 on the pod in the Rancher deployment.
<sup>HA Rancher install with layer 4 load balancer, depicting SSL termination at ingress controllers</sup>
![Rancher HA]({{< baseurl >}}/img/rancher/ha/rancher2ha.svg)
@@ -36,7 +36,7 @@ The following CLI tools are required for this install. Please make sure these to
## Additional Install Options
* [Migrating from RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/)
* [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/)
## Previous Methods
@@ -44,6 +44,6 @@ The following CLI tools are required for this install. Please make sure these to
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
* [RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/installation/ha/rke-add-on/)
@@ -7,15 +7,15 @@ Use your provider of choice to provision 3 nodes and a Load Balancer endpoint fo
> **Note:** These nodes must be in the same region/datacenter. You may place these servers in separate availability zones.
**Don't forget to collect the SSH credentials and DNS or IP addresses of your nodes to provide to RKE in the next step.**
### Node Requirements
### Host Requirements
View the supported operating systems and hardware/software/networking requirements for nodes running Rancher at [Node Requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements).
View the requirements for nodes hosting Rancher at [Requirements]({{< baseurl >}}/rancher/v2.x/en/installation/requirements).
View the OS requirements for RKE at [RKE Requirements]({{< baseurl >}}/rke/v0.1.x/en/os/)
### Load Balancer
RKE will configure an ingress-controller pod, on each of your nodes. The ingress-controller pods are bound to ports 80 and 443 TCP on the host network and are the entry point for HTTPS traffic to the Rancher server.
RKE will configure an Ingress controller pod on each of your nodes. The Ingress controller pods are bound to ports TCP/80 and TCP/443 on the host network and are the entry point for HTTPS traffic to the Rancher server.
Configure a load balancer as a basic Layer 4 TCP forwarder. The exact configuration will vary depending on your environment.
@@ -23,6 +23,4 @@ Configure a load balancer as a basic Layer 4 TCP forwarder. The exact configurat
* [Amazon NLB]({{< baseurl >}}/rancher/v2.x/en/installation/ha/create-nodes-lb/nlb/)
<br/>
### [Next: Install Kubernetes with RKE]({{< baseurl >}}/rancher/v2.x/en/installation/ha/kubernetes-rke/)
### [Next: Install Kubernetes with RKE]({{< baseurl >}}/rancher/v2.x/en/installation/ha/kubernetes-rke/)
@@ -3,17 +3,26 @@ title: 3 - Initialize Helm (Install tiller)
weight: 195
---
Helm is the package management tool of choice for Kubernetes. Helm "charts" provide templating syntax for Kubernetes YAML manifest documents. With Helm we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at [https://helm.sh/](https://helm.sh/).
Helm is the package management tool of choice for Kubernetes. Helm charts provide templating syntax for Kubernetes YAML manifest documents. With Helm we can create configurable deployments instead of just using static files. For more information about creating your own catalog of deployments, check out the docs at [https://helm.sh/](https://helm.sh/). To be able to use Helm, the server-side component `tiller` needs to be installed on your cluster.
> **Note:** For systems without direct internet access see [Helm - Air Gap]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#helm) for install details.
### Initialize Helm on the Cluster
### Install Tiller on the Cluster
Helm installs the `tiller` service on your cluster to manage charts. Since RKE enables RBAC by default we will need to use `kubectl` to create a `serviceaccount` and `clusterrolebinding` so `tiller` has permission to deploy to the cluster.
* Create the `ServiceAccount` in the `kube-system` namespace.
* Create the `ClusterRoleBinding` to give the `tiller` account access to the cluster.
* Create the `ClusterRoleBinding` to give the `tiller` service account access to the cluster.
* Finally use `helm` to install the `tiller` service
```plain
kubectl -n kube-system create serviceaccount tiller
@@ -27,6 +36,24 @@ helm init --service-account tiller
> **Note:** This `tiller` install has full cluster access, which should be acceptable if the cluster is dedicated to Rancher server. Check out the [helm docs](https://docs.helm.sh/using_helm/#role-based-access-control) for restricting `tiller` access to suit your security requirements.
### Test your Tiller installation
Run the following command to verify the installation of `tiller` on your cluster:
```
kubectl -n kube-system rollout status deploy/tiller-deploy
Waiting for deployment "tiller-deploy" rollout to finish: 0 of 1 updated replicas are available...
deployment "tiller-deploy" successfully rolled out
```
And run the following command to validate Helm can talk to the `tiller` service:
```
helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
```
### Issues or errors?
See the [Troubleshooting]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/troubleshooting/) page.
@@ -20,4 +20,4 @@ helm version --server
Error: could not find tiller
```
When you have confirmed that `tiller` has been removed, please follow the steps provided in [Initialize Helm on the cluster]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/#initialize-helm-on-the-cluster) to install `tiller` with the correct `ServiceAccount`.
When you have confirmed that `tiller` has been removed, please follow the steps provided in [Initialize Helm (Install tiller)]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/) to install `tiller` with the correct `ServiceAccount`.
@@ -9,12 +9,17 @@ Rancher installation is now managed using the Helm package manager for Kubernete
### Add the Chart Repo
Use `helm repo add` to add the Rancher chart repository.
Use the `helm repo add` command to add the Rancher chart repository.
Replace `<CHART_REPO>` with the chart repository that you want to use (either `latest` or `stable`).
>**Note:** For more information about each repository and which is best for your use case, see [Choosing a Version of Rancher: Rancher Chart Repositories]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#rancher-chart-repositories).
```
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
```
## Chart Versioning Notes
Up until the initial helm chart release for v2.1.0, the helm chart version matched the Rancher version (i.e `appVersion`).
@@ -25,7 +30,7 @@ Run `helm search rancher` to view which Rancher version will be launched for the
```
NAME CHART VERSION APP VERSION DESCRIPTION
rancher-stable/rancher 2018.10.1 v2.1.0 Install Rancher Server to manage Kubernetes clusters acro...
rancher-latest/rancher 2018.10.1 v2.1.0 Install Rancher Server to manage Kubernetes clusters acro...
```
### Install cert-manager
@@ -34,7 +39,8 @@ rancher-stable/rancher 2018.10.1 v2.1.0 Install Rancher Serve
Rancher relies on [cert-manager](https://github.com/kubernetes/charts/tree/master/stable/cert-manager) from the Kubernetes Helm stable catalog to issue self-signed or LetsEncrypt certificates.
Install `cert-manager` from the Helm stable catalog.
Install `cert-manager` from the [official Helm catalog](https://github.com/helm/charts/tree/master/stable).
```
helm install stable/cert-manager \
@@ -58,12 +64,12 @@ There are three options for the source of the certificate.
The default is for Rancher to generate a CA and use the `cert-manager` to issue the certificate for access to the Rancher server interface.
The only requirement is to set the `hostname` to the DNS name you pointed at your load balancer.
The only requirement is to set the `hostname` to the DNS name you pointed at your load balancer. Replace `<CHART_REPO>` with the repository that you configured in [Add the Chart Repo](#add-the-chart-repo) (`latest` or `stable`).
>**Using Air Gap?** [Set the `rancherImage` option]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#install-rancher-using-private-registry) in your command, pointing toward your private registry.
```
helm install rancher-stable/rancher \
helm install rancher-<CHART_REPO>/rancher \
--name rancher \
--namespace cattle-system \
--set hostname=rancher.my.org
@@ -73,12 +79,12 @@ helm install rancher-stable/rancher \
Use [LetsEncrypt](https://letsencrypt.org/)'s free service to issue trusted SSL certs. This configuration uses http validation so the Load Balancer must have a Public DNS record and be accessible from the internet.
Set `hostname`, `ingress.tls.source=letsEncrypt` and LetsEncrypt options.
Set `hostname`, `ingress.tls.source=letsEncrypt` and LetsEncrypt options. Replace `<CHART_REPO>` with the repository that you configured in [Add the Chart Repo](#add-the-chart-repo) (`latest` or `stable`).
>**Using Air Gap?** [Set the `rancherImage` option]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#install-rancher-using-private-registry) in your command, pointing toward your private registry.
```
helm install rancher-stable/rancher \
helm install rancher-<CHART_REPO>/rancher \
--name rancher \
--namespace cattle-system \
--set hostname=rancher.my.org \
@@ -92,12 +98,12 @@ Create Kubernetes secrets from your own certificates for Rancher to use.
> **Note:** The common name for the cert will need to match the `hostname` option or the ingress controller will fail to provision the site for Rancher.
Set `hostname` and `ingress.tls.source=secret`.
Set `hostname` and `ingress.tls.source=secret`. Replace `<CHART_REPO>` with the repository that you configured in [Add the Chart Repo](#add-the-chart-repo) (`latest` or `stable`).
> **Note:** If you are using a Private CA signed cert, add `--set privateCA=true`
```
helm install rancher-stable/rancher \
helm install rancher-<CHART_REPO>/rancher \
--name rancher \
--namespace cattle-system \
--set hostname=rancher.my.org \
@@ -29,7 +29,7 @@ weight: 276
| `debug` | false | `bool` - set debug flag on rancher server |
| `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials |
| `proxy` | "" | `string` - HTTP[S] proxy server for Rancher |
| `noProxy` | "localhost,127.0.0.1" | `string` - comma separated list of hostnames or ip address not to use the proxy |
| `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16" | `string` - comma separated list of hostnames or ip address not to use the proxy |
| `resources` | {} | `map` - rancher pod resource requests & limits |
| `rancherImage` | "rancher/rancher" | `string` - rancher image source |
| `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag |
@@ -59,7 +59,7 @@ Add your IP exceptions to the `noProxy` list. Make sure you add the Service clus
```plain
--set proxy="http://<username>:<password>@<proxy_url>:<proxy_port>/"
--set noProxy="127.0.0.1,localhost,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
--set noProxy="127.0.0.0/8\,10.0.0.0/8\,172.16.0.0/12\,192.168.0.0/16"
```
### Additional Trusted CAs
@@ -84,7 +84,7 @@ See [Installing Rancher - Air Gap]({{< baseurl >}}/rancher/v2.x/en/installation/
We recommend configuring your load balancer as a Layer 4 balancer, forwarding plain 80/tcp and 443/tcp to the Rancher Management cluster nodes. The Ingress Controller on the cluster will redirect http traffic on port 80 to https on port 443.
You may terminate the SSL/TLS on a L7 load balancer external to the Rancher cluster (ingress). Use the `--tls=external` option and point your load balancer at port http 80 on all of the Rancher cluster nodes. This will expose the Rancher interface on http port 80. Be aware that clients that are allowed to connect directly to the Rancher cluster will not be encrypted. If you choose to do this we recommend that you restrict direct access at the network level to just your load balancer.
You may terminate the SSL/TLS on a L7 load balancer external to the Rancher cluster (ingress). Use the `--set tls=external` option and point your load balancer at port http 80 on all of the Rancher cluster nodes. This will expose the Rancher interface on http port 80. Be aware that clients that are allowed to connect directly to the Rancher cluster will not be encrypted. If you choose to do this we recommend that you restrict direct access at the network level to just your load balancer.
> **Note:** If you are using a Private CA signed cert, add `--set privateCA=true` and see [Adding TLS Secrets - Private CA Signed - Additional Steps]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/tls-secrets/#private-ca-signed---additional-steps) to add the CA cert for Rancher.
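A minimal sketch of an install command using external SSL termination might look like this (the hostname is a placeholder; combine with any other chart options you need):

```plain
helm install rancher-<CHART_REPO>/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set tls=external
```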
@@ -11,8 +11,8 @@ Use `kubectl` with the `tls` secret type to create the secrets.
```
kubectl -n cattle-system create secret tls tls-rancher-ingress \
--cert=./tls.crt \
--key=./tls.key
--cert=tls.crt \
--key=tls.key
```
### Private CA Signed - Additional Steps
@@ -21,6 +21,8 @@ If you are using a private CA, Rancher will need to have a copy of the CA cert t
Copy the CA cert into a file named `cacerts.pem` and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.
>**Important:** Make sure the file is called `cacerts.pem` as Rancher uses that filename to configure the CA cert.
```
kubectl -n cattle-system create secret generic tls-ca \
--from-file=cacerts.pem
@@ -9,7 +9,7 @@ Use RKE to install Kubernetes with a high availability etcd configuration.
### Create the `rancher-cluster.yml` File
Using the sample below create the `rancher-cluster.yml` file. Replace the IP Addresses in the `nodes` list with the IP address or DNS names of the 3 Nodes you created.
Using the sample below create the `rancher-cluster.yml` file. Replace the IP Addresses in the `nodes` list with the IP address or DNS names of the 3 nodes you created.
> **Note:** If your node has public and internal addresses, it is recommended to set the `internal_address:` so Kubernetes will use it for intra-cluster communication. Some services like AWS EC2 require setting the `internal_address:` if you want to use self-referencing security groups or firewalls.
@@ -28,23 +28,29 @@ nodes:
internal_address: 172.16.42.73
user: ubuntu
role: [controlplane,worker,etcd]
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
```
#### Common RKE Nodes: Options
#### Common RKE Nodes Options
| Option | Description |
| --- | --- |
| `address` | (required) The public DNS or IP address |
| `internal_address` | (optional) The private DNS or IP address for internal cluster traffic |
| `role` | (required) List of Kubernetes roles assigned to the node |
| `ssh_key_path` | (optional) Path to SSH private key used to authenticate to the node |
| `user` | (required) A user that can run docker commands |
| Option | Required | Description |
| --- | --- | --- |
| `address` | yes | The public DNS or IP address |
| `user` | yes | A user that can run docker commands |
| `role` | yes | List of Kubernetes roles assigned to the node |
| `internal_address` | no | The private DNS or IP address for internal cluster traffic |
| `ssh_key_path` | no | Path to SSH private key used to authenticate to the node (defaults to `~/.ssh/id_rsa`) |
#### Advanced Configurations
RKE has many configuration options for customizing the install to suit your specific environment.
Please see the [RKE Documentation]({{< baseurl >}}/rke/v0.1.x/en/) for the full list of options and capabilities.
Please see the [RKE Documentation]({{< baseurl >}}/rke/v0.1.x/en/config-options/) for the full list of options and capabilities.
### Run RKE
@@ -52,6 +58,8 @@ Please see the [RKE Documentation]({{< baseurl >}}/rke/v0.1.x/en/) for the full
rke up --config ./rancher-cluster.yml
```
When finished, it should end with the line: `Finished building Kubernetes cluster successfully`.
### Testing Your Cluster
RKE should have created a file `kube_config_rancher-cluster.yml`. This file has the credentials for `kubectl` and `helm`.
@@ -64,7 +72,7 @@ You can copy this file to `$HOME/.kube/config` or if you are working with multip
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
```
Test your connectivity with `kubectl` and see if you can get the list of nodes back.
Test your connectivity with `kubectl` and see if all your nodes are in `Ready` state.
```
kubectl get nodes
@@ -110,4 +118,4 @@ Save a copy of the `kube_config_rancher-cluster.yml` and `rancher-cluster.yml` f
See the [Troubleshooting]({{< baseurl >}}/rancher/v2.x/en/installation/ha/kubernetes-rke/troubleshooting/) page.
### [Next: Initialize Helm]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/)
### [Next: Initialize Helm (Install tiller)]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-init/)
@@ -7,7 +7,7 @@ weight: 276
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
* [High Availability Installation with External Load Balancer (TCP/Layer 4)]({{< baseurl >}}/rancher/v2.x/en/installation/ha/rke-add-on/layer-4-lb)
@@ -9,7 +9,7 @@ aliases:
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
If you're using RKE to install Rancher, you can use directives to enable API Auditing for your Rancher install. You can know what happened, when it happened, who initiated it, and what cluster it affected. API auditing records all requests and responses to and from the Rancher API, which includes use of the Rancher UI and any other use of the Rancher API through programmatic use.
@@ -9,14 +9,14 @@ aliases:
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
This procedure walks you through setting up a 3-node cluster using the Rancher Kubernetes Engine (RKE). The cluster's sole purpose is running pods for Rancher. The setup is based on:
- Layer 4 load balancer (TCP)
- [NGINX ingress controller with SSL termination (HTTPS)](https://kubernetes.github.io/ingress-nginx/)
In a HA setup that uses a layer 4 load balancer, the load balancer accepts Rancher client connections over the TCP/UDP protocols (i.e., the transport level). The load balancer then forwards these connections to individual cluster nodes without reading the request itself. Because the load balancer cannot read the packets it's forwarding, the routing decisions it can make are limited.
In an HA setup that uses a layer 4 load balancer, the load balancer accepts Rancher client connections over the TCP/UDP protocols (i.e., the transport level). The load balancer then forwards these connections to individual cluster nodes without reading the request itself. Because the load balancer cannot read the packets it's forwarding, the routing decisions it can make are limited.
<sup>HA Rancher install with layer 4 load balancer, depicting SSL termination at ingress controllers</sup>
![Rancher HA]({{< baseurl >}}/img/rancher/ha/rancher2ha.svg)
@@ -9,7 +9,7 @@ aliases:
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
## Objectives
@@ -9,14 +9,14 @@ aliases:
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
This procedure walks you through setting up a 3-node cluster using the Rancher Kubernetes Engine (RKE). The cluster's sole purpose is running pods for Rancher. The setup is based on:
- Layer 7 Loadbalancer with SSL termination (HTTPS)
- [NGINX Ingress controller (HTTP)](https://kubernetes.github.io/ingress-nginx/)
In a HA setup that uses a layer 7 load balancer, the load balancer accepts Rancher client connections over the HTTP protocol (i.e., the application level). This application-level access allows the load balancer to read client requests and then redirect to them to cluster nodes using logic that optimally distributes load.
In an HA setup that uses a layer 7 load balancer, the load balancer accepts Rancher client connections over the HTTP protocol (i.e., the application level). This application-level access allows the load balancer to read client requests and then redirect them to cluster nodes using logic that optimally distributes load.
<sup>HA Rancher install with layer 7 load balancer, depicting SSL termination at load balancer</sup>
![Rancher HA]({{< baseurl >}}/img/rancher/ha/rancher2ha-l7.svg)
@@ -9,7 +9,7 @@ aliases:
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
## Objectives
@@ -9,7 +9,7 @@ aliases:
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
## Install NGINX
@@ -7,7 +7,7 @@ weight: 277
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
If you operate Rancher behind a proxy and you want to access services through the proxy (such as retrieving catalogs), you must provide Rancher information about your proxy. As Rancher is written in Go, it uses the common proxy environment variables as shown below.
@@ -9,7 +9,7 @@ aliases:
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
To debug issues around this error, you will need to download the command-line tool `kubectl`. See [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for how to download `kubectl` for your platform.
@@ -9,7 +9,7 @@ aliases:
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
This section contains common errors seen when setting up a High Availability Installation.
@@ -9,7 +9,7 @@ aliases:
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
Below are steps that you can follow to determine what is wrong in your cluster.
@@ -9,7 +9,7 @@ aliases:
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from a RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
To debug issues around this error, you will need to download the command-line tool `kubectl`. See [Install and Set Up kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for how to download `kubectl` for your platform.
@@ -1,13 +1,41 @@
---
title: Server Tags
title: Choosing a Version of Rancher
weight: 230
---
{{< product >}} Server is distributed as a Docker image, which have _tags_ attached to them. Tags are used to identify what version is included in the image. Rancher includes additional tags that point to a specific version. Remember that if you use the additional tags, you must explicitly pull a new version of that image tag. Otherwise it will use the cached image on the host.
You can find Rancher images at [DockerHub](https://hub.docker.com/r/rancher/rancher/tags/).
## Single Node Installs
- `rancher/rancher:latest`: Our latest development release. These builds are validated through our CI automation framework. These releases are not recommended for production environments.
When performing [single-node installs]({{< baseurl >}}/rancher/v2.x/en/installation/single-node), upgrades, or rollbacks, you can use _tags_ to install a specific version of Rancher.
- `rancher/rancher:stable`: Our newest stable release. This tag is recommended for production.
### Server Tags
The `master` tag or any tag with a `-rc` or another suffix is meant for the {{< product >}} testing team to validate. You should not use these tags, as these builds are not officially supported.
Rancher Server is distributed as a Docker image, which has tags attached to it. You can specify this tag when entering the command to deploy Rancher. Remember that if you use a tag without an explicit version (like `latest` or `stable`), you must explicitly pull a new version of that image tag. Otherwise any image cached on the host will be used.
| Tag | Description |
| -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `rancher/rancher:latest` | Our latest development release. These builds are validated through our CI automation framework. These releases are not recommended for production environments. |
| `rancher/rancher:stable` | Our newest stable release. This tag is recommended for production. |
| `rancher/rancher:<v2.X.X>` | You can install specific versions of Rancher by using the tag from a previous release. See what's available at DockerHub. |
<br/>
>**Note:** The `master` tag or any tag with `-rc` or another suffix is meant for the Rancher testing team to validate. You should not use these tags, as these builds are not officially supported.
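As a sketch of how these tags are used in practice (the `stable` tag here is only an example; substitute the tag you want), you pull the image and pass the same tag to `docker run`:

```plain
docker pull rancher/rancher:stable
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:stable
```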
## High Availability Installs
When installing, upgrading, or rolling back Rancher Server in a [high availability configuration]({{< baseurl >}}/rancher/v2.x/en/installation/ha), you can choose which repository to pull your Rancher images from.
### Rancher Chart Repositories
In high availability Rancher configurations, Rancher Server is distributed by Helm chart. Therefore, as you prepare to install or upgrade a high availability Rancher configuration, you must configure a chart repository that contains the Rancher Helm charts. You can install Rancher from two different repos:
Repository | Repo Configuration Command | Description
-----------|-----|-------------
`latest` | `helm repo add rancher-latest https://releases.rancher.com/server-charts/latest` | Adds a repository of Helm charts for the latest versions of Rancher. We recommend using this repo for testing out new Rancher builds.
`stable` | `helm repo add rancher-stable https://releases.rancher.com/server-charts/stable` | Adds a repository of Helm charts for older, stable versions of Rancher. We recommend using this repo for production environments.
<br/>
Instructions on when to make these configurations are available in [High Availability Install]({{< baseurl >}}/rancher/v2.x/en/installation/ha).
>**Important!**
>
>When _upgrading_ or _rolling back_ Rancher in a high availability configuration, you must use the same repository that you used during installation.
@@ -159,7 +159,7 @@ server {
### API Auditing
If you want to record all transations with the Rancher API, enable the [API Auditing]({{< baseurl >}}/rancher/v2.x/en/installation/api-auditing) feature by adding the flags below into your install command.
If you want to record all transactions with the Rancher API, enable the [API Auditing]({{< baseurl >}}/rancher/v2.x/en/installation/api-auditing) feature by adding the flags below into your install command.
-e AUDIT_LEVEL=1 \
-e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
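Putting it together, a single-node install command with API auditing enabled might look like the sketch below; the bind-mounted host directory is an assumption, so adjust it to wherever you want the audit log written:
```
# Host path /var/log/rancher/auditlog is illustrative; it must back the directory AUDIT_LOG_PATH points to
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /var/log/rancher/auditlog:/var/log/auditlog \
  -e AUDIT_LEVEL=1 \
  -e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
  rancher/rancher:latest
```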
@@ -111,7 +111,7 @@ For more information, see [Ingress]({{< baseurl >}}/rancher/v2.x/en/k8s-in-ranch
## Service Discovery
After you expose your cluster to external requests using a load balancer and/or ingress, it's only available by IP address. To create a resolvable hostname, you must create a service record, which is a record that maps an IP address, external hostname, DNS record alias, workload(s), or labelled pods to a specific hostname.
For more information, see [Service Discovery]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/service-discovery).
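As a rough sketch of what such a record amounts to, a record of the "external hostname" type corresponds to a Kubernetes `ExternalName` Service like the one below; the names here are illustrative:
```
apiVersion: v1
kind: Service
metadata:
  name: my-app            # resolves as my-app.default.svc.cluster.local inside the cluster
  namespace: default
spec:
  type: ExternalName
  externalName: app.example.com   # the external hostname this record points to
```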
@@ -5,6 +5,8 @@ weight: 2300
Using the Kubernetes [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) feature (HPA), you can configure your cluster to automatically scale the services it's running up or down.
>**Note:** Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler.
### Why Use Horizontal Pod Autoscaler?
Using HPA, you can automatically scale the number of pods within a replication controller, deployment, or replica set up or down. HPA automatically scales the number of pods that are running for maximum efficiency. Factors that affect the number of pods include:
@@ -20,11 +22,10 @@ HPA improves your services by:
### How HPA Works
![HPA Schema]({{< baseurl >}}/img/rancher/horizontal-pod-autoscaler.jpg)
HPA is implemented as a control loop, with a period controlled by the `kube-controller-manager` flags below:
Flag | Default | Description |
---------|----------|----------|
`--horizontal-pod-autoscaler-sync-period` | `30s` | How often HPA audits resource/custom metrics in a deployment.
@@ -36,13 +37,13 @@ For full documentation on HPA, refer to the [Kubernetes Documentation](https://k
### Horizontal Pod Autoscaler API Objects
HPA is an API resource in the Kubernetes `autoscaling` API group. The current stable version is `autoscaling/v1`, which only includes support for CPU autoscaling. To get additional support for scaling based on memory and custom metrics, use the beta version instead: `autoscaling/v2beta1`.
For more information about the HPA API object, see the [HPA GitHub Readme](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
### kubectl Commands
You can create, manage, and delete HPAs using kubectl:
- Creating HPA
@@ -98,113 +99,39 @@ Directive | Description
`targetAverageValue: 100Mi` | Indicates the deployment will scale pods up when the average running pod uses more than 100Mi of memory.
<br/>
### Installation
#### Configuring HPA to Scale Using Resource Metrics
To create HPA resources based on resource metrics such as CPU and memory use, you need to deploy the `metrics-server` package in the `kube-system` namespace of your Kubernetes cluster. This deployment allows HPA to consume the `metrics.k8s.io` API.
Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler. Run the following commands to check if metrics are available in your installation:
```
$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
node-controlplane 196m 9% 1623Mi 42%
node-etcd 80m 4% 1090Mi 28%
node-worker 64m 3% 1146Mi 29%
$ kubectl -n kube-system top pods
NAME CPU(cores) MEMORY(bytes)
canal-pgldr 18m 46Mi
canal-vhkgr 20m 45Mi
canal-x5q5v 17m 37Mi
canal-xknnz 20m 37Mi
kube-dns-7588d5b5f5-298j2 0m 22Mi
kube-dns-autoscaler-5db9bbb766-t24hw 0m 5Mi
metrics-server-97bc649d5-jxrlt 0m 12Mi
$ kubectl -n kube-system logs -l k8s-app=metrics-server
I1002 12:55:32.172841 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:https://kubernetes.default.svc?kubeletHttps=true&kubeletPort=10250&useServiceAccount=true&insecure=true
I1002 12:55:32.172994 1 heapster.go:72] Metrics Server version v0.2.1
I1002 12:55:32.173378 1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default.svc" and version
I1002 12:55:32.173401 1 configs.go:62] Using kubelet port 10250
I1002 12:55:32.173946 1 heapster.go:128] Starting with Metric Sink
I1002 12:55:32.592703 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I1002 12:55:32.925630 1 heapster.go:101] Starting Heapster API server...
[restful] 2018/10/02 12:55:32 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
[restful] 2018/10/02 12:55:32 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
I1002 12:55:32.928597 1 serve.go:85] Serving securely on 0.0.0.0:443
```
If you have created your cluster in Rancher v2.0.6 or before, please refer to [Manual installation](#manual-installation).
#### Configuring HPA to Scale Using Custom Metrics (Prometheus)
@@ -293,210 +220,136 @@ For HPA to use custom metrics from Prometheus, package [k8s-prometheus-adapter](
{{% /accordion %}}
### Testing HPAs with a Service Deployment
For HPA to work correctly, service deployments should have resource request definitions for containers. Follow this hello-world example to test if HPA is working correctly.
1. Configure kubectl to connect to your Kubernetes cluster.
1. Copy the `hello-world` deployment manifest below.
{{% accordion id="hello-world" label="Hello World Manifest" %}}
```
apiVersion: apps/v1beta2
kind: Deployment
metadata:
labels:
app: hello-world
name: hello-world
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: hello-world
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: hello-world
spec:
containers:
- image: rancher/hello-world
imagePullPolicy: Always
name: hello-world
resources:
requests:
cpu: 500m
memory: 64Mi
ports:
- containerPort: 80
protocol: TCP
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: hello-world
namespace: default
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hello-world
```
{{% /accordion %}}
1. Deploy it to your cluster.
```
# kubectl create -f <HELLO_WORLD_MANIFEST>
```
1. Copy one of the HPAs below based on the metric type you're using:
{{% accordion id="service-deployment-resource-metrics" label="Hello World HPA: Resource Metrics" %}}
```
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: hello-world
namespace: default
spec:
scaleTargetRef:
apiVersion: extensions/v1beta1
kind: Deployment
name: hello-world
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 50
- type: Resource
resource:
name: memory
targetAverageValue: 1000Mi
```
{{% /accordion %}}
{{% accordion id="service-deployment-custom-metrics" label="Hello World HPA: Custom Metrics" %}}
```
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: hello-world
namespace: default
spec:
scaleTargetRef:
apiVersion: extensions/v1beta1
kind: Deployment
name: hello-world
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 50
- type: Resource
resource:
name: memory
targetAverageValue: 100Mi
- type: Pods
pods:
metricName: cpu_system
targetAverageValue: 20m
```
{{% /accordion %}}
1. View the HPA info and description. Confirm that metric data is shown.
{{% accordion id="hpa-info-resource-metrics" label="Resource Metrics" %}}
1. Enter the following commands.
```
# kubectl get hpa
```
You should receive the output that follows:
```
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
hello-world Deployment/hello-world 1253376 / 100Mi, 0% / 50% 1 10 1 6m
# kubectl describe hpa
Name: hello-world
Namespace: default
Labels: <none>
@@ -552,7 +405,7 @@ For HPA to work correctly, service deployments should have resources request def
1. Test that pod autoscaling works as intended.<br/><br/>
**To Test Autoscaling Using Resource Metrics:**
{{% accordion id="observe-upscale-2-pods-cpu" label="Upscale to 2 Pods: CPU Usage Up to Target" %}}
Use your load testing tool to scale up to two pods based on CPU usage.
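The docs don't mandate a particular load testing tool; one simple option is to generate requests from a temporary pod inside the cluster (a sketch that assumes the `hello-world` Service created earlier):
```
# Start an interactive pod to generate load
kubectl run -i --tty load-generator --image=busybox /bin/sh
# Inside the container, hammer the hello-world service until CPU usage crosses the target:
while true; do wget -q -O- http://hello-world.default.svc.cluster.local; done
```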
1. View your HPA.
```
@@ -671,7 +524,7 @@ Use your load testing to to scale down to 1 pod when all metrics are below targe
Normal SuccessfulRescale 1s horizontal-pod-autoscaler New size: 1; reason: All metrics below target
```
{{% /accordion %}}
<br/>
**To Test Autoscaling Using Custom Metrics:**
{{% accordion id="custom-observe-upscale-2-pods-cpu" label="Upscale to 2 Pods: CPU Usage Up to Target" %}}
Use your load testing tool to scale up to two pods based on CPU usage.
@@ -855,12 +708,200 @@ Use your load testing tool to scale down to one pod when all metrics below targe
```
{{% /accordion %}}
### Conclusion
Horizontal Pod Autoscaling is a great way to automate the number of pods you have deployed for maximum efficiency. You can use it to accommodate deployment scale to real service load and to meet service level agreements.
By adjusting the `horizontal-pod-autoscaler-downscale-delay` and `horizontal-pod-autoscaler-upscale-delay` flag values, you can adjust the time needed before kube-controller scales your pods up or down.
We've demonstrated how to set up an HPA based on custom metrics provided by Prometheus. We used the `cpu_system` metric as an example, but you can use other metrics that monitor service performance, like `http_request_number`, `http_response_time`, etc.
>**Note:** To facilitate HPA use, we are working to integrate metrics-server as an add-on in RKE cluster deployments. This feature is included in RKE v0.1.9-rc2 for testing, but is not officially supported yet. It is expected to be supported in RKE v0.1.9.
### Manual Installation
>**Note:** This is only applicable to clusters created in versions before Rancher v2.0.7.
Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements.
#### Requirements
Be sure that your Kubernetes cluster services are running with these flags at minimum:
- kube-api: `requestheader-client-ca-file`
- kubelet: `read-only-port` at 10255
- kube-controller: Optional; only needed if values different from the defaults are required.
- `horizontal-pod-autoscaler-downscale-delay: "5m0s"`
- `horizontal-pod-autoscaler-upscale-delay: "3m0s"`
- `horizontal-pod-autoscaler-sync-period: "30s"`
For an RKE Kubernetes cluster definition, add the snippet below to the `services` section. To add it using the Rancher v2.0 UI, open the **Clusters** view and select **Ellipsis (...) > Edit** for the cluster in which you want to use HPA. Then, from **Cluster Options**, click **Edit as YAML** and add the snippet to the `services` section:
```
services:
...
kube-api:
extra_args:
requestheader-client-ca-file: "/etc/kubernetes/ssl/kube-ca.pem"
kube-controller:
extra_args:
horizontal-pod-autoscaler-downscale-delay: "5m0s"
horizontal-pod-autoscaler-upscale-delay: "1m0s"
horizontal-pod-autoscaler-sync-period: "30s"
kubelet:
extra_args:
read-only-port: 10255
```
Once the Kubernetes cluster is configured and deployed, you can deploy metrics services.
>**Note:** kubectl command samples in the sections that follow were tested in a cluster running Rancher v2.0.6 and Kubernetes v1.10.1.
#### Configuring HPA to Scale Using Resource Metrics
To create HPA resources based on resource metrics such as CPU and memory use, you need to deploy the `metrics-server` package in the `kube-system` namespace of your Kubernetes cluster. This deployment allows HPA to consume the `metrics.k8s.io` API.
>**Prerequisite:** You must be running kubectl 1.8 or later.
1. Connect to your Kubernetes cluster using kubectl.
1. Clone the GitHub `metrics-server` repo:
```
# git clone https://github.com/kubernetes-incubator/metrics-server
```
1. Install the `metrics-server` package.
```
# kubectl create -f metrics-server/deploy/1.8+/
```
1. Check that `metrics-server` is running properly. Check the service pod and logs in the `kube-system` namespace.
1. Check the service pod for a status of `running`. Enter the following command:
```
# kubectl get pods -n kube-system
```
Then check for the status of `running`.
```
NAME READY STATUS RESTARTS AGE
...
metrics-server-6fbfb84cdd-t2fk9 1/1 Running 0 8h
...
```
1. Check the service logs for service availability. Enter the following command:
```
# kubectl -n kube-system logs metrics-server-6fbfb84cdd-t2fk9
```
Then review the log to confirm that the `metrics-server` package is running.
{{% accordion id="metrics-server-run-check" label="Metrics Server Log Output" %}}
I0723 08:09:56.193136 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
I0723 08:09:56.193574 1 heapster.go:72] Metrics Server version v0.2.1
I0723 08:09:56.194480 1 configs.go:61] Using Kubernetes client with master "https://10.43.0.1:443" and version
I0723 08:09:56.194501 1 configs.go:62] Using kubelet port 10255
I0723 08:09:56.198612 1 heapster.go:128] Starting with Metric Sink
I0723 08:09:56.780114 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I0723 08:09:57.391518 1 heapster.go:101] Starting Heapster API server...
[restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
[restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
I0723 08:09:57.394080 1 serve.go:85] Serving securely on 0.0.0.0:443
{{% /accordion %}}
1. Check that the metrics API is accessible from kubectl.
- If you are accessing the cluster through Rancher, enter your Server URL in the kubectl config in the following format: `https://<RANCHER_URL>/k8s/clusters/<CLUSTER_ID>`. Add the suffix `/k8s/clusters/<CLUSTER_ID>` to the API path.
```
# kubectl get --raw /k8s/clusters/<CLUSTER_ID>/apis/metrics.k8s.io/v1beta1
```
If the API is working correctly, you should receive output similar to the output below.
```
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}
```
- If you are accessing the cluster directly, enter your Server URL in the kubectl config in the following format: `https://<K8s_URL>:6443`.
```
# kubectl get --raw /apis/metrics.k8s.io/v1beta1
```
If the API is working correctly, you should receive output similar to the output below.
```
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}
```
#### Assigning Additional Required Roles to Your HPA
By default, HPA reads resource and custom metrics with the user `system:anonymous`. Assign `system:anonymous` to the `view-resource-metrics` and `view-custom-metrics` ClusterRoles using the ClusterRole and ClusterRoleBinding manifests below. These roles are used to access metrics.
To do this, follow these steps:
1. Configure kubectl to connect to your cluster.
1. Copy the ClusterRole and ClusterRoleBinding manifest for the type of metrics you're using for your HPA.
{{% accordion id="cluster-role-resource-metrics" label="Resource Metrics: ApiGroups resource.metrics.k8s.io" %}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: view-resource-metrics
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: view-resource-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view-resource-metrics
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: system:anonymous
{{% /accordion %}}
{{% accordion id="cluster-role-custom-resources" label="Custom Metrics: ApiGroups custom.metrics.k8s.io" %}}
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: view-custom-metrics
rules:
- apiGroups:
- custom.metrics.k8s.io
resources:
- "*"
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: view-custom-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view-custom-metrics
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: system:anonymous
```
{{% /accordion %}}
1. Create them in your cluster using one of the following commands, depending on the metrics you're using.
```
# kubectl create -f <RESOURCE_METRICS_MANIFEST>
# kubectl create -f <CUSTOM_METRICS_MANIFEST>
```
@@ -62,7 +62,7 @@ Ingress can be added for workloads to provide load balancing, SSL termination an
1. **Optional:** click **Add Rule** to create additional ingress rules. For example, after you create ingress rules to direct requests for your hostname, you'll likely want to create a default backend to handle 404s.
1. If any of your ingress rules handle requests for encrypted ports, add a certificate to encrypt/decrypt communications.
>**Note:** You must have an SSL certificate that the ingress can use to encrypt/decrypt communications. For more information see [Adding SSL Certificates]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/certificates/).
@@ -5,7 +5,7 @@ aliases:
- /rancher/v2.x/en/concepts/volumes-and-storage/
- /rancher/v2.x/en/tasks/clusters/adding-storage/
---
When deploying an application that needs to retain data, you'll need to create persistent storage. Persistent storage allows you to store application data external from the pod running your application. This storage practice allows you to maintain application data, even if the application's pod fails.
There are two ways to create persistent storage in Kubernetes: Persistent Volumes (PVs) and Storage Classes.
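As a rough illustration of the difference, the first manifest below defines a Persistent Volume by hand, while the second asks a Storage Class to provision one dynamically; the NFS server, path, and class name are placeholders:
```
# Manually created Persistent Volume backed by NFS (server and path are illustrative)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: nfs.example.com
    path: /exports/data
---
# Claim that asks a Storage Class to provision a volume dynamically
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard   # assumes a StorageClass named "standard" exists
```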
@@ -163,6 +163,37 @@ _Storage Classes_ allow you to dynamically provision persistent volumes on deman
1. Click `Save`.
## iSCSI Volumes With Rancher Launched Kubernetes Clusters
In [Rancher Launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) that store data on iSCSI volumes, you may experience an issue where kubelets fail to automatically connect with iSCSI volumes. This failure is likely due to an incompatibility issue involving the iSCSI initiator tool. You can resolve this issue by installing the iSCSI initiator tool on each of your cluster nodes.
Rancher Launched Kubernetes clusters storing data on iSCSI volumes leverage the [iSCSI initiator tool](http://www.open-iscsi.com/), which is embedded in the kubelet's `rancher/hyperkube` Docker image. From each kubelet (i.e., the _initiator_), the tool discovers and launches sessions with an iSCSI volume (i.e., the _target_). However, in some instances, the versions of the iSCSI initiator tool installed on the initiator and the target may not match, resulting in a connection failure.
If you encounter this issue, you can work around it by installing the initiator tool on each node in your cluster. You can install the iSCSI initiator tool by logging into your cluster nodes and entering one of the following commands:
| Platform | Package Name | Install Command |
| ------------- | ----------------------- | -------------------------------------- |
| Ubuntu/Debian | `open-iscsi` | `sudo apt install open-iscsi` |
| RHEL | `iscsi-initiator-utils` | `yum install iscsi-initiator-utils -y` |
<br/>
After installing the initiator tool on your nodes, edit the YAML for your cluster, updating the kubelet configuration to mount the iSCSI binary and configuration, as shown in the sample below.
>**Note:**
>
>Before updating your Kubernetes YAML to mount the iSCSI binary and configuration, make sure either the `open-iscsi` (deb) or `iscsi-initiator-utils` (yum) package is installed on your cluster nodes. If this package isn't installed _before_ the bind mounts are created in your Kubernetes YAML, Docker will automatically create the directories and files on each node and will not allow the package install to succeed.
```
services:
kubelet:
extra_binds:
- "/etc/iscsi:/etc/iscsi"
- "/sbin/iscsiadm:/sbin/iscsiadm"
```
## What's Next?
Mount Persistent Volumes to workloads so that your applications can store their data. You can mount either a manually created Persistent Volume or a dynamically created Persistent Volume, which is created from a Storage Class.
@@ -62,7 +62,7 @@ Set up a notifier so that you can begin configuring and sending alerts.
1. Enter a **Name** for the notifier.
1. Using the app of your choice, create a webhook URL.
1. Enter your webhook **URL**.
1. Click **Test**. If the test is successful, the URL you're configuring as a notifier outputs `Webhook setting validated`.
{{% /accordion %}}
1. Click **Add** to complete adding the notifier.
@@ -108,7 +108,7 @@ The first stage is preserved to be a cloning step that checks out source code fr
{{% /accordion %}}
{{% accordion id="run-script" label="Run Script" %}}
The **Run Script** step executes arbitrary commands in the workspace inside a specified container. You can use it to build, test and do more, given whatever utilities the base image provides. For your convenience you can use variables to refer to metadata of a pipeline execution. Please go to [reference page](/rancher/v2.x/en/tools/pipelines/reference/#variable-substitution) for the list of available variables.
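As a sketch of how such a step can be expressed in a committed `.rancher-pipeline.yml` (the image and script are illustrative; see the pipeline reference for the exact keys and available variables):
```
stages:
  - name: Build and test
    steps:
      - runScriptConfig:
          image: golang:1.11        # any image providing the utilities you need
          shellScript: |
            go build ./...
            go test ./...
```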
{{% tabs %}}
@@ -12,4 +12,4 @@ To restore Rancher follow the procedure detailed here: [Restoring Backups — Hi
Restoring a snapshot of the Rancher Server cluster will revert Rancher to the version and state at the time of the snapshot.
>**Note:** Managed clusters are authoritative for their state. This means restoring the Rancher server will not revert workload deployments or changes made on managed clusters after the snapshot was taken.
@@ -13,14 +13,10 @@ This section contains information about how to upgrade your Rancher server to a
- [Upgrade an HA Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm/)
- [Upgrade an Air Gap HA Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm-airgap/)
### Upgrading an RKE Add-on Install
- [Migrating from an RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/)
> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from an RKE add-on install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
- [Upgrading a High Availability Install - RKE Add-On Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade/)
@@ -34,7 +34,7 @@ Run `helm search rancher` to view which Rancher version will be launched for the
```
NAME CHART VERSION APP VERSION DESCRIPTION
rancher-stable/rancher 2018.10.1 v2.1.0 Install Rancher Server to manage Kubernetes clusters acro...
rancher-latest/rancher 2018.10.1 v2.1.0 Install Rancher Server to manage Kubernetes clusters acro...
```
## Upgrade Rancher
@@ -45,14 +45,18 @@ rancher-stable/rancher 2018.10.1 v2.1.0 Install Rancher Serve
helm repo update
```
1. Fetch the latest Rancher Server chart from the helm repository that you used during installation.
This command will pull down the chart and save it in the current directory as a `.tgz` file. Replace `<CHART_REPO>` with the name of the repository that you used during installation (either `stable` or `latest`).
>**Note:** During upgrades, you must fetch from the chart repo that you configured initial installation (either the `stable` or `latest` repository). For more information, see [Choosing a Version of Rancher: Rancher Chart Repositories]({{< baseurl >}}/rancher/v2.x/en/installation/server-tags/#rancher-chart-repositories).
```plain
helm fetch rancher-<CHART_REPO>/rancher
```
1. Render the upgrade template.
Use the same `--set` values you used for the install. Remember to set the `--is-upgrade` flag for `helm`. This will create a `rancher` directory with the Kubernetes manifest files.
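For example, rendering the template from the fetched chart might look like the following sketch; the hostname and any other `--set` values are placeholders for whatever you used at install time, and `<VERSION>` stands for the fetched chart version:
```
helm template ./rancher-<VERSION>.tgz --output-dir . \
  --name rancher --namespace cattle-system \
  --set hostname=rancher.my.org \
  --is-upgrade
```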
@@ -46,9 +46,11 @@ Since there are times where the helm chart will require changes without any chan
Run `helm search rancher` to view which Rancher version will be launched for the specific helm chart version.
```
NAME CHART VERSION APP VERSION DESCRIPTION
rancher-stable/rancher 2018.10.1 v2.1.0 Install Rancher Server to manage Kubernetes clusters acro...
rancher-latest/rancher 2018.10.1 v2.1.0 Install Rancher Server to manage Kubernetes clusters acro...
```
## Upgrade Rancher
@@ -73,8 +75,10 @@ rancher-stable/rancher 2018.10.1 v2.1.0 Install Rancher Serve
3. Take all values from the previous command and use `helm` with `--set` options to upgrade Rancher to the latest version.
Replace `<CHART_REPO>` with the name of the repository that you used during installation (either `stable` or `latest`).
```
helm upgrade rancher rancher-<CHART_REPO>/rancher --set hostname=rancher.my.org
```
> **Important:** For any values listed from Step 2, you must use `--set key=value` to apply the same values to the helm chart.
@@ -9,7 +9,7 @@ aliases:
>
>Please use the Rancher helm chart to install HA Rancher. For details, see the [HA Install - Installation Outline]({{< baseurl >}}/rancher/v2.x/en/installation/ha/#installation-outline).
>
>If you are currently using the RKE add-on install method, see [Migrating from an HA RKE Add-on Install]({{< baseurl >}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
This document is for upgrading Rancher HA installed with the RKE add-on YAML. See the docs above to migrate to, or upgrade, Rancher installed with the Helm chart.
@@ -1,8 +1,16 @@
---
title: Migrating from an HA RKE Add-on Install
weight: 1030
aliases:
- /rancher/v2.x/en/upgrades/ha-server-upgrade/
- /rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade/
---
> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>If you are currently using the RKE add-on install method, please follow these directions to migrate to the Helm install.
The following instructions will help guide you through migrating from the RKE Add-on install to managing Rancher with the Helm package manager.
You will need to have [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) installed and the `kube_config_rancher-cluster.yml` credentials file generated by RKE.
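For example, you might point kubectl at the RKE-generated credentials and confirm connectivity before making any changes (a quick sanity check, not part of the original steps):
```
# Use the credentials file generated by RKE and verify the cluster responds
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
kubectl get nodes
```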
@@ -48,6 +56,40 @@ kubectl -n cattle-system delete clusterrolebinding cattle-crb
kubectl -n cattle-system delete serviceaccount cattle-admin
```
### Remove addons section from `rancher-cluster.yml`
The addons section from `rancher-cluster.yml` contains all the resources needed to deploy Rancher using RKE. By switching to Helm, this part of the cluster configuration file is no longer needed. Open `rancher-cluster.yml` in your favorite text editor and remove the addons section:
>**Important:** Make sure you only remove the addons section from the cluster configuration file.
```
nodes:
- address: <IP> # hostname or IP to access nodes
user: <USER> # root user (usually 'root')
role: [controlplane,etcd,worker] # K8s roles for node
ssh_key_path: <PEM_FILE> # path to PEM file
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
  # Remove the addons section from here until the end of the file
addons: |-
---
...
# End of file
```
### Follow Helm and Rancher install steps
From here follow the standard install steps.
@@ -58,7 +58,7 @@ The vSphere configuration options are divided into 5 groups:
### global
The main purpose of global options is to be able to define a common set of configuration parameters that will be inherited by all vCenters defined under the `virtual_center` directive unless explicitly defined there.
Accordingly, the `global` directive accepts the same configuration options that are available under the `virtual_center` directive. Additionally, it accepts a single parameter that can only be specified here:
@@ -304,6 +304,6 @@
</table>
<br/>
<h3 id="local-node-traffic">Information on local node traffic</h3>
<p>Kubernetes healthchecks (<code>livenessProbe</code> and <code>readinessProbe</code>) are executed on the host itself. On most nodes, this is allowed by default. When you have applied strict host firewall (i.e. <code>iptables</code>) policies on the node, or when you are using nodes that have multiple interfaces (multihomed), this traffic gets blocked. In this case, you have to explicitly allow this traffic in your host firewall, or in case of public/private cloud hosted machines (i.e. AWS or OpenStack), in your security group configuration. Keep in mind that when you use a security group as the Source or Destination in a security group rule, the rule only applies to the private interface of the nodes/instances.
</p>
</div>
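As a hedged example, on a node protected with plain `iptables` you might insert rules that accept traffic originating from the node's own addresses ahead of any restrictive rules; the addresses are placeholders and the exact rules depend on your firewall layout:
```
# Allow healthcheck traffic that the node sends to itself (addresses are illustrative)
iptables -I INPUT -s <NODE_INTERNAL_IP>/32 -j ACCEPT
iptables -I INPUT -s <NODE_PUBLIC_IP>/32 -j ACCEPT
```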
@@ -79,7 +79,7 @@ nodes.forEach(node => {
}
// remove potentially large content (see size limits) and replace with the summary so that we don't get results with zero highlightable results
node.content = node.summary;
// remove summary for dedup
@@ -63,7 +63,7 @@ while getopts ":bdp:t:u" opt;do
UPLOAD="true"
;;
\?)
echoerr "Invalid arguemnts"
echoerr "Invalid arguments"
print_help
exit 1
;;