From 5001476ef256a26d1fcc3e5c5f222272ee614302 Mon Sep 17 00:00:00 2001 From: loganhz Date: Tue, 4 Jun 2019 21:15:50 +0800 Subject: [PATCH 01/33] Istio --- .../tools/service-mesh/_index.md | 46 ++++++++++ .../tools/service-mesh/istio/_index.md | 89 +++++++++++++++++++ .../en/project-admin/service-mesh/_index.md | 48 ++++++++++ 3 files changed, 183 insertions(+) create mode 100644 content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md create mode 100644 content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md create mode 100644 content/rancher/v2.x/en/project-admin/service-mesh/_index.md diff --git a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md new file mode 100644 index 00000000000..6f747286cef --- /dev/null +++ b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md @@ -0,0 +1,46 @@ +--- +title: Service Mesh +weight: 5 +--- + +_Available as of v2.3.0-alpha_ + +Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. + +## Enabling Service Mesh + +As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Istio to your Kubernetes cluster. + +1. From the **Global** view, navigate to the cluster that you want to configure service mesh. + +1. Select **Tools > Service Mesh** in the navigation bar. + +1. Select **Enable** to show the [Service mesh configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/). Ensure you have enough resources for service mesh and on your worker nodes to enable service mesh. Enter in your desired configuration options. + +1. Click **Save**. + +**Result:** The istio will be deployed as well as an application. The istio application, `cluster-istio`, is added as an [application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project. After the application is `active`, you can start using Istio. + +> **Note:** When enabling service mesh, you need to ensure your worker nodes and Istio pod have enough resources. In larger deployments, it is strongly advised that the service mesh infrastructure be placed on dedicated nodes in the cluster. + +## Using Service Mesh + +Once the service mesh is `active`, you can: + +1. Access [Kiali UI](https://www.kiali.io/) by clicking Kiali UI icon in service mesh page. +1. Access [Jaeger UI](https://www.jaegertracing.io/) by clicking Jaeger UI icon in service mesh page. +1. Access [Grafana UI](https://grafana.com/) by clicking Grafana UI icon in service mesh page. +1. Access [Prometheus UI](https://prometheus.io/) by clicking Prometheus UI icon in service mesh page. +1. Go to project to [view traffic graph, traffic metrics and manage traffic]({{< baseurl >}}/rancher/v2.x/en/project-admin/service-mesh/). + +## Disabling Service Mesh + +To disable the service mesh: + +1. From the **Global** view, navigate to the cluster that you want to disable service mesh. + +1. Select **Tools > Service Mesh** in the navigation bar. + +1. 
Click **Disable Istio**, then click the red button again to confirm the disable action. + +**Result:** The `cluster-istio` application in the cluster's `system` project gets removed. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md new file mode 100644 index 00000000000..d91485dcc96 --- /dev/null +++ b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md @@ -0,0 +1,89 @@ +--- +title: Service Mesh Configuration +weight: 1 +--- + +_Available as of v2.3.0-alpha_ + +While configuring service mesh, there are multiple options that can be configured. + +## PILOT + +Option | Description +-------|------------- +Pilot CPU Limit | CPU resource limit for the istio-pilot pod. +Pilot CPU Reservation | CPU reservation for the istio-pilot pod. +Pilot Memory Limit | Memory resource limit for the istio-pilot pod. +Pilot Memory Reservation | Memory resource requests for the istio-pilot pod. +Trace sampling Percentage | [Trace sampling percentage](https://istio.io/docs/tasks/telemetry/distributed-tracing/overview/#trace-sampling) +Pilot Selector | Ability to select the nodes in which istio-pilot pod is deployed to. To use this option, the nodes must have labels. + +## TELEMETRY + +Option | Description +-------|------------- +Telemetry CPU Limit | CPU resource limit for the istio-telemetry pod. +Telemetry CPU Reservation | CPU reservation for the istio-telemetry pod. +Telemetry Memory Limit | Memory resource limit for the istio-telemetry pod. +Telemetry Memory Reservation | Memory resource requests for the istio-telemetry pod. +Telemetry Selector | Ability to select the nodes in which istio-telemetry pod is deployed to. To use this option, the nodes must have labels. + +## POLICY + +Option | Description +-------|------------- +Enable Policy | Whether or not to deploy the istio-policy. +Policy CPU Limit | CPU resource limit for the istio-policy pod. +Policy CPU Reservation | CPU reservation for the istio-policy pod. +Policy Memory Limit | Memory resource limit for the istio-policy pod. +Policy Memory Reservation | Memory resource requests for the istio-policy pod. +Policy Selector | Ability to select the nodes in which istio-policy pod is deployed to. To use this option, the nodes must have labels. + +## PROMETHEUS + +Option | Description +-------|------------- +Prometheus CPU Limit | CPU resource limit for the Prometheus pod. +Prometheus CPU Reservation | CPU reservation for the Prometheus pod. +Prometheus Memory Limit | Memory resource limit for the Prometheus pod. +Prometheus Memory Reservation | Memory resource requests for the Prometheus pod. +Retention for Prometheus | How long your Prometheus instance retains data +Prometheus Selector | Ability to select the nodes in which Prometheus pod is deployed to. To use this option, the nodes must have labels. + +## GRAFANA + +Option | Description +-------|------------- +Enable Grafana | Whether or not to deploy the Grafana. +Grafana CPU Limit | CPU resource limit for the Grafana pod. +Grafana CPU Reservation | CPU reservation for the Grafana pod. +Grafana Memory Limit | Memory resource limit for the Grafana pod. +Grafana Memory Reservation | Memory resource requests for the Grafana pod. +Grafana Selector | Ability to select the nodes in which Grafana pod is deployed to. To use this option, the nodes must have labels. 
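
Each of the `Selector` options above schedules the corresponding component onto nodes that carry a matching label. As a minimal sketch of preparing such labels with `kubectl` (the label key `istio`, the value `mesh-infra`, and the node names are assumptions for illustration only, not Rancher defaults):

```
# Label the dedicated nodes that the Istio components should run on.
# The key/value pair "istio=mesh-infra" is only an example.
kubectl label node worker-node-1 istio=mesh-infra
kubectl label node worker-node-2 istio=mesh-infra

# Confirm the labels before filling in the Selector fields above.
kubectl get nodes --show-labels | grep istio
```
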
+ +## TRACING + +Option | Description +-------|------------- +Enable Tracing | Whether or not to deploy the istio-tracing. +Tracing CPU Limit | CPU resource limit for the istio-tracing pod. +Tracing CPU Reservation | CPU reservation for the istio-tracing pod. +Tracing Memory Limit | Memory resource limit for the istio-tracing pod. +Tracing Memory Reservation | Memory resource requests for the istio-tracing pod. +Tracing Selector | Ability to select the nodes in which tracing pod is deployed to. To use this option, the nodes must have labels. + +## GATEWAY + +Option | Description +-------|------------- +Enable Gateway | Whether or not to deploy the istio-ingressgateway. +Service Type of Istio Ingress Gateway | How to expose the gateway. You can choose NodePort or Loadbalancer +Http2 Port | The NodePort for http2 requests +Https Port | The NodePort for https requests +Load Balancer IP | Ingress Gateway Load Balancer IP +Load Balancer Source Ranges | Ingress Gateway Load Balancer Source Ranges +Gateway CPU Limit | CPU resource limit for the istio-ingressgateway pod. +Gateway CPU Reservation | CPU reservation for the istio-ingressgateway pod. +Gateway Memory Limit | Memory resource limit for the istio-ingressgateway pod. +Gateway Memory Reservation | Memory resource requests for the istio-ingressgateway pod. +Gateway Selector | Ability to select the nodes in which istio-ingressgateway pod is deployed to. To use this option, the nodes must have labels. diff --git a/content/rancher/v2.x/en/project-admin/service-mesh/_index.md b/content/rancher/v2.x/en/project-admin/service-mesh/_index.md new file mode 100644 index 00000000000..50f79ba842f --- /dev/null +++ b/content/rancher/v2.x/en/project-admin/service-mesh/_index.md @@ -0,0 +1,48 @@ +--- +title: Service Mesh +weight: 3528 +--- + +_Available as of v2.3.0-alpha_ + +Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. + +>**Prerequisites:** +> +>- [Service Mesh]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/) must be enabled in cluster level. +>- To be a part of an Istio service mesh, pods and services in a Kubernetes cluster must satisfy the [Istio Pods and Services Requirements](https://istio.io/docs/setup/kubernetes/prepare/requirements/) + +## Istio sidecar auto injection + +In create and edit namespace page, you can enable or disable [Istio sidecar auto injection](https://istio.io/blog/2019/data-plane-setup/#automatic-injection). When you enable it, Rancher will add `istio-injection=enabled` label to the namespace automatically. + +## View Traffic Graph + +Rancher integrates Kiali Graph into Rancher UI. The Kiali graph provides a powerful way to visualize the topology of your service mesh. It shows you which services communicate with each other. + +To see the traffic graph for a particular namespace: + +1. From the **Global** view, navigate to the project that you want to view traffic graph. + +1. Select **Service Mesh** in the navigation bar. + +1. Select **Traffic Graph** in the navigation bar. + +1. Select the namespace. 
Note: It only shows the namespaces which has `istio-injection=enabled` label + +## View Traffic Metrics + +With Istio’s monitoring features, it provides visibility into the performance of all your services. + +To see the Success Rate, Request Volume, 4xx Request Count, Project 5xx Request Count and Request Duration metrics: + +1. From the **Global** view, navigate to the project that you want to view traffic metrics. + +1. Select **Service Mesh** in the navigation bar. + +1. Select **Traffic Metrics** in the navigation bar. + + +## Other Istio Features + +As Istio has been deployed in your cluster, you can use all [Istio Features](https://istio.io/docs/concepts/what-is-istio/#core-features) in the cluster. From 8f04373e2d474d71d3ec7da67dd4c1775ac8ce3d Mon Sep 17 00:00:00 2001 From: loganhz Date: Tue, 4 Jun 2019 22:49:00 +0800 Subject: [PATCH 02/33] HPA --- .../horitzontal-pod-autoscaler/_index.md | 58 ++++++++++++++++++- 1 file changed, 57 insertions(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md index c53c1c2706f..70d5324d100 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md @@ -5,7 +5,10 @@ weight: 3026 Using the Kubernetes [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) feature (HPA), you can configure your cluster to automatically scale the services it's running up or down. ->**Note:** Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler. +>**Note:** +> +>- Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler. +>- You can create, manage, and delete HPAs using Rancher UI in Rancher v2.3.0-alpha and higher version. It only supports HPA in `autoscaling/v2beta2` API. ### Why Use Horizontal Pod Autoscaler? @@ -41,6 +44,59 @@ HPA is an API resource in the Kubernetes `autoscaling` API group. The current st For more information about the HPA API object, see the [HPA GitHub Readme](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object). +### Rancher UI + +You can create, manage, and delete HPAs using Rancher UI: + +#### Creating a HPA + +1. From the **Global** view, open the project that you want to deploy a HPA to. + +1. Select **Workloads** in the navigation bar and then select the **HPA** tab. + +1. Click **Add HPA** + +1. Enter a **Name** for the HPA. + +1. Select a **Namespace** for the HPA. + +1. Select a **Deployment** as scale target for the HPA. + +1. Specify the **Minimum Scale** and **Maximum Scale** for the HPA. + +1. Configure the metrics for the HPA + +1. Click **Create** to create the HPA + +**Result:** The HPA is deployed to the chosen namespace. You can view the HPA's status from the project's **Workloads** -> **HPA** view. + +#### Getting HPA info + +1. From the **Global** view, open the project that you want to deploy a HPA to. + +1. Select **Workloads** in the navigation bar and then select the **HPA** tab. + +1. Find the HPA which you would like to view info + +1. Click the name of the HPA + +1. 
You can view the HPA info in the HPA detail page + + +#### Deleting HPA + +1. From the **Global** view, open the project that you want to deploy a HPA to. + +1. Select **Workloads** in the navigation bar and then select the **HPA** tab. + +1. Find the HPA which you would like to delete + +1. Click **Ellipsis (...) > Delete**. + +1. Click **Delete** to confim. + +**Result:** The HPA is deleted from current cluster. + ### kubectl Commands You can create, manage, and delete HPAs using kubectl: From fd1813350097d8240a5c419e8368c45dc6e7d17e Mon Sep 17 00:00:00 2001 From: loganhz Date: Wed, 5 Jun 2019 20:25:10 +0800 Subject: [PATCH 03/33] fix --- .../tools/service-mesh/istio/_index.md | 142 +++++++++--------- 1 file changed, 71 insertions(+), 71 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md index d91485dcc96..f4ef83dce95 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md @@ -9,81 +9,81 @@ While configuring service mesh, there are multiple options that can be configure ## PILOT -Option | Description --------|------------- -Pilot CPU Limit | CPU resource limit for the istio-pilot pod. -Pilot CPU Reservation | CPU reservation for the istio-pilot pod. -Pilot Memory Limit | Memory resource limit for the istio-pilot pod. -Pilot Memory Reservation | Memory resource requests for the istio-pilot pod. -Trace sampling Percentage | [Trace sampling percentage](https://istio.io/docs/tasks/telemetry/distributed-tracing/overview/#trace-sampling) -Pilot Selector | Ability to select the nodes in which istio-pilot pod is deployed to. To use this option, the nodes must have labels. +Option | Description| Field +-------|------------|------- +Pilot CPU Limit | CPU resource limit for the istio-pilot pod.| istio-pilot.discovery.resources.limits.cpu +Pilot CPU Reservation | CPU reservation for the istio-pilot pod. | istio-pilot.discovery.resources.requests.cpu +Pilot Memory Limit | Memory resource limit for the istio-pilot pod. | istio-pilot.discovery.resources.limits.memory +Pilot Memory Reservation | Memory resource requests for the istio-pilot pod. | istio-pilot.discovery.resources.requests.memory +Trace sampling Percentage | [Trace sampling percentage](https://istio.io/docs/tasks/telemetry/distributed-tracing/overview/#trace-sampling) | stio-pilot.discovery.env.PILOT_TRACE_SAMPLING +Pilot Selector | Ability to select the nodes in which istio-pilot pod is deployed to. To use this option, the nodes must have labels. | istio-pilot.nodeAffinity.matchExpressions -## TELEMETRY +## MIXER -Option | Description --------|------------- -Telemetry CPU Limit | CPU resource limit for the istio-telemetry pod. -Telemetry CPU Reservation | CPU reservation for the istio-telemetry pod. -Telemetry Memory Limit | Memory resource limit for the istio-telemetry pod. -Telemetry Memory Reservation | Memory resource requests for the istio-telemetry pod. -Telemetry Selector | Ability to select the nodes in which istio-telemetry pod is deployed to. To use this option, the nodes must have labels. - -## POLICY - -Option | Description --------|------------- -Enable Policy | Whether or not to deploy the istio-policy. -Policy CPU Limit | CPU resource limit for the istio-policy pod. -Policy CPU Reservation | CPU reservation for the istio-policy pod. 
-Policy Memory Limit | Memory resource limit for the istio-policy pod. -Policy Memory Reservation | Memory resource requests for the istio-policy pod. -Policy Selector | Ability to select the nodes in which istio-policy pod is deployed to. To use this option, the nodes must have labels. - -## PROMETHEUS - -Option | Description --------|------------- -Prometheus CPU Limit | CPU resource limit for the Prometheus pod. -Prometheus CPU Reservation | CPU reservation for the Prometheus pod. -Prometheus Memory Limit | Memory resource limit for the Prometheus pod. -Prometheus Memory Reservation | Memory resource requests for the Prometheus pod. -Retention for Prometheus | How long your Prometheus instance retains data -Prometheus Selector | Ability to select the nodes in which Prometheus pod is deployed to. To use this option, the nodes must have labels. - -## GRAFANA - -Option | Description --------|------------- -Enable Grafana | Whether or not to deploy the Grafana. -Grafana CPU Limit | CPU resource limit for the Grafana pod. -Grafana CPU Reservation | CPU reservation for the Grafana pod. -Grafana Memory Limit | Memory resource limit for the Grafana pod. -Grafana Memory Reservation | Memory resource requests for the Grafana pod. -Grafana Selector | Ability to select the nodes in which Grafana pod is deployed to. To use this option, the nodes must have labels. +Option | Description| Field +-------|------------|------- +Mixer Telemetry CPU Limit | CPU resource limit for the istio-telemetry pod.| istio-telemetry.mixer.resources.limits.cpu +Mixer Telemetry CPU Reservation | CPU reservation for the istio-telemetry pod.| istio-telemetry.mixer.resources.requests.cpu +Mixer Telemetry Memory Limit | Memory resource limit for the istio-telemetry pod.| istio-telemetry.mixer.resources.limits.memory +Mixer Telemetry Memory Reservation | Memory resource requests for the istio-telemetry pod.| istio-telemetry.mixer.resources.requests.memory +Enable Mixer Policy | Whether or not to deploy the istio-policy. | n/a +Mixer Policy CPU Limit | CPU resource limit for the istio-policy pod. | istio-policy.mixer.resources.limits.cpu +Mixer Policy CPU Reservation | CPU reservation for the istio-policy pod. | istio-policy.mixer.resources.requests.cpu +Mixer Policy Memory Limit | Memory resource limit for the istio-policy pod. | istio-policy.mixer.resources.limits.memory +Mixer Policy Memory Reservation | Memory resource requests for the istio-policy pod. | istio-policy.mixer.resources.requests.memory +Mixer Selector | Ability to select the nodes in which istio-policy and istio-telemetry pods are deployed to. To use this option, the nodes must have labels. | (istio-policy / istio-telemetry).nodeAffinity.matchExpressions ## TRACING -Option | Description --------|------------- -Enable Tracing | Whether or not to deploy the istio-tracing. -Tracing CPU Limit | CPU resource limit for the istio-tracing pod. -Tracing CPU Reservation | CPU reservation for the istio-tracing pod. -Tracing Memory Limit | Memory resource limit for the istio-tracing pod. -Tracing Memory Reservation | Memory resource requests for the istio-tracing pod. -Tracing Selector | Ability to select the nodes in which tracing pod is deployed to. To use this option, the nodes must have labels. +Option | Description| Field +-------|------------|------- +Enable Tracing | Whether or not to deploy the istio-tracing. | n/a +Tracing CPU Limit | CPU resource limit for the istio-tracing pod. 
| istio-tracing.jaeger.resources.limits.cpu +Tracing CPU Reservation | CPU reservation for the istio-tracing pod. | istio-tracing.jaeger.resources.requests.cpu +Tracing Memory Limit | Memory resource limit for the istio-tracing pod. | istio-tracing.jaeger.resources.limits.memory +Tracing Memory Reservation | Memory resource requests for the istio-tracing pod. | istio-tracing.jaeger.resources.requests.memory +Tracing Selector | Ability to select the nodes in which tracing pod is deployed to. To use this option, the nodes must have labels. | istio-tracing.nodeAffinity.matchExpressions + +## INGRESS GATEWAY + +Option | Description| Field +-------|------------|------- +Enable Ingress Gateway | Whether or not to deploy the istio-ingressgateway. | n/a +Service Type of Istio Ingress Gateway | How to expose the gateway. You can choose NodePort or Loadbalancer | service.istio-ingressgateway.type +Http2 Port | The NodePort for http2 requests | service.istio-ingressgateway.ports.http2.nodePort +Https Port | The NodePort for https requests | service.istio-ingressgateway.ports.https.nodePort +Load Balancer IP | Ingress Gateway Load Balancer IP | service.istio-ingressgateway.loadBalancerIp +Load Balancer Source Ranges | Ingress Gateway Load Balancer Source Ranges | service.istio-ingressgateway.loadBalancerSourceRanges +Ingress Gateway CPU Limit | CPU resource limit for the istio-ingressgateway pod. | istio-ingressgateway.istio-proxy.resources.limits.cpu +Ingress Gateway CPU Reservation | CPU reservation for the istio-ingressgateway pod. | istio-ingressgateway.istio-proxy.resources.requests.cpu +Ingress Gateway Memory Limit | Memory resource limit for the istio-ingressgateway pod. | istio-ingressgateway.istio-proxy.resources.limits.memory +Ingress Gateway Memory Reservation | Memory resource requests for the istio-ingressgateway pod. | istio-ingressgateway.istio-proxy.resources.requests.memory +Ingress Gateway Selector | Ability to select the nodes in which istio-ingressgateway pod is deployed to. To use this option, the nodes must have labels. | istio-ingressgateway.nodeAffinity.matchExpressions + +## PROMETHEUS + +Option | Description| Field +-------|------------|------- +Prometheus CPU Limit | CPU resource limit for the Prometheus pod.| prometheus.prometheus.resources.limits.cpu +Prometheus CPU Reservation | CPU reservation for the Prometheus pod.| prometheus.prometheus.resources.requests.cpu +Prometheus Memory Limit | Memory resource limit for the Prometheus pod.| prometheus.prometheus.resources.limits.memory +Prometheus Memory Reservation | Memory resource requests for the Prometheus pod.| prometheus.prometheus.resources.requests.memory +Retention for Prometheus | How long your Prometheus instance retains data | prometheus.prometheus.args +Prometheus Selector | Ability to select the nodes in which Prometheus pod is deployed to. 
To use this option, the nodes must have labels.| prometheus.nodeAffinity.matchExpressions + +## GRAFANA + +Option | Description| Field +-------|------------|------- +Enable Grafana | Whether or not to deploy the Grafana.| n/a +Grafana CPU Limit | CPU resource limit for the Grafana pod.| grafana.grafana.resources.limits.cpu +Grafana CPU Reservation | CPU reservation for the Grafana pod.| grafana.grafana.resources.requests.cpu +Grafana Memory Limit | Memory resource limit for the Grafana pod.| grafana.grafana.resources.limits.memory +Grafana Memory Reservation | Memory resource requests for the Grafana pod.| grafana.grafana.resources.requests.memory +Grafana Selector | Ability to select the nodes in which Grafana pod is deployed to. To use this option, the nodes must have labels. | grafana.nodeAffinity.matchExpressions +Enable Persistent Storage for Grafana | Enable Persistent Storage for Grafana | n/a +Source | Use a Storage Class to provision a new persistent volume or Use an existing persistent volume claim | n/a +Storage Class | Storage Class for provisioning PV for Grafana | volume.istio-grafana-pvc.storageClass +Existing Claim | Use existing PVC for Grafna | grafana.volumes.data.pvc.claimName -## GATEWAY -Option | Description --------|------------- -Enable Gateway | Whether or not to deploy the istio-ingressgateway. -Service Type of Istio Ingress Gateway | How to expose the gateway. You can choose NodePort or Loadbalancer -Http2 Port | The NodePort for http2 requests -Https Port | The NodePort for https requests -Load Balancer IP | Ingress Gateway Load Balancer IP -Load Balancer Source Ranges | Ingress Gateway Load Balancer Source Ranges -Gateway CPU Limit | CPU resource limit for the istio-ingressgateway pod. -Gateway CPU Reservation | CPU reservation for the istio-ingressgateway pod. -Gateway Memory Limit | Memory resource limit for the istio-ingressgateway pod. -Gateway Memory Reservation | Memory resource requests for the istio-ingressgateway pod. -Gateway Selector | Ability to select the nodes in which istio-ingressgateway pod is deployed to. To use this option, the nodes must have labels. From 7db5462583eb337cc85682c92b153bd8f6c2b074 Mon Sep 17 00:00:00 2001 From: loganhz Date: Fri, 7 Jun 2019 23:01:35 +0800 Subject: [PATCH 04/33] Update istio config options --- .../tools/service-mesh/istio/_index.md | 123 +++++++++--------- 1 file changed, 62 insertions(+), 61 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md index f4ef83dce95..f5de6dc7411 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md @@ -9,81 +9,82 @@ While configuring service mesh, there are multiple options that can be configure ## PILOT -Option | Description| Field --------|------------|------- -Pilot CPU Limit | CPU resource limit for the istio-pilot pod.| istio-pilot.discovery.resources.limits.cpu -Pilot CPU Reservation | CPU reservation for the istio-pilot pod. | istio-pilot.discovery.resources.requests.cpu -Pilot Memory Limit | Memory resource limit for the istio-pilot pod. | istio-pilot.discovery.resources.limits.memory -Pilot Memory Reservation | Memory resource requests for the istio-pilot pod. 
| istio-pilot.discovery.resources.requests.memory -Trace sampling Percentage | [Trace sampling percentage](https://istio.io/docs/tasks/telemetry/distributed-tracing/overview/#trace-sampling) | stio-pilot.discovery.env.PILOT_TRACE_SAMPLING -Pilot Selector | Ability to select the nodes in which istio-pilot pod is deployed to. To use this option, the nodes must have labels. | istio-pilot.nodeAffinity.matchExpressions +Option | Description| Required | Default +-------|------------|-------|------- +Pilot CPU Limit | CPU resource limit for the istio-pilot pod.| Yes | 1000 +Pilot CPU Reservation | CPU reservation for the istio-pilot pod. | Yes | 500 +Pilot Memory Limit | Memory resource limit for the istio-pilot pod. | Yes | 4096 +Pilot Memory Reservation | Memory resource requests for the istio-pilot pod. | Yes | 2048 +Trace sampling Percentage | [Trace sampling percentage](https://istio.io/docs/tasks/telemetry/distributed-tracing/overview/#trace-sampling) | Yes | 1 +Pilot Selector | Ability to select the nodes in which istio-pilot pod is deployed to. To use this option, the nodes must have labels. | No | n/a ## MIXER -Option | Description| Field --------|------------|------- -Mixer Telemetry CPU Limit | CPU resource limit for the istio-telemetry pod.| istio-telemetry.mixer.resources.limits.cpu -Mixer Telemetry CPU Reservation | CPU reservation for the istio-telemetry pod.| istio-telemetry.mixer.resources.requests.cpu -Mixer Telemetry Memory Limit | Memory resource limit for the istio-telemetry pod.| istio-telemetry.mixer.resources.limits.memory -Mixer Telemetry Memory Reservation | Memory resource requests for the istio-telemetry pod.| istio-telemetry.mixer.resources.requests.memory -Enable Mixer Policy | Whether or not to deploy the istio-policy. | n/a -Mixer Policy CPU Limit | CPU resource limit for the istio-policy pod. | istio-policy.mixer.resources.limits.cpu -Mixer Policy CPU Reservation | CPU reservation for the istio-policy pod. | istio-policy.mixer.resources.requests.cpu -Mixer Policy Memory Limit | Memory resource limit for the istio-policy pod. | istio-policy.mixer.resources.limits.memory -Mixer Policy Memory Reservation | Memory resource requests for the istio-policy pod. | istio-policy.mixer.resources.requests.memory -Mixer Selector | Ability to select the nodes in which istio-policy and istio-telemetry pods are deployed to. To use this option, the nodes must have labels. | (istio-policy / istio-telemetry).nodeAffinity.matchExpressions +Option | Description| Required | Default +-------|------------|-------|------- +Mixer Telemetry CPU Limit | CPU resource limit for the istio-telemetry pod.| Yes | 4800 +Mixer Telemetry CPU Reservation | CPU reservation for the istio-telemetry pod.| Yes | 1000 +Mixer Telemetry Memory Limit | Memory resource limit for the istio-telemetry pod.| Yes | 4096 +Mixer Telemetry Memory Reservation | Memory resource requests for the istio-telemetry pod.| Yes | 1024 +Enable Mixer Policy | Whether or not to deploy the istio-policy. | Yes | False +Mixer Policy CPU Limit | CPU resource limit for the istio-policy pod. | Yes, when policy enabled | 4800 +Mixer Policy CPU Reservation | CPU reservation for the istio-policy pod. | Yes, when policy enabled | 1000 +Mixer Policy Memory Limit | Memory resource limit for the istio-policy pod. | Yes, when policy enabled | 4096 +Mixer Policy Memory Reservation | Memory resource requests for the istio-policy pod. 
| Yes, when policy enabled | 1024 +Mixer Selector | Ability to select the nodes in which istio-policy and istio-telemetry pods are deployed to. To use this option, the nodes must have labels. | No | n/a ## TRACING -Option | Description| Field --------|------------|------- -Enable Tracing | Whether or not to deploy the istio-tracing. | n/a -Tracing CPU Limit | CPU resource limit for the istio-tracing pod. | istio-tracing.jaeger.resources.limits.cpu -Tracing CPU Reservation | CPU reservation for the istio-tracing pod. | istio-tracing.jaeger.resources.requests.cpu -Tracing Memory Limit | Memory resource limit for the istio-tracing pod. | istio-tracing.jaeger.resources.limits.memory -Tracing Memory Reservation | Memory resource requests for the istio-tracing pod. | istio-tracing.jaeger.resources.requests.memory -Tracing Selector | Ability to select the nodes in which tracing pod is deployed to. To use this option, the nodes must have labels. | istio-tracing.nodeAffinity.matchExpressions +Option | Description| Required | Default +-------|------------|-------|------- +Enable Tracing | Whether or not to deploy the istio-tracing. | Yes | True +Tracing CPU Limit | CPU resource limit for the istio-tracing pod. | Yes | 500 +Tracing CPU Reservation | CPU reservation for the istio-tracing pod. | Yes | 100 +Tracing Memory Limit | Memory resource limit for the istio-tracing pod. | Yes | 1024 +Tracing Memory Reservation | Memory resource requests for the istio-tracing pod. | Yes | 100 +Tracing Selector | Ability to select the nodes in which tracing pod is deployed to. To use this option, the nodes must have labels. | No | n/a ## INGRESS GATEWAY -Option | Description| Field --------|------------|------- -Enable Ingress Gateway | Whether or not to deploy the istio-ingressgateway. | n/a -Service Type of Istio Ingress Gateway | How to expose the gateway. You can choose NodePort or Loadbalancer | service.istio-ingressgateway.type -Http2 Port | The NodePort for http2 requests | service.istio-ingressgateway.ports.http2.nodePort -Https Port | The NodePort for https requests | service.istio-ingressgateway.ports.https.nodePort -Load Balancer IP | Ingress Gateway Load Balancer IP | service.istio-ingressgateway.loadBalancerIp -Load Balancer Source Ranges | Ingress Gateway Load Balancer Source Ranges | service.istio-ingressgateway.loadBalancerSourceRanges -Ingress Gateway CPU Limit | CPU resource limit for the istio-ingressgateway pod. | istio-ingressgateway.istio-proxy.resources.limits.cpu -Ingress Gateway CPU Reservation | CPU reservation for the istio-ingressgateway pod. | istio-ingressgateway.istio-proxy.resources.requests.cpu -Ingress Gateway Memory Limit | Memory resource limit for the istio-ingressgateway pod. | istio-ingressgateway.istio-proxy.resources.limits.memory -Ingress Gateway Memory Reservation | Memory resource requests for the istio-ingressgateway pod. | istio-ingressgateway.istio-proxy.resources.requests.memory -Ingress Gateway Selector | Ability to select the nodes in which istio-ingressgateway pod is deployed to. To use this option, the nodes must have labels. | istio-ingressgateway.nodeAffinity.matchExpressions +Option | Description| Required | Default +-------|------------|-------|------- +Enable Ingress Gateway | Whether or not to deploy the istio-ingressgateway. | Yes | False +Service Type of Istio Ingress Gateway | How to expose the gateway. 
You can choose NodePort or Loadbalancer | Yes | NodePort +Http2 Port | The NodePort for http2 requests | Yes | 31380 +Https Port | The NodePort for https requests | Yes | 31390 +Load Balancer IP | Ingress Gateway Load Balancer IP | No | n/a +Load Balancer Source Ranges | Ingress Gateway Load Balancer Source Ranges | No | n/a +Ingress Gateway CPU Limit | CPU resource limit for the istio-ingressgateway pod. | Yes | 2000 +Ingress Gateway CPU Reservation | CPU reservation for the istio-ingressgateway pod. | Yes | 100 +Ingress Gateway Memory Limit | Memory resource limit for the istio-ingressgateway pod. | Yes | 1024 +Ingress Gateway Memory Reservation | Memory resource requests for the istio-ingressgateway pod. | Yes | 128 +Ingress Gateway Selector | Ability to select the nodes in which istio-ingressgateway pod is deployed to. To use this option, the nodes must have labels. | No | n/a ## PROMETHEUS -Option | Description| Field --------|------------|------- -Prometheus CPU Limit | CPU resource limit for the Prometheus pod.| prometheus.prometheus.resources.limits.cpu -Prometheus CPU Reservation | CPU reservation for the Prometheus pod.| prometheus.prometheus.resources.requests.cpu -Prometheus Memory Limit | Memory resource limit for the Prometheus pod.| prometheus.prometheus.resources.limits.memory -Prometheus Memory Reservation | Memory resource requests for the Prometheus pod.| prometheus.prometheus.resources.requests.memory -Retention for Prometheus | How long your Prometheus instance retains data | prometheus.prometheus.args -Prometheus Selector | Ability to select the nodes in which Prometheus pod is deployed to. To use this option, the nodes must have labels.| prometheus.nodeAffinity.matchExpressions +Option | Description| Required | Default +-------|------------|-------|------- +Prometheus CPU Limit | CPU resource limit for the Prometheus pod.| Yes | 1000 +Prometheus CPU Reservation | CPU reservation for the Prometheus pod.| Yes | 750 +Prometheus Memory Limit | Memory resource limit for the Prometheus pod.| Yes | 1024 +Prometheus Memory Reservation | Memory resource requests for the Prometheus pod.| Yes | 750 +Retention for Prometheus | How long your Prometheus instance retains data | Yes | 6 +Prometheus Selector | Ability to select the nodes in which Prometheus pod is deployed to. To use this option, the nodes must have labels.| No | n/a ## GRAFANA -Option | Description| Field --------|------------|------- -Enable Grafana | Whether or not to deploy the Grafana.| n/a -Grafana CPU Limit | CPU resource limit for the Grafana pod.| grafana.grafana.resources.limits.cpu -Grafana CPU Reservation | CPU reservation for the Grafana pod.| grafana.grafana.resources.requests.cpu -Grafana Memory Limit | Memory resource limit for the Grafana pod.| grafana.grafana.resources.limits.memory -Grafana Memory Reservation | Memory resource requests for the Grafana pod.| grafana.grafana.resources.requests.memory -Grafana Selector | Ability to select the nodes in which Grafana pod is deployed to. To use this option, the nodes must have labels. 
| grafana.nodeAffinity.matchExpressions -Enable Persistent Storage for Grafana | Enable Persistent Storage for Grafana | n/a -Source | Use a Storage Class to provision a new persistent volume or Use an existing persistent volume claim | n/a -Storage Class | Storage Class for provisioning PV for Grafana | volume.istio-grafana-pvc.storageClass -Existing Claim | Use existing PVC for Grafna | grafana.volumes.data.pvc.claimName +Option | Description| Required | Default +-------|------------|-------|------- +Enable Grafana | Whether or not to deploy the Grafana.| Yes | True +Grafana CPU Limit | CPU resource limit for the Grafana pod.| Yes, when Grafana enabled | 200 +Grafana CPU Reservation | CPU reservation for the Grafana pod.| Yes, when Grafana enabled | 100 +Grafana Memory Limit | Memory resource limit for the Grafana pod.| Yes, when Grafana enabled | 512 +Grafana Memory Reservation | Memory resource requests for the Grafana pod.| Yes, when Grafana enabled | 100 +Grafana Selector | Ability to select the nodes in which Grafana pod is deployed to. To use this option, the nodes must have labels. | No | n/a +Enable Persistent Storage for Grafana | Enable Persistent Storage for Grafana | Yes, when Grafana enabled | False +Source | Use a Storage Class to provision a new persistent volume or Use an existing persistent volume claim | Yes, when Grafana enabled and enabled PV | Use SC +Storage Class | Storage Class for provisioning PV for Grafana | Yes, when Grafana enabled, enabled PV and use storage class | Use the default class +Persistent Volume Size | The size for the PV you would like to provision for Grafana | Yes, when Grafana enabled, enabled PV and use storage class | 5Gi +Existing Claim | Use existing PVC for Grafna | Yes, when Grafana enabled, enabled PV and use existing PVC | n/a From 4ea6e74fe3ba6a735860240aea2a6e25a9030a3d Mon Sep 17 00:00:00 2001 From: loganhz Date: Fri, 7 Jun 2019 23:05:56 +0800 Subject: [PATCH 05/33] Add notes for sidecar auto injection --- content/rancher/v2.x/en/project-admin/service-mesh/_index.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/content/rancher/v2.x/en/project-admin/service-mesh/_index.md b/content/rancher/v2.x/en/project-admin/service-mesh/_index.md index 50f79ba842f..80a55be24f6 100644 --- a/content/rancher/v2.x/en/project-admin/service-mesh/_index.md +++ b/content/rancher/v2.x/en/project-admin/service-mesh/_index.md @@ -16,6 +16,8 @@ Using Rancher, you can connect, secure, control, and observe services through in In create and edit namespace page, you can enable or disable [Istio sidecar auto injection](https://istio.io/blog/2019/data-plane-setup/#automatic-injection). When you enable it, Rancher will add `istio-injection=enabled` label to the namespace automatically. +> **Note:** Injection occurs at pod creation time. If the pod has been created before you enable auto injection. You need to kill the running pod and verify a new pod is created with the injected sidecar. + ## View Traffic Graph Rancher integrates Kiali Graph into Rancher UI. The Kiali graph provides a powerful way to visualize the topology of your service mesh. It shows you which services communicate with each other. 
From 0bbb2efb19d0da737a03948aef3bda2a7566b549 Mon Sep 17 00:00:00 2001 From: galal-hussein Date: Sat, 27 Apr 2019 00:00:57 +0200 Subject: [PATCH 06/33] Add Prefix Path documentation --- .../rke/latest/en/config-options/_index.md | 22 +++++++++++++++---- 1 file changed, 18 insertions(+), 4 deletions(-) diff --git a/content/rke/latest/en/config-options/_index.md b/content/rke/latest/en/config-options/_index.md index 0df26d39165..b1ad9fa6f07 100644 --- a/content/rke/latest/en/config-options/_index.md +++ b/content/rke/latest/en/config-options/_index.md @@ -18,6 +18,7 @@ There are several options that can be configured in cluster configuration option ### Configuring Kubernetes Cluster * [Cluster Name](#cluster-name) * [Kubernetes Version](#kubernetes-version) +* [Prefix Path](#prefix-path) * [System Images]({{< baseurl >}}/rke/latest/en/config-options/system-images/) * [Services]({{< baseurl >}}/rke/latest/en/config-options/services/) * [Extra Args and Binds and Environment Variables]({{< baseurl >}}/rke/latest/en/config-options/services/services-extras/) @@ -68,7 +69,7 @@ In case both `kubernetes_version` and [system images]({{< baseurl >}}/rke/latest Please refer to the [release notes](https://github.com/rancher/rke/releases) of the RKE version that you are running, to find the list of supported Kubernetes versions as well as the default Kubernetes version. -You can also list the supported versions and system images of specific version of RKE release with a quick command. +You can also list the supported versions and system images of specific version of RKE release with a quick command. ``` $ rke config --system-images --all @@ -81,14 +82,27 @@ INFO[0000] Generating images list for version [v1.12.6-rancher1-2]: ....... ``` -#### Using an unsupported Kubernetes version +#### Using an unsupported Kubernetes version -As of v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, then RKE will error out. +As of v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, then RKE will error out. -Prior to v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, the default version from the supported list is used. +Prior to v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, the default version from the supported list is used. If you want to use a different version from the supported list, please use the [system images]({{< baseurl >}}/rke/latest/en/config-options/system-images/) option. +### Prefix Path + +For some operating systems including ROS, and CoreOS, RKE stores its resources to a different prefix path, this prefix path is by default for these operating systems is: +``` +/opt/rke +``` +So `/etc/kubernetes` will be stored in `/opt/rke/etc/kubernetes` and `/var/lib/etcd` will be stored in `/opt/rke/var/lib/etcd` etc. + +To change the default prefix path for any cluster, you can use the following option in the cluster configuration file `cluster.yml`: +``` +prefix_path: /opt/custom_path +``` + ### Cluster Level SSH Key Path RKE connects to host(s) using `ssh`. Typically, each node will have an independent path for each ssh key, i.e. 
`ssh_key_path`, in the `nodes` section, but if you have a SSH key that is able to access **all** hosts in your cluster configuration file, you can set the path to that ssh key at the top level. Otherwise, you would set the ssh key path in the [nodes]({{< baseurl >}}/rke/latest/en/config-options/nodes/). From 449e42cb73c8962c1251cb9fe46c2b7f3075a5e0 Mon Sep 17 00:00:00 2001 From: Ryanbeta Date: Mon, 27 May 2019 10:41:53 +0800 Subject: [PATCH 07/33] Update _index.md I could not find official release v2.1.14 --- .../v2.x/en/cluster-admin/certificate-rotation/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/cluster-admin/certificate-rotation/_index.md b/content/rancher/v2.x/en/cluster-admin/certificate-rotation/_index.md index aa817ecd9f0..74d52208d35 100644 --- a/content/rancher/v2.x/en/cluster-admin/certificate-rotation/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/certificate-rotation/_index.md @@ -39,7 +39,7 @@ Rancher launched Kubernetes clusters have the ability to rotate the auto-generat ### Certificate Rotation in Rancher v2.1.x and v2.0.x -_Available as of v2.1.14 and v2.0.9_ +_Available as of v2.0.14 and v2.1.9_ Rancher launched Kubernetes clusters have the ability to rotate the auto-generated certificates through the API. From 47880f3fa88cd9544ef6d296df2362da2f69bc15 Mon Sep 17 00:00:00 2001 From: William Jimenez Date: Fri, 24 May 2019 17:08:34 -0700 Subject: [PATCH 08/33] mention Rio --- content/rancher/v2.x/en/faq/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/faq/_index.md b/content/rancher/v2.x/en/faq/_index.md index 469ed869378..c7d4cb741ea 100644 --- a/content/rancher/v2.x/en/faq/_index.md +++ b/content/rancher/v2.x/en/faq/_index.md @@ -80,7 +80,7 @@ We plan to provide Windows support for v2.1 based on Microsoft’s new approach #### Are you planning on supporting Istio in Rancher v2.x? -We like Istio, and it's something we're looking at potentially integrating and supporting. +Yes! Istio is implemented in our micro-PaaS "Rio" (which works on Rancher 2x along wtih any CNCF compliant Kubernetes cluster). You can read more about it here: https://rio.io/ #### Will Rancher v2.x support Hashicorp's Vault for storing secrets? 
From 6e80e558b18573133e9978d8b9a14aa86c848e72 Mon Sep 17 00:00:00 2001 From: David Noland Date: Fri, 24 May 2019 16:16:17 -0700 Subject: [PATCH 09/33] Clarified Audit log rotation based on https://github.com/rancher/rancher/issues/20444 --- .../ha/helm-rancher/chart-options/_index.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md b/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md index 65afea2b066..8ded6f0ac25 100644 --- a/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md +++ b/content/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/_index.md @@ -23,11 +23,11 @@ weight: 276 | `addLocal` | "auto" | `string` - Have Rancher detect and import the "local" Rancher server cluster [Import "local Cluster](#import-local-cluster) | | `antiAffinity` | "preferred" | `string` - AntiAffinity rule for Rancher pods - "preferred, required" | | `auditLog.destination` | "sidecar" | `string` - Stream to sidecar container console or hostPath volume - "sidecar, hostPath" | -| `auditLog.hostPath` | "/var/log/rancher/audit" | `string` - log file destination on host | +| `auditLog.hostPath` | "/var/log/rancher/audit" | `string` - log file destination on host (only applies when `auditLog.destination` is set to `hostPath`) | | `auditLog.level` | 0 | `int` - set the [API Audit Log]({{< baseurl >}}/rancher/v2.x/en/installation/api-auditing) level. 0 is off. [0-3] | -| `auditLog.maxAge` | 1 | `int` - maximum number of days to retain old audit log files | -| `auditLog.maxBackups` | 1 | `int` - maximum number of audit log files to retain | -| `auditLog.maxSize` | 100 | `int` - maximum size in megabytes of the audit log file before it gets rotated | +| `auditLog.maxAge` | 1 | `int` - maximum number of days to retain old audit log files (only applies when `auditLog.destination` is set to `hostPath`) | +| `auditLog.maxBackups` | 1 | `int` - maximum number of audit log files to retain (only applies when `auditLog.destination` is set to `hostPath`) | +| `auditLog.maxSize` | 100 | `int` - maximum size in megabytes of the audit log file before it gets rotated (only applies when `auditLog.destination` is set to `hostPath`) | | `busyboxImage` | "busybox" | `string` - Image location for busybox image used to collect audit logs _Note: Available as of v2.2.0_ | | `debug` | false | `bool` - set debug flag on rancher server | | `extraEnv` | [] | `list` - set additional environment variables for Rancher _Note: Available as of v2.2.0_ | @@ -53,7 +53,7 @@ You can collect this log as you would any container log. Enable the [Logging ser --set auditLog.level=1 ``` -By default enabling Audit Logging will create a sidecar container in the Rancher pod. This container (`rancher-audit-log`) will stream the log to `stdout`. You can collect this log as you would any container log. Enable the [Logging service under Rancher Tools]({{< baseurl >}}/rancher/v2.x/en/tools/logging/) for the Rancher server cluster or System Project. +By default enabling Audit Logging will create a sidecar container in the Rancher pod. This container (`rancher-audit-log`) will stream the log to `stdout`. You can collect this log as you would any container log. When using the sidecar as the audit log destination, the `hostPath`, `maxAge`, `maxBackups`, and `maxSize` options do not apply. It's advised to use your OS or Docker daemon's log rotation features to control disk space use. 
Enable the [Logging service under Rancher Tools]({{< baseurl >}}/rancher/v2.x/en/tools/logging/) for the Rancher server cluster or System Project. Set the `auditLog.destination` to `hostPath` to forward logs to volume shared with the host system instead of streaming to a sidecar container. When setting the destination to `hostPath` you may want to adjust the other auditLog parameters for log rotation. From d77a6bf31b7c8fa9fbe1fd2fb6805c21b6d40417 Mon Sep 17 00:00:00 2001 From: William Jimenez Date: Tue, 28 May 2019 12:44:14 -0700 Subject: [PATCH 10/33] Update _index.md link to docker documentation provides incomplete steps if you are using native package manager version. OS req. page already explains this --- .../latest/en/troubleshooting/ssh-connectivity-errors/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rke/latest/en/troubleshooting/ssh-connectivity-errors/_index.md b/content/rke/latest/en/troubleshooting/ssh-connectivity-errors/_index.md index 790bb241676..f8cee4cc5e4 100644 --- a/content/rke/latest/en/troubleshooting/ssh-connectivity-errors/_index.md +++ b/content/rke/latest/en/troubleshooting/ssh-connectivity-errors/_index.md @@ -20,7 +20,7 @@ CONTAINER ID IMAGE COMMAND CREATED See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) how to set this up properly. -* When using RedHat/CentOS as operating system, you cannot use the user `root` to connect to the nodes because of [Bugzilla #1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). You will need to add a separate user and configure it to access the Docker socket. See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) how to set this up properly. +* When using RedHat/CentOS as operating system, you cannot use the user `root` to connect to the nodes because of [Bugzilla #1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). You will need to add a separate user and configure it to access the Docker socket. See [RKE OS Requirements](https://rancher.com/docs/rke/latest/en/os/#red-hat-enterprise-linux-rhel-oracle-enterprise-linux-oel-centos) for more on how to set this up. * SSH server version is not version 6.7 or higher. This is needed for socket forwarding to work, which is used to connect to the Docker socket over SSH. This can be checked using `sshd -V` on the host you are connecting to, or using netcat: ``` From 1ac056e6952610f891509ed0c10efff4e31d4392 Mon Sep 17 00:00:00 2001 From: William Jimenez Date: Tue, 28 May 2019 13:01:18 -0700 Subject: [PATCH 11/33] use rel URL --- .../latest/en/troubleshooting/ssh-connectivity-errors/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rke/latest/en/troubleshooting/ssh-connectivity-errors/_index.md b/content/rke/latest/en/troubleshooting/ssh-connectivity-errors/_index.md index f8cee4cc5e4..f663021ce45 100644 --- a/content/rke/latest/en/troubleshooting/ssh-connectivity-errors/_index.md +++ b/content/rke/latest/en/troubleshooting/ssh-connectivity-errors/_index.md @@ -20,7 +20,7 @@ CONTAINER ID IMAGE COMMAND CREATED See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) how to set this up properly. 
-* When using RedHat/CentOS as operating system, you cannot use the user `root` to connect to the nodes because of [Bugzilla #1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). You will need to add a separate user and configure it to access the Docker socket. See [RKE OS Requirements](https://rancher.com/docs/rke/latest/en/os/#red-hat-enterprise-linux-rhel-oracle-enterprise-linux-oel-centos) for more on how to set this up. +* When using RedHat/CentOS as operating system, you cannot use the user `root` to connect to the nodes because of [Bugzilla #1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). You will need to add a separate user and configure it to access the Docker socket. See [RKE OS Requirements]({{< baseurl >}}/rke/latest/en/os/#red-hat-enterprise-linux-rhel-oracle-enterprise-linux-oel-centos) for more on how to set this up. * SSH server version is not version 6.7 or higher. This is needed for socket forwarding to work, which is used to connect to the Docker socket over SSH. This can be checked using `sshd -V` on the host you are connecting to, or using netcat: ``` From 287e5a9a0ad5b63d2ce3deb0c38f8094e86bd6a8 Mon Sep 17 00:00:00 2001 From: Vincent Fiduccia Date: Wed, 29 May 2019 17:19:00 -0700 Subject: [PATCH 12/33] Fix content copy links --- assets/js/app.js | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/assets/js/app.js b/assets/js/app.js index 51d31580e26..7dffc3d485e 100644 --- a/assets/js/app.js +++ b/assets/js/app.js @@ -106,14 +106,15 @@ const bootstrapDocsSearch = function() { } const bootstrapIdLinks = function() { - const container = '.wrapper ARTICLE'; + const $container = $('.main-content') const selector = 'h2[id], h3[id], h4[id], h5[id], h6[id]'; - $(container).on('mouseenter', selector, function(e) { - $(e.target).append($('').addClass('header-anchor').attr('href', '#' + e.target.id).html('')); + + $container.on('mouseenter', selector, function(e) { + $(e.target).append($('').addClass('header-anchor').attr('href', '#' + e.target.id).html('')); }); - $(container).on('mouseleave', selector, function(e) { - $(e.target).parent().find('.header-anchor').remove(); + $container.on('mouseleave', selector, function(e) { + $container.find('.header-anchor').remove(); }); } From 4c6c293dab7db4de668f03eabb836c21830339fd Mon Sep 17 00:00:00 2001 From: Vincent Fiduccia Date: Wed, 29 May 2019 18:14:03 -0700 Subject: [PATCH 13/33] Bump --- package.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/package.json b/package.json index 9d8cd9c9167..d00a476da44 100644 --- a/package.json +++ b/package.json @@ -2,7 +2,7 @@ "name": "rancher-docs", "author": "Rancher Labs, Inc.", "license": "Apache-2.0", - "version": "2.1.0", + "version": "2.2.0", "private": true, "scripts": { "dev": "./scripts/dev", From 92a8759b9de00eb4ba449c6fb1a55ad22284610d Mon Sep 17 00:00:00 2001 From: niusmallnan Date: Fri, 31 May 2019 14:27:07 +0800 Subject: [PATCH 14/33] Update for RancherOS v1.5.2 --- content/os/v1.x/en/about/security/_index.md | 2 + .../v1.x/en/installation/amazon-ecs/_index.md | 38 +++++++++---------- .../workstation/boot-from-iso/_index.md | 2 +- 3 files changed, 22 insertions(+), 20 deletions(-) diff --git a/content/os/v1.x/en/about/security/_index.md b/content/os/v1.x/en/about/security/_index.md index 5ff2e307ffe..c9b2320b0ae 100644 --- a/content/os/v1.x/en/about/security/_index.md +++ b/content/os/v1.x/en/about/security/_index.md @@ -36,3 +36,5 @@ weight: 303 | [L1 Terminal 
Fault](https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html) | L1 Terminal Fault is a hardware vulnerability which allows unprivileged speculative access to data which is available in the Level 1 Data Cache when the page table entry controlling the virtual address, which is used for the access, has the Present bit cleared or other reserved bits set. | 19 Sep 2018 | [RancherOS v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) using Linux v4.14.67 | | [CVE-2018-3639](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3639) | Systems with microprocessors utilizing speculative execution and speculative execution of memory reads before the addresses of all prior memory writes are known may allow unauthorized disclosure of information to an attacker with local user access via a side-channel analysis, aka Speculative Store Bypass (SSB), Variant 4. | 19 Sep 2018 | [RancherOS v1.4.1](https://github.com/rancher/os/releases/tag/v1.4.1) using Linux v4.14.67 | | [CVE-2018-17182](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-17182) | The vmacache_flush_all function in mm/vmacache.c mishandles sequence number overflows. An attacker can trigger a use-after-free (and possibly gain privileges) via certain thread creation, map, unmap, invalidation, and dereference operations. | 18 Oct 2018 | [RancherOS v1.4.2](https://github.com/rancher/os/releases/tag/v1.4.2) using Linux v4.14.73 | +| [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736) | runc through 1.0-rc6, as used in Docker before 18.09.2 and other products, allows attackers to overwrite the host runc binary (and consequently obtain host root access) by leveraging the ability to execute a command as root within one of these types of containers: (1) a new container with an attacker-controlled image, or (2) an existing container, to which the attacker previously had write access, that can be attached with docker exec. This occurs because of file-descriptor mishandling, related to /proc/self/exe. | 12 Feb 2019 | [RancherOS v1.5.1](https://github.com/rancher/os/releases/tag/v1.5.1) | +| [Microarchitectural Data Sampling (MDS)](https://www.kernel.org/doc/html/latest/x86/mds.html) | Microarchitectural Data Sampling (MDS) is a family of side channel attacks on internal buffers in Intel CPUs. 
The variants are: CVE-2018-12126, CVE-2018-12130, CVE-2018-12127, CVE-2019-11091 | 31 May 2019 | [RancherOS v1.5.2](https://github.com/rancher/os/releases/tag/v1.5.2) using Linux v4.14.122 | diff --git a/content/os/v1.x/en/installation/amazon-ecs/_index.md b/content/os/v1.x/en/installation/amazon-ecs/_index.md index 140686886ab..3019855fcf4 100644 --- a/content/os/v1.x/en/installation/amazon-ecs/_index.md +++ b/content/os/v1.x/en/installation/amazon-ecs/_index.md @@ -58,25 +58,25 @@ rancher: ### Amazon ECS enabled AMIs -Latest Release: [v1.5.1](https://github.com/rancher/os/releases/tag/v1.5.1) +Latest Release: [v1.5.2](https://github.com/rancher/os/releases/tag/v1.5.2) Region | Type | AMI ---|--- | --- -eu-north-1 | HVM - ECS enabled | [ami-064549188a66e7ea6](https://eu-north-1.console.aws.amazon.com/ec2/home?region=eu-north-1#launchInstanceWizard:ami=ami-064549188a66e7ea6) -ap-south-1 | HVM - ECS enabled | [ami-08595b2533a6195d2](https://ap-south-1.console.aws.amazon.com/ec2/home?region=ap-south-1#launchInstanceWizard:ami=ami-08595b2533a6195d2) -eu-west-3 | HVM - ECS enabled | [ami-0e3cd3d86a637b352](https://eu-west-3.console.aws.amazon.com/ec2/home?region=eu-west-3#launchInstanceWizard:ami=ami-0e3cd3d86a637b352) -eu-west-2 | HVM - ECS enabled | [ami-0f6ad4f7e408e1069](https://eu-west-2.console.aws.amazon.com/ec2/home?region=eu-west-2#launchInstanceWizard:ami=ami-0f6ad4f7e408e1069) -eu-west-1 | HVM - ECS enabled | [ami-0d8dae1cc019e6cef](https://eu-west-1.console.aws.amazon.com/ec2/home?region=eu-west-1#launchInstanceWizard:ami=ami-0d8dae1cc019e6cef) -ap-northeast-2 | HVM - ECS enabled | [ami-0c1f5bad8bbc0b6b2](https://ap-northeast-2.console.aws.amazon.com/ec2/home?region=ap-northeast-2#launchInstanceWizard:ami=ami-0c1f5bad8bbc0b6b2) -ap-northeast-1 | HVM - ECS enabled | [ami-0e47cb2a4e9efb985](https://ap-northeast-1.console.aws.amazon.com/ec2/home?region=ap-northeast-1#launchInstanceWizard:ami=ami-0e47cb2a4e9efb985) -sa-east-1 | HVM - ECS enabled | [ami-0e7f3fa6d7434b64c](https://sa-east-1.console.aws.amazon.com/ec2/home?region=sa-east-1#launchInstanceWizard:ami=ami-0e7f3fa6d7434b64c) -ca-central-1 | HVM - ECS enabled | [ami-0b004e903b48ed9a0](https://ca-central-1.console.aws.amazon.com/ec2/home?region=ca-central-1#launchInstanceWizard:ami=ami-0b004e903b48ed9a0) -ap-southeast-1 | HVM - ECS enabled | [ami-05235fc0bc8051a45](https://ap-southeast-1.console.aws.amazon.com/ec2/home?region=ap-southeast-1#launchInstanceWizard:ami=ami-05235fc0bc8051a45) -ap-southeast-2 | HVM - ECS enabled | [ami-057db347305e01f91](https://ap-southeast-2.console.aws.amazon.com/ec2/home?region=ap-southeast-2#launchInstanceWizard:ami=ami-057db347305e01f91) -eu-central-1 | HVM - ECS enabled | [ami-01bd38e3433481d8b](https://eu-central-1.console.aws.amazon.com/ec2/home?region=eu-central-1#launchInstanceWizard:ami=ami-01bd38e3433481d8b) -us-east-1 | HVM - ECS enabled | [ami-029bd9bf2b4521072](https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#launchInstanceWizard:ami=ami-029bd9bf2b4521072) -us-east-2 | HVM - ECS enabled | [ami-06cc66eb6efe0dc0d](https://us-east-2.console.aws.amazon.com/ec2/home?region=us-east-2#launchInstanceWizard:ami=ami-06cc66eb6efe0dc0d) -us-west-1 | HVM - ECS enabled | [ami-050723009f13ccdd5](https://us-west-1.console.aws.amazon.com/ec2/home?region=us-west-1#launchInstanceWizard:ami=ami-050723009f13ccdd5) -us-west-2 | HVM - ECS enabled | [ami-0e85f0edaeed888f1](https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#launchInstanceWizard:ami=ami-0e85f0edaeed888f1) 
-cn-north-1 | HVM - ECS enabled | [ami-0c0fca27431002bc6](https://cn-north-1.console.amazonaws.cn/ec2/home?region=cn-north-1#launchInstanceWizard:ami=ami-0c0fca27431002bc6) -cn-northwest-1 | HVM - ECS enabled | [ami-067c78822a0314717](https://cn-northwest-1.console.amazonaws.cn/ec2/home?region=cn-northwest-1#launchInstanceWizard:ami=ami-067c78822a0314717) +eu-north-1 | HVM - ECS enabled | [ami-0888272f6e3d16d05](https://eu-north-1.console.aws.amazon.com/ec2/home?region=eu-north-1#launchInstanceWizard:ami=ami-0888272f6e3d16d05) +ap-south-1 | HVM - ECS enabled | [ami-0f433c1f17388f74a](https://ap-south-1.console.aws.amazon.com/ec2/home?region=ap-south-1#launchInstanceWizard:ami=ami-0f433c1f17388f74a) +eu-west-3 | HVM - ECS enabled | [ami-0bde97d3226fb3780](https://eu-west-3.console.aws.amazon.com/ec2/home?region=eu-west-3#launchInstanceWizard:ami=ami-0bde97d3226fb3780) +eu-west-2 | HVM - ECS enabled | [ami-0871c68685772846c](https://eu-west-2.console.aws.amazon.com/ec2/home?region=eu-west-2#launchInstanceWizard:ami=ami-0871c68685772846c) +eu-west-1 | HVM - ECS enabled | [ami-0007e2490a3edba1d](https://eu-west-1.console.aws.amazon.com/ec2/home?region=eu-west-1#launchInstanceWizard:ami=ami-0007e2490a3edba1d) +ap-northeast-2 | HVM - ECS enabled | [ami-001432bab43108869](https://ap-northeast-2.console.aws.amazon.com/ec2/home?region=ap-northeast-2#launchInstanceWizard:ami=ami-001432bab43108869) +ap-northeast-1 | HVM - ECS enabled | [ami-0ca27790cc998f326](https://ap-northeast-1.console.aws.amazon.com/ec2/home?region=ap-northeast-1#launchInstanceWizard:ami=ami-0ca27790cc998f326) +sa-east-1 | HVM - ECS enabled | [ami-0dee69c3e943090d2](https://sa-east-1.console.aws.amazon.com/ec2/home?region=sa-east-1#launchInstanceWizard:ami=ami-0dee69c3e943090d2) +ca-central-1 | HVM - ECS enabled | [ami-08a3c4348c32901c8](https://ca-central-1.console.aws.amazon.com/ec2/home?region=ca-central-1#launchInstanceWizard:ami=ami-08a3c4348c32901c8) +ap-southeast-1 | HVM - ECS enabled | [ami-0e144ba210c6aca27](https://ap-southeast-1.console.aws.amazon.com/ec2/home?region=ap-southeast-1#launchInstanceWizard:ami=ami-0e144ba210c6aca27) +ap-southeast-2 | HVM - ECS enabled | [ami-014ef29b79c6c869a](https://ap-southeast-2.console.aws.amazon.com/ec2/home?region=ap-southeast-2#launchInstanceWizard:ami=ami-014ef29b79c6c869a) +eu-central-1 | HVM - ECS enabled | [ami-0cd059553ae2db346](https://eu-central-1.console.aws.amazon.com/ec2/home?region=eu-central-1#launchInstanceWizard:ami=ami-0cd059553ae2db346) +us-east-1 | HVM - ECS enabled | [ami-0dd393657bf06c830](https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#launchInstanceWizard:ami=ami-0dd393657bf06c830) +us-east-2 | HVM - ECS enabled | [ami-02ba4957a8e3c2f14](https://us-east-2.console.aws.amazon.com/ec2/home?region=us-east-2#launchInstanceWizard:ami=ami-02ba4957a8e3c2f14) +us-west-1 | HVM - ECS enabled | [ami-025ab38f4d044be62](https://us-west-1.console.aws.amazon.com/ec2/home?region=us-west-1#launchInstanceWizard:ami=ami-025ab38f4d044be62) +us-west-2 | HVM - ECS enabled | [ami-02ff2946d2cf94ef5](https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#launchInstanceWizard:ami=ami-02ff2946d2cf94ef5) +cn-north-1 | HVM - ECS enabled | [ami-07b80b3fba93cf7c3](https://cn-north-1.console.amazonaws.cn/ec2/home?region=cn-north-1#launchInstanceWizard:ami=ami-07b80b3fba93cf7c3) +cn-northwest-1 | HVM - ECS enabled | [ami-052db9ef3b5ed0e41](https://cn-northwest-1.console.amazonaws.cn/ec2/home?region=cn-northwest-1#launchInstanceWizard:ami=ami-052db9ef3b5ed0e41) 
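+As a rough illustration, one of the AMIs above can be launched with the AWS CLI, passing your cloud-config as user data. This is only a sketch: the `us-east-1` AMI ID is taken from the table above, while the key pair name, IAM instance profile name, and user-data file name are placeholders to replace with your own values.
+
+```
+# Launch the v1.5.2 ECS-enabled AMI listed for us-east-1 above
+aws ec2 run-instances \
+  --region us-east-1 \
+  --image-id ami-0dd393657bf06c830 \
+  --instance-type t2.medium \
+  --key-name my-key-pair \
+  --iam-instance-profile Name=ecsInstanceRole \
+  --user-data file://cloud-config.yml
+```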
diff --git a/content/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/_index.md b/content/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/_index.md
index 5df0f0d6fb6..4d8c73ff2e8 100644
--- a/content/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/_index.md
+++ b/content/os/v1.x/en/installation/running-rancheros/workstation/boot-from-iso/_index.md
@@ -5,7 +5,7 @@ weight: 102

 The RancherOS ISO file can be used to create a fresh RancherOS install on KVM, VMware, VirtualBox, or bare metal servers. You can download the `rancheros.iso` file from our [releases page](https://github.com/rancher/os/releases/).

-You must boot with at least **1280MB** of memory. If you boot with the ISO, you will automatically be logged in as the `rancher` user. Only the ISO is set to use autologin by default. If you run from a cloud or install to disk, SSH keys or a password of your choice is expected to be used.
+You must boot with enough memory; see the [hardware requirements]({{< baseurl >}}/os/v1.x/en/overview/#hardware-requirements) for the minimum amounts. If you boot with the ISO, you will automatically be logged in as the `rancher` user. Only the ISO is set to use autologin by default. If you run from a cloud or install to disk, SSH keys or a password of your choice is expected to be used.

 ### Install to Disk

From d9f9bdd252bef07b23f13cacef437f4950ed1adb Mon Sep 17 00:00:00 2001
From: Alena Prokharchyk
Date: Mon, 3 Jun 2019 13:52:39 -0700
Subject: [PATCH 15/33] Update _index.md

---
 content/rke/latest/en/config-options/_index.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/content/rke/latest/en/config-options/_index.md b/content/rke/latest/en/config-options/_index.md
index b1ad9fa6f07..a2c7f7c5649 100644
--- a/content/rke/latest/en/config-options/_index.md
+++ b/content/rke/latest/en/config-options/_index.md
@@ -92,17 +92,17 @@ If you want to use a different version from the supported list, please use the [

 ### Prefix Path

-For some operating systems including ROS, and CoreOS, RKE stores its resources to a different prefix path, this prefix path is by default for these operating systems is:
-```
-/opt/rke
-```
-So `/etc/kubernetes` will be stored in `/opt/rke/etc/kubernetes` and `/var/lib/etcd` will be stored in `/opt/rke/var/lib/etcd` etc.
+As part of cluster provisioning, RKE uploads configuration content, such as `/etc/kubernetes` and `/var/lib/etcd`, to the cluster nodes. By default, these configs are stored under `/`.
+For some operating systems like RancherOS and CoreOS, the location is `/opt/rke` instead of `/`. With that, `/etc/kubernetes` will be stored in `/opt/rke/etc/kubernetes`, `/var/lib/etcd` will be stored in `/opt/rke/var/lib/etcd`, and so on.
+
+To change the default prefix path for any type of cluster, you can use the following option in the cluster configuration file `cluster.yml`:

-To change the default prefix path for any cluster, you can use the following option in the cluster configuration file `cluster.yml`:
 ```
 prefix_path: /opt/custom_path
 ```

+> **Important:** Currently, `prefix_path` cannot be changed or reset after the cluster is provisioned.
+
 ### Cluster Level SSH Key Path

 RKE connects to host(s) using `ssh`. Typically, each node will have an independent path for each ssh key, i.e. `ssh_key_path`, in the `nodes` section, but if you have a SSH key that is able to access **all** hosts in your cluster configuration file, you can set the path to that ssh key at the top level.
Otherwise, you would set the ssh key path in the [nodes]({{< baseurl >}}/rke/latest/en/config-options/nodes/). From efe4b6ae7b54cdac3d2279b511aa43813df64d8b Mon Sep 17 00:00:00 2001 From: loganhz Date: Thu, 30 May 2019 07:08:54 +0800 Subject: [PATCH 16/33] Project monitoring available as of v2.2.4 --- content/rancher/v2.x/en/project-admin/tools/alerts/_index.md | 2 +- .../rancher/v2.x/en/project-admin/tools/monitoring/_index.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.x/en/project-admin/tools/alerts/_index.md b/content/rancher/v2.x/en/project-admin/tools/alerts/_index.md index d8b7b2c5ab0..d68b22c393b 100644 --- a/content/rancher/v2.x/en/project-admin/tools/alerts/_index.md +++ b/content/rancher/v2.x/en/project-admin/tools/alerts/_index.md @@ -100,7 +100,7 @@ This alert type monitors for the availability of all workloads marked with tags {{% /accordion %}} {{% accordion id="project-expression" label="Metric Expression Alerts" %}}
-_Available as of v2.2.0_ +_Available as of v2.2.4_ If you enable [project monitoring]({{< baseurl >}}/rancher/v2.x/en/project-admin/tools/#monitoring), this alert type monitors for the overload from Prometheus expression querying. diff --git a/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md b/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md index c675ca29658..c36b6b9c8b2 100644 --- a/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md +++ b/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md @@ -3,7 +3,7 @@ title: Monitoring weight: 2528 --- -_Available as of v2.2.0_ +_Available as of v2.2.4_ Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with [Prometheus](https://prometheus.io/), a leading open-source monitoring solution. Prometheus provides a _time series_ of your data, which is, according to [Prometheus documentation](https://prometheus.io/docs/concepts/data_model/): From c17aec64d54caad054b969cc22713fc6cb56bab5 Mon Sep 17 00:00:00 2001 From: Bill Maxwell Date: Fri, 24 May 2019 10:23:47 -0700 Subject: [PATCH 17/33] added resource table to docs for project monitoring --- .../v2.x/en/project-admin/tools/monitoring/_index.md | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md b/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md index c36b6b9c8b2..60578c82c98 100644 --- a/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md +++ b/content/rancher/v2.x/en/project-admin/tools/monitoring/_index.md @@ -35,13 +35,21 @@ Using Prometheus, you can monitor Rancher at both the [cluster level]({{< baseur 1. Click **Save**. +### Project Level Monitoring Resource Requirements + +Container| CPU - Request | Mem - Request | CPU - Limit | Mem - Limit | Configurable +---------|---------------|---------------|-------------|-------------|------------- +Prometheus|750m| 750Mi | 1000m | 1000Mi | Yes +Grafana | 100m | 100Mi | 200m | 200Mi | No + + **Result:** A single application,`project-monitoring`, is added as an [application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) to the project. After the application is `active`, you can start viewing [project metrics](#project-metrics) through the [Rancher dashboard]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#rancher-dashboard) or directly from [Grafana]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#grafana). ## Project Metrics If [cluster monitoring]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) is also enabled for the project, [workload metrics]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#workload-metrics) are available for the project. -If only project monitoring is enabled, you can monitor custom metrics from any [exporters](https://prometheus.io/docs/instrumenting/exporters/). You can expose some endpoints on deployments without needing to configure Prometheus for your project. +If only project monitoring is enabled, you can monitor custom metrics from any [exporters](https://prometheus.io/docs/instrumenting/exporters/). You can also expose some custom endpoints on deployments without needing to configure Prometheus for your project. 
### Example

From 1cc156b875ac6b9b1f8b9fd4de2bf8a67c7a70ba Mon Sep 17 00:00:00 2001
From: loganhz
Date: Fri, 7 Jun 2019 23:51:15 +0800
Subject: [PATCH 18/33] Registry

---
 .../admin-settings/globalregistry/_index.md   |  58 ++++++++++
 .../globalregistry/harbor/_index.md           | 101 ++++++++++++++++++
 2 files changed, 159 insertions(+)
 create mode 100644 content/rancher/v2.x/en/admin-settings/globalregistry/_index.md
 create mode 100644 content/rancher/v2.x/en/admin-settings/globalregistry/harbor/_index.md

diff --git a/content/rancher/v2.x/en/admin-settings/globalregistry/_index.md b/content/rancher/v2.x/en/admin-settings/globalregistry/_index.md
new file mode 100644
index 00000000000..6170072aed4
--- /dev/null
+++ b/content/rancher/v2.x/en/admin-settings/globalregistry/_index.md
@@ -0,0 +1,58 @@
+---
+title: Global Registry
+weight: 1145
+---
+
+_Available as of v2.3.0_
+
+Rancher's Global Registry provides a way to set up a [Harbor](https://github.com/goharbor/harbor) registry to store and manage your Docker images. The Global Registry reuses the Rancher server's SSL certificate, so you don't need to prepare additional certificates for it. The CA root certificate is added to every node of the managed Kubernetes clusters. Therefore, if you're using a private certificate authority, you can use images from the Global Registry without additional configuration of the Docker daemon on cluster nodes.
+
+> **Note:** Global Registry is only available in [HA setups]({{< baseurl >}}/rancher/v2.x/en/installation/ha/) with the [`local` cluster enabled]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#import-local-cluster).
+
+## Prerequisites
+
+Depending on the configuration options you use, check the following prerequisites before enabling Global Registry:
+
+- If you use the `filesystem` type for Docker registry storage, or the `internal` type for the database or Redis, [persistent volumes]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/) are required in the local cluster.
+- If you use the `external` type database, you need to create the databases in PostgreSQL before deploying the registry. You can configure which databases to use in the configuration options.
+
+## Enabling Global Registry
+
+As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), you can configure Rancher to deploy the Global Registry.
+
+1. From the **Global** view, select **Tools > Global Registry** from the main menu.
+
+1. Enter in your desired configuration options. For detailed instructions, see the [Configuration Options]({{< baseurl >}}/rancher/v2.x/en/admin-settings/globalregistry/harbor/) section.
+
+1. Click **Save**.
+
+**Result:** A Harbor instance is deployed as an [application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) named `global-registry-harbor` to the local cluster's `system` project.
+
+## Disabling Global Registry
+
+To disable the Global Registry:
+
+1. From the **Global** view, select **Tools > Global Registry** from the main menu.
+
+1. Click **Disable registry**, then click the red button again to confirm the disable action.
+
+**Result:** The `global-registry-harbor` application in the local cluster's `system` project is removed. Note that persistent volumes used by the Global Registry are not removed on disabling, to prevent data loss. If you want to clean them up, manually delete the relevant volumes in the local cluster's `system` project.
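+As a minimal sketch of that manual cleanup with `kubectl`, where the namespace and claim names are placeholders that depend on how `global-registry-harbor` was configured:
+
+```
+# List persistent volume claims left behind in the namespace that hosted Harbor
+kubectl get pvc -n <harbor-namespace>
+
+# Delete a claim once you are sure its data is no longer needed
+kubectl delete pvc <claim-name> -n <harbor-namespace>
+```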
+ +## Using Global Registry + +Once the Global Registry is enabled, you can: + +1. Access Harbor UI through the endpoint `/registry`. + +1. Use the Rancher server hostname as the registry hostname in image names. For example: + ``` + docker pull /library/busybox:latest + ``` + +1. If Notary is enabled, the endpoint for notary server is `/registry/notary`. + +1. Use Global Registry as a private registry in Rancher projects, see [how to use registries]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/registries/). + +> **Notes:** +> +>- The authentication of Harbor is independent of Rancher authentication, you should log in to Harbor UI and manage Harbor users for registry account management. diff --git a/content/rancher/v2.x/en/admin-settings/globalregistry/harbor/_index.md b/content/rancher/v2.x/en/admin-settings/globalregistry/harbor/_index.md new file mode 100644 index 00000000000..765c71eb274 --- /dev/null +++ b/content/rancher/v2.x/en/admin-settings/globalregistry/harbor/_index.md @@ -0,0 +1,101 @@ +--- +title: Global Registry Configuration +weight: 1 +--- + +_Available as of v2.3.0-alpha_ + +While configuring global registry, there are multiple options that can be configured. + +## General + +Field | Description | Required | Editable | Default +----|-----------------|------------|------------|------------ +Admin Password | The initial password of Harbor admin. Change it from Harbor UI after the registry is ready | Yes | No | n/a +Encryption Key For Harbor | The key used for encryption. Must be a string of 16 chars | No | Yes | n/a + +## Registry + +Field | Description | Required | Editable | Default +----|-----------------|------------|------------|------------ +Storage Backend Type | Storage type for images: `filesystem` or `s3`. If `filesystem` is selected, persistent volume is required in your local cluster. | Yes | No | filesystem +Source | Whether to use a storage class to provision a new PV or to use an existing PVC | Yes | Yes | Use a storage class +Storage Class | Specify the storage class used to provision the persistent volume(A storage class is required in the local cluster to use this option) | Yes, when use SC | Yes | The default storage class +Persistent Volume Size | Specify the size of the persistent volume | Yes, when use SC | Yes | 100Gi +Existing Claim | Specify the existing PVC for registry images(An existing PVC is required to use this option) | Yes, when use existing PV | Yes | n/a +Registry CPU Limit | CPU limit for the docker registry workload | Yes | Yes | 1000 (milli CPUs) +Registry Memory Limit | Memory limit for the docker registry workload | Yes | Yes | 2048 (MiB) +Registry CPU Reservation | CPU reservation for the docker registry workload | Yes | Yes | 100 (milli CPUs) +Registry Memory Reservation | Memory reservation for the docker registry workload | Yes | Yes | 256 (MiB) +Registry Node Selector | Select the nodes where the docker registry workload will be scheduled to | No | Yes | n/a + +## Database + +Field | Description | Required | Editable | Default +----|-----------------|------------|------------|------------ +Config Database Type | Choose `internal` or `external`. When `internal` is selected, a PostgreSQL workload will be included in the application, and a persistent volume is required for it. When `external` is selected, you can configure an external PostgreSQL. 
You should create databases for Harbor core service, Clair and Notary before enabling.| Yes | No | internal +Source | Whether to use a storage class to provision a new PV or to use an existing PVC | Yes, when use internal database | Yes | Use a storage class +Storage Class | Specify the storage class used to provision the persistent volume(A storage class is required in the local cluster to use this option) | Yes, when use SC and internal database | Yes | The default storage class +Persistent Volume Size | Specify the size of the persistent volume | Yes, when use SC and internal database | Yes | 5Gi +Existing Claim | Specify the existing PVC for PostgreSQL database(An existing PVC is required to use this option) | Yes, when use existing PV and internal database | Yes | n/a +Database CPU Limit | CPU limit for the database workload | Yes | Yes | 500 (milli CPUs) +Database Memory Limit | Memory limit for the database workload | Yes | Yes | 2048 (MiB) +Database CPU Reservation | CPU reservation for the database workload | Yes | Yes | 100 (milli CPUs) +Database Memory Reservation | Memory reservation for the database workload | Yes | Yes | 256 (MiB) +Database Node Selector | Select the nodes where the database workload will be scheduled to | No (Only shows when use external database) | Yes | n/a +SSL Mode for PostgreSQL | SSL mode used to connect the external database | No (Only shows when use external database) | Yes | disable +Host for PostgreSQL | The hostname for external database | Yes (Only shows when use external database) | Yes | n/a +Port for PostgreSQL | The port for external database | Yes (Only shows when use external database) | Yes | 5432 +Username for PostgreSQL | The username for external database | Yes (Only shows when use external database) | Yes | n/a +Password for PostgreSQL | The password for external database | Yes (Only shows when use external database) | Yes | n/a +Core Database | The database used by core service | No (Only shows when use external database) | Yes | registry +Clair Database | The database used by Clair | No (Only shows when use external database) | Yes | clair +Notary Server Database | The database used by Notary server | No (Only shows when use external database) | Yes | notary_server +Notary Signer Database | The database used by Notary signer | No (Only shows when use external database) | Yes | notary_signer + + +## Redis + +Field | Description | Required | Editable | Default +----|-----------------|------------|------------|------------ +Config Redis Type | Choose `internal` or `external`. When `internal` is selected, a Redis workload will be included in the application, and a persistent volume is required for it. When `external` is selected, you can configure an external Redis. 
| Yes | No | internal +Source | Whether to use a storage class to provision a new PV or to use an existing PVC | Yes, when use internal Redis | Yes | Use a storage class +Storage Class | Specify the storage class used to provision the persistent volume(A storage class is required in the local cluster to use this option) | Yes, when use SC and internal Redis | Yes | The default storage class +Persistent Volume Size | Specify the size of the persistent volume | Yes, when use SC and internal Redis | Yes | 5Gi +Existing Claim | Specify the existing PVC for Redis(An existing PVC is required to use this option) | Yes, when use existing PV and internal Redis | Yes | n/a +Redis CPU Limit | CPU limit for the Redis workload | Yes | Yes | 500 (milli CPUs) +Redis Memory Limit | Memory limit for the Redis workload | Yes | Yes | 2048 (MiB) +Redis CPU Reservation | CPU reservation for the Redis workload | Yes | Yes | 100 (milli CPUs) +Redis Memory Reservation | Memory reservation for the Redis workload | Yes | Yes | 256 (MiB) +Redis Node Selector | Select the nodes where the Redis workload will be scheduled to | No | Yes | n/a +Host for Redis | The hostname for external Redis | Yes (Only shows when use external Redis) | Yes | n/a +Port for Redis | The port for external Redis | Yes (Only shows when use external Redis) | Yes | 6379 +Password for Redis | The password for external Redis | No (Only shows when use external Redis) | Yes | n/a +Jobservice Database Index | The database index for jobservice | Yes (Only shows when use external Redis) | Yes | n/a +Registry Database Index | The database index for docker registry | Yes (Only shows when use external Redis) | Yes | n/a + +## Clair + +Field | Description | Required | Editable | Default +----|-----------------|------------|------------|------------ +Enable Clair | Whether or not to enable Clair for vulnerabilities scanning | Yes | Yes | true +Clair CPU Limit | CPU limit for the Clair workload | Yes, when Clair enabled | Yes | 500 (milli CPUs) +Clair Memory Limit | Memory limit for the Clair workload | Yes, when Clair enabled | Yes | 2048 (MiB) +Clair CPU Reservation | CPU reservation for the Clair workload | Yes, when Clair enabled | Yes | 100 (milli CPUs) +Clair Memory Reservation | Memory reservation for the Clair workload | Yes, when Clair enabled | Yes | 256 (MiB) +Clair Node Selector | Select the nodes where the Clair workload will be scheduled to | Yes, when Clair enabled | Yes | n/a + +## Notary + +Field | Description | Required | Editable | Default +----|-----------------|------------|------------|------------ +Enable Notary | Whether or not to enable Notary for [Docker Content Trust](https://docs.docker.com/engine/security/trust/content_trust/). When enabled, the access endpoint to the Notary server is `/registry/notary`. 
| Yes | Yes | true +Notary Server CPU Limit | CPU limit for the Notary Server workload | Yes, when Notary enabled | Yes | 500 (milli CPUs) +Notary Server Memory Limit | Memory limit for the Notary Server workload | Yes, when Notary enabled | Yes | 2048 (MiB) +Notary Server CPU Reservation | CPU reservation for the Notary Server workload | Yes, when Notary enabled | Yes | 100 (milli CPUs) +Notary Server Memory Reservation | Memory reservation for the Notary Server workload | Yes, when Notary enabled | Yes | 256 (MiB) +Notary Signer CPU Limit | CPU limit for the Notary Signer workload | Yes, when Notary enabled | Yes | 500 (milli CPUs) +Notary Signer Memory Limit | Memory limit for the Notary Signer workload | Yes, when Notary enabled | Yes | 2048 (MiB) +Notary Signer CPU Reservation | CPU reservation for the Notary Signer workload | Yes, when Notary enabled | Yes | 100 (milli CPUs) +Notary Signer Memory Reservation | Memory reservation for the Notary Signer workload | Yes, when Notary enabled | Yes | 256 (MiB) +Notary Node Selector | Select the nodes where the Notary Server and Notary Signer workloads will be scheduled to | No | Yes | n/a From 329c262720124ef0d7e0d8326d2b8bd490d96ab5 Mon Sep 17 00:00:00 2001 From: Lev Lazinskiy Date: Mon, 10 Jun 2019 22:54:20 -0700 Subject: [PATCH 19/33] docs: Istio Edits Update the main Istio configuration page to clean up some of the language, remove repetitive statements, and fix a few minor grammar issues. Clean up some of the language in the project-admin Istio documentation and a small update in the Istio configuration options page. --- .../tools/service-mesh/_index.md | 10 +- .../tools/service-mesh/istio/_index.md | 114 +++++++++--------- .../en/project-admin/service-mesh/_index.md | 19 +-- 3 files changed, 72 insertions(+), 71 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md index 6f747286cef..8eb1352def0 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md @@ -11,15 +11,15 @@ Using Rancher, you can connect, secure, control, and observe services through in As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Istio to your Kubernetes cluster. -1. From the **Global** view, navigate to the cluster that you want to configure service mesh. +1. From the **Global** view, navigate to the cluster that you want to configure the service mesh for. 1. Select **Tools > Service Mesh** in the navigation bar. -1. Select **Enable** to show the [Service mesh configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/). Ensure you have enough resources for service mesh and on your worker nodes to enable service mesh. Enter in your desired configuration options. +1. Select **Enable** to show the [Service mesh configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/). Enter in your desired configuration options. 1. Click **Save**. -**Result:** The istio will be deployed as well as an application. The istio application, `cluster-istio`, is added as an [application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project. After the application is `active`, you can start using Istio. 
+**Result:** The Istio application, `cluster-istio`, is added as an [application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project. After the application is `active`, you can start using Istio. > **Note:** When enabling service mesh, you need to ensure your worker nodes and Istio pod have enough resources. In larger deployments, it is strongly advised that the service mesh infrastructure be placed on dedicated nodes in the cluster. @@ -31,13 +31,13 @@ Once the service mesh is `active`, you can: 1. Access [Jaeger UI](https://www.jaegertracing.io/) by clicking Jaeger UI icon in service mesh page. 1. Access [Grafana UI](https://grafana.com/) by clicking Grafana UI icon in service mesh page. 1. Access [Prometheus UI](https://prometheus.io/) by clicking Prometheus UI icon in service mesh page. -1. Go to project to [view traffic graph, traffic metrics and manage traffic]({{< baseurl >}}/rancher/v2.x/en/project-admin/service-mesh/). +1. Go to a project to [view traffic graph, traffic metrics and manage traffic]({{< baseurl >}}/rancher/v2.x/en/project-admin/service-mesh/). ## Disabling Service Mesh To disable the service mesh: -1. From the **Global** view, navigate to the cluster that you want to disable service mesh. +1. From the **Global** view, navigate to the cluster that you want to disable the service mesh for. 1. Select **Tools > Service Mesh** in the navigation bar. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md index f5de6dc7411..5f84f6eb7fd 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md @@ -5,86 +5,86 @@ weight: 1 _Available as of v2.3.0-alpha_ -While configuring service mesh, there are multiple options that can be configured. +There are several configuration options for the service mesh. ## PILOT -Option | Description| Required | Default +Option | Description| Required | Default -------|------------|-------|------- -Pilot CPU Limit | CPU resource limit for the istio-pilot pod.| Yes | 1000 -Pilot CPU Reservation | CPU reservation for the istio-pilot pod. | Yes | 500 -Pilot Memory Limit | Memory resource limit for the istio-pilot pod. | Yes | 4096 -Pilot Memory Reservation | Memory resource requests for the istio-pilot pod. | Yes | 2048 -Trace sampling Percentage | [Trace sampling percentage](https://istio.io/docs/tasks/telemetry/distributed-tracing/overview/#trace-sampling) | Yes | 1 -Pilot Selector | Ability to select the nodes in which istio-pilot pod is deployed to. To use this option, the nodes must have labels. | No | n/a +Pilot CPU Limit | CPU resource limit for the istio-pilot pod.| Yes | 1000 +Pilot CPU Reservation | CPU reservation for the istio-pilot pod. | Yes | 500 +Pilot Memory Limit | Memory resource limit for the istio-pilot pod. | Yes | 4096 +Pilot Memory Reservation | Memory resource requests for the istio-pilot pod. | Yes | 2048 +Trace sampling Percentage | [Trace sampling percentage](https://istio.io/docs/tasks/telemetry/distributed-tracing/overview/#trace-sampling) | Yes | 1 +Pilot Selector | Ability to select the nodes in which istio-pilot pod is deployed to. To use this option, the nodes must have labels. 
| No | n/a ## MIXER -Option | Description| Required | Default +Option | Description| Required | Default -------|------------|-------|------- -Mixer Telemetry CPU Limit | CPU resource limit for the istio-telemetry pod.| Yes | 4800 -Mixer Telemetry CPU Reservation | CPU reservation for the istio-telemetry pod.| Yes | 1000 -Mixer Telemetry Memory Limit | Memory resource limit for the istio-telemetry pod.| Yes | 4096 -Mixer Telemetry Memory Reservation | Memory resource requests for the istio-telemetry pod.| Yes | 1024 -Enable Mixer Policy | Whether or not to deploy the istio-policy. | Yes | False -Mixer Policy CPU Limit | CPU resource limit for the istio-policy pod. | Yes, when policy enabled | 4800 -Mixer Policy CPU Reservation | CPU reservation for the istio-policy pod. | Yes, when policy enabled | 1000 -Mixer Policy Memory Limit | Memory resource limit for the istio-policy pod. | Yes, when policy enabled | 4096 -Mixer Policy Memory Reservation | Memory resource requests for the istio-policy pod. | Yes, when policy enabled | 1024 -Mixer Selector | Ability to select the nodes in which istio-policy and istio-telemetry pods are deployed to. To use this option, the nodes must have labels. | No | n/a +Mixer Telemetry CPU Limit | CPU resource limit for the istio-telemetry pod.| Yes | 4800 +Mixer Telemetry CPU Reservation | CPU reservation for the istio-telemetry pod.| Yes | 1000 +Mixer Telemetry Memory Limit | Memory resource limit for the istio-telemetry pod.| Yes | 4096 +Mixer Telemetry Memory Reservation | Memory resource requests for the istio-telemetry pod.| Yes | 1024 +Enable Mixer Policy | Whether or not to deploy the istio-policy. | Yes | False +Mixer Policy CPU Limit | CPU resource limit for the istio-policy pod. | Yes, when policy enabled | 4800 +Mixer Policy CPU Reservation | CPU reservation for the istio-policy pod. | Yes, when policy enabled | 1000 +Mixer Policy Memory Limit | Memory resource limit for the istio-policy pod. | Yes, when policy enabled | 4096 +Mixer Policy Memory Reservation | Memory resource requests for the istio-policy pod. | Yes, when policy enabled | 1024 +Mixer Selector | Ability to select the nodes in which istio-policy and istio-telemetry pods are deployed to. To use this option, the nodes must have labels. | No | n/a ## TRACING -Option | Description| Required | Default +Option | Description| Required | Default -------|------------|-------|------- -Enable Tracing | Whether or not to deploy the istio-tracing. | Yes | True -Tracing CPU Limit | CPU resource limit for the istio-tracing pod. | Yes | 500 -Tracing CPU Reservation | CPU reservation for the istio-tracing pod. | Yes | 100 -Tracing Memory Limit | Memory resource limit for the istio-tracing pod. | Yes | 1024 -Tracing Memory Reservation | Memory resource requests for the istio-tracing pod. | Yes | 100 -Tracing Selector | Ability to select the nodes in which tracing pod is deployed to. To use this option, the nodes must have labels. | No | n/a +Enable Tracing | Whether or not to deploy the istio-tracing. | Yes | True +Tracing CPU Limit | CPU resource limit for the istio-tracing pod. | Yes | 500 +Tracing CPU Reservation | CPU reservation for the istio-tracing pod. | Yes | 100 +Tracing Memory Limit | Memory resource limit for the istio-tracing pod. | Yes | 1024 +Tracing Memory Reservation | Memory resource requests for the istio-tracing pod. | Yes | 100 +Tracing Selector | Ability to select the nodes in which tracing pod is deployed to. To use this option, the nodes must have labels. 
| No | n/a ## INGRESS GATEWAY -Option | Description| Required | Default +Option | Description| Required | Default -------|------------|-------|------- -Enable Ingress Gateway | Whether or not to deploy the istio-ingressgateway. | Yes | False -Service Type of Istio Ingress Gateway | How to expose the gateway. You can choose NodePort or Loadbalancer | Yes | NodePort -Http2 Port | The NodePort for http2 requests | Yes | 31380 -Https Port | The NodePort for https requests | Yes | 31390 -Load Balancer IP | Ingress Gateway Load Balancer IP | No | n/a -Load Balancer Source Ranges | Ingress Gateway Load Balancer Source Ranges | No | n/a -Ingress Gateway CPU Limit | CPU resource limit for the istio-ingressgateway pod. | Yes | 2000 -Ingress Gateway CPU Reservation | CPU reservation for the istio-ingressgateway pod. | Yes | 100 -Ingress Gateway Memory Limit | Memory resource limit for the istio-ingressgateway pod. | Yes | 1024 -Ingress Gateway Memory Reservation | Memory resource requests for the istio-ingressgateway pod. | Yes | 128 -Ingress Gateway Selector | Ability to select the nodes in which istio-ingressgateway pod is deployed to. To use this option, the nodes must have labels. | No | n/a +Enable Ingress Gateway | Whether or not to deploy the istio-ingressgateway. | Yes | False +Service Type of Istio Ingress Gateway | How to expose the gateway. You can choose NodePort or Loadbalancer | Yes | NodePort +Http2 Port | The NodePort for http2 requests | Yes | 31380 +Https Port | The NodePort for https requests | Yes | 31390 +Load Balancer IP | Ingress Gateway Load Balancer IP | No | n/a +Load Balancer Source Ranges | Ingress Gateway Load Balancer Source Ranges | No | n/a +Ingress Gateway CPU Limit | CPU resource limit for the istio-ingressgateway pod. | Yes | 2000 +Ingress Gateway CPU Reservation | CPU reservation for the istio-ingressgateway pod. | Yes | 100 +Ingress Gateway Memory Limit | Memory resource limit for the istio-ingressgateway pod. | Yes | 1024 +Ingress Gateway Memory Reservation | Memory resource requests for the istio-ingressgateway pod. | Yes | 128 +Ingress Gateway Selector | Ability to select the nodes in which istio-ingressgateway pod is deployed to. To use this option, the nodes must have labels. | No | n/a ## PROMETHEUS -Option | Description| Required | Default +Option | Description| Required | Default -------|------------|-------|------- -Prometheus CPU Limit | CPU resource limit for the Prometheus pod.| Yes | 1000 -Prometheus CPU Reservation | CPU reservation for the Prometheus pod.| Yes | 750 -Prometheus Memory Limit | Memory resource limit for the Prometheus pod.| Yes | 1024 -Prometheus Memory Reservation | Memory resource requests for the Prometheus pod.| Yes | 750 -Retention for Prometheus | How long your Prometheus instance retains data | Yes | 6 -Prometheus Selector | Ability to select the nodes in which Prometheus pod is deployed to. To use this option, the nodes must have labels.| No | n/a +Prometheus CPU Limit | CPU resource limit for the Prometheus pod.| Yes | 1000 +Prometheus CPU Reservation | CPU reservation for the Prometheus pod.| Yes | 750 +Prometheus Memory Limit | Memory resource limit for the Prometheus pod.| Yes | 1024 +Prometheus Memory Reservation | Memory resource requests for the Prometheus pod.| Yes | 750 +Retention for Prometheus | How long your Prometheus instance retains data | Yes | 6 +Prometheus Selector | Ability to select the nodes in which Prometheus pod is deployed to. 
To use this option, the nodes must have labels.| No | n/a ## GRAFANA -Option | Description| Required | Default +Option | Description| Required | Default -------|------------|-------|------- -Enable Grafana | Whether or not to deploy the Grafana.| Yes | True -Grafana CPU Limit | CPU resource limit for the Grafana pod.| Yes, when Grafana enabled | 200 -Grafana CPU Reservation | CPU reservation for the Grafana pod.| Yes, when Grafana enabled | 100 -Grafana Memory Limit | Memory resource limit for the Grafana pod.| Yes, when Grafana enabled | 512 -Grafana Memory Reservation | Memory resource requests for the Grafana pod.| Yes, when Grafana enabled | 100 -Grafana Selector | Ability to select the nodes in which Grafana pod is deployed to. To use this option, the nodes must have labels. | No | n/a -Enable Persistent Storage for Grafana | Enable Persistent Storage for Grafana | Yes, when Grafana enabled | False -Source | Use a Storage Class to provision a new persistent volume or Use an existing persistent volume claim | Yes, when Grafana enabled and enabled PV | Use SC -Storage Class | Storage Class for provisioning PV for Grafana | Yes, when Grafana enabled, enabled PV and use storage class | Use the default class -Persistent Volume Size | The size for the PV you would like to provision for Grafana | Yes, when Grafana enabled, enabled PV and use storage class | 5Gi -Existing Claim | Use existing PVC for Grafna | Yes, when Grafana enabled, enabled PV and use existing PVC | n/a +Enable Grafana | Whether or not to deploy the Grafana.| Yes | True +Grafana CPU Limit | CPU resource limit for the Grafana pod.| Yes, when Grafana enabled | 200 +Grafana CPU Reservation | CPU reservation for the Grafana pod.| Yes, when Grafana enabled | 100 +Grafana Memory Limit | Memory resource limit for the Grafana pod.| Yes, when Grafana enabled | 512 +Grafana Memory Reservation | Memory resource requests for the Grafana pod.| Yes, when Grafana enabled | 100 +Grafana Selector | Ability to select the nodes in which Grafana pod is deployed to. To use this option, the nodes must have labels. | No | n/a +Enable Persistent Storage for Grafana | Enable Persistent Storage for Grafana | Yes, when Grafana enabled | False +Source | Use a Storage Class to provision a new persistent volume or Use an existing persistent volume claim | Yes, when Grafana enabled and enabled PV | Use SC +Storage Class | Storage Class for provisioning PV for Grafana | Yes, when Grafana enabled, enabled PV and use storage class | Use the default class +Persistent Volume Size | The size for the PV you would like to provision for Grafana | Yes, when Grafana enabled, enabled PV and use storage class | 5Gi +Existing Claim | Use existing PVC for Grafna | Yes, when Grafana enabled, enabled PV and use existing PVC | n/a diff --git a/content/rancher/v2.x/en/project-admin/service-mesh/_index.md b/content/rancher/v2.x/en/project-admin/service-mesh/_index.md index 80a55be24f6..12885629f8a 100644 --- a/content/rancher/v2.x/en/project-admin/service-mesh/_index.md +++ b/content/rancher/v2.x/en/project-admin/service-mesh/_index.md @@ -9,36 +9,36 @@ Using Rancher, you can connect, secure, control, and observe services through in >**Prerequisites:** > ->- [Service Mesh]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/) must be enabled in cluster level. 
->- To be a part of an Istio service mesh, pods and services in a Kubernetes cluster must satisfy the [Istio Pods and Services Requirements](https://istio.io/docs/setup/kubernetes/prepare/requirements/)
+>- [Service Mesh]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/) must be enabled in the cluster.
+>- To be a part of an Istio service mesh, pods and services in a Kubernetes cluster must satisfy the [Istio Pods and Services Requirements](https://istio.io/docs/setup/kubernetes/prepare/requirements/)

 ## Istio sidecar auto injection

-In create and edit namespace page, you can enable or disable [Istio sidecar auto injection](https://istio.io/blog/2019/data-plane-setup/#automatic-injection). When you enable it, Rancher will add `istio-injection=enabled` label to the namespace automatically.
+On the create and edit namespace pages, you can enable or disable [Istio sidecar auto injection](https://istio.io/blog/2019/data-plane-setup/#automatic-injection). When you enable it, Rancher automatically adds the `istio-injection=enabled` label to the namespace.

 > **Note:** Injection occurs at pod creation time. If a pod was created before you enabled auto injection, you need to kill the running pod and verify that a new pod is created with the injected sidecar.

 ## View Traffic Graph

-Rancher integrates Kiali Graph into Rancher UI. The Kiali graph provides a powerful way to visualize the topology of your service mesh. It shows you which services communicate with each other.
+Rancher integrates the Kiali graph into the Rancher UI. The Kiali graph provides a powerful way to visualize the topology of your service mesh. It shows you which services communicate with each other.

 To see the traffic graph for a particular namespace:

-1. From the **Global** view, navigate to the project that you want to view traffic graph.
+1. From the **Global** view, navigate to the project that you want to view the traffic graph for.

 1. Select **Service Mesh** in the navigation bar.

 1. Select **Traffic Graph** in the navigation bar.

-1. Select the namespace. Note: It only shows the namespaces which has `istio-injection=enabled` label
+1. Select the namespace. Note: Only namespaces that have the `istio-injection=enabled` label are shown.

 ## View Traffic Metrics

-With Istio’s monitoring features, it provides visibility into the performance of all your services.
+Istio’s monitoring features provide visibility into the performance of all your services.

 To see the Success Rate, Request Volume, 4xx Request Count, Project 5xx Request Count and Request Duration metrics:

-1. From the **Global** view, navigate to the project that you want to view traffic metrics.
+1. From the **Global** view, navigate to the project that you want to view traffic metrics for.

 1. Select **Service Mesh** in the navigation bar.

 1. Select **Traffic Metrics** in the navigation bar.

 ## Other Istio Features

-As Istio has been deployed in your cluster, you can use all [Istio Features](https://istio.io/docs/concepts/what-is-istio/#core-features) in the cluster.
+There are many other [Istio Features](https://istio.io/docs/concepts/what-is-istio/#core-features)
+that you can now use in your cluster.
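+For reference, the namespace labeling that Rancher performs for sidecar auto injection can also be reproduced with plain `kubectl`. This is only a sketch; the `demo` namespace and the `app=hello-world` pod label are hypothetical examples.
+
+```
+# Label the namespace so that newly created pods get the Envoy sidecar injected
+kubectl label namespace demo istio-injection=enabled
+
+# Pods created before the label was added are not injected; recreate them
+kubectl -n demo delete pod -l app=hello-world
+
+# Each injected pod should now show an additional istio-proxy container
+kubectl -n demo get pods
+```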
From f0e4f45b7fd29a45f20f4423ca722ae5ca6d1664 Mon Sep 17 00:00:00 2001 From: Denise Date: Tue, 11 Jun 2019 09:24:50 -0700 Subject: [PATCH 20/33] Update _index.md --- .../rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md index 8eb1352def0..d080d5d5f77 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md @@ -15,7 +15,7 @@ As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global 1. Select **Tools > Service Mesh** in the navigation bar. -1. Select **Enable** to show the [Service mesh configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/). Enter in your desired configuration options. +1. Select **Enable** to show the [Service mesh configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/). Enter in your desired configuration options. Ensure you have enough resources for service mesh and on your worker nodes to enable service mesh. 1. Click **Save**. From 9e05630647f0b075f7035aed9a4d4035c2576a95 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 14 Jun 2019 11:20:26 -0700 Subject: [PATCH 21/33] Clarify HPA docs --- .../horitzontal-pod-autoscaler/_index.md | 931 +----------------- .../manage-hpa-with-kubectl/_index.md | 376 +++++++ .../manage-hpa-with-rancher-ui/_index.md | 55 ++ .../testing-hpa/_index.md | 491 +++++++++ 4 files changed, 935 insertions(+), 918 deletions(-) create mode 100644 content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md create mode 100644 content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md create mode 100644 content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md index 70d5324d100..c87c223204d 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md @@ -5,12 +5,11 @@ weight: 3026 Using the Kubernetes [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) feature (HPA), you can configure your cluster to automatically scale the services it's running up or down. ->**Note:** -> ->- Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler. ->- You can create, manage, and delete HPAs using Rancher UI in Rancher v2.3.0-alpha and higher version. It only supports HPA in `autoscaling/v2beta2` API. +Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler. -### Why Use Horizontal Pod Autoscaler? +You can create, manage, and delete HPAs using the Rancher UI in Rancher v2.3.0-alpha and higher versions. It only supports HPA in the `autoscaling/v2beta2` API. + +## Why Use Horizontal Pod Autoscaler? 
Using HPA, you can automatically scale the number of pods within a replication controller, deployment, or replica set up or down. HPA automatically scales the number of pods that are running for maximum efficiency. Factors that affect the number of pods include: @@ -23,7 +22,7 @@ HPA improves your services by: - Releasing hardware resources that would otherwise be wasted by an excessive number of pods. - Increase/decrease performance as needed to accomplish service level agreements. -### How HPA Works +## How HPA Works ![HPA Schema]({{< baseurl >}}/img/rancher/horizontal-pod-autoscaler.jpg) @@ -38,925 +37,21 @@ Flag | Default | Description | For full documentation on HPA, refer to the [Kubernetes Documentation](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). -### Horizontal Pod Autoscaler API Objects +## Horizontal Pod Autoscaler API Objects HPA is an API resource in the Kubernetes `autoscaling` API group. The current stable version is `autoscaling/v1`, which only includes support for CPU autoscaling. To get additional support for scaling based on memory and custom metrics, use the beta version instead: `autoscaling/v2beta1`. For more information about the HPA API object, see the [HPA GitHub Readme](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object). -### Rancher UI +## Managing HPAs -You can create, manage, and delete HPAs using Rancher UI: +In Rancher v2.3.x+, the Rancher UI supports [creating, managing, and deleting HPAs]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/). It lets you configure CPU or memory usage as the metric that the HPA uses to scale. -#### Creating a HPA +For prior versions of Rancher, you can [manage HPAs using `kubectl`]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md). You also need to use `kubectl` if you want to create HPAs that scale based on other metrics than CPU and memory. -1. From the **Global** view, open the project that you want to deploy a HPA to. +## Testing HPAs with a Service Deployment -1. Select **Workloads** in the navigation bar and then select the **HPA** tab. +In Rancher v2.3.x+, you can see your HPA's current number of replicas by going to your project's **HPA** tab. For more information, refer to [Get HPA Metrics and Status]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/). -1. Click **Add HPA** - -1. Enter a **Name** for the HPA. - -1. Select a **Namespace** for the HPA. - -1. Select a **Deployment** as scale target for the HPA. - -1. Specify the **Minimum Scale** and **Maximum Scale** for the HPA. - -1. Configure the metrics for the HPA - -1. Click **Create** to create the HPA - -**Result:** The HPA is deployed to the chosen namespace. You can view the HPA's status from the project's **Workloads** -> **HPA** view. - -#### Getting HPA info - -1. From the **Global** view, open the project that you want to deploy a HPA to. - -1. Select **Workloads** in the navigation bar and then select the **HPA** tab. - -1. Find the HPA which you would like to view info - -1. Click the name of the HPA - -1. You can view the HPA info in the HPA detail page - - -#### Deleting HPA - -1. From the **Global** view, open the project that you want to deploy a HPA to. - -1. Select **Workloads** in the navigation bar and then select the **HPA** tab. - -1. 
Find the HPA which you would like to delete - -1. Click **Ellipsis (...) > Delete**. - -1. Click **Delete** to confim. - -**Result:** The HPA is deleted from current cluster. - -### kubectl Commands - -You can create, manage, and delete HPAs using kubectl: - -- Creating HPA - - - With manifest: `kubectl create -f ` - - - Without manifest (Just support CPU): `kubectl autoscale deployment hello-world --min=2 --max=5 --cpu-percent=50` - -- Getting HPA info - - - Basic: `kubectl get hpa hello-world` - - - Detailed description: `kubectl describe hpa hello-world` - -- Deleting HPA - - - `kubectl delete hpa hello-world` - -### HPA Manifest Definition Example - -The following snippet demonstrates use of different directives in an HPA manifest. See the list below the sample to understand the purpose of each directive. - -```yml -apiVersion: autoscaling/v2beta1 -kind: HorizontalPodAutoscaler -metadata: - name: hello-world -spec: - scaleTargetRef: - apiVersion: extensions/v1beta1 - kind: Deployment - name: hello-world - minReplicas: 1 - maxReplicas: 10 - metrics: - - type: Resource - resource: - name: cpu - targetAverageUtilization: 50 - - type: Resource - resource: - name: memory - targetAverageValue: 100Mi -``` - - -Directive | Description ----------|----------| - `apiVersion: autoscaling/v2beta1` | The version of the Kubernetes `autoscaling` API group in use. This example manifest uses the beta version, so scaling by CPU and memory is enabled. | - `name: hello-world` | Indicates that HPA is performing autoscaling for the `hello-word` deployment. | - `minReplicas: 1` | Indicates that the minimum number of replicas running can't go below 1. | - `maxReplicas: 10` | Indicates the maximum number of replicas in the deployment can't go above 10. - `targetAverageUtilization: 50` | Indicates the deployment will scale pods up when the average running pod uses more than 50% of its requested CPU. - `targetAverageValue: 100Mi` | Indicates the deployment will scale pods up when the average running pod uses more that 100Mi of memory. -
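+The example manifest above uses the `autoscaling/v2beta1` API, while HPAs managed through the Rancher UI use `autoscaling/v2beta2`, which expresses the same targets in a slightly different shape. The following is a sketch of the equivalent policy in `v2beta2` form, assuming the same `hello-world` deployment:
+
+```yml
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: hello-world
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: hello-world
+  minReplicas: 1
+  maxReplicas: 10
+  metrics:
+  # Scale up when average CPU usage exceeds 50% of the requested CPU
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 50
+  # Scale up when average memory usage exceeds 100Mi per pod
+  - type: Resource
+    resource:
+      name: memory
+      target:
+        type: AverageValue
+        averageValue: 100Mi
+```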
- -#### Configuring HPA to Scale Using Resource Metrics - -Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler. Run the following commands to check if metrics are available in your installation: - -``` -$ kubectl top nodes -NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% -node-controlplane 196m 9% 1623Mi 42% -node-etcd 80m 4% 1090Mi 28% -node-worker 64m 3% 1146Mi 29% -$ kubectl -n kube-system top pods -NAME CPU(cores) MEMORY(bytes) -canal-pgldr 18m 46Mi -canal-vhkgr 20m 45Mi -canal-x5q5v 17m 37Mi -canal-xknnz 20m 37Mi -kube-dns-7588d5b5f5-298j2 0m 22Mi -kube-dns-autoscaler-5db9bbb766-t24hw 0m 5Mi -metrics-server-97bc649d5-jxrlt 0m 12Mi -$ kubectl -n kube-system logs -l k8s-app=metrics-server -I1002 12:55:32.172841 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:https://kubernetes.default.svc?kubeletHttps=true&kubeletPort=10250&useServiceAccount=true&insecure=true -I1002 12:55:32.172994 1 heapster.go:72] Metrics Server version v0.2.1 -I1002 12:55:32.173378 1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default.svc" and version -I1002 12:55:32.173401 1 configs.go:62] Using kubelet port 10250 -I1002 12:55:32.173946 1 heapster.go:128] Starting with Metric Sink -I1002 12:55:32.592703 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key) -I1002 12:55:32.925630 1 heapster.go:101] Starting Heapster API server... -[restful] 2018/10/02 12:55:32 log.go:33: [restful/swagger] listing is available at https:///swaggerapi -[restful] 2018/10/02 12:55:32 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/ -I1002 12:55:32.928597 1 serve.go:85] Serving securely on 0.0.0.0:443 -``` - -If you have created your cluster in Rancher v2.0.6 or before, please refer to [Manual installation](#manual-installation) - -#### Configuring HPA to Scale Using Custom Metrics (Prometheus) - -You can also configure HPA to autoscale based on custom metrics provided by third-party software. The most common use case for autoscaling using third-party software is based on application-level metrics (i.e., HTTP requests per second). HPA uses the `custom.metrics.k8s.io` API to consume these metrics. This API is enabled by deploying a custom metrics adapter for the metrics collection solution. - -For this example, we are going to use [Prometheus](https://prometheus.io/). We are beginning with the following assumptions: - -- Prometheus is deployed in the cluster. -- Prometheus is configured correctly and collecting proper metrics from pods, nodes, namespaces, etc. -- Prometheus is exposed at the following URL and port: `http://prometheus.mycompany.io:80` - -Prometheus is available for deployment in the Rancher v2.0 catalog. Deploy it from Rancher catalog if it isn't already running in your cluster. - -For HPA to use custom metrics from Prometheus, package [k8s-prometheus-adapter](https://github.com/DirectXMan12/k8s-prometheus-adapter) is required in the `kube-system` namespace of your cluster. To install `k8s-prometheus-adapter`, we are using the Helm chart available at [banzai-charts](https://github.com/banzaicloud/banzai-charts). - -1. Initialize Helm in your cluster. 
- ``` - # kubectl -n kube-system create serviceaccount tiller - kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller - helm init --service-account tiller - ``` - -1. Clone the `banzai-charts` repo from GitHub: - ``` - # git clone https://github.com/banzaicloud/banzai-charts - ``` - -1. Install the `prometheus-adapter` chart, specifying the Prometheus URL and port number. - ``` - # helm install --name prometheus-adapter banzai-charts/prometheus-adapter --set prometheus.url="http://prometheus.mycompany.io",prometheus.port="80" --namespace kube-system - ``` - -1. Check that `prometheus-adapter` is running properly. Check the service pod and logs in the `kube-system` namespace. - - 1. Check that the service pod is `Running`. Enter the following command. - ``` - # kubectl get pods -n kube-system - ``` - From the resulting output, look for a status of `Running`. - ``` - NAME READY STATUS RESTARTS AGE - ... - prometheus-adapter-prometheus-adapter-568674d97f-hbzfx 1/1 Running 0 7h - ... - ``` - 1. Check the service logs to make sure the service is running correctly by entering the command that follows. - ``` - # kubectl logs prometheus-adapter-prometheus-adapter-568674d97f-hbzfx -n kube-system - ``` - Then review the log output to confirm the service is running. - {{% accordion id="prometheus-logs" label="Prometheus Adaptor Logs" %}} - ... - I0724 10:18:45.696679 1 round_trippers.go:436] GET https://10.43.0.1:443/api/v1/namespaces/default/pods?labelSelector=app%3Dhello-world 200 OK in 2 milliseconds - I0724 10:18:45.696695 1 round_trippers.go:442] Response Headers: - I0724 10:18:45.696699 1 round_trippers.go:445] Date: Tue, 24 Jul 2018 10:18:45 GMT - I0724 10:18:45.696703 1 round_trippers.go:445] Content-Type: application/json - I0724 10:18:45.696706 1 round_trippers.go:445] Content-Length: 2581 - I0724 10:18:45.696766 1 request.go:836] Response Body: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"6237"},"items":[{"metadata":{"name":"hello-world-54764dfbf8-q6l82","generateName":"hello-world-54764dfbf8-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/hello-world-54764dfbf8-q6l82","uid":"484cb929-8f29-11e8-99d2-067cac34e79c","resourceVersion":"4066","creationTimestamp":"2018-07-24T10:06:50Z","labels":{"app":"hello-world","pod-template-hash":"1032089694"},"annotations":{"cni.projectcalico.org/podIP":"10.42.0.7/32"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"ReplicaSet","name":"hello-world-54764dfbf8","uid":"4849b9b1-8f29-11e8-99d2-067cac34e79c","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-ncvts","secret":{"secretName":"default-token-ncvts","defaultMode":420}}],"containers":[{"name":"hello-world","image":"rancher/hello-world","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{"requests":{"cpu":"500m","memory":"64Mi"}},"volumeMounts":[{"name":"default-token-ncvts","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"34.220.18.140","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:54Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"}],"hostIP":"34.220.18.140","podIP":"10.42.0.7","startTime":"2018-07-24T10:06:50Z","containerStatuses":[{"name":"hello-world","state":{"running":{"startedAt":"2018-07-24T10:06:54Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"rancher/hello-world:latest","imageID":"docker-pullable://rancher/hello-world@sha256:4b1559cb4b57ca36fa2b313a3c7dde774801aa3a2047930d94e11a45168bc053","containerID":"docker://cce4df5fc0408f03d4adf82c90de222f64c302bf7a04be1c82d584ec31530773"}],"qosClass":"Burstable"}}]} - I0724 10:18:45.699525 1 api.go:74] GET http://prometheus-server.prometheus.34.220.18.140.xip.io/api/v1/query?query=sum%28rate%28container_fs_read_seconds_total%7Bpod_name%3D%22hello-world-54764dfbf8-q6l82%22%2Ccontainer_name%21%3D%22POD%22%2Cnamespace%3D%22default%22%7D%5B5m%5D%29%29+by+%28pod_name%29&time=1532427525.697 200 OK - I0724 10:18:45.699620 1 api.go:93] Response Body: {"status":"success","data":{"resultType":"vector","result":[{"metric":{"pod_name":"hello-world-54764dfbf8-q6l82"},"value":[1532427525.697,"0"]}]}} - I0724 10:18:45.699939 1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/fs_read?labelSelector=app%3Dhello-world: (12.431262ms) 200 [[kube-controller-manager/v1.10.1 (linux/amd64) kubernetes/d4ab475/system:serviceaccount:kube-system:horizontal-pod-autoscaler] 10.42.0.0:24268] - I0724 10:18:51.727845 1 request.go:836] Request Body: 
{"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}} - ... - {{% /accordion %}} - - - -1. Check that the metrics API is accessible from kubectl. - - - If you are accessing the cluster directly, enter your Server URL in the kubectl config in the following format: `https://:6443`. - ``` - # kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 - ``` - If the API is accessible, you should receive output that's similar to what follows. - {{% accordion id="custom-metrics-api-response" label="API Response" %}} - {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"pods/fs_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_rss","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_period","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_read","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/last_seen","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/tasks_state","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_quota","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/start_time_seconds","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_write","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_cache","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_working_set_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_udp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes_free","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time_weighted","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failures","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_swap","
singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_shares","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_swap_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_current","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failcnt","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_tcp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_max_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_reservation_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_load_average_10s","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_system","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]}]} - {{% /accordion %}} - - - If you are accessing the cluster through Rancher, enter your Server URL in the kubectl config in the following format: `https:///k8s/clusters/`. Add the suffix `/k8s/clusters/` to API path. - ``` - # kubectl get --raw /k8s/clusters//apis/custom.metrics.k8s.io/v1beta1 - ``` - If the API is accessible, you should receive output that's similar to what follows. 
- {{% accordion id="custom-metrics-api-response-rancher" label="API Response" %}} - {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"pods/fs_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_rss","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_period","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_read","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/last_seen","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/tasks_state","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_quota","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/start_time_seconds","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_write","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_cache","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_working_set_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_udp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes_free","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time_weighted","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failures","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_swap","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_shares","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_swap_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_current","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/
memory_failcnt","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_tcp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_max_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_reservation_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_load_average_10s","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_system","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]}]} - {{% /accordion %}} - - - -### Testing HPAs with a Service Deployment - -For HPA to work correctly, service deployments should have resources request definitions for containers. Follow this hello-world example to test if HPA is working correctly. - -1. Configure kubectl to connect to your Kubernetes cluster. - -2. Copy the `hello-world` deployment manifest below. -{{% accordion id="hello-world" label="Hello World Manifest" %}} -``` -apiVersion: apps/v1beta2 -kind: Deployment -metadata: - labels: - app: hello-world - name: hello-world - namespace: default -spec: - replicas: 1 - selector: - matchLabels: - app: hello-world - strategy: - rollingUpdate: - maxSurge: 1 - maxUnavailable: 0 - type: RollingUpdate - template: - metadata: - labels: - app: hello-world - spec: - containers: - - image: rancher/hello-world - imagePullPolicy: Always - name: hello-world - resources: - requests: - cpu: 500m - memory: 64Mi - ports: - - containerPort: 80 - protocol: TCP - restartPolicy: Always ---- -apiVersion: v1 -kind: Service -metadata: - name: hello-world - namespace: default -spec: - ports: - - port: 80 - protocol: TCP - targetPort: 80 - selector: - app: hello-world -``` -{{% /accordion %}} - -1. Deploy it to your cluster. - - ``` - # kubectl create -f - ``` - -1. 
Copy one of the HPAs below based on the metric type you're using: -{{% accordion id="service-deployment-resource-metrics" label="Hello World HPA: Resource Metrics" %}} -``` -apiVersion: autoscaling/v2beta1 -kind: HorizontalPodAutoscaler -metadata: - name: hello-world - namespace: default -spec: - scaleTargetRef: - apiVersion: extensions/v1beta1 - kind: Deployment - name: hello-world - minReplicas: 1 - maxReplicas: 10 - metrics: - - type: Resource - resource: - name: cpu - targetAverageUtilization: 50 - - type: Resource - resource: - name: memory - targetAverageValue: 1000Mi -``` -{{% /accordion %}} -{{% accordion id="service-deployment-custom-metrics" label="Hello World HPA: Custom Metrics" %}} -``` -apiVersion: autoscaling/v2beta1 -kind: HorizontalPodAutoscaler -metadata: - name: hello-world - namespace: default -spec: - scaleTargetRef: - apiVersion: extensions/v1beta1 - kind: Deployment - name: hello-world - minReplicas: 1 - maxReplicas: 10 - metrics: - - type: Resource - resource: - name: cpu - targetAverageUtilization: 50 - - type: Resource - resource: - name: memory - targetAverageValue: 100Mi - - type: Pods - pods: - metricName: cpu_system - targetAverageValue: 20m -``` -{{% /accordion %}} - -1. View the HPA info and description. Confirm that metric data is shown. - {{% accordion id="hpa-info-resource-metrics" label="Resource Metrics" %}} -1. Enter the following commands. - ``` - # kubectl get hpa - NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE - hello-world Deployment/hello-world 1253376 / 100Mi, 0% / 50% 1 10 1 6m - # kubectl describe hpa - Name: hello-world - Namespace: default - Labels: - Annotations: - CreationTimestamp: Mon, 23 Jul 2018 20:21:16 +0200 - Reference: Deployment/hello-world - Metrics: ( current / target ) - resource memory on pods: 1253376 / 100Mi - resource cpu on pods (as a percentage of request): 0% (0) / 50% - Min replicas: 1 - Max replicas: 10 - Conditions: - Type Status Reason Message - ---- ------ ------ ------- - AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale - ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource - ScalingLimited False DesiredWithinRange the desired count is within the acceptable range - Events: - ``` - {{% /accordion %}} - {{% accordion id="hpa-info-custom-metrics" label="Custom Metrics" %}} -1. Enter the following command. - ``` - # kubectl describe hpa - ``` - You should receive the output that follows. - ``` - Name: hello-world - Namespace: default - Labels: - Annotations: - CreationTimestamp: Tue, 24 Jul 2018 18:36:28 +0200 - Reference: Deployment/hello-world - Metrics: ( current / target ) - resource memory on pods: 3514368 / 100Mi - "cpu_system" on pods: 0 / 20m - resource cpu on pods (as a percentage of request): 0% (0) / 50% - Min replicas: 1 - Max replicas: 10 - Conditions: - Type Status Reason Message - ---- ------ ------ ------- - AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale - ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource - ScalingLimited False DesiredWithinRange the desired count is within the acceptable range - Events: - ``` - {{% /accordion %}} - - -1. Generate a load for the service to test that your pods autoscale as intended. You can use any load-testing tool (Hey, Gatling, etc.), but we're using [Hey](https://github.com/rakyll/hey). - -1. 
Test that pod autoscaling works as intended.
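-
-    For example, a minimal load run with Hey could look like the following. The URL is a placeholder for wherever the `hello-world` service is exposed, and the duration and concurrency should be tuned to your cluster:
-
-    ```
-    # Send traffic for 5 minutes from 50 concurrent workers
-    hey -z 5m -c 50 http://<hello-world-url>/
-    ```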

- **To Test Autoscaling Using Resource Metrics:** - {{% accordion id="observe-upscale-2-pods-cpu" label="Upscale to 2 Pods: CPU Usage Up to Target" %}} -Use your load testing tool to scale up to two pods based on CPU Usage. - -1. View your HPA. - ``` - # kubectl describe hpa - ``` - You should receive output similar to what follows. - ``` - Name: hello-world - Namespace: default - Labels: - Annotations: - CreationTimestamp: Mon, 23 Jul 2018 22:22:04 +0200 - Reference: Deployment/hello-world - Metrics: ( current / target ) - resource memory on pods: 10928128 / 100Mi - resource cpu on pods (as a percentage of request): 56% (280m) / 50% - Min replicas: 1 - Max replicas: 10 - Conditions: - Type Status Reason Message - ---- ------ ------ ------- - AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 2 - ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request) - ScalingLimited False DesiredWithinRange the desired count is within the acceptable range - Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal SuccessfulRescale 13s horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target - ``` -1. Enter the following command to confirm you've scaled to two pods. - ``` - # kubectl get pods - ``` - You should receive output similar to what follows: - ``` - NAME READY STATUS RESTARTS AGE - hello-world-54764dfbf8-k8ph2 1/1 Running 0 1m - hello-world-54764dfbf8-q6l4v 1/1 Running 0 3h - ``` - {{% /accordion %}} - {{% accordion id="observe-upscale-3-pods-cpu-cooldown" label="Upscale to 3 pods: CPU Usage Up to Target" %}} -Use your load testing tool to upspace to 3 pods based on CPU usage with `horizontal-pod-autoscaler-upscale-delay` set to 3 minutes. - -1. Enter the following command. - ``` - # kubectl describe hpa - ``` - You should receive output similar to what follows - ``` - Name: hello-world - Namespace: default - Labels: - Annotations: - CreationTimestamp: Mon, 23 Jul 2018 22:22:04 +0200 - Reference: Deployment/hello-world - Metrics: ( current / target ) - resource memory on pods: 9424896 / 100Mi - resource cpu on pods (as a percentage of request): 66% (333m) / 50% - Min replicas: 1 - Max replicas: 10 - Conditions: - Type Status Reason Message - ---- ------ ------ ------- - AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 3 - ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request) - ScalingLimited False DesiredWithinRange the desired count is within the acceptable range - Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal SuccessfulRescale 4m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target - Normal SuccessfulRescale 16s horizontal-pod-autoscaler New size: 3; reason: cpu resource utilization (percentage of request) above target - ``` -2. Enter the following command to confirm three pods are running. - ``` - # kubectl get pods - ``` - You should receive output similar to what follows. 
- ``` - NAME READY STATUS RESTARTS AGE - hello-world-54764dfbf8-f46kh 0/1 Running 0 1m - hello-world-54764dfbf8-k8ph2 1/1 Running 0 5m - hello-world-54764dfbf8-q6l4v 1/1 Running 0 3h - ``` - {{% /accordion %}} - {{% accordion id="observe-downscale-1-pod" label="Downscale to 1 Pod: All Metrics Below Target" %}} -Use your load testing to scale down to 1 pod when all metrics are below target for `horizontal-pod-autoscaler-downscale-delay` (5 minutes by default). - -1. Enter the following command. - ``` - # kubectl describe hpa - ``` - You should receive output similar to what follows. - ``` - Name: hello-world - Namespace: default - Labels: - Annotations: - CreationTimestamp: Mon, 23 Jul 2018 22:22:04 +0200 - Reference: Deployment/hello-world - Metrics: ( current / target ) - resource memory on pods: 10070016 / 100Mi - resource cpu on pods (as a percentage of request): 0% (0) / 50% - Min replicas: 1 - Max replicas: 10 - Conditions: - Type Status Reason Message - ---- ------ ------ ------- - AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 1 - ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource - ScalingLimited False DesiredWithinRange the desired count is within the acceptable range - Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal SuccessfulRescale 10m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target - Normal SuccessfulRescale 6m horizontal-pod-autoscaler New size: 3; reason: cpu resource utilization (percentage of request) above target - Normal SuccessfulRescale 1s horizontal-pod-autoscaler New size: 1; reason: All metrics below target - ``` - {{% /accordion %}} -
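-
-At any point during these tests, you can also confirm the scale at the deployment level; this is just a convenience check against the `hello-world` deployment used throughout this walkthrough:
-
-```
-# The replica count should match the HPA's most recent scaling decision
-kubectl get deployment hello-world
-```
-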
-**To Test Autoscaling Using Custom Metrics:** - {{% accordion id="custom-observe-upscale-2-pods-cpu" label="Upscale to 2 Pods: CPU Usage Up to Target" %}} -Use your load testing tool to upscale two pods based on CPU usage. - -1. Enter the following command. - ``` - # kubectl describe hpa - ``` - You should receive output similar to what follows. - ``` - Name: hello-world - Namespace: default - Labels: - Annotations: - CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200 - Reference: Deployment/hello-world - Metrics: ( current / target ) - resource memory on pods: 8159232 / 100Mi - "cpu_system" on pods: 7m / 20m - resource cpu on pods (as a percentage of request): 64% (321m) / 50% - Min replicas: 1 - Max replicas: 10 - Conditions: - Type Status Reason Message - ---- ------ ------ ------- - AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 2 - ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request) - ScalingLimited False DesiredWithinRange the desired count is within the acceptable range - Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal SuccessfulRescale 16s horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target - ``` -1. Enter the following command to confirm two pods are running. - ``` - # kubectl get pods - ``` - You should receive output similar to what follows. - ``` - NAME READY STATUS RESTARTS AGE - hello-world-54764dfbf8-5pfdr 1/1 Running 0 3s - hello-world-54764dfbf8-q6l82 1/1 Running 0 6h - ``` - {{% /accordion %}} -{{% accordion id="observe-upscale-3-pods-cpu-cooldown-2" label="Upscale to 3 Pods: CPU Usage Up to Target" %}} -Use your load testing tool to scale up to three pods when the cpu_system usage limit is up to target. - -1. Enter the following command. - ``` - # kubectl describe hpa - ``` - You should receive output similar to what follows: - ``` - Name: hello-world - Namespace: default - Labels: - Annotations: - CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200 - Reference: Deployment/hello-world - Metrics: ( current / target ) - resource memory on pods: 8374272 / 100Mi - "cpu_system" on pods: 27m / 20m - resource cpu on pods (as a percentage of request): 71% (357m) / 50% - Min replicas: 1 - Max replicas: 10 - Conditions: - Type Status Reason Message - ---- ------ ------ ------- - AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 3 - ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request) - ScalingLimited False DesiredWithinRange the desired count is within the acceptable range - Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal SuccessfulRescale 3m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target - Normal SuccessfulRescale 3s horizontal-pod-autoscaler New size: 3; reason: pods metric cpu_system above target - ``` -1. Enter the following command to confirm three pods are running. 
- ``` - # kubectl get pods - ``` - You should receive output similar to what follows: - ``` - # kubectl get pods - NAME READY STATUS RESTARTS AGE - hello-world-54764dfbf8-5pfdr 1/1 Running 0 3m - hello-world-54764dfbf8-m2hrl 1/1 Running 0 1s - hello-world-54764dfbf8-q6l82 1/1 Running 0 6h - ``` -{{% /accordion %}} -{{% accordion id="observe-upscale-4-pods" label="Upscale to 4 Pods: CPU Usage Up to Target" %}} -Use your load testing tool to upscale to four pods based on CPU usage. `horizontal-pod-autoscaler-upscale-delay` is set to three minutes by default. - -1. Enter the following command. - ``` - # kubectl describe hpa - ``` - You should receive output similar to what follows. - ``` - Name: hello-world - Namespace: default - Labels: - Annotations: - CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200 - Reference: Deployment/hello-world - Metrics: ( current / target ) - resource memory on pods: 8374272 / 100Mi - "cpu_system" on pods: 27m / 20m - resource cpu on pods (as a percentage of request): 71% (357m) / 50% - Min replicas: 1 - Max replicas: 10 - Conditions: - Type Status Reason Message - ---- ------ ------ ------- - AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 3 - ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request) - ScalingLimited False DesiredWithinRange the desired count is within the acceptable range - Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal SuccessfulRescale 5m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target - Normal SuccessfulRescale 3m horizontal-pod-autoscaler New size: 3; reason: pods metric cpu_system above target - Normal SuccessfulRescale 4s horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target - ``` -1. Enter the following command to confirm four pods are running. - ``` - # kubectl get pods - ``` - You should receive output similar to what follows. - ``` - NAME READY STATUS RESTARTS AGE - hello-world-54764dfbf8-2p9xb 1/1 Running 0 5m - hello-world-54764dfbf8-5pfdr 1/1 Running 0 2m - hello-world-54764dfbf8-m2hrl 1/1 Running 0 1s - hello-world-54764dfbf8-q6l82 1/1 Running 0 6h - ``` -{{% /accordion %}} -{{% accordion id="custom-metrics-observe-downscale-1-pod" label="Downscale to 1 Pod: All Metrics Below Target" %}} -Use your load testing tool to scale down to one pod when all metrics below target for `horizontal-pod-autoscaler-downscale-delay`. - -1. Enter the following command. - ``` - # kubectl describe hpa - ``` - You should receive similar output to what follows. 
- ``` - Name: hello-world - Namespace: default - Labels: - Annotations: - CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200 - Reference: Deployment/hello-world - Metrics: ( current / target ) - resource memory on pods: 8101888 / 100Mi - "cpu_system" on pods: 8m / 20m - resource cpu on pods (as a percentage of request): 0% (0) / 50% - Min replicas: 1 - Max replicas: 10 - Conditions: - Type Status Reason Message - ---- ------ ------ ------- - AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 1 - ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource - ScalingLimited False DesiredWithinRange the desired count is within the acceptable range - Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal SuccessfulRescale 10m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target - Normal SuccessfulRescale 8m horizontal-pod-autoscaler New size: 3; reason: pods metric cpu_system above target - Normal SuccessfulRescale 5m horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target - Normal SuccessfulRescale 13s horizontal-pod-autoscaler New size: 1; reason: All metrics below target - ``` -1. Enter the following command to confirm a single pods is running. - ``` - # kubectl get pods - ``` - You should receive output similar to what follows. - ``` - NAME READY STATUS RESTARTS AGE - hello-world-54764dfbf8-q6l82 1/1 Running 0 6h - ``` -{{% /accordion %}} - - - -### Conclusion - -Horizontal Pod Autoscaling is a great way to automate the number of pod you have deployed for maximum efficiency. You can use it to accommodate deployment scale to real service load and to meet service level agreements. - -By adjusting the `horizontal-pod-autoscaler-downscale-delay` and `horizontal-pod-autoscaler-upscale-delay` flag values, you can adjust the time needed before kube-controller scales your pods up or down. - -We've demonstrated how to setup an HPA based on custom metrics provided by Prometheus. We used the `cpu_system` metric as an example, but you can use other metrics that monitor service performance, like `http_request_number`, `http_response_time`, etc. - - -### Manual Installation - ->**Note:** This is only applicable to clusters created in versions before Rancher v2.0.7. - -Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements. - -#### Requirements - -Be sure that your Kubernetes cluster services are running with these flags at minimum: - -- kube-api: `requestheader-client-ca-file` -- kubelet: `read-only-port` at 10255 -- kube-controller: Optional, just needed if distinct values than default are required. - - - `horizontal-pod-autoscaler-downscale-delay: "5m0s"` - - `horizontal-pod-autoscaler-upscale-delay: "3m0s"` - - `horizontal-pod-autoscaler-sync-period: "30s"` - -For an RKE Kubernetes cluster definition, add this snippet in the `services` section. To add this snippet using the Rancher v2.0 UI, open the **Clusters** view and select **Ellipsis (...) > Edit** for the cluster in which you want to use HPA. Then, from **Cluster Options**, click **Edit as YAML**. Add the following snippet to the `services` section: - -``` -services: -... 
- kube-api: - extra_args: - requestheader-client-ca-file: "/etc/kubernetes/ssl/kube-ca.pem" - kube-controller: - extra_args: - horizontal-pod-autoscaler-downscale-delay: "5m0s" - horizontal-pod-autoscaler-upscale-delay: "1m0s" - horizontal-pod-autoscaler-sync-period: "30s" - kubelet: - extra_args: - read-only-port: 10255 -``` - -Once the Kubernetes cluster is configured and deployed, you can deploy metrics services. - ->**Note:** kubectl command samples in the sections that follow were tested in a cluster running Rancher v2.0.6 and Kubernetes v1.10.1. - -#### Configuring HPA to Scale Using Resource Metrics - -To create HPA resources based on resource metrics such as CPU and memory use, you need to deploy the `metrics-server` package in the `kube-system` namespace of your Kubernetes cluster. This deployment allows HPA to consume the `metrics.k8s.io` API. - ->**Prerequisite:** You must be running kubectl 1.8 or later. - -1. Connect to your Kubernetes cluster using kubectl. - -1. Clone the GitHub `metrics-server` repo: - ``` - # git clone https://github.com/kubernetes-incubator/metrics-server - ``` - -1. Install the `metrics-server` package. - ``` - # kubectl create -f metrics-server/deploy/1.8+/ - ``` - -1. Check that `metrics-server` is running properly. Check the service pod and logs in the `kube-system` namespace. - - 1. Check the service pod for a status of `running`. Enter the following command: - ``` - # kubectl get pods -n kube-system - ``` - Then check for the status of `running`. - ``` - NAME READY STATUS RESTARTS AGE - ... - metrics-server-6fbfb84cdd-t2fk9 1/1 Running 0 8h - ... - ``` - 1. Check the service logs for service availability. Enter the following command: - ``` - # kubectl -n kube-system logs metrics-server-6fbfb84cdd-t2fk9 - ``` - Then review the log to confirm that the `metrics-server` package is running. - {{% accordion id="metrics-server-run-check" label="Metrics Server Log Output" %}} - I0723 08:09:56.193136 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:'' - I0723 08:09:56.193574 1 heapster.go:72] Metrics Server version v0.2.1 - I0723 08:09:56.194480 1 configs.go:61] Using Kubernetes client with master "https://10.43.0.1:443" and version - I0723 08:09:56.194501 1 configs.go:62] Using kubelet port 10255 - I0723 08:09:56.198612 1 heapster.go:128] Starting with Metric Sink - I0723 08:09:56.780114 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key) - I0723 08:09:57.391518 1 heapster.go:101] Starting Heapster API server... - [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] listing is available at https:///swaggerapi - [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/ - I0723 08:09:57.394080 1 serve.go:85] Serving securely on 0.0.0.0:443 - {{% /accordion %}} - - -1. Check that the metrics api is accessible from kubectl. - - - - If you are accessing the cluster through Rancher, enter your Server URL in the kubectl config in the following format: `https:///k8s/clusters/`. Add the suffix `/k8s/clusters/` to API path. - ``` - # kubectl get --raw /k8s/clusters//apis/metrics.k8s.io/v1beta1 - ``` - If the API is working correctly, you should receive output similar to the output below. 
- ``` - {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]} - ``` - - - If you are accessing the cluster directly, enter your Server URL in the kubectl config in the following format: `https://:6443`. - ``` - # kubectl get --raw /apis/metrics.k8s.io/v1beta1 - ``` - If the API is working correctly, you should receive output similar to the output below. - ``` - {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]} - ``` - -#### Assigning Additional Required Roles to Your HPA - -By default, HPA reads resource and custom metrics with the user `system:anonymous`. Assign `system:anonymous` to `view-resource-metrics` and `view-custom-metrics` in the ClusterRole and ClusterRoleBindings manifests. These roles are used to access metrics. - -To do it, follow these steps: - -1. Configure kubectl to connect to your cluster. - -1. Copy the ClusterRole and ClusterRoleBinding manifest for the type of metrics you're using for your HPA. - {{% accordion id="cluster-role-resource-metrics" label="Resource Metrics: ApiGroups resource.metrics.k8s.io" %}} - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: view-resource-metrics - rules: - - apiGroups: - - metrics.k8s.io - resources: - - pods - - nodes - verbs: - - get - - list - - watch - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: view-resource-metrics - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: view-resource-metrics - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: User - name: system:anonymous - {{% /accordion %}} -{{% accordion id="cluster-role-custom-resources" label="Custom Metrics: ApiGroups custom.metrics.k8s.io" %}} - - ``` - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: view-custom-metrics - rules: - - apiGroups: - - custom.metrics.k8s.io - resources: - - "*" - verbs: - - get - - list - - watch - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: view-custom-metrics - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: view-custom-metrics - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: User - name: system:anonymous - ``` -{{% /accordion %}} -1. Create them in your cluster using one of the follow commands, depending on the metrics you're using. - ``` - # kubectl create -f - # kubectl create -f - ``` +You can also use `kubectl` to get the status of HPAs that you test with your load testing tool. For more information, refer to [Testing HPAs with kubectl] +({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/). 
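-
-1. Optionally, verify that `system:anonymous` can now read the metrics API. This is a quick sanity check that assumes the user you run kubectl as is allowed to impersonate other users; use the `custom.metrics.k8s.io` resources instead if you created the custom metrics role.
-
-    ```
-    # Should answer "yes" once the ClusterRole and ClusterRoleBinding are in place
-    kubectl auth can-i get pods.metrics.k8s.io --as=system:anonymous
-    ```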
\ No newline at end of file diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md new file mode 100644 index 00000000000..f55e0164bff --- /dev/null +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md @@ -0,0 +1,376 @@ +--- +title: Managing HPAs with kubectl +weight: 3027 +--- + +In Rancher v2.3.x, a feature was added to the UI to manage HPAs. In the UI, you can create, view, and delete HPAs, and you can configure them to scale based on CPU or memory usage. + +For versions of Rancher prior to 2.3.x, or for scaling HPAs based on other metrics, you need `kubectl` to manage HPAs. + +This section describes HPA management with `kubectl`. + +# Basic kubectl Command for Managing HPAs + +If you have an HPA manifest file, you can create, manage, and delete HPAs using `kubectl`: + +- Creating HPA + + - With manifest: `kubectl create -f ` + + - Without manifest (Just support CPU): `kubectl autoscale deployment hello-world --min=2 --max=5 --cpu-percent=50` + +- Getting HPA info + + - Basic: `kubectl get hpa hello-world` + + - Detailed description: `kubectl describe hpa hello-world` + +- Deleting HPA + + - `kubectl delete hpa hello-world` + +# HPA Manifest Definition Example + +The HPA manifest is the config file used for managing an HPA with `kubectl`. + +The following snippet demonstrates use of different directives in an HPA manifest. See the list below the sample to understand the purpose of each directive. + +```yml +apiVersion: autoscaling/v2beta1 +kind: HorizontalPodAutoscaler +metadata: + name: hello-world +spec: + scaleTargetRef: + apiVersion: extensions/v1beta1 + kind: Deployment + name: hello-world + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: Resource + resource: + name: cpu + targetAverageUtilization: 50 + - type: Resource + resource: + name: memory + targetAverageValue: 100Mi +``` + + +Directive | Description +---------|----------| + `apiVersion: autoscaling/v2beta1` | The version of the Kubernetes `autoscaling` API group in use. This example manifest uses the beta version, so scaling by CPU and memory is enabled. | + `name: hello-world` | Indicates that HPA is performing autoscaling for the `hello-word` deployment. | + `minReplicas: 1` | Indicates that the minimum number of replicas running can't go below 1. | + `maxReplicas: 10` | Indicates the maximum number of replicas in the deployment can't go above 10. + `targetAverageUtilization: 50` | Indicates the deployment will scale pods up when the average running pod uses more than 50% of its requested CPU. + `targetAverageValue: 100Mi` | Indicates the deployment will scale pods up when the average running pod uses more that 100Mi of memory. +
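+If your cluster only serves the stable `autoscaling/v1` API, or you only need CPU-based scaling, the same HPA can be written against `v1`. The sketch below mirrors the CPU portion of the manifest above (memory and custom metrics are not available in `v1`), and it is roughly the object that the `kubectl autoscale` command shown earlier creates for you:
+
+```yml
+apiVersion: autoscaling/v1
+kind: HorizontalPodAutoscaler
+metadata:
+  name: hello-world
+spec:
+  scaleTargetRef:
+    apiVersion: extensions/v1beta1
+    kind: Deployment
+    name: hello-world
+  minReplicas: 1
+  maxReplicas: 10
+  # v1 takes a single CPU utilization target instead of a metrics list
+  targetCPUUtilizationPercentage: 50
+```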
+ +# Configuring HPA to Scale Using Resource Metrics (CPU and Memory) + +Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler. Run the following commands to check if metrics are available in your installation: + +``` +$ kubectl top nodes +NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% +node-controlplane 196m 9% 1623Mi 42% +node-etcd 80m 4% 1090Mi 28% +node-worker 64m 3% 1146Mi 29% +$ kubectl -n kube-system top pods +NAME CPU(cores) MEMORY(bytes) +canal-pgldr 18m 46Mi +canal-vhkgr 20m 45Mi +canal-x5q5v 17m 37Mi +canal-xknnz 20m 37Mi +kube-dns-7588d5b5f5-298j2 0m 22Mi +kube-dns-autoscaler-5db9bbb766-t24hw 0m 5Mi +metrics-server-97bc649d5-jxrlt 0m 12Mi +$ kubectl -n kube-system logs -l k8s-app=metrics-server +I1002 12:55:32.172841 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:https://kubernetes.default.svc?kubeletHttps=true&kubeletPort=10250&useServiceAccount=true&insecure=true +I1002 12:55:32.172994 1 heapster.go:72] Metrics Server version v0.2.1 +I1002 12:55:32.173378 1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default.svc" and version +I1002 12:55:32.173401 1 configs.go:62] Using kubelet port 10250 +I1002 12:55:32.173946 1 heapster.go:128] Starting with Metric Sink +I1002 12:55:32.592703 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key) +I1002 12:55:32.925630 1 heapster.go:101] Starting Heapster API server... +[restful] 2018/10/02 12:55:32 log.go:33: [restful/swagger] listing is available at https:///swaggerapi +[restful] 2018/10/02 12:55:32 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/ +I1002 12:55:32.928597 1 serve.go:85] Serving securely on 0.0.0.0:443 +``` + +If you have created your cluster in Rancher v2.0.6 or before, please refer to [Manual installation](#manual-installation) + +# Configuring HPA to Scale Using Custom Metrics with Prometheus + +You can configure HPA to autoscale based on custom metrics provided by third-party software. The most common use case for autoscaling using third-party software is based on application-level metrics (i.e., HTTP requests per second). HPA uses the `custom.metrics.k8s.io` API to consume these metrics. This API is enabled by deploying a custom metrics adapter for the metrics collection solution. + +For this example, we are going to use [Prometheus](https://prometheus.io/). We are beginning with the following assumptions: + +- Prometheus is deployed in the cluster. +- Prometheus is configured correctly and collecting proper metrics from pods, nodes, namespaces, etc. +- Prometheus is exposed at the following URL and port: `http://prometheus.mycompany.io:80` + +Prometheus is available for deployment in the Rancher v2.0 catalog. Deploy it from Rancher catalog if it isn't already running in your cluster. + +For HPA to use custom metrics from Prometheus, package [k8s-prometheus-adapter](https://github.com/DirectXMan12/k8s-prometheus-adapter) is required in the `kube-system` namespace of your cluster. To install `k8s-prometheus-adapter`, we are using the Helm chart available at [banzai-charts](https://github.com/banzaicloud/banzai-charts). + +1. Initialize Helm in your cluster. 
+ ``` + # kubectl -n kube-system create serviceaccount tiller + kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller + helm init --service-account tiller + ``` + +1. Clone the `banzai-charts` repo from GitHub: + ``` + # git clone https://github.com/banzaicloud/banzai-charts + ``` + +1. Install the `prometheus-adapter` chart, specifying the Prometheus URL and port number. + ``` + # helm install --name prometheus-adapter banzai-charts/prometheus-adapter --set prometheus.url="http://prometheus.mycompany.io",prometheus.port="80" --namespace kube-system + ``` + +1. Check that `prometheus-adapter` is running properly. Check the service pod and logs in the `kube-system` namespace. + + 1. Check that the service pod is `Running`. Enter the following command. + ``` + # kubectl get pods -n kube-system + ``` + From the resulting output, look for a status of `Running`. + ``` + NAME READY STATUS RESTARTS AGE + ... + prometheus-adapter-prometheus-adapter-568674d97f-hbzfx 1/1 Running 0 7h + ... + ``` + 1. Check the service logs to make sure the service is running correctly by entering the command that follows. + ``` + # kubectl logs prometheus-adapter-prometheus-adapter-568674d97f-hbzfx -n kube-system + ``` + Then review the log output to confirm the service is running. + {{% accordion id="prometheus-logs" label="Prometheus Adaptor Logs" %}} + ... + I0724 10:18:45.696679 1 round_trippers.go:436] GET https://10.43.0.1:443/api/v1/namespaces/default/pods?labelSelector=app%3Dhello-world 200 OK in 2 milliseconds + I0724 10:18:45.696695 1 round_trippers.go:442] Response Headers: + I0724 10:18:45.696699 1 round_trippers.go:445] Date: Tue, 24 Jul 2018 10:18:45 GMT + I0724 10:18:45.696703 1 round_trippers.go:445] Content-Type: application/json + I0724 10:18:45.696706 1 round_trippers.go:445] Content-Length: 2581 + I0724 10:18:45.696766 1 request.go:836] Response Body: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"6237"},"items":[{"metadata":{"name":"hello-world-54764dfbf8-q6l82","generateName":"hello-world-54764dfbf8-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/hello-world-54764dfbf8-q6l82","uid":"484cb929-8f29-11e8-99d2-067cac34e79c","resourceVersion":"4066","creationTimestamp":"2018-07-24T10:06:50Z","labels":{"app":"hello-world","pod-template-hash":"1032089694"},"annotations":{"cni.projectcalico.org/podIP":"10.42.0.7/32"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"ReplicaSet","name":"hello-world-54764dfbf8","uid":"4849b9b1-8f29-11e8-99d2-067cac34e79c","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-ncvts","secret":{"secretName":"default-token-ncvts","defaultMode":420}}],"containers":[{"name":"hello-world","image":"rancher/hello-world","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{"requests":{"cpu":"500m","memory":"64Mi"}},"volumeMounts":[{"name":"default-token-ncvts","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"34.220.18.140","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:54Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-24T10:06:50Z"}],"hostIP":"34.220.18.140","podIP":"10.42.0.7","startTime":"2018-07-24T10:06:50Z","containerStatuses":[{"name":"hello-world","state":{"running":{"startedAt":"2018-07-24T10:06:54Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"rancher/hello-world:latest","imageID":"docker-pullable://rancher/hello-world@sha256:4b1559cb4b57ca36fa2b313a3c7dde774801aa3a2047930d94e11a45168bc053","containerID":"docker://cce4df5fc0408f03d4adf82c90de222f64c302bf7a04be1c82d584ec31530773"}],"qosClass":"Burstable"}}]} + I0724 10:18:45.699525 1 api.go:74] GET http://prometheus-server.prometheus.34.220.18.140.xip.io/api/v1/query?query=sum%28rate%28container_fs_read_seconds_total%7Bpod_name%3D%22hello-world-54764dfbf8-q6l82%22%2Ccontainer_name%21%3D%22POD%22%2Cnamespace%3D%22default%22%7D%5B5m%5D%29%29+by+%28pod_name%29&time=1532427525.697 200 OK + I0724 10:18:45.699620 1 api.go:93] Response Body: {"status":"success","data":{"resultType":"vector","result":[{"metric":{"pod_name":"hello-world-54764dfbf8-q6l82"},"value":[1532427525.697,"0"]}]}} + I0724 10:18:45.699939 1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/fs_read?labelSelector=app%3Dhello-world: (12.431262ms) 200 [[kube-controller-manager/v1.10.1 (linux/amd64) kubernetes/d4ab475/system:serviceaccount:kube-system:horizontal-pod-autoscaler] 10.42.0.0:24268] + I0724 10:18:51.727845 1 request.go:836] Request Body: 
{"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}} + ... + {{% /accordion %}} + + + +1. Check that the metrics API is accessible from kubectl. + + - If you are accessing the cluster directly, enter your Server URL in the kubectl config in the following format: `https://:6443`. + ``` + # kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 + ``` + If the API is accessible, you should receive output that's similar to what follows. + {{% accordion id="custom-metrics-api-response" label="API Response" %}} + {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"pods/fs_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_rss","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_period","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_read","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/last_seen","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/tasks_state","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_quota","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/start_time_seconds","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_write","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_cache","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_working_set_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_udp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes_free","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time_weighted","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failures","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_swap","
singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_shares","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_swap_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_current","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failcnt","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_tcp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_max_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_reservation_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_load_average_10s","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_system","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]}]} + {{% /accordion %}} + + - If you are accessing the cluster through Rancher, enter your Server URL in the kubectl config in the following format: `https:///k8s/clusters/`. Add the suffix `/k8s/clusters/` to API path. + ``` + # kubectl get --raw /k8s/clusters//apis/custom.metrics.k8s.io/v1beta1 + ``` + If the API is accessible, you should receive output that's similar to what follows. 
+ {{% accordion id="custom-metrics-api-response-rancher" label="API Response" %}} + {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"pods/fs_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_rss","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_period","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_read","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/last_seen","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/tasks_state","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_quota","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/start_time_seconds","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_write","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_cache","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_working_set_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_udp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes_free","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time_weighted","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failures","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_swap","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_shares","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_swap_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_current","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/
memory_failcnt","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_tcp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_max_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_reservation_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_load_average_10s","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_system","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]}]} + {{% /accordion %}} + + + +# Manual Installation for Clusters Created Before Rancher v2.0.7 + +Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements. + +### Requirements + +Be sure that your Kubernetes cluster services are running with these flags at minimum: + +- kube-api: `requestheader-client-ca-file` +- kubelet: `read-only-port` at 10255 +- kube-controller: Optional, just needed if distinct values than default are required. + + - `horizontal-pod-autoscaler-downscale-delay: "5m0s"` + - `horizontal-pod-autoscaler-upscale-delay: "3m0s"` + - `horizontal-pod-autoscaler-sync-period: "30s"` + +For an RKE Kubernetes cluster definition, add this snippet in the `services` section. To add this snippet using the Rancher v2.0 UI, open the **Clusters** view and select **Ellipsis (...) > Edit** for the cluster in which you want to use HPA. Then, from **Cluster Options**, click **Edit as YAML**. Add the following snippet to the `services` section: + +``` +services: +... + kube-api: + extra_args: + requestheader-client-ca-file: "/etc/kubernetes/ssl/kube-ca.pem" + kube-controller: + extra_args: + horizontal-pod-autoscaler-downscale-delay: "5m0s" + horizontal-pod-autoscaler-upscale-delay: "1m0s" + horizontal-pod-autoscaler-sync-period: "30s" + kubelet: + extra_args: + read-only-port: 10255 +``` + +Once the Kubernetes cluster is configured and deployed, you can deploy metrics services. + +>**Note:** `kubectl` command samples in the sections that follow were tested in a cluster running Rancher v2.0.6 and Kubernetes v1.10.1. + +### Configuring HPA to Scale Using Resource Metrics + +To create HPA resources based on resource metrics such as CPU and memory use, you need to deploy the `metrics-server` package in the `kube-system` namespace of your Kubernetes cluster. This deployment allows HPA to consume the `metrics.k8s.io` API. + +>**Prerequisite:** You must be running `kubectl` 1.8 or later. + +1. Connect to your Kubernetes cluster using `kubectl`. + +1. Clone the GitHub `metrics-server` repo: + ``` + # git clone https://github.com/kubernetes-incubator/metrics-server + ``` + +1. Install the `metrics-server` package. 
+ ``` + # kubectl create -f metrics-server/deploy/1.8+/ + ``` + +1. Check that `metrics-server` is running properly. Check the service pod and logs in the `kube-system` namespace. + + 1. Check the service pod for a status of `running`. Enter the following command: + ``` + # kubectl get pods -n kube-system + ``` + Then check for the status of `running`. + ``` + NAME READY STATUS RESTARTS AGE + ... + metrics-server-6fbfb84cdd-t2fk9 1/1 Running 0 8h + ... + ``` + 1. Check the service logs for service availability. Enter the following command: + ``` + # kubectl -n kube-system logs metrics-server-6fbfb84cdd-t2fk9 + ``` + Then review the log to confirm that the `metrics-server` package is running. + {{% accordion id="metrics-server-run-check" label="Metrics Server Log Output" %}} + I0723 08:09:56.193136 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:'' + I0723 08:09:56.193574 1 heapster.go:72] Metrics Server version v0.2.1 + I0723 08:09:56.194480 1 configs.go:61] Using Kubernetes client with master "https://10.43.0.1:443" and version + I0723 08:09:56.194501 1 configs.go:62] Using kubelet port 10255 + I0723 08:09:56.198612 1 heapster.go:128] Starting with Metric Sink + I0723 08:09:56.780114 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key) + I0723 08:09:57.391518 1 heapster.go:101] Starting Heapster API server... + [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] listing is available at https:///swaggerapi + [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/ + I0723 08:09:57.394080 1 serve.go:85] Serving securely on 0.0.0.0:443 + {{% /accordion %}} + + +1. Check that the metrics api is accessible from `kubectl`. + + + - If you are accessing the cluster through Rancher, enter your Server URL in the `kubectl` config in the following format: `https:///k8s/clusters/`. Add the suffix `/k8s/clusters/` to API path. + ``` + # kubectl get --raw /k8s/clusters//apis/metrics.k8s.io/v1beta1 + ``` + If the API is working correctly, you should receive output similar to the output below. + ``` + {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]} + ``` + + - If you are accessing the cluster directly, enter your Server URL in the kubectl config in the following format: `https://:6443`. + ``` + # kubectl get --raw /apis/metrics.k8s.io/v1beta1 + ``` + If the API is working correctly, you should receive output similar to the output below. + ``` + {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]} + ``` + +### Assigning Additional Required Roles to Your HPA + +By default, HPA reads resource and custom metrics with the user `system:anonymous`. Assign `system:anonymous` to `view-resource-metrics` and `view-custom-metrics` in the ClusterRole and ClusterRoleBindings manifests. These roles are used to access metrics. + +To do it, follow these steps: + +1. Configure `kubectl` to connect to your cluster. + +1. 
Copy the ClusterRole and ClusterRoleBinding manifest for the type of metrics you're using for your HPA. + {{% accordion id="cluster-role-resource-metrics" label="Resource Metrics: ApiGroups resource.metrics.k8s.io" %}} + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: view-resource-metrics + rules: + - apiGroups: + - metrics.k8s.io + resources: + - pods + - nodes + verbs: + - get + - list + - watch + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: view-resource-metrics + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: view-resource-metrics + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: User + name: system:anonymous + {{% /accordion %}} +{{% accordion id="cluster-role-custom-resources" label="Custom Metrics: ApiGroups custom.metrics.k8s.io" %}} + + ``` + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: view-custom-metrics + rules: + - apiGroups: + - custom.metrics.k8s.io + resources: + - "*" + verbs: + - get + - list + - watch + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: view-custom-metrics + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: view-custom-metrics + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: User + name: system:anonymous + ``` +{{% /accordion %}} +1. Create them in your cluster using one of the follow commands, depending on the metrics you're using. + ``` + # kubectl create -f + # kubectl create -f + ``` diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md new file mode 100644 index 00000000000..64030106794 --- /dev/null +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md @@ -0,0 +1,55 @@ +--- +title: Managing HPAs with the Rancher UI +weight: 3028 +--- + +In Rancher v2.3.x+, the Rancher UI supports creating, managing, and deleting HPAs. You can configure CPU or memory usage as the metric that the HPA uses to scale. + +For prior versions of Rancher, you can [manage HPAs using kubectl]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/). You also need to use `kubectl` if you want to create HPAs that scale based on other metrics than CPU and memory. + +Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use an HPA. + +## Creating an HPA + +1. From the **Global** view, open the project that you want to deploy a HPA to. + +1. Select **Workloads** in the navigation bar and then select the **HPA** tab. + +1. Click **Add HPA.** + +1. Enter a **Name** for the HPA. + +1. Select a **Namespace** for the HPA. + +1. Select a **Deployment** as scale target for the HPA. + +1. Specify the **Minimum Scale** and **Maximum Scale** for the HPA. + +1. Configure the metrics for the HPA. You can choose memory or CPU usage as the metric that will cause the HPA to scale the service up or down. In the **Quantity** field, enter the percentage of the workload's memory or CPU usage that will cause the HPA to scale the service. 
To configure other HPA metrics, including metrics available from Prometheus, you need to [manage HPAs using kubectl]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus). + +1. Click **Create** to create the HPA. + +> **Result:** The HPA is deployed to the chosen namespace. You can view the HPA's status from the project's Workloads > HPA view. + +## Get HPA Metrics and Status + +1. From the **Global** view, open the project with the HPAs you want to look at. + +1. Select **Workloads** in the navigation bar and then select the **HPA** tab. The **HPA** tab shows the number of current replicas. + +1. For more detailed metrics and status of a specific HPA, click the name of the HPA. This leads to the HPA detail page. + + +## Deleting an HPA + +1. From the **Global** view, open the project that you want to delete an HPA from. + +1. Select **Workloads** in the navigation bar and then select the **HPA** tab. + +1. Find the HPA which you would like to delete. + +1. Click **Ellipsis (...) > Delete**. + +1. Click **Delete** to confim. + +> **Result:** The HPA is deleted from the current cluster. \ No newline at end of file diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md new file mode 100644 index 00000000000..31a296bab94 --- /dev/null +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md @@ -0,0 +1,491 @@ +--- +title: Testing HPAs with kubectl +weight: 3029 +--- + +This document describes how to check the status of your HPAs after scaling them up or down with your load testing tool. For information on how to check the status from the Rancher UI (at least version 2.3.x), refer to [Managing HPAs with the Rancher UI]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/). + +For HPA to work correctly, service deployments should have resources request definitions for containers. Follow this hello-world example to test if HPA is working correctly. + +1. Configure `kubectl` to connect to your Kubernetes cluster. + +2. Copy the `hello-world` deployment manifest below. +{{% accordion id="hello-world" label="Hello World Manifest" %}} +``` +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + labels: + app: hello-world + name: hello-world + namespace: default +spec: + replicas: 1 + selector: + matchLabels: + app: hello-world + strategy: + rollingUpdate: + maxSurge: 1 + maxUnavailable: 0 + type: RollingUpdate + template: + metadata: + labels: + app: hello-world + spec: + containers: + - image: rancher/hello-world + imagePullPolicy: Always + name: hello-world + resources: + requests: + cpu: 500m + memory: 64Mi + ports: + - containerPort: 80 + protocol: TCP + restartPolicy: Always +--- +apiVersion: v1 +kind: Service +metadata: + name: hello-world + namespace: default +spec: + ports: + - port: 80 + protocol: TCP + targetPort: 80 + selector: + app: hello-world +``` +{{% /accordion %}} + +1. Deploy it to your cluster. + + ``` + # kubectl create -f + ``` + +1. 
Copy one of the HPAs below based on the metric type you're using: +{{% accordion id="service-deployment-resource-metrics" label="Hello World HPA: Resource Metrics" %}} +``` +apiVersion: autoscaling/v2beta1 +kind: HorizontalPodAutoscaler +metadata: + name: hello-world + namespace: default +spec: + scaleTargetRef: + apiVersion: extensions/v1beta1 + kind: Deployment + name: hello-world + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: Resource + resource: + name: cpu + targetAverageUtilization: 50 + - type: Resource + resource: + name: memory + targetAverageValue: 1000Mi +``` +{{% /accordion %}} +{{% accordion id="service-deployment-custom-metrics" label="Hello World HPA: Custom Metrics" %}} +``` +apiVersion: autoscaling/v2beta1 +kind: HorizontalPodAutoscaler +metadata: + name: hello-world + namespace: default +spec: + scaleTargetRef: + apiVersion: extensions/v1beta1 + kind: Deployment + name: hello-world + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: Resource + resource: + name: cpu + targetAverageUtilization: 50 + - type: Resource + resource: + name: memory + targetAverageValue: 100Mi + - type: Pods + pods: + metricName: cpu_system + targetAverageValue: 20m +``` +{{% /accordion %}} + +1. View the HPA info and description. Confirm that metric data is shown. + {{% accordion id="hpa-info-resource-metrics" label="Resource Metrics" %}} +1. Enter the following commands. + ``` + # kubectl get hpa + NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE + hello-world Deployment/hello-world 1253376 / 100Mi, 0% / 50% 1 10 1 6m + # kubectl describe hpa + Name: hello-world + Namespace: default + Labels: + Annotations: + CreationTimestamp: Mon, 23 Jul 2018 20:21:16 +0200 + Reference: Deployment/hello-world + Metrics: ( current / target ) + resource memory on pods: 1253376 / 100Mi + resource cpu on pods (as a percentage of request): 0% (0) / 50% + Min replicas: 1 + Max replicas: 10 + Conditions: + Type Status Reason Message + ---- ------ ------ ------- + AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale + ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource + ScalingLimited False DesiredWithinRange the desired count is within the acceptable range + Events: + ``` + {{% /accordion %}} + {{% accordion id="hpa-info-custom-metrics" label="Custom Metrics" %}} +1. Enter the following command. + ``` + # kubectl describe hpa + ``` + You should receive the output that follows. + ``` + Name: hello-world + Namespace: default + Labels: + Annotations: + CreationTimestamp: Tue, 24 Jul 2018 18:36:28 +0200 + Reference: Deployment/hello-world + Metrics: ( current / target ) + resource memory on pods: 3514368 / 100Mi + "cpu_system" on pods: 0 / 20m + resource cpu on pods (as a percentage of request): 0% (0) / 50% + Min replicas: 1 + Max replicas: 10 + Conditions: + Type Status Reason Message + ---- ------ ------ ------- + AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale + ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource + ScalingLimited False DesiredWithinRange the desired count is within the acceptable range + Events: + ``` + {{% /accordion %}} + + +1. Generate a load for the service to test that your pods autoscale as intended. You can use any load-testing tool (Hey, Gatling, etc.), but we're using [Hey](https://github.com/rakyll/hey). + +1. 
Test that pod autoscaling works as intended. The sketch below shows one way to generate the load described in the previous step, and the scenarios that follow show the expected scaling behavior for each metric type.
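+
+   A minimal example using `hey` is sketched here, assuming `hey` is installed on the machine driving the test. `<hello-world-endpoint>` is a placeholder for wherever the `hello-world` service is reachable from that machine (for example an ingress or node port), and the duration and concurrency are arbitrary values chosen to keep CPU usage above the 50% target for longer than the configured upscale delay.
+   ```
+   # hey -z 5m -c 50 http://<hello-world-endpoint>/
+   ```
+   Here `-z 5m` keeps the load running for five minutes and `-c 50` uses 50 concurrent workers; scale both to the capacity of your test cluster.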

+ **To Test Autoscaling Using Resource Metrics:** + {{% accordion id="observe-upscale-2-pods-cpu" label="Upscale to 2 Pods: CPU Usage Up to Target" %}} +Use your load testing tool to scale up to two pods based on CPU Usage. + +1. View your HPA. + ``` + # kubectl describe hpa + ``` + You should receive output similar to what follows. + ``` + Name: hello-world + Namespace: default + Labels: + Annotations: + CreationTimestamp: Mon, 23 Jul 2018 22:22:04 +0200 + Reference: Deployment/hello-world + Metrics: ( current / target ) + resource memory on pods: 10928128 / 100Mi + resource cpu on pods (as a percentage of request): 56% (280m) / 50% + Min replicas: 1 + Max replicas: 10 + Conditions: + Type Status Reason Message + ---- ------ ------ ------- + AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 2 + ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request) + ScalingLimited False DesiredWithinRange the desired count is within the acceptable range + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal SuccessfulRescale 13s horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target + ``` +1. Enter the following command to confirm you've scaled to two pods. + ``` + # kubectl get pods + ``` + You should receive output similar to what follows: + ``` + NAME READY STATUS RESTARTS AGE + hello-world-54764dfbf8-k8ph2 1/1 Running 0 1m + hello-world-54764dfbf8-q6l4v 1/1 Running 0 3h + ``` + {{% /accordion %}} + {{% accordion id="observe-upscale-3-pods-cpu-cooldown" label="Upscale to 3 pods: CPU Usage Up to Target" %}} +Use your load testing tool to upspace to 3 pods based on CPU usage with `horizontal-pod-autoscaler-upscale-delay` set to 3 minutes. + +1. Enter the following command. + ``` + # kubectl describe hpa + ``` + You should receive output similar to what follows + ``` + Name: hello-world + Namespace: default + Labels: + Annotations: + CreationTimestamp: Mon, 23 Jul 2018 22:22:04 +0200 + Reference: Deployment/hello-world + Metrics: ( current / target ) + resource memory on pods: 9424896 / 100Mi + resource cpu on pods (as a percentage of request): 66% (333m) / 50% + Min replicas: 1 + Max replicas: 10 + Conditions: + Type Status Reason Message + ---- ------ ------ ------- + AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 3 + ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request) + ScalingLimited False DesiredWithinRange the desired count is within the acceptable range + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal SuccessfulRescale 4m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target + Normal SuccessfulRescale 16s horizontal-pod-autoscaler New size: 3; reason: cpu resource utilization (percentage of request) above target + ``` +2. Enter the following command to confirm three pods are running. + ``` + # kubectl get pods + ``` + You should receive output similar to what follows. 
+ ``` + NAME READY STATUS RESTARTS AGE + hello-world-54764dfbf8-f46kh 0/1 Running 0 1m + hello-world-54764dfbf8-k8ph2 1/1 Running 0 5m + hello-world-54764dfbf8-q6l4v 1/1 Running 0 3h + ``` + {{% /accordion %}} + {{% accordion id="observe-downscale-1-pod" label="Downscale to 1 Pod: All Metrics Below Target" %}} +Use your load testing to scale down to 1 pod when all metrics are below target for `horizontal-pod-autoscaler-downscale-delay` (5 minutes by default). + +1. Enter the following command. + ``` + # kubectl describe hpa + ``` + You should receive output similar to what follows. + ``` + Name: hello-world + Namespace: default + Labels: + Annotations: + CreationTimestamp: Mon, 23 Jul 2018 22:22:04 +0200 + Reference: Deployment/hello-world + Metrics: ( current / target ) + resource memory on pods: 10070016 / 100Mi + resource cpu on pods (as a percentage of request): 0% (0) / 50% + Min replicas: 1 + Max replicas: 10 + Conditions: + Type Status Reason Message + ---- ------ ------ ------- + AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 1 + ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource + ScalingLimited False DesiredWithinRange the desired count is within the acceptable range + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal SuccessfulRescale 10m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target + Normal SuccessfulRescale 6m horizontal-pod-autoscaler New size: 3; reason: cpu resource utilization (percentage of request) above target + Normal SuccessfulRescale 1s horizontal-pod-autoscaler New size: 1; reason: All metrics below target + ``` + {{% /accordion %}} +
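+
+Whichever metric type you are testing, it can also help to follow the scaling activity live from a second terminal while the load is running. A minimal sketch, assuming the HPA and deployment are named `hello-world` as in the manifests above:
+```
+# kubectl get hpa hello-world -w
+# kubectl get pods -w
+```
+The `-w` flag streams updates as they arrive, so the changing replica count and the new pods are visible without re-running `kubectl describe hpa` after every scaling event.
+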
+**To Test Autoscaling Using Custom Metrics:** + {{% accordion id="custom-observe-upscale-2-pods-cpu" label="Upscale to 2 Pods: CPU Usage Up to Target" %}} +Use your load testing tool to upscale two pods based on CPU usage. + +1. Enter the following command. + ``` + # kubectl describe hpa + ``` + You should receive output similar to what follows. + ``` + Name: hello-world + Namespace: default + Labels: + Annotations: + CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200 + Reference: Deployment/hello-world + Metrics: ( current / target ) + resource memory on pods: 8159232 / 100Mi + "cpu_system" on pods: 7m / 20m + resource cpu on pods (as a percentage of request): 64% (321m) / 50% + Min replicas: 1 + Max replicas: 10 + Conditions: + Type Status Reason Message + ---- ------ ------ ------- + AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 2 + ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request) + ScalingLimited False DesiredWithinRange the desired count is within the acceptable range + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal SuccessfulRescale 16s horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target + ``` +1. Enter the following command to confirm two pods are running. + ``` + # kubectl get pods + ``` + You should receive output similar to what follows. + ``` + NAME READY STATUS RESTARTS AGE + hello-world-54764dfbf8-5pfdr 1/1 Running 0 3s + hello-world-54764dfbf8-q6l82 1/1 Running 0 6h + ``` + {{% /accordion %}} +{{% accordion id="observe-upscale-3-pods-cpu-cooldown-2" label="Upscale to 3 Pods: CPU Usage Up to Target" %}} +Use your load testing tool to scale up to three pods when the cpu_system usage limit is up to target. + +1. Enter the following command. + ``` + # kubectl describe hpa + ``` + You should receive output similar to what follows: + ``` + Name: hello-world + Namespace: default + Labels: + Annotations: + CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200 + Reference: Deployment/hello-world + Metrics: ( current / target ) + resource memory on pods: 8374272 / 100Mi + "cpu_system" on pods: 27m / 20m + resource cpu on pods (as a percentage of request): 71% (357m) / 50% + Min replicas: 1 + Max replicas: 10 + Conditions: + Type Status Reason Message + ---- ------ ------ ------- + AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 3 + ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request) + ScalingLimited False DesiredWithinRange the desired count is within the acceptable range + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal SuccessfulRescale 3m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target + Normal SuccessfulRescale 3s horizontal-pod-autoscaler New size: 3; reason: pods metric cpu_system above target + ``` +1. Enter the following command to confirm three pods are running. 
+ ``` + # kubectl get pods + ``` + You should receive output similar to what follows: + ``` + # kubectl get pods + NAME READY STATUS RESTARTS AGE + hello-world-54764dfbf8-5pfdr 1/1 Running 0 3m + hello-world-54764dfbf8-m2hrl 1/1 Running 0 1s + hello-world-54764dfbf8-q6l82 1/1 Running 0 6h + ``` +{{% /accordion %}} +{{% accordion id="observe-upscale-4-pods" label="Upscale to 4 Pods: CPU Usage Up to Target" %}} +Use your load testing tool to upscale to four pods based on CPU usage. `horizontal-pod-autoscaler-upscale-delay` is set to three minutes by default. + +1. Enter the following command. + ``` + # kubectl describe hpa + ``` + You should receive output similar to what follows. + ``` + Name: hello-world + Namespace: default + Labels: + Annotations: + CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200 + Reference: Deployment/hello-world + Metrics: ( current / target ) + resource memory on pods: 8374272 / 100Mi + "cpu_system" on pods: 27m / 20m + resource cpu on pods (as a percentage of request): 71% (357m) / 50% + Min replicas: 1 + Max replicas: 10 + Conditions: + Type Status Reason Message + ---- ------ ------ ------- + AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 3 + ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request) + ScalingLimited False DesiredWithinRange the desired count is within the acceptable range + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal SuccessfulRescale 5m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target + Normal SuccessfulRescale 3m horizontal-pod-autoscaler New size: 3; reason: pods metric cpu_system above target + Normal SuccessfulRescale 4s horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target + ``` +1. Enter the following command to confirm four pods are running. + ``` + # kubectl get pods + ``` + You should receive output similar to what follows. + ``` + NAME READY STATUS RESTARTS AGE + hello-world-54764dfbf8-2p9xb 1/1 Running 0 5m + hello-world-54764dfbf8-5pfdr 1/1 Running 0 2m + hello-world-54764dfbf8-m2hrl 1/1 Running 0 1s + hello-world-54764dfbf8-q6l82 1/1 Running 0 6h + ``` +{{% /accordion %}} +{{% accordion id="custom-metrics-observe-downscale-1-pod" label="Downscale to 1 Pod: All Metrics Below Target" %}} +Use your load testing tool to scale down to one pod when all metrics below target for `horizontal-pod-autoscaler-downscale-delay`. + +1. Enter the following command. + ``` + # kubectl describe hpa + ``` + You should receive similar output to what follows. 
+ ``` + Name: hello-world + Namespace: default + Labels: + Annotations: + CreationTimestamp: Tue, 24 Jul 2018 18:01:11 +0200 + Reference: Deployment/hello-world + Metrics: ( current / target ) + resource memory on pods: 8101888 / 100Mi + "cpu_system" on pods: 8m / 20m + resource cpu on pods (as a percentage of request): 0% (0) / 50% + Min replicas: 1 + Max replicas: 10 + Conditions: + Type Status Reason Message + ---- ------ ------ ------- + AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 1 + ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource + ScalingLimited False DesiredWithinRange the desired count is within the acceptable range + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal SuccessfulRescale 10m horizontal-pod-autoscaler New size: 2; reason: cpu resource utilization (percentage of request) above target + Normal SuccessfulRescale 8m horizontal-pod-autoscaler New size: 3; reason: pods metric cpu_system above target + Normal SuccessfulRescale 5m horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target + Normal SuccessfulRescale 13s horizontal-pod-autoscaler New size: 1; reason: All metrics below target + ``` +1. Enter the following command to confirm a single pods is running. + ``` + # kubectl get pods + ``` + You should receive output similar to what follows. + ``` + NAME READY STATUS RESTARTS AGE + hello-world-54764dfbf8-q6l82 1/1 Running 0 6h + ``` +{{% /accordion %}} \ No newline at end of file From aeada1af3d5e4124b48b133a393deed6bf513390 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 14 Jun 2019 11:42:51 -0700 Subject: [PATCH 22/33] Remove Global Registry docs --- .../admin-settings/globalregistry/_index.md | 58 ---------- .../globalregistry/harbor/_index.md | 101 ------------------ 2 files changed, 159 deletions(-) delete mode 100644 content/rancher/v2.x/en/admin-settings/globalregistry/_index.md delete mode 100644 content/rancher/v2.x/en/admin-settings/globalregistry/harbor/_index.md diff --git a/content/rancher/v2.x/en/admin-settings/globalregistry/_index.md b/content/rancher/v2.x/en/admin-settings/globalregistry/_index.md deleted file mode 100644 index 6170072aed4..00000000000 --- a/content/rancher/v2.x/en/admin-settings/globalregistry/_index.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -title: Global Registry -weight: 1145 ---- - -_Available as of v2.3.0_ - -Rancher's Global Registry provides a way to set up a [Harbor](https://github.com/goharbor/harbor) registry to store and manage your docker images. The Global Registry reuses the same SSL certificate of Rancher server so you don't need to prepare additional certificates for it. The CA root certificate is added to every node of managed kubernetes clusters. Therefore, in the case where you're using a private certificate authority, you can use images from the Global Registry without additional configuration of the docker daemon on cluster nodes. - -> **Note:** Global Registry is only available in [HA setups]({{< baseurl >}}/rancher/v2.x/en/installation/ha/) with the [`local` cluster enabled]({{< baseurl >}}/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/#import-local-cluster). 
- -## Prerequisites - -Depending on the configuration options you use, check the following prerequisites before enabling Global Registry: - -- If you use `filesystem` type for docker registry storage, or use `internal` type database or Redis, [persistent volumes]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/) are required in the local cluster. -- If you use `external` type database, you need to create databases in PostgreSQL before registry deployment. You can configure which databases to use in the configuration options. - -## Enabling Global Registry - -As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/), you can configure Rancher to deploy the Global Registry. - -1. From the **Global** view, select **Tools > Global Registry** from the main menu. - -1. Enter in your desired configuration options. For detail instructions, follow the [Configuration Options]({{< baseurl >}}/rancher/v2.x/en/admin-settings/globalregistry/harbor/) section. - -1. Click **Save**. - -**Result:** A Harbor instance will be deployed as an [application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) named `global-registry-harbor` to local cluster's `system` project. - -## Disabling Global Registry - -To disable the Global Registry: - -1. From the **Global** view, select **Tools > Global Registry** from the main menu. - -1. Click **Disable registry**, then click the red button again to confirm the disable action. - -**Result:** The `global-registry-harbor` application in local cluster's `system` project gets removed. Note that persistent volumes used by the Global Registry will not be removed on disabling, so as to prevent data lost. You need to manually delete relevant volumes in local cluster's `system` project if you want to clean them up. - -## Using Global Registry - -Once the Global Registry is enabled, you can: - -1. Access Harbor UI through the endpoint `/registry`. - -1. Use the Rancher server hostname as the registry hostname in image names. For example: - ``` - docker pull /library/busybox:latest - ``` - -1. If Notary is enabled, the endpoint for notary server is `/registry/notary`. - -1. Use Global Registry as a private registry in Rancher projects, see [how to use registries]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/registries/). - -> **Notes:** -> ->- The authentication of Harbor is independent of Rancher authentication, you should log in to Harbor UI and manage Harbor users for registry account management. diff --git a/content/rancher/v2.x/en/admin-settings/globalregistry/harbor/_index.md b/content/rancher/v2.x/en/admin-settings/globalregistry/harbor/_index.md deleted file mode 100644 index 765c71eb274..00000000000 --- a/content/rancher/v2.x/en/admin-settings/globalregistry/harbor/_index.md +++ /dev/null @@ -1,101 +0,0 @@ ---- -title: Global Registry Configuration -weight: 1 ---- - -_Available as of v2.3.0-alpha_ - -While configuring global registry, there are multiple options that can be configured. - -## General - -Field | Description | Required | Editable | Default -----|-----------------|------------|------------|------------ -Admin Password | The initial password of Harbor admin. Change it from Harbor UI after the registry is ready | Yes | No | n/a -Encryption Key For Harbor | The key used for encryption. 
Must be a string of 16 chars | No | Yes | n/a - -## Registry - -Field | Description | Required | Editable | Default -----|-----------------|------------|------------|------------ -Storage Backend Type | Storage type for images: `filesystem` or `s3`. If `filesystem` is selected, persistent volume is required in your local cluster. | Yes | No | filesystem -Source | Whether to use a storage class to provision a new PV or to use an existing PVC | Yes | Yes | Use a storage class -Storage Class | Specify the storage class used to provision the persistent volume(A storage class is required in the local cluster to use this option) | Yes, when use SC | Yes | The default storage class -Persistent Volume Size | Specify the size of the persistent volume | Yes, when use SC | Yes | 100Gi -Existing Claim | Specify the existing PVC for registry images(An existing PVC is required to use this option) | Yes, when use existing PV | Yes | n/a -Registry CPU Limit | CPU limit for the docker registry workload | Yes | Yes | 1000 (milli CPUs) -Registry Memory Limit | Memory limit for the docker registry workload | Yes | Yes | 2048 (MiB) -Registry CPU Reservation | CPU reservation for the docker registry workload | Yes | Yes | 100 (milli CPUs) -Registry Memory Reservation | Memory reservation for the docker registry workload | Yes | Yes | 256 (MiB) -Registry Node Selector | Select the nodes where the docker registry workload will be scheduled to | No | Yes | n/a - -## Database - -Field | Description | Required | Editable | Default -----|-----------------|------------|------------|------------ -Config Database Type | Choose `internal` or `external`. When `internal` is selected, a PostgreSQL workload will be included in the application, and a persistent volume is required for it. When `external` is selected, you can configure an external PostgreSQL. 
You should create databases for Harbor core service, Clair and Notary before enabling.| Yes | No | internal -Source | Whether to use a storage class to provision a new PV or to use an existing PVC | Yes, when use internal database | Yes | Use a storage class -Storage Class | Specify the storage class used to provision the persistent volume(A storage class is required in the local cluster to use this option) | Yes, when use SC and internal database | Yes | The default storage class -Persistent Volume Size | Specify the size of the persistent volume | Yes, when use SC and internal database | Yes | 5Gi -Existing Claim | Specify the existing PVC for PostgreSQL database(An existing PVC is required to use this option) | Yes, when use existing PV and internal database | Yes | n/a -Database CPU Limit | CPU limit for the database workload | Yes | Yes | 500 (milli CPUs) -Database Memory Limit | Memory limit for the database workload | Yes | Yes | 2048 (MiB) -Database CPU Reservation | CPU reservation for the database workload | Yes | Yes | 100 (milli CPUs) -Database Memory Reservation | Memory reservation for the database workload | Yes | Yes | 256 (MiB) -Database Node Selector | Select the nodes where the database workload will be scheduled to | No (Only shows when use external database) | Yes | n/a -SSL Mode for PostgreSQL | SSL mode used to connect the external database | No (Only shows when use external database) | Yes | disable -Host for PostgreSQL | The hostname for external database | Yes (Only shows when use external database) | Yes | n/a -Port for PostgreSQL | The port for external database | Yes (Only shows when use external database) | Yes | 5432 -Username for PostgreSQL | The username for external database | Yes (Only shows when use external database) | Yes | n/a -Password for PostgreSQL | The password for external database | Yes (Only shows when use external database) | Yes | n/a -Core Database | The database used by core service | No (Only shows when use external database) | Yes | registry -Clair Database | The database used by Clair | No (Only shows when use external database) | Yes | clair -Notary Server Database | The database used by Notary server | No (Only shows when use external database) | Yes | notary_server -Notary Signer Database | The database used by Notary signer | No (Only shows when use external database) | Yes | notary_signer - - -## Redis - -Field | Description | Required | Editable | Default -----|-----------------|------------|------------|------------ -Config Redis Type | Choose `internal` or `external`. When `internal` is selected, a Redis workload will be included in the application, and a persistent volume is required for it. When `external` is selected, you can configure an external Redis. 
| Yes | No | internal -Source | Whether to use a storage class to provision a new PV or to use an existing PVC | Yes, when use internal Redis | Yes | Use a storage class -Storage Class | Specify the storage class used to provision the persistent volume(A storage class is required in the local cluster to use this option) | Yes, when use SC and internal Redis | Yes | The default storage class -Persistent Volume Size | Specify the size of the persistent volume | Yes, when use SC and internal Redis | Yes | 5Gi -Existing Claim | Specify the existing PVC for Redis(An existing PVC is required to use this option) | Yes, when use existing PV and internal Redis | Yes | n/a -Redis CPU Limit | CPU limit for the Redis workload | Yes | Yes | 500 (milli CPUs) -Redis Memory Limit | Memory limit for the Redis workload | Yes | Yes | 2048 (MiB) -Redis CPU Reservation | CPU reservation for the Redis workload | Yes | Yes | 100 (milli CPUs) -Redis Memory Reservation | Memory reservation for the Redis workload | Yes | Yes | 256 (MiB) -Redis Node Selector | Select the nodes where the Redis workload will be scheduled to | No | Yes | n/a -Host for Redis | The hostname for external Redis | Yes (Only shows when use external Redis) | Yes | n/a -Port for Redis | The port for external Redis | Yes (Only shows when use external Redis) | Yes | 6379 -Password for Redis | The password for external Redis | No (Only shows when use external Redis) | Yes | n/a -Jobservice Database Index | The database index for jobservice | Yes (Only shows when use external Redis) | Yes | n/a -Registry Database Index | The database index for docker registry | Yes (Only shows when use external Redis) | Yes | n/a - -## Clair - -Field | Description | Required | Editable | Default -----|-----------------|------------|------------|------------ -Enable Clair | Whether or not to enable Clair for vulnerabilities scanning | Yes | Yes | true -Clair CPU Limit | CPU limit for the Clair workload | Yes, when Clair enabled | Yes | 500 (milli CPUs) -Clair Memory Limit | Memory limit for the Clair workload | Yes, when Clair enabled | Yes | 2048 (MiB) -Clair CPU Reservation | CPU reservation for the Clair workload | Yes, when Clair enabled | Yes | 100 (milli CPUs) -Clair Memory Reservation | Memory reservation for the Clair workload | Yes, when Clair enabled | Yes | 256 (MiB) -Clair Node Selector | Select the nodes where the Clair workload will be scheduled to | Yes, when Clair enabled | Yes | n/a - -## Notary - -Field | Description | Required | Editable | Default -----|-----------------|------------|------------|------------ -Enable Notary | Whether or not to enable Notary for [Docker Content Trust](https://docs.docker.com/engine/security/trust/content_trust/). When enabled, the access endpoint to the Notary server is `/registry/notary`. 
| Yes | Yes | true -Notary Server CPU Limit | CPU limit for the Notary Server workload | Yes, when Notary enabled | Yes | 500 (milli CPUs) -Notary Server Memory Limit | Memory limit for the Notary Server workload | Yes, when Notary enabled | Yes | 2048 (MiB) -Notary Server CPU Reservation | CPU reservation for the Notary Server workload | Yes, when Notary enabled | Yes | 100 (milli CPUs) -Notary Server Memory Reservation | Memory reservation for the Notary Server workload | Yes, when Notary enabled | Yes | 256 (MiB) -Notary Signer CPU Limit | CPU limit for the Notary Signer workload | Yes, when Notary enabled | Yes | 500 (milli CPUs) -Notary Signer Memory Limit | Memory limit for the Notary Signer workload | Yes, when Notary enabled | Yes | 2048 (MiB) -Notary Signer CPU Reservation | CPU reservation for the Notary Signer workload | Yes, when Notary enabled | Yes | 100 (milli CPUs) -Notary Signer Memory Reservation | Memory reservation for the Notary Signer workload | Yes, when Notary enabled | Yes | 256 (MiB) -Notary Node Selector | Select the nodes where the Notary Server and Notary Signer workloads will be scheduled to | No | Yes | n/a From 0d4d5106b1ef84b8e545087602efdec254b0ca68 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 14 Jun 2019 12:55:49 -0700 Subject: [PATCH 23/33] Change 'service mesh' to Istio in docs --- .../tools/service-mesh/_index.md | 28 +++++++++---------- .../tools/service-mesh/istio/_index.md | 4 +-- .../en/project-admin/service-mesh/_index.md | 4 +-- 3 files changed, 18 insertions(+), 18 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md index d080d5d5f77..d02b6a44023 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md @@ -1,5 +1,5 @@ --- -title: Service Mesh +title: Istio weight: 5 --- @@ -7,39 +7,39 @@ _Available as of v2.3.0-alpha_ Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. -## Enabling Service Mesh +## Enabling Istio As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Istio to your Kubernetes cluster. 1. From the **Global** view, navigate to the cluster that you want to configure the service mesh for. -1. Select **Tools > Service Mesh** in the navigation bar. +1. Select **Tools > Istio** in the navigation bar. -1. Select **Enable** to show the [Service mesh configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/). Enter in your desired configuration options. Ensure you have enough resources for service mesh and on your worker nodes to enable service mesh. +1. Select **Enable** to show the [Istio configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/). Enter in your desired configuration options. Ensure you have enough resources for the service mesh and on your worker nodes to enable the service mesh. 1. Click **Save**. 
**Result:** The Istio application, `cluster-istio`, is added as an [application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project. After the application is `active`, you can start using Istio. -> **Note:** When enabling service mesh, you need to ensure your worker nodes and Istio pod have enough resources. In larger deployments, it is strongly advised that the service mesh infrastructure be placed on dedicated nodes in the cluster. +> **Note:** When enabling the service mesh, you need to ensure your worker nodes and Istio pod have enough resources. In larger deployments, it is strongly advised that the service mesh infrastructure be placed on dedicated nodes in the cluster. -## Using Service Mesh +## Using Istio -Once the service mesh is `active`, you can: +Once Istio is `active`, you can see visualizations for your service mesh across several services: -1. Access [Kiali UI](https://www.kiali.io/) by clicking Kiali UI icon in service mesh page. -1. Access [Jaeger UI](https://www.jaegertracing.io/) by clicking Jaeger UI icon in service mesh page. -1. Access [Grafana UI](https://grafana.com/) by clicking Grafana UI icon in service mesh page. -1. Access [Prometheus UI](https://prometheus.io/) by clicking Prometheus UI icon in service mesh page. +1. Access [Kiali UI](https://www.kiali.io/) by clicking the Kiali UI icon in the Istio page. +1. Access [Jaeger UI](https://www.jaegertracing.io/) by clicking the Jaeger UI icon in the Istio page. +1. Access [Grafana UI](https://grafana.com/) by clicking the Grafana UI icon in the Istio page. +1. Access [Prometheus UI](https://prometheus.io/) by clicking the Prometheus UI icon in the Istio page. 1. Go to a project to [view traffic graph, traffic metrics and manage traffic]({{< baseurl >}}/rancher/v2.x/en/project-admin/service-mesh/). -## Disabling Service Mesh +## Disabling Istio -To disable the service mesh: +To disable Istio: 1. From the **Global** view, navigate to the cluster that you want to disable the service mesh for. -1. Select **Tools > Service Mesh** in the navigation bar. +1. Select **Tools > Istio** in the navigation bar. 1. Click **Disable Istio**, then click the red button again to confirm the disable action. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md index 5f84f6eb7fd..c2fa0f7bbee 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md @@ -1,11 +1,11 @@ --- -title: Service Mesh Configuration +title: Istio Configuration weight: 1 --- _Available as of v2.3.0-alpha_ -There are several configuration options for the service mesh. +There are several configuration options for Istio. ## PILOT diff --git a/content/rancher/v2.x/en/project-admin/service-mesh/_index.md b/content/rancher/v2.x/en/project-admin/service-mesh/_index.md index 12885629f8a..1c99c4ec7b3 100644 --- a/content/rancher/v2.x/en/project-admin/service-mesh/_index.md +++ b/content/rancher/v2.x/en/project-admin/service-mesh/_index.md @@ -1,5 +1,5 @@ --- -title: Service Mesh +title: Istio weight: 3528 --- @@ -12,7 +12,7 @@ Using Rancher, you can connect, secure, control, and observe services through in >- [Service Mesh]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/) must be enabled in the cluster. 
>- To be a part of an Istio service mesh, pods and services in a Kubernetes cluster must satisfy the [Istio Pods and Services Requirements](https://istio.io/docs/setup/kubernetes/prepare/requirements/) -## Istio sidecar auto injection +## Istio Sidecar Auto Injection In the create and edit namespace page, you can enable or disable [Istio sidecar auto injection](https://istio.io/blog/2019/data-plane-setup/#automatic-injection). When you enable it, Rancher will add `istio-injection=enabled` label to the namespace automatically. From 3380c5b912de2c813eaf8a7c0368cba4fbf8d825 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 14 Jun 2019 14:01:50 -0700 Subject: [PATCH 24/33] Change folder names from serv0ce-mesh to istio --- .../en/cluster-admin/tools/{service-mesh => istio}/_index.md | 4 ++-- .../tools/{service-mesh/istio => istio/config}/_index.md | 0 .../v2.x/en/project-admin/{service-mesh => istio}/_index.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) rename content/rancher/v2.x/en/cluster-admin/tools/{service-mesh => istio}/_index.md (88%) rename content/rancher/v2.x/en/cluster-admin/tools/{service-mesh/istio => istio/config}/_index.md (100%) rename content/rancher/v2.x/en/project-admin/{service-mesh => istio}/_index.md (95%) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md similarity index 88% rename from content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md rename to content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md index d02b6a44023..1b556986945 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md @@ -15,7 +15,7 @@ As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global 1. Select **Tools > Istio** in the navigation bar. -1. Select **Enable** to show the [Istio configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/). Enter in your desired configuration options. Ensure you have enough resources for the service mesh and on your worker nodes to enable the service mesh. +1. Select **Enable** to show the [Istio configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/istio/config/). Enter in your desired configuration options. Ensure you have enough resources for the service mesh and on your worker nodes to enable the service mesh. 1. Click **Save**. @@ -31,7 +31,7 @@ Once Istio is `active`, you can see visualizations for your service mesh across 1. Access [Jaeger UI](https://www.jaegertracing.io/) by clicking the Jaeger UI icon in the Istio page. 1. Access [Grafana UI](https://grafana.com/) by clicking the Grafana UI icon in the Istio page. 1. Access [Prometheus UI](https://prometheus.io/) by clicking the Prometheus UI icon in the Istio page. -1. Go to a project to [view traffic graph, traffic metrics and manage traffic]({{< baseurl >}}/rancher/v2.x/en/project-admin/service-mesh/). +1. Go to a project to [view traffic graph, traffic metrics and manage traffic]({{< baseurl >}}/rancher/v2.x/en/project-admin/istio/). 
## Disabling Istio diff --git a/content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md similarity index 100% rename from content/rancher/v2.x/en/cluster-admin/tools/service-mesh/istio/_index.md rename to content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md diff --git a/content/rancher/v2.x/en/project-admin/service-mesh/_index.md b/content/rancher/v2.x/en/project-admin/istio/_index.md similarity index 95% rename from content/rancher/v2.x/en/project-admin/service-mesh/_index.md rename to content/rancher/v2.x/en/project-admin/istio/_index.md index 1c99c4ec7b3..5bd15a488d1 100644 --- a/content/rancher/v2.x/en/project-admin/service-mesh/_index.md +++ b/content/rancher/v2.x/en/project-admin/istio/_index.md @@ -9,7 +9,7 @@ Using Rancher, you can connect, secure, control, and observe services through in >**Prerequisites:** > ->- [Service Mesh]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/service-mesh/) must be enabled in the cluster. +>- [Istio]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/istio/) must be enabled in the cluster. >- To be a part of an Istio service mesh, pods and services in a Kubernetes cluster must satisfy the [Istio Pods and Services Requirements](https://istio.io/docs/setup/kubernetes/prepare/requirements/) ## Istio Sidecar Auto Injection From 9c8940d244f57c2ca9efa6f8b94cd693582a4a83 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 14 Jun 2019 14:20:32 -0700 Subject: [PATCH 25/33] Change 'service mesh' to Istio in more places --- .../v2.x/en/cluster-admin/tools/istio/_index.md | 10 +++++----- content/rancher/v2.x/en/project-admin/istio/_index.md | 6 +++--- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md index 1b556986945..50ccf14efe1 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md @@ -11,21 +11,21 @@ Using Rancher, you can connect, secure, control, and observe services through in As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Istio to your Kubernetes cluster. -1. From the **Global** view, navigate to the cluster that you want to configure the service mesh for. +1. From the **Global** view, navigate to the cluster that you want to configure Istio for. 1. Select **Tools > Istio** in the navigation bar. -1. Select **Enable** to show the [Istio configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/istio/config/). Enter in your desired configuration options. Ensure you have enough resources for the service mesh and on your worker nodes to enable the service mesh. +1. Select **Enable** to show the [Istio configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/istio/config/). Enter in your desired configuration options. Ensure you have enough resources on your worker nodes to enable Istio. 1. Click **Save**. **Result:** The Istio application, `cluster-istio`, is added as an [application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project. After the application is `active`, you can start using Istio. 
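Since the note below calls out resource pressure as the most common problem, a quick capacity check before enabling Istio can save a failed rollout. This is only a rough sketch and assumes `kubectl` access; `kubectl top` additionally requires a metrics server running in the cluster.

```
# Current CPU and memory usage per node (needs a metrics server)
kubectl top nodes

# Allocatable capacity versus the resources already requested on each node
kubectl describe nodes | grep -A 8 "Allocated resources"
```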
-> **Note:** When enabling the service mesh, you need to ensure your worker nodes and Istio pod have enough resources. In larger deployments, it is strongly advised that the service mesh infrastructure be placed on dedicated nodes in the cluster. +> **Note:** When enabling Istio, you need to ensure your worker nodes and Istio pod have enough resources. In larger deployments, it is strongly advised that the infrastructure be placed on dedicated nodes in the cluster. ## Using Istio -Once Istio is `active`, you can see visualizations for your service mesh across several services: +Once Istio is `active`, you can see visualizations of your Istio service mesh across several services: 1. Access [Kiali UI](https://www.kiali.io/) by clicking the Kiali UI icon in the Istio page. 1. Access [Jaeger UI](https://www.jaegertracing.io/) by clicking the Jaeger UI icon in the Istio page. @@ -37,7 +37,7 @@ Once Istio is `active`, you can see visualizations for your service mesh across To disable Istio: -1. From the **Global** view, navigate to the cluster that you want to disable the service mesh for. +1. From the **Global** view, navigate to the cluster that you want to disable Istio for. 1. Select **Tools > Istio** in the navigation bar. diff --git a/content/rancher/v2.x/en/project-admin/istio/_index.md b/content/rancher/v2.x/en/project-admin/istio/_index.md index 5bd15a488d1..2c7c26bc6ce 100644 --- a/content/rancher/v2.x/en/project-admin/istio/_index.md +++ b/content/rancher/v2.x/en/project-admin/istio/_index.md @@ -20,13 +20,13 @@ In the create and edit namespace page, you can enable or disable [Istio sidecar ## View Traffic Graph -Rancher integrates Kiali Graph into the Rancher UI. The Kiali graph provides a powerful way to visualize the topology of your service mesh. It shows you which services communicate with each other. +Rancher integrates Kiali Graph into the Rancher UI. The Kiali graph provides a powerful way to visualize the topology of your Istio service mesh. It shows you which services communicate with each other. To see the traffic graph for a particular namespace: 1. From the **Global** view, navigate to the project that you want to view traffic graph for. -1. Select **Service Mesh** in the navigation bar. +1. Select **Istio** in the navigation bar. 1. Select **Traffic Graph** in the navigation bar. @@ -40,7 +40,7 @@ To see the Success Rate, Request Volume, 4xx Request Count, Project 5xx Request 1. From the **Global** view, navigate to the project that you want to view traffic metrics for. -1. Select **Service Mesh** in the navigation bar. +1. Select **Istio** in the navigation bar. 1. Select **Traffic Metrics** in the navigation bar. From 94205e15b0ad721a8006ffe8241865414c281568 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Fri, 14 Jun 2019 16:44:32 -0700 Subject: [PATCH 26/33] Add context to Istio config docs --- .../v2.x/en/cluster-admin/tools/istio/config/_index.md | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md index c2fa0f7bbee..831444a2d89 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md @@ -5,10 +5,14 @@ weight: 1 _Available as of v2.3.0-alpha_ -There are several configuration options for Istio. +There are several configuration options for Istio. 
You can find more information about Istio configuration in the [official Istio documentation](https://istio.io/docs/concepts/what-is-istio). ## PILOT +Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (e.g., A/B tests, canary rollouts, etc.), and resiliency (timeouts, retries, circuit breakers, etc.). + +For more information on Pilot, refer to the [documentation](https://istio.io/docs/concepts/traffic-management/#pilot-and-envoy). + Option | Description| Required | Default -------|------------|-------|------- Pilot CPU Limit | CPU resource limit for the istio-pilot pod.| Yes | 1000 @@ -20,6 +24,8 @@ Pilot Selector | Ability to select the nodes in which istio-pilot pod is deploye ## MIXER +Mixer is a platform-independent component. Mixer enforces access control and usage policies across the service mesh, and collects telemetry data from the Envoy proxy and other services. For more information on Mixer, policies and telemetry, refer to the [documentation](https://istio.io/docs/concepts/policies-and-telemetry/). + Option | Description| Required | Default -------|------------|-------|------- Mixer Telemetry CPU Limit | CPU resource limit for the istio-telemetry pod.| Yes | 4800 From efd887ea944bc0ee8dc037c8b7048edfb36c6795 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Mon, 17 Jun 2019 14:23:12 -0700 Subject: [PATCH 27/33] Add clarifications to Istio docs --- .../v2.x/en/cluster-admin/tools/istio/_index.md | 9 +++++++++ .../en/cluster-admin/tools/istio/config/_index.md | 8 ++++++++ .../rancher/v2.x/en/project-admin/istio/_index.md | 12 ++++++------ 3 files changed, 23 insertions(+), 6 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md index 50ccf14efe1..2523cd6ae7e 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md @@ -27,6 +27,15 @@ As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global Once Istio is `active`, you can see visualizations of your Istio service mesh across several services: +- **Kiali** helps you define, validate, and observe your Istio service mesh. Kiali shows you what services are in your mesh and how they are connected. Kiali includes Jaeger Tracing to provide distributed tracing out of the box. +- **Jaeger** is a distributed tracing system released as open source by Uber Technologies. It is used for monitoring and troubleshooting microservices-based distributed systems. +- **Grafana** is an analytics platform that allows you to query, visualize, alert on and understand your metrics. Grafana lets you visualize data from Prometheus. +- **Prometheus** is a systems monitoring and alerting toolkit. + +Kiali, Jaeger, Grafana, and Prometheus are open-source. + +With Istio enabled, you can: + 1. Access [Kiali UI](https://www.kiali.io/) by clicking the Kiali UI icon in the Istio page. 1. Access [Jaeger UI](https://www.jaegertracing.io/) by clicking the Jaeger UI icon in the Istio page. 1. Access [Grafana UI](https://grafana.com/) by clicking the Grafana UI icon in the Istio page. 
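The `Selector` options described above only take effect if the target nodes carry a matching label. A hypothetical way to prepare dedicated nodes for the Istio control plane is shown below; the node name and the label key/value are placeholders, so substitute whatever you plan to enter in the selector fields.

```
# Label a node that should be reserved for Istio components
kubectl label nodes <node-name> istio-infra=true

# Confirm which nodes carry the label
kubectl get nodes --show-labels | grep istio-infra
```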
diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md index 831444a2d89..4336acdb33f 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md @@ -41,6 +41,8 @@ Mixer Selector | Ability to select the nodes in which istio-policy and istio-tel ## TRACING +Istio-enabled applications can collect trace spans. For more information on distributed tracing with Istio, refer to the [documentation](https://istio.io/docs/tasks/telemetry/distributed-tracing/overview/). + Option | Description| Required | Default -------|------------|-------|------- Enable Tracing | Whether or not to deploy the istio-tracing. | Yes | True @@ -52,6 +54,8 @@ Tracing Selector | Ability to select the nodes in which tracing pod is deployed ## INGRESS GATEWAY +The Istio Gateway allows Istio features such as monitoring and route rules to be applied to traffic entering the cluster. For more information, refer to the [documentation](https://istio.io/docs/tasks/traffic-management/ingress/). + Option | Description| Required | Default -------|------------|-------|------- Enable Ingress Gateway | Whether or not to deploy the istio-ingressgateway. | Yes | False @@ -68,6 +72,8 @@ Ingress Gateway Selector | Ability to select the nodes in which istio-ingressgat ## PROMETHEUS +You can query for Istio metrics using Prometheus. Prometheus is an open-source systems monitoring and alerting toolkit. + Option | Description| Required | Default -------|------------|-------|------- Prometheus CPU Limit | CPU resource limit for the Prometheus pod.| Yes | 1000 @@ -79,6 +85,8 @@ Prometheus Selector | Ability to select the nodes in which Prometheus pod is dep ## GRAFANA +You can visualize metrics with Grafana. Grafana is a tool that lets you visualize Istio traffic data. + Option | Description| Required | Default -------|------------|-------|------- Enable Grafana | Whether or not to deploy the Grafana.| Yes | True diff --git a/content/rancher/v2.x/en/project-admin/istio/_index.md b/content/rancher/v2.x/en/project-admin/istio/_index.md index 2c7c26bc6ce..a8980067e54 100644 --- a/content/rancher/v2.x/en/project-admin/istio/_index.md +++ b/content/rancher/v2.x/en/project-admin/istio/_index.md @@ -10,17 +10,19 @@ Using Rancher, you can connect, secure, control, and observe services through in >**Prerequisites:** > >- [Istio]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/istio/) must be enabled in the cluster. ->- To be a part of an Istio service mesh, pods and services in a Kubernetes cluster must satisfy the [Istio Pods and Services Requirements](https://istio.io/docs/setup/kubernetes/prepare/requirements/) +>- To be a part of an Istio service mesh, pods and services in a Kubernetes cluster must satisfy the [Istio Pods and Services Requirements](https://istio.io/docs/setup/kubernetes/prepare/requirements/). ## Istio Sidecar Auto Injection In the create and edit namespace page, you can enable or disable [Istio sidecar auto injection](https://istio.io/blog/2019/data-plane-setup/#automatic-injection). When you enable it, Rancher will add `istio-injection=enabled` label to the namespace automatically. +After the `istio-injection=enabled` label is added to the namespace, all pods that are created in the namespace will have an injected Istio sidecar. + > **Note:** Injection occurs at pod creation time. If the pod has been created before you enable auto injection. 
You need to kill the running pod and verify a new pod is created with the injected sidecar. ## View Traffic Graph -Rancher integrates Kiali Graph into the Rancher UI. The Kiali graph provides a powerful way to visualize the topology of your Istio service mesh. It shows you which services communicate with each other. +Rancher integrates a Kiali graph into the Rancher UI. The Kiali graph provides a powerful way to visualize the topology of your Istio service mesh. It shows you which services communicate with each other. To see the traffic graph for a particular namespace: @@ -30,13 +32,11 @@ To see the traffic graph for a particular namespace: 1. Select **Traffic Graph** in the navigation bar. -1. Select the namespace. Note: It only shows the namespaces which has `istio-injection=enabled` label. +1. Select the namespace. Note: It only shows the namespaces which have the `istio-injection=enabled` label. ## View Traffic Metrics -Istio’s monitoring features provide visibility into the performance of all your services. - -To see the Success Rate, Request Volume, 4xx Request Count, Project 5xx Request Count and Request Duration metrics: +Istio’s monitoring features provide visibility into the performance of all your services. To see the Success Rate, Request Volume, 4xx Response Count, Project 5xx Response Count and Request Duration metrics: 1. From the **Global** view, navigate to the project that you want to view traffic metrics for. From d2be9ce725f749f6c4cb21130165c54455c898f3 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Mon, 17 Jun 2019 15:04:26 -0700 Subject: [PATCH 28/33] Change numbered list to bullets in Istio docs --- .../v2.x/en/cluster-admin/tools/istio/_index.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md index 2523cd6ae7e..71d4e7e04f8 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md @@ -36,11 +36,11 @@ Kiali, Jaeger, Grafana, and Prometheus are open-source. With Istio enabled, you can: -1. Access [Kiali UI](https://www.kiali.io/) by clicking the Kiali UI icon in the Istio page. -1. Access [Jaeger UI](https://www.jaegertracing.io/) by clicking the Jaeger UI icon in the Istio page. -1. Access [Grafana UI](https://grafana.com/) by clicking the Grafana UI icon in the Istio page. -1. Access [Prometheus UI](https://prometheus.io/) by clicking the Prometheus UI icon in the Istio page. -1. Go to a project to [view traffic graph, traffic metrics and manage traffic]({{< baseurl >}}/rancher/v2.x/en/project-admin/istio/). +- Access [Kiali UI](https://www.kiali.io/) by clicking the Kiali UI icon in the Istio page. +- Access [Jaeger UI](https://www.jaegertracing.io/) by clicking the Jaeger UI icon in the Istio page. +- Access [Grafana UI](https://grafana.com/) by clicking the Grafana UI icon in the Istio page. +- Access [Prometheus UI](https://prometheus.io/) by clicking the Prometheus UI icon in the Istio page. +- Go to a project to [view traffic graph, traffic metrics and manage traffic]({{< baseurl >}}/rancher/v2.x/en/project-admin/istio/). 
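Rancher applies the `istio-injection=enabled` label for you when you toggle auto injection on the namespace page, but the equivalent `kubectl` commands look roughly like this if you manage namespaces outside the Rancher UI (`demo` is a placeholder namespace name):

```
# Turn on sidecar auto injection for an existing namespace
kubectl label namespace demo istio-injection=enabled

# Injection only happens at pod creation time, so delete the existing pods in
# that namespace and let their controllers recreate them with the sidecar
kubectl delete pods --all --namespace demo
```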
## Disabling Istio From 1900a8bd0aa3a0a24b0478060b5665b562e10f94 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Tue, 18 Jun 2019 14:45:25 -0700 Subject: [PATCH 29/33] Add clarifications to Istio docs --- .../v2.x/en/cluster-admin/tools/istio/_index.md | 17 ++++++++++++----- .../v2.x/en/project-admin/istio/_index.md | 8 ++++++-- 2 files changed, 18 insertions(+), 7 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md index 71d4e7e04f8..fc5cbd02863 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md @@ -7,6 +7,12 @@ _Available as of v2.3.0-alpha_ Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. +## Prerequisites + +The required resource allocation for each service is listed in the [configuration options]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/istio/config/). Please review it before attempting to enable Istio. + +In larger deployments, it is strongly advised that the infrastructure be placed on dedicated nodes in the cluster. + ## Enabling Istio As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Istio to your Kubernetes cluster. @@ -21,19 +27,16 @@ As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global **Result:** The Istio application, `cluster-istio`, is added as an [application]({{< baseurl >}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project. After the application is `active`, you can start using Istio. -> **Note:** When enabling Istio, you need to ensure your worker nodes and Istio pod have enough resources. In larger deployments, it is strongly advised that the infrastructure be placed on dedicated nodes in the cluster. -## Using Istio +## Using Istio for Metrics Visualization -Once Istio is `active`, you can see visualizations of your Istio service mesh across several services: +Once Istio is `active`, you can see visualizations of your Istio service mesh with Kiali, Jaeger, Grafana, and Prometheus, which are all open-source projects that Rancher has integrated with. - **Kiali** helps you define, validate, and observe your Istio service mesh. Kiali shows you what services are in your mesh and how they are connected. Kiali includes Jaeger Tracing to provide distributed tracing out of the box. - **Jaeger** is a distributed tracing system released as open source by Uber Technologies. It is used for monitoring and troubleshooting microservices-based distributed systems. - **Grafana** is an analytics platform that allows you to query, visualize, alert on and understand your metrics. Grafana lets you visualize data from Prometheus. - **Prometheus** is a systems monitoring and alerting toolkit. -Kiali, Jaeger, Grafana, and Prometheus are open-source. - With Istio enabled, you can: - Access [Kiali UI](https://www.kiali.io/) by clicking the Kiali UI icon in the Istio page. 
@@ -42,6 +45,10 @@ With Istio enabled, you can: - Access [Prometheus UI](https://prometheus.io/) by clicking the Prometheus UI icon in the Istio page. - Go to a project to [view traffic graph, traffic metrics and manage traffic]({{< baseurl >}}/rancher/v2.x/en/project-admin/istio/). +## Leveraging Istio in Projects + +After you enable Istio, you can see traphic metrics and a traffic graph on the project level. You can see a traffic graph for all namespaces that have Istio sidecar injection enabled. For more information, refer to the [Project Administration docs for Istio]]({{< baseurl >}}/rancher/v2.x/en/project-admin/istio/). + ## Disabling Istio To disable Istio: diff --git a/content/rancher/v2.x/en/project-admin/istio/_index.md b/content/rancher/v2.x/en/project-admin/istio/_index.md index a8980067e54..b3a31e69ae1 100644 --- a/content/rancher/v2.x/en/project-admin/istio/_index.md +++ b/content/rancher/v2.x/en/project-admin/istio/_index.md @@ -7,6 +7,8 @@ _Available as of v2.3.0-alpha_ Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. +Istio requires each pod in the service mesh to run an Istio compatible sidecar. This section describes how to set up Istio sidecar auto injection in the Rancher UI. For more information on the Istio sidecar, refer to the [Istio docs](https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/). + >**Prerequisites:** > >- [Istio]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/istio/) must be enabled in the cluster. @@ -14,11 +16,13 @@ Using Rancher, you can connect, secure, control, and observe services through in ## Istio Sidecar Auto Injection +If an Istio sidecar is not injected into a pod, Istio will not work for that pod. If you enable Istio sidecar auto injection for a namespace, all pods created in the namespace will have an injected Istio sidecar. + In the create and edit namespace page, you can enable or disable [Istio sidecar auto injection](https://istio.io/blog/2019/data-plane-setup/#automatic-injection). When you enable it, Rancher will add `istio-injection=enabled` label to the namespace automatically. -After the `istio-injection=enabled` label is added to the namespace, all pods that are created in the namespace will have an injected Istio sidecar. +Injection occurs at pod creation time. If the pod has been created before you enable auto injection, you need to kill the running pod and verify that a new pod is created with the injected sidecar. -> **Note:** Injection occurs at pod creation time. If the pod has been created before you enable auto injection. You need to kill the running pod and verify a new pod is created with the injected sidecar. +For information on how to inject the Istio sidecar manually, refer to the [Istio docs](https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection/). 
## View Traffic Graph From 44930ef252e7aa07ff9459d64ab8a257b1ae4fb8 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Tue, 18 Jun 2019 11:46:51 -0700 Subject: [PATCH 30/33] Change v2.3.0-alpha to v2.3.0-alph4 for HPA and Istio docs --- content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md | 2 +- .../rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md | 2 +- .../v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md | 2 +- content/rancher/v2.x/en/project-admin/istio/_index.md | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md index 71d4e7e04f8..3cb99dd5f47 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md @@ -3,7 +3,7 @@ title: Istio weight: 5 --- -_Available as of v2.3.0-alpha_ +_Available as of v2.3.0-alpha4_ Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md index 4336acdb33f..7eaa13a31c3 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/config/_index.md @@ -3,7 +3,7 @@ title: Istio Configuration weight: 1 --- -_Available as of v2.3.0-alpha_ +_Available as of v2.3.0-alpha4_ There are several configuration options for Istio. You can find more information about Istio configuration in the [official Istio documentation](https://istio.io/docs/concepts/what-is-istio). diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md index c87c223204d..ccd7c91cabc 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md @@ -7,7 +7,7 @@ Using the Kubernetes [Horizontal Pod Autoscaler](https://kubernetes.io/docs/task Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler. -You can create, manage, and delete HPAs using the Rancher UI in Rancher v2.3.0-alpha and higher versions. It only supports HPA in the `autoscaling/v2beta2` API. +You can create, manage, and delete HPAs using the Rancher UI in Rancher v2.3.0-alpha4 and higher versions. It only supports HPA in the `autoscaling/v2beta2` API. ## Why Use Horizontal Pod Autoscaler? diff --git a/content/rancher/v2.x/en/project-admin/istio/_index.md b/content/rancher/v2.x/en/project-admin/istio/_index.md index a8980067e54..f7facdaa464 100644 --- a/content/rancher/v2.x/en/project-admin/istio/_index.md +++ b/content/rancher/v2.x/en/project-admin/istio/_index.md @@ -3,7 +3,7 @@ title: Istio weight: 3528 --- -_Available as of v2.3.0-alpha_ +_Available as of v2.3.0-alpha4_ Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. 
Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. From fde94d3ef1c13dc4186702faf34a95de7666158b Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Tue, 18 Jun 2019 15:52:33 -0700 Subject: [PATCH 31/33] Clarify version requirements for HPA docs --- .../horitzontal-pod-autoscaler/_index.md | 28 ++- .../hpa-for-rancher-before-2_0_7/_index.md | 188 ++++++++++++++++++ 2 files changed, 214 insertions(+), 2 deletions(-) create mode 100644 content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md index c87c223204d..5dc7312dac0 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md @@ -5,9 +5,33 @@ weight: 3026 Using the Kubernetes [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) feature (HPA), you can configure your cluster to automatically scale the services it's running up or down. -Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use Horizontal Pod Autoscaler. +HPAs are handled differently based on your version of Rancher and your version of the Kubernetes API. -You can create, manage, and delete HPAs using the Rancher UI in Rancher v2.3.0-alpha and higher versions. It only supports HPA in the `autoscaling/v2beta2` API. +### For Kubernetes API version `autoscaling/V2beta1` + +This version of the Kubernetes API lets you autoscale your pods based on the CPU and memory utilization of your application. + +### For Kubernetes API Version `autoscaling/V2beta2` + +This version of the Kubernetes API lets you autoscale your pods based on CPU and memory utilization, in addition to custom metrics. + +### For Rancher v2.0.7+ + +Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use HPA. + +### For Rancher Prior to v2.0.7 + +Clusters created in Rancher prior to v2.0.7 don't automatically have the requirements needed to use HPA. For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7). + +### For Rancher Prior to v2.3.0-alpha + +You can [manage HPAs using `kubectl`]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl). + +### For Rancher v2.3.0-alpha+ + +You can create, manage, and delete HPAs using the Rancher UI. From the Rancher UI you can configure the HPA to scale based on CPU and memory utilization. + +For configuring HPA to scale based on custom metrics, you still need to use `kubectl`. For more information, refer to [Managing HPAs using `kubectl`]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md) ## Why Use Horizontal Pod Autoscaler? 
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md new file mode 100644 index 00000000000..08be0b5664d --- /dev/null +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md @@ -0,0 +1,188 @@ +--- +title: Managing HPAs with kubectl +weight: 3027 +--- + +# Manual Installation for Clusters Created Before Rancher v2.0.7 + +Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements. + +### Requirements + +Be sure that your Kubernetes cluster services are running with these flags at minimum: + +- kube-api: `requestheader-client-ca-file` +- kubelet: `read-only-port` at 10255 +- kube-controller: Optional, just needed if distinct values than default are required. + + - `horizontal-pod-autoscaler-downscale-delay: "5m0s"` + - `horizontal-pod-autoscaler-upscale-delay: "3m0s"` + - `horizontal-pod-autoscaler-sync-period: "30s"` + +For an RKE Kubernetes cluster definition, add this snippet in the `services` section. To add this snippet using the Rancher v2.0 UI, open the **Clusters** view and select **Ellipsis (...) > Edit** for the cluster in which you want to use HPA. Then, from **Cluster Options**, click **Edit as YAML**. Add the following snippet to the `services` section: + +``` +services: +... + kube-api: + extra_args: + requestheader-client-ca-file: "/etc/kubernetes/ssl/kube-ca.pem" + kube-controller: + extra_args: + horizontal-pod-autoscaler-downscale-delay: "5m0s" + horizontal-pod-autoscaler-upscale-delay: "1m0s" + horizontal-pod-autoscaler-sync-period: "30s" + kubelet: + extra_args: + read-only-port: 10255 +``` + +Once the Kubernetes cluster is configured and deployed, you can deploy metrics services. + +>**Note:** `kubectl` command samples in the sections that follow were tested in a cluster running Rancher v2.0.6 and Kubernetes v1.10.1. + +### Configuring HPA to Scale Using Resource Metrics + +To create HPA resources based on resource metrics such as CPU and memory use, you need to deploy the `metrics-server` package in the `kube-system` namespace of your Kubernetes cluster. This deployment allows HPA to consume the `metrics.k8s.io` API. + +>**Prerequisite:** You must be running `kubectl` 1.8 or later. + +1. Connect to your Kubernetes cluster using `kubectl`. + +1. Clone the GitHub `metrics-server` repo: + ``` + # git clone https://github.com/kubernetes-incubator/metrics-server + ``` + +1. Install the `metrics-server` package. + ``` + # kubectl create -f metrics-server/deploy/1.8+/ + ``` + +1. Check that `metrics-server` is running properly. Check the service pod and logs in the `kube-system` namespace. + + 1. Check the service pod for a status of `running`. Enter the following command: + ``` + # kubectl get pods -n kube-system + ``` + Then check for the status of `running`. + ``` + NAME READY STATUS RESTARTS AGE + ... + metrics-server-6fbfb84cdd-t2fk9 1/1 Running 0 8h + ... + ``` + 1. Check the service logs for service availability. Enter the following command: + ``` + # kubectl -n kube-system logs metrics-server-6fbfb84cdd-t2fk9 + ``` + Then review the log to confirm that the `metrics-server` package is running. 
+ {{% accordion id="metrics-server-run-check" label="Metrics Server Log Output" %}} + I0723 08:09:56.193136 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:'' + I0723 08:09:56.193574 1 heapster.go:72] Metrics Server version v0.2.1 + I0723 08:09:56.194480 1 configs.go:61] Using Kubernetes client with master "https://10.43.0.1:443" and version + I0723 08:09:56.194501 1 configs.go:62] Using kubelet port 10255 + I0723 08:09:56.198612 1 heapster.go:128] Starting with Metric Sink + I0723 08:09:56.780114 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key) + I0723 08:09:57.391518 1 heapster.go:101] Starting Heapster API server... + [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] listing is available at https:///swaggerapi + [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/ + I0723 08:09:57.394080 1 serve.go:85] Serving securely on 0.0.0.0:443 + {{% /accordion %}} + + +1. Check that the metrics api is accessible from `kubectl`. + + + - If you are accessing the cluster through Rancher, enter your Server URL in the `kubectl` config in the following format: `https:///k8s/clusters/`. Add the suffix `/k8s/clusters/` to API path. + ``` + # kubectl get --raw /k8s/clusters//apis/metrics.k8s.io/v1beta1 + ``` + If the API is working correctly, you should receive output similar to the output below. + ``` + {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]} + ``` + + - If you are accessing the cluster directly, enter your Server URL in the kubectl config in the following format: `https://:6443`. + ``` + # kubectl get --raw /apis/metrics.k8s.io/v1beta1 + ``` + If the API is working correctly, you should receive output similar to the output below. + ``` + {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]} + ``` + +### Assigning Additional Required Roles to Your HPA + +By default, HPA reads resource and custom metrics with the user `system:anonymous`. Assign `system:anonymous` to `view-resource-metrics` and `view-custom-metrics` in the ClusterRole and ClusterRoleBindings manifests. These roles are used to access metrics. + +To do it, follow these steps: + +1. Configure `kubectl` to connect to your cluster. + +1. Copy the ClusterRole and ClusterRoleBinding manifest for the type of metrics you're using for your HPA. 
+ {{% accordion id="cluster-role-resource-metrics" label="Resource Metrics: ApiGroups resource.metrics.k8s.io" %}} + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: view-resource-metrics + rules: + - apiGroups: + - metrics.k8s.io + resources: + - pods + - nodes + verbs: + - get + - list + - watch + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: view-resource-metrics + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: view-resource-metrics + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: User + name: system:anonymous + {{% /accordion %}} +{{% accordion id="cluster-role-custom-resources" label="Custom Metrics: ApiGroups custom.metrics.k8s.io" %}} + + ``` + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRole + metadata: + name: view-custom-metrics + rules: + - apiGroups: + - custom.metrics.k8s.io + resources: + - "*" + verbs: + - get + - list + - watch + --- + apiVersion: rbac.authorization.k8s.io/v1 + kind: ClusterRoleBinding + metadata: + name: view-custom-metrics + roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: view-custom-metrics + subjects: + - apiGroup: rbac.authorization.k8s.io + kind: User + name: system:anonymous + ``` +{{% /accordion %}} +1. Create them in your cluster using one of the follow commands, depending on the metrics you're using. + ``` + # kubectl create -f + # kubectl create -f + ``` From ac476af1e6555e4ebebca34c97a958846dac3f50 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Tue, 18 Jun 2019 15:57:35 -0700 Subject: [PATCH 32/33] Clarify reference to how to use Istio in project --- content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md | 2 +- content/rancher/v2.x/en/project-admin/istio/_index.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md index fc5cbd02863..38048cefb8d 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/istio/_index.md @@ -47,7 +47,7 @@ With Istio enabled, you can: ## Leveraging Istio in Projects -After you enable Istio, you can see traphic metrics and a traffic graph on the project level. You can see a traffic graph for all namespaces that have Istio sidecar injection enabled. For more information, refer to the [Project Administration docs for Istio]]({{< baseurl >}}/rancher/v2.x/en/project-admin/istio/). +After you enable Istio, you can see traphic metrics and a traffic graph on the project level. You can see a traffic graph for all namespaces that have Istio sidecar injection enabled. For more information, refer to [How to Use Istio in Your Project]({{< baseurl >}}/rancher/v2.x/en/project-admin/istio/). 
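To check at a glance which namespaces will show up in the project-level traffic graph, you can list them together with their injection label. This assumes `kubectl` access and is only a convenience; the same information is visible on the namespace pages in the UI.

```
# Show all namespaces with an extra column for the istio-injection label
kubectl get namespaces -L istio-injection
```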
## Disabling Istio diff --git a/content/rancher/v2.x/en/project-admin/istio/_index.md b/content/rancher/v2.x/en/project-admin/istio/_index.md index b3a31e69ae1..325a162b78c 100644 --- a/content/rancher/v2.x/en/project-admin/istio/_index.md +++ b/content/rancher/v2.x/en/project-admin/istio/_index.md @@ -1,5 +1,5 @@ --- -title: Istio +title: How to Use Istio in Your Project weight: 3528 --- From 4caa9bf23742e8c2da7dfac0d46b678453132e85 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Tue, 18 Jun 2019 17:00:10 -0700 Subject: [PATCH 33/33] Rearrange HPA docs and clarify versions --- .../horitzontal-pod-autoscaler/_index.md | 79 ++----- .../hpa-background/_index.md | 40 ++++ .../hpa-for-rancher-before-2_0_7/_index.md | 6 +- .../manage-hpa-with-kubectl/_index.md | 206 ++---------------- .../manage-hpa-with-rancher-ui/_index.md | 6 +- .../testing-hpa/_index.md | 2 +- 6 files changed, 77 insertions(+), 262 deletions(-) create mode 100644 content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background/_index.md diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md index 5dc7312dac0..2ddfb8d3734 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/_index.md @@ -3,75 +3,28 @@ title: Horizontal Pod Autoscaler weight: 3026 --- -Using the Kubernetes [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) feature (HPA), you can configure your cluster to automatically scale the services it's running up or down. +The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) is a Kubernetes feature that allows you to configure your cluster to automatically scale the services it's running up or down. -HPAs are handled differently based on your version of Rancher and your version of the Kubernetes API. +Rancher provides some additional features to help manage HPAs, depending on the version of Rancher. -### For Kubernetes API version `autoscaling/V2beta1` - -This version of the Kubernetes API lets you autoscale your pods based on the CPU and memory utilization of your application. - -### For Kubernetes API Version `autoscaling/V2beta2` - -This version of the Kubernetes API lets you autoscale your pods based on CPU and memory utilization, in addition to custom metrics. - -### For Rancher v2.0.7+ - -Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use HPA. - -### For Rancher Prior to v2.0.7 - -Clusters created in Rancher prior to v2.0.7 don't automatically have the requirements needed to use HPA. For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7). - -### For Rancher Prior to v2.3.0-alpha - -You can [manage HPAs using `kubectl`]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl). - -### For Rancher v2.3.0-alpha+ - -You can create, manage, and delete HPAs using the Rancher UI. From the Rancher UI you can configure the HPA to scale based on CPU and memory utilization. - -For configuring HPA to scale based on custom metrics, you still need to use `kubectl`. 
For more information, refer to [Managing HPAs using `kubectl`]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md) - -## Why Use Horizontal Pod Autoscaler? - -Using HPA, you can automatically scale the number of pods within a replication controller, deployment, or replica set up or down. HPA automatically scales the number of pods that are running for maximum efficiency. Factors that affect the number of pods include: - -- A minimum and maximum number of pods allowed to run, as defined by the user. -- Observed CPU/memory use, as reported in resource metrics. -- Custom metrics provided by third-party metrics application like Prometheus, Datadog, etc. - -HPA improves your services by: - -- Releasing hardware resources that would otherwise be wasted by an excessive number of pods. -- Increase/decrease performance as needed to accomplish service level agreements. - -## How HPA Works - -![HPA Schema]({{< baseurl >}}/img/rancher/horizontal-pod-autoscaler.jpg) - -HPA is implemented as a control loop, with a period controlled by the `kube-controller-manager` flags below: - -Flag | Default | Description | ----------|----------|----------| - `--horizontal-pod-autoscaler-sync-period` | `30s` | How often HPA audits resource/custom metrics in a deployment. - `--horizontal-pod-autoscaler-downscale-delay` | `5m0s` | Following completion of a downscale operation, how long HPA must wait before launching another downscale operations. - `--horizontal-pod-autoscaler-upscale-delay` | `3m0s` | Following completion of an upscale operation, how long HPA must wait before launching another upscale operation. - - -For full documentation on HPA, refer to the [Kubernetes Documentation](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). - -## Horizontal Pod Autoscaler API Objects - -HPA is an API resource in the Kubernetes `autoscaling` API group. The current stable version is `autoscaling/v1`, which only includes support for CPU autoscaling. To get additional support for scaling based on memory and custom metrics, use the beta version instead: `autoscaling/v2beta1`. - -For more information about the HPA API object, see the [HPA GitHub Readme](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object). +For more information on how HPA works and why it is used, refer to [Background Information on HPAs]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background). ## Managing HPAs -In Rancher v2.3.x+, the Rancher UI supports [creating, managing, and deleting HPAs]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/). It lets you configure CPU or memory usage as the metric that the HPA uses to scale. +The way that you manage HPAs is different based on your version of the Kubernetes API: -For prior versions of Rancher, you can [manage HPAs using `kubectl`]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md). You also need to use `kubectl` if you want to create HPAs that scale based on other metrics than CPU and memory. +- **For Kubernetes API version autoscaling/V2beta1:** This version of the Kubernetes API lets you autoscale your pods based on the CPU and memory utilization of your application. 
+- **For Kubernetes API Version autoscaling/V2beta2:** This version of the Kubernetes API lets you autoscale your pods based on CPU and memory utilization, in addition to custom metrics. + +HPAs are also managed differently based on your version of Rancher: + +- **For Rancher Prior to v2.3.0-alpha:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl). +- **For Rancher v2.3.0-alpha+**: You can create, manage, and delete HPAs using the Rancher UI. From the Rancher UI you can configure the HPA to scale based on CPU and memory utilization. For more information, refer to [Managing HPAs with the Rancher UI]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). To scale the HPA based on custom metrics, you still need to use `kubectl`. For more information, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus). + +You might have additional HPA installation steps if you are using an older version of Rancher: + +- **For Rancher v2.0.7+:** Clusters created in Rancher v2.0.7 and higher automatically have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use HPA. +- **For Rancher Prior to v2.0.7:** Clusters created in Rancher prior to v2.0.7 don't automatically have the requirements needed to use HPA. For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7). ## Testing HPAs with a Service Deployment diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background/_index.md new file mode 100644 index 00000000000..222b0cb3d8c --- /dev/null +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background/_index.md @@ -0,0 +1,40 @@ +--- +title: Background Information on HPAs +weight: 3027 +--- + +The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) is a Kubernetes feature that allows you to configure your cluster to automatically scale the services it's running up or down. This section provides explanation on how HPA works with Kubernetes. + +## Why Use Horizontal Pod Autoscaler? + +Using HPA, you can automatically scale the number of pods within a replication controller, deployment, or replica set up or down. HPA automatically scales the number of pods that are running for maximum efficiency. Factors that affect the number of pods include: + +- A minimum and maximum number of pods allowed to run, as defined by the user. +- Observed CPU/memory use, as reported in resource metrics. +- Custom metrics provided by third-party metrics application like Prometheus, Datadog, etc. + +HPA improves your services by: + +- Releasing hardware resources that would otherwise be wasted by an excessive number of pods. +- Increase/decrease performance as needed to accomplish service level agreements. 
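As a concrete point of reference, a minimal HPA object that scales a workload on CPU utilization looks roughly like this in the `autoscaling/v2beta2` API. The `hello-world` Deployment name is a placeholder, and the `v2beta2` API version assumes a reasonably recent Kubernetes release.

```
# A minimal sketch of an HPA that keeps average CPU utilization near 50%
kubectl apply -f - <<EOF
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-world
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-world
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
EOF

# Watch the autoscaler's current and desired replica counts
kubectl get hpa hello-world --watch
```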
+ +## How HPA Works + +![HPA Schema]({{< baseurl >}}/img/rancher/horizontal-pod-autoscaler.jpg) + +HPA is implemented as a control loop, with a period controlled by the `kube-controller-manager` flags below: + +Flag | Default | Description | +---------|----------|----------| + `--horizontal-pod-autoscaler-sync-period` | `30s` | How often HPA audits resource/custom metrics in a deployment. + `--horizontal-pod-autoscaler-downscale-delay` | `5m0s` | Following completion of a downscale operation, how long HPA must wait before launching another downscale operations. + `--horizontal-pod-autoscaler-upscale-delay` | `3m0s` | Following completion of an upscale operation, how long HPA must wait before launching another upscale operation. + + +For full documentation on HPA, refer to the [Kubernetes Documentation](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). + +## Horizontal Pod Autoscaler API Objects + +HPA is an API resource in the Kubernetes `autoscaling` API group. The current stable version is `autoscaling/v1`, which only includes support for CPU autoscaling. To get additional support for scaling based on memory and custom metrics, use the beta version instead: `autoscaling/v2beta1`. + +For more information about the HPA API object, see the [HPA GitHub Readme](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object). \ No newline at end of file diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md index 08be0b5664d..1d6d4584a0b 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7/_index.md @@ -1,9 +1,9 @@ --- -title: Managing HPAs with kubectl -weight: 3027 +title: Manual HPA Installation for Clusters Created Before Rancher v2.0.7 +weight: 3050 --- -# Manual Installation for Clusters Created Before Rancher v2.0.7 +This section describes how to manually install HPAs for clusters created with Rancher prior to v2.0.7. This section also describes how to configure your HPA to scale up or down, and how to assign roles to your HPA. Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements. diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md index f55e0164bff..9775f319cc3 100644 --- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md +++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/_index.md @@ -1,13 +1,23 @@ --- title: Managing HPAs with kubectl -weight: 3027 +weight: 3029 --- -In Rancher v2.3.x, a feature was added to the UI to manage HPAs. In the UI, you can create, view, and delete HPAs, and you can configure them to scale based on CPU or memory usage. +This section describes HPA management with `kubectl`. This document has instructions for how to: -For versions of Rancher prior to 2.3.x, or for scaling HPAs based on other metrics, you need `kubectl` to manage HPAs. 
+- Create an HPA
+- Get information on HPAs
+- Delete an HPA
+- Configure your HPAs to scale with CPU or memory utilization
+- Configure your HPAs to scale using custom metrics, if you use a third-party tool such as Prometheus for metrics
-This section describes HPA management with `kubectl`.
+### Note For Rancher v2.3.x
+
+In Rancher v2.3.x, you can create, view, and delete HPAs from the Rancher UI, and you can configure them to scale based on CPU or memory usage. For more information, refer to [Managing HPAs with the Rancher UI]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). To scale HPAs based on metrics other than CPU or memory, you still need `kubectl`.
+
+### Note For Rancher Prior to v2.0.7
+
+Clusters created with older versions of Rancher don't automatically have all the requirements to create an HPA. To install an HPA on these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7).
 
 # Basic kubectl Command for Managing HPAs
@@ -187,190 +197,4 @@ For HPA to use custom metrics from Prometheus, package [k8s-prometheus-adapter](
 If the API is accessible, you should receive output that's similar to what follows.
 {{% accordion id="custom-metrics-api-response-rancher" label="API Response" %}}
 {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"pods/fs_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_rss","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_period","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_read","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/last_seen","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/tasks_state","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_quota","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/start_time_seconds","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_write","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_cache","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_working_set_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_udp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes_free","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time_weighted","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failures","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_swap","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_shares","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_swap_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_current","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failcnt","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_tcp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_max_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_reservation_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_load_average_10s","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_system","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]}]}
- {{% /accordion %}}
-
-
-
-# Manual Installation for Clusters Created Before Rancher v2.0.7
-
-Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements.
-
-### Requirements
-
-Be sure that your Kubernetes cluster services are running with these flags at minimum:
-
-- kube-api: `requestheader-client-ca-file`
-- kubelet: `read-only-port` at 10255
-- kube-controller: Optional, just needed if distinct values than default are required.
-
-  - `horizontal-pod-autoscaler-downscale-delay: "5m0s"`
-  - `horizontal-pod-autoscaler-upscale-delay: "3m0s"`
-  - `horizontal-pod-autoscaler-sync-period: "30s"`
-
-For an RKE Kubernetes cluster definition, add this snippet in the `services` section. To add this snippet using the Rancher v2.0 UI, open the **Clusters** view and select **Ellipsis (...) > Edit** for the cluster in which you want to use HPA. Then, from **Cluster Options**, click **Edit as YAML**. Add the following snippet to the `services` section:
-
-```
-services:
-...
-  kube-api:
-    extra_args:
-      requestheader-client-ca-file: "/etc/kubernetes/ssl/kube-ca.pem"
-  kube-controller:
-    extra_args:
-      horizontal-pod-autoscaler-downscale-delay: "5m0s"
-      horizontal-pod-autoscaler-upscale-delay: "1m0s"
-      horizontal-pod-autoscaler-sync-period: "30s"
-  kubelet:
-    extra_args:
-      read-only-port: 10255
-```
-
-Once the Kubernetes cluster is configured and deployed, you can deploy metrics services.
-
->**Note:** `kubectl` command samples in the sections that follow were tested in a cluster running Rancher v2.0.6 and Kubernetes v1.10.1.
-
-### Configuring HPA to Scale Using Resource Metrics
-
-To create HPA resources based on resource metrics such as CPU and memory use, you need to deploy the `metrics-server` package in the `kube-system` namespace of your Kubernetes cluster. This deployment allows HPA to consume the `metrics.k8s.io` API.
-
->**Prerequisite:** You must be running `kubectl` 1.8 or later.
-
-1. Connect to your Kubernetes cluster using `kubectl`.
-
-1. Clone the GitHub `metrics-server` repo:
-    ```
-    # git clone https://github.com/kubernetes-incubator/metrics-server
-    ```
-
-1. Install the `metrics-server` package.
-    ```
-    # kubectl create -f metrics-server/deploy/1.8+/
-    ```
-
-1. Check that `metrics-server` is running properly. Check the service pod and logs in the `kube-system` namespace.
-
-    1. Check the service pod for a status of `running`. Enter the following command:
-        ```
-        # kubectl get pods -n kube-system
-        ```
-        Then check for the status of `running`.
-        ```
-        NAME                              READY     STATUS    RESTARTS   AGE
-        ...
-        metrics-server-6fbfb84cdd-t2fk9   1/1       Running   0          8h
-        ...
-        ```
-    1. Check the service logs for service availability. Enter the following command:
-        ```
-        # kubectl -n kube-system logs metrics-server-6fbfb84cdd-t2fk9
-        ```
-        Then review the log to confirm that the `metrics-server` package is running.
-        {{% accordion id="metrics-server-run-check" label="Metrics Server Log Output" %}}
-        I0723 08:09:56.193136 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
-        I0723 08:09:56.193574 1 heapster.go:72] Metrics Server version v0.2.1
-        I0723 08:09:56.194480 1 configs.go:61] Using Kubernetes client with master "https://10.43.0.1:443" and version
-        I0723 08:09:56.194501 1 configs.go:62] Using kubelet port 10255
-        I0723 08:09:56.198612 1 heapster.go:128] Starting with Metric Sink
-        I0723 08:09:56.780114 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
-        I0723 08:09:57.391518 1 heapster.go:101] Starting Heapster API server...
-        [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
-        [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
-        I0723 08:09:57.394080 1 serve.go:85] Serving securely on 0.0.0.0:443
-        {{% /accordion %}}
-
-
-1. Check that the metrics api is accessible from `kubectl`.
-
-
-    - If you are accessing the cluster through Rancher, enter your Server URL in the `kubectl` config in the following format: `https:///k8s/clusters/`. Add the suffix `/k8s/clusters/` to API path.
-        ```
-        # kubectl get --raw /k8s/clusters//apis/metrics.k8s.io/v1beta1
-        ```
-        If the API is working correctly, you should receive output similar to the output below.
-        ```
-        {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}
-        ```
-
-    - If you are accessing the cluster directly, enter your Server URL in the kubectl config in the following format: `https://:6443`.
-        ```
-        # kubectl get --raw /apis/metrics.k8s.io/v1beta1
-        ```
-        If the API is working correctly, you should receive output similar to the output below.
-        ```
-        {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}
-        ```
-
-### Assigning Additional Required Roles to Your HPA
-
-By default, HPA reads resource and custom metrics with the user `system:anonymous`. Assign `system:anonymous` to `view-resource-metrics` and `view-custom-metrics` in the ClusterRole and ClusterRoleBindings manifests. These roles are used to access metrics.
-
-To do it, follow these steps:
-
-1. Configure `kubectl` to connect to your cluster.
-
-1. Copy the ClusterRole and ClusterRoleBinding manifest for the type of metrics you're using for your HPA.
-    {{% accordion id="cluster-role-resource-metrics" label="Resource Metrics: ApiGroups resource.metrics.k8s.io" %}}
-    apiVersion: rbac.authorization.k8s.io/v1
-    kind: ClusterRole
-    metadata:
-      name: view-resource-metrics
-    rules:
-    - apiGroups:
-      - metrics.k8s.io
-      resources:
-      - pods
-      - nodes
-      verbs:
-      - get
-      - list
-      - watch
-    ---
-    apiVersion: rbac.authorization.k8s.io/v1
-    kind: ClusterRoleBinding
-    metadata:
-      name: view-resource-metrics
-    roleRef:
-      apiGroup: rbac.authorization.k8s.io
-      kind: ClusterRole
-      name: view-resource-metrics
-    subjects:
-    - apiGroup: rbac.authorization.k8s.io
-      kind: User
-      name: system:anonymous
-    {{% /accordion %}}
-{{% accordion id="cluster-role-custom-resources" label="Custom Metrics: ApiGroups custom.metrics.k8s.io" %}}
-
-    ```
-    apiVersion: rbac.authorization.k8s.io/v1
-    kind: ClusterRole
-    metadata:
-      name: view-custom-metrics
-    rules:
-    - apiGroups:
-      - custom.metrics.k8s.io
-      resources:
-      - "*"
-      verbs:
-      - get
-      - list
-      - watch
-    ---
-    apiVersion: rbac.authorization.k8s.io/v1
-    kind: ClusterRoleBinding
-    metadata:
-      name: view-custom-metrics
-    roleRef:
-      apiGroup: rbac.authorization.k8s.io
-      kind: ClusterRole
-      name: view-custom-metrics
-    subjects:
-    - apiGroup: rbac.authorization.k8s.io
-      kind: User
-      name: system:anonymous
-    ```
-{{% /accordion %}}
-1. Create them in your cluster using one of the follow commands, depending on the metrics you're using.
-    ```
-    # kubectl create -f
-    # kubectl create -f
-    ```
+ {{% /accordion %}}
\ No newline at end of file
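For quick reference alongside the page patched above, the operations listed at the top of the `manage-hpa-with-kubectl` document map onto standard `kubectl` commands along these lines. This is a sketch only, not text from the patch; the Deployment name `hello-world` and the 50% CPU target are illustrative.

```
# Create an HPA that keeps average CPU around 50% across 1-10 replicas
kubectl autoscale deployment hello-world --min=1 --max=10 --cpu-percent=50

# Get information on HPAs in the current namespace
kubectl get hpa
kubectl describe hpa hello-world

# Delete an HPA
kubectl delete hpa hello-world
```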
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md
index 64030106794..faf24d95f1b 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/_index.md
@@ -3,11 +3,9 @@ title: Managing HPAs with the Rancher UI
 weight: 3028
 ---
 
-In Rancher v2.3.x+, the Rancher UI supports creating, managing, and deleting HPAs. You can configure CPU or memory usage as the metric that the HPA uses to scale.
+This section applies only to Rancher v2.3.x+, where the Rancher UI supports creating, managing, and deleting HPAs. You can configure CPU or memory usage as the metric that the HPA uses to scale.
 
-For prior versions of Rancher, you can [manage HPAs using kubectl]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/). You also need to use `kubectl` if you want to create HPAs that scale based on other metrics than CPU and memory.
-
-Clusters created in Rancher v2.0.7 and higher have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use an HPA.
+If you want to create HPAs that scale based on metrics other than CPU and memory, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus).
 
 ## Creating an HPA
 
diff --git a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md
index 31a296bab94..70e11daac47 100644
--- a/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md
+++ b/content/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/testing-hpa/_index.md
@@ -1,6 +1,6 @@
 ---
 title: Testing HPAs with kubectl
-weight: 3029
+weight: 3031
 ---
 
 This document describes how to check the status of your HPAs after scaling them up or down with your load testing tool. For information on how to check the status from the Rancher UI (at least version 2.3.x), refer to [Managing HPAs with the Rancher UI]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/).
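Related to the `testing-hpa` page touched above: while a load test runs, the HPA's behavior can be observed with commands along these lines. Again a minimal sketch rather than content from the patch; the HPA and Deployment name `hello-world` is illustrative.

```
# Watch replica counts and current vs. target utilization update live
kubectl get hpa hello-world --watch

# Review recent scaling events and the metrics the HPA last observed
kubectl describe hpa hello-world

# Confirm the Deployment's replica count actually changed
kubectl get deployment hello-world
```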