mirror of
https://github.com/rancher/rancher-docs.git
synced 2026-05-16 18:13:17 +00:00
Rearrange HPA docs and clarify versions
@@ -3,75 +3,28 @@ title: Horizontal Pod Autoscaler
weight: 3026
---

The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) is a Kubernetes feature that allows you to configure your cluster to automatically scale the services it's running up or down.

Rancher provides some additional features to help manage HPAs, depending on the version of Rancher.
For more information on how HPA works and why it is used, refer to [Background Information on HPAs]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-background).

## Managing HPAs

The way that you manage HPAs differs based on your version of the Kubernetes API:

- **For Kubernetes API version `autoscaling/v2beta1`:** This version of the Kubernetes API lets you autoscale your pods based on the CPU and memory utilization of your application.
- **For Kubernetes API version `autoscaling/v2beta2`:** This version of the Kubernetes API lets you autoscale your pods based on CPU and memory utilization, in addition to custom metrics.

HPAs are also managed differently based on your version of Rancher:

- **For Rancher prior to v2.3.0-alpha:** To manage and configure HPAs, you need to use `kubectl`. For instructions on how to create, manage, and scale HPAs, refer to [Managing HPAs with kubectl]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl).
- **For Rancher v2.3.0-alpha+:** You can create, manage, and delete HPAs using the Rancher UI, and you can configure them to scale based on CPU and memory utilization. For more information, refer to [Managing HPAs with the Rancher UI]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). To scale the HPA based on custom metrics, you still need to use `kubectl`. For more information, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus).

You might have additional HPA installation steps if you are using an older version of Rancher:

- **For Rancher v2.0.7+:** Clusters created in Rancher v2.0.7 and higher automatically have all the requirements needed (metrics-server and Kubernetes cluster configuration) to use HPA.
- **For Rancher prior to v2.0.7:** Clusters created in Rancher prior to v2.0.7 don't automatically have the requirements needed to use HPA. For instructions on installing HPA for these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7).
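As a sketch of the API difference described above, an `autoscaling/v2beta2` HPA can mix resource metrics with custom metrics in a single `metrics` list. This is only an illustration, not an example from this documentation: the deployment name `myservice` and the custom metric `http_requests` are hypothetical, and the custom metric requires a metrics adapter such as the Prometheus adapter to be installed.

```
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myservice
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # Resource metric: CPU utilization, also expressible in autoscaling/v2beta1
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  # Custom per-pod metric: only available with a custom metrics adapter
  - type: Pods
    pods:
      metric:
        name: http_requests
      target:
        type: AverageValue
        averageValue: "100"
```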
## Testing HPAs with a Service Deployment

@@ -0,0 +1,40 @@
---
title: Background Information on HPAs
weight: 3027
---

The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) is a Kubernetes feature that allows you to configure your cluster to automatically scale the services it's running up or down. This section explains how HPA works with Kubernetes.
## Why Use Horizontal Pod Autoscaler?

Using HPA, you can automatically scale the number of pods within a replication controller, deployment, or replica set up or down. HPA automatically scales the number of pods that are running for maximum efficiency. Factors that affect the number of pods include:

- A minimum and maximum number of pods allowed to run, as defined by the user.
- Observed CPU/memory use, as reported in resource metrics.
- Custom metrics provided by third-party metrics applications such as Prometheus or Datadog.

HPA improves your services by:

- Releasing hardware resources that would otherwise be wasted by an excessive number of pods.
- Increasing or decreasing performance as needed to meet service level agreements.
## How HPA Works



HPA is implemented as a control loop, with a period controlled by the `kube-controller-manager` flags below:

Flag | Default | Description
---|---|---
`--horizontal-pod-autoscaler-sync-period` | `30s` | How often HPA audits resource/custom metrics in a deployment.
`--horizontal-pod-autoscaler-downscale-delay` | `5m0s` | Following completion of a downscale operation, how long HPA must wait before launching another downscale operation.
`--horizontal-pod-autoscaler-upscale-delay` | `3m0s` | Following completion of an upscale operation, how long HPA must wait before launching another upscale operation.
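On each pass of this control loop, the autoscaler derives the desired replica count from the ratio of the observed metric value to the target value, per the upstream Kubernetes documentation:

```
desiredReplicas = ceil( currentReplicas * ( currentMetricValue / desiredMetricValue ) )
```

The result is then clamped to the minimum and maximum pod counts configured on the HPA.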
For full documentation on HPA, refer to the [Kubernetes Documentation](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
## Horizontal Pod Autoscaler API Objects

HPA is an API resource in the Kubernetes `autoscaling` API group. The current stable version is `autoscaling/v1`, which only includes support for CPU autoscaling. To get additional support for scaling based on memory and custom metrics, use the beta version instead: `autoscaling/v2beta1`.

For more information about the HPA API object, see the [HPA GitHub Readme](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
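The shape of the beta API object mentioned above can be sketched as follows. This is an illustration only; the target deployment name `myservice` is a hypothetical example:

```
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myservice
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myservice
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # CPU utilization is also supported by the stable autoscaling/v1 API
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
  # Scaling on memory requires the beta API
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 100Mi
```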
@@ -1,9 +1,9 @@
---
title: Manual HPA Installation for Clusters Created Before Rancher v2.0.7
weight: 3050
---

This section describes how to manually install HPAs for clusters created with Rancher prior to v2.0.7. It also describes how to configure your HPA to scale up or down, and how to assign roles to your HPA.

Before you can use HPA in your Kubernetes cluster, you must fulfill some requirements.
@@ -1,13 +1,23 @@
---
title: Managing HPAs with kubectl
weight: 3029
---

This section describes HPA management with `kubectl`. This document has instructions for how to:

- Create an HPA
- Get information on HPAs
- Delete an HPA
- Configure your HPAs to scale with CPU or memory utilization
- Configure your HPAs to scale using custom metrics, if you use a third-party tool such as Prometheus for metrics

### Note For Rancher v2.3.x

In Rancher v2.3.x, you can create, view, and delete HPAs from the Rancher UI. You can also configure them to scale based on CPU or memory usage from the Rancher UI. For more information, refer to [Managing HPAs with the Rancher UI]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui). For scaling HPAs based on metrics other than CPU or memory, you still need `kubectl`.

### Note For Rancher Prior to v2.0.7

Clusters created with older versions of Rancher don't automatically have all the requirements to create an HPA. To install an HPA on these clusters, refer to [Manual HPA Installation for Clusters Created Before Rancher v2.0.7]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/hpa-for-rancher-before-2_0_7).

# Basic kubectl Command for Managing HPAs
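The basic operations listed above can be sketched with the following commands. This is a minimal sketch that assumes a running cluster and a `kubectl` context pointing at it; the deployment name `hello-world` is a hypothetical example:

```
# Create an HPA that targets 50% CPU utilization, between 1 and 10 pods
kubectl autoscale deployment hello-world --min=1 --max=10 --cpu-percent=50

# Get information on HPAs in the current namespace
kubectl get hpa
kubectl describe hpa hello-world

# Delete an HPA
kubectl delete hpa hello-world
```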
@@ -187,190 +197,4 @@ For HPA to use custom metrics from Prometheus, package [k8s-prometheus-adapter](
If the API is accessible, you should receive output that's similar to what follows.
{{% accordion id="custom-metrics-api-response-rancher" label="API Response" %}}
|
||||
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"pods/fs_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_rss","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_period","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_read","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/last_seen","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/tasks_state","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_quota","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/start_time_seconds","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_write","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_cache","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled_periods","singularName":"","namespaced":true,"kind
":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_working_set_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_udp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes_free","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time_weighted","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failures","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_swap","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_shares","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_swap_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_current","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failcnt","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_tcp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"n
ame":"pods/memory_max_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_reservation_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_load_average_10s","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_system","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]}]}
{{% /accordion %}}
# Manual Installation for Clusters Created Before Rancher v2.0.7

### Requirements

Be sure that your Kubernetes cluster services are running with these flags at minimum:

- kube-api: `requestheader-client-ca-file`
- kubelet: `read-only-port` at 10255
- kube-controller: Optional; only needed if values other than the defaults are required:
  - `horizontal-pod-autoscaler-downscale-delay: "5m0s"`
  - `horizontal-pod-autoscaler-upscale-delay: "3m0s"`
  - `horizontal-pod-autoscaler-sync-period: "30s"`
For an RKE Kubernetes cluster definition, add this snippet in the `services` section. To add it using the Rancher v2.0 UI, open the **Clusters** view and select **Ellipsis (...) > Edit** for the cluster in which you want to use HPA. Then, from **Cluster Options**, click **Edit as YAML** and add the following snippet to the `services` section:
```
services:
  ...
  kube-api:
    extra_args:
      requestheader-client-ca-file: "/etc/kubernetes/ssl/kube-ca.pem"
  kube-controller:
    extra_args:
      horizontal-pod-autoscaler-downscale-delay: "5m0s"
      horizontal-pod-autoscaler-upscale-delay: "1m0s"
      horizontal-pod-autoscaler-sync-period: "30s"
  kubelet:
    extra_args:
      read-only-port: 10255
```

Once the Kubernetes cluster is configured and deployed, you can deploy metrics services.

>**Note:** `kubectl` command samples in the sections that follow were tested in a cluster running Rancher v2.0.6 and Kubernetes v1.10.1.
### Configuring HPA to Scale Using Resource Metrics

To create HPA resources based on resource metrics such as CPU and memory use, you need to deploy the `metrics-server` package in the `kube-system` namespace of your Kubernetes cluster. This deployment allows HPA to consume the `metrics.k8s.io` API.

>**Prerequisite:** You must be running `kubectl` 1.8 or later.
1. Connect to your Kubernetes cluster using `kubectl`.

1. Clone the GitHub `metrics-server` repo:
    ```
    # git clone https://github.com/kubernetes-incubator/metrics-server
    ```

1. Install the `metrics-server` package.
    ```
    # kubectl create -f metrics-server/deploy/1.8+/
    ```
1. Check that `metrics-server` is running properly. Check the service pod and logs in the `kube-system` namespace.

    1. Check the service pod for a status of `running`. Enter the following command:
        ```
        # kubectl get pods -n kube-system
        ```
        Then check for the status of `running`.
        ```
        NAME                                  READY     STATUS    RESTARTS   AGE
        ...
        metrics-server-6fbfb84cdd-t2fk9       1/1       Running   0          8h
        ...
        ```
    1. Check the service logs for service availability. Enter the following command:
        ```
        # kubectl -n kube-system logs metrics-server-6fbfb84cdd-t2fk9
        ```
        Then review the log to confirm that the `metrics-server` package is running.
        {{% accordion id="metrics-server-run-check" label="Metrics Server Log Output" %}}
        I0723 08:09:56.193136 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
        I0723 08:09:56.193574 1 heapster.go:72] Metrics Server version v0.2.1
        I0723 08:09:56.194480 1 configs.go:61] Using Kubernetes client with master "https://10.43.0.1:443" and version
        I0723 08:09:56.194501 1 configs.go:62] Using kubelet port 10255
        I0723 08:09:56.198612 1 heapster.go:128] Starting with Metric Sink
        I0723 08:09:56.780114 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
        I0723 08:09:57.391518 1 heapster.go:101] Starting Heapster API server...
        [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
        [restful] 2018/07/23 08:09:57 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
        I0723 08:09:57.394080 1 serve.go:85] Serving securely on 0.0.0.0:443
        {{% /accordion %}}
1. Check that the metrics API is accessible from `kubectl`.

    - If you are accessing the cluster through Rancher, enter your Server URL in the `kubectl` config in the following format: `https://<RANCHER_URL>/k8s/clusters/<CLUSTER_ID>` (add the suffix `/k8s/clusters/<CLUSTER_ID>` to the API path).
        ```
        # kubectl get --raw /k8s/clusters/<CLUSTER_ID>/apis/metrics.k8s.io/v1beta1
        ```
        If the API is working correctly, you should receive output similar to the output below.
        ```
        {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}
        ```

    - If you are accessing the cluster directly, enter your Server URL in the `kubectl` config in the following format: `https://<K8s_URL>:6443`.
        ```
        # kubectl get --raw /apis/metrics.k8s.io/v1beta1
        ```
        If the API is working correctly, you should receive output similar to the output below.
        ```
        {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}
        ```
### Assigning Additional Required Roles to Your HPA

By default, HPA reads resource and custom metrics as the user `system:anonymous`. Assign `system:anonymous` to the `view-resource-metrics` and `view-custom-metrics` ClusterRoles and ClusterRoleBindings in the manifests below. These roles are used to access metrics.

To do so, follow these steps:

1. Configure `kubectl` to connect to your cluster.

1. Copy the ClusterRole and ClusterRoleBinding manifest for the type of metrics you're using for your HPA.
{{% accordion id="cluster-role-resource-metrics" label="Resource Metrics: ApiGroups resource.metrics.k8s.io" %}}
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: view-resource-metrics
|
||||
rules:
|
||||
- apiGroups:
|
||||
- metrics.k8s.io
|
||||
resources:
|
||||
- pods
|
||||
- nodes
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: view-resource-metrics
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: view-resource-metrics
|
||||
subjects:
|
||||
- apiGroup: rbac.authorization.k8s.io
|
||||
kind: User
|
||||
name: system:anonymous
|
||||
{{% /accordion %}}
|
||||
{{% accordion id="cluster-role-custom-resources" label="Custom Metrics: ApiGroups custom.metrics.k8s.io" %}}
|
||||
|
||||
```
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: view-custom-metrics
|
||||
rules:
|
||||
- apiGroups:
|
||||
- custom.metrics.k8s.io
|
||||
resources:
|
||||
- "*"
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: view-custom-metrics
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: view-custom-metrics
|
||||
subjects:
|
||||
- apiGroup: rbac.authorization.k8s.io
|
||||
kind: User
|
||||
name: system:anonymous
|
||||
```
|
||||
{{% /accordion %}}
|
||||
1. Create them in your cluster using one of the following commands, depending on the metrics you're using.
    ```
    # kubectl create -f <RESOURCE_METRICS_MANIFEST>
    # kubectl create -f <CUSTOM_METRICS_MANIFEST>
    ```
@@ -3,11 +3,9 @@ title: Managing HPAs with the Rancher UI
weight: 3028
---

This section applies only to Rancher v2.3.x+, which supports creating, managing, and deleting HPAs. You can configure CPU or memory usage as the metric that the HPA uses to scale.

If you want to create HPAs that scale based on metrics other than CPU and memory, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus).

## Creating an HPA
@@ -1,6 +1,6 @@
---
title: Testing HPAs with kubectl
weight: 3031
---

This document describes how to check the status of your HPAs after scaling them up or down with your load testing tool. For information on how to check the status from the Rancher UI (at least version 2.3.x), refer to [Managing HPAs with the Rancher UI]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-rancher-ui/).
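The status checks described above can be sketched with the following commands. This is a sketch that assumes a running cluster; the HPA name `hello-world` is a hypothetical example:

```
# Watch current/target metric values and replica count change as load is applied
kubectl get hpa hello-world -w

# Inspect the scaling events recorded for the HPA
kubectl describe hpa hello-world
```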