Mirror of https://github.com/rancher/rancher-docs.git (synced 2026-05-13 16:43:22 +00:00)
Add redirects for links that changed with 2.5 updates
Signed-off-by: Bastian Hofmann <bashofmann@gmail.com>
@@ -4,6 +4,7 @@ shortTitle: Kubernetes Installs
weight: 370
aliases:
- /rancher/v2.x/en/installation/after-installation/ha-backup-and-restoration/
- /rancher/v2.x/en/installation/backups/restores
---

This procedure describes how to use RKE to restore a snapshot of the Rancher Kubernetes cluster.
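As a sketch of this procedure (the snapshot name, config file name, and kubeconfig path below are illustrative placeholders, not values from this commit):

```shell
# Restore the cluster from a previously taken etcd snapshot.
# Assumes rancher-cluster.yml is the config used to create the cluster and
# that the named snapshot exists on the etcd nodes.
rke etcd snapshot-restore --config rancher-cluster.yml --name mysnapshot

# Afterwards, point kubectl at the restored cluster and verify node state.
kubectl --kubeconfig kube_config_rancher-cluster.yml get nodes
```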

@@ -8,6 +8,7 @@ aliases:
- /rancher/v2.x/en/concepts/catalogs/
- /rancher/v2.x/en/tasks/global-configuration/catalog/
- /rancher/v2.x/en/catalog
- /rancher/v2.x/en/catalog/apps
---

Rancher provides the ability to use a catalog of Helm charts that make it easy to repeatedly deploy applications.

@@ -4,6 +4,7 @@ weight: 400
aliases:
- /rancher/v2.x/en/tasks/global-configuration/catalog/customizing-charts/
- /rancher/v2.x/en/catalog/custom/creating
- /rancher/v2.x/en/catalog/custom
- /rancher/v2.x/en/catalog/creating-apps
---

@@ -2,6 +2,10 @@
title: Install Rancher on a Kubernetes Cluster
description: Learn how to install Rancher in development and production environments. Read about single node and high availability installation
weight: 3
aliases:
- /rancher/v2.x/en/installation/k8s-install
- /rancher/v2.x/en/installation/k8s-install/helm-rancher
- /rancher/v2.x/en/installation/install-rancher-on-k8s/install
---

> **Prerequisite:**

@@ -274,4 +278,4 @@ Doesn't work? Take a look at the [Troubleshooting]({{<baseurl>}}/rancher/v2.x/en

### Optional Next Steps

Enable the Enterprise Cluster Manager.

@@ -10,8 +10,6 @@ aliases:

This section describes how to deploy Rancher in an air gapped environment, which could be one where the Rancher server will be installed offline, behind a firewall, or behind a proxy. There are _tabs_ for either a high availability (recommended) or a Docker installation.

> **Note:** These installation instructions assume you are using Helm 3. For migration of installs started with Helm 2, refer to the official [Helm 2 to 3 migration docs.](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) This [section]({{<baseurl>}}/rancher/v2.x/en/installation/options/air-gap-helm2) provides a copy of the older air gap installation instructions for Rancher installed on Kubernetes with Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible.

### Privileged Access for Rancher v2.5+

When the Rancher server is deployed in the Docker container, a local Kubernetes cluster is installed within the container for Rancher to use. Because many features of Rancher run as deployments, and privileged mode is required to run containers within containers, you will need to install Rancher with the `--privileged` option.
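For example (a sketch; the image tag and host port mappings are assumptions — adjust them for your environment):

```shell
# Run the Rancher server container in privileged mode, which the embedded
# local Kubernetes cluster requires in Rancher v2.5+.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest
```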

@@ -1,6 +1,8 @@
---
title: Resources
weight: 4
aliases:
- /rancher/v2.x/en/installation/options
---

### Docker Installations

@@ -1,6 +1,8 @@
---
title: Enabling the API Audit Log to Record System Events
weight: 4
aliases:
- /rancher/v2.x/en/installation/options/api-audit-log
---

You can enable the API audit log to record the sequence of system events initiated by individual users. The log shows what happened, when it happened, who initiated it, and which cluster was affected. When you enable this feature, all requests to the Rancher API and all responses from it are written to a log.
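For a Rancher server installed with Helm, this is a sketch of enabling the audit log at upgrade time; `auditLog.level` is taken from Rancher's Helm chart options, so verify it against the chart version you run:

```shell
# Turn on the API audit log via the Rancher Helm chart.
# auditLog.level ranges from 0 (disabled) to 3 (most verbose).
helm upgrade rancher rancher-latest/rancher \
  --namespace cattle-system \
  --reuse-values \
  --set auditLog.level=1
```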

@@ -1,6 +1,8 @@
---
title: Running on ARM64 (Experimental)
weight: 3
aliases:
- /rancher/v2.x/en/installation/options/arm64-platform
---

> **Important:**

@@ -1,6 +1,8 @@
---
title: Tuning etcd for Large Installations
weight: 2
aliases:
- /rancher/v2.x/en/installation/options/etcd
---

When running larger Rancher installations with 15 or more clusters, it is recommended to increase the etcd keyspace size from the default 2 GB. The maximum setting is 8 GB, and the host should have enough RAM to keep the entire dataset in memory. When increasing this value, you should also increase the size of the host. The keyspace size can also be adjusted in smaller installations if you anticipate a high rate of change of pods during the garbage collection interval.
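For an RKE-managed cluster, the keyspace can be raised through etcd's `quota-backend-bytes` flag. A minimal sketch of the snippet to merge into your existing cluster configuration (the file name and the 6 GB value are illustrative):

```shell
# Print the services snippet to merge into rancher-cluster.yml.
cat <<'EOF'
services:
  etcd:
    extra_args:
      # etcd keyspace limit in bytes; 6 GB here, 8 GB is the maximum
      quota-backend-bytes: 6442450944
EOF
# After updating the file, apply the change:
# rke up --config rancher-cluster.yml
```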

@@ -1,6 +1,8 @@
---
title: Helm Chart Options
weight: 2
aliases:
- /rancher/v2.x/en/installation/options/chart-options
---

- [Common Options](#common-options)

@@ -1,6 +1,8 @@
---
title: About Custom CA Root Certificates
weight: 1
aliases:
- /rancher/v2.x/en/installation/options/custom-ca-root-certificate
---

If you're using Rancher in an internal production environment where you aren't exposing apps publicly, use a certificate from a private certificate authority (CA).

@@ -1,6 +1,8 @@
---
title: Adding TLS Secrets
weight: 2
aliases:
- /rancher/v2.x/en/installation/options/tls-secrets
---

Kubernetes will create all the objects and services for Rancher, but it will not become available until you populate the `tls-rancher-ingress` secret in the `cattle-system` namespace with the certificate and key.
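The secret can be created from PEM-encoded files with `kubectl` (the file names here are illustrative):

```shell
# Create the TLS secret Rancher's ingress expects, from your
# certificate and private key files.
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```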

@@ -1,6 +1,8 @@
---
title: TLS Settings
weight: 3
aliases:
- /rancher/v2.x/en/installation/options/tls-settings
---

In Rancher v2.1.7, the default TLS configuration changed to only accept TLS 1.2 and secure TLS cipher suites. TLS 1.3 and TLS 1.3 exclusive cipher suites are not supported.

@@ -1,6 +1,9 @@
---
title: Upgrading Cert-Manager
weight: 4
aliases:
- /rancher/v2.x/en/installation/options/upgrading-cert-manager
- /rancher/v2.x/en/installation/options/upgrading-cert-manager/helm-2-instructions
---

Rancher uses cert-manager to automatically generate and renew TLS certificates for HA deployments of Rancher. As of Fall 2019, three important changes to cert-manager are set to occur that you need to take action on if you have an HA deployment of Rancher:
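A sketch of a cert-manager upgrade with Helm 3; the `jetstack` repository and chart name follow cert-manager's own conventions, but check which chart version is compatible with your Rancher release before upgrading:

```shell
# Add/refresh the cert-manager chart repository.
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Upgrade (or install) cert-manager in its own namespace,
# installing the CRDs along with the chart.
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
```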

@@ -1,6 +1,11 @@
---
title: Helm Version Requirements
weight: 3
aliases:
- /rancher/v2.x/en/installation/options/helm-version
- /rancher/v2.x/en/installation/options/helm2
- /rancher/v2.x/en/installation/options/helm2/helm-init
- /rancher/v2.x/en/installation/options/helm2/helm-rancher
---

This section contains the requirements for Helm, which is the tool used to install Rancher on a high-availability Kubernetes cluster.
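You can check which Helm you have before installing:

```shell
# Print the installed Helm client version; Helm 3 reports a v3.x.y version
# and no longer uses a Tiller server component.
helm version --short
```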

@@ -2,6 +2,8 @@
title: Setting up a High-availability RKE Kubernetes Cluster
shortTitle: Set up RKE Kubernetes
weight: 3
aliases:
- /rancher/v2.x/en/installation/k8s-install/kubernetes-rke
---

@@ -168,4 +170,4 @@ Save a copy of the following files in a secure location:

See the [Troubleshooting]({{<baseurl>}}/rancher/v2.x/en/installation/options/troubleshooting/) page.

### [Next: Install Rancher]({{<baseurl>}}/rancher/v2.x/en/installation/k8s-install/helm-rancher/)

@@ -1,6 +1,8 @@
---
title: Setting up Nodes in Amazon EC2
weight: 3
aliases:
- /rancher/v2.x/en/installation/options/ec2-node
---

In this tutorial, you will learn one way to set up Linux nodes for the Rancher management server. These nodes will fulfill the node requirements for [OS, Docker, hardware, and networking.]({{<baseurl>}}/rancher/v2.x/en/installation/requirements/)

@@ -61,4 +63,4 @@ curl https://releases.rancher.com/install-docker/18.09.sh | sh

If you are going to install an RKE cluster on the new nodes, take note of the **IPv4 Public IP** and **Private IP** of each node. This information can be found on the **Description** tab for each node after it is created. The public and private IP will be used to populate the `address` and `internal_address` of each node in the RKE cluster configuration file, `rancher-cluster.yml`.

RKE will also need access to the private key to connect to each node. Therefore, you might want to take note of the path to your private keys to connect to the nodes, which can also be included in the `rancher-cluster.yml` under the `ssh_key_path` directive for each node.
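Putting those pieces together, a sketch of one node entry in `rancher-cluster.yml` (the IPs, user, roles, and key path are placeholders to replace with the values noted above):

```shell
# Print an example node entry for the RKE cluster configuration.
cat <<'EOF'
nodes:
  - address: 203.0.113.10        # IPv4 Public IP
    internal_address: 10.0.0.10  # Private IP
    user: ubuntu
    role: [controlplane, worker, etcd]
    ssh_key_path: ~/.ssh/id_rsa
EOF
```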

@@ -1,6 +1,8 @@
---
title: Setting up an NGINX Load Balancer
weight: 4
aliases:
- /rancher/v2.x/en/installation/options/nginx
---

NGINX will be configured as a Layer 4 load balancer (TCP) that forwards connections to one of your Rancher nodes.
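A minimal sketch of such a configuration using NGINX's `stream` module (the upstream IPs are placeholders for your Rancher nodes; NGINX must be built with stream support):

```shell
# Print an example Layer 4 (TCP) nginx.conf forwarding 443 to the nodes.
cat <<'EOF'
worker_processes 4;
events {
    worker_connections 8192;
}
stream {
    upstream rancher_servers_https {
        least_conn;
        server 203.0.113.11:443 max_fails=3 fail_timeout=5s;
        server 203.0.113.12:443 max_fails=3 fail_timeout=5s;
    }
    server {
        listen 443;
        proxy_pass rancher_servers_https;
    }
}
EOF
```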

@@ -4,6 +4,7 @@ weight: 5
aliases:
- /rancher/v2.x/en/installation/ha/create-nodes-lb/nlb
- /rancher/v2.x/en/installation/k8s-install/create-nodes-lb/nlb
- /rancher/v2.x/en/installation/options/nlb
---

This how-to guide describes how to set up a Network Load Balancer (NLB) in Amazon's EC2 service that will direct traffic to multiple instances on EC2.

@@ -1,6 +1,8 @@
---
title: Setting up a MySQL Database in Amazon RDS
weight: 4
aliases:
- /rancher/v2.x/en/installation/options/rds
---

This tutorial describes how to set up a MySQL database in Amazon's RDS.

@@ -31,4 +33,4 @@ This information will be used to connect to the database in the following format

mysql://username:password@tcp(hostname:3306)/database-name
```

For more information on configuring the datastore for K3s, refer to the [K3s documentation.]({{<baseurl>}}/k3s/latest/en/installation/datastore/)
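A sketch of assembling that connection string from the RDS values and passing it to K3s (the credential values are placeholders; `--datastore-endpoint` is the K3s server flag for an external datastore):

```shell
# Build the datastore endpoint from the RDS connection details.
DB_USER=username
DB_PASS=password
DB_HOST=hostname
DB_NAME=database-name
DATASTORE="mysql://${DB_USER}:${DB_PASS}@tcp(${DB_HOST}:3306)/${DB_NAME}"
echo "$DATASTORE"

# On each K3s server node:
# k3s server --datastore-endpoint="$DATASTORE"
```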

@@ -4,6 +4,8 @@ weight: 1020
aliases:
- /rancher/v2.x/en/upgrades/upgrades/ha-server-upgrade-helm-airgap
- /rancher/v2.x/en/upgrades/air-gap-upgrade/
- /rancher/v2.x/en/upgrades/upgrades/ha
- /rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/upgrades/ha
---

The following instructions will guide you through using Helm to upgrade a Rancher server that was installed on a Kubernetes cluster.
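In outline, the upgrade flow looks like this (the release name, namespace, and repo name reflect a common Rancher install; confirm yours with `helm list`):

```shell
# Refresh the chart repositories to pick up the new Rancher chart version.
helm repo update

# Review the values the existing release was installed with.
helm get values rancher -n cattle-system

# Upgrade, carrying the existing values forward.
helm upgrade rancher rancher-latest/rancher \
  --namespace cattle-system \
  --reuse-values
```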

@@ -1,6 +1,9 @@
---
title: Upgrading Rancher Installed on Kubernetes with Helm 2
weight: 1050
aliases:
- /rancher/v2.x/en/upgrades/upgrades/ha/helm2
- /rancher/v2.x/en/upgrades/helm2
---

> Helm 3 has been released. If you are using Helm 2, we recommend [migrating to Helm 3](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) because it is simpler to use and more secure than Helm 2.

@@ -1,6 +1,8 @@
---
title: Istio
weight: 15
aliases:
- /rancher/v2.x/en/dashboard/istio
---

# Istio in Cluster Manager

@@ -90,4 +92,4 @@ By default, each Rancher-provisioned cluster has one NGINX ingress controller al

### Egress Support

By default, the Egress gateway is disabled, but it can be enabled on install or upgrade through the values.yaml or via the [overlay file]({{<baseurl>}}/rancher/v2.x/en/istio/setup/enable-istio-in-cluster/#overlay-file).

@@ -1,6 +1,8 @@
---
title: Kubernetes Resources
weight: 10
aliases:
- /rancher/v2.x/en/k8s-in-rancher
---

@@ -8,4 +10,4 @@ weight: 10

_Available as of v2.5_

The cluster explorer is a new feature in Rancher v2.5 that allows you to view and manipulate all of the custom resources and CRDs in a Kubernetes cluster from the Rancher UI.

@@ -4,7 +4,7 @@ weight: 19
aliases:
- /rancher/v2.x/en/concepts/
- /rancher/v2.x/en/tasks/
- /rancher/v2.x/en/concepts/resources/
---

When your project is set up, [project members]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can start managing their applications and all the components that comprise them.

@@ -4,6 +4,7 @@ description: Learn how to add an SSL (Secure Sockets Layer) certificate or TLS (
weight: 3060
aliases:
- /rancher/v2.x/en/tasks/projects/add-ssl-certificates/
- /rancher/v2.x/en/k8s-in-rancher/certificates
---

When you create an ingress within Rancher/Kubernetes, you must provide it with a secret that includes a TLS private key and certificate, which are used to encrypt and decrypt communications that come through the ingress. You can make certificates available for ingress use by navigating to its project or namespace, and then uploading the certificate. You can then add the certificate to the ingress deployment.

@@ -3,6 +3,7 @@ title: ConfigMaps
weight: 3061
aliases:
- /rancher/v2.x/en/tasks/projects/add-configmaps
- /rancher/v2.x/en/k8s-in-rancher/configmaps
---

@@ -2,6 +2,8 @@
title: The Horizontal Pod Autoscaler
description: Learn about the horizontal pod autoscaler (HPA). How to manage HPAs and how to test them with a service deployment
weight: 3026
aliases:
- /rancher/v2.x/en/k8s-in-rancher/horizontal-pod-autoscaler
---

The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) is a Kubernetes feature that allows you to configure your cluster to automatically scale the services it's running up or down.

@@ -1,6 +1,8 @@
---
title: Background Information on HPAs
weight: 3027
aliases:
- /rancher/v2.x/en/k8s-in-rancher/horizontal-pod-autoscaler/hpa-background
---

The [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) is a Kubernetes feature that allows you to configure your cluster to automatically scale the services it's running up or down. This section explains how HPA works with Kubernetes.

@@ -37,4 +39,4 @@ For full documentation on HPA, refer to the [Kubernetes Documentation](https://k

HPA is an API resource in the Kubernetes `autoscaling` API group. The current stable version is `autoscaling/v1`, which only includes support for CPU autoscaling. To get additional support for scaling based on memory and custom metrics, use the beta version instead: `autoscaling/v2beta1`.

For more information about the HPA API object, see the [HPA GitHub Readme](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
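A sketch of an `autoscaling/v2beta1` HPA that scales on both CPU and memory (the deployment name, replica bounds, and targets are placeholders):

```shell
# Print an example HPA manifest using the v2beta1 API.
cat <<'EOF'
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-world
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-world
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 100Mi
EOF
# Save the manifest to a file and apply it with:
# kubectl apply -f <file>
```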

@@ -1,6 +1,8 @@
---
title: Manual HPA Installation for Clusters Created Before Rancher v2.0.7
weight: 3050
aliases:
- /rancher/v2.x/en/k8s-in-rancher/horizontal-pod-autoscaler/hpa-for-rancher-before-2_0_7
---

This section describes how to manually install HPAs for clusters created with Rancher prior to v2.0.7. It also describes how to configure your HPA to scale up or down, and how to assign roles to your HPA.

@@ -1,6 +1,8 @@
---
title: Managing HPAs with kubectl
weight: 3029
aliases:
- /rancher/v2.x/en/k8s-in-rancher/horizontal-pod-autoscaler/manage-hpa-with-kubectl
---

This section describes HPA management with `kubectl`. This document has instructions for how to:
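For instance, a CPU-based HPA can be created and inspected entirely from the command line (the deployment name and thresholds below are placeholders):

```shell
# Create an HPA for an existing deployment, scaling between 1 and 5
# replicas to hold average CPU utilization near 50%.
kubectl autoscale deployment hello-world --min=1 --max=5 --cpu-percent=50

# Inspect current metrics, targets, and replica counts.
kubectl get hpa
kubectl describe hpa hello-world
```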

@@ -197,4 +199,4 @@ For HPA to use custom metrics from Prometheus, package [k8s-prometheus-adapter](

If the API is accessible, you should receive output that's similar to what follows.
{{% accordion id="custom-metrics-api-response-rancher" label="API Response" %}}
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"custom.metrics.k8s.io/v1beta1","resources":[{"name":"pods/fs_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_rss","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_period","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_read","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_user","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/last_seen","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/tasks_state","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_quota","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/start_time_seconds","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_write","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_cache","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_periods","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_cfs_throttled_periods","singularName":"","namespaced":true,"kind
":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_working_set_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_udp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes_free","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_inodes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_time_weighted","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failures","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_swap","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_cpu_shares","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_swap_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_io_current","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/memory_failcnt","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_writes_merged","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/network_tcp_usage","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"n
ame":"pods/memory_max_usage_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/spec_memory_reservation_limit_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_load_average_10s","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/cpu_system","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_reads_bytes","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]},{"name":"pods/fs_sector_reads","singularName":"","namespaced":true,"kind":"MetricValueList","verbs":["get"]}]}
{{% /accordion %}}

@@ -1,6 +1,8 @@
---
title: Managing HPAs with the Rancher UI
weight: 3028
aliases:
- /rancher/v2.x/en/k8s-in-rancher/horizontal-pod-autoscaler/manage-hpa-with-rancher-ui
---

_Available as of v2.3.0_

@@ -52,4 +54,4 @@ If you want to create HPAs that scale based on other metrics than CPU and memory

1. Click **Delete** to confirm.

> **Result:** The HPA is deleted from the current cluster.

@@ -1,6 +1,9 @@
---
title: Testing HPAs with kubectl
weight: 3031
aliases:
- /rancher/v2.x/en/k8s-in-rancher/horizontal-pod-autoscaler/testing-hpa
---

This document describes how to check the status of your HPAs after scaling them up or down with your load testing tool. For information on how to check the status from the Rancher UI (at least version 2.3.x), refer to [Managing HPAs with the Rancher UI]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horizontal-pod-autoscaler/manage-hpa-with-rancher-ui/).

@@ -488,4 +491,4 @@ Use your load testing tool to scale down to one pod when all metrics below targe

NAME                           READY   STATUS    RESTARTS   AGE
hello-world-54764dfbf8-q6l82   1/1     Running   0          6h
```
{{% /accordion %}}

@@ -2,6 +2,8 @@
title: Set Up Load Balancer and Ingress Controller within Rancher
description: Learn how you can set up load balancers and ingress controllers to redirect service requests within Rancher, and learn about the limitations of load balancers
weight: 3040
aliases:
- /rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress
---

Within Rancher, you can set up load balancers and ingress controllers to redirect service requests.

@@ -4,6 +4,7 @@ description: Ingresses can be added for workloads to provide load balancing, SSL
weight: 3042
aliases:
- /rancher/v2.x/en/tasks/workloads/add-ingress/
- /rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/ingress
---

Ingress can be added for workloads to provide load balancing, SSL termination, and host/path-based routing. When using ingresses in a project, you can program the ingress hostname to an external DNS by setting up a [Global DNS entry]({{<baseurl>}}/rancher/v2.x/en/catalog/globaldns/).

@@ -4,6 +4,7 @@ description: "Kubernetes supports load balancing in two ways: Layer-4 Load Balan
weight: 3041
aliases:
- /rancher/v2.x/en/concepts/load-balancing/
- /rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers
---

Kubernetes supports load balancing in two ways: Layer-4 Load Balancing and Layer-7 Load Balancing.

@@ -4,6 +4,7 @@ description: Learn about the Docker registry and Kubernetes registry, their use
weight: 3063
aliases:
- /rancher/v2.x/en/tasks/projects/add-registries/
- /rancher/v2.x/en/k8s-in-rancher/registries
---

Registries are Kubernetes secrets containing credentials used to authenticate with [private Docker registries](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).

@@ -3,6 +3,7 @@ title: Secrets
weight: 3062
aliases:
- /rancher/v2.x/en/tasks/projects/add-a-secret
- /rancher/v2.x/en/k8s-in-rancher/secrets
---

[Secrets](https://kubernetes.io/docs/concepts/configuration/secret/#overview-of-secrets) store sensitive data like passwords, tokens, or keys. They may contain one or more key-value pairs.

@@ -3,6 +3,7 @@ title: Service Discovery
weight: 3045
aliases:
- /rancher/v2.x/en/tasks/workloads/add-a-dns-record/
- /rancher/v2.x/en/k8s-in-rancher/service-discovery
---

For every workload created, a complementing Service Discovery entry is created. This Service Discovery entry enables DNS resolution for the workload's pods using the following naming convention:

@@ -5,6 +5,7 @@ weight: 3025
aliases:
- /rancher/v2.x/en/concepts/workloads/
- /rancher/v2.x/en/tasks/workloads/
- /rancher/v2.x/en/k8s-in-rancher/workloads
---

You can build any complex containerized application in Kubernetes using two basic constructs: pods and workloads. Once you build an application, you can expose it for access either within the same cluster or on the Internet using a third construct: services.

@@ -3,6 +3,7 @@ title: Adding a Sidecar
weight: 3029
aliases:
- /rancher/v2.x/en/tasks/workloads/add-a-sidecar/
- /rancher/v2.x/en/k8s-in-rancher/workloads/add-a-sidecar
---

A _sidecar_ is a container that extends or enhances the main container in a pod. The main container and the sidecar share a pod, and therefore share the same network space and storage. You can add sidecars to existing workloads by using the **Add a Sidecar** option.

@@ -4,6 +4,7 @@ description: Read this step by step guide for deploying workloads. Deploy a work
weight: 3026
aliases:
- /rancher/v2.x/en/tasks/workloads/deploy-workloads/
- /rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads
---

Deploy a workload to run an application in one or more containers.

@@ -3,6 +3,7 @@ title: Rolling Back Workloads
weight: 3027
aliases:
- /rancher/v2.x/en/tasks/workloads/rollback-workloads/
- /rancher/v2.x/en/k8s-in-rancher/workloads/rollback-workloads
---

Sometimes there is a need to roll back to a previous version of the application, either for debugging purposes or because an upgrade did not go as planned.

@@ -3,6 +3,7 @@ title: Upgrading Workloads
weight: 3028
aliases:
- /rancher/v2.x/en/tasks/workloads/upgrade-workloads/
- /rancher/v2.x/en/k8s-in-rancher/workloads/upgrade-workloads
---

When a new version of an application image is released on Docker Hub, you can upgrade any workloads running a previous version of the application to the new one.

@@ -4,6 +4,8 @@ shortTitle: Logging
description: Rancher integrates with popular logging services. Learn the requirements and benefits of integrating with logging services, and enable logging on your cluster.
metaDescription: "Rancher integrates with popular logging services. Learn the requirements and benefits of integrating with logging services, and enable logging on your cluster."
weight: 16
aliases:
- /rancher/v2.x/en/dashboard/logging
---

- [Changes in Rancher v2.5](#changes-in-rancher-v2-5)

@@ -4,6 +4,7 @@ weight: 300
aliases:
- /rancher/v2.x/en/tasks/logging/splunk/
- /rancher/v2.x/en/tools/logging/splunk/
- /rancher/v2.x/en/cluster-admin/tools/logging/splunk
---

If your organization uses [Splunk](https://www.splunk.com/), you can configure Rancher to send it Kubernetes logs. Afterwards, you can log into your Splunk server to view logs.

@@ -3,6 +3,7 @@ title: Syslog
weight: 500
aliases:
- /rancher/v2.x/en/tools/logging/syslog/
- /rancher/v2.x/en/cluster-admin/tools/logging/syslog
---

If your organization uses [Syslog](https://tools.ietf.org/html/rfc5424), you can configure Rancher to send it Kubernetes logs. Afterwards, you can log into your Syslog server to view logs.

@@ -3,6 +3,9 @@ title: Monitoring and Alerting
shortTitle: Monitoring/Alerting
description: Prometheus lets you view metrics from your different Rancher and Kubernetes objects. Learn about the scope of monitoring and how to enable cluster monitoring
weight: 14
aliases:
- /rancher/v2.x/en/dashboard/monitoring-alerting
- /rancher/v2.x/en/dashboard/notifiers
---

Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with [Prometheus](https://prometheus.io/), a leading open-source monitoring solution.

@@ -95,4 +98,4 @@ You can add this configuration to the ConfigMap using the Rancher UI.

### Configuring Grafana to Use Multiple Data Sources

The data from Prometheus is used as the data source for the Grafana dashboard. Multiple data sources can be configured for Grafana.

@@ -7,8 +7,7 @@ aliases:

When you create a cluster, some alert rules are predefined. These alerts notify you about signs that the cluster could be unhealthy. You can receive these alerts if you configure a [notifier]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/notifiers) for them.

Several of the alerts use Prometheus expressions as the metric that triggers the alert. For more information on how expressions work, you can refer to the Rancher [documentation about Prometheus expressions]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/) or the Prometheus [documentation about querying metrics](https://prometheus.io/docs/prometheus/latest/querying/basics/).

# Alerts for etcd

Etcd is the key-value store that contains the state of the Kubernetes cluster. Rancher provides default alerts if the built-in monitoring detects a potential problem with etcd. You don't have to enable monitoring to receive these alerts.

@@ -56,4 +55,4 @@ Alerts can be triggered based on node metrics. Each computing resource in a Kube

| Node disk is running full within 24 hours | A critical alert is triggered if the disk space on the node is expected to run out in the next 24 hours based on the disk growth over the last 6 hours. |
|
||||
|
||||
# Project-level Alerts
|
||||
When you enable monitoring for the project, some project-level alerts are provided. For details, refer to the [section on project-level alerts.]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/alerts/#default-project-level-alerts)
|
||||
When you enable monitoring for the project, some project-level alerts are provided. For details, refer to the [section on project-level alerts.]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/alerts/#default-project-level-alerts)
|
||||
|
||||
+1
@@ -3,6 +3,7 @@ title: Cluster Metrics
weight: 3
aliases:
- rancher/v2.x/en/project-admin/tools/monitoring/cluster-metrics
- rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics
---

_Available as of v2.2.0_

+1
@@ -3,6 +3,7 @@ title: Prometheus Custom Metrics Adapter
weight: 5
aliases:
- rancher/v2.x/en/project-admin/tools/monitoring/custom-metrics
- rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics
---

After you've enabled [cluster level monitoring]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring), you can view the metrics data from Rancher. You can also deploy the Prometheus custom metrics adapter, which lets you use the HPA with metrics stored in cluster monitoring.

+1
@@ -3,6 +3,7 @@ title: Prometheus Expressions
weight: 4
aliases:
- rancher/v2.x/en/project-admin/tools/monitoring/expression
- rancher/v2.x/en/cluster-admin/tools/monitoring/expression
---

The PromQL expressions in this doc can be used to configure [alerts.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/alerts/)

+1
@@ -3,6 +3,7 @@ title: Prometheus Configuration
weight: 1
aliases:
- rancher/v2.x/en/project-admin/tools/monitoring/prometheus
- rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus
---

_Available as of v2.2.0_

+1
@@ -3,6 +3,7 @@ title: Viewing Metrics
weight: 2
aliases:
- rancher/v2.x/en/project-admin/tools/monitoring/viewing-metrics
- rancher/v2.x/en/cluster-admin/tools/monitoring/viewing-metrics
---

_Available as of v2.2.0_

@@ -3,6 +3,7 @@ title: Notifiers
weight: 1
aliases:
- rancher/v2.x/en/project-admin/tools/notifiers
- rancher/v2.x/en/cluster-admin/tools/notifiers
---

> In Rancher 2.5, the notifier application was improved. There are now two ways to enable notifiers. The older way is documented in this section, and the new application for notifiers is documented in the [dashboard section.]({{<baseurl>}}/rancher/v2.x/en/dashboard/notifiers)

@@ -1,6 +1,8 @@
---
title: Pipelines
weight: 11
aliases:
- /rancher/v2.x/en/k8s-in-rancher/pipelines
---

Rancher's pipeline provides a simple CI/CD experience. Use it to automatically checkout code, run builds or scripts, publish Docker images or catalog applications, and deploy the updated software to users.
@@ -271,4 +273,4 @@ Available Events:

1. Select which event triggers (**Push**, **Pull Request** or **Tag**) you want for the repository.

1. Click **Save**.

@@ -1,6 +1,8 @@
---
title: Concepts
weight: 1
aliases:
- /rancher/v2.x/en/k8s-in-rancher/pipelines/concepts
---

The purpose of this page is to explain common concepts and terminology related to pipelines.
@@ -33,4 +35,4 @@ Typically, pipeline stages include:

- **Deploy:**

After the artifacts are published, you would release your application so users could start using the updated product.

@@ -1,6 +1,8 @@
---
title: Pipeline Configuration Reference
weight: 1
aliases:
- /rancher/v2.x/en/k8s-in-rancher/pipelines/config
---

In this section, you'll learn how to configure pipelines.
@@ -655,4 +657,4 @@ For details on setting up persistent storage for pipelines, refer to [this page.

# Example rancher-pipeline.yml

An example pipeline configuration file is on [this page.]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/example)

@@ -4,6 +4,7 @@ weight: 9000
aliases:
- /rancher/v2.x/en/project-admin/tools/pipelines/docs-for-v2.0.x
- /rancher/v2.x/en/project-admin/pipelines/docs-for-v2.0.x
- /rancher/v2.x/en/k8s-in-rancher/pipelines/docs-for-v2.0.x
---

>**Note:** This section describes the pipeline feature as implemented in Rancher v2.0.x. If you are using Rancher v2.1 or later, where pipelines have been significantly improved, please refer to the new documentation for [v2.1 or later]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/).

@@ -3,6 +3,7 @@ title: Example Repositories
weight: 500
aliases:
- /rancher/v2.x/en/tools/pipelines/quick-start-guide/
- /rancher/v2.x/en/k8s-in-rancher/pipelines/example-repos
---

Rancher ships with several example repositories that you can use to familiarize yourself with pipelines. We recommend configuring and testing the example repository that most resembles your environment before using pipelines with your own repositories in a production environment. Use this example repository as a sandbox for repo configuration, build demonstration, etc. Rancher includes example repositories for:
@@ -73,4 +74,4 @@ After enabling an example repository, run the pipeline to see how it works.

### What's Next?

For detailed information about setting up your own pipeline for your repository, [configure a version control provider]({{<baseurl>}}/rancher/v2.x/en/project-admin/pipelines), [enable a repository](#configure-repositories) and finally [configure your pipeline]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/#pipeline-configuration).

@@ -3,6 +3,7 @@ title: Example YAML File
weight: 501
aliases:
- /rancher/v2.x/en/tools/pipelines/reference/
- /rancher/v2.x/en/k8s-in-rancher/pipelines/example
---

Pipelines can be configured either through the UI or using a yaml file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`.

@@ -1,6 +1,8 @@
---
title: Configuring Persistent Data for Pipeline Components
weight: 600
aliases:
- /rancher/v2.x/en/k8s-in-rancher/pipelines/storage
---

The internal [Docker registry](#how-pipelines-work) and the [Minio](#how-pipelines-work) workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.

@@ -59,4 +59,4 @@ You will need to specify this hostname in a later step when you install Rancher,

For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)

### [Next: Set up a Kubernetes Cluster]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/ka-rke/)
### [Next: Set up a Kubernetes Cluster]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/ha-rke/)
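
Every hunk in this commit follows the same pattern: old URLs are appended to the `aliases` list in a page's front matter, and Hugo then emits a redirect stub at each alias path. A minimal sketch of what one of the touched files looks like after the change (the alias values here are illustrative, not from a specific file in this commit):

```yaml
---
title: Notifiers
weight: 1
aliases:
# Old URLs that should resolve to this page. For each entry, Hugo
# generates a small HTML page containing a meta-refresh redirect
# to the page's canonical URL.
- /rancher/v2.x/en/project-admin/tools/notifiers
- /rancher/v2.x/en/cluster-admin/tools/notifiers
---
```

Because the redirect pages are generated at build time, old bookmarks and inbound links keep working without any web-server rewrite rules.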