From 628043a8cd6f59540b63241abf24474de8ef8ac3 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Tue, 6 Oct 2020 15:15:04 -0700 Subject: [PATCH] Fix cluster admin links --- .../rancher/v2.x/en/api/api-tokens/_index.md | 2 + .../rancher/v2.x/en/best-practices/_index.md | 5 +- .../best-practices/deployment-types/_index.md | 2 +- .../en/best-practices/management/_index.md | 2 +- content/rancher/v2.x/en/cli/_index.md | 2 + .../cluster-access/ace/_index.md | 2 +- .../v2.x/en/cluster-admin/tools/_index.md | 29 +- .../tools/istio/release-notes/_index.md | 31 -- .../tools/monitoring/custom-metrics/_index.md | 489 ------------------ .../tools/monitoring/expression/_index.md | 430 --------------- .../imported-clusters/_index.md | 2 +- .../rke-clusters/options/_index.md | 2 +- .../multi-cluster-apps/_index.md | 2 +- .../rancher/v2.x/en/istio/legacy/_index.md | 6 +- .../v2.x/en/istio/release-notes/_index.md | 10 + .../logging/legacy/cluster-logging/_index.md | 2 +- .../v2.x/en/monitoring-alerting/_index.md | 8 - .../legacy/alerts/cluster-alerts/_index.md | 9 +- .../cluster-alerts/default-alerts/_index.md | 2 +- .../monitoring/cluster-monitoring/_index.md | 10 +- .../cluster-metrics/_index.md | 10 +- .../custom-metrics/_index.md | 7 +- .../cluster-monitoring/expression/_index.md | 4 +- .../cluster-monitoring/prometheus/_index.md | 4 +- .../viewing-metrics/_index.md | 6 +- .../monitoring/project-monitoring/_index.md | 20 +- .../legacy/notifiers/_index.md | 3 +- .../rancher/v2.x/en/opa-gatekeper/_index.md | 2 +- content/rancher/v2.x/en/overview/_index.md | 6 +- .../v2.x/en/pipelines/config/_index.md | 4 +- .../v2.x/en/quick-start-guide/cli/_index.md | 2 +- 31 files changed, 82 insertions(+), 1033 deletions(-) delete mode 100644 content/rancher/v2.x/en/cluster-admin/tools/istio/release-notes/_index.md delete mode 100644 content/rancher/v2.x/en/cluster-admin/tools/monitoring/custom-metrics/_index.md delete mode 100644 
content/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/_index.md diff --git a/content/rancher/v2.x/en/api/api-tokens/_index.md b/content/rancher/v2.x/en/api/api-tokens/_index.md index 61e0af22326..46d2c93d662 100644 --- a/content/rancher/v2.x/en/api/api-tokens/_index.md +++ b/content/rancher/v2.x/en/api/api-tokens/_index.md @@ -1,6 +1,8 @@ --- title: API Tokens weight: 1 +aliases: + - /rancher/v2.x/en/cluster-admin/api/api-tokens/ --- By default, some cluster-level API tokens are generated with infinite time-to-live (`ttl=0`). In other words, API tokens with `ttl=0` never expire unless you invalidate them. Tokens are not invalidated by changing a password. diff --git a/content/rancher/v2.x/en/best-practices/_index.md b/content/rancher/v2.x/en/best-practices/_index.md index 0894996d4c8..2f80414d0e8 100644 --- a/content/rancher/v2.x/en/best-practices/_index.md +++ b/content/rancher/v2.x/en/best-practices/_index.md @@ -11,10 +11,7 @@ Use the navigation bar on the left to find the current best practices for managi For more guidance on best practices, you can consult these resources: -- [Rancher Docs]({{}}) - - [Monitoring]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) - - [Backups and Disaster Recovery]({{}}/rancher/v2.x/en/backups/) - - [Security]({{}}/rancher/v2.x/en/security/) +- [Security]({{}}/rancher/v2.x/en/security/) - [Rancher Blog](https://rancher.com/blog/) - [Articles about best practices on the Rancher blog](https://rancher.com/tags/best-practices/) - [101 More Security Best Practices for Kubernetes](https://rancher.com/blog/2019/2019-01-17-101-more-kubernetes-security-best-practices/) diff --git a/content/rancher/v2.x/en/best-practices/deployment-types/_index.md b/content/rancher/v2.x/en/best-practices/deployment-types/_index.md index ff493e7fbf2..d953b9f6393 100644 --- a/content/rancher/v2.x/en/best-practices/deployment-types/_index.md +++ b/content/rancher/v2.x/en/best-practices/deployment-types/_index.md @@ -34,5 +34,5 @@ However, 
metrics-driven capacity planning analysis should be the ultimate guidan Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with Prometheus, a leading open-source monitoring solution, and Grafana, which lets you visualize the metrics from Prometheus. -After you [enable monitoring]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) in the cluster, you can set up [a notification channel]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) and [cluster alerts]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/) to let you know if your cluster is approaching its capacity. You can also use the Prometheus and Grafana monitoring framework to establish a baseline for key metrics as you scale. +After you [enable monitoring]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) in the cluster, you can set up [a notification channel]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) and [cluster alerts]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/) to let you know if your cluster is approaching its capacity. You can also use the Prometheus and Grafana monitoring framework to establish a baseline for key metrics as you scale. diff --git a/content/rancher/v2.x/en/best-practices/management/_index.md b/content/rancher/v2.x/en/best-practices/management/_index.md index 4fd202dc1ec..210edfbdb9c 100644 --- a/content/rancher/v2.x/en/best-practices/management/_index.md +++ b/content/rancher/v2.x/en/best-practices/management/_index.md @@ -78,7 +78,7 @@ Provision 3 or 5 etcd nodes. Etcd requires a quorum to determine a leader by the Provision two or more control plane nodes. Some control plane components, such as the `kube-apiserver`, run in [active-active](https://www.jscape.com/blog/active-active-vs-active-passive-high-availability-cluster) mode and will give you more scalability. 
Other components such as kube-scheduler and kube-controller run in active-passive mode (leader elect) and give you more fault tolerance. ### Monitor Your Cluster -Closely monitor and scale your nodes as needed. You should [enable cluster monitoring]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and use the Prometheus metrics and Grafana visualization options as a starting point. +Closely monitor and scale your nodes as needed. You should [enable cluster monitoring]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) and use the Prometheus metrics and Grafana visualization options as a starting point. # Tips for Security diff --git a/content/rancher/v2.x/en/cli/_index.md b/content/rancher/v2.x/en/cli/_index.md index 69d6f7a1805..7a32f98321f 100644 --- a/content/rancher/v2.x/en/cli/_index.md +++ b/content/rancher/v2.x/en/cli/_index.md @@ -4,6 +4,8 @@ description: The Rancher CLI is a unified tool that you can use to interact with metaTitle: "Using the Rancher Command Line Interface " metaDescription: "The Rancher CLI is a unified tool that you can use to interact with Rancher. With it, you can operate Rancher using a command line interface rather than the GUI" weight: 21 +aliases: + - /rancher/v2.x/en/cluster-admin/cluster-access/cli --- The Rancher CLI (Command Line Interface) is a unified tool that you can use to interact with Rancher. With this tool, you can operate Rancher using a command line rather than the GUI. 
diff --git a/content/rancher/v2.x/en/cluster-admin/cluster-access/ace/_index.md b/content/rancher/v2.x/en/cluster-admin/cluster-access/ace/_index.md index 58167468fa5..c3bdd5bf1e1 100644 --- a/content/rancher/v2.x/en/cluster-admin/cluster-access/ace/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/cluster-access/ace/_index.md @@ -15,7 +15,7 @@ After you download the kubeconfig file, you will be able to use the kubeconfig f _Available as of v2.4.6_ -If admins have [enforced TTL on kubeconfig tokens](../../api/api-tokens/#setting-ttl-on-kubeconfig-tokens), the kubeconfig file requires [rancher cli](../cli) to be present in your PATH. +If admins have [enforced TTL on kubeconfig tokens]({{}}/rancher/v2.x/en/api/api-tokens/#setting-ttl-on-kubeconfig-tokens), the kubeconfig file requires [rancher cli](../cli) to be present in your PATH. ### Two Authentication Methods for RKE Clusters diff --git a/content/rancher/v2.x/en/cluster-admin/tools/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/_index.md index ed8fd982157..d70209cc528 100644 --- a/content/rancher/v2.x/en/cluster-admin/tools/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/tools/_index.md @@ -17,15 +17,8 @@ Rancher contains a variety of tools that aren't included in Kubernetes to assist -## Notifiers and Alerts -Notifiers and alerts are two features that work together to inform you of events in the Rancher system. - -[Notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers) are services that inform you of alert events. You can configure notifiers to send alert notifications to staff best suited to take corrective action. Notifications can be sent with Slack, email, PagerDuty, WeChat, and webhooks. - -[Alerts]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts) are rules that trigger those notifications. Before you can receive alerts, you must configure one or more notifier in Rancher. The scope for alerts can be set at either the cluster or project level. 
- -## Logging +# Logging Logging is helpful because it allows you to: @@ -37,18 +30,24 @@ Logging is helpful because it allows you to: Rancher can integrate with Elasticsearch, splunk, kafka, syslog, and fluentd. -For details, refer to the [logging section.]({{}}/rancher/v2.x/en/cluster-admin/tools/logging) +For details, refer to the [logging section.]({{}}/rancher/v2.x/en/logging) -## Monitoring +# Monitoring -_Available as of v2.2.0_ +Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with [Prometheus](https://prometheus.io/), a leading open-source monitoring solution. For details, refer to the [monitoring section.]({{}}/rancher/v2.x/en/monitoring) -Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with [Prometheus](https://prometheus.io/), a leading open-source monitoring solution. For details, refer to the [monitoring section.]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring) +### Notifiers and Alerts -## Istio +After monitoring is enabled, you can set up alerts and notifiers that provide the mechanism to receive them. - [Istio](https://istio.io/) is an open-source tool that makes it easier for DevOps teams to observe, control, troubleshoot, and secure the traffic within a complex network of microservices. For details on how to enable Istio in Rancher, refer to the [Istio section.]({{}}/rancher/v2.x/en/cluster-admin/tools/istio) +Notifiers are services that inform you of alert events. You can configure notifiers to send alert notifications to staff best suited to take corrective action. Notifications can be sent with Slack, email, PagerDuty, WeChat, and webhooks. + +Alerts are rules that trigger those notifications. Before you can receive alerts, you must configure one or more notifier in Rancher. The scope for alerts can be set at either the cluster or project level. 
+ +# Istio + +[Istio](https://istio.io/) is an open-source tool that makes it easier for DevOps teams to observe, control, troubleshoot, and secure the traffic within a complex network of microservices. For details on how to enable Istio in Rancher, refer to the [Istio section.]({{}}/rancher/v2.x/en/istio) ## OPA Gatekeeper - [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper) is an open-source project that provides integration between OPA and Kubernetes to provide policy control via admission controller webhooks. For details on how to enable Gatekeeper in Rancher, refer to the [OPA Gatekeeper section.]({{}}/rancher/v2.x/en/cluster-admin/tools/opa-gatekeeper) + [OPA Gatekeeper](https://github.com/open-policy-agent/gatekeeper) is an open-source project that provides integration between OPA and Kubernetes to provide policy control via admission controller webhooks. For details on how to enable Gatekeeper in Rancher, refer to the [OPA Gatekeeper section.]({{}}/rancher/v2.x/en/opa-gatekeeper) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/istio/release-notes/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/istio/release-notes/_index.md deleted file mode 100644 index fe719fc5c6e..00000000000 --- a/content/rancher/v2.x/en/cluster-admin/tools/istio/release-notes/_index.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -title: Release Notes ---- - - -# Important note on Istio 1.5.x versions - -When upgrading from any 1.4 version of Istio to any 1.5 version, the Rancher installer will delete several resources in order to complete the upgrade, at which point they will be immediately re-installed. This includes the `istio-reader-service-account`. If your Istio installation is using this service account be aware that any secrets tied to the service account will be deleted. Most notably this will **break specific [multi-cluster deployments](https://archive.istio.io/v1.4/docs/setup/install/multicluster/)**. Downgrades back to 1.4 are not possible. 
- -See the official upgrade notes for additional information on the 1.5 release and upgrading from 1.4: https://istio.io/latest/news/releases/1.5.x/announcing-1.5/upgrade-notes/ - -> **Note:** Rancher continues to use the Helm installation method, which produces a different architecture from an istioctl installation. - - - -## Istio 1.5.9 release notes - -**Bug fixes** - -* The Kiali traffic graph is now working [#28109](https://github.com/rancher/rancher/issues/28109) - -**Known Issues** - -* The Kiali traffic graph is offset in the UI [#28207](https://github.com/rancher/rancher/issues/28207) - - -## Istio 1.5.8 release notes - -**Known Issues** - -* The Kiali traffic graph is currently not working [#24924](https://github.com/istio/istio/issues/24924) diff --git a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/custom-metrics/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/monitoring/custom-metrics/_index.md deleted file mode 100644 index a78b62ee206..00000000000 --- a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/custom-metrics/_index.md +++ /dev/null @@ -1,489 +0,0 @@ ---- -title: Prometheus Custom Metrics Adapter -weight: 5 ---- - -After you've enabled [cluster level monitoring]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring), you can view metrics data in Rancher. You can also deploy the Prometheus custom metrics adapter, after which you can use the HPA with metrics stored in cluster monitoring. - -## Deploy Prometheus Custom Metrics Adapter - -We are going to use the [Prometheus custom metrics adapter](https://github.com/DirectXMan12/k8s-prometheus-adapter/releases/tag/v0.5.0), version v0.5.0. This is a good example of a [custom metrics server](https://github.com/kubernetes-incubator/custom-metrics-apiserver). You must be the *cluster owner* to execute the following steps. - -- Get the service account that cluster monitoring is using. 
It should be configured in the workload ID: `statefulset:cattle-prometheus:prometheus-cluster-monitoring`. If you didn't customize anything, the service account name should be `cluster-monitoring`. - -- Grant permissions to that service account. You will need two kinds of permissions. -One role is `extension-apiserver-authentication-reader` in `kube-system`, so you will need to create a `RoleBinding` in `kube-system`. This permission is used to read the API aggregation configuration from a config map in `kube-system`. - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: custom-metrics-auth-reader - namespace: kube-system -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: extension-apiserver-authentication-reader -subjects: -- kind: ServiceAccount - name: cluster-monitoring - namespace: cattle-prometheus -``` - -The other is the cluster role `system:auth-delegator`, so you will need to create a `ClusterRoleBinding`. This permission allows the adapter to perform subject access reviews. - -```yaml -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: custom-metrics:system:auth-delegator -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: system:auth-delegator -subjects: -- kind: ServiceAccount - name: cluster-monitoring - namespace: cattle-prometheus -``` - -- Create the configuration for the custom metrics adapter. The following is an example; the configuration format is explained in detail in a later section. 
- -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: adapter-config - namespace: cattle-prometheus -data: - config.yaml: | - rules: - - seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}' - seriesFilters: [] - resources: - overrides: - namespace: - resource: namespace - pod_name: - resource: pod - name: - matches: ^container_(.*)_seconds_total$ - as: "" - metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[1m])) by (<<.GroupBy>>) - - seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}' - seriesFilters: - - isNot: ^container_.*_seconds_total$ - resources: - overrides: - namespace: - resource: namespace - pod_name: - resource: pod - name: - matches: ^container_(.*)_total$ - as: "" - metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[1m])) by (<<.GroupBy>>) - - seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}' - seriesFilters: - - isNot: ^container_.*_total$ - resources: - overrides: - namespace: - resource: namespace - pod_name: - resource: pod - name: - matches: ^container_(.*)$ - as: "" - metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}) by (<<.GroupBy>>) - - seriesQuery: '{namespace!="",__name__!~"^container_.*"}' - seriesFilters: - - isNot: .*_total$ - resources: - template: <<.Resource>> - name: - matches: "" - as: "" - metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>) - - seriesQuery: '{namespace!="",__name__!~"^container_.*"}' - seriesFilters: - - isNot: .*_seconds_total - resources: - template: <<.Resource>> - name: - matches: ^(.*)_total$ - as: "" - metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>) - - seriesQuery: '{namespace!="",__name__!~"^container_.*"}' - seriesFilters: [] - resources: - template: <<.Resource>> - name: - matches: ^(.*)_seconds_total$ - as: "" - metricsQuery: 
sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>) - resourceRules: - cpu: - containerQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>) - nodeQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>, id='/'}[1m])) by (<<.GroupBy>>) - resources: - overrides: - instance: - resource: node - namespace: - resource: namespace - pod_name: - resource: pod - containerLabel: container_name - memory: - containerQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>}) by (<<.GroupBy>>) - nodeQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>,id='/'}) by (<<.GroupBy>>) - resources: - overrides: - instance: - resource: node - namespace: - resource: namespace - pod_name: - resource: pod - containerLabel: container_name - window: 1m -``` - -- Create HTTPS TLS certs for your API server. You can use the following command to create a self-signed cert. - -```bash -openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out serving.crt -keyout serving.key -subj "/C=CN/CN=custom-metrics-apiserver.cattle-prometheus.svc.cluster.local" -# This writes serving.crt and serving.key to your current directory. Next, create a secret in the cattle-prometheus namespace. -kubectl create secret generic -n cattle-prometheus cm-adapter-serving-certs --from-file=serving.key=./serving.key --from-file=serving.crt=./serving.crt -``` - -- Then you can create the Prometheus custom metrics adapter. You will also need a service for this deployment; you can create both resources via Import YAML in Rancher. Please create these resources in the `cattle-prometheus` namespace. - -Here is the Prometheus custom metrics adapter deployment. 
-```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - labels: - app: custom-metrics-apiserver - name: custom-metrics-apiserver - namespace: cattle-prometheus -spec: - replicas: 1 - selector: - matchLabels: - app: custom-metrics-apiserver - template: - metadata: - labels: - app: custom-metrics-apiserver - name: custom-metrics-apiserver - spec: - serviceAccountName: cluster-monitoring - containers: - - name: custom-metrics-apiserver - image: directxman12/k8s-prometheus-adapter-amd64:v0.5.0 - args: - - --secure-port=6443 - - --tls-cert-file=/var/run/serving-cert/serving.crt - - --tls-private-key-file=/var/run/serving-cert/serving.key - - --logtostderr=true - - --prometheus-url=http://prometheus-operated/ - - --metrics-relist-interval=1m - - --v=10 - - --config=/etc/adapter/config.yaml - ports: - - containerPort: 6443 - volumeMounts: - - mountPath: /var/run/serving-cert - name: volume-serving-cert - readOnly: true - - mountPath: /etc/adapter/ - name: config - readOnly: true - - mountPath: /tmp - name: tmp-vol - volumes: - - name: volume-serving-cert - secret: - secretName: cm-adapter-serving-certs - - name: config - configMap: - name: adapter-config - - name: tmp-vol - emptyDir: {} - -``` - -Here is the service of the deployment. -```yaml -apiVersion: v1 -kind: Service -metadata: - name: custom-metrics-apiserver - namespace: cattle-prometheus -spec: - ports: - - port: 443 - targetPort: 6443 - selector: - app: custom-metrics-apiserver -``` - -- Create API service for your custom metric server. - -```yaml -apiVersion: apiregistration.k8s.io/v1beta1 -kind: APIService -metadata: - name: v1beta1.custom.metrics.k8s.io -spec: - service: - name: custom-metrics-apiserver - namespace: cattle-prometheus - group: custom.metrics.k8s.io - version: v1beta1 - insecureSkipTLSVerify: true - groupPriorityMinimum: 100 - versionPriority: 100 - -``` - -- Then you can verify your custom metrics server by `kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1`. 
If you see data returned from the API, the metrics server has been set up successfully. - -- You can now create an HPA that uses custom metrics. Here is an example HPA; you will need to create an nginx deployment in your namespace first. - -```yaml -kind: HorizontalPodAutoscaler -apiVersion: autoscaling/v2beta1 -metadata: - name: nginx -spec: - scaleTargetRef: - # point the HPA at the nginx deployment you just created - apiVersion: apps/v1 - kind: Deployment - name: nginx - # autoscale between 1 and 10 replicas - minReplicas: 1 - maxReplicas: 10 - metrics: - # use a "Pods" metric, which takes the average of the - # given metric across all pods controlled by the autoscaling target - - type: Pods - pods: - metricName: memory_usage_bytes - targetAverageValue: 5000000 -``` - -You should then see your nginx deployment scale up, confirming that the HPA works with custom metrics. - -## Configuration of the Prometheus custom metrics adapter - -> Refer to https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/config.md - -The adapter determines which metrics to expose, and how to expose them, -through a set of "discovery" rules. Each rule is executed independently -(so make sure that your rules are mutually exclusive), and specifies each -of the steps the adapter needs to take to expose a metric in the API. - -Each rule can be broken down into roughly four parts: - -- *Discovery*, which specifies how the adapter should find all Prometheus - metrics for this rule. - -- *Association*, which specifies how the adapter should determine which - Kubernetes resources a particular metric is associated with. - -- *Naming*, which specifies how the adapter should expose the metric in - the custom metrics API. - -- *Querying*, which specifies how a request for a particular metric on one - or more Kubernetes objects should be turned into a query to Prometheus. 
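The naming part of a rule can be made concrete with a small sketch. The Python below is illustrative only (the real adapter is written in Go): it models how a rule's `matches` regex and `as` transformation turn a Prometheus series name into a custom-metrics API name, including the adapter's documented defaults for `as`.

```python
import re

def api_name(series, matches=r"^container_(.*)_seconds_total$", as_=None):
    """Apply a rule's `matches`/`as` naming stanza to a Prometheus series name.

    Returns the custom-metrics API name, or None when the rule does not
    match this series (so the series is skipped by this rule).
    """
    m = re.match(matches, series)
    if m is None:
        return None
    if as_ is None:
        # documented defaults: "$0" with no capture groups, "$1" with exactly one
        as_ = "$1" if m.groups() else "$0"
    # expand $0 / $1 / ${1}-style references against the regex match
    return re.sub(r"\$\{?(\d+)\}?", lambda ref: m.group(int(ref.group(1))), as_)

print(api_name("container_cpu_usage_seconds_total"))  # cpu_usage
print(api_name("http_requests_total", r"^(.*)_total$", "${1}_per_second"))
```

For instance, the second call mirrors the `_total` → `_per_second` renaming example shown in the Naming section below.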
- -A more comprehensive configuration file can be found in -[sample-config.yaml](sample-config.yaml), but a basic config with one rule -might look like: - -```yaml -rules: -# this rule matches cumulative cAdvisor metrics measured in seconds -- seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}' - resources: - # skip specifying generic resource<->label mappings, and just - # attach only pod and namespace resources by mapping label names to group-resources - overrides: - namespace: {resource: "namespace"}, - pod_name: {resource: "pod"}, - # specify that the `container_` and `_seconds_total` suffixes should be removed. - # this also introduces an implicit filter on metric family names - name: - # we use the value of the capture group implicitly as the API name - # we could also explicitly write `as: "$1"` - matches: "^container_(.*)_seconds_total$" - # specify how to construct a query to fetch samples for a given series - # This is a Go template where the `.Series` and `.LabelMatchers` string values - # are available, and the delimiters are `<<` and `>>` to avoid conflicts with - # the prometheus query language - metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)" -``` - -### Discovery - -Discovery governs the process of finding the metrics that you want to -expose in the custom metrics API. There are two fields that factor into -discovery: `seriesQuery` and `seriesFilters`. - -`seriesQuery` specifies Prometheus series query (as passed to the -`/api/v1/series` endpoint in Prometheus) to use to find some set of -Prometheus series. The adapter will strip the label values from this -series, and then use the resulting metric-name-label-names combinations -later on. - -In many cases, `seriesQuery` will be sufficient to narrow down the list of -Prometheus series. 
However, sometimes (especially if two rules might -otherwise overlap), it's useful to do additional filtering on metric -names. In this case, `seriesFilters` can be used. After the list of -series is returned from `seriesQuery`, each series has its metric name -filtered through any specified filters. - -Filters may be either: - -- `is: <regex>`, which matches any series whose name matches the specified - regex. - -- `isNot: <regex>`, which matches any series whose name does not match the - specified regex. - -For example: - -```yaml -# match all cAdvisor metrics that aren't measured in seconds -seriesQuery: '{__name__=~"^container_.*_total",container_name!="POD",namespace!="",pod_name!=""}' -seriesFilters: - isNot: "^container_.*_seconds_total" -``` - -### Association - -Association governs the process of figuring out which Kubernetes resources -a particular metric could be attached to. The `resources` field controls -this process. - -There are two ways to associate resources with a particular metric. In -both cases, the value of the label becomes the name of the particular -object. - -One way is to specify that any label name that matches some particular -pattern refers to some group-resource based on the label name. This can -be done using the `template` field. The pattern is specified as a Go -template, with the `Group` and `Resource` fields representing group and -resource. You don't necessarily have to use the `Group` field (in which -case the group is guessed by the system). For instance: - -```yaml -# any label `kube_<group>_<resource>` becomes <group>.<resource> in Kubernetes -resources: - template: "kube_<<.Group>>_<<.Resource>>" -``` - -The other way is to specify that some particular label represents some -particular Kubernetes resource. This can be done using the `overrides` -field. Each override maps a Prometheus label to a Kubernetes -group-resource. 
For instance: - -```yaml -# the microservice label corresponds to the apps.deployment resource -resources: - overrides: - microservice: {group: "apps", resource: "deployment"} -``` - -These two can be combined, so you can specify both a template and some -individual overrides. - -The resources mentioned can be any resource available in your Kubernetes -cluster, as long as you've got a corresponding label. - -### Naming - -Naming governs the process of converting a Prometheus metric name into -a metric in the custom metrics API, and vice versa. It's controlled by -the `name` field. - -Naming is controlled by specifying a pattern to extract an API name from -a Prometheus name, and potentially a transformation on that extracted -value. - -The pattern is specified in the `matches` field, and is just a regular -expression. If not specified, it defaults to `.*`. - -The transformation is specified by the `as` field. You can use any -capture groups defined in the `matches` field. If the `matches` field -doesn't contain capture groups, the `as` field defaults to `$0`. If it -contains a single capture group, the `as` field defaults to `$1`. -Otherwise, it's an error not to specify the `as` field. - -For example: - -```yaml -# turn any metric name ending in _total into _per_second -# e.g. http_requests_total becomes http_requests_per_second -name: - matches: "^(.*)_total$" - as: "${1}_per_second" -``` - -### Querying - -Querying governs the process of actually fetching values for a particular -metric. It's controlled by the `metricsQuery` field. - -The `metricsQuery` field is a Go template that gets turned into -a Prometheus query, using input from a particular call to the custom -metrics API. A given call to the custom metrics API is distilled down to -a metric name, a group-resource, and one or more objects of that -group-resource. 
These get turned into the following fields in the -template: - -- `Series`: the metric name -- `LabelMatchers`: a comma-separated list of label matchers matching the - given objects. Currently, this is the label for the particular - group-resource, plus the label for namespace, if the group-resource is - namespaced. -- `GroupBy`: a comma-separated list of labels to group by. Currently, - this contains the group-resource label used in `LabelMatchers`. - -For instance, suppose we had a series `http_requests_total` (exposed as -`http_requests_per_second` in the API) with labels `service`, `pod`, -`ingress`, `namespace`, and `verb`. The first four correspond to -Kubernetes resources. Then, if someone requested the metric -`pods/http_requests_per_second` for the pods `pod1` and `pod2` in the -`somens` namespace, we'd have: - -- `Series: "http_requests_total"` -- `LabelMatchers: "pod=~\"pod1|pod2\",namespace=\"somens\""` -- `GroupBy`: `pod` - -Additionally, there are two advanced fields that are "raw" forms of other -fields: - -- `LabelValuesByName`: a map mapping the labels and values from the - `LabelMatchers` field. The values are pre-joined by `|` - (for use with the `=~` matcher in Prometheus). -- `GroupBySlice`: the slice form of `GroupBy`. - -In general, you'll probably want to use the `Series`, `LabelMatchers`, and -`GroupBy` fields. The other two are for advanced usage. - -The query is expected to return one value for each object requested. The -adapter will use the labels on the returned series to associate a given -series back to its corresponding object. 
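Continuing the `http_requests_total` example above, the template expansion can be sketched in Python. This is a simple string-substitution stand-in for the Go `text/template` rendering the adapter actually performs; the `<<`/`>>` delimiters are the ones used in the adapter config.

```python
def render(metrics_query, series, label_matchers, group_by):
    """Fill a metricsQuery template with the fields derived from one API call.

    Naive stand-in for Go text/template rendering, shown for illustration.
    """
    return (metrics_query
            .replace("<<.Series>>", series)
            .replace("<<.LabelMatchers>>", label_matchers)
            .replace("<<.GroupBy>>", group_by))

template = 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
# request: pods/http_requests_per_second for pod1 and pod2 in namespace "somens"
query = render(template, "http_requests_total",
               'pod=~"pod1|pod2",namespace="somens"', "pod")
print(query)
# sum(rate(http_requests_total{pod=~"pod1|pod2",namespace="somens"}[2m])) by (pod)
```

The resulting PromQL returns one series per pod, which the adapter then maps back to the requested objects via the `pod` label.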
- -For example: - -```yaml -# convert cumulative cAdvisor metrics into rates calculated over 2 minutes -metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)" -``` diff --git a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/_index.md b/content/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/_index.md deleted file mode 100644 index 9f5170c9779..00000000000 --- a/content/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/_index.md +++ /dev/null @@ -1,430 +0,0 @@ ---- -title: Prometheus Expressions -weight: 4 ---- - -The PromQL expressions in this doc can be used to configure [alerts.]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/) - -> Before expression can be used in alerts, monitoring must be enabled. For more information, refer to the documentation on enabling monitoring [at the cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [at the project level.]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring) - -For more information about querying Prometheus, refer to the official [Prometheus documentation.](https://prometheus.io/docs/prometheus/latest/querying/basics/) - - - -- [Cluster Metrics](#cluster-metrics) - - [Cluster CPU Utilization](#cluster-cpu-utilization) - - [Cluster Load Average](#cluster-load-average) - - [Cluster Memory Utilization](#cluster-memory-utilization) - - [Cluster Disk Utilization](#cluster-disk-utilization) - - [Cluster Disk I/O](#cluster-disk-i-o) - - [Cluster Network Packets](#cluster-network-packets) - - [Cluster Network I/O](#cluster-network-i-o) -- [Node Metrics](#node-metrics) - - [Node CPU Utilization](#node-cpu-utilization) - - [Node Load Average](#node-load-average) - - [Node Memory Utilization](#node-memory-utilization) - - [Node Disk Utilization](#node-disk-utilization) - - [Node Disk I/O](#node-disk-i-o) - - [Node Network Packets](#node-network-packets) - - [Node 
Network I/O](#node-network-i-o) -- [Etcd Metrics](#etcd-metrics) - - [Etcd Has a Leader](#etcd-has-a-leader) - - [Number of Times the Leader Changes](#number-of-times-the-leader-changes) - - [Number of Failed Proposals](#number-of-failed-proposals) - - [GRPC Client Traffic](#grpc-client-traffic) - - [Peer Traffic](#peer-traffic) - - [DB Size](#db-size) - - [Active Streams](#active-streams) - - [Raft Proposals](#raft-proposals) - - [RPC Rate](#rpc-rate) - - [Disk Operations](#disk-operations) - - [Disk Sync Duration](#disk-sync-duration) -- [Kubernetes Components Metrics](#kubernetes-components-metrics) - - [API Server Request Latency](#api-server-request-latency) - - [API Server Request Rate](#api-server-request-rate) - - [Scheduling Failed Pods](#scheduling-failed-pods) - - [Controller Manager Queue Depth](#controller-manager-queue-depth) - - [Scheduler E2E Scheduling Latency](#scheduler-e2e-scheduling-latency) - - [Scheduler Preemption Attempts](#scheduler-preemption-attempts) - - [Ingress Controller Connections](#ingress-controller-connections) - - [Ingress Controller Request Process Time](#ingress-controller-request-process-time) -- [Rancher Logging Metrics](#rancher-logging-metrics) - - [Fluentd Buffer Queue Rate](#fluentd-buffer-queue-rate) - - [Fluentd Input Rate](#fluentd-input-rate) - - [Fluentd Output Errors Rate](#fluentd-output-errors-rate) - - [Fluentd Output Rate](#fluentd-output-rate) -- [Workload Metrics](#workload-metrics) - - [Workload CPU Utilization](#workload-cpu-utilization) - - [Workload Memory Utilization](#workload-memory-utilization) - - [Workload Network Packets](#workload-network-packets) - - [Workload Network I/O](#workload-network-i-o) - - [Workload Disk I/O](#workload-disk-i-o) -- [Pod Metrics](#pod-metrics) - - [Pod CPU Utilization](#pod-cpu-utilization) - - [Pod Memory Utilization](#pod-memory-utilization) - - [Pod Network Packets](#pod-network-packets) - - [Pod Network I/O](#pod-network-i-o) - - [Pod Disk I/O](#pod-disk-i-o) -- 
[Container Metrics](#container-metrics) - - [Container CPU Utilization](#container-cpu-utilization) - - [Container Memory Utilization](#container-memory-utilization) - - [Container Disk I/O](#container-disk-i-o) - - - -# Cluster Metrics - -### Cluster CPU Utilization - -| Catalog | Expression | -| --- | --- | -| Detail | `1 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance))` | -| Summary | `1 - (avg(irate(node_cpu_seconds_total{mode="idle"}[5m])))` | - -### Cluster Load Average - -| Catalog | Expression | -| --- | --- | -| Detail |
load1`sum(node_load1) by (instance) / count(node_cpu_seconds_total{mode="system"}) by (instance)`
load5`sum(node_load5) by (instance) / count(node_cpu_seconds_total{mode="system"}) by (instance)`
load15`sum(node_load15) by (instance) / count(node_cpu_seconds_total{mode="system"}) by (instance)`
| -| Summary |
load1`sum(node_load1) by (instance) / count(node_cpu_seconds_total{mode="system"})`
load5`sum(node_load5) by (instance) / count(node_cpu_seconds_total{mode="system"})`
load15`sum(node_load15) by (instance) / count(node_cpu_seconds_total{mode="system"})`
| - -### Cluster Memory Utilization - -| Catalog | Expression | -| --- | --- | -| Detail | `1 - sum(node_memory_MemAvailable_bytes) by (instance) / sum(node_memory_MemTotal_bytes) by (instance)` | -| Summary | `1 - sum(node_memory_MemAvailable_bytes) / sum(node_memory_MemTotal_bytes)` | - -### Cluster Disk Utilization - -| Catalog | Expression | -| --- | --- | -| Detail | `(sum(node_filesystem_size_bytes{device!="rootfs"}) by (instance) - sum(node_filesystem_free_bytes{device!="rootfs"}) by (instance)) / sum(node_filesystem_size_bytes{device!="rootfs"}) by (instance)` | -| Summary | `(sum(node_filesystem_size_bytes{device!="rootfs"}) - sum(node_filesystem_free_bytes{device!="rootfs"})) / sum(node_filesystem_size_bytes{device!="rootfs"})` | - -### Cluster Disk I/O - -| Catalog | Expression | -| --- | --- | -| Detail |
read`sum(rate(node_disk_read_bytes_total[5m])) by (instance)`
written`sum(rate(node_disk_written_bytes_total[5m])) by (instance)`
| -| Summary |
read`sum(rate(node_disk_read_bytes_total[5m]))`
written`sum(rate(node_disk_written_bytes_total[5m]))`
| - -### Cluster Network Packets - -| Catalog | Expression | -| --- | --- | -| Detail |
receive-dropped `sum(rate(node_network_receive_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
receive-errs `sum(rate(node_network_receive_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
receive-packets `sum(rate(node_network_receive_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
transmit-dropped `sum(rate(node_network_transmit_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
transmit-errs `sum(rate(node_network_transmit_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
transmit-packets `sum(rate(node_network_transmit_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
| -| Summary |
receive-dropped `sum(rate(node_network_receive_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
receive-errs `sum(rate(node_network_receive_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
receive-packets `sum(rate(node_network_receive_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
transmit-dropped `sum(rate(node_network_transmit_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
transmit-errs `sum(rate(node_network_transmit_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
transmit-packets `sum(rate(node_network_transmit_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
| - -### Cluster Network I/O - -| Catalog | Expression | -| --- | --- | -| Detail |
receive `sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
transmit `sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m])) by (instance)`
| -| Summary |
receive `sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
transmit `sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*"}[5m]))`
| - -# Node Metrics - -### Node CPU Utilization - -| Catalog | Expression | -| --- | --- | -| Detail | `avg(irate(node_cpu_seconds_total{mode!="idle", instance=~"$instance"}[5m])) by (mode)` | -| Summary | `1 - (avg(irate(node_cpu_seconds_total{mode="idle", instance=~"$instance"}[5m])))` | - -### Node Load Average - -| Catalog | Expression | -| --- | --- | -| Detail |
load1`sum(node_load1{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
load5`sum(node_load5{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
load15`sum(node_load15{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
| -| Summary |
load1`sum(node_load1{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
load5`sum(node_load5{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
load15`sum(node_load15{instance=~"$instance"}) / count(node_cpu_seconds_total{mode="system",instance=~"$instance"})`
| - -### Node Memory Utilization - -| Catalog | Expression | -| --- | --- | -| Detail | `1 - sum(node_memory_MemAvailable_bytes{instance=~"$instance"}) / sum(node_memory_MemTotal_bytes{instance=~"$instance"})` | -| Summary | `1 - sum(node_memory_MemAvailable_bytes{instance=~"$instance"}) / sum(node_memory_MemTotal_bytes{instance=~"$instance"}) ` | - -### Node Disk Utilization - -| Catalog | Expression | -| --- | --- | -| Detail | `(sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"}) by (device) - sum(node_filesystem_free_bytes{device!="rootfs",instance=~"$instance"}) by (device)) / sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"}) by (device)` | -| Summary | `(sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"}) - sum(node_filesystem_free_bytes{device!="rootfs",instance=~"$instance"})) / sum(node_filesystem_size_bytes{device!="rootfs",instance=~"$instance"})` | - -### Node Disk I/O - -| Catalog | Expression | -| --- | --- | -| Detail |
read`sum(rate(node_disk_read_bytes_total{instance=~"$instance"}[5m]))`
written`sum(rate(node_disk_written_bytes_total{instance=~"$instance"}[5m]))`
| -| Summary |
read`sum(rate(node_disk_read_bytes_total{instance=~"$instance"}[5m]))`
written`sum(rate(node_disk_written_bytes_total{instance=~"$instance"}[5m]))`
| - -### Node Network Packets - -| Catalog | Expression | -| --- | --- | -| Detail |
receive-dropped `sum(rate(node_network_receive_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
receive-errs `sum(rate(node_network_receive_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
receive-packets `sum(rate(node_network_receive_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
transmit-dropped `sum(rate(node_network_transmit_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
transmit-errs `sum(rate(node_network_transmit_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
transmit-packets `sum(rate(node_network_transmit_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
| -| Summary |
receive-dropped `sum(rate(node_network_receive_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
receive-errs `sum(rate(node_network_receive_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
receive-packets `sum(rate(node_network_receive_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
transmit-dropped `sum(rate(node_network_transmit_drop_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
transmit-errs `sum(rate(node_network_transmit_errs_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
transmit-packets `sum(rate(node_network_transmit_packets_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
| - -### Node Network I/O - -| Catalog | Expression | -| --- | --- | -| Detail |
receive `sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
transmit `sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m])) by (device)`
| -| Summary |
receive `sum(rate(node_network_receive_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
transmit `sum(rate(node_network_transmit_bytes_total{device!~"lo|veth.*|docker.*|flannel.*|cali.*|cbr.*",instance=~"$instance"}[5m]))`
| - -# Etcd Metrics - -### Etcd Has a Leader - -`max(etcd_server_has_leader)` - -### Number of Times the Leader Changes - -`max(etcd_server_leader_changes_seen_total)` - -### Number of Failed Proposals - -`sum(etcd_server_proposals_failed_total)` - -### GRPC Client Traffic - -| Catalog | Expression | -| --- | --- | -| Detail |
in`sum(rate(etcd_network_client_grpc_received_bytes_total[5m])) by (instance)`
out`sum(rate(etcd_network_client_grpc_sent_bytes_total[5m])) by (instance)`
| -| Summary |
in`sum(rate(etcd_network_client_grpc_received_bytes_total[5m]))`
out`sum(rate(etcd_network_client_grpc_sent_bytes_total[5m]))`
| - -### Peer Traffic - -| Catalog | Expression | -| --- | --- | -| Detail |
in`sum(rate(etcd_network_peer_received_bytes_total[5m])) by (instance)`
out`sum(rate(etcd_network_peer_sent_bytes_total[5m])) by (instance)`
| -| Summary |
in`sum(rate(etcd_network_peer_received_bytes_total[5m]))`
out`sum(rate(etcd_network_peer_sent_bytes_total[5m]))`
| - -### DB Size - -| Catalog | Expression | -| --- | --- | -| Detail | `sum(etcd_debugging_mvcc_db_total_size_in_bytes) by (instance)` | -| Summary | `sum(etcd_debugging_mvcc_db_total_size_in_bytes)` | - -### Active Streams - -| Catalog | Expression | -| --- | --- | -| Detail |
lease-watch`sum(grpc_server_started_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) by (instance) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) by (instance)`
watch`sum(grpc_server_started_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) by (instance) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) by (instance)`
| -| Summary |
lease-watch`sum(grpc_server_started_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"})`
watch`sum(grpc_server_started_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) - sum(grpc_server_handled_total{grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"})`
| - -### Raft Proposals - -| Catalog | Expression | -| --- | --- | -| Detail |
applied`sum(increase(etcd_server_proposals_applied_total[5m])) by (instance)`
committed`sum(increase(etcd_server_proposals_committed_total[5m])) by (instance)`
pending`sum(increase(etcd_server_proposals_pending[5m])) by (instance)`
failed`sum(increase(etcd_server_proposals_failed_total[5m])) by (instance)`
| -| Summary |
applied`sum(increase(etcd_server_proposals_applied_total[5m]))`
committed`sum(increase(etcd_server_proposals_committed_total[5m]))`
pending`sum(increase(etcd_server_proposals_pending[5m]))`
failed`sum(increase(etcd_server_proposals_failed_total[5m]))`
| - -### RPC Rate - -| Catalog | Expression | -| --- | --- | -| Detail |
total`sum(rate(grpc_server_started_total{grpc_type="unary"}[5m])) by (instance)`
fail`sum(rate(grpc_server_handled_total{grpc_type="unary",grpc_code!="OK"}[5m])) by (instance)`
| -| Summary |
total`sum(rate(grpc_server_started_total{grpc_type="unary"}[5m]))`
fail`sum(rate(grpc_server_handled_total{grpc_type="unary",grpc_code!="OK"}[5m]))`
| - -### Disk Operations - -| Catalog | Expression | -| --- | --- | -| Detail |
commit-called-by-backend`sum(rate(etcd_disk_backend_commit_duration_seconds_sum[1m])) by (instance)`
fsync-called-by-wal`sum(rate(etcd_disk_wal_fsync_duration_seconds_sum[1m])) by (instance)`
| -| Summary |
commit-called-by-backend`sum(rate(etcd_disk_backend_commit_duration_seconds_sum[1m]))`
fsync-called-by-wal`sum(rate(etcd_disk_wal_fsync_duration_seconds_sum[1m]))`
| - -### Disk Sync Duration - -| Catalog | Expression | -| --- | --- | -| Detail |
wal`histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (instance, le))`
db`histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])) by (instance, le))`
| -| Summary |
wal`sum(histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])) by (instance, le)))`
db`sum(histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])) by (instance, le)))`
| - -# Kubernetes Components Metrics - -### API Server Request Latency - -| Catalog | Expression | -| --- | --- | -| Detail | `avg(apiserver_request_latencies_sum / apiserver_request_latencies_count) by (instance, verb) /1e+06` | -| Summary | `avg(apiserver_request_latencies_sum / apiserver_request_latencies_count) by (instance) /1e+06` | - -### API Server Request Rate - -| Catalog | Expression | -| --- | --- | -| Detail | `sum(rate(apiserver_request_count[5m])) by (instance, code)` | -| Summary | `sum(rate(apiserver_request_count[5m])) by (instance)` | - -### Scheduling Failed Pods - -| Catalog | Expression | -| --- | --- | -| Detail | `sum(kube_pod_status_scheduled{condition="false"})` | -| Summary | `sum(kube_pod_status_scheduled{condition="false"})` | - -### Controller Manager Queue Depth - -| Catalog | Expression | -| --- | --- | -| Detail |
volumes `sum(volumes_depth) by (instance)`
deployment `sum(deployment_depth) by (instance)`
replicaset `sum(replicaset_depth) by (instance)`
service `sum(service_depth) by (instance)`
serviceaccount `sum(serviceaccount_depth) by (instance)`
endpoint `sum(endpoint_depth) by (instance)`
daemonset `sum(daemonset_depth) by (instance)`
statefulset `sum(statefulset_depth) by (instance)`
replicationmanager `sum(replicationmanager_depth) by (instance)`
| -| Summary |
volumes`sum(volumes_depth)`
deployment`sum(deployment_depth)`
replicaset`sum(replicaset_depth)`
service`sum(service_depth)`
serviceaccount`sum(serviceaccount_depth)`
endpoint`sum(endpoint_depth)`
daemonset`sum(daemonset_depth)`
statefulset`sum(statefulset_depth)`
replicationmanager`sum(replicationmanager_depth)`
| - -### Scheduler E2E Scheduling Latency - -| Catalog | Expression | -| --- | --- | -| Detail | `histogram_quantile(0.99, sum(scheduler_e2e_scheduling_latency_microseconds_bucket) by (le, instance)) / 1e+06` | -| Summary | `sum(histogram_quantile(0.99, sum(scheduler_e2e_scheduling_latency_microseconds_bucket) by (le, instance)) / 1e+06)` | - -### Scheduler Preemption Attempts - -| Catalog | Expression | -| --- | --- | -| Detail | `sum(rate(scheduler_total_preemption_attempts[5m])) by (instance)` | -| Summary | `sum(rate(scheduler_total_preemption_attempts[5m]))` | - -### Ingress Controller Connections - -| Catalog | Expression | -| --- | --- | -| Detail |
reading`sum(nginx_ingress_controller_nginx_process_connections{state="reading"}) by (instance)`
waiting`sum(nginx_ingress_controller_nginx_process_connections{state="waiting"}) by (instance)`
writing`sum(nginx_ingress_controller_nginx_process_connections{state="writing"}) by (instance)`
accepted`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="accepted"}[5m]))) by (instance)`
active`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="active"}[5m]))) by (instance)`
handled`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="handled"}[5m]))) by (instance)`
| -| Summary |
reading`sum(nginx_ingress_controller_nginx_process_connections{state="reading"})`
waiting`sum(nginx_ingress_controller_nginx_process_connections{state="waiting"})`
writing`sum(nginx_ingress_controller_nginx_process_connections{state="writing"})`
accepted`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="accepted"}[5m])))`
active`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="active"}[5m])))`
handled`sum(ceil(increase(nginx_ingress_controller_nginx_process_connections_total{state="handled"}[5m])))`
| - -### Ingress Controller Request Process Time - -| Catalog | Expression | -| --- | --- | -| Detail | `topk(10, histogram_quantile(0.95,sum by (le, host, path)(rate(nginx_ingress_controller_request_duration_seconds_bucket{host!="_"}[5m]))))` | -| Summary | `topk(10, histogram_quantile(0.95,sum by (le, host)(rate(nginx_ingress_controller_request_duration_seconds_bucket{host!="_"}[5m]))))` | - -# Rancher Logging Metrics - - -### Fluentd Buffer Queue Rate - -| Catalog | Expression | -| --- | --- | -| Detail | `sum(rate(fluentd_output_status_buffer_queue_length[5m])) by (instance)` | -| Summary | `sum(rate(fluentd_output_status_buffer_queue_length[5m]))` | - -### Fluentd Input Rate - -| Catalog | Expression | -| --- | --- | -| Detail | `sum(rate(fluentd_input_status_num_records_total[5m])) by (instance)` | -| Summary | `sum(rate(fluentd_input_status_num_records_total[5m]))` | - -### Fluentd Output Errors Rate - -| Catalog | Expression | -| --- | --- | -| Detail | `sum(rate(fluentd_output_status_num_errors[5m])) by (type)` | -| Summary | `sum(rate(fluentd_output_status_num_errors[5m]))` | - -### Fluentd Output Rate - -| Catalog | Expression | -| --- | --- | -| Detail | `sum(rate(fluentd_output_status_num_records_total[5m])) by (instance)` | -| Summary | `sum(rate(fluentd_output_status_num_records_total[5m]))` | - -# Workload Metrics - -### Workload CPU Utilization - -| Catalog | Expression | -| --- | --- | -| Detail |
cfs throttled seconds`sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
user seconds`sum(rate(container_cpu_user_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
system seconds`sum(rate(container_cpu_system_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
usage seconds`sum(rate(container_cpu_usage_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
| -| Summary |
cfs throttled seconds`sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
user seconds`sum(rate(container_cpu_user_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
system seconds`sum(rate(container_cpu_system_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
usage seconds`sum(rate(container_cpu_usage_seconds_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
| - -### Workload Memory Utilization - -| Catalog | Expression | -| --- | --- | -| Detail | `sum(container_memory_working_set_bytes{namespace="$namespace",pod_name=~"$podName", container_name!=""}) by (pod_name)` | -| Summary | `sum(container_memory_working_set_bytes{namespace="$namespace",pod_name=~"$podName", container_name!=""})` | - -### Workload Network Packets - -| Catalog | Expression | -| --- | --- | -| Detail |
receive-packets`sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
receive-dropped`sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
receive-errors`sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
transmit-packets`sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
transmit-dropped`sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
transmit-errors`sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
| -| Summary |
receive-packets`sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
receive-dropped`sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
receive-errors`sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
transmit-packets`sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
transmit-dropped`sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
transmit-errors`sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
| - -### Workload Network I/O - -| Catalog | Expression | -| --- | --- | -| Detail |
receive`sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
transmit`sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
| -| Summary |
receive`sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
transmit`sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
| - -### Workload Disk I/O - -| Catalog | Expression | -| --- | --- | -| Detail |
read`sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
write`sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)`
| -| Summary |
read`sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
write`sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m]))`
| - -# Pod Metrics - -### Pod CPU Utilization - -| Catalog | Expression | -| --- | --- | -| Detail |
cfs throttled seconds`sum(rate(container_cpu_cfs_throttled_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`
usage seconds`sum(rate(container_cpu_usage_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`
system seconds`sum(rate(container_cpu_system_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`
user seconds`sum(rate(container_cpu_user_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m])) by (container_name)`
| -| Summary |
cfs throttled seconds`sum(rate(container_cpu_cfs_throttled_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`
usage seconds`sum(rate(container_cpu_usage_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`
system seconds`sum(rate(container_cpu_system_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`
user seconds`sum(rate(container_cpu_user_seconds_total{container_name!="POD",namespace="$namespace",pod_name="$podName", container_name!=""}[5m]))`
| - -### Pod Memory Utilization - -| Catalog | Expression | -| --- | --- | -| Detail | `sum(container_memory_working_set_bytes{container_name!="POD",namespace="$namespace",pod_name="$podName",container_name!=""}) by (container_name)` | -| Summary | `sum(container_memory_working_set_bytes{container_name!="POD",namespace="$namespace",pod_name="$podName",container_name!=""})` | - -### Pod Network Packets - -| Catalog | Expression | -| --- | --- | -| Detail |
receive-packets`sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
receive-dropped`sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
receive-errors`sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-packets`sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-dropped`sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-errors`sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| -| Summary |
receive-packets`sum(rate(container_network_receive_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
receive-dropped`sum(rate(container_network_receive_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
receive-errors`sum(rate(container_network_receive_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-packets`sum(rate(container_network_transmit_packets_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-dropped`sum(rate(container_network_transmit_packets_dropped_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit-errors`sum(rate(container_network_transmit_errors_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| - -### Pod Network I/O - -| Catalog | Expression | -| --- | --- | -| Detail |
receive`sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit`sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| -| Summary |
receive`sum(rate(container_network_receive_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
transmit`sum(rate(container_network_transmit_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| - -### Pod Disk I/O - -| Catalog | Expression | -| --- | --- | -| Detail |
read`sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m])) by (container_name)`
write`sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m])) by (container_name)`
| -| Summary |
read`sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
write`sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name="$podName",container_name!=""}[5m]))`
| - -# Container Metrics - -### Container CPU Utilization - -| Catalog | Expression | -| --- | --- | -| cfs throttled seconds | `sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | -| usage seconds | `sum(rate(container_cpu_usage_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | -| system seconds | `sum(rate(container_cpu_system_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | -| user seconds | `sum(rate(container_cpu_user_seconds_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | - -### Container Memory Utilization - -`sum(container_memory_working_set_bytes{namespace="$namespace",pod_name="$podName",container_name="$containerName"})` - -### Container Disk I/O - -| Catalog | Expression | -| --- | --- | -| read | `sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | -| write | `sum(rate(container_fs_writes_bytes_total{namespace="$namespace",pod_name="$podName",container_name="$containerName"}[5m]))` | diff --git a/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md index b5396fa7e2b..c04cbfcf5eb 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/imported-clusters/_index.md @@ -41,7 +41,7 @@ When you delete an EKS cluster that was created in Rancher, the cluster is destr After importing a cluster, the cluster owner can: - [Manage cluster access]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/) through role-based access control -- Enable [monitoring]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and [logging]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/) +- Enable 
[monitoring]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) and [logging]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/) - Enable [Istio]({{}}/rancher/v2.x/en/cluster-admin/tools/istio/) - Use [pipelines]({{}}/rancher/v2.x/en/project-admin/pipelines/) - Configure [alerts]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/) and [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md index 20922c7d3b2..ea103035b2e 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/_index.md @@ -361,7 +361,7 @@ See [Docker Root Directory](#docker-root-directory). ### enable_cluster_monitoring -Option to enable or disable [Cluster Monitoring]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/). +Option to enable or disable [Cluster Monitoring]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/). 
### enable_network_policy diff --git a/content/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps/_index.md b/content/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps/_index.md index 1199650261a..949d2964fdf 100644 --- a/content/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps/_index.md +++ b/content/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps/_index.md @@ -20,7 +20,7 @@ After creating a multi-cluster application, you can program a [Global DNS entry] - [Roles](#roles) - [Application configuration options](#application-configuration-options) - [Using a questions.yml file](#using-a-questions-yml-file) - - [Key value pairs for native Helm charts](key-value-pairs-for-native-helm-charts) + - [Key value pairs for native Helm charts](#key-value-pairs-for-native-helm-charts) - [Members](#members) - [Overriding application configuration options for specific projects](#overriding-application-configuration-options-for-specific-projects) - [Upgrading multi-cluster app roles and projects](#upgrading-multi-cluster-app-roles-and-projects) diff --git a/content/rancher/v2.x/en/istio/legacy/_index.md b/content/rancher/v2.x/en/istio/legacy/_index.md index 9df562214ff..f7ae7ddcfdb 100644 --- a/content/rancher/v2.x/en/istio/legacy/_index.md +++ b/content/rancher/v2.x/en/istio/legacy/_index.md @@ -10,11 +10,11 @@ aliases: --- _Available as of v2.3.0_ -> In Rancher 2.5, the Istio application was improved. There are now two ways to enable Istio. The older way is documented in this section, and the new application for Istio is documented in the [dashboard section.]({{}}/rancher/v2.x/en/dashboard/istio) +> In Rancher 2.5, the Istio application was improved. There are now two ways to enable Istio. 
The older way is documented in this section, and the new application for Istio is documented [here.]({{}}/rancher/v2.x/en/istio) - [Istio](https://istio.io/) is an open-source tool that makes it easier for DevOps teams to observe, control, troubleshoot, and secure the traffic within a complex network of microservices. +[Istio](https://istio.io/) is an open-source tool that makes it easier for DevOps teams to observe, control, troubleshoot, and secure the traffic within a complex network of microservices. - As a network of microservices changes and grows, the interactions between them can become more difficult to manage and understand. In such a situation, it is useful to have a service mesh as a separate infrastructure layer. Istio's service mesh lets you manipulate traffic between microservices without changing the microservices directly. +As a network of microservices changes and grows, the interactions between them can become more difficult to manage and understand. In such a situation, it is useful to have a service mesh as a separate infrastructure layer. Istio's service mesh lets you manipulate traffic between microservices without changing the microservices directly. Our integration of Istio is designed so that a Rancher operator, such as an administrator or cluster owner, can deliver Istio to developers. Then developers can use Istio to enforce security policies, troubleshoot problems, or manage traffic for green/blue deployments, canary deployments, or A/B testing. 
diff --git a/content/rancher/v2.x/en/istio/release-notes/_index.md b/content/rancher/v2.x/en/istio/release-notes/_index.md index 52962b8596c..2cf7c2ca9a0 100644 --- a/content/rancher/v2.x/en/istio/release-notes/_index.md +++ b/content/rancher/v2.x/en/istio/release-notes/_index.md @@ -4,6 +4,16 @@ aliases: - /rancher/v2.x/en/cluster-admin/tools/istio/release-notes --- +# Istio 1.5.9 + +**Bug Fixes** + +* The Kiali traffic graph is now working [#28109](https://github.com/rancher/rancher/issues/28109) + +**Known Issues** + +* The Kiali traffic graph is offset in the UI [#28207](https://github.com/rancher/rancher/issues/28207) + # Istio 1.5.8 diff --git a/content/rancher/v2.x/en/logging/legacy/cluster-logging/_index.md b/content/rancher/v2.x/en/logging/legacy/cluster-logging/_index.md index 2fae22bbe8b..3876450ee5e 100644 --- a/content/rancher/v2.x/en/logging/legacy/cluster-logging/_index.md +++ b/content/rancher/v2.x/en/logging/legacy/cluster-logging/_index.md @@ -8,7 +8,7 @@ aliases: - /rancher/v2.x/en/cluster-admin/tools/logging --- -> In Rancher 2.5, the logging application was improved. There are now two ways to enable logging. The older way is documented in this section, and the new application for logging is documented in the [dashboard section.]({{}}/rancher/v2.x/en/dashboard/logging) +> In Rancher 2.5, the logging application was improved. There are now two ways to enable logging.
The older way is documented in this section, and the new application for logging is documented [here.]({{}}/rancher/v2.x/en/logging) Logging is helpful because it allows you to: diff --git a/content/rancher/v2.x/en/monitoring-alerting/_index.md b/content/rancher/v2.x/en/monitoring-alerting/_index.md index 28a2fb9559f..e21f30e9bb5 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/_index.md @@ -48,14 +48,6 @@ In other words, Prometheus lets you view metrics from your different Rancher and By viewing data that Prometheus scrapes from your cluster control plane, nodes, and deployments, you can stay on top of everything happening in your cluster. You can then use these analytics to better run your organization: stop system emergencies before they start, develop maintenance strategies, restore crashed servers, etc. -# Monitoring Scope - -Cluster monitoring allows you to view the health of your Kubernetes cluster. Prometheus collects metrics from the cluster components below, which you can view in graphs and charts. - -- [Kubernetes control plane]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#kubernetes-components-metrics) -- [etcd database]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#etcd-metrics) -- [All nodes (including workers)]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#cluster-metrics) - # Enabling Cluster Monitoring As an [administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Prometheus to monitor your Kubernetes cluster.
diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/_index.md index 98bdc97bdfb..08abf054a10 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/_index.md @@ -5,8 +5,7 @@ aliases: - rancher/v2.x/en/cluster-admin/tools/alerts --- - -> In Rancher 2.5, the monitoring application was improved. There are now two ways to enable monitoring and alerting. The older way is documented in this section, and the new application for monitoring and alerting is documented in the [dashboard section.]({{}}/rancher/v2.x/en/dashboard/monitoring-alerting) +> In Rancher 2.5, the monitoring application was improved. There are now two ways to enable monitoring and alerting. The older way is documented in this section, and the new application for monitoring and alerting is documented [here.]({{}}/rancher/v2.x/en/monitoring-alerting) To keep your clusters and applications healthy and driving your organizational productivity forward, you need to stay informed of events occurring in your clusters and projects, both planned and unplanned. When an event occurs, your alert is triggered, and you are sent a notification. You can then, if necessary, follow up with corrective actions. @@ -38,9 +37,9 @@ Some examples of alert events are: ### Prometheus Queries -> **Prerequisite:** Monitoring must be [enabled]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) before you can trigger alerts with custom Prometheus queries or expressions. +> **Prerequisite:** Monitoring must be [enabled]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring) before you can trigger alerts with custom Prometheus queries or expressions. 
-When you edit an alert rule, you will have the opportunity to configure the alert to be triggered based on a Prometheus expression. For examples of expressions, refer to [this page.]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression) +When you edit an alert rule, you will have the opportunity to configure the alert to be triggered based on a Prometheus expression. For examples of expressions, refer to [this page.]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression) # Urgency Levels @@ -61,7 +60,7 @@ At the cluster level, Rancher monitors components in your Kubernetes cluster, an As a [cluster owner]({{}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to send you alerts for cluster events. ->**Prerequisite:** Before you can receive cluster alerts, you must [add a notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/#adding-notifiers). +>**Prerequisite:** Before you can receive cluster alerts, you must [add a notifier]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/#adding-notifiers). 1. From the **Global** view, navigate to the cluster that you want to configure cluster alerts for. Select **Tools > Alerts**. Then click **Add Alert Group**. diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/default-alerts/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/default-alerts/_index.md index 74053e89580..7242fb7cdc5 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/default-alerts/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/alerts/cluster-alerts/default-alerts/_index.md @@ -7,7 +7,7 @@ aliases: When you create a cluster, some alert rules are predefined. These alerts notify you about signs that the cluster could be unhealthy. 
You can receive these alerts if you configure a [notifier]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers) for them. -Several of the alerts use Prometheus expressions as the metric that triggers the alert. For more information on how expressions work, you can refer to the Rancher [documentation about Prometheus expressions]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/) or the Prometheus [documentation about querying metrics](https://prometheus.io/docs/prometheus/latest/querying/basics/). +Several of the alerts use Prometheus expressions as the metric that triggers the alert. For more information on how expressions work, you can refer to the Rancher [documentation about Prometheus expressions]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/) or the Prometheus [documentation about querying metrics](https://prometheus.io/docs/prometheus/latest/querying/basics/). # Alerts for etcd Etcd is the key-value store that contains the state of the Kubernetes cluster. Rancher provides default alerts if the built-in monitoring detects a potential problem with etcd. You don't have to enable monitoring to receive these alerts. diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/_index.md index b52ec1fe810..e0aef4ad107 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/_index.md @@ -42,9 +42,9 @@ Using Prometheus, you can monitor Rancher at both the cluster level and [project - Cluster monitoring allows you to view the health of your Kubernetes cluster. Prometheus collects metrics from the cluster components below, which you can view in graphs and charts. 
- - [Kubernetes control plane]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#kubernetes-components-metrics) - - [etcd database]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#etcd-metrics) - - [All nodes (including workers)]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#cluster-metrics) + - [Kubernetes control plane]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#kubernetes-components-metrics) + - [etcd database]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#etcd-metrics) + - [All nodes (including workers)]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#cluster-metrics) - [Project monitoring]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/) allows you to view the state of pods running in a given project. Prometheus collects metrics from the project's deployed HTTP and TCP/UDP workloads. @@ -58,11 +58,11 @@ As an [administrator]({{}}/rancher/v2.x/en/admin-settings/rbac/global-p 1. Select **Tools > Monitoring** in the navigation bar. -1. Select **Enable** to show the [Prometheus configuration options]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/). Review the [resource consumption recommendations](#resource-consumption) to ensure you have enough resources for Prometheus and on your worker nodes to enable monitoring. Enter in your desired configuration options. +1. Select **Enable** to show the [Prometheus configuration options]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/prometheus/). Review the [resource consumption recommendations](#resource-consumption) to ensure you have enough resources for Prometheus and on your worker nodes to enable monitoring. Enter in your desired configuration options. 1. Click **Save**. 
-**Result:** The Prometheus server will be deployed as well as two monitoring applications. The two monitoring applications, `cluster-monitoring` and `monitoring-operator`, are added as an [application]({{}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project. After the applications are `active`, you can start viewing [cluster metrics]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/) through the [Rancher dashboard]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/viewing-metrics/#rancher-dashboard) or directly from [Grafana]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#grafana). +**Result:** The Prometheus server will be deployed as well as two monitoring applications. The two monitoring applications, `cluster-monitoring` and `monitoring-operator`, are added as an [application]({{}}/rancher/v2.x/en/catalog/apps/) to the cluster's `system` project. After the applications are `active`, you can start viewing [cluster metrics]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/) through the Rancher dashboard or directly from [Grafana]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#grafana). > The default username and password for the Grafana instance will be `admin/admin`. However, Grafana dashboards are served via the Rancher authentication proxy, so only users who are currently authenticated into the Rancher server have access to the Grafana dashboard. 
diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/_index.md index 4ec27fa97d8..76764df6c0a 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/_index.md @@ -38,7 +38,7 @@ Some of the biggest metrics to look out for: 1. Click on **Node Metrics**. -[_Get expressions for Cluster Metrics_]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#cluster-metrics) +[_Get expressions for Cluster Metrics_]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/#cluster-metrics) ### Etcd Metrics @@ -58,7 +58,7 @@ Some of the biggest metrics to look out for: If this statistic suddenly grows, it usually indicates network communication issues that constantly force the cluster to elect a new leader. -[_Get expressions for Etcd Metrics_]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#etcd-metrics) +[_Get expressions for Etcd Metrics_]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/#etcd-metrics) ### Kubernetes Components Metrics @@ -90,13 +90,13 @@ Some of the more important component metrics to monitor are: How fast ingress is routing connections to your cluster services. 
-[_Get expressions for Kubernetes Component Metrics_]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#kubernetes-components-metrics) +[_Get expressions for Kubernetes Component Metrics_]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/#kubernetes-components-metrics) ## Rancher Logging Metrics Although the Dashboard for a cluster primarily displays data sourced from Prometheus, it also displays information for cluster logging, provided that you have [configured Rancher to use a logging service]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/). -[_Get expressions for Rancher Logging Metrics_]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#rancher-logging-metrics) +[_Get expressions for Rancher Logging Metrics_]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/#rancher-logging-metrics) ## Finding Workload Metrics @@ -113,4 +113,4 @@ Workload metrics display the hardware utilization for a Kubernetes workload. You - **View the Pod Metrics:** Click on **Pod Metrics**. - **View the Container Metrics:** In the **Containers** section, select a specific container and click on its name. Click on **Container Metrics**. 
-[_Get expressions for Workload Metrics_]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/expression/#workload-metrics) +[_Get expressions for Workload Metrics_]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/#workload-metrics) diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/custom-metrics/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/custom-metrics/_index.md index 30cdaadac4c..3ec611813a5 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/custom-metrics/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/custom-metrics/_index.md @@ -4,9 +4,10 @@ weight: 5 aliases: - rancher/v2.x/en/project-admin/tools/monitoring/custom-metrics - rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics + - /rancher/v2.x/en/cluster-admin/tools/monitoring/custom-metrics --- -After you've enabled [cluster level monitoring]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring), You can view the metrics data from Rancher. You can also deploy the Prometheus custom metrics adapter then you can use the HPA with metrics stored in cluster monitoring. +After you've enabled [cluster level monitoring]({{< baseurl >}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring), you can view the metrics data from Rancher. You can also deploy the Prometheus custom metrics adapter so that you can use the HPA with metrics stored in cluster monitoring. ## Deploy Prometheus Custom Metrics Adapter @@ -305,9 +306,7 @@ Each rule can be broken down into roughly four parts: - *Querying*, which specifies how a request for a particular metric on one or more Kubernetes objects should be turned into a query to Prometheus.
-A more comprehensive configuration file can be found in -[sample-config.yaml](sample-config.yaml), but a basic config with one rule -might look like: +A basic config with one rule might look like: ```yaml rules: diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/_index.md index c17d1021670..b413a3f6932 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/expression/_index.md @@ -3,12 +3,12 @@ title: Prometheus Expressions weight: 4 aliases: - rancher/v2.x/en/project-admin/tools/monitoring/expression - - rancher/v2.x/en/cluster-admin/tools/monitoring/expression + - /rancher/v2.x/en/cluster-admin/tools/monitoring/expression --- The PromQL expressions in this doc can be used to configure [alerts.]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/) -> Before expression can be used in alerts, monitoring must be enabled. For more information, refer to the documentation on enabling monitoring [at the cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [at the project level.]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring) +> Before expressions can be used in alerts, monitoring must be enabled.
For more information, refer to the documentation on enabling monitoring [at the cluster level]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring) or [at the project level.]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring) For more information about querying Prometheus, refer to the official [Prometheus documentation.](https://prometheus.io/docs/prometheus/latest/querying/basics/) diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/prometheus/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/prometheus/_index.md index 59a374565d4..d99e02b8069 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/prometheus/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/prometheus/_index.md @@ -3,13 +3,13 @@ title: Prometheus Configuration weight: 1 aliases: - rancher/v2.x/en/project-admin/tools/monitoring/prometheus - - rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus + - /rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/ --- _Available as of v2.2.0_ -While configuring monitoring at either the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [project level]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), there are multiple options that can be configured. +While configuring monitoring at either the [cluster level]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring) or [project level]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), there are multiple options that can be configured. 
Option | Description -------|------------- diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/viewing-metrics/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/viewing-metrics/_index.md index 9775343eab7..b4ceac3bb92 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/viewing-metrics/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/viewing-metrics/_index.md @@ -8,11 +8,11 @@ aliases: _Available as of v2.2.0_ -After you've enabled monitoring at either the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [project level]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), you will want to be start viewing the data being collected. There are multiple ways to view this data. +After you've enabled monitoring at either the [cluster level]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring) or [project level]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), you will want to start viewing the data being collected. There are multiple ways to view this data. ## Rancher Dashboard ->**Note:** This is only available if you've enabled monitoring at the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring). Project specific analytics must be viewed using the project's Grafana instance. +>**Note:** This is only available if you've enabled monitoring at the [cluster level]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring). Project-specific analytics must be viewed using the project's Grafana instance.
Rancher's dashboards are available at multiple locations: @@ -36,7 +36,7 @@ When analyzing these metrics, don't be concerned about any single standalone met ## Grafana -If you've enabled monitoring at either the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [project level]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), Rancher automatically creates a link to Grafana instance. Use this link to view monitoring data. +If you've enabled monitoring at either the [cluster level]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#enabling-cluster-monitoring) or [project level]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), Rancher automatically creates a link to the Grafana instance. Use this link to view monitoring data. Grafana allows you to query, visualize, alert, and ultimately, understand your cluster and workload data. For more information on Grafana and its capabilities, visit the [Grafana website](https://grafana.com/grafana). diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/project-monitoring/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/project-monitoring/_index.md index 770fdbbc59e..a938712446f 100644 --- a/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/project-monitoring/_index.md +++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/project-monitoring/_index.md @@ -9,8 +9,6 @@ _Available as of v2.2.4_ Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with [Prometheus](https://prometheus.io/), a leading open-source monitoring solution.
-> For more information about how Prometheus works, refer to the [cluster administration section.]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#about-prometheus) - This section covers the following topics: - [Monitoring scope](#monitoring-scope) @@ -21,13 +19,13 @@ This section covers the following topics: ### Monitoring Scope -Using Prometheus, you can monitor Rancher at both the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and project level. For each cluster and project that is enabled for monitoring, Rancher deploys a Prometheus server. +Using Prometheus, you can monitor Rancher at both the [cluster level]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) and project level. For each cluster and project that is enabled for monitoring, Rancher deploys a Prometheus server. -- [Cluster monitoring]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) allows you to view the health of your Kubernetes cluster. Prometheus collects metrics from the cluster components below, which you can view in graphs and charts. +- [Cluster monitoring]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) allows you to view the health of your Kubernetes cluster. Prometheus collects metrics from the cluster components below, which you can view in graphs and charts. 
- - [Kubernetes control plane]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#kubernetes-components-metrics) - - [etcd database]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#etcd-metrics) - - [All nodes (including workers)]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#cluster-metrics) + - [Kubernetes control plane]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#kubernetes-components-metrics) + - [etcd database]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#etcd-metrics) + - [All nodes (including workers)]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#cluster-metrics) - Project monitoring allows you to view the state of pods running in a given project. Prometheus collects metrics from the project's deployed HTTP and TCP/UDP workloads. @@ -37,13 +35,13 @@ Only [administrators]({{}}/rancher/v2.x/en/admin-settings/rbac/global-p ### Enabling Project Monitoring -> **Prerequisite:** Cluster monitoring must be [enabled.]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) +> **Prerequisite:** Cluster monitoring must be [enabled.]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) 1. Go to the project where monitoring should be enabled. Note: When cluster monitoring is enabled, monitoring is also enabled by default in the **System** project. 1. Select **Tools > Monitoring** in the navigation bar. -1. Select **Enable** to show the [Prometheus configuration options]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/). Enter in your desired configuration options. +1. Select **Enable** to show the [Prometheus configuration options]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/prometheus/). Enter in your desired configuration options. 1. Click **Save**. 
@@ -55,13 +53,13 @@ Prometheus|750m| 750Mi | 1000m | 1000Mi | Yes Grafana | 100m | 100Mi | 200m | 200Mi | No -**Result:** A single application,`project-monitoring`, is added as an [application]({{}}/rancher/v2.x/en/catalog/apps/) to the project. After the application is `active`, you can start viewing [project metrics](#project-metrics) through the [Rancher dashboard]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#rancher-dashboard) or directly from [Grafana]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#grafana). +**Result:** A single application, `project-monitoring`, is added as an [application]({{}}/rancher/v2.x/en/catalog/apps/) to the project. After the application is `active`, you can start viewing [project metrics](#project-metrics) through the [Rancher dashboard]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#rancher-dashboard) or directly from [Grafana]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/#grafana). > The default username and password for the Grafana instance will be `admin/admin`. However, Grafana dashboards are served via the Rancher authentication proxy, so only users who are currently authenticated into the Rancher server have access to the Grafana dashboard.
 ### Project Metrics
 
-[Workload metrics]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/cluster-metrics/#workload-metrics) are available for the project if monitoring is enabled at the [cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) and at the [project level.](#enabling-project-monitoring)
+[Workload metrics]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/cluster-metrics/#workload-metrics) are available for the project if monitoring is enabled at the [cluster level]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/monitoring/cluster-monitoring/) and at the [project level.](#enabling-project-monitoring)
 
 You can monitor custom metrics from any [exporters.](https://prometheus.io/docs/instrumenting/exporters/) You can also expose some custom endpoints on deployments without needing to configure Prometheus for your project.
diff --git a/content/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/_index.md b/content/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/_index.md
index 5f777df874e..9292f5fe087 100644
--- a/content/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/_index.md
+++ b/content/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/_index.md
@@ -4,9 +4,10 @@ weight: 1
 aliases:
   - rancher/v2.x/en/project-admin/tools/notifiers
   - rancher/v2.x/en/cluster-admin/tools/notifiers
+  - /rancher/v2.x/en/cluster-admin/tools/notifiers
 ---
 
-> In Rancher 2.5, the notifier application was improved. There are now two ways to enable notifiers. The older way is documented in this section, and the new application for notifiers is documented in the [dashboard section.]({{}}/rancher/v2.x/en/dashboard/notifiers)
+> In Rancher 2.5, the notifier application was improved. There are now two ways to enable notifiers. The older way is documented in this section, and the new application for notifiers is documented [here.]({{}}/rancher/v2.x/en/monitoring-alerting)
 
 Notifiers are services that inform you of alert events. You can configure notifiers to send alert notifications to staff best suited to take corrective action.
diff --git a/content/rancher/v2.x/en/opa-gatekeper/_index.md b/content/rancher/v2.x/en/opa-gatekeper/_index.md
index f73ef16be98..3b2b2ea095a 100644
--- a/content/rancher/v2.x/en/opa-gatekeper/_index.md
+++ b/content/rancher/v2.x/en/opa-gatekeper/_index.md
@@ -3,7 +3,7 @@ title: OPA Gatekeeper
 weight: 17
 aliases:
   - /rancher/v2.x/en/cluster-admin/tools/opa-gatekeeper
-
+  - /rancher/v2.x/en/opa-gatekeper/Open%20Policy%20Agent
 ---
 
 _Available as of v2.4.0_
diff --git a/content/rancher/v2.x/en/overview/_index.md b/content/rancher/v2.x/en/overview/_index.md
index 1a97e92bffe..e25fe01072d 100644
--- a/content/rancher/v2.x/en/overview/_index.md
+++ b/content/rancher/v2.x/en/overview/_index.md
@@ -48,9 +48,9 @@ The Rancher API server is built on top of an embedded Kubernetes API server and
 
 ### Cluster Visibility
 
-- **Logging:** Rancher can integrate with a variety of popular logging services and tools that exist outside of your Kubernetes clusters. Logging can be set up [at the cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/logging/) or [at the project level.]({{}}/rancher/v2.x/en/project-admin/tools/logging/)
-- **Monitoring:** Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with Prometheus, a leading open-source monitoring solution. Monitoring can be configured [at the cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/monitoring/) or [at the project level.]({{}}/rancher/v2.x/en/project-admin/tools/monitoring/)
-- **Alerting:** To keep your clusters and applications healthy and driving your organizational productivity forward, you need to stay informed of events occurring in your clusters and projects, both planned and unplanned. To help you stay informed of these events, you can configure alerts [at the cluster level]({{}}/rancher/v2.x/en/cluster-admin/tools/alerts/) or [at the project level.]({{}}/rancher/v2.x/en/project-admin/tools/alerts/)
+- **Logging:** Rancher can integrate with a variety of popular logging services and tools that exist outside of your Kubernetes clusters.
+- **Monitoring:** Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with Prometheus, a leading open-source monitoring solution.
+- **Alerting:** To keep your clusters and applications healthy and driving your organizational productivity forward, you need to stay informed of events occurring in your clusters and projects, both planned and unplanned.
 
 # Editing Downstream Clusters with Rancher
diff --git a/content/rancher/v2.x/en/pipelines/config/_index.md b/content/rancher/v2.x/en/pipelines/config/_index.md
index 3d9d48d69c1..210e4c41b13 100644
--- a/content/rancher/v2.x/en/pipelines/config/_index.md
+++ b/content/rancher/v2.x/en/pipelines/config/_index.md
@@ -309,7 +309,7 @@ timeout: 30
 
 # Notifications
 
-You can enable notifications to any [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) based on the build status of a pipeline. Before enabling notifications, Rancher recommends [setting up notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/#adding-notifiers) so it will be easy to add recipients immediately.
+You can enable notifications to any [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/) based on the build status of a pipeline. Before enabling notifications, Rancher recommends [setting up notifiers]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/#adding-notifiers) so it will be easy to add recipients immediately.
 
 ### Configuring Notifications by UI
@@ -319,7 +319,7 @@ _Available as of v2.2.0_
 1. Select the conditions for the notification.
    You can select to get a notification for the following statuses: `Failed`, `Success`, `Changed`. For example, if you want to receive notifications when an execution fails, select **Failed**.
-1. If you don't have any existing [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers), Rancher will provide a warning that no notifiers are set up and provide a link to be able to go to the notifiers page. Follow the [instructions]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers/#adding-notifiers) to add a notifier. If you already have notifiers, you can add them to the notification by clicking the **Add Recipient** button.
+1. If you don't have any existing [notifiers]({{}}/rancher/v2.x/en/cluster-admin/tools/notifiers), Rancher will provide a warning that no notifiers are set up and provide a link to be able to go to the notifiers page. Follow the [instructions]({{}}/rancher/v2.x/en/monitoring-alerting/legacy/notifiers/#adding-notifiers) to add a notifier. If you already have notifiers, you can add them to the notification by clicking the **Add Recipient** button.
 
 > **Note:** Notifiers are configured at a cluster level and require a different level of permissions.
diff --git a/content/rancher/v2.x/en/quick-start-guide/cli/_index.md b/content/rancher/v2.x/en/quick-start-guide/cli/_index.md
index 1a15b4d409a..cd6784d1b23 100644
--- a/content/rancher/v2.x/en/quick-start-guide/cli/_index.md
+++ b/content/rancher/v2.x/en/quick-start-guide/cli/_index.md
@@ -26,7 +26,7 @@ _**Available as of v2.4.6**_
 
 _Requirements_
 
-If admins have [enforced TTL on kubeconfig tokens](../../api/api-tokens/#setting-ttl-on-kubeconfig-tokens), the kubeconfig file requires the [Rancher cli](../cli) to be present in your PATH when you run `kubectl`. Otherwise, you’ll see error like:
+If admins have [enforced TTL on kubeconfig tokens]({{}}/rancher/v2.x/en/api/api-tokens/#setting-ttl-on-kubeconfig-tokens), the kubeconfig file requires the [Rancher cli](../cli) to be present in your PATH when you run `kubectl`. Otherwise, you’ll see an error like:
 
 `Unable to connect to the server: getting credentials: exec: exec: "rancher": executable file not found in $PATH`.
 
 This feature enables kubectl to authenticate with the Rancher server and get a new kubeconfig token when required. The following auth providers are currently supported: