Mirror of https://github.com/rancher/rancher-docs.git, synced 2026-05-17 18:37:03 +00:00

Merge pull request #3337 from btat/2.6-prep-old-2.5-refs — 2.6 prep: remove <=2.5 refs
---
title: Backups and Disaster Recovery
weight: 5
aliases:
- /rancher/v2.6/en/backups/v2.5
- /rancher/v2.6/en/backups/v2.6
---

In this section, you'll learn how to create backups of Rancher, how to restore Rancher from backup, and how to migrate Rancher to a new Kubernetes cluster.

The `rancher-backup` operator is used to back up and restore Rancher on any Kubernetes cluster. This application is a Helm chart, and it can be deployed through the Rancher **Apps & Marketplace** page, or by using the Helm CLI. The `rancher-backup` Helm chart is [here.](https://github.com/rancher/charts/tree/main/charts/rancher-backup)

The backup-restore operator needs to be installed in the local cluster, and it only backs up the Rancher application. The backup and restore operations are performed only in the local Kubernetes cluster.
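Once the operator is installed, a backup is requested declaratively through a Backup custom resource. The following is a hedged sketch, not an authoritative spec: the resource name and schedule are illustrative, and `rancher-resource-set` is assumed to be the resource set installed by the chart.

```yaml
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: nightly-rancher-backup         # illustrative name
spec:
  resourceSetName: rancher-resource-set  # resource set assumed from the chart defaults
  schedule: "@every 24h"                 # optional: make the backup recurring
  retentionCount: 10                     # keep the ten most recent backup files
```

Applying a resource like this in the local cluster (for example with `kubectl apply`) prompts the operator to gather the Rancher resources and write them to the configured storage location.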

- [Backup and Restore for Rancher installed with Docker](#backup-and-restore-for-rancher-installed-with-docker)
- [How Backups and Restores Work](#how-backups-and-restores-work)
- [Installing the rancher-backup Operator](#installing-the-rancher-backup-operator)
- [Installing rancher-backup with the Rancher UI](#installing-rancher-backup-with-the-rancher-ui)
- [Default Storage Location Configuration](#default-storage-location-configuration)
- [Example values.yaml for the rancher-backup Helm Chart](#example-values-yaml-for-the-rancher-backup-helm-chart)

# Backup and Restore for Rancher installed with Docker

For Rancher installed with Docker, refer to [this page](./docker-installs/docker-backups) to perform backups and [this page](./docker-installs/docker-restores) to perform restores.
# How Backups and Restores Work

---
title: Logging Best Practices
weight: 1
aliases:
- /rancher/v2.6/en/best-practices/v2.5/rancher-managed/logging
- /rancher/v2.6/en/best-practices/v2.6/rancher-managed/logging
---

In this guide, we recommend best practices for cluster-level logging and application logging.

- [Cluster-level Logging](#cluster-level-logging)
- [Application Logging](#application-logging)
- [General Best Practices](#general-best-practices)

Rancher provides a flexible experience for log aggregation. With the logging feature, administrators and users alike can deploy logging that meets fine-grained collection criteria while offering a wider array of destinations and configuration options.

Under the hood, Rancher logging uses the Banzai Cloud logging operator. Rancher provides manageability of this operator and its resources, and ties that experience in with managing your Rancher clusters.

_ClusterFlows_ have the ability to collect logs from all containers on all hosts in the Kubernetes cluster. This works well in cases where those containers are part of a Kubernetes pod; however, RKE containers exist outside of the scope of Kubernetes.

Currently, the logs from RKE containers are collected, but they cannot easily be filtered. This is because those logs do not contain information about the source container (e.g. `etcd` or `kube-apiserver`).

A future release of Rancher will include the source container name, which will enable filtering of these component logs. Once that change is made, you will be able to customize a _ClusterFlow_ to retrieve **only** the Kubernetes component logs, and direct them to an appropriate output.

- Try to provide the name of the application that is creating the log entry, in the entry itself. This can make troubleshooting easier, as Kubernetes objects do not always carry the name of the application as the object name. For instance, a pod ID may be something like `myapp-098kjhsdf098sdf98`, which does not provide much information about the application running inside the container.
- Except in the case of collecting all logs cluster-wide, try to scope your _Flow_ and _ClusterFlow_ objects tightly. This makes it easier to troubleshoot when problems arise, and also helps ensure unrelated log entries do not show up in your aggregator. An example of tight scoping would be to constrain a _Flow_ to a single _Deployment_ in a namespace, or perhaps even a single container within a _Pod_.
- Keep the log verbosity down except when troubleshooting. High log verbosity poses a number of issues, chief among them being **noise**: significant events can be drowned out in a sea of `DEBUG` messages. This is somewhat mitigated with automated alerting and scripting, but highly verbose logging still places an inordinate amount of stress on the logging infrastructure.
- Where possible, try to provide a transaction or request ID with the log entry. This can make tracing application activity across multiple log sources easier, especially when dealing with distributed applications.

aliases:
- /rancher/v2.6/en/cis-scans/v2.6
---

Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. The CIS scans can run on any Kubernetes cluster, including hosted Kubernetes providers such as EKS, AKS, and GKE.

The `rancher-cis-benchmark` app leverages <a href="https://github.com/aquasecurity/kube-bench" target="_blank">kube-bench,</a> an open-source tool from Aqua Security, to check clusters for CIS Kubernetes Benchmark compliance. To generate a cluster-wide report, the application also utilizes <a href="https://github.com/vmware-tanzu/sonobuoy" target="_blank">Sonobuoy</a> for report aggregation.

- [About the CIS Benchmark](#about-the-cis-benchmark)
- [About the Generated Report](#about-the-generated-report)
- [Test Profiles](#test-profiles)
- [Configuring Alerts for a Periodic Scan on a Schedule](#configuring-alerts-for-a-periodic-scan-on-a-schedule)
- [Creating a Custom Benchmark Version for Running a Cluster Scan](#creating-a-custom-benchmark-version-for-running-a-cluster-scan)

# About the CIS Benchmark

Each scan generates a report that can be viewed in the Rancher UI and can be downloaded in CSV format.

By default, the CIS Benchmark v1.6 is used.

The Benchmark version is included in the generated report.

In order to pass the "Hardened" profile, you will need to follow the steps on the <a href="{{<baseurl>}}/rancher/v2.6/en/security/#rancher-hardening-guide" target="_blank">hardening guide</a> and use the `cluster.yml` defined in the hardening guide to provision a hardened cluster.

The default profile and the supported CIS Benchmark version depend on the type of cluster that will be scanned:

The `rancher-cis-benchmark` application supports the CIS 1.6 Benchmark version.

- For RKE Kubernetes clusters, the RKE Permissive 1.6 profile is the default.
- EKS and GKE have their own CIS Benchmarks published by `kube-bench`. The corresponding test profiles are used by default for those clusters.
- For RKE2 Kubernetes clusters, the RKE2 Permissive 1.5 profile is the default.
- For cluster types other than RKE, RKE2, EKS and GKE, the Generic CIS 1.5 profile will be used by default.

# About Skipped and Not Applicable Tests

For a list of skipped and not applicable tests, refer to <a href="{{<baseurl>}}/rancher/v2.6/en/cis-scans/skipped-tests" target="_blank">this page.</a>

**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears.

### Running a Scan Periodically on a Schedule

To run a ClusterScan on a schedule,

You can download the report from the Scans list or from the scan detail page.

### Enabling Alerting for rancher-cis-benchmark

Alerts can be configured to be sent out for a scan that runs on a schedule.
```

### Configuring Alerts for a Periodic Scan on a Schedule

It is possible to run a ClusterScan on a schedule.

A scheduled scan can also specify whether you should receive alerts when the scan completes.
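Putting the two together, a scheduled scan with alerts can be expressed as a ClusterScan custom resource. This is a sketch under assumptions: the profile name, cron expression, and retention value below are illustrative rather than recommended values.

```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: nightly-cis-scan            # illustrative name
spec:
  scanProfileName: rke-profile-permissive-1.6  # profile assumed for an RKE cluster
  scheduledScanConfig:
    cronSchedule: "0 0 * * *"       # run at midnight every day
    retentionCount: 24              # number of reports to keep
    scanAlertRule:
      alertOnComplete: true         # alert when a scan finishes
      alertOnFailure: true          # alert when any test fails
```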

A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears.

### Creating a Custom Benchmark Version for Running a Cluster Scan

Some Kubernetes cluster setups may require custom configurations of the Benchmark tests. For example, the path to the Kubernetes config files or certs might be different than the standard location where the upstream CIS Benchmarks look for them.

aliases:
- /rancher/v2.6/en/cis-scans/v2.5/custom-benchmark
---

Each Benchmark Version defines a set of test configuration files that specify the CIS tests to be run by the <a href="https://github.com/aquasecurity/kube-bench" target="_blank">kube-bench</a> tool.
The `rancher-cis-benchmark` application installs a few default Benchmark Versions, which are listed under the CIS Benchmark application menu.

1. Choose the new cluster scan profile `foo-profile`.
1. Click **Create.**

**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears.

_Mutable: yes_

The node operating system image. For more information on the node image options that GKE offers for each OS, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#available_node_images)

> Note: the default option is "Container-Optimized OS with Docker". The read-only filesystem on GCP's Container-Optimized OS is not compatible with the [legacy logging]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/tools/logging) implementation in Rancher. If you need to use the legacy logging feature, select "Ubuntu with Docker" or "Ubuntu with Containerd". The [current logging feature]({{<baseurl>}}/rancher/v2.6/en/logging) is compatible with the Container-Optimized OS image.

> Note: if selecting "Windows Long Term Service Channel" or "Windows Semi-Annual Channel" for the node pool image type, you must also add at least one Container-Optimized OS or Ubuntu node pool.

aliases:
- /rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/gke/private-clusters
---

In GKE, [private clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept) are clusters whose nodes are isolated from inbound and outbound traffic by assigning them internal IP addresses only. Private clusters in GKE have the option of exposing the control plane endpoint as a publicly accessible address or as a private address. This is different from other Kubernetes providers, which may refer to clusters with private control plane endpoints as "private clusters" but still allow traffic to and from the nodes. You may want to create a cluster with private nodes, with or without a public control plane endpoint, depending on your organization's networking and security requirements. A GKE cluster provisioned from Rancher can use isolated nodes by selecting "Private Cluster" in the Cluster Options (under "Show advanced options"). The control plane endpoint can optionally be made private by selecting "Enable Private Endpoint".

### Private Nodes

If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication.

Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.
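Project network isolation works by generating Kubernetes NetworkPolicy resources that the CNI plugin enforces. As a rough sketch of the kind of policy involved (the namespace, policy name, and project label value here are illustrative assumptions, not Rancher's exact generated output), a namespace in an isolated project only accepts ingress from namespaces in the same project:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: project-isolation          # illustrative name
  namespace: my-app                # a namespace belonging to the project
spec:
  podSelector: {}                  # apply to every pod in the namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          field.cattle.io/projectId: p-abc123  # namespaces in the same project
```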
### Nginx Ingress

Clusters that were created before Kubernetes 1.16 will have an `ingress-nginx` `updateStrategy` of `OnDelete`. Clusters that were created with Kubernetes 1.16 or newer will have `RollingUpdate`.

If the `updateStrategy` of `ingress-nginx` is `OnDelete`, you will need to delete these pods to get the correct version for your deployment.

2. Choose **Register**.
3. Enter a **Cluster Name**.
4. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
5. Use **Agent Environment Variables** under **Cluster Options** to set environment variables for the [Rancher cluster agent]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/rancher-agents/). The environment variables can be set using key-value pairs. If the Rancher agent requires use of a proxy to communicate with the Rancher server, the `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` environment variables can be set using agent environment variables.
6. Click **Create**.
7. The prerequisite for `cluster-admin` privileges is shown (see **Prerequisites** above), including an example command to fulfill the prerequisite.
8. Copy the `kubectl` command to your clipboard and run it on a node where kubeconfig is configured to point to the cluster you want to import. If you are unsure it is configured correctly, run `kubectl get nodes` to verify before running the command shown in Rancher.
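The proxy-related variables mentioned in step 5 are entered as plain key-value pairs; an illustrative set (the proxy hostname and port are assumptions for the example):

```
HTTP_PROXY=http://proxy.example.com:8888
HTTPS_PROXY=http://proxy.example.com:8888
NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
```

The `NO_PROXY` list should cover the in-cluster service and pod ranges so that cluster-internal traffic does not traverse the proxy.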

# Out-of-tree Cloud Provider

To set up the out-of-tree vSphere cloud provider, you will need to install Helm charts from the Rancher marketplace. For details, refer to [this page.](./out-of-tree)

---
title: How to Configure Out-of-tree vSphere Cloud Provider
shortTitle: Out-of-tree Cloud Provider
weight: 10
---

Kubernetes is moving away from maintaining cloud providers in-tree. vSphere has an out-of-tree cloud provider that can be used by installing the vSphere cloud provider and cloud storage plugins.

---
title: Migrating vSphere In-tree Volumes to CSI
weight: 5
---

Kubernetes is moving away from maintaining cloud providers in-tree. vSphere has an out-of-tree cloud provider that can be used by installing the vSphere cloud provider and cloud storage plugins.

This page covers how to migrate from the in-tree vSphere cloud provider to out-of-tree, and how to manage the existing VMs post-migration.

aliases:
- /rancher/v2.6/en/cluster-provisionin/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/enabling-uuids
---

- [Account Access](#account-access)
- [Scheduling](#scheduling)
- [Instance Options](#instance-options)

To make use of cloud-init initialization, create a cloud config file using valid YAML syntax and paste the file content in the **Cloud Init** field. Refer to the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/topics/examples.html) for a commented set of examples of supported cloud config directives.

Note that cloud-init is not supported when using the ISO creation method.
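For example, a minimal cloud config that adds a user and installs a package on first boot might look like the following (the user name, key, and package are illustrative, not defaults):

```yaml
#cloud-config
# Illustrative sketch only: create a sudo-capable user and install ntp on first boot.
users:
  - name: demo
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... user@example.com
packages:
  - ntp
```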

### Agent Environment Variables

Option to set environment variables for [rancher agents]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/rancher-agents/). The environment variables can be set using key-value pairs. If the Rancher agent requires use of a proxy to communicate with the Rancher server, the `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` environment variables can be set using agent environment variables.



### Config File Structure in Rancher

RKE (Rancher Kubernetes Engine) is the tool that Rancher uses to provision Kubernetes clusters. Rancher's cluster config files used to have the same structure as [RKE config files,]({{<baseurl>}}/rke/latest/en/example-yamls/) but the structure changed so that in Rancher, RKE cluster config items are separated from non-RKE config items. Therefore, configuration for your cluster needs to be nested under the `rancher_kubernetes_engine_config` directive in the cluster config file. Cluster config files created with earlier versions of Rancher will need to be updated for this format. An example cluster config file is included below.
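As a sketch of the nesting described above (the specific option values are illustrative, not recommendations), Rancher-level settings stay at the top level while RKE settings move under `rancher_kubernetes_engine_config`:

```yaml
# Rancher-level options remain at the top level of the cluster config.
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
# RKE options are nested under this directive.
rancher_kubernetes_engine_config:
  network:
    plugin: canal          # illustrative choice of network plugin
  services:
    etcd:
      snapshot: true       # illustrative etcd snapshot setting
```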

Option to enable or disable Project Network Isolation.

Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.

### local_cluster_auth_endpoint

### Scheduling rules

The `cattle-cluster-agent` uses a fixed set of tolerations (listed below, if no controlplane nodes are visible in the cluster) or dynamically added tolerations based on taints applied to the controlplane nodes. This structure allows [Taint based Evictions](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions) to work properly for `cattle-cluster-agent`. The default tolerations are described below. If controlplane nodes are present in the cluster, the tolerations will be replaced with tolerations matching the taints on the controlplane nodes.

| Component | nodeAffinity nodeSelectorTerms | nodeSelector | Tolerations |
| ---------------------- | ------------------------------------------ | ------------ | ------------------------------------------------------------------------------ |
| 100 | `node-role.kubernetes.io/master:In:"true"` |
| 1 | `cattle.io/cluster-agent:In:"true"` |
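In Deployment terms, the fixed tolerations described above take roughly the following shape. The keys shown are an assumption for illustration, not an authoritative list of the agent's defaults:

```yaml
# Sketch of controlplane-style tolerations on the cattle-cluster-agent pod spec.
tolerations:
- key: node-role.kubernetes.io/controlplane   # assumed default key
  value: "true"
  effect: NoSchedule
- key: node-role.kubernetes.io/control-plane  # assumed newer-style key
  operator: Exists
  effect: NoSchedule
- key: node-role.kubernetes.io/master         # assumed legacy key
  operator: Exists
  effect: NoSchedule
```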

Windows clusters do not share the same feature support as Linux clusters.

The following chart describes the feature parity between Windows and Linux on Rancher:

**Component** | **Linux** | **Windows**
--- | --- | ---
GKE Operator | Not Supported | Not Supported
Alerting v1 | Supported | Supported
Monitoring v1 | Supported | Supported
Logging v1 | Supported | Supported
Monitoring/Alerting v2 | Supported | Supported
Logging v2 | Supported | Supported
Istio | Supported | Not Supported
Catalog v1 | Supported | Not Supported
Catalog v2 | Supported | Not Supported

---
title: Fleet - GitOps at Scale
weight: 1
---

Fleet is GitOps at scale. Fleet is designed to manage up to a million clusters. It's also lightweight enough that it works great for a [single cluster](https://fleet.rancher.io/single-cluster-install/) too, but it really shines when you get to a [large scale.](https://fleet.rancher.io/multi-cluster-install/) By large scale, we mean either a lot of clusters, a lot of deployments, or a lot of teams in a single organization.

Fleet is a separate project from Rancher, and can be installed on any Kubernetes cluster with Helm.
@@ -21,12 +19,10 @@ For information about how Fleet works, see [this page.](./architecture)
|
||||
|
||||
# Accessing Fleet in the Rancher UI
|
||||
|
||||
Fleet comes preinstalled in Rancher v2.5. To access it, go to the **Cluster Explorer** in the Rancher UI. In the top left dropdown menu, click **Cluster Explorer > Continuous Delivery.** On this page, you can edit Kubernetes resources and cluster groups managed by Fleet.
|
||||
Fleet comes preinstalled in Rancher. To access it, go to the **Cluster Explorer** in the Rancher UI. In the top left dropdown menu, click **Cluster Explorer > Continuous Delivery.** On this page, you can edit Kubernetes resources and cluster groups managed by Fleet.
|
||||
|
||||
# Windows Support
|
||||
|
||||
_Available as of v2.5.6_
|
||||
|
||||
For details on support for clusters with Windows nodes, see [this page.](./windows)
|
||||
|
||||
|
||||
@@ -37,8 +33,6 @@ The Fleet Helm charts are available [here.](https://github.com/rancher/fleet/rel

# Using Fleet Behind a Proxy

_Available as of v2.5.8_

For details on using Fleet behind a proxy, see [this page.](./proxy)

# Documentation

@@ -3,8 +3,6 @@ title: Using Fleet Behind a Proxy
weight: 3
---

_Available as of v2.5.8_

In this section, you'll learn how to enable Fleet in a setup that has a Rancher server with a public IP and a Kubernetes cluster that has no public IP, but is configured to use a proxy.

Rancher does not establish connections with registered downstream clusters. The Rancher agent deployed on the downstream cluster must be able to establish the connection with Rancher.
@@ -54,4 +52,4 @@ export HTTP_PROXY=http://${proxy_private_ip}:8888
export HTTPS_PROXY=http://${proxy_private_ip}:8888
export NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
```

@@ -8,15 +8,7 @@ aliases:
- /rancher/v2.6/en/catalog/launching-apps
---

In this section, you'll learn how to manage Helm chart repositories and applications in Rancher.

### Changes in Rancher v2.5

In Rancher v2.5, the Apps and Marketplace feature replaced the catalog system.

In the cluster manager, Rancher uses a catalog system to import bundles of charts and then uses those charts to either deploy custom helm applications or Rancher's tools such as Monitoring or Istio. The catalog system is still available in the cluster manager in Rancher v2.5, but it is deprecated.

Now in the Cluster Explorer, Rancher uses a similar but simplified version of the same system. Repositories can be added in the same way that catalogs were, but are specific to the current cluster. Rancher tools come as pre-loaded repositories which deploy as standalone helm charts.
In this section, you'll learn how to manage Helm chart repositories and applications in Rancher. Helm chart repositories are managed using the "Apps & Marketplace" feature found in the Cluster Explorer. It contains a simple catalog-like system to import bundles of charts from repositories and then uses those charts to either deploy custom Helm applications or Rancher's tools such as Monitoring or Istio. Rancher tools come as pre-loaded repositories which deploy as standalone Helm charts. Any additional repositories are only added to the current cluster.

### Charts

@@ -12,19 +12,13 @@ This section provides an overview of the architecture options of installing Ranc

In this section,

- **The Rancher server** manages and provisions Kubernetes clusters. You can interact with downstream Kubernetes clusters through the Rancher server's user interface.
- **The Rancher server** manages and provisions Kubernetes clusters. You can interact with downstream Kubernetes clusters through the Rancher server's user interface. The Rancher management server can be installed on any Kubernetes cluster, including hosted clusters, such as Amazon EKS clusters.
- **RKE (Rancher Kubernetes Engine)** is a certified Kubernetes distribution and CLI/library which creates and manages a Kubernetes cluster.
- **K3s (Lightweight Kubernetes)** is also a fully compliant Kubernetes distribution. It is newer than RKE, easier to use, and more lightweight, with a binary size of less than 100 MB.
- **RKE2** is a fully conformant Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector.
- **RancherD** is a new tool for installing Rancher, which is available as of Rancher v2.5.4. It is an experimental feature. RancherD is a single binary that first launches an RKE2 Kubernetes cluster, then installs the Rancher server Helm chart on the cluster.
- **RancherD** is a new tool for installing Rancher. It is an experimental feature. RancherD is a single binary that first launches an RKE2 Kubernetes cluster, then installs the Rancher server Helm chart on the cluster.

# Changes to Installation in Rancher v2.5

In Rancher v2.5, the Rancher management server can be installed on any Kubernetes cluster, including hosted clusters, such as Amazon EKS clusters.

For Docker installations, a local Kubernetes cluster is installed in the single Docker container, and Rancher is installed on the local cluster.

The `restrictedAdmin` Helm chart option was added. When this option is set to true, the initial Rancher user has restricted access to the local Kubernetes cluster to prevent privilege escalation. For more information, see the section about the [restricted-admin role.]({{<baseurl>}}/rancher/v2.6/en/admin-settings/rbac/global-permissions/#restricted-admin)
Note the `restrictedAdmin` Helm chart option available for **the Rancher Server**. When this option is set to true, the initial Rancher user has restricted access to the local Kubernetes cluster to prevent privilege escalation. For more information, see the section about the [restricted-admin role.]({{<baseurl>}}/rancher/v2.6/en/admin-settings/rbac/global-permissions/#restricted-admin)

# Overview of Installation Options

@@ -36,8 +30,6 @@ We recommend using Helm, a Kubernetes package manager, to install Rancher on mul

### High-availability Kubernetes Install with RancherD

_Available as of v2.5.4_

> This is an experimental feature.

RancherD is a single binary that first launches an RKE2 Kubernetes cluster, then installs the Rancher server Helm chart on the cluster.
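At a high level, a RancherD install boils down to a few commands on the host. The commands below are an illustrative sketch only; follow the RancherD install page for the authoritative steps:

```
# Illustrative sketch of a RancherD install; see the RancherD docs for the real procedure.
curl -sfL https://get.rancher.io | sh -
systemctl enable rancherd-server.service
systemctl start rancherd-server.service
# Once the server is up, set the initial admin password:
rancherd reset-admin
```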
@@ -58,7 +50,7 @@ However, this option is useful if you want to save resources by using a single n

### Docker Install

For test and demonstration purposes, Rancher can be installed with Docker on a single node.
For test and demonstration purposes, Rancher can be installed with Docker on a single node. A local Kubernetes cluster is installed in the single Docker container, and Rancher is installed on the local cluster.

The Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{<baseurl>}}/rancher/v2.6/en/backups/migrating-rancher)

@@ -44,7 +44,7 @@ The following CLI tools are required for setting up the Kubernetes cluster. Plea

### Ingress Controller (For Hosted Kubernetes)

To deploy Rancher v2.5 on a hosted Kubernetes cluster such as EKS, GKE, or AKS, you should deploy a compatible Ingress controller first to configure [SSL termination on Rancher.](#3-choose-your-ssl-configuration)
To deploy Rancher on a hosted Kubernetes cluster such as EKS, GKE, or AKS, you should deploy a compatible Ingress controller first to configure [SSL termination on Rancher.](#3-choose-your-ssl-configuration)

For an example of how to deploy an ingress on EKS, refer to [this section.]({{<baseurl>}}/rancher/v2.6/en/installation/install-rancher-on-k8s/amazon-eks/#5-install-an-ingress)

@@ -57,7 +57,7 @@ For information on enabling experimental features, refer to [this page.]({{<base
| `imagePullSecrets` | [] | `list` - list of names of Secret resource containing private registry credentials |
| `ingress.configurationSnippet` | "" | `string` - Add additional Nginx configuration. Can be used for proxy configuration. |
| `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress |
| `ingress.enabled` | true | When set to false, Helm will not install a Rancher ingress. Set the option to false to deploy your own ingress. _Available as of v2.5.6_ |
| `ingress.enabled` | true | When set to false, Helm will not install a Rancher ingress. Set the option to false to deploy your own ingress. |
| `letsEncrypt.ingress.class` | "" | `string` - optional ingress class for the cert-manager acmesolver ingress that responds to the Let's Encrypt ACME challenges. Options: traefik, nginx. |
| `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local,cattle-system.svc" | `string` - comma separated list of hostnames or ip address not to use the proxy |
| `proxy` | "" | `string` - HTTP[S] proxy server for Rancher |
@@ -66,7 +66,7 @@ For information on enabling experimental features, refer to [this page.]({{<base
| `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag |
| `replicas` | 3 | `int` - Number of replicas of Rancher pods |
| `resources` | {} | `map` - rancher pod resource requests & limits |
| `restrictedAdmin` | `false` | _Available in Rancher v2.5_ `bool` - When this option is set to true, the initial Rancher user has restricted access to the local Kubernetes cluster to prevent privilege escalation. For more information, see the section about the [restricted-admin role.]({{<baseurl>}}/rancher/v2.6/en/admin-settings/rbac/global-permissions/#restricted-admin) |
| `restrictedAdmin` | `false` | `bool` - When this option is set to true, the initial Rancher user has restricted access to the local Kubernetes cluster to prevent privilege escalation. For more information, see the section about the [restricted-admin role.]({{<baseurl>}}/rancher/v2.6/en/admin-settings/rbac/global-permissions/#restricted-admin) |
| `systemDefaultRegistry` | "" | `string` - private registry to be used for all system Docker images, e.g., http://registry.example.com/ |
| `tls` | "ingress" | `string` - See [External TLS Termination](#external-tls-termination) for details. - "ingress, external" |
| `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. |
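For context, these chart options are passed to Helm at install or upgrade time with `--set`. A hedged example (the hostname and chosen values are illustrative):

```
# Illustrative only: install Rancher with a few of the chart options from the table above.
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set replicas=3 \
  --set restrictedAdmin=true
```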
@@ -17,4 +17,4 @@ The Docker installation is for development and testing environments only.

Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.

For Rancher v2.5+, the Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{<baseurl>}}/rancher/v2.6/en/backups/migrating-rancher)
The Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{<baseurl>}}/rancher/v2.6/en/backups/migrating-rancher)

+1
-1
@@ -12,7 +12,7 @@ aliases:

This section is about how to deploy Rancher for your air gapped environment in a high-availability Kubernetes installation. An air gapped environment could be one where the Rancher server will be installed offline, behind a firewall, or behind a proxy.

### Privileged Access for Rancher v2.5+
### Privileged Access for Rancher

When the Rancher server is deployed in the Docker container, a local Kubernetes cluster is installed within the container for Rancher to use. Because many features of Rancher run as deployments, and privileged mode is required to run containers within containers, you will need to install Rancher with the `--privileged` option.

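For illustration, the flag is passed directly to `docker run` (the image tag placeholder follows the convention used in the install commands on this page):

```
# Minimal sketch: a privileged single-node Rancher container.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:<RANCHER_VERSION_TAG>
```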
+4
-4
@@ -7,7 +7,7 @@ The Docker installation is for Rancher users who want to test out Rancher.

Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.

For Rancher v2.5+, the backup application can be used to migrate the Rancher server from a Docker install to a Kubernetes install using [these steps.]({{<baseurl>}}/rancher/v2.6/en/backups/migrating-rancher)
The backup application can be used to migrate the Rancher server from a Docker install to a Kubernetes install using [these steps.]({{<baseurl>}}/rancher/v2.6/en/backups/migrating-rancher)

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, like when you log in or interact with a cluster.

@@ -36,7 +36,7 @@ Log into your Linux host, and then run the installation command below. When ente
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{<baseurl>}}/rancher/v2.6/en/installation/resources/chart-options/) that you want to install. |

As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
Privileged access is [required.](#privileged-access-for-rancher)

```
docker run -d --restart=unless-stopped \
@@ -72,7 +72,7 @@ After creating your certificate, log into your Linux host, and then run the inst
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version]({{<baseurl>}}/rancher/v2.6/en/installation/resources/chart-options/) that you want to install. |

As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
Privileged access is [required.](#privileged-access-for-rancher)

```
docker run -d --restart=unless-stopped \
@@ -108,7 +108,7 @@ After obtaining your certificate, log into your Linux host, and then run the ins

> **Note:** Use the `--no-cacerts` argument to the container to disable the default CA certificate generated by Rancher.

As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
Privileged access is [required.](#privileged-access-for-rancher)

```
docker run -d --restart=unless-stopped \

+1
-1
@@ -9,7 +9,7 @@ aliases:

This section describes how to install a Kubernetes cluster according to our [best practices for the Rancher server environment.]({{<baseurl>}}/rancher/v2.6/en/overview/architecture-recommendations/#environment-for-kubernetes-installations) This cluster should be dedicated to run only the Rancher server.

As of Rancher v2.5, Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes providers.
Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes providers.

The steps to set up an air-gapped Kubernetes cluster on RKE or K3s are shown below.

+2
-2
@@ -11,7 +11,7 @@ An air gapped environment is an environment where the Rancher server is installe

The infrastructure depends on whether you are installing Rancher on a K3s Kubernetes cluster, an RKE Kubernetes cluster, or a single Docker container. For more information on each installation option, refer to [this page.]({{<baseurl>}}/rancher/v2.6/en/installation/)

As of Rancher v2.5, Rancher can be installed on any Kubernetes cluster. The RKE and K3s Kubernetes infrastructure tutorials below are still included for convenience.
Rancher can be installed on any Kubernetes cluster. The RKE and K3s Kubernetes infrastructure tutorials below are still included for convenience.

{{% tabs %}}
{{% tab "K3s" %}}
@@ -152,7 +152,7 @@ If you need help with creating a private registry, please refer to the [official
{{% tab "Docker" %}}
> The Docker installation is for Rancher users who want to test out Rancher. Since there is only one node and a single Docker container, if the node goes down, you will lose all the data of your Rancher server.
>
> As of Rancher v2.5, the Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{<baseurl>}}/rancher/v2.6/en/backups/migrating-rancher)
> The Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{<baseurl>}}/rancher/v2.6/en/backups/migrating-rancher)

### 1. Set up a Linux Node

+1
-3
@@ -5,8 +5,6 @@ aliases:
- /rancher/v2.6/en/installation/install-rancher-on-linux
---

_Available as of Rancher v2.5.4_

> This is an experimental feature.

We are excited to introduce a new, simpler way to install Rancher called RancherD.
@@ -239,4 +237,4 @@ rancherd-uninstall.sh

# RKE2 Documentation

For more information on RKE2, the Kubernetes distribution used to provision the underlying cluster, refer to the documentation [here.](https://docs.rke2.io/)
For more information on RKE2, the Kubernetes distribution used to provision the underlying cluster, refer to the documentation [here.](https://docs.rke2.io/)

+5
-5
@@ -19,7 +19,7 @@ A Docker installation of Rancher is recommended only for development and testing

The Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.]({{<baseurl>}}/rancher/v2.6/en/backups/migrating-rancher)

### Privileged Access for Rancher v2.5+
### Privileged Access for Rancher

When the Rancher server is deployed in the Docker container, a local Kubernetes cluster is installed within the container for Rancher to use. Because many features of Rancher run as deployments, and privileged mode is required to run containers within containers, you will need to install Rancher with the `--privileged` option.

@@ -55,7 +55,7 @@ If you are installing Rancher in a development or testing environment where iden

Log into your Linux host, and then run the minimum installation command below.

As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
Privileged access is [required.](#privileged-access-for-rancher)

```bash
docker run -d --restart=unless-stopped \
@@ -82,7 +82,7 @@ After creating your certificate, run the Docker command below to install Rancher
| `<PRIVATE_KEY.pem>` | The path to the private key for your certificate. |
| `<CA_CERTS.pem>` | The path to the certificate authority's certificate. |

As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
Privileged access is [required.](#privileged-access-for-rancher)

```bash
docker run -d --restart=unless-stopped \
@@ -114,7 +114,7 @@ After obtaining your certificate, run the Docker command below.
| `<FULL_CHAIN.pem>` | The path to your full certificate chain. |
| `<PRIVATE_KEY.pem>` | The path to the private key for your certificate. |

As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
Privileged access is [required.](#privileged-access-for-rancher)

```bash
docker run -d --restart=unless-stopped \
@@ -144,7 +144,7 @@ After you fulfill the prerequisites, you can install Rancher using a Let's Encry
| ----------------- | ------------------- |
| `<YOUR.DNS.NAME>` | Your domain address |

As of Rancher v2.5, privileged access is [required.](#privileged-access-for-rancher-v2-5)
Privileged access is [required.](#privileged-access-for-rancher)

```
docker run -d --restart=unless-stopped \

+5
-5
@@ -25,7 +25,7 @@ Use the command example to start a Rancher container with your private CA certif

The example below is based on having the CA root certificates in the `/host/certs` directory on the host and mounting this directory on `/container/certs` inside the Rancher container.

As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)

```
docker run -d --restart=unless-stopped \
@@ -44,7 +44,7 @@ The API Audit Log writes to `/var/log/auditlog` inside the rancher container by

See [API Audit Log]({{<baseurl>}}/rancher/v2.6/en/installation/api-auditing) for more information and options.

As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)

```
docker run -d --restart=unless-stopped \
@@ -67,7 +67,7 @@ docker run -d --restart=unless-stopped \
rancher/rancher:latest
```

As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)

See [TLS settings]({{<baseurl>}}/rancher/v2.6/en/admin-settings/tls-settings) for more information and options.

@@ -93,7 +93,7 @@ docker run -d --restart=unless-stopped \
rancher/rancher:latest
```

As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)

### Running `rancher/rancher` and `rancher/rancher-agent` on the Same Node

@@ -112,4 +112,4 @@ docker run -d --restart=unless-stopped \
rancher/rancher:latest
```

As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)

+1
-1
@@ -42,4 +42,4 @@ docker run -d --restart=unless-stopped \
rancher/rancher:latest
```

As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)

+1
-1
@@ -79,7 +79,7 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s
--privileged \
rancher/rancher:<PRIOR_RANCHER_VERSION>
```
As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)

>**Note:** _Do not_ stop the rollback after initiating it, even if the rollback process seems longer than expected. Stopping the rollback may result in database issues during future upgrades.

+7
-7
@@ -152,7 +152,7 @@ docker run -d --volumes-from rancher-data \
rancher/rancher:<RANCHER_VERSION_TAG>
```

As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)

{{% /accordion %}}

@@ -183,7 +183,7 @@ docker run -d --volumes-from rancher-data \
rancher/rancher:<RANCHER_VERSION_TAG>
```

As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)

{{% /accordion %}}

@@ -213,7 +213,7 @@ docker run -d --volumes-from rancher-data \
--no-cacerts
```

As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)
{{% /accordion %}}

### Option D: Let's Encrypt Certificate
@@ -243,7 +243,7 @@ docker run -d --volumes-from rancher-data \
--acme-domain <YOUR.DNS.NAME>
```

As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)

{{% /accordion %}}

@@ -275,7 +275,7 @@ Placeholder | Description
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)
{{% /accordion %}}

### Option B: Bring Your Own Certificate: Self-Signed
@@ -306,7 +306,7 @@ docker run -d --restart=unless-stopped \
--privileged \
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```
As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)
{{% /accordion %}}

### Option C: Bring Your Own Certificate: Signed by Recognized CA
@@ -339,7 +339,7 @@ docker run -d --volumes-from rancher-data \
--privileged
<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```
As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)
Privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher)
{{% /accordion %}}
{{% /tab %}}
{{% /tabs %}}

@@ -62,7 +62,7 @@ If you are installing Rancher on a K3s cluster with Alpine Linux, follow [these

### RancherD Specific Requirements

_The RancherD install is available as of v2.5.4. It is an experimental feature._
_The RancherD install is an experimental feature._

At this time, only Linux OSes that leverage systemd are supported.

@@ -72,8 +72,6 @@ Docker is not required for RancherD installs.

### RKE2 Specific Requirements

_The RKE2 install is available as of v2.5.6._

For details on which OS versions were tested with RKE2, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)

Docker is not required for RKE2 installs.
@@ -127,7 +125,7 @@ These CPU and memory requirements apply to each host in a [K3s Kubernetes cluste

### RancherD

_RancherD is available as of v2.5.4. It is an experimental feature._
_RancherD is an experimental feature._

These CPU and memory requirements apply to each instance with RancherD installed. Minimum recommendations are outlined here.

@@ -28,7 +28,7 @@ The following table lists the ports that need to be open to and from nodes that

The port requirements differ based on the Rancher server architecture.

As of Rancher v2.5, Rancher can be installed on any Kubernetes cluster. For Rancher installs on a K3s, RKE, or RKE2 Kubernetes cluster, refer to the tabs below. For other Kubernetes distributions, refer to the distribution's documentation for the port requirements for cluster nodes.
Rancher can be installed on any Kubernetes cluster. For Rancher installs on a K3s, RKE, or RKE2 Kubernetes cluster, refer to the tabs below. For other Kubernetes distributions, refer to the distribution's documentation for the port requirements for cluster nodes.

> **Notes:**
>

@@ -5,4 +5,4 @@ weight: 4
|
||||
|
||||
This section contains information on how to install a Kubernetes cluster that the Rancher server can be installed on.
|
||||
|
||||
In Rancher v2.5, Rancher can run on any Kubernetes cluster.
|
||||
Rancher can run on any Kubernetes cluster.
|
||||
|
||||
@@ -9,7 +9,7 @@ aliases:

This section describes how to install a Kubernetes cluster. This cluster should be dedicated to run only the Rancher server.

> As of Rancher v2.5, Rancher can run on any Kubernetes cluster, included hosted Kubernetes solutions such as Amazon EKS. The below instructions represent only one possible way to install Kubernetes.
> Rancher can run on any Kubernetes cluster, including hosted Kubernetes solutions such as Amazon EKS. The instructions below represent only one possible way to install Kubernetes.

The Rancher management server can only be run on a Kubernetes cluster in an infrastructure provider where Kubernetes is installed using RKE or K3s. Use of Rancher on hosted Kubernetes providers, such as EKS, is not supported.
@@ -57,8 +57,6 @@ You can check the health of the service mesh, or drill down to see the incoming

### Jaeger

_Bundled as of v2.5.4_

Our Istio installer includes a quick-start, all-in-one installation of [Jaeger,](https://www.jaegertracing.io/) a tool used for tracing distributed systems.

Note that this is not a production-qualified deployment of Jaeger. This deployment uses an in-memory storage component, while a persistent storage component is recommended for production. For more information on which deployment strategy you may need, refer to the [Jaeger documentation.](https://www.jaegertracing.io/docs/latest/operator/#production-strategy)

@@ -78,7 +78,7 @@ For a list of options that can be configured when the logging application is ins

### Windows Support

As of Rancher v2.5.8, logging support for Windows clusters has been added and logs can be collected from Windows nodes.
Logging support for Windows clusters is available and logs can be collected from Windows nodes.

For details on how to enable or disable Windows node logging, see [this section.](./helm-chart-options/#enable-disable-windows-node-logging)

@@ -94,8 +94,6 @@ For information on how to use taints and tolerations with the logging applicatio

### Logging V2 with SELinux

_Available as of v2.5.8_

For information on enabling the logging application for SELinux-enabled nodes, see [this section.](./helm-chart-options/#enabling-the-logging-application-to-work-with-selinux)

### Additional Logging Sources

@@ -7,17 +7,6 @@ This section summarizes the architecture of the Rancher logging application.

For more details about how the Banzai Cloud Logging operator works, see the [official documentation.](https://banzaicloud.com/docs/one-eye/logging-operator/#architecture)

### Changes in Rancher v2.5

The following changes were introduced to logging in Rancher v2.5:

- The [Banzai Cloud Logging operator](https://banzaicloud.com/docs/one-eye/logging-operator/) now powers Rancher's logging solution in place of the former, in-house solution.
- [Fluent Bit](https://fluentbit.io/) is now used to aggregate the logs, and [Fluentd](https://www.fluentd.org/) is used for filtering the messages and routing them to the `Outputs`. Previously, only Fluentd was used.
- Logging can be configured with a Kubernetes manifest, because logging now uses a Kubernetes operator with Custom Resource Definitions.
- We now support filtering logs.
- We now support writing logs to multiple `Outputs`.
- We now always collect Control Plane and etcd logs.

### How the Banzai Cloud Logging Operator Works

The Logging operator automates the deployment and configuration of a Kubernetes logging pipeline. It deploys and configures a Fluent Bit DaemonSet on every node to collect container and application logs from the node file system.

@@ -37,4 +26,4 @@ The following figure from the [Banzai documentation](https://banzaicloud.com/doc

<figcaption>How the Banzai Cloud Logging Operator Works with Fluentd and Fluent Bit</figcaption>


@@ -10,18 +10,11 @@ For the full details on configuring `Flows` and `ClusterFlows`, see the [Banzai

# Configuration

- [Flows](#flows-2-5-8)
- [Matches](#matches-2-5-8)
- [Filters](#filters-2-5-8)
- [Outputs](#outputs-2-5-8)
- [ClusterFlows](#clusterflows-2-5-8)

# Changes in v2.5.8

The `Flows` and `ClusterFlows` can now be configured by filling out forms in the Rancher UI.

<a id="flows-2-5-8"></a>
- [Flows](#flows)
- [Matches](#matches)
- [Filters](#filters)
- [Outputs](#outputs)
- [ClusterFlows](#clusterflows)

# Flows

@@ -29,11 +22,10 @@ A `Flow` defines which logs to collect and filter and which output to send the l

The `Flow` is a namespaced resource, which means logs will only be collected from the namespace that the `Flow` is deployed in.

`Flows` can be configured by filling out forms in the Rancher UI.

For more details about the `Flow` custom resource, see [FlowSpec.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/flow_types/)

<a id="matches-2-5-8"></a>

### Matches

Match statements are used to select which containers to pull logs from.

@@ -44,8 +36,6 @@ Matches can be configured by filling out the `Flow` or `ClusterFlow` forms in th

For detailed examples on using the match statement, see the [official documentation on log routing.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/log-routing/)

<a id="filters-2-5-8"></a>

### Filters

You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records. The filters in the `Flow` are applied in the order in the definition.
@@ -54,20 +44,18 @@ For a list of filters supported by the Banzai Cloud Logging operator, see [this

Filters need to be configured in YAML.
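The match, filter, and output-reference mechanics discussed above can be sketched as a single manifest. This is a hypothetical `Flow` (resource name, namespace, label selector, and output reference are all invented for illustration), loosely following the Banzai Cloud `FlowSpec` linked earlier:

```yaml
# Hypothetical example only: names, namespace, and the referenced
# Output are invented for illustration.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: nginx-flow          # example name
  namespace: default        # a Flow only collects logs from its own namespace
spec:
  match:
    - select:
        labels:
          app: nginx        # pull logs only from pods labeled app=nginx
  filters:
    - tag_normaliser: {}    # example filter; filters run in the order listed
  localOutputRefs:
    - nginx-output          # an Output in the same namespace
```

A `ClusterFlow` would look much the same, but is cluster-scoped and references `ClusterOutputs` instead.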
<a id="outputs-2-5-8"></a>

### Outputs

This `Output` will receive logs from the `Flow`. Because the `Flow` is a namespaced resource, the `Output` must reside in the same namespace as the `Flow`.

`Outputs` can be referenced when filling out the `Flow` or `ClusterFlow` forms in the Rancher UI.

<a id="clusterflows-2-5-8"></a>

# ClusterFlows

Matches, filters and `Outputs` are configured for `ClusterFlows` in the same way that they are configured for `Flows`. The key difference is that the `ClusterFlow` is scoped at the cluster level and can configure log collection across all namespaces.

`ClusterFlows` can be configured by filling out forms in the Rancher UI.

After `ClusterFlow` selects logs from all namespaces in the cluster, logs from the cluster will be collected and logged to the selected `ClusterOutput`.

# YAML Example

@@ -14,15 +14,9 @@ For the full details on configuring `Outputs` and `ClusterOutputs`, see the [Ban

# Configuration

- [Outputs](#outputs-2-5-8)
- [ClusterOutputs](#clusteroutputs-2-5-8)
- [Outputs](#outputs)
- [ClusterOutputs](#clusteroutputs)

# Changes in v2.5.8

The `Outputs` and `ClusterOutputs` can now be configured by filling out forms in the Rancher UI.

<a id="outputs-2-5-8"></a>
# Outputs

The `Output` resource defines where your `Flows` can send the log messages. `Outputs` are the final stage for a logging `Flow`.
@@ -31,6 +25,8 @@ The `Output` is a namespaced resource, which means only a `Flow` within the same

You can use secrets in these definitions, but they must also be in the same namespace.

`Outputs` can be configured by filling out forms in the Rancher UI.

For the details of the `Output` custom resource, see [OutputSpec.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/output_types/)

The Rancher UI provides forms for configuring the following `Output` types:

@@ -57,12 +53,12 @@ The Rancher UI provides forms for configuring the `Output` type, target, and acc

For example configuration for each logging plugin supported by the logging operator, see the [logging operator documentation.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/)
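As a sketch of the namespacing and secret rules described above, here is a hypothetical `Output` (the host, port, index name, and secret name are invented) that sends a `Flow`'s logs to Elasticsearch, reading credentials from a secret in the same namespace:

```yaml
# Hypothetical example only: host, index_name, and the secret are invented.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: nginx-output
  namespace: default        # must match the namespace of the Flow that uses it
spec:
  elasticsearch:
    host: elasticsearch.example.com
    port: 9200
    scheme: https
    index_name: nginx-logs
    password:
      valueFrom:
        secretKeyRef:
          name: es-credentials   # secret must live in the same namespace
          key: password
```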
<a id="clusteroutputs-2-5-8"></a>

# ClusterOutputs

`ClusterOutput` defines an `Output` without namespace restrictions. It is only effective when deployed in the same namespace as the logging operator.

`ClusterOutputs` can be configured by filling out forms in the Rancher UI.

For the details of the `ClusterOutput` custom resource, see [ClusterOutput.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/clusteroutput_types/)

# YAML Examples

@@ -215,7 +211,7 @@ apiVersion: logging.banzaicloud.io/v1beta1

For the final example, we create an `Output` to write logs to a destination that is not supported out of the box:

> **Note on syslog** As of Rancher v2.5.4, `syslog` is a supported `Output`. However, this example still provides an overview on using unsupported plugins.
> **Note on syslog** `syslog` is a supported `Output`. However, this example still provides an overview on using unsupported plugins.

```yaml
apiVersion: v1
@@ -13,8 +13,6 @@ weight: 4

### Enable/Disable Windows Node Logging

_Available as of v2.5.8_

You can enable or disable Windows node logging by setting `global.cattle.windows.enabled` to either `true` or `false` in the `values.yaml`.

By default, Windows node logging will be enabled if the Cluster Explorer UI is used to install the logging application on a Windows cluster.
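A minimal `values.yaml` fragment for the toggle described above might look like this (other chart values are omitted; the nesting follows the `global.cattle.windows.enabled` key named in the text):

```yaml
# Disable log collection from Windows nodes; set to true to enable it
# (the default when installing via the Cluster Explorer UI on a
# Windows cluster).
global:
  cattle:
    windows:
      enabled: false
```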
@@ -26,8 +24,6 @@ When disabled, logs will still be collected from Linux nodes within the Windows

### Working with a Custom Docker Root Directory

_Applies to v2.5.6+_

If using a custom Docker root directory, you can set `global.dockerRootDirectory` in `values.yaml`.

This will ensure that the Logging CRs created will use your specified path rather than the default Docker `data-root` location.
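Sketching the setting described above, a `values.yaml` fragment might look like this (`/opt/docker` is an example path, not a default; use your daemon's actual `data-root`):

```yaml
# Point the Logging CRs at a non-default Docker data-root.
global:
  dockerRootDirectory: /opt/docker   # example path
```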
@@ -42,8 +38,6 @@ You can add your own `nodeSelector` settings and add `tolerations` for additiona

### Enabling the Logging Application to Work with SELinux

_Available as of v2.5.8_

> **Requirements:** Logging v2 was tested with SELinux on RHEL/CentOS 7 and 8.

[Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8.
@@ -11,7 +11,7 @@ aliases:

Using Rancher, you can quickly deploy leading open-source monitoring and alerting solutions onto your cluster.

The `rancher-monitoring` operator, introduced in Rancher v2.5, is powered by [Prometheus](https://prometheus.io/), [Grafana](https://grafana.com/grafana/), [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator), and the [Prometheus adapter.](https://github.com/DirectXMan12/k8s-prometheus-adapter) This page describes how to enable monitoring and alerting within a cluster using the new monitoring application.
The `rancher-monitoring` operator is powered by [Prometheus](https://prometheus.io/), [Grafana](https://grafana.com/grafana/), [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator), and the [Prometheus adapter.](https://github.com/DirectXMan12/k8s-prometheus-adapter) This page describes how to enable monitoring and alerting within a cluster using the new monitoring application.

Rancher's solution allows users to:
@@ -19,8 +19,8 @@ Rancher's solution allows users to:
- Define alerts based on metrics collected via Prometheus
- Create custom dashboards to make it easy to visualize collected metrics via Grafana
- Configure alert-based notifications via Email, Slack, PagerDuty, etc. using Prometheus Alertmanager
- Defines precomputed, frequently needed or computationally expensive expressions as new time series based on metrics collected via Prometheus (only available in 2.5)
- Expose collected metrics from Prometheus to the Kubernetes Custom Metrics API via Prometheus Adapter for use in HPA (only available in 2.5)
- Define precomputed, frequently needed, or computationally expensive expressions as new time series based on metrics collected via Prometheus
- Expose collected metrics from Prometheus to the Kubernetes Custom Metrics API via Prometheus Adapter for use in HPA

More information about the resources that get deployed onto your cluster to support this solution can be found in the [`rancher-monitoring`](https://github.com/rancher/charts/tree/main/charts/rancher-monitoring) Helm chart, which closely tracks the upstream [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) Helm chart maintained by the Prometheus community with certain changes tracked in the [CHANGELOG.md](https://github.com/rancher/charts/blob/main/charts/rancher-monitoring/CHANGELOG.md).
@@ -113,8 +113,6 @@ To configure Prometheus resources from the Rancher UI, click **Apps & Marketplac

# Windows Cluster Support

_Available as of v2.5.8_

When deployed onto an RKE1 Windows cluster, Monitoring V2 will now automatically deploy a [windows-exporter](https://github.com/prometheus-community/windows_exporter) DaemonSet and set up a ServiceMonitor to collect metrics from each of the deployed Pods. This will populate Prometheus with `windows_` metrics that are akin to the `node_` metrics exported by [node_exporter](https://github.com/prometheus/node_exporter) for Linux hosts.

To be able to fully deploy Monitoring V2 for Windows, all of your Windows hosts must have a minimum [wins](https://github.com/rancher/wins) version of v0.1.0.

@@ -52,7 +52,6 @@ For more information, refer to the [official Prometheus documentation about conf
When you define a Rule (which is declared within a RuleGroup in a PrometheusRule resource), the [spec of the Rule itself](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#rule) contains labels that are used by Prometheus to figure out which Route should receive this Alert. For example, an Alert with the label `team: front-end` will be sent to all Routes that match on that label.

# Creating Receivers in the Rancher UI
_Available as of v2.5.4_

> **Prerequisites:**
>

@@ -82,14 +81,6 @@ Currently the Rancher Alerting Drivers app provides access to the following inte
- Microsoft Teams, based on the [prom2teams](https://github.com/idealista/prom2teams) driver
- SMS, based on the [Sachet](https://github.com/messagebird/sachet) driver

### Changes in Rancher v2.5.8

Rancher v2.5.8 added Microsoft Teams and SMS as configurable receivers in the Rancher UI.

### Changes in Rancher v2.5.4

Rancher v2.5.4 introduced the capability to configure receivers by filling out forms in the Rancher UI.

The following types of receivers can be configured in the Rancher UI:

- <a href="#slack">Slack</a>
@@ -40,8 +40,6 @@ When you define a Rule (which is declared within a RuleGroup in a PrometheusRule

### Creating PrometheusRules in the Rancher UI

_Available as of v2.5.4_

> **Prerequisite:** The monitoring application needs to be installed.

To create rule groups in the Rancher UI,
@@ -4,9 +4,7 @@ shortTitle: Windows Clusters
weight: 5
---

_Available as of v2.5.8_

Starting at Monitoring V2 14.5.100 (used by default in Rancher 2.5.8), Monitoring V2 can now be deployed on a Windows cluster and will scrape metrics from Windows nodes using [prometheus-community/windows_exporter](https://github.com/prometheus-community/windows_exporter) (previously named `wmi_exporter`).
Monitoring V2 can be deployed on a Windows cluster and will scrape metrics from Windows nodes using [prometheus-community/windows_exporter](https://github.com/prometheus-community/windows_exporter) (previously named `wmi_exporter`).

- [Comparison to Monitoring V1](#comparison-to-monitoring-v1)
- [Cluster Requirements](#cluster-requirements)

@@ -22,7 +20,7 @@ In addition, Monitoring V2 for Windows will no longer require users to keep port

Monitoring V2 for Windows can only scrape metrics from Windows hosts that have a minimum `wins` version of v0.1.0. To be able to fully deploy Monitoring V2 for Windows, all of your hosts must meet this requirement.

If you provision a fresh RKE1 cluster in Rancher 2.5.8, your cluster should already meet this requirement.
If you provision a fresh RKE1 cluster in Rancher, your cluster should already meet this requirement.

### Upgrading Existing Clusters to wins v0.1.0
@@ -50,8 +50,6 @@ Kubernetes v1.17, v1.18, & v1.19 | CIS v1.5 | [Link]({{<baseurl>}}/k3s/latest/en

# Rancher with SELinux

_Available as of v2.5.8_

[Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8.

To use Rancher with SELinux, we recommend installing the `rancher-selinux` RPM according to the instructions on [this page.]({{<baseurl>}}/rancher/v2.6/en/security/selinux/#installing-the-rancher-selinux-rpm)

@@ -3,8 +3,6 @@ title: SELinux RPM
weight: 4
---

_Available as of v2.5.8_

[Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux.

Developed by Red Hat, it is an implementation of mandatory access controls (MAC) on Linux. Mandatory access controls allow an administrator of a system to define how applications and users can access different resources such as files, devices, networks and inter-process communication. SELinux also enhances security by making an OS restrictive by default.

@@ -27,7 +25,7 @@ We provide two RPMs (Red Hat packages) that enable Rancher products to function

To allow Rancher to work with SELinux, some functionality has to be manually enabled for the SELinux nodes. To help with that, Rancher provides a SELinux RPM.

As of v2.5.8, the `rancher-selinux` RPM only contains policies for the [rancher-logging application.](https://github.com/rancher/charts/tree/dev-v2.5/charts/rancher-logging)
The `rancher-selinux` RPM only contains policies for the [rancher-logging application.](https://github.com/rancher/charts/tree/dev-v2.5/charts/rancher-logging)

The `rancher-selinux` GitHub repository is [here.](https://github.com/rancher/rancher-selinux)
@@ -15,7 +15,7 @@ If the cattle-cluster-agent cannot connect to the configured `server-url`, the c

#### cattle-node-agent

> Note: Starting in Rancher 2.5 cattle-node-agents are only present in clusters created in Rancher with RKE.
> Note: cattle-node-agents are only present in clusters created in Rancher with RKE.

Check if the cattle-node-agent pods are present on each node, have status **Running** and don't have a high count of Restarts:
@@ -23,7 +23,7 @@ When you create a node template, it is bound to your user profile. Node template
1. From your user settings, select **User Avatar > Node Templates**.
1. Choose the node template that you want to edit and click the **⋮ > Edit**.

> **Note:** As of v2.2.0, the default `active` [node drivers]({{<baseurl>}}/rancher/v2.6/en/admin-settings/drivers/node-drivers/) and any node driver, that has fields marked as `password`, are required to use [cloud credentials]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/node-pools/#cloud-credentials). If you have upgraded to v2.2.0, existing node templates will continue to work with the previous account access information, but when you edit the node template, you will be required to create a cloud credential and the node template will start using it.
> **Note:** The default `active` [node drivers]({{<baseurl>}}/rancher/v2.6/en/admin-settings/drivers/node-drivers/) and any node driver that has fields marked as `password` are required to use [cloud credentials]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/node-pools/#cloud-credentials).

1. Edit the required information and click **Save**.