mirror of
https://github.com/rancher/rancher-docs.git
synced 2026-05-16 18:13:17 +00:00
Clean up monitoring docs for Rancher 2.5
@@ -17,9 +17,7 @@ Certificates can be rotated for the following services:

- kube-controller-manager

### Certificate Rotation in Rancher v2.2.x

_Available as of v2.2.0_
### Certificate Rotation

Rancher launched Kubernetes clusters have the ability to rotate the auto-generated certificates through the UI.

@@ -36,52 +34,4 @@ Rancher launched Kubernetes clusters have the ability to rotate the auto-generat

**Results:** The selected certificates will be rotated and the related services will be restarted to start using the new certificate.

> **Note:** Even though the RKE CLI can use custom certificates for the Kubernetes cluster components, Rancher currently doesn't allow the ability to upload these in Rancher Launched Kubernetes clusters.

### Certificate Rotation in Rancher v2.1.x and v2.0.x

_Available as of v2.0.14 and v2.1.9_

Rancher launched Kubernetes clusters have the ability to rotate the auto-generated certificates through the API.

1. In the **Global** view, navigate to the cluster that you want to rotate certificates.

2. Select the **⋮ > View in API**.

3. Click on **RotateCertificates**.

4. Click on **Show Request**.

5. Click on **Send Request**.

**Results:** All Kubernetes certificates will be rotated.

### Rotating Expired Certificates After Upgrading Older Rancher Versions

If you are upgrading from Rancher v2.0.13 or earlier, or v2.1.8 or earlier, and your clusters have expired certificates, some manual steps are required to complete the certificate rotation.

1. For the `controlplane` and `etcd` nodes, log in to each corresponding host and check if the certificate `kube-apiserver-requestheader-ca.pem` is in the following directory:

    ```
    cd /etc/kubernetes/.tmp
    ```

    If the certificate is not in the directory, perform the following commands:

    ```
    cp kube-ca.pem kube-apiserver-requestheader-ca.pem
    cp kube-ca-key.pem kube-apiserver-requestheader-ca-key.pem
    cp kube-apiserver.pem kube-apiserver-proxy-client.pem
    cp kube-apiserver-key.pem kube-apiserver-proxy-client-key.pem
    ```

    If the `.tmp` directory does not exist, you can copy the entire SSL certificate directory to `.tmp`:

    ```
    cp -r /etc/kubernetes/ssl /etc/kubernetes/.tmp
    ```

1. Rotate the certificates. For Rancher v2.0.x and v2.1.x, use the [Rancher API.](#certificate-rotation-in-rancher-v2-1-x-and-v2-0-x) For Rancher v2.2.x, [use the UI.](#certificate-rotation-in-rancher-v2-2-x)

1. After the command is finished, check if the `worker` nodes are Active. If not, log in to each `worker` node and restart the kubelet and proxy.

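
The check-and-copy in step 1 above can be sketched as a small shell function. This is a hedged illustration, not Rancher tooling: the function name and the directory argument are mine; only the certificate file names come from the steps above.

```shell
# Sketch of step 1: if the requestheader CA cert is missing from the
# certificate directory, derive the missing certs from the existing ones.
rotate_requestheader_certs() {
  cert_dir="$1"
  if [ ! -f "$cert_dir/kube-apiserver-requestheader-ca.pem" ]; then
    cp "$cert_dir/kube-ca.pem" "$cert_dir/kube-apiserver-requestheader-ca.pem"
    cp "$cert_dir/kube-ca-key.pem" "$cert_dir/kube-apiserver-requestheader-ca-key.pem"
    cp "$cert_dir/kube-apiserver.pem" "$cert_dir/kube-apiserver-proxy-client.pem"
    cp "$cert_dir/kube-apiserver-key.pem" "$cert_dir/kube-apiserver-proxy-client-key.pem"
  fi
}
```

On a real node you would call it as `rotate_requestheader_certs /etc/kubernetes/.tmp` on each `controlplane` and `etcd` host.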
@@ -2,8 +2,6 @@

title: "Layer 4 and Layer 7 Load Balancing"
description: "Kubernetes supports load balancing in two ways: Layer-4 Load Balancing and Layer-7 Load Balancing. Learn about the support for each way in different deployments"
weight: 3041
aliases:
- /rancher/v2.x/en/concepts/load-balancing/
---
Kubernetes supports load balancing in two ways: Layer-4 Load Balancing and Layer-7 Load Balancing.
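
As a hedged illustration of the two modes (resource names such as `web-l4` are placeholders, not from these docs): Layer-4 load balancing maps to a Service of type `LoadBalancer`, which forwards raw TCP/UDP, while Layer-7 maps to an Ingress, which routes HTTP by host and path.

```yaml
# Layer-4: forwards TCP traffic on port 80 to the pods selected by app=web
apiVersion: v1
kind: Service
metadata:
  name: web-l4
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
---
# Layer-7: routes HTTP requests for example.com/ to the web-l4 service
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-l7
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-l4
          servicePort: 80
```
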
@@ -1,9 +1,6 @@

---
title: Pipelines
weight: 3047
aliases:
- /rancher/v2.x/en/tools/pipelines/concepts/
---

Rancher's pipeline provides a simple CI/CD experience. Use it to automatically check out code, run builds or scripts, publish Docker images or catalog applications, and deploy the updated software to users.

@@ -113,8 +110,6 @@ Select your provider's tab below and follow the directions.

{{% /tab %}}
{{% tab "GitLab" %}}

_Available as of v2.1.0_

1. From the **Global** view, navigate to the project that you want to configure pipelines.

1. Select **Tools > Pipelines** in the navigation bar. In versions prior to v2.2.0, you can select **Resources > Pipelines**.

@@ -133,8 +128,6 @@ _Available as of v2.1.0_

{{% /tab %}}
{{% tab "Bitbucket Cloud" %}}

_Available as of v2.2.0_

1. From the **Global** view, navigate to the project that you want to configure pipelines.

1. Select **Tools > Pipelines** in the navigation bar.

@@ -150,8 +143,6 @@ _Available as of v2.2.0_

{{% /tab %}}
{{% tab "Bitbucket Server" %}}

_Available as of v2.2.0_

1. From the **Global** view, navigate to the project that you want to configure pipelines.

1. Select **Tools > Pipelines** in the navigation bar.

@@ -210,7 +201,7 @@ Now that repositories are added to your project, you can start configuring the p

1. Select which `branch` to use from the list of branches.

1. _Available as of v2.2.0_ Optional: Set up notifications.
1. Optional: Set up notifications.

1. Set up the trigger rules for the pipeline.

@@ -102,8 +102,6 @@ stages:

```
# Step Type: Build and Publish Images

_Available as of Rancher v2.1.0_

The **Build and Publish Image** step builds and publishes a Docker image. This process requires a Dockerfile in your source code's repository to complete successfully.

The option to publish an image to an insecure registry is not exposed in the UI, but you can specify an environment variable in the YAML that allows you to publish an image insecurely.
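
A hedged sketch of that YAML follows. The step shape uses the `publishImageConfig` step type; treat the environment variable name (`PLUGIN_INSECURE`) and the registry/tag values as assumptions to verify against your Rancher version.

```yaml
stages:
- name: Publish Image
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: repo/app:v1.0
      pushRemote: true
      registry: example.com
    env:
      # Assumed variable: allows pushing to an insecure (HTTP) registry
      PLUGIN_INSECURE: "true"
```
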
@@ -154,8 +152,6 @@ stages:

# Step Type: Publish Catalog Template

_Available as of v2.2.0_

The **Publish Catalog Template** step publishes a version of a catalog app template (i.e. Helm chart) to a [git hosted chart repository]({{<baseurl>}}/rancher/v2.x/en/catalog/custom/). It generates a git commit and pushes it to your chart repository. This process requires a chart folder in your source code's repository and a pre-configured secret in the dedicated pipeline namespace to complete successfully. Any variables in the [pipeline variable substitution reference](#pipeline-variable-substitution-reference) are supported for any file in the chart folder.
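
A minimal sketch of such a step in `.rancher-pipeline.yml`, assuming the `publishCatalogConfig` step type; the repository, chart, and author values are illustrative placeholders:

```yaml
stages:
- name: Publish Catalog Template
  steps:
  - publishCatalogConfig:
      path: ./charts/example/
      catalogTemplate: example
      version: v1.0.0
      gitUrl: git@github.com:example/charts.git
      gitBranch: master
      gitAuthor: example-user
      gitEmail: user@example.com
```
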

### Configuring Publishing a Catalog Template by UI

@@ -235,8 +231,6 @@ stages:

# Step Type: Deploy Catalog App

_Available as of v2.2.0_

The **Deploy Catalog App** step deploys a catalog app in the project. It will install a new app if it is not present, or upgrade an existing one.
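
A hedged sketch of this step in YAML, assuming the `applyAppConfig` step type; the catalog template, app name, namespace, and answer values are illustrative placeholders:

```yaml
stages:
- name: Deploy App
  steps:
  - applyAppConfig:
      catalogTemplate: cattle-global-data:library-mysql
      version: 0.3.8
      name: testmysql
      targetNamespace: test
      answers:
        persistence.enabled: "false"
```
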

### Configure Deploying Catalog App by UI

@@ -311,8 +305,6 @@ You can enable notifications to any [notifiers]({{<baseurl>}}/rancher/v2.x/en/cl

### Configuring Notifications by UI

_Available as of v2.2.0_

1. Within the **Notification** section, turn on notifications by clicking **Enable**.

1. Select the conditions for the notification. You can select to get a notification for the following statuses: `Failed`, `Success`, `Changed`. For example, if you want to receive notifications when an execution fails, select **Failed**.

@@ -324,7 +316,6 @@ _Available as of v2.2.0_

1. For each recipient, select which notifier type from the dropdown. Based on the type of notifier, you can use the default recipient or override the recipient with a different one. For example, if you have a notifier for _Slack_, you can update which channel to send the notification to. You can add additional notifiers by clicking **Add Recipient**.

### Configuring Notifications by YAML
_Available as of v2.2.0_

In the `notification` section, you will provide the following information:
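
The shape of the `notification` section can be sketched as follows; the recipient channel and the `notifier` ID are placeholders, and the exact field names should be verified against your Rancher version:

```yaml
stages:
- name: Build
  steps:
  - runScriptConfig:
      image: busybox
      shellScript: echo hello
notification:
  recipients:
  - recipient: "#mychannel"
    notifier: "c-abcde:n-abcde"
  condition: ["Failed", "Success", "Changed"]
```
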
@@ -594,8 +585,6 @@ Select the maximum number of pipeline executors. The _executor quota_ decides ho

### Resource Quota for Executors

_Available as of v2.2.0_

Configure compute resources for Jenkins agent containers. When a pipeline execution is triggered, a build pod is dynamically provisioned to run your CI tasks. Under the hood, a build pod consists of one Jenkins agent container and one container for each pipeline step. You can [manage compute resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) for every container in the pod.

Edit the **Memory Reservation**, **Memory Limit**, **CPU Reservation** or **CPU Limit**, then click **Update Limit and Reservation**.

@@ -635,9 +624,7 @@ stages:

>**Note:** Rancher sets default compute resources for pipeline steps except for `Build and Publish Images` and `Run Script` steps. You can override the default value by specifying compute resources in the same way.

### Custom CA

_Available as of v2.2.0_
### Custom CA

If you want to use a version control provider with a certificate from a custom/internal CA root, the CA root certificates need to be added as part of the version control provider configuration in order for the pipeline build pods to succeed.

@@ -1,9 +1,6 @@

---
title: v2.0.x Pipeline Documentation
weight: 9000
aliases:
- /rancher/v2.x/en/project-admin/tools/pipelines/docs-for-v2.0.x
- /rancher/v2.x/en/project-admin/pipelines/docs-for-v2.0.x
---

>**Note:** This section describes the pipeline feature as implemented in Rancher v2.0.x. If you are using Rancher v2.1 or later, where pipelines have been significantly improved, please refer to the new documentation for [v2.1 or later]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/pipelines/).

@@ -1,8 +1,6 @@

---
title: Example Repositories
weight: 500
aliases:
- /rancher/v2.x/en/tools/pipelines/quick-start-guide/
---

Rancher ships with several example repositories that you can use to familiarize yourself with pipelines. We recommend configuring and testing the example repository that most resembles your environment before using pipelines with your own repositories in a production environment. Use this example repository as a sandbox for repo configuration, build demonstration, etc. Rancher includes example repositories for:

@@ -1,8 +1,6 @@

---
title: Example YAML File
weight: 501
aliases:
- /rancher/v2.x/en/tools/pipelines/reference/
---

Pipelines can be configured either through the UI or using a yaml file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`.
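
A minimal skeleton of such a file can be sketched as follows; the image names and tag are placeholders, and `CICD_EXECUTION_SEQUENCE` is assumed to be one of the pipeline's substitutable variables:

```yaml
# .rancher-pipeline.yml
stages:
- name: Build
  steps:
  - runScriptConfig:
      image: golang:1.13
      shellScript: go build ./...
- name: Publish
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: example/app:${CICD_EXECUTION_SEQUENCE}
```
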
@@ -38,6 +38,4 @@ For details, refer to the [logging section.]({{<baseurl>}}/rancher/v2.x/en/clust

## Monitoring

_Available as of v2.2.0_

Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with [Prometheus](https://prometheus.io/), a leading open-source monitoring solution. For details, refer to the [monitoring section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring)

@@ -121,7 +121,6 @@ This alert type monitors for the availability of all workloads marked with tags

{{% /accordion %}}
{{% accordion id="project-expression" label="Metric Expression Alerts" %}}
<br>
_Available as of v2.2.4_

If you enable [project monitoring]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/#monitoring), this alert type monitors for the overload from Prometheus expression querying.

@@ -3,8 +3,6 @@ title: Istio in Projects

weight: 1
---

_Available as of v2.3.0_

Using Rancher, you can connect, secure, control, and observe services through integration with [Istio](https://istio.io/), a leading open-source service mesh solution. Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications.

This service mesh provides features that include but are not limited to the following:

@@ -13,8 +13,6 @@ Resource quotas can also be set when a new project is created. For details, refe

### Applying Resource Quotas to Existing Projects

_Available as of v2.0.1_

Edit [resource quotas]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas) when:

- You want to limit the resources that a project and its namespaces can use.

@@ -3,16 +3,12 @@ title: Setting Container Default Resource Limits

weight: 3
---

_Available as of v2.2.0_

When setting resource quotas, if you set anything related to CPU or Memory (i.e. limits or reservations) on a project / namespace, all containers will require a respective CPU or Memory field set during creation. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits) for more details on why this is required.

To avoid setting these limits on each and every container during workload creation, a default container resource limit can be specified on the namespace.
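
In plain Kubernetes terms, a namespace-level default like this corresponds to a `LimitRange` object; a minimal sketch (the namespace and values are placeholders, and whether Rancher manages this exact object for you is an assumption to verify):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: mynamespace
spec:
  limits:
  - type: Container
    # Applied to containers that omit resources.limits
    default:
      cpu: 500m
      memory: 256Mi
    # Applied to containers that omit resources.requests
    defaultRequest:
      cpu: 100m
      memory: 128Mi
```
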

### Editing the Container Default Resource Limit

_Available as of v2.2.0_

Edit [container default resource limit]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas/#setting-container-default-resource-limit) when:

- You have a CPU or Memory resource quota set on a project, and want to supply the corresponding default values for a container.

@@ -1,8 +1,6 @@

---
title: NFS Storage
weight: 3054
aliases:
- /rancher/v2.x/en/tasks/clusters/adding-storage/provisioning-storage/nfs/
---

Before you can use the NFS storage volume plug-in with Rancher deployments, you need to provision an NFS server.

@@ -1,8 +1,6 @@

---
title: vSphere Storage
weight: 3055
aliases:
- /rancher/v2.x/en/tasks/clusters/adding-storage/provisioning-storage/vsphere/
---

To provide stateful workloads with vSphere storage, we recommend creating a vSphereVolume [storage class]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/#storage-classes). This practice dynamically provisions vSphere storage when workloads request volumes through a [persistent volume claim]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/volumes-and-storage/persistent-volume-claims/).
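
Such a storage class can be sketched in YAML using the standard in-tree vSphere provisioner; the class name and disk format are illustrative choices:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-volume
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
```
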
@@ -3,8 +3,6 @@ title: Managing HPAs with the Rancher UI

weight: 2
---

_Available as of v2.3.0_

The Rancher UI supports creating, managing, and deleting HPAs. You can configure CPU or memory usage as the metric that the HPA uses to scale.

If you want to create HPAs that scale based on other metrics than CPU and memory, refer to [Configuring HPA to Scale Using Custom Metrics with Prometheus]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/horitzontal-pod-autoscaler/manage-hpa-with-kubectl/#configuring-hpa-to-scale-using-custom-metrics-with-prometheus).

@@ -37,8 +37,6 @@ For more information, see [Provisioning Drivers]({{<baseurl>}}/rancher/v2.x/en/a

## Adding Kubernetes Versions into Rancher

_Available as of v2.3.0_

With this feature, you can upgrade to the latest version of Kubernetes as soon as it is released, without upgrading Rancher. This feature allows you to easily upgrade Kubernetes patch versions (i.e. `v1.15.X`), but is not intended to upgrade Kubernetes minor versions (i.e. `v1.X.0`), as Kubernetes tends to deprecate or add APIs between minor versions.

The information that Rancher uses to provision [RKE clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) is now located in the Rancher Kubernetes Metadata. For details on metadata configuration and how to change the Kubernetes version used for provisioning RKE clusters, see [Rancher Kubernetes Metadata.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/k8s-metadata/)

@@ -49,6 +47,4 @@ For more information on how metadata works and how to configure metadata config,

## Enabling Experimental Features

_Available as of v2.3.0_

Rancher includes some features that are experimental and disabled by default. Feature flags were introduced to allow you to try these features. For more information, refer to the section about [feature flags.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/feature-flags/)

@@ -3,8 +3,6 @@ title: Configuring Azure AD

weight: 5
---

_Available as of v2.0.3_

If you have an instance of Active Directory (AD) hosted in Azure, you can configure Rancher to allow your users to log in using their AD accounts. Configuration of Azure AD external authentication requires you to make configurations in both Azure and Rancher.

>**Note:** Azure AD integration only supports Service Provider initiated logins.

@@ -3,8 +3,6 @@ title: Configuring FreeIPA

weight: 4
---

_Available as of v2.0.5_

If your organization uses FreeIPA for user authentication, you can configure Rancher to allow your users to log in using their FreeIPA credentials.

>**Prerequisites:**

@@ -2,7 +2,6 @@

title: Configuring Google OAuth
weight: 12
---
_Available as of v2.3.0_

If your organization uses G Suite for user authentication, you can configure Rancher to allow your users to log in using their G Suite credentials.

@@ -3,7 +3,6 @@ title: Configuring Keycloak (SAML)

description: Create a Keycloak SAML client and configure Rancher to work with Keycloak. By the end your users will be able to sign into Rancher using their Keycloak logins
weight: 7
---
_Available as of v2.1.0_

If your organization uses Keycloak Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.

@@ -2,7 +2,6 @@

title: Configuring Microsoft Active Directory Federation Service (SAML)
weight: 9
---
_Available as of v2.0.7_

If your organization uses Microsoft Active Directory Federation Services (AD FS) for user authentication, you can configure Rancher to allow your users to log in using their AD FS credentials.

@@ -2,7 +2,6 @@

title: 2 — Configuring Rancher for Microsoft AD FS
weight: 1205
---
_Available as of v2.0.7_

After you complete [Configuring Microsoft AD FS for Rancher]({{<baseurl>}}/rancher/v2.x/en/admin-settings/authentication/microsoft-adfs/microsoft-adfs-setup/), enter your AD FS information into Rancher to allow AD FS users to authenticate with Rancher.

@@ -3,8 +3,6 @@ title: Configuring Okta (SAML)

weight: 10
---

_Available as of v2.2.0_

If your organization uses Okta Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.

>**Note:** Okta integration only supports Service Provider initiated logins.

@@ -3,8 +3,6 @@ title: Configuring OpenLDAP

weight: 3
---

_Available as of v2.0.5_

If your organization uses LDAP for user authentication, you can configure Rancher to communicate with an OpenLDAP server to authenticate users. This allows Rancher admins to control access to clusters and projects based on users and groups managed externally in the organization's central user repository, while allowing end-users to authenticate with their LDAP credentials when logging in to the Rancher UI.

## Prerequisites

@@ -2,7 +2,6 @@

title: Configuring PingIdentity (SAML)
weight: 8
---
_Available as of v2.0.7_

If your organization uses Ping Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in using their IdP credentials.

@@ -3,8 +3,6 @@ title: Configuring Shibboleth (SAML)

weight: 11
---

_Available as of v2.4.0_

If your organization uses Shibboleth Identity Provider (IdP) for user authentication, you can configure Rancher to allow your users to log in to Rancher using their Shibboleth credentials.

In this configuration, when Rancher users log in, they will be redirected to the Shibboleth IdP to enter their credentials. After authentication, they will be redirected back to the Rancher UI.

@@ -3,8 +3,6 @@ title: Group Permissions with Shibboleth and OpenLDAP

weight: 1
---

_Available as of Rancher v2.4_

This page provides background information and context for Rancher users who intend to set up the Shibboleth authentication provider in Rancher.

Because Shibboleth is a SAML provider, it does not support searching for groups. While a Shibboleth integration can validate user credentials, it can't be used to assign permissions to groups in Rancher without additional configuration.

@@ -23,8 +23,6 @@ Whenever a user logs in to the UI using an authentication provider, Rancher auto

### Automatically Refreshing User Information

_Available as of v2.2.0_

Rancher will periodically refresh the user information even before a user logs in through the UI. You can control how often Rancher performs this refresh. From the **Global** view, click on **Settings**. Two settings control this behavior:

- **`auth-user-info-max-age-seconds`**

@@ -53,8 +51,6 @@ If you are not sure the last time Rancher performed an automatic refresh of user

## Session Length

_Available as of v2.3.0_

The default length (TTL) of each user session is adjustable. The default session length is 16 hours.

1. From the **Global** view, click on **Settings**.

@@ -92,8 +92,6 @@ The steps to add custom roles differ depending on the version of Rancher.

## Creating a Custom Global Role

_Available as of v2.4.0_

### Creating a Custom Global Role that Copies Rules from an Existing Role

If you have a group of individuals that need the same level of access in Rancher, it can save time to create a custom global role in which all of the rules from another role, such as the administrator role, are copied into a new role. This allows you to only configure the variations between the existing role and the new role.

@@ -122,8 +120,6 @@ Custom global roles don't have to be based on existing roles. To create a custom

## Deleting a Custom Global Role

_Available as of v2.4.0_

When deleting a custom global role, all global role bindings with this custom role are deleted.

If a user is only assigned one custom global role, and the role is deleted, the user would lose access to Rancher. For the user to regain access, an administrator would need to edit the user and apply new global permissions.

@@ -138,8 +134,6 @@ To delete a custom global role,

## Assigning a Custom Global Role to a Group

_Available as of v2.4.0_

If you have a group of individuals that need the same level of access in Rancher, it can save time to create a custom global role. When the role is assigned to a group, the users in the group have the appropriate level of access the first time they sign into Rancher.

When a user in the group logs in, they get the built-in Standard User global role by default. They will also get the permissions assigned to their groups.

@@ -130,8 +130,6 @@ To configure permission for a user,

### Configuring Global Permissions for Groups

_Available as of v2.4.0_

If you have a group of individuals that need the same level of access in Rancher, it can save time to assign permissions to the entire group at once, so that the users in the group have the appropriate level of access the first time they sign into Rancher.

After you assign a custom global role to a group, the custom global role will be assigned to a user in the group when they log in to Rancher.

@@ -101,12 +101,10 @@ The `S3` backup target allows users to configure a S3 compatible backend to stor

|S3 Region Endpoint|S3 region endpoint for the backup bucket|*|
|S3 Access Key|S3 access key with permission to access the backup bucket|*|
|S3 Secret Key|S3 secret key with permission to access the backup bucket|*|
| Custom CA Certificate | A custom certificate used to access private S3 backends _Available as of v2.2.5_ ||
| Custom CA Certificate | A custom certificate used to access private S3 backends ||

### Using a custom CA certificate for S3

_Available as of v2.2.5_

The backup snapshot can be stored on a custom `S3` backend like [minio](https://min.io/). If the S3 back end uses a self-signed or custom certificate, provide a custom certificate using the `Custom CA Certificate` option to connect to the S3 backend.

### IAM Support for Storing Snapshots in S3

@@ -130,8 +128,6 @@ The list of all available snapshots for the cluster is available in the Rancher

# Safe Timestamps

_Available as of v2.3.0_

As of v2.2.6, snapshot files are timestamped to simplify processing the files using external tools and scripts, but in some S3 compatible backends, these timestamps were unusable. As of Rancher v2.3.0, the option `safe_timestamp` is added to support compatible file names. When this flag is set to `true`, all special characters in the snapshot filename timestamp are replaced.

This option is not available directly in the UI, and is only available through the `Edit as Yaml` interface.
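
A hedged sketch of where the flag would sit in the cluster's YAML (the surrounding `backup_config` keys come from the RKE etcd snapshot options; the exact nesting should be verified against the `Edit as Yaml` view of your cluster):

```yaml
rancher_kubernetes_engine_config:
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        # Replaces special characters in snapshot filename timestamps
        safe_timestamp: true
```
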
@@ -3,8 +3,6 @@ title: Cluster Drivers

weight: 1
---

_Available as of v2.2.0_

Cluster drivers are used to create clusters in a [hosted Kubernetes provider]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/), such as Google GKE. A cluster driver's status determines whether it is available when creating clusters: only `active` cluster drivers are displayed as options for cluster creation. By default, Rancher is packaged with several existing cloud provider cluster drivers, but you can also add custom cluster drivers to Rancher.

If there are specific cluster drivers that you do not want to show your users, you may deactivate those cluster drivers within Rancher and they will not appear as an option for cluster creation.

@@ -115,8 +115,6 @@ docker run -d -p 80:80 -p 443:443 \

# Enabling Features with the Rancher UI

_Available as of Rancher v2.3.3_

1. Go to the **Global** view and click **Settings.**
1. Click the **Feature Flags** tab. You will see a list of experimental features.
1. To enable a feature, go to the disabled feature you want to enable and click **⋮ > Activate.**

@@ -1,10 +1,7 @@

---
title: Allow Unsupported Storage Drivers
weight: 1
aliases:
- /rancher/v2.x/en/admin-settings/feature-flags/enable-not-default-storage-drivers
---
_Available as of v2.3.0_

This feature allows you to use types for storage providers and provisioners that are not enabled by default.

@@ -1,10 +1,7 @@

---
title: UI for Istio Virtual Services and Destination Rules
weight: 2
aliases:
- /rancher/v2.x/en/admin-settings/feature-flags/istio-virtual-service-ui
---
_Available as of v2.3.0_

This feature enables a UI that lets you create, read, update and delete virtual services and destination rules, which are traffic management features of Istio.

@@ -184,8 +184,6 @@ Once drain successfully completes, the node will be in a state of `drained`. You

# Labeling a Node to be Ignored by Rancher

_Available as of 2.3.3_

Some solutions, such as F5's BIG-IP integration, may require creating a node that is never registered to a cluster.

Since the node will never finish registering, it will always be shown as unhealthy in the Rancher UI.

@@ -25,8 +25,6 @@ Using Rancher, you can create a Pod Security Policy using our GUI rather than cr

## Default Pod Security Policies

_Available as of v2.0.7_

Rancher ships with two default Pod Security Policies (PSPs): the `restricted` and `unrestricted` policies.

- `restricted`

@@ -90,8 +90,6 @@ If you require another level of organization beyond the **Default** project, you

### The System Project

_Available as of v2.0.7_

When troubleshooting, you can view the `system` project to check if important namespaces in the Kubernetes system are working properly. This easily accessible project saves you from troubleshooting individual system namespace containers.

To open it, open the **Global** menu, and then select the `system` project for your cluster.

@@ -167,8 +165,6 @@ To add members:

### 4. Optional: Add Resource Quotas

_Available as of v2.1.0_

Resource quotas limit the resources that a project (and its namespaces) can consume. For more information, see [Resource Quotas]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/resource-quotas).

To add a resource quota,

@@ -44,8 +44,6 @@ As of Rancher v2.3.3, an existing cluster's settings can be [saved as an RKE tem

### Converting an Existing Cluster to Use an RKE Template

_Available as of v2.3.3_

This section describes how to create an RKE template from an existing cluster.

RKE templates cannot be applied to existing clusters, except if you save an existing cluster's settings as an RKE template. This exports the cluster's settings as a new RKE template, and also binds the cluster to that template. The result is that the cluster can only be changed if the [template is updated,]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/#updating-a-template) and the cluster is upgraded to [use a newer version of the template.]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rke-templates/creating-and-revising/#upgrading-a-cluster-to-use-a-new-template-revision)

@@ -80,8 +80,6 @@ For more information, refer to the section on [importing existing clusters.]({{<
|
||||
|
||||
### Importing and Editing K3s Clusters
|
||||
|
||||
_Available as of Rancher v2.4.0_
|
||||
|
||||
[K3s]({{<baseurl>}}/k3s/latest/en/) is lightweight, fully compliant Kubernetes distribution. K3s Kubernetes clusters can now be imported into Rancher.
|
||||
|
||||
When a K3s cluster is imported, Rancher will recognize it as K3s, and the Rancher UI will expose the following features in addition to the functionality for other imported clusters:
|
||||
|
||||
@@ -4,8 +4,6 @@ shortTitle: Alibaba Cloud Container Service for Kubernetes
weight: 2120
---

_Available as of v2.2.0_

You can use Rancher to create a cluster hosted in Alibaba Cloud Kubernetes (ACK). Rancher has already implemented and packaged the [cluster driver]({{<baseurl>}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/) for ACK, but by default, this cluster driver is `inactive`. In order to launch ACK clusters, you will need to [enable the ACK cluster driver]({{<baseurl>}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/#activating-deactivating-cluster-drivers). After enabling the cluster driver, you can start provisioning ACK clusters.

## Prerequisites

@@ -2,8 +2,6 @@
title: Creating an AKS Cluster
shortTitle: Azure Kubernetes Service
weight: 2115
aliases:
- /rancher/v2.x/en/tasks/clusters/creating-a-cluster/create-cluster-azure-container-service/
---

You can use Rancher to create a cluster hosted in Microsoft Azure Kubernetes Service (AKS).

@@ -4,8 +4,6 @@ shortTitle: Huawei Cloud Kubernetes Service
weight: 2130
---

_Available as of v2.2.0_

You can use Rancher to create a cluster hosted in Huawei Cloud Container Engine (CCE). Rancher has already implemented and packaged the [cluster driver]({{<baseurl>}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/) for CCE, but by default, this cluster driver is `inactive`. In order to launch CCE clusters, you will need to [enable the CCE cluster driver]({{<baseurl>}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/#activating-deactivating-cluster-drivers). After enabling the cluster driver, you can start provisioning CCE clusters.

## Prerequisites in Huawei

@@ -2,8 +2,6 @@
title: Creating an EKS Cluster
shortTitle: Amazon EKS
weight: 2110
aliases:
- /rancher/v2.x/en/tasks/clusters/creating-a-cluster/create-cluster-eks/
---

Amazon EKS provides a managed control plane for your Kubernetes cluster. Amazon EKS runs the Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Rancher provides an intuitive user interface for managing and deploying the Kubernetes clusters you run in Amazon EKS. With this guide, you will use Rancher to quickly and easily launch an Amazon EKS Kubernetes cluster in your AWS account. For more information on Amazon EKS, see this [documentation](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html).

@@ -2,8 +2,6 @@
title: Creating a GKE Cluster
shortTitle: Google Kubernetes Engine
weight: 2105
aliases:
- /rancher/v2.x/en/tasks/clusters/creating-a-cluster/create-cluster-gke/
---

## Prerequisites in Google Kubernetes Engine

@@ -4,8 +4,6 @@ shortTitle: Tencent Kubernetes Engine
weight: 2125
---

_Available as of v2.2.0_

You can use Rancher to create a cluster hosted in Tencent Kubernetes Engine (TKE). Rancher has already implemented and packaged the [cluster driver]({{<baseurl>}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/) for TKE, but by default, this cluster driver is `inactive`. In order to launch TKE clusters, you will need to [enable the TKE cluster driver]({{<baseurl>}}/rancher/v2.x/en/admin-settings/drivers/cluster-drivers/#activating-deactivating-cluster-drivers). After enabling the cluster driver, you can start provisioning TKE clusters.

## Prerequisites in Tencent

@@ -76,8 +76,6 @@ You can now import a K3s Kubernetes cluster into Rancher. [K3s]({{<baseurl>}}/k3

### Additional Features for Imported K3s Clusters

_Available as of v2.4.0_

When a K3s cluster is imported, Rancher will recognize it as K3s, and the Rancher UI will expose the following features in addition to the functionality for other imported clusters:

- The ability to upgrade the K3s version

@@ -84,8 +84,6 @@ If you want to see all the configuration options for a cluster, please click **S

### Private registries

_Available as of v2.2.0_

The cluster-level private registry configuration is only used for provisioning clusters.

There are two main ways to set up private registries in Rancher: by setting up the [global default registry]({{<baseurl>}}/rancher/v2.x/en/admin-settings/config-private-registry) through the **Settings** tab in the global view, and by setting up a private registry in the advanced options in the cluster-level settings. The global default registry is intended to be used for air-gapped setups, for registries that do not require credentials. The cluster-level private registry is intended to be used in all setups in which the private registry requires credentials.
@@ -101,8 +99,6 @@ See the [RKE documentation on private registries]({{<baseurl>}}/rke/latest/en/co

### Authorized Cluster Endpoint

_Available as of v2.2.0_

Authorized Cluster Endpoint can be used to directly access the Kubernetes API server, without requiring communication through Rancher.

> The authorized cluster endpoint only works on Rancher-launched Kubernetes clusters. In other words, it only works in clusters where Rancher [used RKE]({{<baseurl>}}/rancher/v2.x/en/overview/architecture/#tools-for-provisioning-kubernetes-clusters) to provision the cluster. It is not available for clusters in a hosted Kubernetes provider, such as Amazon's EKS.
@@ -351,8 +347,6 @@ The table below indicates what DNS provider is deployed by default. See [RKE doc

# Rancher specific parameters

_Available as of v2.2.0_

Besides the RKE config file options, there are also Rancher specific settings that can be configured in the Config File (YAML):

### docker_root_dir
@@ -382,8 +376,6 @@ local_cluster_auth_endpoint:

### Custom Network Plug-in

_Available as of v2.2.4_

You can add a custom network plug-in by using the [user-defined add-on functionality]({{<baseurl>}}/rke/latest/en/config-options/add-ons/user-defined-add-ons/) of RKE. You define any add-on that you want deployed after the Kubernetes cluster is deployed.

There are two ways that you can specify an add-on:

@@ -1,9 +1,6 @@
---
title: Rancher Agent Options
weight: 2500
aliases:
- /rancher/v2.x/en/admin-settings/agent-options/
- /rancher/v2.x/en/cluster-provisioning/custom-clusters/agent-options
---

Rancher deploys an agent on each node to communicate with the node. This page describes the options that can be passed to the agent. To use these options, you will need to [create a cluster with custom nodes]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes) and add the options to the generated `docker run` command when adding a node.

@@ -35,16 +35,12 @@ You can add [labels](https://kubernetes.io/docs/concepts/overview/working-with-o

### Node Taints

_Available as of Rancher v2.3.0_

You can add [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on each node template, so that any nodes created from the node template will automatically have these taints on them.

Since taints can be added at both the node template and the node pool, all taints are added to the nodes as long as there is no conflict between taints with the same key and effect. If there are taints with the same key but a different effect, the taints from the node pool override the taints from the node template.
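
That override rule can be sketched in a few lines of shell. This is only an illustration of the documented merge behavior, with made-up taint values; for simplicity, the sketch treats any shared taint key as a conflict that the node pool wins:

```shell
# Taints defined on a hypothetical node template and node pool.
template_taints="dedicated=web:NoSchedule gpu=true:NoExecute"
pool_taints="dedicated=db:NoExecute"

# Start from the pool's taints, then keep only the template taints whose key
# is not already claimed by a pool taint (the pool overrides the template).
merged="$pool_taints"
for t in $template_taints; do
  key="${t%%=*}"
  clash=no
  for p in $pool_taints; do
    if [ "${p%%=*}" = "$key" ]; then clash=yes; fi
  done
  if [ "$clash" = no ]; then merged="$merged $t"; fi
done
echo "merged: $merged"
# merged: dedicated=db:NoExecute gpu=true:NoExecute
```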

### Administrator Control of Node Templates

_Available as of v2.3.3_

Administrators can maintain all the node templates within Rancher. When a node template owner is no longer using Rancher, the node templates they created can be managed by administrators so that clusters can continue to be updated and maintained.

To access all node templates, an administrator will need to do the following:
@@ -62,8 +58,6 @@ Each node pool is assigned with a [node component]({{<baseurl>}}/rancher/v2.x/en

### Node Pool Taints

_Available as of Rancher v2.3.0_

If you haven't defined [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) on your node template, you can add taints for each node pool. The benefit of adding taints at the node pool rather than the node template is that you can swap out node templates without worrying about whether the taint is on the node template.

Each taint is automatically added to any node created in the node pool. Therefore, if you add taints to a node pool that has existing nodes, the taints won't apply to the existing nodes in the node pool, but any new node added to the node pool will get the taint.
@@ -72,8 +66,6 @@ When there are taints on the node pool and node template, if there is no conflic

### About Node Auto-replace

_Available as of Rancher v2.3.0_

If a node is in a node pool, Rancher can automatically replace unreachable nodes. Rancher will use the existing node template for the given node pool to recreate the node if it becomes inactive for a specified number of minutes.

> **Important:** Self-healing node pools are designed to help you replace worker nodes for **stateless** applications. It is not recommended to enable node auto-replace on a node pool of master nodes or nodes with persistent volumes attached, because VMs are treated ephemerally. When a node in a node pool loses connectivity with the cluster, its persistent volumes are destroyed, resulting in data loss for stateful applications.
@@ -114,8 +106,6 @@ You can disable node auto-replace from the Rancher UI with the following steps:

# Cloud Credentials

_Available as of v2.2.0_

Node templates can use cloud credentials to store the credentials used for launching nodes in your cloud provider, which has some benefits:

- Credentials are stored as a Kubernetes secret, which is not only more secure, but it also allows you to edit a node template without having to enter your credentials every time.

@@ -2,8 +2,6 @@
title: Creating an Azure Cluster
shortTitle: Azure
weight: 2220
aliases:
- /rancher/v2.x/en/tasks/clusters/creating-a-cluster/create-cluster-azure/
---

Use {{< product >}} to create a Kubernetes cluster in Azure.

@@ -2,8 +2,6 @@
title: Creating a DigitalOcean Cluster
shortTitle: DigitalOcean
weight: 2215
aliases:
- /rancher/v2.x/en/tasks/clusters/creating-a-cluster/create-cluster-digital-ocean/
---
Use {{< product >}} to create a Kubernetes cluster using DigitalOcean.

@@ -4,8 +4,6 @@ shortTitle: vSphere
description: Use Rancher to create a vSphere cluster. It may consist of groups of VMs with distinct properties which allow for fine-grained control over the sizing of nodes.
metaDescription: Use Rancher to create a vSphere cluster. It may consist of groups of VMs with distinct properties which allow for fine-grained control over the sizing of nodes.
weight: 2225
aliases:
- /rancher/v2.x/en/tasks/clusters/creating-a-cluster/create-cluster-vsphere/
---

By using Rancher with vSphere, you can bring cloud operations on-premises.
@@ -20,16 +18,12 @@ The vSphere node templates have been updated, allowing you to bring cloud operat

### Self-healing Node Pools

_Available as of v2.3.0_

One of the biggest advantages of provisioning vSphere nodes with Rancher is that it allows you to take advantage of Rancher's self-healing node pools, also called the [node auto-replace feature,]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/#node-auto-replace) in your on-premises clusters. Self-healing node pools are designed to help you replace worker nodes for stateless applications. When Rancher provisions nodes from a node template, Rancher can automatically replace unreachable nodes.

> **Important:** It is not recommended to enable node auto-replace on a node pool of master nodes or nodes with persistent volumes attached, because VMs are treated ephemerally. When a node in a node pool loses connectivity with the cluster, its persistent volumes are destroyed, resulting in data loss for stateful applications.

### Dynamically Populated Options for Instances and Scheduling

_Available as of v2.3.3_

Node templates for vSphere have been updated so that when you create a node template with your vSphere credentials, the template is automatically populated with the same options for provisioning VMs that you have access to in the vSphere console.

For the fields to be populated, your setup needs to fulfill the [prerequisites.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/#prerequisites)

@@ -169,8 +169,6 @@ Ensure that the [OS ISO URL](#instance-options) contains the URL of the VMware I

### D. Add Networks

_Available as of v2.3.3_

The node template now allows a VM to be provisioned with multiple networks. In the **Networks** field, you can now click **Add Network** to add any networks available to you in vSphere.

### E. If Not Already Enabled, Enable Disk UUIDs

@@ -85,8 +85,6 @@ The cluster cannot be downgraded to a previous Kubernetes version.

# Rolling Back

_Available as of v2.4_

A cluster can be restored to a backup in which the previous Kubernetes version was used. For more information, refer to the following sections:

- [Backing up a cluster]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/#how-snapshots-work)

@@ -96,8 +96,6 @@ For more information, see the following pages:



_Available as of v2.2.0_

Weave enables networking and network policy in Kubernetes clusters across the cloud. Additionally, it supports encrypting traffic between the peers.

Kubernetes workers should open TCP port `6783` (control port), UDP port `6783` and UDP port `6784` (data ports). See the [port requirements for user clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/node-requirements/#networking-requirements/) for more details.

@@ -1,11 +1,6 @@
---
title: Rancher is No Longer Needed
weight: 8010
aliases:
- /rancher/v2.x/en/installation/removing-rancher/cleaning-cluster-nodes/
- /rancher/v2.x/en/installation/removing-rancher/
- /rancher/v2.x/en/admin-settings/removing-rancher/
- /rancher/v2.x/en/admin-settings/removing-rancher/rancher-cluster-nodes/
---

This page is intended to answer questions about what happens if you don't want Rancher anymore, if you don't want a cluster to be managed by Rancher anymore, or if the Rancher server is deleted.

@@ -58,8 +58,8 @@ weight: 2
| `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag |
| `rancherImagePullPolicy` | "IfNotPresent" | `string` - Override imagePullPolicy for rancher server images - "Always", "Never", "IfNotPresent" |
| `tls` | "ingress" | `string` - See [External TLS Termination](#external-tls-termination) for details. - "ingress, external" |
| `systemDefaultRegistry` | "" | `string` - private registry to be used for all system Docker images, e.g., http://registry.example.com/ _Available as of v2.3.0_ |
| `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. _Available as of v2.3.0_ |
| `systemDefaultRegistry` | "" | `string` - private registry to be used for all system Docker images, e.g., http://registry.example.com/ |
| `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air gapped installations. |

<br/>

@@ -79,8 +79,6 @@ Set the `auditLog.destination` to `hostPath` to forward logs to volume shared wi

### Setting Extra Environment Variables

_Available as of v2.2.0_

You can set extra environment variables for Rancher server using `extraEnv`. This list uses the same `name` and `value` keys as the container manifest definitions. Remember to quote the values.

```plain
@@ -90,8 +88,6 @@ You can set extra environment variables for Rancher server using `extraEnv`. Thi

### TLS Settings

_Available as of v2.2.0_

To set a different TLS configuration, you can use the `CATTLE_TLS_MIN_VERSION` and `CATTLE_TLS_CIPHERS` environment variables. For example, to configure TLS 1.0 as minimum accepted TLS version:

```plain
@@ -123,8 +119,6 @@ Example on setting a custom certificate issuer:
--set ingress.extraAnnotations.'certmanager\.k8s\.io/cluster-issuer'=ca-key-pair
```

_Available as of v2.0.15, v2.1.10 and v2.2.4_

Example of setting a static proxy header with `ingress.configurationSnippet`. This value is parsed like a template, so variables can be used.

```plain

@@ -3,8 +3,6 @@ title: Running on ARM64 (Experimental)
weight: 3
---

_Available as of v2.2.0_

> **Important:**
>
> Running on an ARM64 platform is currently an experimental feature and is not yet officially supported in Rancher. Therefore, we do not recommend using ARM64 based nodes in a production environment.

@@ -76,7 +76,7 @@ When setting up the Rancher Helm template, there are several options in the Helm
| ----------------------- | -------------------------------- | ---- |
| `certmanager.version` | "<version>" | Configure proper Rancher TLS issuer depending on the running cert-manager version. |
| `systemDefaultRegistry` | `<REGISTRY.YOURDOMAIN.COM:PORT>` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
| `useBundledSystemChart` | `true` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. _Available as of v2.3.0_ |
| `useBundledSystemChart` | `true` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. |

Based on the choice you made in [B. Choose your SSL Configuration](#b-choose-your-ssl-configuration), complete one of the procedures below.

@@ -240,7 +240,7 @@ For security purposes, SSL (Secure Sockets Layer) is required when using Rancher
| Environment Variable Key | Environment Variable Value | Description |
| -------------------------------- | -------------------------------- | ---- |
| `CATTLE_SYSTEM_DEFAULT_REGISTRY` | `<REGISTRY.YOURDOMAIN.COM:PORT>` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
| `CATTLE_SYSTEM_CATALOG` | `bundled` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. _Available as of v2.3.0_ |
| `CATTLE_SYSTEM_CATALOG` | `bundled` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. |

> **Do you want to...**
>

@@ -101,8 +101,6 @@ The `rancher-images.txt` is expected to be on the workstation in the same direct
{{% /tab %}}
{{% tab "Linux and Windows Clusters" %}}

_Available as of v2.3.0_

For Rancher servers that will provision Linux and Windows clusters, there are distinct steps to populate your private registry for the Windows images and the Linux images. Since a Windows cluster is a mix of Linux and Windows nodes, the Linux images pushed into the private registry are manifests.

### Windows Steps

@@ -3,8 +3,6 @@ title: TLS Settings
weight: 3
---

_Available as of v2.1.7_

In Rancher v2.1.7, the default TLS configuration changed to only accept TLS 1.2 and secure TLS cipher suites. TLS 1.3 and TLS 1.3 exclusive cipher suites are not supported.

## Configuring TLS settings

@@ -1,7 +1,6 @@
---
title: Setting up an NGINX Load Balancer
weight: 4
aliases:
---

NGINX will be configured as a Layer 4 load balancer (TCP) that forwards connections to one of your Rancher nodes.

@@ -51,8 +51,6 @@ docker run -d --restart=unless-stopped \

### TLS settings

_Available as of v2.1.7_

To set a different TLS configuration, you can use the `CATTLE_TLS_MIN_VERSION` and `CATTLE_TLS_CIPHERS` environment variables. For example, to configure TLS 1.0 as minimum accepted TLS version:

```

@@ -1,11 +1,7 @@
---
title: CPU and Memory Allocations
weight: 1
aliases:
- /rancher/v2.x/en/project-admin/istio/configuring-resource-allocations/_index.md
- /rancher/v2.x/en/project-admin/istio/config/_index.md
---
_Available as of v2.3.0_

This section describes the minimum recommended computing resources for the Istio components in a cluster.

@@ -5,117 +5,3 @@ description: Rancher integrates with popular logging services. Learn the require
metaDescription: "Rancher integrates with popular logging services. Learn the requirements and benefits of integrating with logging services, and enable logging on your cluster."
weight: 9
---

Logging is helpful because it allows you to:

- Capture and analyze the state of your cluster
- Look for trends in your environment
- Save your logs to a safe location outside of your cluster
- Stay informed of events like a container crashing, a pod eviction, or a node dying
- More easily debug and troubleshoot problems

Rancher supports integration with the following services:

- Elasticsearch
- Splunk
- Kafka
- Syslog
- Fluentd

This section covers the following topics:

- [How logging integrations work](#how-logging-integrations-work)
- [Requirements](#requirements)
- [Logging scope](#logging-scope)
- [Enabling cluster logging](#enabling-cluster-logging)

# How Logging Integrations Work

Rancher can integrate with popular external services used for event streams, telemetry, or search. These services can log errors and warnings in your Kubernetes infrastructure to a stream.

These services collect container log events, which are saved to the `/var/log/containers` directory on each of your nodes. The service collects both standard and error events. You can then log into your services to review the events collected, leveraging each service's unique features.

When configuring Rancher to integrate with these services, you'll have to point Rancher toward the service's endpoint and provide authentication information.

Additionally, you'll have the opportunity to enter key-value pairs to filter the log events collected. The service will only collect events for containers marked with your configured key-value pairs.

>**Note:** You can only configure one logging service per cluster or per project.

# Requirements

The Docker daemon on each node in the cluster should be [configured](https://docs.docker.com/config/containers/logging/configure/) with the (default) log-driver: `json-file`. You can check the log-driver by running the following command:

```
$ docker info | grep 'Logging Driver'
Logging Driver: json-file
```
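
Scripting the same check across many nodes is straightforward. In this sketch the canned sample string stands in for a live `docker info` call, so the parsing can be shown without a running daemon:

```shell
# Swap the sample line for the real command on an actual node:
#   info_line="$(docker info 2>/dev/null | grep 'Logging Driver')"
info_line="Logging Driver: json-file"

# Strip the prefix to get the bare driver name, then compare it
# against the json-file requirement described above.
driver="${info_line#Logging Driver: }"
if [ "$driver" = "json-file" ]; then
  echo "OK: log driver is json-file"
else
  echo "WARN: log driver is '$driver'; Rancher logging expects json-file"
fi
```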

# Logging Scope

You can configure logging at either the cluster level or the project level.

- Cluster logging writes logs for every pod in the cluster, i.e. in all the projects. For [RKE clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters), it also writes logs for all the Kubernetes system components.
- [Project logging]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/logging/) writes logs for every pod in that particular project.

Logs that are sent to your logging service are from the following locations:

- Pod logs stored at `/var/log/containers`.
- Kubernetes system components logs stored at `/var/lib/rancher/rke/log/`.
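
To see what would actually be shipped from a given node, you can spot-check those two directories directly. The paths are the ones listed above; the file counts depend entirely on what is running, and on a machine that is not a cluster node the directories simply won't exist:

```shell
# Count the log files under each location the log collectors read from.
for dir in /var/log/containers /var/lib/rancher/rke/log; do
  if [ -d "$dir" ]; then
    echo "$dir: $(ls "$dir" | wc -l) log files"
  else
    echo "$dir: not present on this node"
  fi
done
```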
|
||||
|
||||
# Enabling Cluster Logging
|
||||
|
||||
As an [administrator]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to send Kubernetes logs to a logging service.
|
||||
|
||||
1. From the **Global** view, navigate to the cluster that you want to configure cluster logging.
|
||||
|
||||
1. Select **Tools > Logging** in the navigation bar.
|
||||
|
||||
1. Select a logging service and enter the configuration. Refer to the specific service for detailed configuration. Rancher supports integration with the following services:
|
||||
|
||||
- [Elasticsearch]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/elasticsearch/)
|
||||
- [Splunk]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/splunk/)
|
||||
- [Kafka]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/kafka/)
|
||||
- [Syslog]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/syslog/)
|
||||
- [Fluentd]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/fluentd/)
|
||||
|
||||
1. (Optional) Instead of using the UI to configure the logging services, you can enter custom advanced configurations by clicking on **Edit as File**, which is located above the logging targets. This link is only visible after you select a logging service.
|
||||
|
||||
- With the file editor, enter raw fluentd configuration for any logging service. Refer to the documentation for each logging service on how to setup the output configuration.
|
||||
|
||||
- [Elasticsearch Documentation](https://github.com/uken/fluent-plugin-elasticsearch)
|
||||
- [Splunk Documentation](https://github.com/fluent/fluent-plugin-splunk)
|
||||
- [Kafka Documentation](https://github.com/fluent/fluent-plugin-kafka)
|
||||
- [Syslog Documentation](https://github.com/dlackty/fluent-plugin-remote_syslog)
|
||||
- [Fluentd Documentation](https://docs.fluentd.org/v1.0/articles/out_forward)
|
||||
|
||||
- If the logging service is using TLS, you also need to complete the **SSL Configuration** form.
|
||||
1. Provide the **Client Private Key** and **Client Certificate**. You can either copy and paste them or upload them by using the **Read from a file** button.
|
||||
|
||||
- You can use either a self-signed certificate or one provided by a certificate authority.
|
||||
|
||||
- You can generate a self-signed certificate using an openssl command. For example:
|
||||
|
||||
```
|
||||
openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
|
||||
```
|
||||
2. If you are using a self-signed certificate, provide the **CA Certificate PEM**.
|
||||
|
||||
1. (Optional) Complete the **Additional Logging Configuration** form.
|
||||
|
||||
1. **Optional:** Use the **Add Field** button to add custom log fields to your logging configuration. These fields are key value pairs (such as `foo=bar`) that you can use to filter the logs from another system.
|
||||
|
||||
1. Enter a **Flush Interval**. This value determines how often [Fluentd](https://www.fluentd.org/) flushes data to the logging server. Intervals are measured in seconds.
|
||||
|
||||
1. **Include System Log**. The logs from pods in system project and RKE components will be sent to the target. Uncheck it to exclude the system logs.
|
||||
|
||||
1. Click **Test**. Rancher sends a test log to the service.
|
||||
|
||||
> **Note:** This button is replaced with _Dry Run_ if you are using the custom configuration editor. In this case, Rancher calls the fluentd dry run command to validate the configuration.
|
||||
|
||||
1. Click **Save**.
|
||||
|
||||
**Result:** Rancher is now configured to send logs to the selected service. Log into the logging service so that you can start viewing the logs.
|
||||
|
||||
## Related Links
|
||||
|
||||
[Logging Architecture](https://kubernetes.io/docs/concepts/cluster-administration/logging/)
|
||||
|
||||
@@ -1,4 +0,0 @@
|
||||
---
|
||||
title: Legacy UI Docs
|
||||
weight: 2
|
||||
---
|
||||
@@ -0,0 +1,118 @@
|
||||
---
|
||||
title: Legacy UI Docs
|
||||
weight: 2
|
||||
---
|
||||
|
||||
Logging is helpful because it allows you to:

- Capture and analyze the state of your cluster
- Look for trends in your environment
- Save your logs to a safe location outside of your cluster
- Stay informed of events like a container crashing, a pod eviction, or a node dying
- More easily debug and troubleshoot problems

Rancher supports integration with the following services:

- Elasticsearch
- Splunk
- Kafka
- Syslog
- Fluentd

This section covers the following topics:

- [How logging integrations work](#how-logging-integrations-work)
- [Requirements](#requirements)
- [Logging scope](#logging-scope)
- [Enabling cluster logging](#enabling-cluster-logging)
# How Logging Integrations Work

Rancher can integrate with popular external services used for event streams, telemetry, or search. These services can log errors and warnings in your Kubernetes infrastructure to a stream.

These services collect container log events, which are saved to the `/var/log/containers` directory on each of your nodes. The services collect both standard output and standard error events. You can then log into your services to review the events collected, leveraging each service's unique features.

When configuring Rancher to integrate with these services, you'll have to point Rancher toward the service's endpoint and provide authentication information.

Additionally, you'll have the opportunity to enter key-value pairs to filter the log events collected. The service will only collect events for containers marked with your configured key-value pairs.

>**Note:** You can only configure one logging service per cluster or per project.
# Requirements

The Docker daemon on each node in the cluster should be [configured](https://docs.docker.com/config/containers/logging/configure/) with the (default) log-driver: `json-file`. You can check the log-driver by running the following command:

```
$ docker info | grep 'Logging Driver'
Logging Driver: json-file
```
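If `docker info` reports a different driver, it can usually be switched back by setting the default in `/etc/docker/daemon.json` on the node and restarting Docker. A minimal sketch; the rotation settings under `log-opts` are optional illustrative additions, not a Rancher requirement:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

After editing the file, restart the Docker daemon (for example, `sudo systemctl restart docker`) and re-run the `docker info` check. Note that changing the default driver only affects containers created afterward.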
# Logging Scope

You can configure logging at either the cluster level or the project level.

- Cluster logging writes logs for every pod in the cluster, i.e., in all projects. For [RKE clusters]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters), it also writes logs for all the Kubernetes system components.
- [Project logging]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/logging/) writes logs for every pod in that particular project.

Logs that are sent to your logging service are from the following locations:

- Pod logs stored at `/var/log/containers`.
- Kubernetes system components logs stored at `/var/lib/rancher/rke/log/`.
# Enabling Cluster Logging

As an [administrator]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to send Kubernetes logs to a logging service.

1. From the **Global** view, navigate to the cluster for which you want to configure cluster logging.

1. Select **Tools > Logging** in the navigation bar.

1. Select a logging service and enter the configuration. Refer to the specific service for detailed configuration. Rancher supports integration with the following services:

    - [Elasticsearch]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/elasticsearch/)
    - [Splunk]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/splunk/)
    - [Kafka]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/kafka/)
    - [Syslog]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/syslog/)
    - [Fluentd]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/logging/fluentd/)

1. (Optional) Instead of using the UI to configure the logging services, you can enter custom advanced configurations by clicking on **Edit as File**, which is located above the logging targets. This link is only visible after you select a logging service.

    - With the file editor, enter raw fluentd configuration for any logging service. Refer to the documentation for each logging service on how to set up the output configuration.

        - [Elasticsearch Documentation](https://github.com/uken/fluent-plugin-elasticsearch)
        - [Splunk Documentation](https://github.com/fluent/fluent-plugin-splunk)
        - [Kafka Documentation](https://github.com/fluent/fluent-plugin-kafka)
        - [Syslog Documentation](https://github.com/dlackty/fluent-plugin-remote_syslog)
        - [Fluentd Documentation](https://docs.fluentd.org/v1.0/articles/out_forward)

    - If the logging service is using TLS, you also need to complete the **SSL Configuration** form.
1. Provide the **Client Private Key** and **Client Certificate**. You can either copy and paste them or upload them by using the **Read from a file** button.

    - You can use either a self-signed certificate or one provided by a certificate authority.

    - You can generate a self-signed certificate using an openssl command. For example:

      ```
      openssl req -x509 -newkey rsa:2048 -keyout myservice.key -out myservice.cert -days 365 -nodes -subj "/CN=myservice.example.com"
      ```

2. If you are using a self-signed certificate, provide the **CA Certificate PEM**.

1. (Optional) Complete the **Additional Logging Configuration** form.

    1. **Optional:** Use the **Add Field** button to add custom log fields to your logging configuration. These fields are key-value pairs (such as `foo=bar`) that you can use to filter the logs from another system.

    1. Enter a **Flush Interval**. This value determines how often [Fluentd](https://www.fluentd.org/) flushes data to the logging server. Intervals are measured in seconds.

    1. **Include System Log**: Logs from pods in the system project and from RKE components are sent to the target. Uncheck this option to exclude system logs.

1. Click **Test**. Rancher sends a test log to the service.

    > **Note:** This button is replaced with _Dry Run_ if you are using the custom configuration editor. In this case, Rancher calls the fluentd dry run command to validate the configuration.

1. Click **Save**.

**Result:** Rancher is now configured to send logs to the selected service. Log into the logging service to start viewing the logs.

## Related Links

[Logging Architecture](https://kubernetes.io/docs/concepts/cluster-administration/logging/)
@@ -1,8 +1,6 @@
---
title: Elasticsearch
weight: 200
aliases:
- /rancher/v2.x/en/tools/logging/elasticsearch/
---

If your organization uses [Elasticsearch](https://www.elastic.co/), either on-premises or in the cloud, you can configure Rancher to send it Kubernetes logs. Afterwards, you can log into your Elasticsearch deployment to view logs.
@@ -1,8 +1,6 @@
---
title: Kafka
weight: 400
aliases:
- /rancher/v2.x/en/tools/logging/kafka/
---

If your organization uses [Kafka](https://kafka.apache.org/), you can configure Rancher to send it Kubernetes logs. Afterwards, you can log into your Kafka server to view logs.
@@ -1,9 +1,6 @@
---
title: Splunk
weight: 300
aliases:
- /rancher/v2.x/en/tasks/logging/splunk/
- /rancher/v2.x/en/tools/logging/splunk/
---

If your organization uses [Splunk](https://www.splunk.com/), you can configure Rancher to send it Kubernetes logs. Afterwards, you can log into your Splunk server to view logs.
@@ -1,8 +1,6 @@
---
title: Syslog
weight: 500
aliases:
- /rancher/v2.x/en/tools/logging/syslog/
---

If your organization uses [Syslog](https://tools.ietf.org/html/rfc5424), you can configure Rancher to send it Kubernetes logs. Afterwards, you can log into your Syslog server to view logs.
@@ -83,8 +83,6 @@ Inside the `questions.yml`, most of the content will be around the questions to

### Min/Max Rancher versions

_Available as of v2.3.0_

For each chart, you can add the minimum and/or maximum Rancher version, which determines whether or not this chart is available to be deployed from Rancher.

> **Note:** Even though Rancher release versions are prefixed with a `v`, there is *no* prefix for the release version when using this option.
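As a sketch, the version bounds are set as top-level keys in the chart's `questions.yml`; the question shown below is a hypothetical example, not part of any real chart:

```yaml
# Hypothetical example: only offer this chart on Rancher v2.3.0 through v2.4.99.
# Note: no `v` prefix on the versions.
rancher_min_version: 2.3.0
rancher_max_version: 2.4.99

questions:
- variable: replicaCount
  label: Replica Count
  type: int
  default: 1
```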
@@ -34,7 +34,6 @@ Helm comes with built-in package server for developer testing (helm serve). The
In Rancher, you can add a custom Helm chart repository with only a catalog name and the URL address of the chart repository.

### Add Private Git/Helm Chart Repositories
_Available as of v2.2.0_

Private catalog repositories can be added using credentials such as Username and Password. You may also want to use an OAuth token if your Git or Helm repository server supports that.
@@ -63,8 +62,6 @@ For more information on private Git/Helm catalogs, refer to the [custom catalog

# Adding Cluster Level Catalogs

_Available as of v2.2.0_

>**Prerequisites:** In order to manage cluster scoped catalogs, you need _one_ of the following permissions:
>
>- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/)
@@ -82,8 +79,6 @@ _Available as of v2.2.0_

# Adding Project Level Catalogs

_Available as of v2.2.0_

>**Prerequisites:** In order to manage project scoped catalogs, you need _one_ of the following permissions:
>
>- [Administrator Global Permissions]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/)
@@ -69,8 +69,6 @@ For native Helm charts (i.e., charts from the **Helm Stable** or **Helm Incubato
{{% /tab %}}
{{% tab "Editing YAML Files" %}}

_Available as of v2.1.0_

If you do not want to input answers using the UI, you can choose the **Edit as YAML** option.

With this example YAML:
@@ -97,8 +95,6 @@ servers[0].host=example

### YAML files

_Available as of v2.2.0_

You can directly paste that YAML formatted structure into the YAML editor. By allowing custom values to be set using a YAML formatted structure, Rancher can easily accommodate more complicated input values (e.g., multi-line strings, arrays, and JSON objects).
{{% /tab %}}
{{% /tabs %}}
@@ -51,8 +51,6 @@ When [adding your catalog]({{<baseurl>}}/rancher/v2.x/en/catalog/custom/adding/)

# Private Repositories

_Available as of v2.2.0_

Private Git or Helm chart repositories can be added to Rancher using credentials, i.e. `Username` and `Password`. Private Git repositories also support authentication using OAuth tokens.

### Using Username and Password
@@ -0,0 +1,4 @@
---
title: Monitoring
weight: 1
---
@@ -2,9 +2,6 @@
title: Cluster Metrics
weight: 3
---

_Available as of v2.2.0_

Cluster metrics display the hardware utilization for all nodes in your cluster, regardless of their role. They give you a global monitoring insight into the cluster.

Some of the most important metrics to watch:
@@ -3,9 +3,6 @@ title: Prometheus Configuration
weight: 1
---

_Available as of v2.2.0_

While configuring monitoring at either the [cluster level]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [project level]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), there are multiple options that can be configured.

Option | Description
@@ -3,8 +3,6 @@ title: Viewing Metrics
weight: 2
---

_Available as of v2.2.0_

After you've enabled monitoring at either the [cluster level]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/monitoring/#enabling-cluster-monitoring) or [project level]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/monitoring/#enabling-project-monitoring), you will want to start viewing the data being collected. There are multiple ways to view this data.

## Rancher Dashboard
@@ -5,4 +5,126 @@ weight: 11

Notifiers are services that inform you of alert events. You can configure notifiers to send alert notifications to staff best suited to take corrective action.

Rancher integrates with a variety of popular IT services, including:

- **Slack**: Send alert notifications to your Slack channels.
- **Email**: Choose email recipients for alert notifications.
- **PagerDuty**: Route notifications to staff by phone, SMS, or personal email.
- **WebHooks**: Update a webpage with alert notifications.
- **WeChat**: Send alert notifications to your Enterprise WeChat contacts.

This section covers the following topics:

- [Roles-based access control for notifiers](#roles-based-access-control-for-notifiers)
- [Adding notifiers](#adding-notifiers)
- [Managing notifiers](#managing-notifiers)
- [Example payload for a webhook alert notifier](#example-payload-for-a-webhook-alert-notifier)

### Roles-based Access Control for Notifiers

Notifiers are configured at the cluster level. This model ensures that only cluster owners need to configure notifiers, leaving project owners to simply configure alerts in the scope of their projects. You don't need to dispense privileges like SMTP server access or cloud account access.

### Adding Notifiers

Set up a notifier so that you can begin configuring and sending alerts.

1. From the **Global View**, open the cluster to which you want to add a notifier.

1. From the main menu, select **Tools > Notifiers**. Then click **Add Notifier**.

1. Select the service you want to use as your notifier, and then fill out the form.
{{% accordion id="slack" label="Slack" %}}
1. Enter a **Name** for the notifier.
1. From Slack, create a webhook. For instructions, see the [Slack Documentation](https://get.slack.help/hc/en-us/articles/115005265063-Incoming-WebHooks-for-Slack).
1. From Rancher, enter your Slack webhook **URL**.
1. Enter the name of the channel to which you want to send alert notifications, in the following format: `#<channelname>`.

    Both public and private channels are supported.
1. Click **Test**. If the test is successful, the Slack channel you're configuring for the notifier outputs `Slack setting validated`.
{{% /accordion %}}
{{% accordion id="email" label="Email" %}}
1. Enter a **Name** for the notifier.
1. In the **Sender** field, enter an email address available on your mail server from which you want to send the notification.
1. In the **Host** field, enter the IP address or hostname of your SMTP server. Example: `smtp.email.com`
1. In the **Port** field, enter the port used for email. Typically, TLS uses `587` and SSL uses `465`. If you're using TLS, make sure **Use TLS** is selected.
1. Enter a **Username** and **Password** that authenticate with the SMTP server.
1. In the **Default Recipient** field, enter the email address that you want to receive the notification.
1. Click **Test**. If the test is successful, Rancher prints `settings validated` and you receive a test notification email.
{{% /accordion %}}
{{% accordion id="pagerduty" label="PagerDuty" %}}
1. Enter a **Name** for the notifier.
1. From PagerDuty, create a Prometheus integration. For instructions, see the [PagerDuty Documentation](https://www.pagerduty.com/docs/guides/prometheus-integration-guide/).
1. From PagerDuty, copy the integration's **Integration Key**.
1. From Rancher, enter the key in the **Service Key** field.
1. Click **Test**. If the test is successful, your PagerDuty endpoint outputs `PagerDuty setting validated`.
{{% /accordion %}}
{{% accordion id="webhook" label="WebHook" %}}
1. Enter a **Name** for the notifier.
1. Using the app of your choice, create a webhook URL.
1. Enter your webhook **URL**.
1. Click **Test**. If the test is successful, the URL you're configuring as a notifier outputs `Webhook setting validated`.
{{% /accordion %}}
{{% accordion id="WeChat" label="WeChat" %}}

1. Enter a **Name** for the notifier.
1. In the **Corporation ID** field, enter the "EnterpriseID" of your corporation. You can get it from the [Profile page](https://work.weixin.qq.com/wework_admin/frame#profile).
1. From Enterprise WeChat, create an application on the [Application page](https://work.weixin.qq.com/wework_admin/frame#apps), and then enter the "AgentId" and "Secret" of this application into the **Application Agent ID** and **Application Secret** fields.
1. Select the **Recipient Type** and then enter a corresponding ID in the **Default Recipient** field: for example, the party ID, tag ID, or user account that you want to receive the notification. You can get contact information from the [Contacts page](https://work.weixin.qq.com/wework_admin/frame#contacts).
{{% /accordion %}}

1. Select **Enable** for **Send Resolved Alerts** if you wish to be notified about resolved alerts.
1. Click **Add** to complete adding the notifier.

**Result:** Your notifier is added to Rancher.


### Managing Notifiers

After you set up notifiers, you can manage them. From the **Global** view, open the cluster whose notifiers you want to manage. Select **Tools > Notifiers**. You can:

- **Edit** the settings that you configured during initial setup.
- **Clone** them, to quickly set up slightly different notifiers.
- **Delete** them when they're no longer necessary.

### Example Payload for a Webhook Alert Notifier

```json
{
  "receiver": "c-2a3bc:kube-components-alert",
  "status": "firing",
  "alerts": [
    {
      "status": "firing",
      "labels": {
        "alert_name": "Scheduler is unavailable",
        "alert_type": "systemService",
        "cluster_name": "mycluster (ID: c-2a3bc)",
        "component_name": "scheduler",
        "group_id": "c-2a3bc:kube-components-alert",
        "logs": "Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused",
        "rule_id": "c-2a3bc:kube-components-alert_scheduler-system-service",
        "severity": "critical"
      },
      "annotations": {},
      "startsAt": "2020-01-30T19:18:13.321684733Z",
      "endsAt": "0001-01-01T00:00:00Z",
      "generatorURL": ""
    }
  ],
  "groupLabels": {
    "component_name": "scheduler",
    "rule_id": "c-2a3bc:kube-components-alert_scheduler-system-service"
  },
  "commonLabels": {
    "alert_name": "Scheduler is unavailable",
    "alert_type": "systemService",
    "cluster_name": "mycluster (ID: c-2a3bc)"
  }
}
```
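On the receiving side, a webhook endpoint can parse this payload like any Alertmanager-style body. A minimal sketch in Python; the `summarize_alert_payload` helper and the framework around it are hypothetical, and only fields shown in the example payload are assumed:

```python
import json

def summarize_alert_payload(body: str) -> list[str]:
    """Turn a Rancher webhook alert payload into one summary line per alert."""
    payload = json.loads(body)
    lines = []
    for alert in payload.get("alerts", []):
        labels = alert.get("labels", {})
        lines.append(
            f"[{labels.get('severity', 'unknown')}] "
            f"{labels.get('cluster_name', '?')}: "
            f"{labels.get('alert_name', '?')} ({alert.get('status', '?')})"
        )
    return lines

# Example with a trimmed version of the payload fields shown above:
example = json.dumps({
    "receiver": "c-2a3bc:kube-components-alert",
    "status": "firing",
    "alerts": [{
        "status": "firing",
        "labels": {
            "alert_name": "Scheduler is unavailable",
            "severity": "critical",
            "cluster_name": "mycluster (ID: c-2a3bc)",
        },
    }],
})
print(summarize_alert_payload(example))
# → ['[critical] mycluster (ID: c-2a3bc): Scheduler is unavailable (firing)']
```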
### What's Next?

After creating a notifier, set up alerts to receive notifications of Rancher system events.

- [Cluster owners]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles) can set up alerts at the [cluster level]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/tools/alerts/).
- [Project owners]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#project-roles) can set up alerts at the [project level]({{<baseurl>}}/rancher/v2.x/en/project-admin/tools/alerts/).
@@ -1,132 +0,0 @@
---
title: Legacy UI Docs
weight: 2
---
@@ -33,8 +33,6 @@ On this page, we provide security-related documentation along with resources to

### Running a CIS Security Scan on a Kubernetes Cluster

_Available as of v2.4.0_

Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS (Center for Internet Security) Kubernetes Benchmark.

The CIS Kubernetes Benchmark is a reference document that can be used to establish a secure configuration baseline for Kubernetes.