Mirror of https://github.com/rancher/rancher-docs.git (synced 2026-04-26 16:25:40 +00:00)

Commit: Fix links
@@ -15,7 +15,7 @@ To edit your cluster, open the **Global** view, make sure the **Clusters** tab i

Some advanced configuration options are not exposed in the Rancher UI forms, but they can be enabled by editing the RKE cluster configuration file in YAML. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the [RKE documentation.](https://rancher.com/docs/rke/latest/en/config-options/)

### Kubernetes Version

The version of Kubernetes installed on each cluster node. For more detail, see [Upgrading Kubernetes](../../../getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes.md).
@@ -25,9 +25,9 @@ The container networking interface (CNI) that powers networking for your cluste

### Project Network Isolation

If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication.

Before Rancher v2.5.8, project network isolation was only available if you were using the Canal network plugin for RKE.

In v2.5.8+, project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.
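To confirm that your plugin is actually enforcing Kubernetes network policies (the mechanism project network isolation is built on), you can inspect the NetworkPolicy objects in the cluster. This is a sketch; the namespace and policy names are placeholders:

```bash
# List NetworkPolicy objects across all namespaces; project network
# isolation is implemented on top of these.
kubectl get networkpolicy --all-namespaces

# Inspect a specific policy to see which pods and traffic it selects.
kubectl -n my-project-namespace describe networkpolicy my-policy
```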
@@ -35,7 +35,7 @@ In v2.5.8+, project network isolation is available if you are using any RKE netw

If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud provider that doesn't have a native load-balancing feature, enable this option to use Nginx ingress within the cluster.

### Metrics Server Monitoring

Each cloud provider capable of launching a cluster using RKE can collect metrics and monitor your cluster nodes. Enable this option to view your node metrics from your cloud provider's portal.
@@ -57,7 +57,7 @@ If you enable **Pod Security Policy Support**, use this drop-down to choose the

### Cloud Provider

-If you're using a cloud provider to host cluster nodes launched by RKE, enable [this option](cluster-provisioning/rke-clusters/options/cloud-providers/) so that you can use the cloud provider's native features. If you want to store persistent data for your cloud-hosted cluster, this option is required.
+If you're using a cloud provider to host cluster nodes launched by RKE, enable [this option](../../../pages-for-subheaders/set-up-cloud-providers.md) so that you can use the cloud provider's native features. If you want to store persistent data for your cloud-hosted cluster, this option is required.
# Editing Clusters with YAML
@@ -47,7 +47,7 @@ For information on enabling experimental features, refer to [this page.](../../p

| `antiAffinity` | "preferred" | `string` - AntiAffinity rule for Rancher pods - "preferred, required" |
| `auditLog.destination` | "sidecar" | `string` - Stream to sidecar container console or hostPath volume - "sidecar, hostPath" |
| `auditLog.hostPath` | "/var/log/rancher/audit" | `string` - log file destination on host (only applies when `auditLog.destination` is set to `hostPath`) |
-| `auditLog.level` | 0 | `int` - set the [API Audit Log](installation/api-auditing) level. 0 is off. [0-3] |
+| `auditLog.level` | 0 | `int` - set the [API Audit Log](../../getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/enable-api-audit-log.md) level. 0 is off. [0-3] |
| `auditLog.maxAge` | 1 | `int` - maximum number of days to retain old audit log files (only applies when `auditLog.destination` is set to `hostPath`) |
| `auditLog.maxBackup` | 1 | `int` - maximum number of audit log files to retain (only applies when `auditLog.destination` is set to `hostPath`) |
| `auditLog.maxSize` | 100 | `int` - maximum size in megabytes of the audit log file before it gets rotated (only applies when `auditLog.destination` is set to `hostPath`) |
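As a sketch, the audit-log values from the table above could be set at install time like so (the release name, chart repository, and hostname are placeholders):

```bash
# Enable API audit logging at level 1 and write it to the host path
# instead of the sidecar; all keys come from the values table above.
helm upgrade --install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set auditLog.level=1 \
  --set auditLog.destination=hostPath \
  --set auditLog.hostPath=/var/log/rancher/audit \
  --set auditLog.maxAge=7
```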
@@ -107,7 +107,7 @@ See [TLS settings](tls-settings.md) for more information and options.

By default, the Rancher server will detect and import the `local` cluster it's running on. Users with access to the `local` cluster will essentially have "root" access to all the clusters managed by the Rancher server.

> **Important:** If you turn addLocal off, most Rancher v2.5 features won't work, including the EKS provisioner.

If this is a concern in your environment, you can set this option to "false" on your initial install.
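A hedged sketch of such an initial install with the local cluster import disabled (chart repository and hostname are placeholders):

```bash
# addLocal="false" prevents Rancher from importing the cluster it runs
# on; note the Important caveat above about v2.5 features.
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set addLocal="false"
```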
@@ -160,10 +160,7 @@ kubectl -n cattle-system create secret generic tls-ca-additional --from-file=ca-

### Private Registry and Air Gap Installs

-For details on installing Rancher with a private registry, see:
-
-- [Air Gap: Docker Install](installation/air-gap-single-node/)
-- [Air Gap: Kubernetes Install](installation/air-gap-high-availability/)
+For details on installing Rancher with a private registry, see [Air Gapped Helm CLI Install](../../pages-for-subheaders/air-gapped-helm-cli-install.md).
# External TLS Termination
@@ -171,7 +168,7 @@ We recommend configuring your load balancer as a Layer 4 balancer, forwarding pl

You may terminate the SSL/TLS on an L7 load balancer external to the Rancher cluster (ingress). Use the `--set tls=external` option and point your load balancer at HTTP port 80 on all of the Rancher cluster nodes. This will expose the Rancher interface on HTTP port 80. Be aware that traffic from clients that are allowed to connect directly to the Rancher cluster will not be encrypted. If you choose to do this, we recommend that you restrict direct access at the network level to just your load balancer.

-> **Note:** If you are using a Private CA signed certificate, add `--set privateCA=true` and see [Adding TLS Secrets - Using a Private CA Signed Certificate](installation/resources/encryption/tls-secrets/) to add the CA cert for Rancher.
+> **Note:** If you are using a Private CA signed certificate, add `--set privateCA=true` and see [Adding TLS Secrets - Using a Private CA Signed Certificate](../../getting-started/installation-and-upgrade/resources/add-tls-secrets.md) to add the CA cert for Rancher.

Your load balancer must support long-lived websocket connections and will need to insert proxy headers so Rancher can route links correctly.
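Put together, an external-termination install might look like the following sketch (hostname, chart repository, and the `privateCA` flag are placeholders or conditional; `privateCA=true` only applies when the load balancer's certificate is signed by a private CA):

```bash
# Terminate TLS on the external load balancer; Rancher itself serves
# plain HTTP on port 80 behind it.
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set tls=external \
  --set privateCA=true
```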
@@ -58,13 +58,13 @@ We deploy kube-state-metrics and node-exporter with monitoring v2. Node exporter

We also deploy Grafana, which is not managed by Prometheus.

If you look at what the Helm chart is doing, as in kube-state-metrics, there are plenty more values that you can set that aren't exposed in the top-level chart.

But in the top-level chart you can add values that override values that exist in the sub-chart.
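For example, a top-level values override for the kube-state-metrics sub-chart might look like this sketch (the exact keys depend on the sub-chart's own values file, so treat these as illustrative):

```yaml
# values.yaml passed to the top-level monitoring chart; keys nested
# under the sub-chart's name are forwarded to that sub-chart.
kube-state-metrics:
  resources:
    limits:
      cpu: 100m
      memory: 200Mi
```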
### Increase the Replicas of Alertmanager

-As part of the chart deployment options, you can opt to increase the number of replicas of the Alertmanager deployed onto your cluster. The replicas can all be managed using the same underlying Alertmanager Config Secret. For more information on the Alertmanager Config Secret, refer to [this section]({{<baseurl>}}/monitoring-alerting/configuration/advanced/alertmanager/#multiple-alertmanager-replicas)
+As part of the chart deployment options, you can opt to increase the number of replicas of the Alertmanager deployed onto your cluster. The replicas can all be managed using the same underlying Alertmanager Config Secret. For more information on the Alertmanager Config Secret, refer to [this section](../../how-to-guides/advanced-user-guides/monitoring-v2-configuration-guides/advanced-configuration/alertmanager.md#multiple-alertmanager-replicas)
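A minimal values sketch for this, assuming the monitoring chart follows the upstream kube-prometheus-stack layout (an assumption worth verifying against the chart's own values):

```yaml
# Run two Alertmanager replicas; both read the same underlying
# Alertmanager Config Secret.
alertmanager:
  alertmanagerSpec:
    replicas: 2
```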
### Configuring the Namespace for a Persistent Grafana Dashboard
@@ -393,7 +393,7 @@ spec:

    # key: string
```

-For more information on enabling alerting for `rancher-cis-benchmark`, see [this section.](cis-scans/v2.5/#enabling-alerting-for-rancher-cis-benchmark)
+For more information on enabling alerting for `rancher-cis-benchmark`, see [this section.](../../pages-for-subheaders/cis-scan-guides.md#enabling-alerting-for-rancher-cis-benchmark)
# Trusted CA for Notifiers

@@ -9,7 +9,7 @@ aliases:

Pipelines can be configured either through the UI or using a YAML file in the repository, i.e. `.rancher-pipeline.yml` or `.rancher-pipeline.yaml`.

-In the [pipeline configuration reference](k8s-in-rancher/pipelines/config), we provide examples of how to configure each feature using the Rancher UI or using YAML configuration.
+In the [pipeline configuration reference](pipeline-configuration.md), we provide examples of how to configure each feature using the Rancher UI or using YAML configuration.

Below is a full example `rancher-pipeline.yml` for those who want to jump right in.
@@ -69,7 +69,7 @@ notification:

    notifier: "c-wdcsr:n-c9pg7"
  - recipient: "test@example.com"
    notifier: "c-wdcsr:n-lkrhd"
  # Select which statuses you want the notification to be sent
  condition: ["Failed", "Success", "Changed"]
  # Ability to override the default message (Optional)
  message: "my-message"
@@ -304,7 +304,7 @@ timeout: 30

# Notifications

-You can enable notifications to any notifiers based on the build status of a pipeline. Before enabling notifications, Rancher recommends [setting up notifiers](monitoring-alerting/legacy/notifiers/) so it will be easy to add recipients immediately.
+You can enable notifications to any notifiers based on the build status of a pipeline. Before enabling notifications, Rancher recommends [setting up notifiers](../monitoring-v2-configuration/receivers.md) so it will be easy to add recipients immediately.
### Configuring Notifications by UI
@@ -641,8 +641,8 @@ If you want to use a version control provider with a certificate from a custom/i

The internal Docker registry and the Minio workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.

-For details on setting up persistent storage for pipelines, refer to [this page.](k8s-in-rancher/pipelines/storage)
+For details on setting up persistent storage for pipelines, refer to [this page.](configure-persistent-data.md)

# Example rancher-pipeline.yml

-An example pipeline configuration file is on [this page.](k8s-in-rancher/pipelines/example)
+An example pipeline configuration file is on [this page.](example-yaml.md)
@@ -104,10 +104,10 @@ With that said, it is safe to use all three roles on three nodes when setting up

Because no additional workloads will be deployed on the Rancher server cluster, in most cases it is not necessary to use the same architecture that we recommend for the scalability and reliability of downstream clusters.

-For more best practices for downstream clusters, refer to the [production checklist](../../pages-for-subheaders/checklist-for-production-ready-clusters.md) or our [best practices guide.](best-practices/v2.5/)
+For more best practices for downstream clusters, refer to the [production checklist](../../pages-for-subheaders/checklist-for-production-ready-clusters.md) or our [best practices guide.](../../pages-for-subheaders/best-practices.md)

# Architecture for an Authorized Cluster Endpoint

If you are using an [authorized cluster endpoint,](../../pages-for-subheaders/rancher-manager-architecture.md#4-authorized-cluster-endpoint) we recommend creating an FQDN pointing to a load balancer which balances traffic across your nodes with the `controlplane` role.

-If you are using private CA signed certificates on the load balancer, you have to supply the CA certificate, which will be included in the generated kubeconfig file to validate the certificate chain. See the documentation on [kubeconfig files](k8s-in-rancher/kubeconfig/) and [API keys](../user-settings/api-keys.md#creating-an-api-key) for more information.
+If you are using private CA signed certificates on the load balancer, you have to supply the CA certificate, which will be included in the generated kubeconfig file to validate the certificate chain. See the documentation on [kubeconfig files](../../how-to-guides/advanced-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md) and [API keys](../user-settings/api-keys.md#creating-an-api-key) for more information.
@@ -26,4 +26,4 @@ Rancher is committed to informing the community of security issues in our produc

| [CVE-2019-12274](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12274) | Nodes using the built-in node drivers using a file path option allows the machine to read arbitrary files including sensitive ones from inside the Rancher server container. | 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) |
| [CVE-2019-11202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11202) | The default admin, that is shipped with Rancher, will be re-created upon restart of Rancher despite being explicitly deleted. | 16 Apr 2019 | [Rancher v2.2.2](https://github.com/rancher/rancher/releases/tag/v2.2.2), [Rancher v2.1.9](https://github.com/rancher/rancher/releases/tag/v2.1.9) and [Rancher v2.0.14](https://github.com/rancher/rancher/releases/tag/v2.0.14) |
| [CVE-2019-6287](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6287) | Project members continue to get access to namespaces from projects that they were removed from if they were added to more than one project. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) |
-| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater have specific [instructions]({{<baseurl>}}/rancher/v2.6/en/installation/install-rancher-on-k8s/rollbacks). |
+| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater have specific [instructions](../../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rollbacks.md). |
@@ -5,7 +5,7 @@ aliases:

- /rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/
---

-When installing Rancher, there are several [advanced options](installation/options/) that can be enabled:
+When installing Rancher, there are several [advanced options](../../pages-for-subheaders/resources.md) that can be enabled:

- [Custom CA Certificate](#custom-ca-certificate)
- [API Audit Log](#api-audit-log)
@@ -44,7 +44,7 @@ The API Audit Log records all the user and system transactions made through Ranc

The API Audit Log writes to `/var/log/auditlog` inside the rancher container by default. Share that directory as a volume and set your `AUDIT_LEVEL` to enable the log.

-See [API Audit Log](installation/api-auditing) for more information and options.
+See [API Audit Log](../../getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/enable-api-audit-log.md) for more information and options.

As of Rancher v2.5, privileged access is [required.](../../pages-for-subheaders/rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher-v2-5)
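Combining the points above, a single-node install with audit logging enabled might be launched like this sketch (the host path and image tag are placeholders):

```bash
# --privileged is required as of Rancher v2.5; AUDIT_LEVEL 1-3 enables
# the API audit log, which is written to /var/log/auditlog inside the
# container and surfaced on the host via the volume mount.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  -v /opt/rancher/auditlog:/var/log/auditlog \
  -e AUDIT_LEVEL=1 \
  rancher/rancher:v2.5.9
```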
@@ -71,7 +71,7 @@ docker run -d --restart=unless-stopped \

As of Rancher v2.5, privileged access is [required.](../../pages-for-subheaders/rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher-v2-5)

-See [TLS settings](admin-settings/tls-settings) for more information and options.
+See [TLS settings](../installation-references/tls-settings.md) for more information and options.

### Air Gap
@@ -21,7 +21,7 @@ Make sure `NO_PROXY` contains the network addresses, network address ranges and

## Docker Installation

-Passing environment variables to the Rancher container can be done using `-e KEY=VALUE` or `--env KEY=VALUE`. Required values for `NO_PROXY` in a [Docker Installation](installation/single-node-install/) are:
+Passing environment variables to the Rancher container can be done using `-e KEY=VALUE` or `--env KEY=VALUE`. Required values for `NO_PROXY` in a [Docker Installation](../../pages-for-subheaders/rancher-on-a-single-node-with-docker.md) are:

- `localhost`
- `127.0.0.1`
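As a sketch, the proxy variables could be passed like this. The proxy address is a placeholder, and `NO_PROXY` is shown with only the two values listed above; the full required list continues in the linked documentation:

```bash
# HTTP_PROXY/HTTPS_PROXY route outbound traffic through the proxy;
# NO_PROXY must at least exempt local addresses.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e HTTP_PROXY="http://proxy.example.com:8888" \
  -e HTTPS_PROXY="http://proxy.example.com:8888" \
  -e NO_PROXY="localhost,127.0.0.1" \
  rancher/rancher:latest
```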
@@ -43,7 +43,7 @@ After you download the tools, complete the following actions:

# Logs

-The logs subcommand will collect log files of core Kubernetes cluster components from nodes in [Rancher-launched Kubernetes clusters](../pages-for-subheaders/launch-kubernetes-with-rancher.md) or nodes on an [RKE Kubernetes cluster that Rancher is installed on.](../pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster.md). See [Troubleshooting]({{<baseurl>}}//rancher/v2.5/en/troubleshooting/) for a list of core Kubernetes cluster components.
+The logs subcommand will collect log files of core Kubernetes cluster components from nodes in [Rancher-launched Kubernetes clusters](../pages-for-subheaders/launch-kubernetes-with-rancher.md) or nodes on an [RKE Kubernetes cluster that Rancher is installed on.](../pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster.md) See [Troubleshooting](../troubleshooting.md) for a list of core Kubernetes cluster components.

System Tools will use the provided kubeconfig file to deploy a DaemonSet that will copy all the log files from the core Kubernetes cluster components and add them to a single tar file (`cluster-logs.tar` by default). If you only want to collect logging from a single node, you can specify the node by using `--node NODENAME` or `-n NODENAME`.
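A usage sketch of the logs subcommand; the binary name, kubeconfig path, and node name are illustrative and depend on how you downloaded and configured the tools:

```bash
# Collect logs from every node into cluster-logs.tar (the default) ...
./system-tools logs --kubeconfig kube_config_cluster.yml

# ... or restrict collection to a single node.
./system-tools logs --kubeconfig kube_config_cluster.yml --node worker-1
```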
@@ -9,7 +9,7 @@ The commands/steps listed on this page can be used to check name resolution issu

Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml` for Rancher HA) or are using the embedded kubectl via the UI.

-Before running the DNS checks, check the [default DNS provider](cluster-provisioning/rke-clusters/options/#default-dns-provider) for your cluster and make sure that [the overlay network is functioning correctly](networking.md#check-if-overlay-network-is-functioning-correctly) as this can also be the reason why DNS resolution (partly) fails.
+Before running the DNS checks, check the [default DNS provider](../../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#de) for your cluster and make sure that [the overlay network is functioning correctly](networking.md#check-if-overlay-network-is-functioning-correctly) as this can also be the reason why DNS resolution (partly) fails.

### Check if DNS pods are running
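The usual first check under that heading looks something like this (the label and namespace follow the standard CoreDNS/kube-dns deployment, which RKE clusters use by default):

```bash
# CoreDNS (or kube-dns) runs in kube-system; all pods should be Running.
kubectl -n kube-system get pods -l k8s-app=kube-dns
```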