Resolve merge conflict
@@ -13,7 +13,7 @@ This section contains advanced information describing the different ways you can
- [Using Docker as the container runtime](#using-docker-as-the-container-runtime)
- [Configuring containerd](#configuring-containerd)
- [Secrets Encryption Config (Experimental)](#secrets-encryption-config-experimental)
- [Running K3s with RootlessKit (Experimental)](#running-k3s-with-rootlesskit-experimental)
- [Running K3s with Rootless mode (Experimental)](#running-k3s-with-rootless-mode-experimental)
- [Node labels and taints](#node-labels-and-taints)
- [Starting the server with the installation script](#starting-the-server-with-the-installation-script)
- [Additional preparation for Alpine Linux setup](#additional-preparation-for-alpine-linux-setup)
@@ -163,18 +163,15 @@ As of v1.17.4+k3s1, K3s added the experimental feature of enabling secrets encry

Once enabled, any created secret will be encrypted with this key. Note that if you disable encryption, any previously encrypted secrets will not be readable until you enable encryption again.

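As a rough sketch (the secret name is just a placeholder; `--secrets-encryption` is the experimental server flag this feature refers to, so verify it against your K3s version), enabling the feature and creating a secret afterwards might look like:

```
# Sketch: start the server with encryption at rest enabled (experimental flag).
k3s server --secrets-encryption &

# Secrets created from now on are written to the datastore encrypted,
# but remain readable through the API as usual.
kubectl create secret generic test-secret --from-literal=password=changeme
kubectl get secret test-secret -o yaml
```
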
# Running K3s with RootlessKit (Experimental)
# Running K3s with Rootless mode (Experimental)

> **Warning:** This feature is experimental.

RootlessKit is a kind of Linux-native "fake root" utility, made mainly for [running Docker and Kubernetes as an unprivileged user,](https://github.com/rootless-containers/usernetes) so as to protect the real root on the host from potential container-breakout attacks.
Rootless mode allows running the entire k3s as an unprivileged user, so as to protect the real root on the host from potential container-breakout attacks.

Initial rootless support has been added, but there are a series of significant usability issues surrounding it.
See also https://rootlesscontaine.rs/ to learn about Rootless mode.

We are releasing the initial support for those interested in rootless, and hopefully some people can help improve its usability. First, ensure you have a proper setup and support for user namespaces. Refer to the [requirements section](https://github.com/rootless-containers/rootlesskit#setup) in RootlessKit for instructions.
In short, the latest Ubuntu is your best bet for this to work.

### Known Issues with RootlessKit
### Known Issues with Rootless mode

* **Ports**

@@ -184,24 +181,41 @@ In short, latest Ubuntu is your best bet for this to work.

Currently, only `LoadBalancer` services are automatically bound (see the sketch after this list).

* **Daemon lifecycle**

Once you kill K3s and then start a new instance of K3s, it will create a new network namespace, but it doesn't kill the old pods. So you are left with a fairly broken setup. This is the main issue at the moment: how to deal with the network namespace.

The issue is tracked in https://github.com/rootless-containers/rootlesskit/issues/65

* **Cgroups**

Cgroups are not supported.
Cgroup v1 is not supported. v2 is supported.

* **Multi-node cluster**

Multi-node installation is untested and undocumented.

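Following up on the Ports note above, a minimal, hypothetical way to get something reachable on the host is to expose a workload as a `LoadBalancer` service on an unprivileged port (the deployment name, image, and port below are placeholders):

```
kubectl create deployment web --image=nginx
kubectl expose deployment web --type=LoadBalancer --port=8080 --target-port=80

# The service port should be bound on the host by rootless K3s.
kubectl get svc web
curl http://127.0.0.1:8080
```
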
### Running Servers and Agents with Rootless

* Enable cgroup v2 delegation, see https://rootlesscontaine.rs/getting-started/common/cgroup2/ .
This step is optional, but highly recommended for enabling CPU and memory resource limitation (see the consolidated sketch after these steps).

Just add the `--rootless` flag to either server or agent. So run `k3s server --rootless` and then look for the message `Wrote kubeconfig [SOME PATH]` for where your kubeconfig file is.
* Download `k3s-rootless.service` from [`https://github.com/k3s-io/k3s/blob/<VERSION>/k3s-rootless.service`](https://github.com/k3s-io/k3s/blob/master/k3s-rootless.service).
Make sure to use the same version of `k3s-rootless.service` and `k3s`.

For more information about setting up the kubeconfig file, refer to the [section about cluster access.](../cluster-access)
* Install `k3s-rootless.service` to `~/.config/systemd/user/k3s-rootless.service`.
Installing this file as a system-wide service (`/etc/systemd/...`) is not supported.
Depending on the path of the `k3s` binary, you might need to modify the `ExecStart=/usr/local/bin/k3s ...` line of the file.

> Be careful: if you use `-o` to write the kubeconfig to a different directory, it will probably not work. This is because the K3s instance is running in a different mount namespace.
* Run `systemctl --user daemon-reload`

* Run `systemctl --user enable --now k3s-rootless`

* Run `KUBECONFIG=~/.kube/k3s.yaml kubectl get pods -A`, and make sure the pods are running.

> **Note:** Don't try to run `k3s server --rootless` on a terminal, as it doesn't enable cgroup v2 delegation.
> If you really need to try it on a terminal, prepend `systemd-run --user -p Delegate=yes --tty` to create a systemd scope.
>
> i.e., `systemd-run --user -p Delegate=yes --tty k3s server --rootless`

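Assuming a systemd-based host where you have root for the one-time delegation step, the steps above can be scripted roughly as follows. The raw download URL is inferred from the blob link above, and `<VERSION>` stays a placeholder for the K3s release you are running:

```
# One-time, as root: delegate cgroup v2 controllers to user sessions
# (see https://rootlesscontaine.rs/getting-started/common/cgroup2/).
sudo mkdir -p /etc/systemd/system/user@.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/user@.service.d/delegate.conf
[Service]
Delegate=cpu cpuset io memory pids
EOF
sudo systemctl daemon-reload

# "cgroup2fs" here means the host is running cgroup v2.
stat -fc %T /sys/fs/cgroup/

# As the unprivileged user: install the unit matching your k3s version.
mkdir -p ~/.config/systemd/user
curl -sfL "https://raw.githubusercontent.com/k3s-io/k3s/<VERSION>/k3s-rootless.service" \
  -o ~/.config/systemd/user/k3s-rootless.service
# Adjust the ExecStart= line if your k3s binary is not at /usr/local/bin/k3s.

systemctl --user daemon-reload
systemctl --user enable --now k3s-rootless

KUBECONFIG=~/.kube/k3s.yaml kubectl get pods -A
```
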
### Troubleshooting

* Run `systemctl --user status k3s-rootless` to check the daemon status
* Run `journalctl --user -f -u k3s-rootless` to see the daemon log
* See also https://rootlesscontaine.rs/

# Node Labels and Taints

@@ -12,6 +12,6 @@ If you plan to use K3s with docker, Docker installed via a snap package is not r

If you are running iptables in nftables mode instead of legacy, you might encounter issues. We recommend using a newer iptables (such as 1.6.1+) to avoid them.

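To see which backend your iptables build is using (and, on Debian/Ubuntu-style systems, to switch back to legacy as a workaround), something along these lines works; the exact paths are distribution-specific:

```
# Recent iptables builds report their backend in the version string:
# "(nf_tables)" vs "(legacy)".
iptables --version

# On Debian/Ubuntu derivatives, update-alternatives can select the legacy backend.
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
```
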
**RootlessKit**
**Rootless Mode**

Running K3s with RootlessKit is experimental and has several [known issues.]({{<baseurl>}}/k3s/latest/en/advanced/#known-issues-with-rootlesskit)
Running K3s with Rootless mode is experimental and has several [known issues.]({{<baseurl>}}/k3s/latest/en/advanced/#known-issues-with-rootless-mode)

@@ -1,5 +1,5 @@
---
title: v2.0-v2.4.x
weight: 2
weight: 3
showBreadcrumb: false
---

@@ -1,5 +1,5 @@
---
title: v2.5.x
title: Rancher v2.5.7+
weight: 1
showBreadcrumb: false
---

@@ -1,6 +1,6 @@
---
title: "Rancher 2.5"
shortTitle: "Rancher 2.5"
title: "Rancher v2.5.7+ (Latest)"
shortTitle: "Rancher v2.5.7+ (Latest)"
description: "Rancher adds significant value on top of Kubernetes: managing hundreds of clusters from one interface, centralizing RBAC, enabling monitoring and alerting. Read more."
metaTitle: "Rancher 2.x Docs: What is New?"
metaDescription: "Rancher 2 adds significant value on top of Kubernetes: managing hundreds of clusters from one interface, centralizing RBAC, enabling monitoring and alerting. Read more."

@@ -42,6 +42,8 @@ Provision the host according to the [installation requirements]({{<baseurl>}}/ra

### 2. Create the Custom Cluster

Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present.

1. From the **Clusters** page, click **Add Cluster**.

2. Choose **Custom**.

@@ -66,6 +66,8 @@ Creating a [node template]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/rk

Use Rancher to create a Kubernetes cluster in Azure.

Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present.

1. From the **Clusters** page, click **Add Cluster**.
1. Choose **Azure**.
1. Enter a **Cluster Name**.

@@ -37,6 +37,8 @@ Creating a [node template]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/rk

### 3. Create a cluster with node pools using the node template

Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present.

1. From the **Clusters** page, click **Add Cluster**.
1. Choose **DigitalOcean**.
1. Enter a **Cluster Name**.

@@ -51,6 +51,8 @@ Creating a [node template]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/rk

Add one or more node pools to your cluster. For more information about node pools, see [this section.]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/node-pools)

Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present.

1. From the **Clusters** page, click **Add Cluster**.
1. Choose **Amazon EC2**.
1. Enter a **Cluster Name**.

@@ -38,6 +38,7 @@ For the fields to be populated, your setup needs to fulfill the [prerequisites.]
### More Supported Operating Systems

You can provision VMs with any operating system that supports `cloud-init`. Only YAML format is supported for the [cloud config.](https://cloudinit.readthedocs.io/en/latest/topics/examples.html)

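For illustration only, a minimal YAML cloud config of the kind a node template can hand to `cloud-init` might be written out like this (the package list and SSH key are placeholders):

```
cat > cloud-config.yaml <<'EOF'
#cloud-config
package_update: true
packages:
  - ntp
ssh_authorized_keys:
  - ssh-rsa AAAA... user@example.com
runcmd:
  - [ sh, -c, "echo node provisioned by cloud-init" ]
EOF
```
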
### Video Walkthrough of v2.3.3 Node Template Features

In this YouTube video, we demonstrate how to set up a node template with the new features designed to help you bring cloud operations to on-premises clusters.

@@ -77,6 +77,8 @@ Creating a [node template]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/rk

Use Rancher to create a Kubernetes cluster in vSphere.

Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present.

1. Navigate to **Clusters** in the **Global** view.
1. Click **Add Cluster** and select the **vSphere** infrastructure provider.
1. Enter a **Cluster Name.**

@@ -89,6 +89,8 @@ The Kubernetes cluster management nodes (`etcd` and `controlplane`) must be run

The `worker` nodes, where your workloads will be deployed, will typically be Windows nodes, but there must be at least one `worker` node running on Linux in order to run the Rancher cluster agent, DNS, metrics server, and Ingress-related containers.

Clusters won't begin provisioning until all three node roles (worker, etcd and controlplane) are present.

We recommend the minimum three-node architecture listed in the table below, but you can always add additional Linux and Windows workers to scale up your cluster for redundancy:

<a id="guide-architecture"></a>

@@ -172,18 +172,20 @@ The exact command to install Rancher differs depending on the certificate config
{{% tab "Rancher-generated Certificates" %}}

The default is for Rancher to generate a CA and use `cert-manager` to issue the certificate for access to the Rancher server interface.
The default is for Rancher to generate a self-signed CA and use `cert-manager` to issue the certificate for access to the Rancher server interface.

Because `rancher` is the default option for `ingress.tls.source`, we are not specifying `ingress.tls.source` when running the `helm install` command.

- Set the `hostname` to the DNS name you pointed at your load balancer.
- Set `hostname` to the DNS record that resolves to your load balancer.
- Set `replicas` to the number of replicas to use for the Rancher Deployment. This defaults to 3; if you have fewer than 3 nodes in your cluster, you should reduce it accordingly.
- To install a specific Rancher version, use the `--version` flag, example: `--version 2.3.6`.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.
- To install a specific Rancher version, use the `--version` flag, example: `--version 2.3.6`

```
helm install rancher rancher-<CHART_REPO>/rancher \
--namespace cattle-system \
--set hostname=rancher.my.org
--set hostname=rancher.my.org \
--set replicas=3
```

Wait for Rancher to be rolled out:
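A minimal way to wait on that rollout, assuming the `cattle-system` namespace and `rancher` deployment name used in the command above, is a sketch like:

```
kubectl -n cattle-system rollout status deploy/rancher
```
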
@@ -201,15 +203,18 @@ This option uses `cert-manager` to automatically request and renew [Let's Encryp

In the following command,

- `hostname` is set to the public DNS record,
- `ingress.tls.source` is set to `letsEncrypt`
- `letsEncrypt.email` is set to the email address used for communication about your certificate (for example, expiry notices)
- Set `hostname` to the public DNS record that resolves to your load balancer.
- Set `replicas` to the number of replicas to use for the Rancher Deployment. This defaults to 3; if you have fewer than 3 nodes in your cluster, you should reduce it accordingly.
- Set `ingress.tls.source` to `letsEncrypt`.
- Set `letsEncrypt.email` to the email address used for communication about your certificate (for example, expiry notices).
- To install a specific Rancher version, use the `--version` flag, example: `--version 2.3.6`.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.

```
helm install rancher rancher-<CHART_REPO>/rancher \
--namespace cattle-system \
--set hostname=rancher.my.org \
--set replicas=3 \
--set ingress.tls.source=letsEncrypt \
--set letsEncrypt.email=me@example.org
```
@@ -226,20 +231,23 @@ deployment "rancher" successfully rolled out
{{% tab "Certificates from Files" %}}
In this option, Kubernetes secrets are created from your own certificates for Rancher to use.

When you run this command, the `hostname` option must match the `Common Name` or a `Subject Alternative Names` entry in the server certificate or the Ingress controller will fail to configure correctly.
When you run this command, the `hostname` option must match the `Common Name` or a `Subject Alternative Names` entry in the server certificate, or the Ingress controller will fail to configure correctly.

Although an entry in the `Subject Alternative Names` is technically required, having a matching `Common Name` maximizes compatibility with older browsers and applications.

> If you want to check if your certificates are correct, see [How do I check Common Name and Subject Alternative Names in my server certificate?]({{<baseurl>}}/rancher/v2.5/en/faq/technical/#how-do-i-check-common-name-and-subject-alternative-names-in-my-server-certificate)

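As a quick local check (the file names are placeholders), you can inspect the certificate's `Common Name` and `Subject Alternative Names` with `openssl`; the same certificate and key are what get loaded into the TLS secret that the `secret` option expects (`tls-rancher-ingress` is the secret name the Rancher chart looks for):

```
# Inspect the CN and SANs of the server certificate.
openssl x509 -noout -subject -in tls.crt
openssl x509 -noout -text -in tls.crt | grep -A1 "Subject Alternative Name"

# Sketch of loading the certificate files into the secret Rancher will serve.
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt --key=tls.key
```
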
- Set the `hostname`.
- Set `hostname` as appropriate for your certificate, as described above.
- Set `replicas` to the number of replicas to use for the Rancher Deployment. This defaults to 3; if you have fewer than 3 nodes in your cluster, you should reduce it accordingly.
- Set `ingress.tls.source` to `secret`.
- To install a specific Rancher version, use the `--version` flag, example: `--version 2.3.6`.
- If you are installing an alpha version, Helm requires adding the `--devel` option to the command.

```
helm install rancher rancher-<CHART_REPO>/rancher \
--namespace cattle-system \
--set hostname=rancher.my.org \
--set replicas=3 \
--set ingress.tls.source=secret
```

@@ -263,7 +271,7 @@ The Rancher chart configuration has many options for customizing the installatio
- [Private Docker Image Registry]({{<baseurl>}}/rancher/v2.5/en/installation/install-rancher-on-k8s/chart-options/#private-registry-and-air-gap-installs)
- [TLS Termination on an External Load Balancer]({{<baseurl>}}/rancher/v2.5/en/installation/install-rancher-on-k8s/chart-options/#external-tls-termination)

See the [Chart Options]({{<baseurl>}}/rancher/v2.5/en/installation/resources/chart-options/) for the full list of options.
See the [Chart Options]({{<baseurl>}}/rancher/v2.5/en/installation/install-rancher-on-k8s/chart-options/) for the full list of options.

### 6. Verify that the Rancher Server is Successfully Deployed

@@ -34,7 +34,7 @@ helm upgrade --install cert-manager jetstack/cert-manager \
--namespace cert-manager --version v0.15.2 \
--set http_proxy=http://${proxy_host} \
--set https_proxy=http://${proxy_host} \
--set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local
--set noProxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local
```

Now you should wait until cert-manager is finished starting up:
@@ -65,7 +65,7 @@ helm upgrade --install rancher rancher-latest/rancher \
--namespace cattle-system \
--set hostname=rancher.example.com \
--set proxy=http://${proxy_host} \
--set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local
--set noProxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local
```

After waiting for the deployment to finish:

@@ -286,6 +286,36 @@ addons: |
- configMap
- projected
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:restricted
rules:
- apiGroups:
  - extensions
  resourceNames:
  - restricted
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp:restricted
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:restricted
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:

@@ -1,5 +1,5 @@
---
title: v2.x
weight: 4
weight: 2
showBreadcrumb: false
---

@@ -1,11 +1,11 @@
---
title: "Rancher 2.0-2.5.6 (Formerly 2.x)"
shortTitle: "Rancher 2.5.6 (Archive)"
title: "Pre-Versioned Docs from 2.0-2.5.6 (Formerly 2.x)"
shortTitle: "Rancher 2.5-2.5.6"
description: "Rancher adds significant value on top of Kubernetes: managing hundreds of clusters from one interface, centralizing RBAC, enabling monitoring and alerting. Read more."
metaTitle: "Rancher 2.x Docs: What is New?"
metaDescription: "Rancher 2 adds significant value on top of Kubernetes: managing hundreds of clusters from one interface, centralizing RBAC, enabling monitoring and alerting. Read more."
insertOneSix: false
weight: 1
weight: 2
ctaBanner: 0
---

@@ -63,8 +63,8 @@ For more information about `values.yaml` files and configuring Helm charts durin

```yaml
image:
  repository: rancher/rancher-backup
  tag: v0.0.1-rc10
  repository: rancher/backup-restore-operator
  tag: v1.0.3

## Default s3 bucket for storing all backup files created by the rancher-backup operator
s3:

@@ -287,6 +287,36 @@ addons: |
- configMap
- projected
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:restricted
rules:
- apiGroups:
  - extensions
  resourceNames:
  - restricted
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp:restricted
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:restricted
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:

@@ -25,11 +25,31 @@
{{ $product := index $path 1 }}
{{ $version := index $path 2 }}
{{ $productVersion := printf "%s/%s" $product $version}}
{{ if eq $productVersion "rancher/v2.x" }}
{{ if in .Dir "rancher/v2.x" }}
<div class="alert alert-notice">
<strong>We are transitioning to versioned documentation.</strong> The v2.x docs will no longer be maintained. For Rancher v2.5 docs, go <a href="https://rancher.com/docs/rancher/v2.5/en/">here.</a> For Rancher v2.0-v2.4 docs, go <a href="https://rancher.com/docs/rancher/v2.0-v2.4/en/">here.</a>
</div>
{{end}}
{{ if in .Dir "/rancher/v2.5/en/pipelines" }}
<div class="alert alert-notice">
<strong>As of Rancher v2.5, Git-based deployment pipelines are now recommended to be handled with Rancher Continuous Delivery powered by <a href="https://rancher.com/docs/rancher/v2.5/en/fleet">Fleet</a>, available in Cluster Explorer.</strong>
</div>
{{end}}
{{ if in .Dir "/rancher/v2.x/en/pipelines" }}
<div class="alert alert-notice">
<strong>As of Rancher v2.5, Git-based deployment pipelines are now recommended to be handled with Rancher Continuous Delivery powered by <a href="https://rancher.com/docs/rancher/v2.5/en/fleet">Fleet</a>, available in Cluster Explorer.</strong>
</div>
{{end}}
{{ if in .Dir "/rancher/v2.5/en/deploy-across-clusters/multi-cluster-apps" }}
<div class="alert alert-notice">
<strong>As of Rancher v2.5, we now recommend using <a href="https://rancher.com/docs/rancher/v2.5/en/fleet">Fleet</a> for deploying apps across clusters.</strong>
</div>
{{end}}
{{ if in .Dir "/rancher/v2.x/en/deploy-across-clusters/multi-cluster-apps" }}
<div class="alert alert-notice">
<strong>As of Rancher v2.5, we now recommend using <a href="https://rancher.com/docs/rancher/v2.5/en/fleet">Fleet</a> for deploying apps across clusters.</strong>
</div>
{{end}}

<div class="
col-xl-3