Merge pull request #2797 from rancher/master

Merge master into staging
This commit is contained in:
Catherine Luse
2020-10-23 09:43:39 -07:00
committed by GitHub
29 changed files with 557 additions and 80 deletions
@@ -1,15 +1,31 @@
---
title: Backup and Restore Embedded etcd Datastore (Experimental)
shortTitle: Backup and Restore
title: Backup and Restore
weight: 26
---
The way K3s is backed up and restored depends on which type of datastore is used.
- [Backup and Restore with External Datastore](#backup-and-restore-with-external-datastore)
- [Backup and Restore with Embedded etcd Datastore (Experimental)](#backup-and-restore-with-embedded-etcd-datastore-experimental)
# Backup and Restore with External Datastore
When an external datastore is used, backup and restore operations are handled outside of K3s. The database administrator will need to back up the external database, or restore it from a snapshot or dump.
We recommend configuring the database to take recurring snapshots.
For details on taking database snapshots and restoring your database from them, refer to the official database documentation:
- [Official MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/replication-snapshot-method.html)
- [Official PostgreSQL documentation](https://www.postgresql.org/docs/8.3/backup-dump.html)
- [Official etcd documentation](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md)
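As a concrete illustration for MySQL, a one-time logical dump could look like the following (the host, user, and `k3s` database name are placeholders for whatever you configured as the datastore endpoint):
```
# Consistent dump of the K3s datastore; restore later with `mysql < k3s-datastore-backup.sql`
mysqldump --host=<DB-HOST> --user=<USER> -p --single-transaction --databases k3s > k3s-datastore-backup.sql
```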
# Backup and Restore with Embedded etcd Datastore (Experimental)
_Available as of v1.19.1+k3s1_
In this section, you'll learn how to create backups of the K3s cluster data and to restore the cluster from backup.
> This is an experimental feature available for K3s clusters with an embedded etcd datastore. If you installed K3s with an external datastore, refer to the upstream documentation for the database for information on backing up the cluster data.
### Creating Snapshots
Snapshots are enabled by default.
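For example (a sketch — subcommand and flag names are current for recent v1.19 releases; confirm with `k3s server --help` on your version):
```
# Take an on-demand snapshot; by default snapshots are written to
# /var/lib/rancher/k3s/server/db/snapshots
k3s etcd-snapshot

# To restore, stop K3s on all server nodes, then reset the cluster from a snapshot
k3s server \
  --cluster-reset \
  --cluster-reset-restore-path=<PATH-TO-SNAPSHOT>
```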
@@ -29,7 +29,7 @@ You can adjust memory requirements by custom building RancherOS, please refer to
### How RancherOS Works
Everything in RancherOS is a Docker container. We accomplish this by launching two instances of Docker. One is what we call **System Docker** and is the first process on the system. All other system services, like `ntpd`, `syslog`, and `console`, are running in Docker containers. System Docker replaces traditional init systems like `systemd` and is used to launch [additional system services](installation/system-services/).
Everything in RancherOS is a Docker container. We accomplish this by launching two instances of Docker. One is what we call **System Docker** and is the first process on the system. All other system services, like `ntpd`, `syslog`, and `console`, are running in Docker containers. System Docker replaces traditional init systems like `systemd` and is used to launch [additional system services]({{<baseurl>}}/os/v1.x/en/system-services/).
System Docker runs a special container called **Docker**, which is another Docker daemon responsible for managing all of the user's containers. Any containers that you launch as a user from the console will run inside this Docker. This creates isolation from the System Docker containers and ensures that normal user commands don't impact system services.
@@ -113,11 +113,11 @@ The following table lists each built-in custom project role available in Rancher
| Manage Services | ✓ | ✓ | |
| Manage Volumes | ✓ | ✓ | |
| Manage Workloads | ✓ | ✓ | |
| View Secrets | ✓ | ✓ | |
| View Config Maps | ✓ | ✓ | ✓ |
| View Ingress | ✓ | ✓ | ✓ |
| View Project Members | ✓ | ✓ | ✓ |
| View Project Catalogs | ✓ | ✓ | ✓ |
| View Secrets | ✓ | ✓ | ✓ |
| View Service Accounts | ✓ | ✓ | ✓ |
| View Services | ✓ | ✓ | ✓ |
| View Volumes | ✓ | ✓ | ✓ |
@@ -66,11 +66,12 @@ The Backup and Restore custom resources can be created in the Rancher UI, or by
# Installing the rancher-backup Operator
The `rancher-backup` operator can be installed from the Rancher UI, or with the Helm CLI. In both cases, the `rancher-backup` Helm chart is installed on the Kubernetes cluster running the Rancher server. It is a cluster-admin only feature and available only for the local cluster.
The `rancher-backup` operator can be installed from the Rancher UI, or with the Helm CLI. In both cases, the `rancher-backup` Helm chart is installed on the Kubernetes cluster running the Rancher server. It is a cluster-admin only feature and available only for the **local** cluster. (*If you do not see `rancher-backup` in the Rancher UI, you may have selected the wrong cluster.*)
### Installing rancher-backup with the Rancher UI
1. In the Rancher UI, go to the **Cluster Explorer.**
1. In the Rancher UI's Cluster Manager, choose the cluster named **local.**
1. In the upper right, click **Cluster Explorer.**
1. Click **Apps.**
1. Click the `rancher-backup` operator.
1. Optional: Configure the default storage location. For help, refer to the [configuration section.](./configuration/storage-config)
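If you use the Helm CLI instead, a minimal sketch (chart names and the `cattle-resources-system` namespace are assumptions based on the `charts.rancher.io` repository):
```
helm repo add rancher-charts https://charts.rancher.io
helm repo update
helm install rancher-backup-crd rancher-charts/rancher-backup-crd -n cattle-resources-system --create-namespace
helm install rancher-backup rancher-charts/rancher-backup -n cattle-resources-system
```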
@@ -5,7 +5,7 @@ aliases:
- /rancher/v2.x/en/backups/back-up-rancher
---
In this section, you'll learn how to back up Rancher running on any Kubernetes cluster. To back up Rancher installed with Docker, refer to the instructions for [single node backups](../legacy/backup/single-node-backups/)
In this section, you'll learn how to back up Rancher running on any Kubernetes cluster. To back up Rancher installed with Docker, refer to the instructions for [single node backups]({{<baseurl>}}/rancher/v2.x/en/backups/v2.5/docker-installs/docker-backups)
### Prerequisites
@@ -61,10 +61,6 @@ The official Benchmark documents are available through the CIS website. The sign
# Installing rancher-cis-benchmark
The application can be installed with the Rancher UI or with Helm.
### Installing with the Rancher UI
1. In the Rancher UI, go to the **Cluster Explorer.**
1. Click **Apps.**
1. Click `rancher-cis-benchmark`.
@@ -72,27 +68,8 @@ The application can be installed with the Rancher UI or with Helm.
**Result:** The CIS scan application is deployed on the Kubernetes cluster.
### Installing with Helm
There are two Helm charts for the application:
- `rancher-cis-benchmark-crd`, the custom resource definition chart
- `rancher-cis-benchmark`, the chart deploying <a href="https://github.com/rancher/cis-operator" target="_blank">rancher/cis-operator</a>
To install the charts, run the following commands:
```
helm repo add rancherchart https://charts.rancher.io
helm repo update
helm install rancher-cis-benchmark-crd --kubeconfig <KUBECONFIG> rancherchart/rancher-cis-benchmark-crd --create-namespace -n cis-operator-system
helm install rancher-cis-benchmark --kubeconfig <KUBECONFIG> rancherchart/rancher-cis-benchmark -n cis-operator-system
```
# Uninstalling rancher-cis-benchmark
The application can be uninstalled with the Rancher UI or with Helm.
### Uninstalling with the Rancher UI
1. From the **Cluster Explorer,** go to the top left dropdown menu and click **Apps & Marketplace.**
1. Click **Installed Apps.**
1. Go to the `cis-operator-system` namespace and check the boxes next to `rancher-cis-benchmark-crd` and `rancher-cis-benchmark`.
@@ -100,15 +77,6 @@ The application can be uninstalled with the Rancher UI or with Helm.
**Result:** The `rancher-cis-benchmark` application is uninstalled.
### Uninstalling with Helm
Run the following commands:
```
helm uninstall rancher-cis-benchmark -n cis-operator-system
helm uninstall rancher-cis-benchmark-crd -n cis-operator-system
```
# Running a Scan
When a ClusterScan custom resource is created, it launches a new CIS scan on the cluster for the chosen ClusterScanProfile.
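A minimal ClusterScan manifest might look like this sketch (field names assumed from the rancher/cis-operator CRD; the scan name and profile are illustrative):
```
kubectl apply -f - <<'EOF'
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: example-cis-scan
spec:
  # The ClusterScanProfile to run the benchmark checks against
  scanProfileName: rke-profile-permissive
EOF
```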
@@ -29,6 +29,9 @@ This section covers the following topics:
{{% tabs %}}
{{% tab "Rancher v2.4.0+" %}}
### Snapshot Components
When Rancher creates a snapshot, it includes three components:
- The cluster data in etcd
@@ -37,13 +40,50 @@ When Rancher creates a snapshot, it includes three components:
Because the Kubernetes version is now included in the snapshot, it is possible to restore a cluster to a prior Kubernetes version.
The multiple components of the snapshot allow you to select from the following options if you need to a cluster from a snapshot:
The multiple components of the snapshot allow you to select from the following options if you need to restore a cluster from a snapshot:
- **Restore just the etcd contents:** This restoration is similar to restoring to snapshots in Rancher prior to v2.4.0.
- **Restore etcd and Kubernetes version:** This option should be used if a Kubernetes upgrade is the reason that your cluster is failing, and you haven't made any cluster configuration changes.
- **Restore etcd, Kubernetes version, and cluster configuration:** This option should be used if you changed both the Kubernetes version and cluster configuration when upgrading.
It's always recommended to take a new snapshot before any upgrades.
### Generating the Snapshot from etcd Nodes
For each etcd node in the cluster, the etcd cluster health is checked. If the node reports that the etcd cluster is healthy, a snapshot is created from it and optionally uploaded to S3.
The snapshot is stored in `/opt/rke/etcd-snapshots`. If the directory is configured on the nodes as a shared mount, each node's snapshot overwrites the previous one. Similarly, on S3 the snapshot will always be the one from the last node that uploads it, as all etcd nodes upload their snapshots and only the last remains.
When multiple etcd nodes exist, each snapshot is created only after the cluster has passed the health check, so it can be considered a valid snapshot of the data in the etcd cluster.
### Snapshot Naming Conventions
The name of the snapshot is auto-generated. The `--name` option can be used to override the name of the snapshot when creating one-time snapshots with the RKE CLI.
When Rancher creates a snapshot of an RKE cluster, the snapshot name is based on the type (whether the snapshot is manual or recurring) and the target (whether the snapshot is saved locally or uploaded to S3). The naming convention is as follows:
- `m` stands for manual
- `r` stands for recurring
- `l` stands for local
- `s` stands for S3
Some example snapshot names are:
- c-9dmxz-rl-8b2cx
- c-9dmxz-ml-kr56m
- c-9dmxz-ms-t6bjb
- c-9dmxz-rs-8gxc8
### How Restoring from a Snapshot Works
On restore, the following process is used:
1. The snapshot is retrieved from S3, if S3 is configured.
2. The snapshot is unzipped (if zipped).
3. One of the etcd nodes in the cluster serves that snapshot file to the other nodes.
4. The other etcd nodes download the snapshot and validate the checksum so that they all use the same snapshot for the restore.
5. The cluster is restored, and post-restore actions are performed on the cluster.
{{% /tab %}}
{{% tab "Rancher prior to v2.4.0" %}}
When Rancher creates a snapshot, only the etcd data is included in the snapshot.
@@ -51,6 +91,43 @@ When Rancher creates a snapshot, only the etcd data is included in the snapshot.
Because the Kubernetes version is not included in the snapshot, there is no option to restore a cluster to a different Kubernetes version.
It's always recommended to take a new snapshot before any upgrades.
### Generating the Snapshot from etcd Nodes
For each etcd node in the cluster, the etcd cluster health is checked. If the node reports that the etcd cluster is healthy, a snapshot is created from it and optionally uploaded to S3.
The snapshot is stored in `/opt/rke/etcd-snapshots`. If the directory is configured on the nodes as a shared mount, each node's snapshot overwrites the previous one. Similarly, on S3 the snapshot will always be the one from the last node that uploads it, as all etcd nodes upload their snapshots and only the last remains.
When multiple etcd nodes exist, each snapshot is created only after the cluster has passed the health check, so it can be considered a valid snapshot of the data in the etcd cluster.
### Snapshot Naming Conventions
The name of the snapshot is auto-generated. The `--name` option can be used to override the name of the snapshot when creating one-time snapshots with the RKE CLI.
When Rancher creates a snapshot of an RKE cluster, the snapshot name is based on the type (whether the snapshot is manual or recurring) and the target (whether the snapshot is saved locally or uploaded to S3). The naming convention is as follows:
- `m` stands for manual
- `r` stands for recurring
- `l` stands for local
- `s` stands for S3
Some example snapshot names are:
- c-9dmxz-rl-8b2cx
- c-9dmxz-ml-kr56m
- c-9dmxz-ms-t6bjb
- c-9dmxz-rs-8gxc8
### How Restoring from a Snapshot Works
On restore, the following process is used:
1. The snapshot is retrieved from S3, if S3 is configured.
2. The snapshot is unzipped (if zipped).
3. One of the etcd nodes in the cluster serves that snapshot file to the other nodes.
4. The other etcd nodes download the snapshot and validate the checksum so that they all use the same snapshot for the restore.
5. The cluster is restored, and post-restore actions are performed on the cluster.
{{% /tab %}}
{{% /tabs %}}
@@ -68,6 +68,6 @@ Rancher's integration with Istio was improved in Rancher v2.5.
Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark.
If you are using Rancher v2.5, refer to the CIS scan documentation [here.](./v2.5)
If you are using Rancher v2.5, refer to the CIS scan documentation [here.]({{<baseurl>}}/rancher/v2.x/en/cis-scans/v2.5)
If you are using Rancher v2.4, refer to the CIS scan documentation [here.](./v2.4)
If you are using Rancher v2.4, refer to the CIS scan documentation [here.]({{<baseurl>}}/rancher/v2.x/en/cis-scans/v2.4)
@@ -0,0 +1,16 @@
---
title: Deprecated Features in Rancher v2.5
weight: 100
---
### What is Rancher's Deprecation policy?
Starting in Rancher 2.5, we have published our official deprecation policy in the support [terms of service](https://rancher.com/support-maintenance-terms).
### Where can I find out which features have been deprecated in Rancher 2.5?
Rancher publishes deprecated features as part of the Rancher [release notes](https://github.com/rancher/rancher/releases/tag/v2.5.0) on GitHub.
### What can I expect when a feature is marked for deprecation?
In the release where functionality is marked as deprecated, it is still available and supported, allowing upgrades to follow the usual procedure. Once upgraded, users and admins should start planning to move away from the deprecated functionality before upgrading to the release where it is marked as removed. The recommendation for new deployments is not to use the deprecated feature.
@@ -20,6 +20,10 @@ New password for default administrator (user-xxxxx):
<new_password>
```
> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{<baseurl>}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
Kubernetes install (RKE add-on):
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
@@ -28,7 +32,6 @@ New password for default administrator (user-xxxxx):
<new_password>
```
### I deleted/deactivated the last admin, how can I fix it?
Docker Install:
```
@@ -46,6 +49,10 @@ New password for default administrator (user-xxxxx):
<new_password>
```
> #### **Important: RKE add-on install is only supported up to Rancher v2.0.8**
>
>If you are currently using the RKE add-on install method, see [Migrating from a Kubernetes Install with an RKE Add-on]({{<baseurl>}}/rancher/v2.x/en/upgrades/upgrades/migrating-from-rke-add-on/) for details on how to move to using the helm chart.
Kubernetes install (RKE add-on):
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
@@ -0,0 +1,164 @@
---
title: Template for an RKE Cluster with a Certificate Signed by Recognized CA and a Layer 4 Load Balancer
weight: 3
aliases:
- /rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-certificate-recognizedca
---
RKE uses a cluster.yml file to install and configure your Kubernetes cluster.
This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version.
The following template can be used for the cluster.yml if you have a setup with:
- Certificate signed by a recognized CA
- Layer 4 load balancer
- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/)
> For more options, refer to [RKE Documentation: Config Options]({{<baseurl>}}/rke/latest/en/config-options/).
```yaml
nodes:
- address: <IP> # hostname or IP to access nodes
user: <USER> # root user (usually 'root')
role: [controlplane,etcd,worker] # K8s roles for node
ssh_key_path: <PEM_FILE> # path to PEM file
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
addons: |-
---
kind: Namespace
apiVersion: v1
metadata:
name: cattle-system
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: cattle-admin
namespace: cattle-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cattle-crb
namespace: cattle-system
subjects:
- kind: ServiceAccount
name: cattle-admin
namespace: cattle-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
name: cattle-keys-ingress
namespace: cattle-system
type: Opaque
data:
tls.crt: <BASE64_CRT> # ssl cert for ingress. If self-signed, must be signed by same CA as cattle server
tls.key: <BASE64_KEY> # ssl key for ingress. If self-signed, must be signed by same CA as cattle server
---
apiVersion: v1
kind: Service
metadata:
namespace: cattle-system
name: cattle-service
labels:
app: cattle
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
selector:
app: cattle
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: cattle-system
name: cattle-ingress-http
annotations:
nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for websocket connections to keep the shell window open
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for websocket connections to keep the shell window open
spec:
rules:
- host: <FQDN> # FQDN to access cattle server
http:
paths:
- backend:
serviceName: cattle-service
servicePort: 80
tls:
- secretName: cattle-keys-ingress
hosts:
- <FQDN> # FQDN to access cattle server
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
namespace: cattle-system
name: cattle
spec:
replicas: 1
template:
metadata:
labels:
app: cattle
spec:
serviceAccountName: cattle-admin
containers:
# Rancher install via RKE addons is only supported up to v2.0.8
- image: rancher/rancher:v2.0.8
args:
- --no-cacerts
imagePullPolicy: Always
name: cattle-server
# env:
# - name: HTTP_PROXY
# value: "http://your_proxy_address:port"
# - name: HTTPS_PROXY
# value: "http://your_proxy_address:port"
# - name: NO_PROXY
# value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access"
livenessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 20
periodSeconds: 10
ports:
- containerPort: 80
protocol: TCP
- containerPort: 443
protocol: TCP
```
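Once the placeholders are filled in, the cluster is provisioned with the RKE CLI:
```
rke up --config cluster.yml
```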
@@ -0,0 +1,179 @@
---
title: Template for an RKE Cluster with a Self-signed Certificate and Layer 4 Load Balancer
weight: 2
aliases:
- /rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-certificate
---
RKE uses a cluster.yml file to install and configure your Kubernetes cluster.
This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version.
The following template can be used for the cluster.yml if you have a setup with:
- Self-signed SSL
- Layer 4 load balancer
- [NGINX Ingress controller](https://kubernetes.github.io/ingress-nginx/)
> For more options, refer to [RKE Documentation: Config Options]({{<baseurl>}}/rke/latest/en/config-options/).
```yaml
nodes:
- address: <IP> # hostname or IP to access nodes
user: <USER> # root user (usually 'root')
role: [controlplane,etcd,worker] # K8s roles for node
ssh_key_path: <PEM_FILE> # path to PEM file
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
- address: <IP>
user: <USER>
role: [controlplane,etcd,worker]
ssh_key_path: <PEM_FILE>
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
addons: |-
---
kind: Namespace
apiVersion: v1
metadata:
name: cattle-system
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: cattle-admin
namespace: cattle-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cattle-crb
namespace: cattle-system
subjects:
- kind: ServiceAccount
name: cattle-admin
namespace: cattle-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
name: cattle-keys-ingress
namespace: cattle-system
type: Opaque
data:
  tls.crt: <BASE64_CRT> # ssl cert for ingress. If self-signed, must be signed by same CA as cattle server
  tls.key: <BASE64_KEY> # ssl key for ingress. If self-signed, must be signed by same CA as cattle server
---
apiVersion: v1
kind: Secret
metadata:
name: cattle-keys-server
namespace: cattle-system
type: Opaque
data:
cacerts.pem: <BASE64_CA> # CA cert used to sign cattle server cert and key
---
apiVersion: v1
kind: Service
metadata:
namespace: cattle-system
name: cattle-service
labels:
app: cattle
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
selector:
app: cattle
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: cattle-system
name: cattle-ingress-http
annotations:
nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" # Max time in seconds for websocket connections to keep the shell window open
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" # Max time in seconds for websocket connections to keep the shell window open
spec:
rules:
- host: <FQDN> # FQDN to access cattle server
http:
paths:
- backend:
serviceName: cattle-service
servicePort: 80
tls:
- secretName: cattle-keys-ingress
hosts:
- <FQDN> # FQDN to access cattle server
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
namespace: cattle-system
name: cattle
spec:
replicas: 1
template:
metadata:
labels:
app: cattle
spec:
serviceAccountName: cattle-admin
containers:
# Rancher install via RKE addons is only supported up to v2.0.8
- image: rancher/rancher:v2.0.8
imagePullPolicy: Always
name: cattle-server
# env:
# - name: HTTP_PROXY
# value: "http://your_proxy_address:port"
# - name: HTTPS_PROXY
# value: "http://your_proxy_address:port"
# - name: NO_PROXY
# value: "localhost,127.0.0.1,0.0.0.0,10.43.0.0/16,your_network_ranges_that_dont_need_proxy_to_access"
livenessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 20
periodSeconds: 10
ports:
- containerPort: 80
protocol: TCP
- containerPort: 443
protocol: TCP
volumeMounts:
- mountPath: /etc/rancher/ssl
name: cattle-keys-volume
readOnly: true
volumes:
- name: cattle-keys-volume
secret:
defaultMode: 420
secretName: cattle-keys-server
```
@@ -1,13 +1,13 @@
---
title: Template for an RKE Cluster with a Self-signed Certificate and SSL Termination on Layer 7 Load Balancer
weight: 3
aliases:
aliases:
- /rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-externalssl-certificate
---
RKE uses a cluster.yml file to install and configure your Kubernetes cluster.
This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version. For details, see the [Kubernetes Install - Installation Outline]({{<baseurl>}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#installation-outline).
This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version.
The following template can be used for the cluster.yml if you have a setup with:
@@ -1,13 +1,13 @@
---
title: Template for an RKE Cluster with a Recognized CA Certificate and SSL Termination on Layer 7 Load Balancer
weight: 4
aliases:
aliases:
- /rancher/v2.x/en/installation/options/cluster-yml-templates/3-node-externalssl-recognizedca
---
RKE uses a cluster.yml file to install and configure your Kubernetes cluster.
This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version. For details, see the [Kubernetes Install - Installation Outline]({{<baseurl>}}/rancher/v2.x/en/installation/install-rancher-on-k8s/#installation-outline).
This template is intended to be used for RKE add-on installs, which are only supported up to Rancher v2.0.8. Please use the Rancher Helm chart if you are installing a newer Rancher version.
The following template can be used for the cluster.yml if you have a setup with:
@@ -69,8 +69,8 @@ Health checks can be executed on the `/healthz` endpoint of the node, this will
We have example configurations for the following load balancers:
* [Amazon ALB configuration](alb/)
* [NGINX configuration](nginx/)
* [Amazon ELB configuration]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nlb/)
* [NGINX configuration]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/nginx/)
## 3. Configure DNS
@@ -17,7 +17,7 @@ For systems without direct internet access, refer to the air gap installation in
# Prerequisites
These instructions assume you have set up two nodes, a load balancer, a DNS record, and an external MySQL database as described in [this section.](../infra-for-ha-with-external-db)
These instructions assume you have set up two nodes, a load balancer, a DNS record, and an external MySQL database as described in [this section.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-ha-with-external-db/)
# Installing Kubernetes
@@ -4,12 +4,7 @@ shortTitle: Infrastructure Tutorials
weight: 5
---
The K3s documentation has:
To set up infrastructure for a high-availability K3s Kubernetes cluster with an external DB, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-ha-with-external-db/)
- Instructions for [setting up infrastructure for a high-availability K3s Kubernetes cluster with an external DB]({{<baseurl>}}/k3s/latest/en/installation/tutorials/ha-with-external-db)
- Instructions for [setting up a high-availability K3s Kubernetes cluster with an external DB for a Rancher server]({{<baseurl>}}/k3s/latest/en/installation/tutorials/ha-with-external-db)
The RKE documentation has:
- Instructions for [setting up infrastructure for a high-availability RKE Kubernetes cluster]({{<baseurl>}}/)
- Instructions for [setting up a high-availability RKE cluster]()
To set up infrastructure for a high-availability RKE Kubernetes cluster, refer to [this page.]({{<baseurl>}}/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/infra-for-ha/)
@@ -7,10 +7,7 @@ aliases:
---
Only a user with the following [Kubernetes default roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) assigned can configure and install Istio in a Kubernetes cluster.
- `cluster-admin`
>**Prerequisite:** Only a user with the `cluster-admin` [Kubernetes default role](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) assigned can configure and install Istio in a Kubernetes cluster.
1. From the **Cluster Explorer**, navigate to the available **Charts** in **Apps & Marketplace.**
1. Select the Istio chart from the Rancher-provided charts.
@@ -40,7 +37,7 @@ The first option is to add a new Network Policy in each of the namespaces where
matchLabels:
app: istio-ingressgateway
```
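Expanded into a complete manifest, the first option might look like the sketch below (the policy name and target namespace are illustrative; because NetworkPolicies are additive, this only adds an allowance for ingress gateway traffic on top of the project isolation rules):
```
kubectl -n <APP-NAMESPACE> apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-istio-ingressgateway
spec:
  podSelector: {}            # select every pod in this namespace
  ingress:
  - from:
    - namespaceSelector: {}  # from any namespace...
      podSelector:
        matchLabels:
          app: istio-ingressgateway   # ...but only the ingress gateway pods
EOF
```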
The second option is to move the `ingress-system` namespace to the `system` project, which by default is excluded from the network isolation
The second option is to move the `istio-system` namespace to the `system` project, which by default is excluded from the network isolation
## Additional Config Options
@@ -7,6 +7,7 @@ weight: 1
- [Changes in Rancher v2.5](#changes-in-rancher-v2-5)
- [Configuring the Logging Output for the Rancher Kubernetes Cluster](#configuring-the-logging-output-for-the-rancher-kubernetes-cluster)
- [Enabling Logging for Rancher Managed Clusters](#enabling-logging-for-rancher-managed-clusters)
- [Uninstall Logging](#uninstall-logging)
- [Configuring the Logging Application](#configuring-the-logging-application)
- [Working with Taints and Tolerations](#working-with-taints-and-tolerations)
@@ -37,7 +38,24 @@ If you install Rancher using the Rancher CLI on an Linux OS, the Rancher Helm c
### Enabling Logging for Rancher Managed Clusters
If you have Enterprise Cluster Manager enabled, you can enable logging for a Rancher managed cluster by going to the Apps page and installing the logging app.
You can enable logging for a Rancher managed cluster by going to the **Apps** page and installing the logging app.
1. In the Rancher UI, go to the cluster where you want to install logging and click **Cluster Explorer.**
1. Click **Apps.**
1. Click the `rancher-logging` app.
1. Scroll to the bottom of the Helm chart README and click **Install.**
**Result:** The logging app is deployed in the `cattle-logging-system` namespace.
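The same app can be installed with the Helm CLI; a sketch, using the chart and namespace names referenced in the uninstall steps below:
```
helm repo add rancher-charts https://charts.rancher.io
helm repo update
helm install rancher-logging-crd rancher-charts/rancher-logging-crd -n cattle-logging-system --create-namespace
helm install rancher-logging rancher-charts/rancher-logging -n cattle-logging-system
```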
### Uninstall Logging
1. From the **Cluster Explorer,** click **Apps & Marketplace.**
1. Click **Installed Apps.**
1. Go to the `cattle-logging-system` namespace and check the boxes for `rancher-logging` and `rancher-logging-crd`.
1. Click **Delete.**
1. Confirm **Delete.**
**Result:** `rancher-logging` is uninstalled.
### Configuring the Logging Application
@@ -272,11 +290,12 @@ spec:
In the above example, we ensure that our pod only runs on Linux nodes, and we add a ```toleration``` for the taint we have on all of our Linux nodes.
You can do the same with Rancher's existing taints, or with your own custom ones.
**Why do we not schedule logging-related pods on Windows nodes?**
**Are clusters with Windows worker nodes supported?**
No parts of the logging stack are compatible with Windows Kubernetes nodes.
For instance, if a logging pod is attempting to pull its image from a container registry, there may only be Linux-compatible images available.
In this scenario, the pod would be stuck in an ```ImagePullBackOff``` status; and would eventually change to a ```ErrImagePull``` status.
Yes, clusters with Windows worker nodes support logging, with some small caveats:
1. Logs from Windows nodes currently cannot be exported.
2. ```fluentd-configcheck``` pod(s) will fail due to an [upstream issue](https://github.com/banzaicloud/logging-operator/issues/592), where ```tolerations``` and ```nodeSelector``` settings are not inherited from the ```logging-operator```.
**Adding NodeSelector Settings and Tolerations for Custom Taints**
@@ -6,7 +6,7 @@ weight: 19
[Longhorn](https://longhorn.io/) is a lightweight, reliable and easy-to-use distributed block storage system for Kubernetes.
Longhorn is free, open source software. Originally developed by Rancher Labs, it is now being developed as a sandbox project of the Cloud Native Computing Foundation. It can be installed on any Kubernetes cluster with Helm, with kubectl, or with the Rancher UI.
Longhorn is free, open source software. Originally developed by Rancher Labs, it is now being developed as a sandbox project of the Cloud Native Computing Foundation. It can be installed on any Kubernetes cluster with Helm, with kubectl, or with the Rancher UI. You can learn more about its architecture [here.](https://longhorn.io/docs/1.0.2/concepts/)
With Longhorn, you can:
@@ -19,6 +19,9 @@ With Longhorn, you can:
- Restore volumes from backup
- Upgrade Longhorn without disrupting persistent volumes
<figcaption>Longhorn Dashboard</figcaption>
![Longhorn Dashboard]({{<baseurl>}}/img/rancher/longhorn-screenshot.png)
### New in Rancher v2.5
Prior to Rancher v2.5, Longhorn could be installed as a Rancher catalog app. In Rancher v2.5, the catalog system was replaced by the **Apps & Marketplace,** and it became possible to install Longhorn as an app from that page.
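Outside of the Rancher UI, Longhorn can also be installed with Helm from its upstream chart repository:
```
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
```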
@@ -59,4 +62,15 @@ The Longhorn project is available [here.](https://github.com/longhorn/longhorn)
### Documentation
The Longhorn documentation is [here.](https://longhorn.io/docs/)
The Longhorn documentation is [here.](https://longhorn.io/docs/)
### Architecture
Longhorn creates a dedicated storage controller for each volume and synchronously replicates the volume across multiple replicas stored on multiple nodes.
The storage controller and replicas are themselves orchestrated using Kubernetes.
You can learn more about its architecture [here.](https://longhorn.io/docs/1.0.2/concepts/)
<figcaption>Longhorn Architecture</figcaption>
![Longhorn Architecture]({{<baseurl>}}/img/rancher/longhorn-architecture.svg)
@@ -4,7 +4,6 @@ shortTitle: Rancher v2.5
weight: 1
---
Using Rancher, you can quickly deploy leading open-source monitoring & alerting solutions such as [Prometheus](https://prometheus.io/), [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/), and [Grafana](https://grafana.com/docs/grafana/latest/getting-started/what-is-grafana/) onto your cluster.
Rancher's solution (powered by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator)) allows users to:
@@ -63,7 +62,7 @@ By viewing data that Prometheus scrapes from your cluster control plane, nodes,
[Grafana](https://grafana.com/grafana/) allows you to query, visualize, alert on and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data driven culture.
# Enabling Cluster Monitoring
# Enable Monitoring
As an [administrator]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/global-permissions/) or [cluster owner]({{<baseurl>}}/rancher/v2.x/en/admin-settings/rbac/cluster-project-roles/#cluster-roles), you can configure Rancher to deploy Prometheus to monitor your Kubernetes cluster.
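If you prefer the CLI, a sketch of the equivalent Helm install (chart names and the `cattle-monitoring-system` namespace are assumptions based on the `charts.rancher.io` repository):
```
helm repo add rancher-charts https://charts.rancher.io
helm repo update
helm install rancher-monitoring-crd rancher-charts/rancher-monitoring-crd -n cattle-monitoring-system --create-namespace
helm install rancher-monitoring rancher-charts/rancher-monitoring -n cattle-monitoring-system
```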
@@ -25,10 +25,10 @@ Unlike in Monitoring & Alerting V1, both features are packaged in a single Helm
Monitoring V2 can only be configured on the cluster level. Project-level monitoring and alerting is no longer supported.
For more information on how to configure Monitoring & Alerting V2, see the [docs for monitoring in Rancher v2.5](/rancher/v2.x/en/monitoring-alerting).
For more information on how to configure Monitoring & Alerting V2, see [this page.]({{<baseurl>}}/rancher/v2.x/en/monitoring-alerting/v2.5/configuration)
### Changes to Role-based Access Control
Project owners and members no longer get access to Grafana or Prometheus by default. If view-only users had access to Grafana, they would be able to see data from any namespace. For Kiali, any user can edit things they don't own in any namespace.
For more information about role-based access control in `rancher-monitoring`, refer to [this page.](../rbac)
For more information about role-based access control in `rancher-monitoring`, refer to [this page.](../rbac)
@@ -9,7 +9,7 @@ _Available as of v2.4.0_
> This is an experimental feature.
To ensure consistency and compliance, every organization needs the ability to define and enforce policies in its environment in an automated way. OPA [https://www.openpolicyagent.org/] (Open Policy Agent) is a policy engine that facilitates policy-based control for cloud native environments. Rancher provides the ability to enable OPA Gatekeeper in Kubernetes clusters, and also installs a couple of built-in policy definitions, which are also called constraint templates.
To ensure consistency and compliance, every organization needs the ability to define and enforce policies in its environment in an automated way. [OPA (Open Policy Agent)](https://www.openpolicyagent.org/) is a policy engine that facilitates policy-based control for cloud native environments. Rancher provides the ability to enable OPA Gatekeeper in Kubernetes clusters, and also installs a couple of built-in policy definitions, which are also called constraint templates.
OPA provides a high-level declarative language that lets you specify policy as code, and simple APIs to offload policy decision-making from your software.
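For instance, assuming a `k8srequiredlabels`-style constraint template is among the built-in definitions, a constraint enforcing an `owner` label on namespaces might look like this sketch:
```
kubectl apply -f - <<'EOF'
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]   # enforce only on Namespace objects
  parameters:
    labels: ["owner"]          # every namespace must carry an "owner" label
EOF
```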
@@ -32,7 +32,7 @@ See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [DO Q
Suggestions include:
- `do_region` - DigitalOcean region, choose the closest instead of the default
- `prefix` - Prefix for all created resources
- `droplet_size` - Droplet size used, minimum is `s-2vcpu-4gb` but `s-4vcpu-8g` could be used if within budget
- `droplet_size` - Droplet size used, minimum is `s-2vcpu-4gb` but `s-4vcpu-8gb` could be used if within budget
- `ssh_key_file_name` - Use a specific SSH key instead of `~/.ssh/id_rsa` (public key is assumed to be `${ssh_key_file_name}.pub`)
1. Run `terraform init`.
@@ -17,17 +17,17 @@ The following log levels are used in Rancher:
* Configure debug log level
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | awk '{ print $1 }' | while read rancherpod; do kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $rancherpod -- loglevel --set debug; done
$ kubectl -n cattle-system get pods -l app=rancher --no-headers -o custom-columns=name:.metadata.name | while read rancherpod; do kubectl -n cattle-system exec $rancherpod -c rancher -- loglevel --set debug; done
OK
OK
OK
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system logs -l app=rancher
$ kubectl -n cattle-system logs -l app=rancher -c rancher
```
* Configure info log level
```
$ KUBECONFIG=./kube_config_rancher-cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | awk '{ print $1 }' | while read rancherpod; do kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $rancherpod -- loglevel --set info; done
$ kubectl -n cattle-system get pods -l app=rancher --no-headers -o custom-columns=name:.metadata.name | while read rancherpod; do kubectl -n cattle-system exec $rancherpod -c rancher -- loglevel --set info; done
OK
OK
OK
@@ -27,6 +27,30 @@ You can use RKE to [restore your cluster from backup]({{<baseurl>}}/rke/latest/e
These [example scenarios]({{<baseurl>}}/rke/latest/en/etcd-snapshots/example-scenarios) for backup and restore are different based on your version of RKE.
# How Snapshots Work
For each etcd node in the cluster, the etcd cluster health is checked. If the node reports that the etcd cluster is healthy, a snapshot is created from it and optionally uploaded to S3.
The snapshot is stored in `/opt/rke/etcd-snapshots`. If the directory is configured on the nodes as a shared mount, each node's snapshot overwrites the previous one. Similarly, on S3 the snapshot will always be the one from the last node that uploads it, as all etcd nodes upload their snapshots and only the last remains.
When multiple etcd nodes exist, each snapshot is created only after the cluster has passed the health check, so it can be considered a valid snapshot of the data in the etcd cluster.
### Snapshot Naming
The name of the snapshot is auto-generated. The `--name` option can be used to override the name of the snapshot when creating one-time snapshots with the RKE CLI.
An example one-time snapshot name is `rke_etcd_snapshot_2020-10-15T16:47:24+02:00`. An example recurring snapshot name is `2020-10-15T14:53:26Z_etcd`.
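For example, a one-time snapshot with an overridden name can be taken with the RKE CLI (the snapshot name here is illustrative):
```
rke etcd snapshot-save --config cluster.yml --name before-upgrade
```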
### How Restoring from a Snapshot Works
On restore, the following process is used:
1. The snapshot is retrieved from S3, if S3 is configured.
2. The snapshot is unzipped (if zipped).
3. One of the etcd nodes in the cluster serves that snapshot file to the other nodes.
4. The other etcd nodes download the snapshot and validate the checksum so that they all use the same snapshot for the restore.
5. The cluster is restored, and post-restore actions are performed on the cluster.
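As a sketch, restoring from the one-time snapshot named above would look like this (add the `--s3` options if the snapshot was uploaded to S3):
```
rke etcd snapshot-restore --config cluster.yml --name before-upgrade
```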
## Troubleshooting
If you have trouble restoring your cluster, you can refer to the [troubleshooting]({{<baseurl>}}/rke/latest/en/etcd-snapshots/troubleshooting) page.
Three new image files added (376 B, 87 KiB, and 94 KiB); diffs not shown.