mirror of
https://github.com/rancher/rancher-docs.git
Merge branch 'staging' into windows
@@ -5,16 +5,28 @@ Rancher Docs

The `rancher/docs:dev` docker image runs a live-updating server. To run it on your workstation:

Linux

```bash
./scripts/dev
```

Windows

```powershell
./scripts/dev-windows.ps1
```

and then navigate to http://localhost:9001/. You can customize the port by passing it as an argument:

Linux

```bash
./scripts/dev 8080
```

Windows

```powershell
./scripts/dev-windows.ps1 -port 8080
```

License
=======

Copyright (c) 2014-2019 [Rancher Labs, Inc.](http://rancher.com)
@@ -30,7 +30,7 @@ The Rancher authentication proxy integrates with the following external authenti

<br/>
However, Rancher also provides [local authentication]({{< baseurl >}}/rancher/v2.x/en/admin-settings/authentication/local/).

-In most cases, you should use an external authentication service over local authentication, as external authentication allows user management from a central location. However, you may want a few local authentication users for managing Rancher under rare circumstances, such as if Active Directory is down.
+In most cases, you should use an external authentication service over local authentication, as external authentication allows user management from a central location. However, you may want a few local authentication users for managing Rancher under rare circumstances, such as if your external authentication provider is unavailable or undergoing maintenance.

## Users and Groups
@@ -49,7 +49,7 @@ The following table lists each custom global permission available and whether it

| Manage Roles | ✓ | |
| Manage Users | ✓ | |
| Create Clusters | ✓ | ✓ |
-| User Catalog Templates | ✓ | ✓ |
+| Use Catalog Templates | ✓ | ✓ |
| Login Access | ✓ | ✓ |

> **Notes:**
@@ -75,7 +75,7 @@ _Available as of v2.2.0_

In Rancher v2.2.0, you can add private catalog repositories using credentials like Username and Password. You may also want to use an
OAuth token if your Git or Helm repository server supports that.

-[Read More About Adding Private Git/Helm Catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/private/)
+[Read More About Adding Private Git/Helm Catalogs]({{< baseurl >}}/rancher/v2.x/en/catalog/custom/#private-repositories)

<!--There are two types of catalogs that can be added into Rancher. There are global catalogs and project catalogs. In a global catalog, the catalog templates are available in *all* projects. In a project catalog, the catalog charts are only available in the project that the catalog is added to.
@@ -7,15 +7,15 @@ _Available as of v2.2.0_

In the Rancher UI, etcd backup and recovery for [Rancher launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) can be easily performed. Snapshots of the etcd database are taken and saved either [locally onto the etcd nodes](#local-backup-target) or to a [S3 compatible target](#s3-backup-target). The advantage of configuring S3 is that if all etcd nodes are lost, your snapshot is saved remotely and can be used to restore the cluster.

-Rancher recommends enabling the ability to set up recurring snapshots, but one-time snapshots can easily be taken as well.
+Rancher recommends configuring recurring `etcd` snapshots for all production clusters. Additionally, one-time snapshots can easily be taken.

->**Note:** If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Ranchher, you must [edit the cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/restoring-etcd/).
+>**Note:** If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the updated snapshot features. Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to [back up and restore etcd through the UI]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/restoring-etcd/).

## Configuring Recurring Snapshots for the Cluster

-By default, any [Rancher launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) are enabled to take recurring snapshots that are saved locally.
+By default, [Rancher launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) are configured to take recurring snapshots (saved to local disk). To protect against local disk failure, using the [S3 Target](#s3-backup-target) or replicating the path on disk is advised.
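Because local snapshots live only on the etcd nodes themselves, one simple way to replicate them is to copy the snapshot directory to another host. This is only a sketch: it assumes the default local snapshot path `/opt/rke/etcd-snapshots` and a hypothetical `backup-host`; adjust both to your environment.

```
rsync -av /opt/rke/etcd-snapshots/ backup-host:/backups/etcd-snapshots/
```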
-During cluster provisioning or editing the cluster, the configuration about snapshots are in the advanced section for **Cluster Options**. Click on **Show advanced options**.
+During cluster provisioning or editing the cluster, the configuration for snapshots can be found in the advanced section for **Cluster Options**. Click on **Show advanced options**.

In the **Advanced Cluster Options** section, there are several options available to configure:

@@ -39,7 +39,7 @@ By default, the `local` backup target is selected. The benefits of this option i

#### S3 Backup Target

-The `S3` backup target allows users to configure a S3 compatible backend to store the snapshots. The main benefit of this option is that if the cluster loses all the etcd nodes, the cluster can still be restored as the snapshots are stored externally. The downside of using the `S3` backup target is that additional configuration is required in order to have these snapshots saved remotely.
+The `S3` backup target allows users to configure a S3 compatible backend to store the snapshots. The primary benefit of this option is that if the cluster loses all the etcd nodes, the cluster can still be restored as the snapshots are stored externally. Rancher recommends external targets like the `S3` backup target; however, its configuration requires additional effort that should be considered.
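Once an S3 backup target is configured, it is worth confirming that snapshots actually arrive in the bucket. A minimal check with the AWS CLI, assuming a hypothetical bucket name:

```
aws s3 ls s3://my-etcd-snapshots-bucket/
```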
| Option | Description | Required|
|---|---|---|

@@ -55,7 +55,7 @@ Select how often you want recurring snapshots to be taken as well as how many sn

## One-Time Snapshots

-Besides recurring snapshots, you might want to take a one-time snapshot in specific use cases. For example, if you're about to upgrade the Kubernetes version of your cluster, you might want to take a snapshot right before the upgrade.
+In addition to recurring snapshots, you may want to take a "one-time" snapshot. For example, before upgrading the Kubernetes version of a cluster, it's best to back up the state of the cluster to protect against upgrade failure.

1. In the **Global** view, navigate to the cluster from which you want to take a one-time snapshot.
@@ -3,12 +3,8 @@ title: Certificate Rotation

weight: 2040
---

_Available as of v2.2.0_

By default, Kubernetes clusters require certificates and Rancher launched Kubernetes clusters automatically generate certificates for the Kubernetes components. Rotating these certificates is important before the certificates expire, as well as if a certificate is compromised. After the certificates are rotated, the Kubernetes components are automatically restarted.

> **Note:** Even though the RKE CLI can use custom certificates for the Kubernetes cluster components, Rancher currently doesn't allow these to be uploaded for Rancher Launched Kubernetes clusters.

Certificates can be rotated for the following services:

- etcd

@@ -18,6 +14,11 @@ Certificates can be rotated for the following services:

- kube-scheduler
- kube-controller-manager

### Certificate Rotation in Rancher v2.2.x

_Available as of v2.2.0_

Rancher launched Kubernetes clusters have the ability to rotate the auto-generated certificates through the UI.

1. In the **Global** view, navigate to the cluster for which you want to rotate certificates.

@@ -32,3 +33,24 @@ Rancher launched Kubernetes clusters have the ability to rotate the auto-generat

4. Click **Save**.

**Results:** The selected certificates will be rotated and the related services will be restarted to start using the new certificate.

> **Note:** Even though the RKE CLI can use custom certificates for the Kubernetes cluster components, Rancher currently doesn't allow these to be uploaded for Rancher Launched Kubernetes clusters.

### Certificate Rotation in Rancher v2.1.x and v2.0.x

_Available as of v2.1.14 and v2.0.9_

Rancher launched Kubernetes clusters have the ability to rotate the auto-generated certificates through the API.

1. In the **Global** view, navigate to the cluster for which you want to rotate certificates.

2. Select the **Ellipsis (...) > View in API**.

3. Click on **RotateCertificates**.

4. Click on **Show Request**.

5. Click on **Send Request**.

**Results:** All Kubernetes certificates will be rotated.
@@ -19,16 +19,16 @@ When cleaning nodes provisioned using Rancher, the following components are dele

| `serviceAccount`, `clusterRoles`, and `clusterRoleBindings` labeled by Rancher | ✓ | ✓ | ✓ | ✓ |
| Labels, Annotations, and Finalizers | ✓ | ✓ | ✓ | ✓ |
| Rancher Deployment | ✓ | ✓ | ✓ | |
-| Machines, clusters, projects, and user custom resource deployments (CRDs) | ✓ | ✓ | ✓ | |
+| Machines, clusters, projects, and user custom resource definitions (CRDs) | ✓ | ✓ | ✓ | |
| All resources created under the `management.cattle.io` API Group | ✓ | ✓ | ✓ | |
-| All CRDs created by Rancher v2.0.x | ✓ | ✓ | ✓ | |
+| All CRDs created by Rancher v2.x | ✓ | ✓ | ✓ | |

[1]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/
[2]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/
[3]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/
[4]: {{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/

-## Removing A Node from a Cluster by Rancher UI
+## Removing a Node from a Cluster by Rancher UI

When the node is in `Active` state, removing the node from a cluster will trigger a process to clean up the node. Please restart the node after the automatic cleanup process is done to make sure any non-persistent data is properly removed.
@@ -36,10 +36,10 @@ When the node is in `Active` state, removing the node from a cluster will trigge

```
# using reboot
-reboot
+$ sudo reboot

# using shutdown
-shutdown -r now
+$ sudo shutdown -r now
```

## Cleaning a Node Manually
@@ -183,10 +183,10 @@ The remaining two components that are changed/configured are (virtual) network i

```
# using reboot
-reboot
+$ sudo reboot

# using shutdown
-shutdown -r now
+$ sudo shutdown -r now
```

If you want to know more about (virtual) network interfaces or iptables rules, please see the specific subjects below.
@@ -223,7 +223,7 @@ ip link delete interface_name

>**Note:** Depending on the network provider configured for the cluster the node was part of, some of the chains will or won't be present on the node.

-Iptables rules are used to route traffic from and to containers. The created rules are not persistent, so restarting the node will restore iptables to it's original state.
+Iptables rules are used to route traffic from and to containers. The created rules are not persistent, so restarting the node will restore iptables to its original state.

Chains |
--------|
@@ -30,7 +30,14 @@ For [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provis

> **Note:** By default, all Rancher Launched Kubernetes clusters have [Authorized Cluster Endpoint]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#authorized-cluster-endpoint) enabled.

-To find the name of the context(s), view the kubeconfig file.
+To find the name of the context(s), run:

```
kubectl config get-contexts --kubeconfig /custom/path/kube.config
CURRENT   NAME                        CLUSTER                     AUTHINFO     NAMESPACE
*         my-cluster                  my-cluster                  user-46tmn
          my-cluster-controlplane-1   my-cluster-controlplane-1   user-46tmn
```
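As a quick sanity check, one of the listed contexts can be used directly against the authorized cluster endpoint. A minimal sketch, reusing the context name and kubeconfig path from the example above:

```
kubectl --kubeconfig /custom/path/kube.config --context my-cluster-controlplane-1 get nodes
```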
### Clusters with FQDN defined as an Authorized Cluster Endpoint
@@ -9,7 +9,7 @@ etcd backup and recovery for [Rancher launched Kubernetes clusters]({{< baseurl

Rancher recommends enabling the [ability to set up recurring snapshots of etcd]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/#configuring-recurring-snapshots-for-the-cluster), but [one-time snapshots]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/#one-time-snapshots) can easily be taken as well. Rancher allows restore from [saved snapshots](#restoring-your-cluster-from-a-snapshot) or, if you don't have any snapshots, you can still [restore etcd](#recovering-etcd-without-a-snapshot).

->**Note:** If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Ranchher, you must [edit the cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the [updated snapshot features]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/). Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to back up and restore etcd through the UI.
+>**Note:** If you have any Rancher launched Kubernetes clusters that were created prior to v2.2.0, after upgrading Rancher, you must [edit the cluster]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/editing-clusters/) and _save_ it, in order to enable the [updated snapshot features]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/backing-up-etcd/). Even if you were already creating snapshots prior to v2.2.0, you must do this step as the older snapshots will not be available to use to back up and restore etcd through the UI.

## Viewing Available Snapshots
@@ -8,7 +8,7 @@ aliases:

Rancher can integrate with a variety of popular logging services and tools that exist outside of your Kubernetes clusters.

-Rancher supports the following services:
+Rancher supports integration with the following services:

- Elasticsearch
- Splunk
@@ -56,7 +56,7 @@ As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global

1. Select **Tools > Logging** in the navigation bar.

-1. Select a logging service and enter the configuration. Refer to the specific service for detailed configuration. Rancher supports the following services:
+1. Select a logging service and enter the configuration. Refer to the specific service for detailed configuration. Rancher supports integration with the following services:

   - [Elasticsearch]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/elasticsearch/)
   - [Splunk]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/tools/logging/splunk/)
@@ -39,7 +39,7 @@ As an [administrator]({{< baseurl >}}/rancher/v2.x/en/admin-settings/rbac/global

### Resource Consumption

-When enabling cluster monitoring, you need to ensure your worker nodes and Prometheus pod have enough resources. The tables below provides a guide of how much resource consumption will be used.
+When enabling cluster monitoring, you need to ensure your worker nodes and Prometheus pod have enough resources. The tables below provide a guide for how many resources will be consumed. In larger deployments, it is strongly advised that the monitoring infrastructure be placed on dedicated nodes in the cluster.
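One generic way to reserve nodes for the monitoring stack is to label and taint them so only workloads that tolerate the taint land there. This is a sketch of the plain Kubernetes approach with a hypothetical node name; it is not a Rancher-specific setting.

```
kubectl label node worker-3 dedicated=monitoring
kubectl taint node worker-3 dedicated=monitoring:NoSchedule
```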
#### Prometheus Pod Resource Consumption

@@ -51,6 +51,24 @@ Number of Cluster Nodes | CPU (milli CPU) | Memory | Disk

50| 2000 | 2 GB | ~5 GB/Day
256| 4000 | 6 GB | ~18 GB/Day

Additional pod resource requirements for cluster level monitoring.

Workload | Container | CPU - Request | Mem - Request | CPU - Limit | Mem - Limit | Configurable
---------|-----------|---------------|---------------|-------------|-------------|-------------
Prometheus|Prometheus| 750m | 750Mi| 1000m | 1000Mi| Y
||Prometheus-proxy| 50m | 50Mi | 100m | 100Mi| Y
||Prometheus-auth| 100m | 100Mi | 500m | 200Mi | Y
||Prometheus-config-reloader| - | - | 50m | 50Mi | N
||rules-configmap-reloader | - | - | 100m | 25Mi | N
Grafana | grafana-init-plugin-json-copy | 50m |50Mi|50m|50Mi|Y
||grafana-init-plugin-json-modify|50m|50Mi|50m|50Mi|Y
||grafana |100m|100Mi|200m|200Mi|Y
||grafana-proxy|50m|50Mi|100m|100Mi|Y
Kube-State Exporter|kube-state |100m|130Mi|100m|200Mi|Y
Node Exporter | exporter-node | 200m | 200Mi | 200m | 200Mi|Y
Operator | prometheus-operator | 100m | 50Mi | 200m | 100Mi | Y

#### Other Pods Resource Consumption

Besides the Prometheus pod, there are components that are deployed that require additional resources on the worker nodes.
@@ -28,6 +28,8 @@ The [node exporter](https://github.com/prometheus/node_exporter/blob/master/READ

When configuring Prometheus and enabling the node exporter, enter a host port in the **Node Exporter Host Port** that will not produce port conflicts with existing applications. The host port chosen must be open to allow internal traffic between Prometheus and the Node Exporter.

>**Warning:** In order for Prometheus to collect the metrics of the node exporter, after enabling cluster monitoring, you must open the **Node Exporter Host Port** in the host firewall rules to allow intranet access. By default, `9796` is used as that host port.
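For example, on a host using firewalld the default port could be opened like this. This is only a sketch; adapt it to whichever firewall your nodes actually use and to the port you configured:

```
sudo firewall-cmd --permanent --add-port=9796/tcp
sudo firewall-cmd --reload
```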
## Persistent Storage

>**Prerequisite:** Configure one or more [storage class]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/#adding-storage-classes) to use as [persistent storage]({{< baseurl >}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/) for your Prometheus or Grafana pod.
@@ -140,7 +140,7 @@ Use {{< product >}} to create a Kubernetes cluster in Amazon EC2.

        "arn:aws:ec2:REGION:AWS_ACCOUNT_ID:key-pair/*",
        "arn:aws:ec2:REGION:AWS_ACCOUNT_ID:network-interface/*",
        "arn:aws:ec2:REGION:AWS_ACCOUNT_ID:security-group/*",
-       "arn:aws:iam::AWS_ACCOUNT_ID:role/your-role-name"
+       "arn:aws:iam::AWS_ACCOUNT_ID:role/YOUR_ROLE_NAME"
      ]
    },
    {
@@ -7,7 +7,7 @@ aliases:

Rancher needs to be configured to use the private registry in order to provision any [Rancher launched Kubernetes clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) or [Rancher tools]({{< baseurl >}}/rancher/v2.x/en/tools/).

->**Note:** If you want to configure Rancher for your private registry when when starting the rancher/rancher container, you can use the environment variable `CATTLE_SYSTEM_DEFAULT_REGISTRY`.
+>**Note:** If you want to configure Rancher to use your private registry when starting the rancher/rancher container, you can use the environment variable `CATTLE_SYSTEM_DEFAULT_REGISTRY`.
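A minimal sketch of that approach, assuming a hypothetical registry hostname and the standard single-node port mapping:

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=registry.example.com \
  rancher/rancher
```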
1. Log into Rancher and configure the default admin password.
@@ -6,7 +6,7 @@ aliases:

## A. Prepare System Charts

-The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as moniotring, logging, alerting and global DNS. To be able to use these features in an air gap install, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach and configure Rancher to use that repository.
+The [System Charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. To be able to use these features in an air gap install, you will need to mirror the `system-charts` repository to a location in your network that Rancher can reach and configure Rancher to use that repository.
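A minimal sketch of one way to mirror the repository, assuming a hypothetical internal Git server that Rancher can reach:

```
git clone https://github.com/rancher/system-charts.git
cd system-charts
git remote add internal https://git.internal.example.com/rancher/system-charts.git
git push internal --all
```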
## B. Configure System Charts
@@ -15,7 +15,7 @@ kubectl -n cattle-system create secret tls tls-rancher-ingress \

  --key=tls.key
```

-> **Note:** If you want to replace the certificate, you can delete the `tls-rancher-ingress` secret using `kubectl -n cattle-system delete secret tls-rancher-ingress` and add a new one using the command shown above. Replacing the certificate is only supported if the new certificate is signed by the same CA as the certificate currently in use.
+> **Note:** If you want to replace the certificate, you can delete the `tls-rancher-ingress` secret using `kubectl -n cattle-system delete secret tls-rancher-ingress` and add a new one using the command shown above. If you are using a private CA signed certificate, replacing the certificate is only possible if the new certificate is signed by the same CA as the certificate currently in use.
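Putting the note together with the create command above, a certificate replacement looks roughly like this, assuming the new certificate and key are in `tls.crt` and `tls.key`:

```
kubectl -n cattle-system delete secret tls-rancher-ingress
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```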
### Using a Private CA Signed Certificate
@@ -78,9 +78,9 @@ Test your connectivity with `kubectl` and see if all your nodes are in `Ready` s

kubectl get nodes

 NAME              STATUS   ROLES                      AGE   VERSION
-165.227.114.63    Ready    controlplane,etcd,worker   11m   v1.10.1
-165.227.116.167   Ready    controlplane,etcd,worker   11m   v1.10.1
-165.227.127.226   Ready    controlplane,etcd,worker   11m   v1.10.1
+165.227.114.63    Ready    controlplane,etcd,worker   11m   v1.13.5
+165.227.116.167   Ready    controlplane,etcd,worker   11m   v1.13.5
+165.227.127.226   Ready    controlplane,etcd,worker   11m   v1.13.5
```

### Check the Health of Your Cluster Pods
@@ -11,16 +11,16 @@ Whether you're configuring Rancher to run in a single-node or high-availability

<br>
Rancher is tested on the following operating systems and their subsequent non-major releases with a supported version of [Docker](https://www.docker.com/).

-* Ubuntu 16.04 (64-bit)
+* Ubuntu 16.04 (64-bit x86)
    * Docker 17.03.x, 18.06.x, 18.09.x
-* Ubuntu 18.04 (64-bit)
+* Ubuntu 18.04 (64-bit x86)
    * Docker 18.06.x, 18.09.x
-* Red Hat Enterprise Linux (RHEL)/CentOS 7.6 (64-bit)
+* Red Hat Enterprise Linux (RHEL)/CentOS 7.6 (64-bit x86)
    * RHEL Docker 1.13
    * Docker 17.03.x, 18.06.x, 18.09.x
-* RancherOS 1.5.1 (64-bit)
+* RancherOS 1.5.1 (64-bit x86)
    * Docker 17.03.x, 18.06.x, 18.09.x
-* Windows Server 2019 (64-bit)
+* Windows Server 2019 (64-bit x86)
    * Docker 18.09
    * _Experimental, see [Configuring Custom Clusters for Windows]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/)_
@@ -32,7 +32,9 @@ sudo ros engine list

# Switch to a supported version
sudo ros engine switch docker-18.09.2
```

See [Running on ARM64 (Experimental)]({{< baseurl >}}/rancher/v2.x/en/installation/arm64-platform/) if you plan to run Rancher on ARM64.
<br>
<br>
[Docker Documentation: Installation Instructions](https://docs.docker.com/)
<br>
<br>
@@ -88,7 +88,7 @@ docker run -d --restart=unless-stopped \

{{% /accordion %}}
{{% accordion id="option-d" label="Option D-Let's Encrypt Certificate" %}}

-For production environments, you also have the options of using [Let's Encrypt](https://letsencrypt.org/) certificates. Let's Encrypt uses an http-01 challenge to verify that you have control over your domain. You can confirm that you control the domain by pointing the hostname that you want to use for Rancher access (for example, `rancher.mydomain.com`) to the IP of the machine it is running on. You can bind the hostname to the IP address by creating an A record in DNS.
+For production environments, you also have the option of using [Let's Encrypt](https://letsencrypt.org/) certificates. Let's Encrypt uses an http-01 challenge to verify that you have control over your domain. You can confirm that you control the domain by pointing the hostname that you want to use for Rancher access (for example, `rancher.mydomain.com`) to the IP of the machine it is running on. You can bind the hostname to the IP address by creating an A record in DNS.
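A quick way to confirm the A record is in place before requesting the certificate, using the example hostname from above:

```
dig +short rancher.mydomain.com
```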
>**Prerequisites:**
>
@@ -186,7 +186,7 @@ If you are visiting this page to complete an air gap installation, you must pre-

In the situation where you want to use a single node to run Rancher and to be able to add the same node to a cluster, you have to adjust the host ports mapped for the `rancher/rancher` container.

-If a node is added to a cluster, it deploys the nginx ingress controller which will use port 80 and 443. This will conflict with the default ports we advice to expose for the `rancher/rancher` container.
+If a node is added to a cluster, it deploys the nginx ingress controller, which will use ports 80 and 443. This will conflict with the default ports we advise to expose for the `rancher/rancher` container.

Please note that this setup is not recommended for production use, but can be convenient for development/demo purposes. An example of remapping the ports is shown below.
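A minimal sketch: start the `rancher/rancher` container on alternate host ports so that 80 and 443 remain free for the nginx ingress controller once the node joins a cluster (the exact ports chosen here are only an example):

```
docker run -d --restart=unless-stopped \
  -p 8080:80 -p 8443:443 \
  rancher/rancher
```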
@@ -24,7 +24,7 @@ Load Balancers have a couple of limitations you should be aware of:

- Load Balancers can only handle one IP address per service, which means if you run multiple services in your cluster, you must have a load balancer for each service. Running multiple load balancers can be expensive.

-- If you want to use a load balancer with a Hosted Kubernetes cluster (i.e., clusters hosted in GKE, EKS, or AKS), you must host your load balancer with the same cloud provider. Please review the compatibility tables regarding support for load balancers based on how you've provisioned your clusters:
+- If you want to use a load balancer with a Hosted Kubernetes cluster (i.e., clusters hosted in GKE, EKS, or AKS), the load balancer must be running within that cloud provider's infrastructure. Please review the compatibility tables regarding support for load balancers based on how you've provisioned your clusters:

- [Support for Layer-4 Load Balancing]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/load-balancers-and-ingress/load-balancers/#support-for-layer-4-load-balancing)
@@ -5,7 +5,7 @@ weight: 2520

Within Rancher, you can further divide projects into different [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), which are virtual clusters within a project backed by a physical cluster. Should you require another level of organization beyond projects and the `default` namespace, you can use multiple namespaces to isolate applications and resources.

-Although you assign resources at the project level so that each namespace can in the project can use them, you can override this inheritance by assigning resources explicitly to a namespace.
+Although you assign resources at the project level so that each namespace in the project can use them, you can override this inheritance by assigning resources explicitly to a namespace.

Resources that you can assign directly to namespaces include:
@@ -20,7 +20,7 @@ In the following diagram, a Kubernetes admin is trying to enforce a resource quo

<sup>Base Kubernetes: Unique Resource Quotas Being Applied to Each Namespace</sup>


-Resource quotas are a little different in Rancher. In Rancher, you apply a resource quota to the [project]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects), and then the quota propagates to each namespace, whereafter Kubernetes enforces you limits using the native version of resource quotas. If you want to change the quota for a specific namespace, you can [override it](#overriding-the-default-limit-for-a-namespace).
+Resource quotas are a little different in Rancher. In Rancher, you apply a resource quota to the [project]({{< baseurl >}}/rancher/v2.x/en/k8s-in-rancher/projects-and-namespaces/#projects), and then the quota propagates to each namespace, whereafter Kubernetes enforces your limits using the native version of resource quotas. If you want to change the quota for a specific namespace, you can [override it](#overriding-the-default-limit-for-a-namespace).
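Because the propagated limits end up as native Kubernetes ResourceQuota objects, you can inspect what was created in any namespace; a sketch using a hypothetical namespace name:

```
kubectl describe resourcequota -n my-namespace
```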
The resource quota includes two limits, which you set while creating or editing a project:
<a id="project-limits"></a>
@@ -58,16 +58,22 @@ The cluster state (`/var/lib/etcd`) contains wrong information to join the clust

### etcd cluster and connectivity checks

-If any of the commands respond with `Error: context deadline exceeded`, the etcd instance is unhealthy (either quorum is lost or the instance is not correctly joined in the cluster)
+The address where etcd is listening depends on the address configuration of the host etcd is running on. If an internal address is configured for the host etcd is running on, the endpoint for `etcdctl` needs to be specified explicitly. If any of the commands respond with `Error: context deadline exceeded`, the etcd instance is unhealthy (either quorum is lost or the instance is not correctly joined in the cluster).
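The command variants below depend on whether the host was configured with an internal address. One way to check which endpoint the `etcd` container is using is to print its `ETCDCTL_ENDPOINT` environment variable:

```
docker exec etcd printenv ETCDCTL_ENDPOINT
```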
* Check etcd members on all nodes

Output should contain all the nodes with the `etcd` role and the output should be identical on all nodes.

Command when no internal address is configured on the host:
```
docker exec etcd etcdctl member list
```

Command when internal address is configured on the host:
```
docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list"
```

Example output:
```
xxx, started, etcd-xxx, https://IP:2380, https://IP:2379,https://IP:4001

@@ -79,10 +85,16 @@ xxx, started, etcd-xxx, https://IP:2380, https://IP:2379,https://IP:4001

The values for `RAFT TERM` should be equal and `RAFT INDEX` should not be too far apart from each other.

Command when no internal address is configured on the host:
```
docker exec etcd etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") --write-out table
```

Command when internal address is configured on the host:
```
docker exec etcd etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") --write-out table
```

Example output:
```
+-----------------+------------------+---------+---------+-----------+-----------+------------+
@@ -96,10 +108,16 @@ Example output:

* Check endpoint health

Command when no internal address is configured on the host:
```
docker exec etcd etcdctl endpoint health --endpoints=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','")
```

Command when internal address is configured on the host:
```
docker exec etcd etcdctl endpoint health --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','")
```

Example output:
```
https://IP:2379 is healthy: successfully committed proposal: took = 2.113189ms

@@ -109,6 +127,7 @@ https://IP:2379 is healthy: successfully committed proposal: took = 2.451201ms

* Check connectivity on port TCP/2379

Command when no internal address is configured on the host:
```
for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5"); do
  echo "Validating connection to ${endpoint}/health";

@@ -116,8 +135,17 @@ for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5

done
```

Command when internal address is configured on the host:
```
for endpoint in $(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5"); do
  echo "Validating connection to ${endpoint}/health";
  curl -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/health";
done
```

If you are running on an operating system without `curl` (for example, RancherOS), you can use the following command which uses a Docker container to run the `curl` command.

Command when no internal address is configured on the host:
```
for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5"); do
  echo "Validating connection to ${endpoint}/health";
@@ -125,6 +153,14 @@ for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5

done
```

Command when internal address is configured on the host:
```
for endpoint in $(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5"); do
  echo "Validating connection to ${endpoint}/health";
  docker run --net=host -v /opt/rke/etc/kubernetes/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/health"
done
```

Example output:
```
Validating connection to https://IP:2379/health

@@ -137,6 +173,7 @@ Validating connection to https://IP:2379/health

* Check connectivity on port TCP/2380

Command when no internal address is configured on the host:
```
for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f4"); do
  echo "Validating connection to ${endpoint}/version";

@@ -144,8 +181,17 @@ for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f4

done
```

Command when internal address is configured on the host:
```
for endpoint in $(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f4"); do
  echo "Validating connection to ${endpoint}/version";
  curl -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/version";
done
```

If you are running on an operating system without `curl` (for example, RancherOS), you can use the following command which uses a Docker container to run the `curl` command.

Command when no internal address is configured on the host:
```
for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f4"); do
  echo "Validating connection to ${endpoint}/version";

@@ -153,6 +199,14 @@ for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f4

done
```

Command when internal address is configured on the host:
```
for endpoint in $(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f4"); do
  echo "Validating connection to ${endpoint}/version";
  docker run --net=host -v /opt/rke/etc/kubernetes/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/version"
done
```

Example output:
```
Validating connection to https://IP:2380/version
@@ -167,10 +221,16 @@ Validating connection to https://IP:2380/version

etcd will trigger alarms, for instance when it runs out of space.

Command when no internal address is configured on the host:
```
docker exec etcd etcdctl alarm list
```

Command when internal address is configured on the host:
```
docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT alarm list"
```

Example output when NOSPACE alarm is triggered:
```
memberID:x alarm:NOSPACE

@@ -186,11 +246,18 @@ Resolution:

* Compact the keyspace

Command when no internal address is configured on the host:
```
rev=$(docker exec etcd etcdctl endpoint status --write-out json | egrep -o '"revision":[0-9]*' | egrep -o '[0-9]*')
docker exec etcd etcdctl compact "$rev"
```

Command when internal address is configured on the host:
```
rev=$(docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT endpoint status --write-out json | egrep -o '\"revision\":[0-9]*' | egrep -o '[0-9]*'")
docker exec etcd sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT compact \"$rev\""
```

Example output:
```
compacted revision xxx

@@ -198,10 +265,16 @@ compacted revision xxx

* Defrag all etcd members

Command when no internal address is configured on the host:
```
docker exec etcd etcdctl defrag --endpoints=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','")
```

Command when internal address is configured on the host:
```
docker exec etcd sh -c "etcdctl defrag --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','")"
```

Example output:
```
Finished defragmenting etcd member[https://IP:2379]
@@ -211,10 +284,16 @@ Finished defragmenting etcd member[https://IP:2379]

* Check endpoint status

Command when no internal address is configured on the host:
```
docker exec etcd etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") --write-out table
```

Command when internal address is configured on the host:
```
docker exec etcd sh -c "etcdctl endpoint status --endpoints=$(docker exec etcd /bin/sh -c "etcdctl --endpoints=\$ETCDCTL_ENDPOINT member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ','") --write-out table"
```

Example output:
```
+-----------------+------------------+---------+---------+-----------+-----------+------------+

@@ -226,6 +305,32 @@ Example output:

+-----------------+------------------+---------+---------+-----------+-----------+------------+
```

### Log level

The log level of etcd can be changed dynamically via the API. You can configure debug logging using the commands below.

Command when no internal address is configured on the host:
```
curl -XPUT -d '{"Level":"DEBUG"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) https://localhost:2379/config/local/log
```

Command when internal address is configured on the host:
```
curl -XPUT -d '{"Level":"DEBUG"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINT)/config/local/log
```

To reset the log level back to the default (`INFO`), you can use the following command.

Command when no internal address is configured on the host:
```
curl -XPUT -d '{"Level":"INFO"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) https://localhost:2379/config/local/log
```

Command when internal address is configured on the host:
```
curl -XPUT -d '{"Level":"INFO"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINT)/config/local/log
```

## controlplane

This section applies to nodes with the `controlplane` role.
@@ -20,28 +20,30 @@ Run the command below and check the following:

```
-kubectl get nodes
+kubectl get nodes -o wide
```

Example output:

```
-NAME             STATUS   ROLES          AGE   VERSION   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
-etcd-0           Ready    etcd           2m    v1.11.5   <none>        Ubuntu 16.04.5 LTS   4.4.0-138-generic   docker://17.3.2
-etcd-1           Ready    etcd           2m    v1.11.5   <none>        Ubuntu 16.04.5 LTS   4.4.0-138-generic   docker://17.3.2
-etcd-2           Ready    etcd           2m    v1.11.5   <none>        Ubuntu 16.04.5 LTS   4.4.0-138-generic   docker://17.3.2
-controlplane-0   Ready    controlplane   2m    v1.11.5   <none>        Ubuntu 16.04.5 LTS   4.4.0-138-generic   docker://17.3.2
-controlplane-1   Ready    controlplane   1m    v1.11.5   <none>        Ubuntu 16.04.5 LTS   4.4.0-138-generic   docker://17.3.2
-worker-0         Ready    worker         2m    v1.11.5   <none>        Ubuntu 16.04.5 LTS   4.4.0-138-generic   docker://17.3.2
-worker-1         Ready    worker         2m    v1.11.5   <none>        Ubuntu 16.04.5 LTS   4.4.0-138-generic   docker://17.3.2
+NAME             STATUS   ROLES          AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
+controlplane-0   Ready    controlplane   31m   v1.13.5   138.68.188.91   <none>        Ubuntu 18.04.2 LTS   4.15.0-47-generic   docker://18.9.5
+etcd-0           Ready    etcd           31m   v1.13.5   138.68.180.33   <none>        Ubuntu 18.04.2 LTS   4.15.0-47-generic   docker://18.9.5
+worker-0         Ready    worker         30m   v1.13.5   139.59.179.88   <none>        Ubuntu 18.04.2 LTS   4.15.0-47-generic   docker://18.9.5
```

#### Get node conditions

Run the command below to list nodes with [Node Conditions](https://kubernetes.io/docs/concepts/architecture/nodes/#condition)

```
kubectl get nodes -o go-template='{{range .items}}{{$node := .}}{{range .status.conditions}}{{$node.metadata.name}}{{": "}}{{.type}}{{":"}}{{.status}}{{"\n"}}{{end}}{{end}}'
```

Run the command below to list nodes with [Node Conditions](https://kubernetes.io/docs/concepts/architecture/nodes/#condition) that are active and could prevent normal operation.

```
-kubectl get nodes -o go-template='{{range .items}}{{$node := .}}{{range .status.conditions}}{{if ne .type "Ready"}}{{if eq .status "True"}}{{$node.metadata.name}}{{": "}}{{.type}}{{":"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}{{end}}'
+kubectl get nodes -o go-template='{{range .items}}{{$node := .}}{{range .status.conditions}}{{if ne .type "Ready"}}{{if eq .status "True"}}{{$node.metadata.name}}{{": "}}{{.type}}{{":"}}{{.status}}{{"\n"}}{{end}}{{else}}{{if ne .status "True"}}{{$node.metadata.name}}{{": "}}{{.type}}{{": "}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}{{end}}'
```

Example output:
@@ -68,3 +68,13 @@ When accessing your configured Rancher FQDN does not show you the UI, check the

```
kubectl -n ingress-nginx logs -l app=ingress-nginx
```

### Leader election

The leader is determined by a leader election process. After the leader has been determined, the leader (`holderIdentity`) is saved in the `cattle-controllers` ConfigMap (in this example, `rancher-7dbd7875f7-qbj5k`).

```
kubectl -n kube-system get configmap cattle-controllers -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
{"holderIdentity":"rancher-7dbd7875f7-qbj5k","leaseDurationSeconds":45,"acquireTime":"2019-04-04T11:53:12Z","renewTime":"2019-04-04T12:24:08Z","leaderTransitions":0}
```
@@ -122,5 +122,3 @@ $ echo $SSH_AUTH_SOCK

### Add-ons Job Timeout

You can define [add-ons]({{< baseurl >}}/rke/latest/en/config-options/add-ons/) to be deployed after the Kubernetes cluster comes up, which uses Kubernetes [jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). RKE will stop attempting to retrieve the job status after the timeout, which is in seconds. The default timeout value is `30` seconds.

```yaml
@@ -65,7 +65,7 @@ services:

### Kubernetes Controller Manager Options

-RKE support the following options for the `kube-controller` service:
+RKE supports the following options for the `kube-controller` service:

- **Cluster CIDR** (`cluster_cidr`) - The CIDR pool used to assign IP addresses to pods in the cluster. By default, each node in the cluster is assigned a `/24` network from this pool for pod IP assignments. The default value for this option is `10.42.0.0/16`.
- **Service Cluster IP Range** (`service_cluster_ip_range`) - This is the range of virtual IP addresses that will be assigned to services created on Kubernetes. By default, the service cluster IP range is `10.43.0.0/16`. If you change this value, then it must also be set with the same value on the Kubernetes API server (`kube-api`).
@@ -6,6 +6,7 @@ weight: 50

RKE is a fast, versatile Kubernetes installer that you can use to install Kubernetes on your Linux hosts. You can get started in a couple of quick and easy steps:

1. [Download the RKE Binary](#download-the-rke-binary)
1. [Alternative RKE MacOS X Install - Homebrew](#alternative-rke-macos-x-install---homebrew)
1. [Prepare the Nodes for the Kubernetes Cluster](#prepare-the-nodes-for-the-kubernetes-cluster)
1. [Creating the Cluster Configuration File](#creating-the-cluster-configuration-file)
1. [Deploying Kubernetes with RKE](#deploying-kubernetes-with-rke)

@@ -49,6 +50,25 @@ RKE is a fast, versatile Kubernetes installer that you can use to install Kubern

$ rke --version
```

### Alternative RKE MacOS X Install - Homebrew

RKE can also be installed and updated using Homebrew, a package manager for MacOS X.

1. Install Homebrew. See https://brew.sh/ for instructions.

2. Using `brew`, install RKE by running the following command in a Terminal window:

```
$ brew install rke
```

If you have already installed RKE using `brew`, you can upgrade RKE by running:

```
$ brew upgrade rke
```

## Prepare the Nodes for the Kubernetes cluster

The Kubernetes cluster components are launched using Docker on a Linux distro. You can use any Linux you want, as long as you can install Docker on it.
@@ -0,0 +1,74 @@

#Requires -Version 5.0

param (
    [parameter(Mandatory = $false,HelpMessage="Build the build & dev images instead of pulling from the registry")] [switch]$buildBuild,
    [parameter(Mandatory = $false,HelpMessage="Build the dev image instead of pulling from the registry")] [switch]$buildDev,
    [parameter(Mandatory = $false,HelpMessage="Port to listen on")] [string]$port,
    [parameter(Mandatory = $false,HelpMessage="Skip pulling build/dev images")] [switch]$skipPull,
    [parameter(Mandatory = $false,HelpMessage="Use DIR for the theme, to develop the theme at the same time")] [string]$theme,
    [parameter(Mandatory = $false,HelpMessage="Upload/push the build image after building")] [switch]$upload
)

$DefaultPort = 9001
$ListenPort = $DefaultPort
$Image = "rancher/docs"
$Tag = "dev"
$twitterConsumer = $env:TWITTER_CONSUMER
$twitterSecret = $env:TWITTER_SECRET

# Run from the repository root, even when the script is invoked from the scripts directory
$dirPath = Split-Path -Parent $MyInvocation.MyCommand.Definition
$baseDirPath = Get-Location
if ($dirPath -eq $baseDirPath) {
    $baseDirPath = (Resolve-Path "$dirPath\..").Path
}
pushd $baseDirPath

if ($port) {
    $ListenPort = $port
}

# Optionally mount a local theme checkout into the container
$ThemeVolume = ""
if ($theme) {
    Write-Host "Using theme from $theme"
    $ThemeVolume = "-v ${baseDirPath}/${theme}:/run/node_modules/rancher-website-theme"
}

# Build or pull the build image
if ($buildBuild) {
    Write-Host "Building ${Image}:build"
    docker build --no-cache -f Dockerfile.build --build-arg TWITTER_CONSUMER=$twitterConsumer --build-arg TWITTER_SECRET=$twitterSecret -t ${Image}:build .
    if ($upload) {
        docker push ${Image}:build
    }
    $buildDev = $true
} elseif ($skipPull) {
    Write-Host "Skipping pull of ${Image}:build"
} else {
    Write-Host "Pulling ${Image}:build"
    docker pull ${Image}:build
}

# Build or pull the dev image
if ($buildDev) {
    $Tag = "local"
    Write-Host "Building ${Image}:${Tag}"
    docker build -f Dockerfile.dev -t ${Image}:${Tag} .
} elseif ($skipPull) {
    Write-Host "Skipping pull of ${Image}:${Tag}"
} else {
    Write-Host "Pulling ${Image}:${Tag}"
    docker pull ${Image}:${Tag}
}

Write-Host "Starting server on http://localhost:${ListenPort}"
docker run --rm -p ${ListenPort}:${ListenPort} -it `
    -v ${baseDirPath}/archetypes:/run/archetypes `
    -v ${baseDirPath}/assets:/run/assets `
    -v ${baseDirPath}/content:/run/content `
    -v ${baseDirPath}/data:/run/data `
    -v ${baseDirPath}/layouts:/run/layouts `
    -v ${baseDirPath}/scripts:/run/scripts `
    -v ${baseDirPath}/static:/run/static `
    -v ${baseDirPath}/.git:/run/.git `
    -v ${baseDirPath}/config.toml:/run/config.toml `
    ${ThemeVolume} ${Image}:${Tag} --port=${ListenPort}

popd