mirror of
https://github.com/rancher/rancher-docs.git
synced 2026-05-14 17:13:33 +00:00
@@ -49,7 +49,7 @@ $ reboot

### Use RANCHER_BOOT partition

-When you only use the RRACHER_STATE partition, the bootloader will be installed in the `/boot` directory.
+When you only use the RANCHER_STATE partition, the bootloader will be installed in the `/boot` directory.

```
$ system-docker run -it --rm -v /:/host alpine
```
@@ -72,6 +72,9 @@ You need add `rancher.autologin=tty1` to the end, then press `<Enter>`. If all g

We need to mount the root disk in the recovery console and delete some data:

```
# If no disk devices appear under /dev/, try this command first:
$ ros udev-settle

$ mkdir /mnt/root-disk
$ mount /dev/sda1 /mnt/root-disk
```
@@ -93,7 +93,7 @@ Key | Value | Default | Description

`extra_args` | List of Strings | `[]` | Arbitrary daemon arguments, appended to the generated command
`environment` | List of Strings (optional) | `[]` |

-_Available as of v1.4_
+_Available as of v1.4.x_

The docker-sys bridge can be configured with system-docker args; the change takes effect after a reboot.
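For illustration, both keys from the table above can be set via cloud-config; the values below are hypothetical, only the key names come from the table:

```
#cloud-config
rancher:
  docker:
    extra_args: [--log-opt, max-size=25m]
    environment:
      - HTTP_PROXY=http://proxy.example.com:3128
```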
@@ -101,6 +101,18 @@ The docker-sys bridge can be configured with system-docker args, it will take ef

```
$ ros config set rancher.system_docker.bip 172.18.43.1/16
```

_Available as of v1.4.x_

The default path of system-docker logs is `/var/log/system-docker.log`. If you want to write the system-docker logs to a separate partition, e.g. the [RANCHER_OEM partition]({{< baseurl >}}/os/v1.x/en/about/custom-partition-layout/#use-rancher-oem-partition), you can set `rancher.defaults.system_docker_logs`:

```
#cloud-config
rancher:
  defaults:
    system_docker_logs: /usr/share/ros/oem/system-docker.log
```

### Using a pull through registry mirror

There are three Docker engines that can be configured to use the pull-through Docker Hub registry mirror cache:
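As a hedged sketch of such a configuration, each engine takes its own `registry_mirror` key in cloud-config (the mirror URL below is a placeholder):

```
#cloud-config
rancher:
  bootstrap_docker:
    registry_mirror: "http://<registry-ip>:5000"
  docker:
    registry_mirror: "http://<registry-ip>:5000"
  system_docker:
    registry_mirror: "http://<registry-ip>:5000"
```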
@@ -10,18 +10,3 @@ You must boot with at least **1280MB** of memory. If you boot with the ISO, you

### Install to Disk

After you boot RancherOS from the ISO, you can follow the instructions [here]({{< baseurl >}}/os/v1.x/en/installation/running-rancheros/server/install-to-disk/) to install RancherOS to a hard disk.

### Persisting State

If you are running from the ISO, RancherOS runs from memory. All downloaded Docker images, for example, are stored in a ramdisk and will be lost after the server is rebooted. You can create a file system with the label `RANCHER_STATE` to instruct RancherOS to use that partition to store state. Suppose you have a disk partition on the server called `/dev/sda`; the following command formats that partition and labels it `RANCHER_STATE`:

```
$ sudo mkfs.ext4 -L RANCHER_STATE /dev/sda
# Reboot afterwards in order for the changes to start being saved.
$ sudo reboot
```

After you reboot, RancherOS will use `/dev/sda` as the state partition.

> **Note:** If you are installing RancherOS to disk, you do not need to run this command.
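If you want to rehearse the labeling step without touching a real disk, the same tools work on a plain file image (a sketch, not from the original docs; `-F` lets `mkfs.ext4` operate on a regular file):

```
# Create a small file image and format it with the RANCHER_STATE label.
truncate -s 16M disk.img
mkfs.ext4 -q -F -L RANCHER_STATE disk.img
# Read the label back to confirm it was applied.
e2label disk.img
```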
@@ -7,7 +7,7 @@ For production environments, we recommend installing Rancher in a high-availabil

This procedure walks you through setting up a 3-node cluster with RKE and installing the Rancher chart with the Helm package manager.

-> **Important:** For the best performance, we recommend this Kubernetes cluster to be dedicated only to run Rancher.
+> **Important:** For the best performance, we recommend dedicating this Kubernetes cluster to running Rancher only. After the Kubernetes cluster that runs Rancher is set up, you can [create or import clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-in-rancher) for running your workloads.

## Recommended Architecture
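For reference, a minimal `cluster.yml` for such a 3-node RKE cluster might look like the following sketch (node addresses and SSH user are placeholders; each node carries all three roles):

```
nodes:
  - address: 10.0.0.1      # placeholder IP
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 10.0.0.2
    user: ubuntu
    role: [controlplane, worker, etcd]
  - address: 10.0.0.3
    user: ubuntu
    role: [controlplane, worker, etcd]
```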
@@ -25,13 +25,13 @@ kubectl create clusterrolebinding tiller \

helm init --service-account tiller

-# For chinese users
-# The latest version of tiller images queries addresses:
-# https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085
-
-helm init --service-account tiller \
-  --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:<tag>
+# Users in China: You will need to specify a specific tiller-image in order to initialize tiller.
+# The list of tiller image tags is available here: https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085.
+# When initializing tiller, you'll need to pass in --tiller-image:
+
+helm init --service-account tiller \
+  --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:<tag>
```

> **Note:** This `tiller` install has full cluster access, which should be acceptable if the cluster is dedicated to Rancher server. Check out the [helm docs](https://docs.helm.sh/using_helm/#role-based-access-control) for restricting `tiller` access to suit your security requirements.
@@ -37,6 +37,7 @@ There are three recommended options for the source of the certificate.

Rancher relies on [cert-manager](https://github.com/kubernetes/charts/tree/master/stable/cert-manager) from the official Kubernetes Helm chart repository to issue certificates from Rancher's own generated CA or to request Let's Encrypt certificates.

Install `cert-manager` from the Kubernetes Helm chart repository.
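With the Helm v2 tooling this guide assumes, the install typically looks like the following (chart name as published in the `stable` repository at the time; verify it against your repository before running):

```
helm install stable/cert-manager \
  --name cert-manager \
  --namespace kube-system
```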
@@ -88,8 +89,10 @@ deployment "rancher" successfully rolled out

This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate, as Let's Encrypt is a trusted CA. This configuration uses HTTP validation (`HTTP-01`), so the load balancer must have a public DNS record and be accessible from the internet.

- Replace `<CHART_REPO>` with the repository that you configured in [Add the Helm Chart Repository](#add-the-helm-chart-repository) (i.e. `latest` or `stable`).

- Set `hostname` to the public DNS record, set `ingress.tls.source` to `letsEncrypt`, and set `letsEncrypt.email` to the email address used for communication about your certificate (for example, expiry notices).

> **Using Air Gap?** [Set the `rancherImage` option]({{< baseurl >}}/rancher/v2.x/en/installation/air-gap-installation/install-rancher/#install-rancher-using-private-registry) in your command, pointing toward your private registry.
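Combining those settings, the install command takes roughly this shape (hostname and email are placeholders; `<CHART_REPO>` as described above; flags follow the Helm v2 syntax used elsewhere in this guide):

```
helm install <CHART_REPO>/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=me@example.org
```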
@@ -113,6 +116,7 @@ deployment "rancher" successfully rolled out

Create Kubernetes secrets from your own certificates for Rancher to use.

> **Note:** The `Common Name` or a `Subject Alternative Names` entry in the server certificate must match the `hostname` option, or the ingress controller will fail to configure correctly. Although an entry in the `Subject Alternative Names` is technically required, having a matching `Common Name` maximizes compatibility with older browsers/applications. If you want to check whether your certificates are correct, see [How do I check Common Name and Subject Alternative Names in my server certificate?]({{< baseurl >}}/rancher/v2.x/en/faq/technical/#how-do-i-check-common-name-and-subject-alternative-names-in-my-server-certificate)
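As a self-contained way to see what that FAQ checks, you can generate a throwaway certificate and print its `Common Name` and `Subject Alternative Names` with `openssl` (the hostname is a placeholder; requires OpenSSL 1.1.1+ for `-addext`):

```
# Create a test certificate whose CN and SAN both match the hostname.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=rancher.my.org" \
  -addext "subjectAltName=DNS:rancher.my.org"
# Print the subject (CN) and the SAN extension to verify the match.
openssl x509 -in cert.pem -noout -subject -ext subjectAltName
```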

- Replace `<CHART_REPO>` with the repository that you configured in [Add the Helm Chart Repository](#add-the-helm-chart-repository) (i.e. `latest` or `stable`).