mirror of
https://github.com/rancher/rancher-docs.git
synced 2026-04-16 19:35:39 +00:00
Remove letsEncrypt notes, add cluster pod check before starting on helm
@@ -50,7 +50,7 @@ helm install rancher-stable/rancher --name rancher --namespace cattle-system \
#### LetsEncrypt

Use [LetsEncrypt](https://letsencrypt.org/)'s free service to issue trusted SSL certs. This configuration uses HTTP validation, so the Load Balancer must have a public DNS record and be accessible from the internet.

Set `hostname`, `ingress.tls.source=letsEncrypt` and the LetsEncrypt options.

@@ -61,8 +61,6 @@ helm install rancher-stable/rancher --name rancher --namespace cattle-system \
--set letsEncrypt.email=me@example.org
```

> LetsEncrypt ProTip: The default `production` environment only allows you to register a name 5 times per week. If you're rebuilding a bunch of times, use `--set letsEncrypt.environment=staging` until you're confident your config is right.

#### Certificates from Files (Kubernetes Secret)

Create Kubernetes Secrets from your own certificates for Rancher to use.

@@ -56,7 +56,7 @@ You can copy this file to `$HOME/.kube/config` or if you are working with multip
export KUBECONFIG=$(pwd)/kube_config_rancher-cluster.yml
```

Test your connectivity with `kubectl` and see if you can get the list of nodes back.

```
kubectl get nodes
NAME              STATUS    ROLES                      AGE       VERSION
165.227.127.226   Ready     controlplane,etcd,worker   11m       v1.10.1
```

### Check the health of your cluster pods

Check that all the required pods and containers are healthy before you continue.

* Pods are in `Running` or `Completed` state.
* `READY` column shows all the containers are running (i.e. `3/3`) for pods with `STATUS` `Running`.
* Pods with `STATUS` `Completed` are run-once Jobs. For these pods `READY` should be `0/1`.

```
kubectl get pods --all-namespaces

NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-tnsn4            1/1     Running     0          30s
ingress-nginx   nginx-ingress-controller-tw2ht            1/1     Running     0          30s
ingress-nginx   nginx-ingress-controller-v874b            1/1     Running     0          30s
kube-system     canal-jp4hz                               3/3     Running     0          30s
kube-system     canal-z2hg8                               3/3     Running     0          30s
kube-system     canal-z6kpw                               3/3     Running     0          30s
kube-system     kube-dns-7588d5b5f5-sf4vh                 3/3     Running     0          30s
kube-system     kube-dns-autoscaler-5db9bbb766-jz2k6      1/1     Running     0          30s
kube-system     metrics-server-97bc649d5-4rl2q            1/1     Running     0          30s
kube-system     rke-ingress-controller-deploy-job-bhzgm   0/1     Completed   0          30s
kube-system     rke-kubedns-addon-deploy-job-gl7t4        0/1     Completed   0          30s
kube-system     rke-metrics-addon-deploy-job-7ljkc        0/1     Completed   0          30s
kube-system     rke-network-plugin-deploy-job-6pbgj       0/1     Completed   0          30s
```
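The readiness rules above can also be scripted as a quick sanity check. Here is a minimal sketch assuming a POSIX shell with `awk`; the `check_pods` helper name is ours, not part of the Rancher docs. In practice you would pipe real `kubectl get pods --all-namespaces --no-headers` output into it.

```shell
# Sketch: flag any pod that is neither fully-ready Running nor Completed.
# check_pods is a hypothetical helper; feed it the output of
#   kubectl get pods --all-namespaces --no-headers
check_pods() {
  awk '{
    split($3, ready, "/")
    if ($4 == "Running" && ready[1] != ready[2]) { print "NOT READY: " $2; bad = 1 }
    else if ($4 != "Running" && $4 != "Completed") { print "BAD STATUS: " $2; bad = 1 }
  } END { exit bad }'
}

# Example run against a captured snippet of output:
check_pods <<'EOF'
kube-system   canal-jp4hz                               3/3   Running     0   30s
kube-system   rke-ingress-controller-deploy-job-bhzgm   0/1   Completed   0   30s
EOF
echo "all healthy: $?"
```

A non-zero exit (and a `NOT READY` or `BAD STATUS` line) means you should wait or troubleshoot before moving on to the Helm install.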
### Save your files
Save a copy of the `kube_config_rancher-cluster.yml` and `rancher-cluster.yml` files. You will need these files to maintain and upgrade your Rancher instance.
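One way to keep those copies is a small dated-backup script. This is a sketch; the `backup_cluster_files` helper and the destination path are our choices, not prescribed by the docs, and you should also store a copy off-host.

```shell
# Sketch: copy the two cluster state files into a dated backup directory.
# Run from the directory that holds the files; the destination is an example.
backup_cluster_files() {
  dest="$1"
  mkdir -p "$dest"
  for f in kube_config_rancher-cluster.yml rancher-cluster.yml; do
    if [ -f "$f" ]; then
      cp "$f" "$dest/"
    fi
  done
}

backup_cluster_files "$HOME/rancher-backups/$(date +%Y-%m-%d)"
```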
@@ -3,6 +3,16 @@ title: Troubleshooting

weight: 276
---

#### canal Pods show READY 2/3

The most common cause of this issue is that port 8472/UDP is not open between the nodes. Check your local firewall, network routing, or security groups.

Once the network issue is resolved, the `canal` pods should time out and restart to establish their connections.

#### nginx-ingress-controller Pods show RESTARTS

The most common cause of this issue is that the `canal` pods have failed to establish the overlay network. See [canal Pods show READY `2/3`](#canal-pods-show-ready-2-3) for troubleshooting.

#### Failed to set up SSH tunneling for host [xxx.xxx.xxx.xxx]: Can't retrieve Docker Info

##### Failed to dial to /var/run/docker.sock: ssh: rejected: administratively prohibited (open failed)