Add v2.14 preview docs (#2212)

---
title: Kubernetes Components
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/troubleshooting/kubernetes-components"/>
</head>

The commands and steps listed in this section apply to the core Kubernetes components on [Rancher Launched Kubernetes](../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) clusters.

This section includes troubleshooting tips in the following categories:

- [Troubleshooting etcd Nodes](troubleshooting-etcd-nodes.md)
- [Troubleshooting Controlplane Nodes](troubleshooting-controlplane-nodes.md)
- [Troubleshooting nginx-proxy Nodes](troubleshooting-nginx-proxy.md)
- [Troubleshooting Worker Nodes and Generic Components](troubleshooting-worker-nodes-and-generic-components.md)

## Kubernetes Component Diagram

<br/>
<sup>Lines show the traffic flow between components. Colors are used purely as a visual aid.</sup>

---
title: Troubleshooting Controlplane Nodes
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/troubleshooting/kubernetes-components/troubleshooting-controlplane-nodes"/>
</head>

This section applies to nodes with the `controlplane` role.

## Check if the Controlplane Containers are Running

There are three specific containers launched on nodes with the `controlplane` role:

* `kube-apiserver`
* `kube-controller-manager`
* `kube-scheduler`

The containers should have status **Up**. The duration shown after **Up** is the time the container has been running.

```
docker ps -a -f=name='kube-apiserver|kube-controller-manager|kube-scheduler'
```

Example output:
```
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS              PORTS               NAMES
26c7159abbcc        rancher/hyperkube:v1.11.5-rancher1   "/opt/rke-tools/en..."   3 hours ago         Up 3 hours                              kube-apiserver
f3d287ca4549        rancher/hyperkube:v1.11.5-rancher1   "/opt/rke-tools/en..."   3 hours ago         Up 3 hours                              kube-scheduler
bdf3898b8063        rancher/hyperkube:v1.11.5-rancher1   "/opt/rke-tools/en..."   3 hours ago         Up 3 hours                              kube-controller-manager
```

## Controlplane Container Logging

:::note

If you added multiple nodes with the `controlplane` role, both `kube-controller-manager` and `kube-scheduler` use a leader election process to determine the leader. Only the current leader logs the actions it performs. See [Kubernetes leader election](../other-troubleshooting-tips/kubernetes-resources.md#kubernetes-leader-election) for how to retrieve the current leader.

:::
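
As a quick check, you can also read the holder identity straight from the election locks (a sketch, assuming a Kubernetes version where component leader election is recorded as `coordination.k8s.io` Lease objects in `kube-system`):

```
kubectl -n kube-system get lease kube-controller-manager kube-scheduler
```

The `HOLDER` column indicates which instance currently holds the lock and is therefore the one producing logs.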

The container logs can contain information on what the problem could be.

```
docker logs kube-apiserver
docker logs kube-controller-manager
docker logs kube-scheduler
```
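
To follow the logs live while reproducing an issue, the standard Docker log flags apply:

```
docker logs -f --tail=50 kube-apiserver
```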

## RKE2 Server Logging

If Rancher provisions an RKE2 cluster that can't communicate with Rancher, you can run this command on a server node in the downstream cluster to get the RKE2 server logs:

```
journalctl -u rke2-server -f
```
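
On nodes that run only the RKE2 agent (worker nodes), the service unit is `rke2-agent` instead:

```
journalctl -u rke2-agent -f
```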

---
title: Troubleshooting etcd Nodes
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/troubleshooting/kubernetes-components/troubleshooting-etcd-nodes"/>
</head>

This section contains commands and tips for troubleshooting nodes with the `etcd` role.

## Checking if the etcd Container is Running

The container for etcd should have status **Up**. The duration shown after **Up** is the time the container has been running.

```
docker ps -a -f=name=etcd$
```

Example output:
```
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
d26adbd23643        rancher/mirrored-coreos-etcd:v3.5.7   "/usr/local/bin/etcd…"   30 minutes ago      Up 30 minutes                           etcd
```

## etcd Container Logging

The container logs can contain information on what the problem could be.

```
docker logs etcd
```

| Log | Explanation |
|-----|-------------|
| `health check for peer xxx could not connect: dial tcp IP:2380: getsockopt: connection refused` | A connection to the address shown on port 2380 cannot be established. Check if the etcd container is running on the host with the address shown. |
| `xxx is starting a new election at term x` | The etcd cluster has lost its quorum and is trying to establish a new leader. This can happen when the majority of the nodes running etcd go down or become unreachable. |
| `connection error: desc = "transport: Error while dialing dial tcp 0.0.0.0:2379: i/o timeout"; Reconnecting to {0.0.0.0:2379 0 <nil>}` | The host firewall is preventing network communication. |
| `rafthttp: request cluster ID mismatch` | The node with the etcd instance logging `rafthttp: request cluster ID mismatch` is trying to join a cluster that has already been formed with another peer. The node should be removed from the cluster and re-added. |
| `rafthttp: failed to find member` | The cluster state (`/var/lib/etcd`) contains incorrect information for joining the cluster. The node should be removed from the cluster, the state directory should be cleaned, and the node should be re-added. |

## etcd Cluster and Connectivity Checks

The address where etcd listens depends on the address configuration of the host etcd is running on. If an internal address is configured for that host, the endpoint for `etcdctl` needs to be specified explicitly. If any of the commands respond with `Error: context deadline exceeded`, the etcd instance is unhealthy: either quorum is lost or the instance is not correctly joined in the cluster.
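
For example, to target an explicit endpoint (a sketch; `<internal-IP>` is a placeholder for the host's configured internal address):

```
docker exec etcd etcdctl --endpoints=https://<internal-IP>:2379 endpoint health
```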

### Check etcd Members on all Nodes

The output should list all the nodes with the `etcd` role, and it should be identical on all nodes.

Command:
```
docker exec etcd etcdctl member list
```
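
Illustrative output (a sketch; IDs, names, and addresses will differ per cluster). The comma-separated fields are member ID, status, name, peer URLs, client URLs, and the learner flag:

```
1d2e3f4a5b6c7d8e, started, etcd-node1, https://IP:2380, https://IP:2379, false
5feed52d940ce4cf, started, etcd-node2, https://IP:2380, https://IP:2379, false
db6b3bdb559a848d, started, etcd-node3, https://IP:2380, https://IP:2379, false
```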

### Check Endpoint Status

The values for `RAFT TERM` should be equal, and the values for `RAFT INDEX` should not be too far apart from each other.

Command:
```
docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint status --write-out table
```

Example output:
```
+-----------------+------------------+---------+---------+-----------+-----------+------------+
|    ENDPOINT     |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+-----------------+------------------+---------+---------+-----------+-----------+------------+
| https://IP:2379 | 333ef673fc4add56 | 3.5.7   | 24 MB   | false     | 72        | 66887      |
| https://IP:2379 | 5feed52d940ce4cf | 3.5.7   | 24 MB   | true      | 72        | 66887      |
| https://IP:2379 | db6b3bdb559a848d | 3.5.7   | 25 MB   | false     | 72        | 66887      |
+-----------------+------------------+---------+---------+-----------+-----------+------------+
```

### Check Endpoint Health

Command:
```
docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint health
```

Example output:
```
https://IP:2379 is healthy: successfully committed proposal: took = 2.113189ms
https://IP:2379 is healthy: successfully committed proposal: took = 2.649963ms
https://IP:2379 is healthy: successfully committed proposal: took = 2.451201ms
```

### Check Connectivity on Port TCP/2379

Command:
```
for endpoint in $(docker exec etcd etcdctl member list | cut -d, -f5); do
  echo "Validating connection to ${endpoint}/health"
  docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -w "\n" --cacert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CACERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --cert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --key $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_KEY" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) "${endpoint}/health"
done
```

Example output:
```
Validating connection to https://IP:2379/health
{"health": "true"}
Validating connection to https://IP:2379/health
{"health": "true"}
Validating connection to https://IP:2379/health
{"health": "true"}
```

### Check Connectivity on Port TCP/2380

Command:
```
for endpoint in $(docker exec etcd etcdctl member list | cut -d, -f4); do
  echo "Validating connection to ${endpoint}/version"
  docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl --http1.1 -s -w "\n" --cacert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CACERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --cert $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_CERT" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) --key $(docker inspect -f '{{range $index, $value := .Config.Env}}{{if eq (index (split $value "=") 0) "ETCDCTL_KEY" }}{{range $i, $part := (split $value "=")}}{{if gt $i 1}}{{print "="}}{{end}}{{if gt $i 0}}{{print $part}}{{end}}{{end}}{{end}}{{end}}' etcd) "${endpoint}/version"
done
```

Example output:
```
Validating connection to https://IP:2380/version
{"etcdserver":"3.5.7","etcdcluster":"3.5.0"}
Validating connection to https://IP:2380/version
{"etcdserver":"3.5.7","etcdcluster":"3.5.0"}
Validating connection to https://IP:2380/version
{"etcdserver":"3.5.7","etcdcluster":"3.5.0"}
```

## etcd Alarms

etcd will trigger alarms, for instance when it runs out of space.

Command:
```
docker exec etcd etcdctl alarm list
```

Example output when the NOSPACE alarm is triggered:
```
memberID:x alarm:NOSPACE
memberID:x alarm:NOSPACE
memberID:x alarm:NOSPACE
```

## etcd Space Errors

Related error messages are `etcdserver: mvcc: database space exceeded` or `applying raft message exceeded backend quota`. The `NOSPACE` alarm will be triggered.

Resolutions:

- [Compact the Keyspace](#compact-the-keyspace)
- [Defrag All etcd Members](#defrag-all-etcd-members)
- [Check Endpoint Status](#check-endpoint-status)
- [Disarm Alarm](#disarm-alarm)

### Compact the Keyspace

Command:
```
rev=$(docker exec etcd etcdctl endpoint status --write-out json | egrep -o '"revision":[0-9]*' | egrep -o '[0-9]*')
docker exec etcd etcdctl compact "$rev"
```

Example output:
```
compacted revision xxx
```

### Defrag All etcd Members

Command:
```
docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl defrag
```

Example output:
```
Finished defragmenting etcd member[https://IP:2379]
Finished defragmenting etcd member[https://IP:2379]
Finished defragmenting etcd member[https://IP:2379]
```

### Check Endpoint Status

Command:
```
docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') etcd etcdctl endpoint status --write-out table
```

Example output:
```
+-----------------+------------------+---------+---------+-----------+-----------+------------+
|    ENDPOINT     |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+-----------------+------------------+---------+---------+-----------+-----------+------------+
| https://IP:2379 | e973e4419737125  | 3.5.7   | 553 kB  | false     | 32        | 2449410    |
| https://IP:2379 | 4a509c997b26c206 | 3.5.7   | 553 kB  | false     | 32        | 2449410    |
| https://IP:2379 | b217e736575e9dd3 | 3.5.7   | 553 kB  | true      | 32        | 2449410    |
+-----------------+------------------+---------+---------+-----------+-----------+------------+
```

### Disarm Alarm

After verifying that the DB size went down after compaction and defragmenting, the alarm needs to be disarmed for etcd to allow writes again.

Command:
```
docker exec etcd etcdctl alarm list
docker exec etcd etcdctl alarm disarm
docker exec etcd etcdctl alarm list
```

Example output:
```
docker exec etcd etcdctl alarm list
memberID:x alarm:NOSPACE
memberID:x alarm:NOSPACE
memberID:x alarm:NOSPACE
docker exec etcd etcdctl alarm disarm
docker exec etcd etcdctl alarm list
```

## Configure Log Level

:::note

You can no longer dynamically change the log level in etcd v3.5 or later.

:::

### etcd v3.5 And Later

To configure the log level for etcd, edit the cluster YAML:

```
services:
  etcd:
    extra_args:
      log-level: "debug"
```

### etcd v3.4 And Earlier

In earlier etcd versions, you can use the API to dynamically change the log level. Configure debug logging using the command below:

```
docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"DEBUG"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINTS)/config/local/log
```

To reset the log level back to the default (`INFO`), use the following command:

```
docker run --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro appropriate/curl -s -XPUT -d '{"Level":"INFO"}' --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) $(docker exec etcd printenv ETCDCTL_ENDPOINTS)/config/local/log
```

## etcd Content

If you want to investigate the contents of etcd, you can either watch streaming events or query etcd directly. See below for examples.

### Watch Streaming Events

Command:
```
docker exec etcd etcdctl watch --prefix /registry
```

If you only want to see the affected keys (and not the binary data), you can append `| grep -a ^/registry` to filter for keys only, as shown below.
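
```
docker exec etcd etcdctl watch --prefix /registry | grep -a ^/registry
```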

### Query etcd Directly

Command:
```
docker exec etcd etcdctl get /registry --prefix=true --keys-only
```

You can process the data to get a summary of the count per key, using the command below:

```
docker exec etcd etcdctl get /registry --prefix=true --keys-only | grep -v ^$ | awk -F'/' '{ if ($3 ~ /cattle.io/) {h[$3"/"$4]++} else { h[$3]++ }} END { for(k in h) print h[k], k }' | sort -nr
```

## Replacing Unhealthy etcd Nodes

When a node in your etcd cluster becomes unhealthy, the recommended approach is to fix or remove the failed or unhealthy node before adding a new etcd node to the cluster.
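
If you do remove the member at the etcd level, the low-level step looks like the sketch below; in Rancher-launched clusters, prefer removing the node through Rancher so the cluster state stays consistent. The `<member-ID>` placeholder comes from `etcdctl member list`:

```
docker exec etcd etcdctl member remove <member-ID>
```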

---
title: Troubleshooting nginx-proxy
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/troubleshooting/kubernetes-components/troubleshooting-nginx-proxy"/>
</head>

The `nginx-proxy` container is deployed on every node that does not have the `controlplane` role. It provides access to all the nodes with the `controlplane` role by dynamically generating an NGINX configuration based on the available nodes with that role.

## Check if the Container is Running

The container is called `nginx-proxy` and should have status `Up`. The duration shown after `Up` is the time the container has been running.

```
docker ps -a -f=name=nginx-proxy
```

Example output:

```
docker ps -a -f=name=nginx-proxy
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS               NAMES
c3e933687c0e        rancher/rke-tools:v0.1.15   "nginx-proxy CP_HO..."   3 hours ago         Up 3 hours                              nginx-proxy
```

## Check Generated NGINX Configuration

The generated configuration should include the IP addresses of the nodes with the `controlplane` role. The configuration can be checked using the following command:

```
docker exec nginx-proxy cat /etc/nginx/nginx.conf
```

Example output:
```
error_log stderr notice;

worker_processes auto;
events {
    multi_accept on;
    use epoll;
    worker_connections 1024;
}

stream {
    upstream kube_apiserver {
        server ip_of_controlplane_node1:6443;
        server ip_of_controlplane_node2:6443;
    }

    server {
        listen 6443;
        proxy_pass kube_apiserver;
        proxy_timeout 30;
        proxy_connect_timeout 2s;
    }
}
```
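
To verify that the proxy actually forwards traffic to an apiserver, you can probe the local port from the node (a sketch, assuming the default listen port 6443 from the configuration above; an HTTP status such as `401` or `403` still confirms the connection reaches an apiserver when anonymous access is disabled):

```
curl -k -o /dev/null -w '%{http_code}\n' https://127.0.0.1:6443/healthz
```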

## nginx-proxy Container Logging

The container logs can contain information on what the problem could be.

```
docker logs nginx-proxy
```

---
title: Troubleshooting Worker Nodes and Generic Components
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/troubleshooting/kubernetes-components/troubleshooting-worker-nodes-and-generic-components"/>
</head>

This section applies to every node, as it covers components that run on nodes with any role.

## Check if the Containers are Running

There are two specific containers launched on nodes with the `worker` role:

* `kubelet`
* `kube-proxy`

The containers should have status `Up`. The duration shown after `Up` is the time the container has been running.

```
docker ps -a -f=name='kubelet|kube-proxy'
```

Example output:
```
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS              PORTS               NAMES
158d0dcc33a5        rancher/hyperkube:v1.11.5-rancher1   "/opt/rke-tools/en..."   3 hours ago         Up 3 hours                              kube-proxy
a30717ecfb55        rancher/hyperkube:v1.11.5-rancher1   "/opt/rke-tools/en..."   3 hours ago         Up 3 hours                              kubelet
```

## Container Logging

The container logs can contain information on what the problem could be.

```
docker logs kubelet
docker logs kube-proxy
```
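
Besides the logs, both components expose a local health endpoint you can probe (a sketch, assuming the default ports: `10248` for the kubelet and `10256` for kube-proxy):

```
curl -s http://127.0.0.1:10248/healthz
curl -s http://127.0.0.1:10256/healthz
```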