Merge pull request #3089 from catherineluse/staging-with-master-changes
Merge master to staging
@@ -356,7 +356,7 @@ The `--disable-selinux` option should not be used. It is deprecated and will be

Using a custom `--data-dir` under SELinux is not supported. To customize it, you would most likely need to write your own custom policy. For guidance, you could refer to the [containers/container-selinux](https://github.com/containers/container-selinux) repository, which contains the SELinux policy files for Container Runtimes, and the [rancher/k3s-selinux](https://github.com/rancher/k3s-selinux) repository, which contains the SELinux policy for K3s.

{{%/tab%}}
-{{% tab "K3s prior to v1.19.1+k3s1" %}}
+{{% tab "K3s before v1.19.1+k3s1" %}}

SELinux is automatically enabled for the built-in containerd.

@@ -9,7 +9,7 @@ This section contains instructions for installing K3s in various environments. P

[High Availability with an External DB]({{<baseurl>}}/k3s/latest/en/installation/ha/) details how to set up an HA K3s cluster backed by an external datastore such as MySQL, PostgreSQL, or etcd.

-[High Availability with Embedded DB (Experimental)]({{<baseurl>}}/k3s/latest/en/installation/ha-embedded/) details how to set up an HA K3s cluster that leverages a built-in distributed database.
+[High Availability with Embedded DB]({{<baseurl>}}/k3s/latest/en/installation/ha-embedded/) details how to set up an HA K3s cluster that leverages a built-in distributed database.

[Air-Gap Installation]({{<baseurl>}}/k3s/latest/en/installation/airgap/) details how to set up K3s in environments that do not have direct access to the Internet.

@@ -69,7 +69,7 @@ INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetok

{{% /tab %}}
{{% tab "High Availability Configuration" %}}

-Reference the [High Availability with an External DB]({{< baseurl >}}/k3s/latest/en/installation/ha) or [High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded) guides. You will be tweaking install commands so you specify `INSTALL_K3S_SKIP_DOWNLOAD=true` and run your install script locally instead of via curl. You will also utilize `INSTALL_K3S_EXEC='args'` to supply any arguments to k3s.
+Reference the [High Availability with an External DB]({{< baseurl >}}/k3s/latest/en/installation/ha) or [High Availability with Embedded DB]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded) guides. You will be tweaking install commands so you specify `INSTALL_K3S_SKIP_DOWNLOAD=true` and run your install script locally instead of via curl. You will also utilize `INSTALL_K3S_EXEC='args'` to supply any arguments to k3s.

For example, step two of the High Availability with an External DB guide mentions the following:

@@ -7,7 +7,7 @@ The ability to run Kubernetes using a datastore other than etcd sets K3s apart f

* If your team doesn't have expertise in operating etcd, you can choose an enterprise-grade SQL database like MySQL or PostgreSQL
* If you need to run a simple, short-lived cluster in your CI/CD environment, you can use the embedded SQLite database
-* If you wish to deploy Kubernetes on the edge and require a highly available solution but can't afford the operational overhead of managing a database at the edge, you can use K3s's embedded HA datastore built on top of embedded etcd (currently experimental)
+* If you wish to deploy Kubernetes on the edge and require a highly available solution but can't afford the operational overhead of managing a database at the edge, you can use K3s's embedded HA datastore built on top of embedded etcd.

K3s supports the following datastore options:

@@ -16,7 +16,7 @@ K3s supports the following datastore options:

* [MySQL](https://www.mysql.com/) (certified against version 5.7)
* [MariaDB](https://mariadb.org/) (certified against version 10.3.20)
* [etcd](https://etcd.io/) (certified against version 3.3.15)
-* Embedded etcd for High Availability (experimental)
+* Embedded etcd for High Availability

### External Datastore Configuration Parameters
If you wish to use an external datastore such as PostgreSQL, MySQL, or etcd, you must set the `datastore-endpoint` parameter so that K3s knows how to connect to it. You may also specify parameters to configure the authentication and encryption of the connection. The below table summarizes these parameters, which can be passed as either CLI flags or environment variables.

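To illustrate (this example is not part of the diff above), passing an external MySQL endpoint when starting a K3s server might look roughly like the following; the hostname, credentials, and database name are placeholders.

```bash
# Hypothetical values — replace username, password, hostname, and database name.
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://username:password@tcp(hostname:3306)/k3s"
```
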
@@ -0,0 +1,9 @@

---
title: "Security"
weight: 90
---

This section describes the methodology and means of securing a K3s cluster. It's broken into 2 sections.

* [Hardening Guide](./hardening_guide/)
* [CIS Benchmark Self-Assessment Guide](./self_assessment/)

@@ -0,0 +1,544 @@

---
title: "CIS Hardening Guide"
weight: 80
---

This document provides prescriptive guidance for hardening a production installation of K3s. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).

K3s has a number of security mitigations applied and turned on by default and will pass a number of the Kubernetes CIS controls without modification. There are some notable exceptions to this that require manual intervention to fully comply with the CIS Benchmark:

1. K3s will not modify the host operating system. Any host-level modifications will need to be done manually.
2. Certain CIS policy controls for PodSecurityPolicies and NetworkPolicies will restrict the functionality of this cluster. You must opt into having K3s configure these by adding the appropriate options (enabling of admission plugins) to your command-line flags or configuration file, as well as manually applying appropriate policies. Further detail is provided in the sections below.

The first section (1.1) of the CIS Benchmark concerns itself primarily with pod manifest permissions and ownership. K3s doesn't utilize these for the core components since everything is packaged into a single binary.

## Host-level Requirements

There are two areas of host-level requirements: kernel parameters and etcd process/directory configuration. These are outlined in this section.

### Ensure `protect-kernel-defaults` is set

This is a kubelet flag that will cause the kubelet to exit if the required kernel parameters are unset or are set to values that are different from the kubelet's defaults.

> **Note:** `protect-kernel-defaults` is exposed as a top-level flag for K3s.

#### Set kernel parameters

Create a file called `/etc/sysctl.d/90-kubelet.conf` and add the snippet below. Then run `sysctl -p /etc/sysctl.d/90-kubelet.conf`.

```bash
vm.panic_on_oom=0
vm.overcommit_memory=1
kernel.panic=10
kernel.panic_on_oops=1
```

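With the kernel parameters in place, the flag itself can then be set when starting K3s. A minimal sketch (the complete hardened invocation appears near the end of this guide):

```bash
# Start the K3s server with protect-kernel-defaults enabled.
k3s server --protect-kernel-defaults=true
```
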
## Kubernetes Runtime Requirements

The runtime requirements to comply with the CIS Benchmark are centered around pod security (PSPs) and network policies. These are outlined in this section. K3s doesn't apply any default PSPs or network policies; however, it ships with a controller that is meant to apply a given set of network policies. By default, K3s runs with the "NodeRestriction" admission controller. To enable PSPs, add the following to the K3s start command: `--kube-apiserver-arg="enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount"`. This will have the effect of maintaining the "NodeRestriction" plugin as well as enabling the "PodSecurityPolicy" plugin.

### PodSecurityPolicies

When PSPs are enabled, a policy can be applied to satisfy the necessary controls described in section 5.2 of the CIS Benchmark.

Here's an example of a compliant PSP.

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: cis1.5-compliant-psp
spec:
  privileged: false                # CIS - 5.2.1
  allowPrivilegeEscalation: false  # CIS - 5.2.5
  requiredDropCapabilities:        # CIS - 5.2.7/8/9
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'persistentVolumeClaim'
  hostNetwork: false               # CIS - 5.2.4
  hostIPC: false                   # CIS - 5.2.3
  hostPID: false                   # CIS - 5.2.2
  runAsUser:
    rule: 'MustRunAsNonRoot'       # CIS - 5.2.6
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
```

For the above PSP to be effective, we need to create a ClusterRole and a ClusterRoleBinding. We also need to include a "system unrestricted policy," which is needed for system-level pods that require additional privileges.

These can be combined with the PSP yaml above and the NetworkPolicy yaml below into a single file and placed in the `/var/lib/rancher/k3s/server/manifests` directory. Below is an example of a `policy.yaml` file.

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: cis1.5-compliant-psp
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:restricted
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - cis1.5-compliant-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default:restricted
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:restricted
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: kube-system
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: default
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: default
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: kube-public
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-public
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: system-unrestricted-psp
spec:
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  fsGroup:
    rule: RunAsAny
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
  - max: 65535
    min: 0
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system-unrestricted-node-psp-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system-unrestricted-psp-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system-unrestricted-psp-role
rules:
- apiGroups:
  - policy
  resourceNames:
  - system-unrestricted-psp
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system-unrestricted-svc-acct-psp-rolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system-unrestricted-psp-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
```

> **Note:** The Kubernetes critical additions such as CNI, DNS, and Ingress are run as pods in the `kube-system` namespace. Therefore, this namespace will have a policy that is less restrictive so that these components can run properly.

### NetworkPolicies

> NOTE: K3s deploys kube-router for network policy enforcement. Support for this in K3s is currently experimental.

CIS requires that all namespaces have a network policy applied that reasonably limits traffic into namespaces and pods.

Here's an example of a compliant network policy.

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: intra-namespace
  namespace: kube-system
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system
```

> **Note:** Operators must manage network policies as normal for additional namespaces that are created.

## Known Issues
The following are controls that K3s currently does not pass by default. Each gap will be explained, along with a note clarifying whether it can be passed through manual operator intervention, or if it will be addressed in a future release of K3s.

### Control 1.2.15
Ensure that the admission control plugin `NamespaceLifecycle` is set.
<details>
<summary>Rationale</summary>
Setting admission control policy to NamespaceLifecycle ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating new objects. This is recommended to enforce the integrity of the namespace termination process and also for the availability of the newer objects.

This can be remediated by adding `NamespaceLifecycle` to the `enable-admission-plugins=` list and passing that list to the `--kube-apiserver-arg=` argument of `k3s server`. An example can be found below.
</details>

### Control 1.2.16 (mentioned above)
Ensure that the admission control plugin `PodSecurityPolicy` is set.
<details>
<summary>Rationale</summary>
A Pod Security Policy is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. The PodSecurityPolicy objects define a set of conditions that a pod must run with in order to be accepted into the system. Pod Security Policies are comprised of settings and strategies that control the security features a pod has access to, and hence this must be used to control pod access permissions.

This can be remediated by adding `PodSecurityPolicy` to the `enable-admission-plugins=` list and passing that list to the `--kube-apiserver-arg=` argument of `k3s server`. An example can be found below.
</details>

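As a focused illustration of the remediation for controls 1.2.15 and 1.2.16 (the full command appears at the end of this guide), the relevant fragment of the `k3s server` invocation is:

```bash
# Enable the NamespaceLifecycle and PodSecurityPolicy admission plugins
# in addition to the defaults.
k3s server \
  --kube-apiserver-arg='enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,ServiceAccount'
```
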
### Control 1.2.22
Ensure that the `--audit-log-path` argument is set.
<details>
<summary>Rationale</summary>
Auditing the Kubernetes API Server provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators or other components of the system. Even though Kubernetes currently provides only basic audit capabilities, it should be enabled. You can enable it by setting an appropriate audit log path.

This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
</details>

### Control 1.2.23
Ensure that the `--audit-log-maxage` argument is set to 30 or as appropriate.
<details>
<summary>Rationale</summary>
Retaining logs for at least 30 days ensures that you can go back in time and investigate or correlate any events. Set your audit log retention period to 30 days or as per your business requirements.

This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
</details>

### Control 1.2.24
Ensure that the `--audit-log-maxbackup` argument is set to 10 or as appropriate.
<details>
<summary>Rationale</summary>
Kubernetes automatically rotates the log files. Retaining old log files ensures that you would have sufficient log data available for carrying out any investigation or correlation. For example, if you have set a file size of 100 MB and the number of old log files to keep as 10, you would have approximately 1 GB of log data that you could potentially use for your analysis.

This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
</details>

### Control 1.2.25
Ensure that the `--audit-log-maxsize` argument is set to 100 or as appropriate.
<details>
<summary>Rationale</summary>
Kubernetes automatically rotates the log files. Retaining old log files ensures that you would have sufficient log data available for carrying out any investigation or correlation. If you have set a file size of 100 MB and the number of old log files to keep as 10, you would have approximately 1 GB of log data that you could potentially use for your analysis.

This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
</details>

### Control 1.2.26
Ensure that the `--request-timeout` argument is set as appropriate.
<details>
<summary>Rationale</summary>
Setting a global request timeout allows extending the API server request timeout limit to a duration appropriate to the user's connection speed. By default, it is set to 60 seconds, which might be problematic on slower connections, making cluster resources inaccessible once the data volume for requests exceeds what can be transmitted in 60 seconds. But setting this timeout limit too large can exhaust the API server resources, making it prone to Denial-of-Service attack. Hence, it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed.

This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
</details>

### Control 1.2.27
Ensure that the `--service-account-lookup` argument is set to true.
<details>
<summary>Rationale</summary>
If `--service-account-lookup` is not enabled, the apiserver only verifies that the authentication token is valid, and does not validate that the service account token mentioned in the request is actually present in etcd. This allows using a service account token even after the corresponding service account is deleted. This is an example of a time-of-check to time-of-use security issue.

This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
</details>

### Control 1.2.33
Ensure that the `--encryption-provider-config` argument is set as appropriate.
<details>
<summary>Rationale</summary>
Where `etcd` encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, the aescbc, kms and secretbox providers are likely to be appropriate options.
</details>

### Control 1.2.34
Ensure that encryption providers are appropriately configured.
<details>
<summary>Rationale</summary>
`etcd` is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures.

This can be remediated by passing a valid configuration to `k3s` as outlined above.
</details>

### Control 1.3.1
Ensure that the `--terminated-pod-gc-threshold` argument is set as appropriate.
<details>
<summary>Rationale</summary>
Garbage collection is important to ensure sufficient resource availability and avoiding degraded performance and availability. In the worst case, the system might crash or just be unusable for a long period of time. The current setting for garbage collection is 12,500 terminated pods, which might be too high for your system to sustain. Based on your system resources and tests, choose an appropriate threshold value to activate garbage collection.

This can be remediated by passing this argument as a value to the `--kube-controller-manager-arg=` argument to `k3s server`. An example can be found below.
</details>

### Control 3.2.1
Ensure that a minimal audit policy is created (Scored)
<details>
<summary>Rationale</summary>
Logging is an important detective control for all systems, to detect potential unauthorized access.

This can be remediated by passing controls 1.2.22 - 1.2.25 and verifying their efficacy.
</details>

### Control 4.2.7
Ensure that the `--make-iptables-util-chains` argument is set to true.
<details>
<summary>Rationale</summary>
Kubelets can automatically manage the required changes to iptables based on how you choose your networking options for the pods. It is recommended to let kubelets manage the changes to iptables. This ensures that the iptables configuration remains in sync with the pods' networking configuration. Manually configuring iptables with dynamic pod network configuration changes might hamper the communication between pods/containers and to the outside world. You might have iptables rules that are too restrictive or too open.

This can be remediated by passing this argument as a value to the `--kubelet-arg=` argument to `k3s server`. An example can be found below.
</details>

### Control 5.1.5
Ensure that default service accounts are not actively used. (Scored)
<details>
<summary>Rationale</summary>

Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod.

Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account.

The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.
</details>

The remediation for this is to update the `automountServiceAccountToken` field to `false` for the `default` service account in each namespace.

For `default` service accounts in the built-in namespaces (`kube-system`, `kube-public`, `kube-node-lease`, and `default`), K3s does not automatically do this. You can manually update this field on these service accounts to pass the control.

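One possible way to apply that remediation (a sketch, not a prescribed procedure) is to patch the `default` service account in each of those namespaces:

```bash
# Disable token automounting on the default service account in the built-in namespaces.
for ns in default kube-system kube-public kube-node-lease; do
  kubectl patch serviceaccount default -n "$ns" \
    -p '{"automountServiceAccountToken": false}'
done
```
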
## Control Plane Execution and Arguments

Listed below are the K3s control plane components and the arguments they're given at start, by default. Commented to their right is the CIS 1.5 control that they satisfy.

```bash
kube-apiserver
--advertise-port=6443
--allow-privileged=true
--anonymous-auth=false # 1.2.1
--api-audiences=unknown
--authorization-mode=Node,RBAC
--bind-address=127.0.0.1
--cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs
--client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt # 1.2.31
--enable-admission-plugins=NodeRestriction,PodSecurityPolicy # 1.2.17
--etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt # 1.2.32
--etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt # 1.2.29
--etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key # 1.2.29
--etcd-servers=https://127.0.0.1:2379
--insecure-port=0 # 1.2.19
--kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt
--kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt
--kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key
--profiling=false # 1.2.21
--proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt
--proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
--requestheader-allowed-names=system:auth-proxy
--requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--secure-port=6444 # 1.2.20
--service-account-issuer=k3s
--service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key # 1.2.28
--service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key
--service-cluster-ip-range=10.43.0.0/16
--storage-backend=etcd3
--tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt # 1.2.30
--tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key # 1.2.30
```

```bash
kube-controller-manager
--address=127.0.0.1
--allocate-node-cidrs=true
--bind-address=127.0.0.1 # 1.3.7
--cluster-cidr=10.42.0.0/16
--cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt
--cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key
--kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig
--port=10252
--profiling=false # 1.3.2
--root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt # 1.3.5
--secure-port=0
--service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key # 1.3.4
--use-service-account-credentials=true # 1.3.3
```

```bash
kube-scheduler
--address=127.0.0.1
--bind-address=127.0.0.1 # 1.4.2
--kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig
--port=10251
--profiling=false # 1.4.1
--secure-port=0
```

```bash
kubelet
--address=0.0.0.0
--anonymous-auth=false # 4.2.1
--authentication-token-webhook=true
--authorization-mode=Webhook # 4.2.2
--cgroup-driver=cgroupfs
--client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt # 4.2.3
--cloud-provider=external
--cluster-dns=10.43.0.10
--cluster-domain=cluster.local
--cni-bin-dir=/var/lib/rancher/k3s/data/223e6420f8db0d8828a8f5ed3c44489bb8eb47aa71485404f8af8c462a29bea3/bin
--cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d
--container-runtime-endpoint=/run/k3s/containerd/containerd.sock
--container-runtime=remote
--containerd=/run/k3s/containerd/containerd.sock
--eviction-hard=imagefs.available<5%,nodefs.available<5%
--eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10%
--fail-swap-on=false
--healthz-bind-address=127.0.0.1
--hostname-override=hostname01
--kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig
--kubelet-cgroups=/systemd/system.slice
--node-labels=
--pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests
--protect-kernel-defaults=true # 4.2.6
--read-only-port=0 # 4.2.4
--resolv-conf=/run/systemd/resolve/resolv.conf
--runtime-cgroups=/systemd/system.slice
--serialize-image-pulls=false
--tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt # 4.2.10
--tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key # 4.2.10
```

The command below is an example of how the outlined remediations can be applied.

```bash
k3s server \
    --protect-kernel-defaults=true \
    --secrets-encryption=true \
    --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log' \
    --kube-apiserver-arg='audit-log-maxage=30' \
    --kube-apiserver-arg='audit-log-maxbackup=10' \
    --kube-apiserver-arg='audit-log-maxsize=100' \
    --kube-apiserver-arg='request-timeout=300s' \
    --kube-apiserver-arg='service-account-lookup=true' \
    --kube-apiserver-arg='enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,ServiceAccount' \
    --kube-controller-manager-arg='terminated-pod-gc-threshold=10' \
    --kube-controller-manager-arg='use-service-account-credentials=true' \
    --kubelet-arg='streaming-connection-idle-timeout=5m' \
    --kubelet-arg='make-iptables-util-chains=true'
```

## Conclusion

If you have followed this guide, your K3s cluster will be configured to comply with the CIS Kubernetes Benchmark. You can review the [CIS Benchmark Self-Assessment Guide](../self_assessment/) to understand the expectations of each of the benchmarks and how you can do the same on your cluster.

File diff suppressed because it is too large
@@ -75,7 +75,7 @@ The status should be Bound for each.

[comment]: <> (pending change - longhorn may support arm64 and armhf in the future.)

-> **Note:** At this time Longhorn only supports amd64.
+> **Note:** At this time Longhorn only supports amd64 and arm64 (experimental).

K3s supports [Longhorn](https://github.com/longhorn/longhorn). Longhorn is an open-source distributed block storage system for Kubernetes.

@@ -7,7 +7,7 @@ weight: 190

### Pre-Requisites

-Prior to launching RancherOS EC2 instances, the [ECS Container Instance IAM Role](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html) will need to have been created. This `ecsInstanceRole` will need to be used when launching EC2 instances. If you have been using ECS, you created this role if you followed the ECS "Get Started" interactive guide.
+Before launching RancherOS EC2 instances, the [ECS Container Instance IAM Role](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html) will need to have been created. This `ecsInstanceRole` will need to be used when launching EC2 instances. If you have been using ECS, you created this role if you followed the ECS "Get Started" interactive guide.

### Launching an instance with ECS

@@ -72,4 +72,4 @@ Because of the changes necessary to address [CVE-2018-20321](https://cve.mitre.o

```
After a few moments the clusters will go from `Unavailable` back to `Available`.

-6. Continue using Rancher as normal.
+6. Continue using Rancher as normal.

@@ -142,7 +142,6 @@ data:

Make sure to encode the keys to base64 in the YAML file.
Run the following command to encode the keys.

```
echo -n "your_key" |base64
```

@@ -190,4 +189,4 @@ After the role is created, and you have attached the corresponding instance prof

# Examples

-For example Backup custom resources, refer to [this page.](../../examples/#backup)
+For example Backup custom resources, refer to [this page.](../../examples/#backup)

@@ -59,4 +59,4 @@ kubectl logs <pod name from above command> -n cattle-resources-system -f

### Cleanup

-If you created the restore resource with kubectl, remove the resource to prevent a naming conflict with future restores.
+If you created the restore resource with kubectl, remove the resource to prevent a naming conflict with future restores.

@@ -44,7 +44,7 @@ To use a storage provisioner that is not on the above list, you will need to use

These steps describe how to set up a storage class at the cluster level.

-1. Go to the cluster for which you want to dynamically provision persistent storage volumes.
+1. Go to the **Cluster Explorer** of the cluster for which you want to dynamically provision persistent storage volumes.

1. From the cluster view, select `Storage > Storage Classes`. Click `Add Class`.

@@ -64,7 +64,7 @@ For full information about the storage class parameters, refer to the official [

These steps describe how to set up a PVC in the namespace where your stateful workload will be deployed.

-1. Go to the project containing a workload that you want to add a PVC to.
+1. Go to the **Cluster Manager** to the project containing a workload that you want to add a PVC to.

1. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Then select the **Volumes** tab. Click **Add Volume**.

@@ -94,7 +94,7 @@ To attach the PVC to a new workload,

1. Create a workload as you would in [Deploying Workloads]({{<baseurl>}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/).
1. For **Workload Type**, select **Stateful set of 1 pod**.
-1. Expand the **Volumes** section and click **Add Volume > Add a New Persistent Volume (Claim).**
+1. Expand the **Volumes** section and click **Add Volume > Use an Existing Persistent Volume (Claim).**
1. In the **Persistent Volume Claim** section, select the newly created persistent volume claim that is attached to the storage class.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Launch.**

@@ -105,9 +105,9 @@ To attach the PVC to an existing workload,

1. Go to the project that has the workload that will have the PVC attached.
1. Go to the workload that will have persistent storage and click **⋮ > Edit.**
-1. Expand the **Volumes** section and click **Add Volume > Add a New Persistent Volume (Claim).**
+1. Expand the **Volumes** section and click **Add Volume > Use an Existing Persistent Volume (Claim).**
1. In the **Persistent Volume Claim** section, select the newly created persistent volume claim that is attached to the storage class.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Save.**

-**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage.
+**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage.

@@ -142,4 +142,4 @@ After creating your cluster, you can access it through the Rancher UI. As a best

- **Access your cluster with the kubectl CLI:** Follow [these steps]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#accessing-clusters-with-kubectl-on-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server’s authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI.
- **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster.
-- **Provision Storage:** For an example of how to provision storage in vSphere using Rancher, refer to [this section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere) In order to dynamically provision storage in vSphere, the vSphere provider must be [enabled.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere)
+- **Provision Storage:** For an example of how to provision storage in vSphere using Rancher, refer to [this section.]({{<baseurl>}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere) In order to dynamically provision storage in vSphere, the vSphere provider must be [enabled.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere)

@@ -13,4 +13,4 @@ The vSphere node templates in Rancher were updated in the following Rancher vers

- [v2.2.0](./v2.2.0)
- [v2.0.4](./v2.0.4)

-For Rancher versions before v2.0.4, refer to [this version.](./prior-to-2.0.4)
+For Rancher versions before v2.0.4, refer to [this version.](./prior-to-2.0.4)

@@ -1,7 +1,7 @@

---
-title: Install Rancher on a Kubernetes Cluster
+title: Install/Upgrade Rancher on a Kubernetes Cluster
description: Learn how to install Rancher in development and production environments. Read about single node and high availability installation
-weight: 3
+weight: 2
aliases:
- /rancher/v2.x/en/installation/k8s-install/
- /rancher/v2.x/en/installation/k8s-install/helm-rancher

@@ -118,8 +118,10 @@ If you are currently running the cert-manager whose version is older than v0.11,

1. Uninstall Rancher

    ```
-    helm delete rancher -n cattle-system
+    helm delete rancher
    ```

    In case this results in an error that the release "rancher" was not found, make sure you are using the correct deployment name. Use `helm list` to list the helm-deployed releases.

2. Uninstall and reinstall `cert-manager` according to the instructions on the [Upgrading Cert-Manager]({{<baseurl>}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/helm-2-instructions) page.

@@ -1,6 +1,6 @@

---
-title: Install Rancher on a Linux OS
-weight: 2
+title: Install/Upgrade Rancher on a Linux OS
+weight: 3
---

_Available as of Rancher v2.5.4_

@@ -15,7 +15,7 @@ For convenience export the IP address and port of your proxy into an environment

export proxy_host="10.0.0.5:8888"
export HTTP_PROXY=http://${proxy_host}
export HTTPS_PROXY=http://${proxy_host}
-export NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16
+export NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,cattle-system.svc
```

Next configure apt to use this proxy when installing packages. If you are not using Ubuntu, you have to adapt this step accordingly:

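The apt configuration itself is outside this hunk; as a hedged sketch, it would typically reuse the same proxy endpoint, for example:

```bash
# Hypothetical /etc/apt/apt.conf.d/80proxy — file name and location are illustrative.
cat <<EOF | sudo tee /etc/apt/apt.conf.d/80proxy
Acquire::http::Proxy "http://${proxy_host}/";
Acquire::https::Proxy "http://${proxy_host}/";
EOF
```
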
@@ -44,7 +44,7 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s

1. Using a remote Terminal connection, log into the node running your Rancher Server.

-1. Pull the version of Rancher that you were running before upgrade. Replace the `<PRIOR_RANCHER_VERSION>` with [that version](#before-you-start).
+1. Pull the version of Rancher that you were running before upgrade. Replace the `<PRIOR_RANCHER_VERSION>` with that version.

    For example, if you were running Rancher v2.0.5 before upgrade, pull v2.0.5.

@@ -168,6 +168,14 @@ The following tables break down the port requirements for Rancher nodes, for inb

{{% /accordion %}}

### Ports for Rancher Server in GCP GKE

When deploying Rancher into a Google Kubernetes Engine [private cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters), the nodes where Rancher runs must be accessible from the control plane:

| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 9443 | The GKE master `/28` range | Rancher webhooks |

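A hedged sketch of how such a rule could be created with `gcloud` (the rule name, network name, and master CIDR are placeholders, not values taken from this change):

```bash
# Allow the GKE control plane range to reach Rancher webhooks on TCP 9443.
gcloud compute firewall-rules create rancher-webhook-from-gke-master \
  --network my-gke-network \
  --allow tcp:9443 \
  --source-ranges 172.16.0.32/28
```
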
# Downstream Kubernetes Cluster Nodes

Downstream Kubernetes clusters run your apps and services. This section describes what ports need to be opened on the nodes in downstream clusters so that Rancher can communicate with them.

@@ -35,9 +35,11 @@ If the Rancher server is installed in a single Docker container, you only need o

1. Choose a new or existing key pair that you will use to connect to your instance later. If you are using an existing key pair, make sure you already have access to the private key.
1. Click **Launch Instances.**

-**Result:** You have created Rancher nodes that satisfy the requirements for OS, hardware, and networking. Next, you will install Docker on each node.
+**Result:** You have created Rancher nodes that satisfy the requirements for OS, hardware, and networking.

-### 3. Install Docker and Create User
+**Note:** If the nodes are being used for an RKE Kubernetes cluster, install Docker on each node in the next step. For a K3s Kubernetes cluster, the nodes are now ready to install K3s.

+### 3. Install Docker and Create User for RKE Kubernetes Cluster Nodes

1. From the [AWS EC2 console,](https://console.aws.amazon.com/ec2/) click **Instances** in the left panel.
1. Go to the instance that you want to install Docker on. Select the instance and click **Actions > Connect.**

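The remaining steps of that procedure are not part of this hunk; as a rough sketch, installing Docker on such a node with Rancher's install scripts and granting a user access often looks like the following (the script version and username are assumptions):

```bash
# Install a supported Docker release and allow the ubuntu user to run docker commands.
curl https://releases.rancher.com/install-docker/19.03.sh | sh
sudo usermod -aG docker ubuntu
```
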
@@ -76,7 +76,7 @@ In this task, you can use the versatile **Custom** option. This option lets you

1. From the **Clusters** page, click **Add Cluster**.

-2. Choose **Custom**.
+2. Choose **Existing Nodes**.

3. Enter a **Cluster Name**.

@@ -16,7 +16,7 @@ There are a few things worth noting:

* In addition to these pluggable add-ons, you can specify an add-on that you want deployed after the cluster deployment is complete.
* As of v0.1.8, RKE will update an add-on if it has the same name.
-* Prior to v0.1.8, update any add-ons by using `kubectl edit`.
+* Before v0.1.8, update any add-ons by using `kubectl edit`.

## Critical and Non-Critical Add-ons

@@ -6,7 +6,7 @@ weight: 262

By default, RKE deploys the NGINX ingress controller on all schedulable nodes.

-> **Note:** As of v0.1.8, only workers are considered schedulable nodes, but prior to v0.1.8, worker and controlplane nodes were considered schedulable nodes.
+> **Note:** As of v0.1.8, only workers are considered schedulable nodes, but before v0.1.8, worker and controlplane nodes were considered schedulable nodes.

RKE will deploy the ingress controller as a DaemonSet with `hostnetwork: true`, so ports `80` and `443` will be opened on each node where the controller is deployed.

@@ -18,7 +18,7 @@ RKE only adds additional add-ons when using `rke up` multiple times. RKE does **

As of v0.1.8, RKE will update an add-on if it has the same name.

-Prior to v0.1.8, update any add-ons by using `kubectl edit`.
+Before v0.1.8, update any add-ons by using `kubectl edit`.

## In-line Add-ons

@@ -32,4 +32,4 @@ $ govc vm.change -vm <vm-path> -e disk.enableUUID=TRUE

In Rancher v2.0.4+, disk UUIDs are enabled in vSphere node templates by default.

-If you are using Rancher prior to v2.0.4, refer to the [vSphere node template documentation.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/#disk-uuids) for details on how to enable a UUID with a Rancher node template.
+If you are using Rancher before v2.0.4, refer to the [vSphere node template documentation.]({{<baseurl>}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/before-2.0.4/#disk-uuids) for details on how to enable a UUID with a Rancher node template.

@@ -78,7 +78,7 @@ nodes:

You can specify the list of roles that you want the node to have as part of the Kubernetes cluster. Three roles are supported: `controlplane`, `etcd` and `worker`. Node roles are not mutually exclusive. It's possible to assign any combination of roles to any node. It's also possible to change a node's role using the upgrade process.

-> **Note:** Prior to v0.1.8, workloads/pods might have run on any nodes with `worker` or `controlplane` roles, but as of v0.1.8, they will only be deployed to any `worker` nodes.
+> **Note:** Before v0.1.8, workloads/pods might have run on any nodes with `worker` or `controlplane` roles, but as of v0.1.8, they will only be deployed to any `worker` nodes.

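For illustration, a `nodes` section that combines these roles might look like the following sketch (addresses and SSH user are placeholders, echoing the two-node example used elsewhere in this changeset):

```yaml
nodes:
  - address: 10.0.0.1
    user: ubuntu
    role: [controlplane, worker]
  - address: 10.0.0.2
    user: ubuntu
    role: [etcd]
```
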
### etcd

@@ -35,5 +35,5 @@ By default, all system images are being pulled from DockerHub. If you are on a s

As of v0.1.10, you have to configure your private registry credentials, but you can specify this registry as a default registry so that all [system images]({{<baseurl>}}/rke/latest/en/config-options/system-images/) are pulled from the designated private registry. You can use the command `rke config --system-images` to get the list of default system images to populate your private registry.

-Prior to v0.1.10, you had to configure your private registry credentials **and** update the names of all the [system images]({{<baseurl>}}/rke/latest/en/config-options/system-images/) in the `cluster.yml` so that the image names would have the private registry URL appended before each image name.
+Before v0.1.10, you had to configure your private registry credentials **and** update the names of all the [system images]({{<baseurl>}}/rke/latest/en/config-options/system-images/) in the `cluster.yml` so that the image names would have the private registry URL appended before each image name.

@@ -11,13 +11,13 @@ For any of the Kubernetes services, you can update the `extra_args` to change th

As of `v0.1.3`, using `extra_args` will add new arguments and **override** any existing defaults. For example, if you need to modify the default admission plugins list, you need to include the default list and edit it with your changes so all changes are included.

-Prior to `v0.1.3`, using `extra_args` would only add new arguments to the list and there was no ability to change the default list.
+Before `v0.1.3`, using `extra_args` would only add new arguments to the list and there was no ability to change the default list.

All service defaults and parameters are defined per [`kubernetes_version`]({{<baseurl>}}/rke/latest/en/config-options/#kubernetes-version):

- For RKE v0.3.0+, the service defaults and parameters are defined per [`kubernetes_version`]({{<baseurl>}}/rke/latest/en/config-options/#kubernetes-version). The service defaults are located [here](https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_service_options.go). The default list of admission plugins is the same for all Kubernetes versions and is located [here](https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_service_options.go#L11).

-- For RKE prior to v0.3.0, the service defaults and admission plugins are defined per [`kubernetes_version`]({{<baseurl>}}/rke/latest/en/config-options/#kubernetes-version) and located [here](https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go).
+- For RKE before v0.3.0, the service defaults and admission plugins are defined per [`kubernetes_version`]({{<baseurl>}}/rke/latest/en/config-options/#kubernetes-version) and located [here](https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go).

```yaml
services:

@@ -63,7 +63,7 @@ system_images:

  metrics_server: rancher/metrics-server-amd64:v0.3.1
```

-Prior to `v0.1.6`, instead of using the `rancher/rke-tools` image, we used the following images:
+Before `v0.1.6`, instead of using the `rancher/rke-tools` image, we used the following images:

```yaml
system_images:

@@ -100,18 +100,18 @@ nginx-65899c769f-qkhml 1/1 Running 0 17s

```

{{% /tab %}}
-{{% tab "RKE prior to v0.2.0" %}}
+{{% tab "RKE before v0.2.0" %}}

This walkthrough will demonstrate how to restore an etcd cluster from a local snapshot with the following steps:

-1. [Take a local snapshot of the cluster](#take-a-local-snapshot-of-the-cluster-rke-prior-to-v0.2.0)
-1. [Store the snapshot externally](#store-the-snapshot-externally-rke-prior-to-v0.2.0)
-1. [Simulate a node failure](#simulate-a-node-failure-rke-prior-to-v0.2.0)
-1. [Remove the Kubernetes cluster and clean the nodes](#remove-the-kubernetes-cluster-and-clean-the-nodes-rke-prior-to-v0.2.0)
-1. [Retrieve the backup and place it on a new node](#retrieve-the-backup-and-place-it-on-a-new-node-rke-prior-to-v0.2.0)
-1. [Add a new etcd node to the Kubernetes cluster](#add-a-new-etcd-node-to-the-kubernetes-cluster-rke-prior-to-v0.2.0)
-1. [Restore etcd on the new node from the backup](#restore-etcd-on-the-new-node-from-the-backup-rke-prior-to-v0.2.0)
-1. [Restore Operations on the Cluster](#restore-operations-on-the-cluster-rke-prior-to-v0.2.0)
+1. [Take a local snapshot of the cluster](#take-a-local-snapshot-of-the-cluster-rke-before-v0.2.0)
+1. [Store the snapshot externally](#store-the-snapshot-externally-rke-before-v0.2.0)
+1. [Simulate a node failure](#simulate-a-node-failure-rke-before-v0.2.0)
+1. [Remove the Kubernetes cluster and clean the nodes](#remove-the-kubernetes-cluster-and-clean-the-nodes-rke-before-v0.2.0)
+1. [Retrieve the backup and place it on a new node](#retrieve-the-backup-and-place-it-on-a-new-node-rke-before-v0.2.0)
+1. [Add a new etcd node to the Kubernetes cluster](#add-a-new-etcd-node-to-the-kubernetes-cluster-rke-before-v0.2.0)
+1. [Restore etcd on the new node from the backup](#restore-etcd-on-the-new-node-from-the-backup-rke-before-v0.2.0)
+1. [Restore Operations on the Cluster](#restore-operations-on-the-cluster-rke-before-v0.2.0)

### Example Scenario of restoring from a Local Snapshot

@@ -122,7 +122,7 @@ In this example, the Kubernetes cluster was deployed on two AWS nodes.

| node1 | 10.0.0.1 | [controlplane, worker] |
| node2 | 10.0.0.2 | [etcd] |

-<a id="take-a-local-snapshot-of-the-cluster-rke-prior-to-v0.2.0"></a>
+<a id="take-a-local-snapshot-of-the-cluster-rke-before-v0.2.0"></a>
### 1. Take a Local Snapshot of the Cluster

Back up the Kubernetes cluster by taking a local snapshot:

@@ -131,7 +131,7 @@ Back up the Kubernetes cluster by taking a local snapshot:

$ rke etcd snapshot-save --name snapshot.db --config cluster.yml
```

-<a id="store-the-snapshot-externally-rke-prior-to-v0.2.0"></a>
+<a id="store-the-snapshot-externally-rke-before-v0.2.0"></a>
### 2. Store the Snapshot Externally

After taking the etcd snapshot on `node2`, we recommend saving this backup in a persistent place. One of the options is to save the backup and `pki.bundle.tar.gz` file on an S3 bucket or tape backup.

@@ -145,7 +145,7 @@ root@node2:~# s3cmd \

s3://rke-etcd-backup/
```

-<a id="simulate-a-node-failure-rke-prior-to-v0.2.0"></a>
+<a id="simulate-a-node-failure-rke-before-v0.2.0"></a>
### 3. Simulate a Node Failure

To simulate the failure, let's power down `node2`.

@@ -159,7 +159,7 @@ root@node2:~# poweroff

| node1 | 10.0.0.1 | [controlplane, worker] |
| ~~node2~~ | ~~10.0.0.2~~ | ~~[etcd]~~ |

-<a id="remove-the-kubernetes-cluster-and-clean-the-nodes-rke-prior-to-v0.2.0"></a>
+<a id="remove-the-kubernetes-cluster-and-clean-the-nodes-rke-before-v0.2.0"></a>
### 4. Remove the Kubernetes Cluster and Clean the Nodes

The following command removes your cluster and cleans the nodes so that the cluster can be restored without any conflicts:

@@ -168,7 +168,7 @@ The following command removes your cluster and cleans the nodes so that the clus

rke remove --config rancher-cluster.yml
```

-<a id="retrieve-the-backup-and-place-it-on-a-new-node-rke-prior-to-v0.2.0"></a>
+<a id="retrieve-the-backup-and-place-it-on-a-new-node-rke-before-v0.2.0"></a>
### 5. Retrieve the Backup and Place it On a New Node

Before restoring etcd and running `rke up`, we need to retrieve the backup saved on S3 to a new node, e.g. `node3`.

@@ -190,7 +190,7 @@ root@node3:~# s3cmd get \

> **Note:** If you had multiple etcd nodes, you would have to manually sync the snapshot and `pki.bundle.tar.gz` across all of the etcd nodes in the cluster.

-<a id="add-a-new-etcd-node-to-the-kubernetes-cluster-rke-prior-to-v0.2.0"></a>
+<a id="add-a-new-etcd-node-to-the-kubernetes-cluster-rke-before-v0.2.0"></a>
### 6. Add a New etcd Node to the Kubernetes Cluster

Before updating and restoring etcd, you will need to add the new node into the Kubernetes cluster with the `etcd` role. In the `cluster.yml`, comment out the old node and add in the new node.

@@ -215,7 +215,7 @@ nodes:

- etcd
```

-<a id="restore-etcd-on-the-new-node-from-the-backup-rke-prior-to-v0.2.0"></a>
+<a id="restore-etcd-on-the-new-node-from-the-backup-rke-before-v0.2.0"></a>
### 7. Restore etcd on the New Node from the Backup

After the new node is added to the `cluster.yml`, run the `rke etcd snapshot-restore` command to launch `etcd` from the backup:

@@ -226,7 +226,7 @@ $ rke etcd snapshot-restore --name snapshot.db --config cluster.yml
|
||||
|
||||
The snapshot and `pki.bundle.tar.gz` file are expected to be saved at `/opt/rke/etcd-snapshots` on each etcd node.
|
||||
|
||||
<a id="restore-operations-on-the-cluster-rke-prior-to-v0.2.0"></a>
|
||||
<a id="restore-operations-on-the-cluster-rke-before-v0.2.0"></a>
|
||||
### 8. Restore Operations on the Cluster
|
||||
|
||||
Finally, we need to restore the operations on the cluster. We will make the Kubernetes API point to the new `etcd` by running `rke up` again using the new `cluster.yml`.
|
||||
|
||||
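A minimal sketch of this final step, reusing the same config file referenced throughout the example:

```
$ rke up --config cluster.yml
```
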
@@ -94,7 +94,7 @@ Below is an [example IAM policy](https://docs.aws.amazon.com/IAM/latest/UserGuid
For details on giving an application access to S3, refer to the AWS documentation on [Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html)

{{% /tab %}}
{{% tab "RKE prior to v0.2.0" %}}
{{% tab "RKE before v0.2.0" %}}

To save a snapshot of etcd from each etcd node defined in the cluster config file, run the `rke etcd snapshot-save` command.

@@ -30,8 +30,8 @@ time="2018-05-04T18:43:16Z" level=info msg="Created backup" name="2018-05-04T18:

|Option|Description| S3 Specific |
|---|---| --- |
|**interval_hours**| The duration in hours between recurring backups. This supersedes the `creation` option (which was used in RKE prior to v0.2.0) and will override it if both are specified.| |
|**retention**| The number of snapshots to retain before rotation. If the retention is configured in both `etcd.retention` (time period to keep snapshots in hours), which was required in RKE prior to v0.2.0, and at `etcd.backup_config.retention` (number of snapshots), the latter will be used. | |
|**interval_hours**| The duration in hours between recurring backups. This supersedes the `creation` option (which was used in RKE before v0.2.0) and will override it if both are specified.| |
|**retention**| The number of snapshots to retain before rotation. If the retention is configured in both `etcd.retention` (time period to keep snapshots in hours), which was required in RKE before v0.2.0, and at `etcd.backup_config.retention` (number of snapshots), the latter will be used. | |
|**bucket_name**| S3 bucket name where backups will be stored| * |
|**folder**| Folder inside S3 bucket where backups will be stored. This is optional. _Available as of v0.3.0_ | * |
|**access_key**| S3 access key with permission to access the backup bucket.| * |
@@ -96,11 +96,11 @@ services:
```

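As a point of reference, the options in the table above map onto `cluster.yml` roughly as follows; the nesting of the S3 options under `s3backupconfig` and all values shown are illustrative, so consult the full example on this page for the authoritative layout:

```
services:
  etcd:
    backup_config:
      interval_hours: 12        # take a recurring snapshot every 12 hours
      retention: 6              # keep the six most recent snapshots
      s3backupconfig:           # S3-specific options (marked with * in the table)
        bucket_name: rke-etcd-backup
        folder: mycluster
        access_key: <S3_ACCESS_KEY>
```
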
{{% /tab %}}
{{% tab "RKE prior to v0.2.0"%}}
{{% tab "RKE before v0.2.0"%}}

To schedule automatic recurring etcd snapshots, you can enable the `etcd-snapshot` service with [extra configuration options](#options-for-the-local-etcd-snapshot-service). `etcd-snapshot` runs in a service container alongside the `etcd` container. By default, the `etcd-snapshot` service takes a snapshot for every node that has the `etcd` role and stores it on local disk in `/opt/rke/etcd-snapshots`.

RKE saves a backup of the certificates, i.e. a file named `pki.bundle.tar.gz`, in the same location. The snapshot and pki bundle file are required for the restore process in versions prior to v0.2.0.
RKE saves a backup of the certificates, i.e. a file named `pki.bundle.tar.gz`, in the same location. The snapshot and pki bundle file are required for the restore process in versions before v0.2.0.

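For these older releases, enabling the service in `cluster.yml` looked roughly like the sketch below; it is reconstructed from the `creation` and `retention` options referenced in the table above, so treat both the keys and the values as illustrative rather than authoritative:

```
services:
  etcd:
    snapshot: true     # enable the etcd-snapshot sidecar service
    creation: 5m0s     # how often a snapshot is taken
    retention: 24h     # how long snapshots are kept before rotation
```
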
### Snapshot Service Logging

@@ -74,7 +74,7 @@ $ rke etcd snapshot-restore \
| `--ignore-docker-version` | [Disable Docker version check]({{<baseurl>}}/rke/latest/en/config-options/#supported-docker-versions) |

{{% /tab %}}
{{% tab "RKE prior to v0.2.0"%}}
{{% tab "RKE before v0.2.0"%}}

If there is a disaster with your Kubernetes cluster, you can use `rke etcd snapshot-restore` to recover your etcd. This command reverts etcd to a specific snapshot and should be run on an etcd node of the specific cluster that has suffered the disaster.

@@ -178,7 +178,7 @@ The Kubernetes cluster state, which consists of the cluster configuration file `

As of v0.2.0, RKE creates a `.rkestate` file in the same directory that has the cluster configuration file `cluster.yml`. The `.rkestate` file contains the current state of the cluster including the RKE configuration and the certificates. It is required to keep this file in order to update the cluster or perform any operation on it through RKE.

Prior to v0.2.0, RKE saved the Kubernetes cluster state as a secret. When updating the state, RKE pulls the secret, updates/changes the state and saves a new secret.
Before v0.2.0, RKE saved the Kubernetes cluster state as a secret. When updating the state, RKE pulls the secret, updates/changes the state and saves a new secret.

## Interacting with your Kubernetes cluster

@@ -5,23 +5,31 @@ weight: 5
**In this section:**

<!-- TOC -->

- [Operating System](#operating-system)
- [General Linux Requirements](#general-linux-requirements)
- [Red Hat Enterprise Linux (RHEL) / Oracle Linux (OL) / CentOS](#red-hat-enterprise-linux-rhel-oracle-enterprise-linux-ol-centos)
- [Using upstream Docker](#using-upstream-docker)
- [Using RHEL/CentOS packaged Docker](#using-rhel-centos-packaged-docker)
- [Notes about Atomic Nodes](#red-hat-atomic)
- [OpenSSH version](#openssh-version)
- [Creating a Docker Group](#creating-a-docker-group)
- [Flatcar Container Linux](#flatcar-container-linux)
- [General Linux Requirements](#general-linux-requirements)
- [SUSE Linux Enterprise Server (SLES) / openSUSE](#suse-linux-enterprise-server-sles--opensuse)
- [Using Upstream Docker](#using-upstream-docker)
- [Using SUSE/openSUSE packaged Docker](#using-suseopensuse-packaged-docker)
- [Adding the Software Repository for Docker](#adding-the-software-repository-for-docker)
- [openSUSE MicroOS/Kubic (Atomic)](#opensuse-microoskubic-atomic)
- [openSUSE MicroOS](#opensuse-microos)
- [openSUSE Kubic](#opensuse-kubic)
- [Red Hat Enterprise Linux (RHEL) / Oracle Linux (OL) / CentOS](#red-hat-enterprise-linux-rhel--oracle-linux-ol--centos)
- [Using upstream Docker](#using-upstream-docker-1)
- [Using RHEL/CentOS packaged Docker](#using-rhelcentos-packaged-docker)
- [Red Hat Atomic](#red-hat-atomic)
- [OpenSSH version](#openssh-version)
- [Creating a Docker Group](#creating-a-docker-group)
- [Flatcar Container Linux](#flatcar-container-linux)
- [Software](#software)
- [OpenSSH](#openssh)
- [Kubernetes](#kubernetes)
- [Docker](#docker)
- [Installing Docker](#installing-docker)
- [Checking the Installed Docker Version](#checking-the-installed-docker-version)
- [Ports](#ports)
- [Opening port TCP/6443 using `iptables`](#opening-port-tcp-6443-using-iptables)
- [Opening port TCP/6443 using `firewalld`](#opening-port-tcp-6443-using-firewalld)
- [Opening port TCP/6443 using `iptables`](#opening-port-tcp6443-using-iptables)
- [Opening port TCP/6443 using `firewalld`](#opening-port-tcp6443-using-firewalld)
- [SSH Server Configuration](#ssh-server-configuration)

<!-- /TOC -->
@@ -99,6 +107,80 @@ xt_tcpudp |
net.bridge.bridge-nf-call-iptables=1
```

### SUSE Linux Enterprise Server (SLES) / openSUSE

If you are using SUSE Linux Enterprise Server or openSUSE, follow the instructions below.

#### Using upstream Docker
If you are using upstream Docker, the package name is `docker-ce` or `docker-ee`. You can check the installed package by executing:

```
rpm -q docker-ce
```

When using the upstream Docker packages, please follow [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user).

#### Using SUSE/openSUSE packaged Docker
If you are using the Docker package supplied by SUSE/openSUSE, the package name is `docker`. You can check the installed package by executing:

```
rpm -q docker
```

#### Adding the Software Repository for Docker
In SUSE Linux Enterprise Server 15 SP2, Docker is found in the Containers module.
This module needs to be added before installing Docker.

To list the available modules and their activation commands, run `SUSEConnect --list-extensions`:
```
node:~ # SUSEConnect --list-extensions
AVAILABLE EXTENSIONS AND MODULES

Basesystem Module 15 SP2 x86_64 (Activated)
Deactivate with: SUSEConnect -d -p sle-module-basesystem/15.2/x86_64

Containers Module 15 SP2 x86_64
Activate with: SUSEConnect -p sle-module-containers/15.2/x86_64
```
Run this SUSEConnect command to activate the Containers module.
```
node:~ # SUSEConnect -p sle-module-containers/15.2/x86_64
Registering system to registration proxy https://rmt.seader.us

Updating system details on https://rmt.seader.us ...

Activating sle-module-containers 15.2 x86_64 ...
-> Adding service to system ...
-> Installing release package ...

Successfully registered system
```

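The page does not show the installation step itself; if Docker is not yet installed after activating the Containers module, a typical follow-up on SLES (shown here only as an illustration) would be:

```
# Install the SUSE-packaged Docker and start it at boot.
node:~ # zypper install docker
node:~ # systemctl enable --now docker
```
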
To run Docker CLI commands as your user, add that user to the `docker` group.
It is preferred not to use the root user for this.

```
usermod -aG docker <user_name>
```

To verify that the user is correctly configured, log out of the node, log back in using SSH or your preferred method, and execute `docker ps`:

```
ssh user@node
user@node:~> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
user@node:~>
```
### openSUSE MicroOS/Kubic (Atomic)

Consult the project pages for openSUSE MicroOS and Kubic for installation instructions.

#### openSUSE MicroOS
Designed to host container workloads with automated administration and patching. Installing openSUSE MicroOS gives you a quick, small environment for deploying containers, or any other workload that benefits from transactional updates. As a rolling release distribution, the software is always up-to-date.

https://microos.opensuse.org

#### openSUSE Kubic
Based on MicroOS, but not a rolling release distribution. It is designed with the same goals in mind, and is also a Certified Kubernetes Distribution.

https://kubic.opensuse.org

Installation instructions:
https://kubic.opensuse.org/blog/2021-02-08-MicroOS-Kubic-Rancher-RKE/

### Red Hat Enterprise Linux (RHEL) / Oracle Linux (OL) / CentOS

If using Red Hat Enterprise Linux, Oracle Linux or CentOS, you cannot use the `root` user as the [SSH user]({{<baseurl>}}/rke/latest/en/config-options/nodes/#ssh-user) due to [Bugzilla 1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). Please follow the instructions below on how to set up Docker correctly, based on the way you installed Docker on the node.

@@ -46,7 +46,7 @@ This file is created in the same directory that has the cluster configuration fi

It is required to keep the `cluster.rkestate` file to perform any operation on the cluster through RKE, or when upgrading a cluster last managed via RKE v0.2.0 or later.
{{% /tab %}}
{{% tab "RKE prior to v0.2.0" %}}
{{% tab "RKE before v0.2.0" %}}
Ensure that the `kube_config_cluster.yml` file is present in the working directory.

RKE saves the Kubernetes cluster state as a secret. When updating the state, RKE pulls the secret, updates or changes the state, and saves a new secret. The `kube_config_cluster.yml` file is required for upgrading a cluster last managed via RKE v0.1.x.
@@ -103,7 +103,7 @@ In addition, if neither `kubernetes_version` nor `system_images` are configured

As of v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, then RKE will error out.

Prior to v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, the default version from the supported list is used.
Before v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, the default version from the supported list is used.

If you want to use a different version from the supported list, please use the [system images]({{<baseurl>}}/rke/latest/en/config-options/system-images/) option.

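For context, `kubernetes_version` is a top-level directive in `cluster.yml`; the value below is purely illustrative and must match an entry in the supported versions list for your RKE release:

```
kubernetes_version: "v1.18.12-rancher1-1"
```
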
@@ -113,7 +113,7 @@ In RKE, `kubernetes_version` is used to map the version of Kubernetes to the def

For RKE v0.3.0+, the service defaults are located [here](https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_service_options.go).

For RKE prior to v0.3.0, the service defaults are located [here](https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go). Note: The version in the path of the service defaults file corresponds to a Rancher version. Therefore, for Rancher v2.1.x, [this file](https://github.com/rancher/types/blob/release/v2.1/apis/management.cattle.io/v3/k8s_defaults.go) should be used.
For RKE before v0.3.0, the service defaults are located [here](https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go). Note: The version in the path of the service defaults file corresponds to a Rancher version. Therefore, for Rancher v2.1.x, [this file](https://github.com/rancher/types/blob/release/v2.1/apis/management.cattle.io/v3/k8s_defaults.go) should be used.

### Service Upgrades

@@ -65,7 +65,7 @@ For more information on configuring the number of replicas for each addon, refer
For an example showing how to configure the addons, refer to the [example cluster.yml.]({{<baseurl>}}/rke/latest/en/upgrades/configuring-strategy/#example-cluster-yml)

{{% /tab %}}
{{% tab "RKE prior to v1.1.0" %}}
{{% tab "RKE before v1.1.0" %}}

When a cluster is upgraded with `rke up`, using the default options, the following process is used: