diff --git a/content/k3s/latest/en/advanced/_index.md b/content/k3s/latest/en/advanced/_index.md index 633399b6614..f48955149c4 100644 --- a/content/k3s/latest/en/advanced/_index.md +++ b/content/k3s/latest/en/advanced/_index.md @@ -356,7 +356,7 @@ The `--disable-selinux` option should not be used. It is deprecated and will be Using a custom `--data-dir` under SELinux is not supported. To customize it, you would most likely need to write your own custom policy. For guidance, you could refer to the [containers/container-selinux](https://github.com/containers/container-selinux) repository, which contains the SELinux policy files for Container Runtimes, and the [rancher/k3s-selinux](https://github.com/rancher/k3s-selinux) repository, which contains the SELinux policy for K3s . {{%/tab%}} -{{% tab "K3s prior to v1.19.1+k3s1" %}} +{{% tab "K3s before v1.19.1+k3s1" %}} SELinux is automatically enabled for the built-in containerd. diff --git a/content/k3s/latest/en/installation/_index.md b/content/k3s/latest/en/installation/_index.md index b141bcce42b..91997c7a2a9 100644 --- a/content/k3s/latest/en/installation/_index.md +++ b/content/k3s/latest/en/installation/_index.md @@ -9,7 +9,7 @@ This section contains instructions for installing K3s in various environments. P [High Availability with an External DB]({{}}/k3s/latest/en/installation/ha/) details how to set up an HA K3s cluster backed by an external datastore such as MySQL, PostgreSQL, or etcd. -[High Availability with Embedded DB (Experimental)]({{}}/k3s/latest/en/installation/ha-embedded/) details how to set up an HA K3s cluster that leverages a built-in distributed database. +[High Availability with Embedded DB]({{}}/k3s/latest/en/installation/ha-embedded/) details how to set up an HA K3s cluster that leverages a built-in distributed database. [Air-Gap Installation]({{}}/k3s/latest/en/installation/airgap/) details how to set up K3s in environments that do not have direct access to the Internet. diff --git a/content/k3s/latest/en/installation/airgap/_index.md b/content/k3s/latest/en/installation/airgap/_index.md index 93aed208fab..91d37c00830 100644 --- a/content/k3s/latest/en/installation/airgap/_index.md +++ b/content/k3s/latest/en/installation/airgap/_index.md @@ -69,7 +69,7 @@ INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetok {{% /tab %}} {{% tab "High Availability Configuration" %}} -Reference the [High Availability with an External DB]({{< baseurl >}}/k3s/latest/en/installation/ha) or [High Availability with Embedded DB (Experimental)]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded) guides. You will be tweaking install commands so you specify `INSTALL_K3S_SKIP_DOWNLOAD=true` and run your install script locally instead of via curl. You will also utilize `INSTALL_K3S_EXEC='args'` to supply any arguments to k3s. +Reference the [High Availability with an External DB]({{< baseurl >}}/k3s/latest/en/installation/ha) or [High Availability with Embedded DB]({{< baseurl >}}/k3s/latest/en/installation/ha-embedded) guides. You will be tweaking install commands so you specify `INSTALL_K3S_SKIP_DOWNLOAD=true` and run your install script locally instead of via curl. You will also utilize `INSTALL_K3S_EXEC='args'` to supply any arguments to k3s. 
For example, step two of the High Availability with an External DB guide mentions the following: diff --git a/content/k3s/latest/en/installation/datastore/_index.md b/content/k3s/latest/en/installation/datastore/_index.md index f7baacab835..059d73e16fe 100644 --- a/content/k3s/latest/en/installation/datastore/_index.md +++ b/content/k3s/latest/en/installation/datastore/_index.md @@ -7,7 +7,7 @@ The ability to run Kubernetes using a datastore other than etcd sets K3s apart f * If your team doesn't have expertise in operating etcd, you can choose an enterprise-grade SQL database like MySQL or PostgreSQL * If you need to run a simple, short-lived cluster in your CI/CD environment, you can use the embedded SQLite database -* If you wish to deploy Kubernetes on the edge and require a highly available solution but can't afford the operational overhead of managing a database at the edge, you can use K3s's embedded HA datastore built on top of embedded etcd (currently experimental) +* If you wish to deploy Kubernetes on the edge and require a highly available solution but can't afford the operational overhead of managing a database at the edge, you can use K3s's embedded HA datastore built on top of embedded etcd. K3s supports the following datastore options: @@ -16,7 +16,7 @@ K3s supports the following datastore options: * [MySQL](https://www.mysql.com/) (certified against version 5.7) * [MariaDB](https://mariadb.org/) (certified against version 10.3.20) * [etcd](https://etcd.io/) (certified against version 3.3.15) -* Embedded etcd for High Availability (experimental) +* Embedded etcd for High Availability ### External Datastore Configuration Parameters If you wish to use an external datastore such as PostgreSQL, MySQL, or etcd you must set the `datastore-endpoint` parameter so that K3s knows how to connect to it. You may also specify parameters to configure the authentication and encryption of the connection. The below table summarizes these parameters, which can be passed as either CLI flags or environment variables. diff --git a/content/k3s/latest/en/security/_index.md b/content/k3s/latest/en/security/_index.md new file mode 100644 index 00000000000..7468e909504 --- /dev/null +++ b/content/k3s/latest/en/security/_index.md @@ -0,0 +1,9 @@ +--- +title: "Security" +weight: 90 +--- + +This section describes the methodology and means of securing a K3s cluster. It's broken into 2 sections. + +* [Hardening Guide](./hardening_guide/) +* [CIS Benchmark Self-Assessment Guide](./self_assessment/) diff --git a/content/k3s/latest/en/security/hardening_guide/_index.md b/content/k3s/latest/en/security/hardening_guide/_index.md new file mode 100644 index 00000000000..aa140f48ee3 --- /dev/null +++ b/content/k3s/latest/en/security/hardening_guide/_index.md @@ -0,0 +1,544 @@ +--- +title: "CIS Hardening Guide" +weight: 80 +--- + +This document provides prescriptive guidance for hardening a production installation of K3s. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Information Security (CIS). + +K3s has a number of security mitigations applied and turned on by default and will pass a number of the Kubernetes CIS controls without modification. There are some notable exceptions to this that require manual intervention to fully comply with the CIS Benchmark: + +1. K3s will not modify the host operating system. Any host-level modifications will need to be done manually. +2. 
Certain CIS policy controls for PodSecurityPolicies and NetworkPolicies will restrict the functionality of this cluster. You must opt into having K3s configure these by adding the appropriate options (enabling of admission plugins) to your command-line flags or configuration file as well as manually applying appropriate policies. Further detail in the sections below. + +The first section (1.1) of the CIS Benchmark concerns itself primarily with pod manifest permissions and ownership. K3s doesn't utilize these for the core components since everything is packaged into a single binary. + +## Host-level Requirements + +There are two areas of host-level requirements: kernel parameters and etcd process/directory configuration. These are outlined in this section. + +### Ensure `protect-kernel-defaults` is set + +This is a kubelet flag that will cause the kubelet to exit if the required kernel parameters are unset or are set to values that are different from the kubelet's defaults. + +> **Note:** `protect-kernel-defaults` is exposed as a top-level flag for K3s. + +#### Set kernel parameters + +Create a file called `/etc/sysctl.d/90-kubelet.conf` and add the snippet below. Then run `sysctl -p /etc/sysctl.d/90-kubelet.conf`. + +```bash +vm.panic_on_oom=0 +vm.overcommit_memory=1 +kernel.panic=10 +kernel.panic_on_oops=1 +``` + +## Kubernetes Runtime Requirements + +The runtime requirements to comply with the CIS Benchmark are centered around pod security (PSPs) and network policies. These are outlined in this section. K3s doesn't apply any default PSPs or network policies however K3s ships with a controller that is meant to apply a given set of network policies. By default, K3s runs with the "NodeRestriction" admission controller. To enable PSPs, add the following to the K3s start command: `--kube-apiserver-arg="enable-admission-plugins=NodeRestriction,PodSecurityPolicy,ServiceAccount"`. This will have the effect of maintaining the "NodeRestriction" plugin as well as enabling the "PodSecurityPolicy". + +### PodSecurityPolicies + +When PSPs are enabled, a policy can be applied to satisfy the necessary controls described in section 5.2 of the CIS Benchmark. + +Here's an example of a compliant PSP. + +```yaml +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: cis1.5-compliant-psp +spec: + privileged: false # CIS - 5.2.1 + allowPrivilegeEscalation: false # CIS - 5.2.5 + requiredDropCapabilities: # CIS - 5.2.7/8/9 + - ALL + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + - 'persistentVolumeClaim' + hostNetwork: false # CIS - 5.2.4 + hostIPC: false # CIS - 5.2.3 + hostPID: false # CIS - 5.2.2 + runAsUser: + rule: 'MustRunAsNonRoot' # CIS - 5.2.6 + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + readOnlyRootFilesystem: false +``` + +Before the above PSP to be effective, we need to create a couple ClusterRoles and ClusterRole. We also need to include a "system unrestricted policy" which is needed for system-level pods that require additional privileges. + +These can be combined with the PSP yaml above and NetworkPolicy yaml below into a single file and placed in the `/var/lib/rancher/k3s/server/manifests` directory. Below is an example of a `policy.yaml` file. 
+ +```yaml +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: cis1.5-compliant-psp +spec: + privileged: false + allowPrivilegeEscalation: false + requiredDropCapabilities: + - ALL + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + - 'persistentVolumeClaim' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + rule: 'MustRunAsNonRoot' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + readOnlyRootFilesystem: false +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: psp:restricted + labels: + addonmanager.kubernetes.io/mode: EnsureExists +rules: +- apiGroups: ['extensions'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - cis1.5-compliant-psp +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: default:restricted + labels: + addonmanager.kubernetes.io/mode: EnsureExists +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp:restricted +subjects: +- kind: Group + name: system:authenticated + apiGroup: rbac.authorization.k8s.io +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: intra-namespace + namespace: kube-system +spec: + podSelector: {} + ingress: + - from: + - namespaceSelector: + matchLabels: + name: kube-system +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: intra-namespace + namespace: default +spec: + podSelector: {} + ingress: + - from: + - namespaceSelector: + matchLabels: + name: default +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: intra-namespace + namespace: kube-public +spec: + podSelector: {} + ingress: + - from: + - namespaceSelector: + matchLabels: + name: kube-public +--- +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: system-unrestricted-psp +spec: + allowPrivilegeEscalation: true + allowedCapabilities: + - '*' + fsGroup: + rule: RunAsAny + hostIPC: true + hostNetwork: true + hostPID: true + hostPorts: + - max: 65535 + min: 0 + privileged: true + runAsUser: + rule: RunAsAny + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + volumes: + - '*' +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: system-unrestricted-node-psp-rolebinding +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: system-unrestricted-psp-role +subjects: +- apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:nodes +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: system-unrestricted-psp-role +rules: +- apiGroups: + - policy + resourceNames: + - system-unrestricted-psp + resources: + - podsecuritypolicies + verbs: + - use +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: system-unrestricted-svc-acct-psp-rolebinding + namespace: kube-system +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: system-unrestricted-psp-role +subjects: +- apiGroup: rbac.authorization.k8s.io + kind: Group + name: system:serviceaccounts +``` + +> **Note:** The Kubernetes critical additions such as CNI, DNS, and Ingress are ran as pods in the `kube-system` namespace. Therefore, this namespace will have a policy that is less restrictive so that these components can run properly. 
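+
+Assuming the combined file is saved as `policy.yaml` (as in the example above), a minimal sketch of placing it where K3s will automatically apply it on a server node looks like this; the paths assume a default installation:
+
+```bash
+# Sketch: copy the combined policy file into the K3s auto-deploy manifests directory.
+# K3s applies manifests found in this directory automatically on server nodes.
+sudo mkdir -p /var/lib/rancher/k3s/server/manifests
+sudo cp policy.yaml /var/lib/rancher/k3s/server/manifests/policy.yaml
+```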
+ +### NetworkPolicies + +> NOTE: K3s deploys kube-router for network policy enforcement. Support for this in K3s is currently experimental. + +CIS requires that all namespaces have a network policy applied that reasonably limits traffic into namespaces and pods. + +Here's an example of a compliant network policy. + +```yaml +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: intra-namespace + namespace: kube-system +spec: + podSelector: {} + ingress: + - from: + - namespaceSelector: + matchLabels: + name: kube-system +``` + +> **Note:** Operators must manage network policies as normal for additional namespaces that are created. + +## Known Issues +The following are controls that K3s currently does not pass by default. Each gap will be explained, along with a note clarifying whether it can be passed through manual operator intervention, or if it will be addressed in a future release of K3s. + + +### Control 1.2.15 +Ensure that the admission control plugin `NamespaceLifecycle` is set. +
+
+Rationale
+Setting the admission control policy to `NamespaceLifecycle` ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating new objects. This is recommended to enforce the integrity of the namespace termination process and also for the availability of newer objects.
+
+This can be remediated by adding `NamespaceLifecycle` to the list given to `enable-admission-plugins=` and passing that list via the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
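+
+For instance, a server invocation along the following lines enables the plugin. This is a sketch mirroring the combined example at the end of this document; adjust the plugin list to match your configuration:
+
+```bash
+# Sketch: enable NamespaceLifecycle together with the other recommended admission plugins.
+k3s server \
+  --kube-apiserver-arg='enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,ServiceAccount'
+```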
+ +### Control 1.2.16 (mentioned above) +Ensure that the admission control plugin `PodSecurityPolicy` is set. +
+
+Rationale
+A Pod Security Policy is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. The `PodSecurityPolicy` objects define a set of conditions that a pod must run with in order to be accepted into the system. Pod Security Policies are composed of settings and strategies that control the security features a pod has access to, and hence they must be used to control pod access permissions.
+
+This can be remediated by adding `PodSecurityPolicy` to the list given to `enable-admission-plugins=` and passing that list via the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below.
+ +### Control 1.2.22 +Ensure that the `--audit-log-path` argument is set. +
+
+Rationale
+Auditing the Kubernetes API Server provides a security-relevant, chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. Even though Kubernetes currently provides only basic audit capabilities, auditing should be enabled. You can enable it by setting an appropriate audit log path.
+
+This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument of `k3s server`. An example can be found below.
+ +### Control 1.2.23 +Ensure that the `--audit-log-maxage` argument is set to 30 or as appropriate. +
+Rationale +Retaining logs for at least 30 days ensures that you can go back in time and investigate or correlate any events. Set your audit log retention period to 30 days or as per your business requirements. + +This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument to `k3s server`. An example can be found below. +
+ +### Control 1.2.24 +Ensure that the `--audit-log-maxbackup` argument is set to 10 or as appropriate. +
+
+Rationale
+Kubernetes automatically rotates the log files. Retaining old log files ensures that you have sufficient log data available for carrying out any investigation or correlation. For example, if you have set the file size to 100 MB and the number of old log files to keep to 10, you would have approximately 1 GB of log data that you could potentially use for your analysis.
+
+This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument of `k3s server`. An example can be found below.
+ +### Control 1.2.25 +Ensure that the `--audit-log-maxsize` argument is set to 100 or as appropriate. +
+
+Rationale
+Kubernetes automatically rotates the log files. Retaining old log files ensures that you have sufficient log data available for carrying out any investigation or correlation. If you have set the file size to 100 MB and the number of old log files to keep to 10, you would have approximately 1 GB of log data that you could potentially use for your analysis.
+
+This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument of `k3s server`. An example can be found below.
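+
+Taken together, controls 1.2.22 through 1.2.25 can be addressed with a set of audit-log flags such as the following sketch; the path and retention values mirror the combined example at the end of this guide and should be adjusted to your requirements:
+
+```bash
+# Sketch: audit log configuration covering controls 1.2.22 - 1.2.25.
+k3s server \
+  --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log' \
+  --kube-apiserver-arg='audit-log-maxage=30' \
+  --kube-apiserver-arg='audit-log-maxbackup=10' \
+  --kube-apiserver-arg='audit-log-maxsize=100'
+```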
+ +### Control 1.2.26 +Ensure that the `--request-timeout` argument is set as appropriate. +
+
+Rationale
+Setting a global request timeout allows extending the API server request timeout limit to a duration appropriate to the user's connection speed. By default, it is set to 60 seconds, which might be problematic on slower connections, making cluster resources inaccessible once the data volume for requests exceeds what can be transmitted in 60 seconds. However, setting this timeout limit too high can exhaust the API server's resources, making it prone to Denial-of-Service attacks. Hence, it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed.
+
+This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument of `k3s server`. An example can be found below.
+ +### Control 1.2.27 +Ensure that the `--service-account-lookup` argument is set to true. +
+
+Rationale
+If `--service-account-lookup` is not enabled, the apiserver only verifies that the authentication token is valid, and does not validate that the service account token mentioned in the request is actually present in etcd. This allows using a service account token even after the corresponding service account is deleted. This is an example of a time-of-check to time-of-use security issue.
+
+This can be remediated by passing this argument as a value to the `--kube-apiserver-arg=` argument of `k3s server`. An example can be found below.
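+
+For example, the request timeout from control 1.2.26 and service account lookup can be set on the same command line; this sketch mirrors the values used in the combined example at the end of this guide:
+
+```bash
+# Sketch: set a global request timeout and enforce service account lookup.
+k3s server \
+  --kube-apiserver-arg='request-timeout=300s' \
+  --kube-apiserver-arg='service-account-lookup=true'
+```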
+ +### Control 1.2.33 +Ensure that the `--encryption-provider-config` argument is set as appropriate. +
+
+Rationale
+Where `etcd` encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, `aescbc`, `kms`, and `secretbox` are likely to be appropriate options.
+
+This can be remediated by starting `k3s server` with the `--secrets-encryption=true` flag, which configures encryption of Secrets at rest for the cluster. An example can be found below.
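+
+A minimal sketch of the relevant flag is shown below. When `--secrets-encryption` is enabled, K3s generates and manages the encryption provider configuration itself, so you do not normally need to supply a separate `--encryption-provider-config` file:
+
+```bash
+# Sketch: enable encryption of Secrets at rest; K3s manages the provider configuration.
+k3s server \
+  --secrets-encryption=true
+```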
+ +### Control 1.2.34 +Ensure that encryption providers are appropriately configured. +
+Rationale +`etcd` is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures. + +This can be remediated by passing a valid configuration to `k3s` as outlined above. +
+ +### Control 1.3.1 +Ensure that the `--terminated-pod-gc-threshold` argument is set as appropriate. +
+
+Rationale
+Garbage collection is important to ensure sufficient resource availability and to avoid degraded performance and availability. In the worst case, the system might crash or just be unusable for a long period of time. The default threshold for garbage collection is 12,500 terminated pods, which might be too high for your system to sustain. Based on your system resources and tests, choose an appropriate threshold value to activate garbage collection.
+
+Because `terminated-pod-gc-threshold` is a kube-controller-manager flag, this can be remediated by passing this argument as a value to the `--kube-controller-manager-arg=` argument of `k3s server`. An example can be found below.
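+
+For example, the following sketch uses the threshold of 10 from the combined example at the end of this guide; pick a value appropriate for your environment:
+
+```bash
+# Sketch: lower the terminated pod garbage collection threshold.
+k3s server \
+  --kube-controller-manager-arg='terminated-pod-gc-threshold=10'
+```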
+ +### Control 3.2.1 +Ensure that a minimal audit policy is created (Scored) +
+Rationale +Logging is an important detective control for all systems, to detect potential unauthorized access. + +This can be remediated by passing controls 1.2.22 - 1.2.25 and verifying their efficacy. +
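+
+Beyond the audit-log flags above, a minimal audit policy can also be supplied. The following sketch is illustrative rather than prescribed by this guide: the policy file path and the single Metadata-level rule are assumptions, and `audit-policy-file` is a standard kube-apiserver flag passed through via `--kube-apiserver-arg`:
+
+```bash
+# Illustrative sketch: write a minimal audit policy and point the API server at it.
+# The file path and the Metadata-level rule are assumptions; adjust to your needs.
+cat <<'EOF' | sudo tee /var/lib/rancher/k3s/server/audit-policy.yaml
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+  - level: Metadata
+EOF
+
+k3s server \
+  --kube-apiserver-arg='audit-policy-file=/var/lib/rancher/k3s/server/audit-policy.yaml' \
+  --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log'
+```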
+ + +### Control 4.2.7 +Ensure that the `--make-iptables-util-chains` argument is set to true. +
+
+Rationale
+Kubelets can automatically manage the required changes to iptables based on how you choose your networking options for the pods. It is recommended to let kubelets manage the changes to iptables. This ensures that the iptables configuration remains in sync with the pods' networking configuration. Manually configuring iptables with dynamic pod network configuration changes might hamper communication between pods/containers and to the outside world. Your iptables rules might be too restrictive or too open.
+
+Because `make-iptables-util-chains` is a kubelet flag, this can be remediated by passing this argument as a value to the `--kubelet-arg=` argument of `k3s server`. An example can be found below.
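+
+For example:
+
+```bash
+# Sketch: ensure the kubelet manages iptables chains; mirrors the combined example below.
+k3s server \
+  --kubelet-arg='make-iptables-util-chains=true'
+```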
+ +### Control 5.1.5 +Ensure that default service accounts are not actively used. (Scored) +
+Rationale + +Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. + +Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. + +The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments. +
+ +The remediation for this is to update the `automountServiceAccountToken` field to `false` for the `default` service account in each namespace. + +For `default` service accounts in the built-in namespaces (`kube-system`, `kube-public`, `kube-node-lease`, and `default`), K3s does not automatically do this. You can manually update this field on these service accounts to pass the control. + +## Control Plane Execution and Arguments + +Listed below are the K3s control plane components and the arguments they're given at start, by default. Commented to their right is the CIS 1.5 control that they satisfy. + +```bash +kube-apiserver + --advertise-port=6443 + --allow-privileged=true + --anonymous-auth=false # 1.2.1 + --api-audiences=unknown + --authorization-mode=Node,RBAC + --bind-address=127.0.0.1 + --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs + --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt # 1.2.31 + --enable-admission-plugins=NodeRestriction,PodSecurityPolicy # 1.2.17 + --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt # 1.2.32 + --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt # 1.2.29 + --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key # 1.2.29 + --etcd-servers=https://127.0.0.1:2379 + --insecure-port=0 # 1.2.19 + --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt + --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt + --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key + --profiling=false # 1.2.21 + --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt + --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key + --requestheader-allowed-names=system:auth-proxy + --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt + --requestheader-extra-headers-prefix=X-Remote-Extra- + --requestheader-group-headers=X-Remote-Group + --requestheader-username-headers=X-Remote-User + --secure-port=6444 # 1.2.20 + --service-account-issuer=k3s + --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key # 1.2.28 + --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key + --service-cluster-ip-range=10.43.0.0/16 + --storage-backend=etcd3 + --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt # 1.2.30 + --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key # 1.2.30 +``` + +```bash +kube-controller-manager + --address=127.0.0.1 + --allocate-node-cidrs=true + --bind-address=127.0.0.1 # 1.3.7 + --cluster-cidr=10.42.0.0/16 + --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt + --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key + --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig + --port=10252 + --profiling=false # 1.3.2 + --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt # 1.3.5 + --secure-port=0 + --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key # 1.3.4 + --use-service-account-credentials=true # 1.3.3 +``` + +```bash +kube-scheduler + --address=127.0.0.1 + --bind-address=127.0.0.1 # 1.4.2 + --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig + --port=10251 + --profiling=false # 1.4.1 + --secure-port=0 +``` + +```bash +kubelet + --address=0.0.0.0 + --anonymous-auth=false # 4.2.1 + --authentication-token-webhook=true + --authorization-mode=Webhook # 4.2.2 + --cgroup-driver=cgroupfs 
+ --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt # 4.2.3 + --cloud-provider=external + --cluster-dns=10.43.0.10 + --cluster-domain=cluster.local + --cni-bin-dir=/var/lib/rancher/k3s/data/223e6420f8db0d8828a8f5ed3c44489bb8eb47aa71485404f8af8c462a29bea3/bin + --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d + --container-runtime-endpoint=/run/k3s/containerd/containerd.sock + --container-runtime=remote + --containerd=/run/k3s/containerd/containerd.sock + --eviction-hard=imagefs.available<5%,nodefs.available<5% + --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% + --fail-swap-on=false + --healthz-bind-address=127.0.0.1 + --hostname-override=hostname01 + --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig + --kubelet-cgroups=/systemd/system.slice + --node-labels= + --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests + --protect-kernel-defaults=true # 4.2.6 + --read-only-port=0 # 4.2.4 + --resolv-conf=/run/systemd/resolve/resolv.conf + --runtime-cgroups=/systemd/system.slice + --serialize-image-pulls=false + --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt # 4.2.10 + --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key # 4.2.10 +``` + +The command below is an example of how the outlined remediations can be applied. + +```bash +k3s server \ + --protect-kernel-defaults=true \ + --secrets-encryption=true \ + --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log' \ + --kube-apiserver-arg='audit-log-maxage=30' \ + --kube-apiserver-arg='audit-log-maxbackup=10' \ + --kube-apiserver-arg='audit-log-maxsize=100' \ + --kube-apiserver-arg='request-timeout=300s' \ + --kube-apiserver-arg='service-account-lookup=true' \ + --kube-apiserver-arg='enable-admission-plugins=NodeRestriction,PodSecurityPolicy,NamespaceLifecycle,ServiceAccount' \ + --kube-controller-manager-arg='terminated-pod-gc-threshold=10' \ + --kube-controller-manager-arg='use-service-account-credentials=true' \ + --kubelet-arg='streaming-connection-idle-timeout=5m' \ + --kubelet-arg='make-iptables-util-chains=true' +``` + +## Conclusion + +If you have followed this guide, your K3s cluster will be configured to comply with the CIS Kubernetes Benchmark. You can review the [CIS Benchmark Self-Assessment Guide](../self_assessment/) to understand the expectations of each of the benchmarks and how you can do the same on your cluster. diff --git a/content/k3s/latest/en/security/self_assessment/_index.md b/content/k3s/latest/en/security/self_assessment/_index.md new file mode 100644 index 00000000000..013da6db076 --- /dev/null +++ b/content/k3s/latest/en/security/self_assessment/_index.md @@ -0,0 +1,2497 @@ +--- +title: "CIS Self Assessment Guide" +weight: 90 +--- + + +### CIS Kubernetes Benchmark v1.5 - K3s v1.17, v1.18, & v1.19 + +#### Overview + +This document is a companion to the K3s security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of K3s, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the CIS Kubernetes benchmark. It is to be used by K3s operators, security teams, auditors, and decision-makers. + +This guide is specific to the **v1.17**, **v1.18**, and **v1.19** release line of K3s and the **v1.5.1** release of the CIS Kubernetes Benchmark. 
+ +For more detail about each control, including more detailed descriptions and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.5. You can download the benchmark after logging in to [CISecurity.org](https://www.cisecurity.org/benchmark/kubernetes/). + +#### Testing controls methodology + +Each control in the CIS Kubernetes Benchmark was evaluated against a K3s cluster that was configured according to the accompanying hardening guide. + +Where control audits differ from the original CIS benchmark, the audit commands specific to K3s are provided for testing. + +These are the possible results for each control: + +- **Pass** - The K3s cluster under test passed the audit outlined in the benchmark. +- **Not Applicable** - The control is not applicable to K3s because of how it is designed to operate. The remediation section will explain why this is so. +- **Not Scored - Operator Dependent** - The control is not scored in the CIS benchmark and it depends on the cluster's use case or some other factor that must be determined by the cluster operator. These controls have been evaluated to ensure K3s does not prevent their implementation, but no further configuration or auditing of the cluster under test has been performed. + +This guide makes the assumption that K3s is running as a Systemd unit. Your installation may vary and will require you to adjust the "audit" commands to fit your scenario. + +### Controls + +--- +## 1 Master Node Security Configuration +### 1.1 Master Node Configuration Files + +#### 1.1.1 +Ensure that the API server pod specification file permissions are set to `644` or more restrictive (Scored) +
+Rationale +The API server pod specification file controls various parameters that set the behavior of the API server. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Not Applicable + + +#### 1.1.2 +Ensure that the API server pod specification file ownership is set to `root:root` (Scored) +
+Rationale +The API server pod specification file controls various parameters that set the behavior of the API server. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. +
+ +**Result:** Not Applicable + + +#### 1.1.3 +Ensure that the controller manager pod specification file permissions are set to `644` or more restrictive (Scored) +
+Rationale +The controller manager pod specification file controls various parameters that set the behavior of the Controller Manager on the master node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Not Applicable + + +#### 1.1.4 +Ensure that the controller manager pod specification file ownership is set to `root:root` (Scored) +
+Rationale +The controller manager pod specification file controls various parameters that set the behavior of various components of the master node. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. +
+ +**Result:** Not Applicable + + +#### 1.1.5 +Ensure that the scheduler pod specification file permissions are set to `644` or more restrictive (Scored) +
+Rationale +The scheduler pod specification file controls various parameters that set the behavior of the Scheduler service in the master node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Not Applicable + + +#### 1.1.6 +Ensure that the scheduler pod specification file ownership is set to `root:root` (Scored) +
+Rationale +The scheduler pod specification file controls various parameters that set the behavior of the kube-scheduler service in the master node. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. +
+ +**Result:** Not Applicable + + +#### 1.1.7 +Ensure that the etcd pod specification file permissions are set to `644` or more restrictive (Scored) +
+
+Rationale
+The etcd pod specification file `/var/lib/rancher/k3s/agent/pod-manifests/etcd.yaml` controls various parameters that set the behavior of the etcd service on the master node. etcd is a highly available key-value store which Kubernetes uses for persistent storage of all of its REST API objects. You should restrict the file's permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system.
+ +**Result:** Not Applicable + + +#### 1.1.8 +Ensure that the etcd pod specification file ownership is set to `root:root` (Scored) +
+
+Rationale
+The etcd pod specification file `/var/lib/rancher/k3s/agent/pod-manifests/etcd.yaml` controls various parameters that set the behavior of the etcd service on the master node. etcd is a highly available key-value store which Kubernetes uses for persistent storage of all of its REST API objects. You should set the file's ownership to maintain the integrity of the file. The file should be owned by `root:root`.
+ +**Result:** Not Applicable + + +#### 1.1.9 +Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Not Scored) +
+Rationale +Container Network Interface provides various networking options for overlay networking. You should consult their documentation and restrict their respective file permissions to maintain the integrity of those files. Those files should be writable by only the administrators on the system. +
+ +**Result:** Not Applicable + + +#### 1.1.10 +Ensure that the Container Network Interface file ownership is set to root:root (Not Scored) +
+Rationale +Container Network Interface provides various networking options for overlay networking. You should consult their documentation and restrict their respective file permissions to maintain the integrity of those files. Those files should be owned by root:root. +
+ +**Result:** Not Applicable + + +#### 1.1.11 +Ensure that the etcd data directory permissions are set to 700 or more restrictive (Scored) +
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. This data directory should be protected from any unauthorized reads or writes. It should not be readable or writable by any group members or the world. +
+
+**Result:** Pass
+
+**Audit:**
+```bash
+stat -c %a /var/lib/rancher/k3s/server/db/etcd
+700
+```
+
+**Remediation:**
+K3s manages the etcd data directory and sets its permissions to 700, so no manual remediation is needed. (This control is only relevant when etcd is used as the datastore.)
+
+
+#### 1.1.12
+Ensure that the etcd data directory ownership is set to `etcd:etcd` (Scored)
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. This data directory should be protected from any unauthorized reads or writes. It should be owned by etcd:etcd. +
+ +**Result:** Not Applicable + + +#### 1.1.13 +Ensure that the `admin.conf` file permissions are set to `644` or more restrictive (Scored) +
+Rationale +The admin.conf is the administrator kubeconfig file defining various settings for the administration of the cluster. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. + +In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/admin.kubeconfig`. +
+ +**Result:** Pass + +**Remediation:** +By default, K3s creates the directory and files with the expected permissions of `644`. No manual remediation should be necessary. + + +#### 1.1.14 +Ensure that the `admin.conf` file ownership is set to `root:root` (Scored) +
+Rationale +The admin.conf file contains the admin credentials for the cluster. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. + +In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/admin.kubeconfig`. +
+ +**Result:** Pass + +**Remediation:** +By default, K3s creates the directory and files with the expected ownership of `root:root`. No manual remediation should be necessary. + + +#### 1.1.15 +Ensure that the `scheduler.conf` file permissions are set to `644` or more restrictive (Scored) +
+Rationale + +The scheduler.conf file is the kubeconfig file for the Scheduler. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. + +In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig`. +
+ +**Result:** Pass + +**Remediation:** +By default, K3s creates the directory and files with the expected permissions of `644`. No manual remediation should be necessary. + + +#### 1.1.16 +Ensure that the `scheduler.conf` file ownership is set to `root:root` (Scored) +
+Rationale +The scheduler.conf file is the kubeconfig file for the Scheduler. You should set its file ownership to maintain the integrity of the file. The file should be owned by root:root. + +In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig`. +
+ +**Result:** Pass + +**Remediation:** +By default, K3s creates the directory and files with the expected ownership of `root:root`. No manual remediation should be necessary. + + +#### 1.1.17 +Ensure that the `controller.kubeconfig` file permissions are set to `644` or more restrictive (Scored) +
+
+Rationale
+The controller.kubeconfig file is the kubeconfig file for the Controller Manager. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system.
+
+In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/controller.kubeconfig`.
+ +**Result:** Pass + +**Remediation:** +By default, K3s creates the directory and files with the expected permissions of `644`. No manual remediation should be necessary. + + +#### 1.1.18 +Ensure that the `controller.kubeconfig` file ownership is set to `root:root` (Scored) +
+
+Rationale
+The controller.kubeconfig file is the kubeconfig file for the Controller Manager. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`.
+
+In K3s, this file is located at `/var/lib/rancher/k3s/server/cred/controller.kubeconfig`.
+ +**Result:** Pass + +**Remediation:** +By default, K3s creates the directory and files with the expected ownership of `root:root`. No manual remediation should be necessary. + + +#### 1.1.19 +Ensure that the Kubernetes PKI directory and file ownership is set to `root:root` (Scored) +
+Rationale +Kubernetes makes use of a number of certificates as part of its operation. You should set the ownership of the directory containing the PKI information and all files in that directory to maintain their integrity. The directory and files should be owned by root:root. +
+ +**Result:** Pass + +**Audit:** +```bash +stat -c %U:%G /var/lib/rancher/k3s/server/tls +root:root +``` + +**Remediation:** +By default, K3s creates the directory and files with the expected ownership of `root:root`. No manual remediation should be necessary. + + +#### 1.1.20 +Ensure that the Kubernetes PKI certificate file permissions are set to `644` or more restrictive (Scored) +
+Rationale +Kubernetes makes use of a number of certificate files as part of the operation of its components. The permissions on these files should be set to 644 or more restrictive to protect their integrity. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +stat -c %n\ %a /var/lib/rancher/k3s/server/tls/*.crt +``` + +Verify that the permissions are `644` or more restrictive. + +**Remediation:** +By default, K3s creates the files with the expected permissions of `644`. No manual remediation is needed. + + +#### 1.1.21 +Ensure that the Kubernetes PKI key file permissions are set to `600` (Scored) +
+Rationale +Kubernetes makes use of a number of key files as part of the operation of its components. The permissions on these files should be set to 600 to protect their integrity and confidentiality. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +stat -c %n\ %a /var/lib/rancher/k3s/server/tls/*.key +``` + +Verify that the permissions are `600` or more restrictive. + +**Remediation:** +By default, K3s creates the files with the expected permissions of `600`. No manual remediation is needed. + + +### 1.2 API Server +This section contains recommendations relating to API server configuration flags + + +#### 1.2.1 +Ensure that the `--anonymous-auth` argument is set to false (Not Scored) + +
+Rationale +When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests. These requests are then served by the API server. You should rely on authentication to authorize access and disallow anonymous requests. + +If you are using RBAC authorization, it is generally considered reasonable to allow anonymous access to the API Server for health checks and discovery purposes, and hence this recommendation is not scored. However, you should consider whether anonymous discovery is an acceptable risk for your purposes. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" +``` + +Verify that `--anonymous-auth=false` is present. + +**Remediation:** +By default, K3s kube-apiserver is configured to run with this flag and value. No manual remediation is needed. + +#### 1.2.2 +Ensure that the `--basic-auth-file` argument is not set (Scored) +
+Rationale +Basic authentication uses plaintext credentials for authentication. Currently, the basic authentication credentials last indefinitely, and the password cannot be changed without restarting the API server. The basic authentication is currently supported for convenience. Hence, basic authentication should not be used. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "basic-auth-file" +``` + +Verify that the `--basic-auth-file` argument does not exist. + +**Remediation:** +By default, K3s does not run with basic authentication enabled. No manual remediation is needed. + + +#### 1.2.3 +Ensure that the `--token-auth-file` parameter is not set (Scored) + +
+Rationale +The token-based authentication utilizes static tokens to authenticate requests to the apiserver. The tokens are stored in clear-text in a file on the apiserver, and cannot be revoked or rotated without restarting the apiserver. Hence, do not use static token-based authentication. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "token-auth-file"
+```
+
+Verify that the `--token-auth-file` argument does not exist.
+
+**Remediation:**
+By default, K3s does not run with a static token authentication file. No manual remediation is needed.
+
+#### 1.2.4
+Ensure that the `--kubelet-https` argument is set to true (Scored)
+Rationale +Connections from apiserver to kubelets could potentially carry sensitive data such as secrets and keys. It is thus important to use in-transit encryption for any communication between the apiserver and kubelets. +
+ +**Result:** Not Applicable + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "kubelet-https" +``` + +Verify that the `--kubelet-https` argument does not exist. + +**Remediation:** +By default, K3s kube-apiserver doesn't run with the `--kubelet-https` parameter as it runs with TLS. No manual remediation is needed. + +#### 1.2.5 +Ensure that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments are set as appropriate (Scored) + +
+Rationale +The apiserver, by default, does not authenticate itself to the kubelet's HTTPS endpoints. The requests from the apiserver are treated anonymously. You should set up certificate-based kubelet authentication to ensure that the apiserver authenticates itself to kubelets when submitting requests. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep -E 'kubelet-client-certificate|kubelet-client-key'
+```
+
+Verify that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments exist and are set as appropriate.
+
+**Remediation:**
+By default, K3s runs kube-apiserver with these arguments for secure communication with the kubelet. No manual remediation is needed.
+
+
+#### 1.2.6
+Ensure that the `--kubelet-certificate-authority` argument is set as appropriate (Scored)
+Rationale +The connections from the apiserver to the kubelet are used for fetching logs for pods, attaching (through kubectl) to running pods, and using the kubelet’s port-forwarding functionality. These connections terminate at the kubelet’s HTTPS endpoint. By default, the apiserver does not verify the kubelet’s serving certificate, which makes the connection subject to man-in-the-middle attacks, and unsafe to run over untrusted and/or public networks. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "kubelet-certificate-authority"
+```
+
+Verify that the `--kubelet-certificate-authority` argument exists and is set as appropriate.
+
+**Remediation:**
+By default, K3s runs kube-apiserver with this argument for secure communication with the kubelet. No manual remediation is needed.
+
+
+#### 1.2.7
+Ensure that the `--authorization-mode` argument is not set to `AlwaysAllow` (Scored)
+
+Rationale
+The API Server can be configured to allow all requests. This mode should not be used on any production cluster.
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" +``` + +Verify that the argument value doesn't contain `AlwaysAllow`. + +**Remediation:** +By default, K3s sets `Node,RBAC` as the parameter to the `--authorization-mode` argument. No manual remediation is needed. + + +#### 1.2.8 +Ensure that the `--authorization-mode` argument includes `Node` (Scored) +
+Rationale +The Node authorization mode only allows kubelets to read Secret, ConfigMap, PersistentVolume, and PersistentVolumeClaim objects associated with their nodes. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" +``` + +Verify `Node` exists as a parameter to the argument. + +**Remediation:** +By default, K3s sets `Node,RBAC` as the parameter to the `--authorization-mode` argument. No manual remediation is needed. + + +#### 1.2.9 +Ensure that the `--authorization-mode` argument includes `RBAC` (Scored) +
+Rationale +Role Based Access Control (RBAC) allows fine-grained control over the operations that different entities can perform on different objects in the cluster. It is recommended to use the RBAC authorization mode. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" +``` + +Verify `RBAC` exists as a parameter to the argument. + +**Remediation:** +By default, K3s sets `Node,RBAC` as the parameter to the `--authorization-mode` argument. No manual remediation is needed. + + +#### 1.2.10 +Ensure that the admission control plugin EventRateLimit is set (Not Scored) +
+Rationale +Using `EventRateLimit` admission control enforces a limit on the number of events that the API Server will accept in a given time slice. A misbehaving workload could overwhelm and DoS the API Server, making it unavailable. This particularly applies to a multi-tenant cluster, where there might be a small percentage of misbehaving tenants which could have a significant impact on the performance of the cluster overall. Hence, it is recommended to limit the rate of events that the API server will accept. + +Note: This is an Alpha feature in the Kubernetes 1.15 release. +
+ +**Result:** **Not Scored - Operator Dependent** + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins" +``` + +Verify that the `--enable-admission-plugins` argument is set to a value that includes EventRateLimit. + +**Remediation:** +By default, K3s only sets `NodeRestriction,PodSecurityPolicy` as the parameter to the `--enable-admission-plugins` argument. +To configure this, follow the Kubernetes documentation and set the desired limits in a configuration file. Then refer to K3s's documentation to see how to supply additional api server configuration via the kube-apiserver-arg parameter. + + +#### 1.2.11 +Ensure that the admission control plugin `AlwaysAdmit` is not set (Scored) +
+
+Rationale
+Setting the admission control plugin `AlwaysAdmit` allows all requests and does not filter any requests.
+
+The AlwaysAdmit admission controller was deprecated in Kubernetes v1.13. Its behavior was equivalent to turning off all admission controllers.
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins" +``` + +Verify that if the `--enable-admission-plugins` argument is set, its value does not include `AlwaysAdmit`. + +**Remediation:** +By default, K3s only sets `NodeRestriction,PodSecurityPolicy` as the parameter to the `--enable-admission-plugins` argument. No manual remediation needed. + + +#### 1.2.12 +Ensure that the admission control plugin AlwaysPullImages is set (Not Scored) +
+Rationale +Setting admission control policy to `AlwaysPullImages` forces every new pod to pull the required images every time. In a multi-tenant cluster users can be assured that their private images can only be used by those who have the credentials to pull them. Without this admission control policy, once an image has been pulled to a node, any pod from any user can use it simply by knowing the image’s name, without any authorization check against the image ownership. When this plug-in is enabled, images are always pulled prior to starting containers, which means valid credentials are required. + +
+ +**Result:** **Not Scored - Operator Dependent** + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins" +``` + +Verify that the `--enable-admission-plugins` argument is set to a value that includes `AlwaysPullImages`. + +**Remediation:** +By default, K3s only sets `NodeRestriction,PodSecurityPolicy` as the parameter to the `--enable-admission-plugins` argument. +To configure this, follow the Kubernetes documentation and set the desired limits in a configuration file. Then refer to K3s's documentation to see how to supply additional api server configuration via the kube-apiserver-arg parameter. + +#### 1.2.13 +Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Not Scored) +
+Rationale +SecurityContextDeny can be used to provide a layer of security for clusters which do not have PodSecurityPolicies enabled. +
+
+**Result:** Not Scored
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins"
+```
+
+Verify that the `--enable-admission-plugins` argument is set to a value that includes `SecurityContextDeny`, if `PodSecurityPolicy` is not included.
+
+**Remediation:**
+K3s would need to have the `SecurityContextDeny` admission plugin enabled by passing it as an argument to K3s, for example `--kube-apiserver-arg='enable-admission-plugins=SecurityContextDeny'`.
+
+
+#### 1.2.14
+Ensure that the admission control plugin `ServiceAccount` is set (Scored)
+Rationale +When you create a pod, if you do not specify a service account, it is automatically assigned the `default` service account in the same namespace. You should create your own service account and let the API server manage its security tokens. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "ServiceAccount"
+```
+
+Verify that the `--disable-admission-plugins` argument is set to a value that does not include `ServiceAccount`.
+
+**Remediation:**
+By default, K3s does not use this argument. If there's a desire to use this argument, follow the Kubernetes documentation and create ServiceAccount objects as per your environment. Then refer to K3s's documentation to see how to supply additional API server configuration via the `kube-apiserver-arg` parameter.
+
+
+#### 1.2.15
+Ensure that the admission control plugin `NamespaceLifecycle` is set (Scored)
+Rationale +Setting admission control policy to `NamespaceLifecycle` ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating the new objects. This is recommended to enforce the integrity of the namespace termination process and also for the availability of the newer objects. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "disable-admission-plugins" +``` + +Verify that the `--disable-admission-plugins` argument is set to a value that does not include `NamespaceLifecycle`. + +**Remediation:** +By default, K3s does not use this argument. No manual remediation needed. + + +#### 1.2.16 +Ensure that the admission control plugin `PodSecurityPolicy` is set (Scored) +
+Rationale +A Pod Security Policy is a cluster-level resource that controls the actions that a pod can perform and what it has the ability to access. The `PodSecurityPolicy` objects define a set of conditions that a pod must run with in order to be accepted into the system. Pod Security Policies are comprised of settings and strategies that control the security features a pod has access to and hence this must be used to control pod access permissions. + +**Note:** When the PodSecurityPolicy admission plugin is in use, there needs to be at least one PodSecurityPolicy in place for ANY pods to be admitted. See section 1.7 for recommendations on PodSecurityPolicy settings. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins"
+```
+
+Verify that the `--enable-admission-plugins` argument is set to a value that includes `PodSecurityPolicy`.
+
+**Remediation:**
+K3s would need to have the `PodSecurityPolicy` admission plugin enabled by passing it as an argument to K3s, e.g. `--kube-apiserver-arg='enable-admission-plugins=PodSecurityPolicy'`.
+
+
+#### 1.2.17
+Ensure that the admission control plugin `NodeRestriction` is set (Scored)
+
+Rationale +Using the `NodeRestriction` plug-in ensures that the kubelet is restricted to the `Node` and `Pod` objects that it could modify as defined. Such kubelets will only be allowed to modify their own `Node` API object, and only modify `Pod` API objects that are bound to their node. + +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "enable-admission-plugins"
+```
+
+Verify that the `--enable-admission-plugins` argument is set to a value that includes `NodeRestriction`.
+
+**Remediation:**
+K3s would need to have the `NodeRestriction` admission plugin enabled by passing it as an argument to K3s, e.g. `--kube-apiserver-arg='enable-admission-plugins=NodeRestriction'`.
+
+
+#### 1.2.18
+Ensure that the `--insecure-bind-address` argument is not set (Scored)
+
+
+Rationale
+If you bind the apiserver to an insecure address, anyone who can connect to it over the insecure port gains unauthenticated and unencrypted access to your master node. The apiserver doesn't do any authentication checking for insecure binds, and traffic to the insecure API port is not encrypted, allowing attackers to potentially read sensitive data in transit.
+
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "insecure-bind-address" +``` + +Verify that the `--insecure-bind-address` argument does not exist. + +**Remediation:** +By default, K3s explicitly excludes the use of the `--insecure-bind-address` parameter. No manual remediation is needed. + + +#### 1.2.19 +Ensure that the `--insecure-port` argument is set to `0` (Scored) +
+
+Rationale
+Setting up the apiserver to serve on an insecure port would allow unauthenticated and unencrypted access to your master node. This would allow attackers who could access this port to easily take control of the cluster.
+
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "insecure-port" +``` + +Verify that the `--insecure-port` argument is set to `0`. + +**Remediation:** +By default, K3s starts the kube-apiserver process with this argument's parameter set to `0`. No manual remediation is needed. + + +#### 1.2.20 +Ensure that the `--secure-port` argument is not set to `0` (Scored) +
+Rationale +The secure port is used to serve https with authentication and authorization. If you disable it, no https traffic is served and all traffic is served unencrypted. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "secure-port"
+```
+
+Verify that the `--secure-port` argument is either not set or is set to an integer value between 1 and 65535.
+
+**Remediation:**
+By default, K3s sets the `--secure-port` argument to `6444`. No manual remediation is needed.
+
+
+#### 1.2.21
+Ensure that the `--profiling` argument is set to `false` (Scored)
+
+Rationale +Profiling allows for the identification of specific performance bottlenecks. It generates a significant amount of program data that could potentially be exploited to uncover system and program details. If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "profiling" +``` + +Verify that the `--profiling` argument is set to false. + +**Remediation:** +By default, K3s sets the `--profiling` flag parameter to false. No manual remediation needed. + + +#### 1.2.22 +Ensure that the `--audit-log-path` argument is set (Scored) +
+
+Rationale
+Auditing the Kubernetes API Server provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators or other components of the system. Even though Kubernetes currently provides only basic audit capabilities, it should be enabled. You can enable it by setting an appropriate audit log path.
+
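+As a point of reference, a hedged sketch of enabling audit logging on a K3s server is shown below; the log path mirrors the one used elsewhere in this guide and should be adjusted for your environment:
+
+```bash
+# Sketch: pass an audit log location through to the embedded kube-apiserver.
+k3s server \
+  --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log'
+```
+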
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-log-path" +``` + +Verify that the `--audit-log-path` argument is set as appropriate. + +**Remediation:** +K3s server needs to be run with the following argument, `--kube-apiserver-arg='audit-log-path=/path/to/log/file'`. + + +#### 1.2.23 +Ensure that the `--audit-log-maxage` argument is set to `30` or as appropriate (Scored) +
+Rationale +Retaining logs for at least 30 days ensures that you can go back in time and investigate or correlate any events. Set your audit log retention period to 30 days or as per your business requirements. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-log-maxage" +``` + +Verify that the `--audit-log-maxage` argument is set to `30` or as appropriate. + +**Remediation:** +K3s server needs to be run with the following argument, `--kube-apiserver-arg='audit-log-maxage=30'`. + + +#### 1.2.24 +Ensure that the `--audit-log-maxbackup` argument is set to `10` or as appropriate (Scored) +
+
+Rationale
+Kubernetes automatically rotates the log files. Retaining old log files ensures that you would have sufficient log data available for carrying out any investigation or correlation. For example, if you have set a file size of 100 MB and the number of old log files to keep as 10, you would have approximately 1 GB of log data that you could potentially use for your analysis.
+
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-log-maxbackup" +``` + +Verify that the `--audit-log-maxbackup` argument is set to `10` or as appropriate. + +**Remediation:** +K3s server needs to be run with the following argument, `--kube-apiserver-arg='audit-log-maxbackup=10'`. + + +#### 1.2.25 +Ensure that the `--audit-log-maxsize` argument is set to `100` or as appropriate (Scored) +
+
+Rationale
+Kubernetes automatically rotates the log files. Retaining old log files ensures that you would have sufficient log data available for carrying out any investigation or correlation. If you have set a file size of 100 MB and the number of old log files to keep as 10, you would have approximately 1 GB of log data that you could potentially use for your analysis.
+
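+For illustration, the log rotation settings from controls 1.2.23 through 1.2.25 can all be supplied in a single invocation; a hedged sketch using the values referenced by those controls:
+
+```bash
+# Sketch: audit log retention flags passed through to the embedded kube-apiserver.
+k3s server \
+  --kube-apiserver-arg='audit-log-maxage=30' \
+  --kube-apiserver-arg='audit-log-maxbackup=10' \
+  --kube-apiserver-arg='audit-log-maxsize=100'
+```
+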
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-log-maxsize" +``` + +Verify that the `--audit-log-maxsize` argument is set to `100` or as appropriate. + +**Remediation:** +K3s server needs to be run with the following argument, `--kube-apiserver-arg='audit-log-maxsize=100'`. + + +#### 1.2.26 +Ensure that the `--request-timeout` argument is set as appropriate (Scored) +
+Rationale +Setting global request timeout allows extending the API server request timeout limit to a duration appropriate to the user's connection speed. By default, it is set to 60 seconds which might be problematic on slower connections making cluster resources inaccessible once the data volume for requests exceeds what can be transmitted in 60 seconds. But, setting this timeout limit to be too large can exhaust the API server resources making it prone to Denial-of-Service attack. Hence, it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "request-timeout" +``` + +Verify that the `--request-timeout` argument is either not set or set to an appropriate value. + +**Remediation:** +By default, K3s does not set the `--request-timeout` argument. No manual remediation needed. + + +#### 1.2.27 +Ensure that the `--service-account-lookup` argument is set to `true` (Scored) +
+
+Rationale
+If `--service-account-lookup` is not enabled, the apiserver only verifies that the authentication token is valid, and does not validate that the service account token mentioned in the request is actually present in etcd. This allows using a service account token even after the corresponding service account is deleted. This is an example of a time-of-check to time-of-use security issue.
+
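+The remediation below amounts to one extra pass-through flag; a hedged sketch:
+
+```bash
+# Sketch: force service account token lookups against the datastore for every request.
+k3s server --kube-apiserver-arg='service-account-lookup=true'
+```
+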
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "service-account-lookup" +``` + +Verify that if the `--service-account-lookup` argument exists it is set to `true`. + +**Remediation:** +K3s server needs to be run with the following argument, `--kube-apiserver-arg='service-account-lookup=true'`. + + +#### 1.2.28 +Ensure that the `--service-account-key-file` argument is set as appropriate (Scored) +
+Rationale +By default, if no `--service-account-key-file` is specified to the apiserver, it uses the private key from the TLS serving certificate to verify service account tokens. To ensure that the keys for service account tokens could be rotated as needed, a separate public/private key pair should be used for signing service account tokens. Hence, the public key should be specified to the apiserver with `--service-account-key-file`. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "service-account-key-file" +``` + +Verify that the `--service-account-key-file` argument exists and is set as appropriate. + +**Remediation:** +By default, K3s sets the `--service-account-key-file` explicitly. No manual remediation needed. + + +#### 1.2.29 +Ensure that the `--etcd-certfile` and `--etcd-keyfile` arguments are set as appropriate (Scored) +
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be protected by client authentication. This requires the API server to identify itself to the etcd server using a client certificate and key. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep -E 'etcd-certfile|etcd-keyfile' +``` + +Verify that the `--etcd-certfile` and `--etcd-keyfile` arguments exist and they are set as appropriate. + +**Remediation:** +By default, K3s sets the `--etcd-certfile` and `--etcd-keyfile` arguments explicitly. No manual remediation needed. + + +#### 1.2.30 +Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored) +
+Rationale +API server communication contains sensitive parameters that should remain encrypted in transit. Configure the API server to serve only HTTPS traffic. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep -E 'tls-cert-file|tls-private-key-file' +``` + +Verify that the `--tls-cert-file` and `--tls-private-key-file` arguments exist and they are set as appropriate. + +**Remediation:** +By default, K3s sets the `--tls-cert-file` and `--tls-private-key-file` arguments explicitly. No manual remediation needed. + + +#### 1.2.31 +Ensure that the `--client-ca-file` argument is set as appropriate (Scored) +
+Rationale +API server communication contains sensitive parameters that should remain encrypted in transit. Configure the API server to serve only HTTPS traffic. If `--client-ca-file` argument is set, any request presenting a client certificate signed by one of the authorities in the `client-ca-file` is authenticated with an identity corresponding to the CommonName of the client certificate. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file" +``` + +Verify that the `--client-ca-file` argument exists and it is set as appropriate. + +**Remediation:** +By default, K3s sets the `--client-ca-file` argument explicitly. No manual remediation needed. + + +#### 1.2.32 +Ensure that the `--etcd-cafile` argument is set as appropriate (Scored) +
+
+Rationale
+etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be protected by client authentication. This requires the API server to identify itself to the etcd server using an SSL Certificate Authority file.
+
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "etcd-cafile" +``` + +Verify that the `--etcd-cafile` argument exists and it is set as appropriate. + +**Remediation:** +By default, K3s sets the `--etcd-cafile` argument explicitly. No manual remediation needed. + + +#### 1.2.33 +Ensure that the `--encryption-provider-config` argument is set as appropriate (Scored) +
+Rationale +etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures. +
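+Two approaches are sketched below; both are illustrative rather than prescriptive, and the config file path is only a placeholder:
+
+```bash
+# Sketch: let K3s generate and manage the encryption provider configuration itself.
+k3s server --secrets-encryption
+
+# Sketch: alternatively, point the embedded kube-apiserver at a config file you manage.
+k3s server --kube-apiserver-arg='encryption-provider-config=/path/to/encryption_config'
+```
+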
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "encryption-provider-config"
+```
+
+Verify that the `--encryption-provider-config` argument is set to an `EncryptionConfig` file. Additionally, ensure that the `EncryptionConfig` file has all the desired resources covered, especially any secrets.
+
+**Remediation:**
+K3s server needs to be run with the following argument, `--kube-apiserver-arg='encryption-provider-config=/path/to/encryption_config'`. This can be done by running k3s with the `--secrets-encryption` argument, which will configure the encryption provider.
+
+
+#### 1.2.34
+Ensure that encryption providers are appropriately configured (Scored)
+
+Rationale +Where `etcd` encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, the `aescbc`, `kms` and `secretbox` are likely to be appropriate options. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+grep aescbc /path/to/encryption-config.json
+```
+
+Verify that `aescbc` is set as the encryption provider for all the desired resources.
+
+**Remediation:**
+Follow the Kubernetes documentation and configure an `EncryptionConfig` file.
+In this file, choose **aescbc**, **kms** or **secretbox** as the encryption provider.
+K3s server needs to be run with `--secrets-encryption=true`, and you should verify that one of the allowed encryption providers is present.
+
+
+#### 1.2.35
+Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Not Scored)
+
+Rationale +TLS ciphers have had a number of known vulnerabilities and weaknesses, which can reduce the protection provided by them. By default Kubernetes supports a number of TLS cipher suites including some that have security concerns, weakening the protection provided. +
+ +**Result:** **Not Scored - Operator Dependent** + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "tls-cipher-suites" +``` + +Verify that the `--tls-cipher-suites` argument is set as outlined in the remediation procedure below. + +**Remediation:** +By default, K3s explicitly doesn't set this flag. No manual remediation needed. + + +### 1.3 Controller Manager + +#### 1.3.1 +Ensure that the `--terminated-pod-gc-threshold` argument is set as appropriate (Not Scored) +
+Rationale +Garbage collection is important to ensure sufficient resource availability and avoiding degraded performance and availability. In the worst case, the system might crash or just be unusable for a long period of time. The current setting for garbage collection is 12,500 terminated pods which might be too high for your system to sustain. Based on your system resources and tests, choose an appropriate threshold value to activate garbage collection. +
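+A hedged sketch of the remediation below; the threshold of 10 is only an example value and should be tuned for your environment:
+
+```bash
+# Sketch: pass a garbage collection threshold through to the embedded kube-controller-manager.
+k3s server --kube-controller-manager-arg='terminated-pod-gc-threshold=10'
+```
+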
+
+**Result:** **Not Scored - Operator Dependent**
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "terminated-pod-gc-threshold"
+```
+
+Verify that the `--terminated-pod-gc-threshold` argument is set as appropriate.
+
+**Remediation:**
+K3s server needs to be run with the following, `--kube-controller-manager-arg='terminated-pod-gc-threshold=10'`.
+
+
+#### 1.3.2
+Ensure that the `--profiling` argument is set to false (Scored)
+
+Rationale +Profiling allows for the identification of specific performance bottlenecks. It generates a significant amount of program data that could potentially be exploited to uncover system and program details. If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "profiling" +``` + +Verify that the `--profiling` argument is set to false. + +**Remediation:** +By default, K3s sets the `--profiling` flag parameter to false. No manual remediation needed. + + +#### 1.3.3 +Ensure that the `--use-service-account-credentials` argument is set to `true` (Scored) +
+Rationale +The controller manager creates a service account per controller in the `kube-system` namespace, generates a credential for it, and builds a dedicated API client with that service account credential for each controller loop to use. Setting the `--use-service-account-credentials` to `true` runs each control loop within the controller manager using a separate service account credential. When used in combination with RBAC, this ensures that the control loops run with the minimum permissions required to perform their intended tasks. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "use-service-account-credentials" +``` + +Verify that the `--use-service-account-credentials` argument is set to true. + +**Remediation:** +K3s server needs to be run with the following, `--kube-controller-manager-arg='use-service-account-credentials=true'` + + +#### 1.3.4 +Ensure that the `--service-account-private-key-file` argument is set as appropriate (Scored) +
+Rationale +To ensure that keys for service account tokens can be rotated as needed, a separate public/private key pair should be used for signing service account tokens. The private key should be specified to the controller manager with `--service-account-private-key-file` as appropriate. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "service-account-private-key-file" +``` + +Verify that the `--service-account-private-key-file` argument is set as appropriate. + +**Remediation:** +By default, K3s sets the `--service-account-private-key-file` argument with the service account key file. No manual remediation needed. + + +#### 1.3.5 +Ensure that the `--root-ca-file` argument is set as appropriate (Scored) +
+
+Rationale
+Processes running within pods that need to contact the API server must verify the API server's serving certificate. Failing to do so could make them subject to man-in-the-middle attacks.
+
+Providing the root certificate for the API server's serving certificate to the controller manager with the `--root-ca-file` argument allows the controller manager to inject the trusted bundle into pods so that they can verify TLS connections to the API server.
+
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "root-ca-file" +``` + +Verify that the `--root-ca-file` argument exists and is set to a certificate bundle file containing the root certificate for the API server's serving certificate + +**Remediation:** +By default, K3s sets the `--root-ca-file` argument with the root ca file. No manual remediation needed. + + +#### 1.3.6 +Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored) +
+Rationale +`RotateKubeletServerCertificate` causes the kubelet to both request a serving certificate after bootstrapping its client credentials and rotate the certificate as its existing credentials expire. This automated periodic rotation ensures that there are no downtimes due to expired certificates and thus addressing availability in the CIA security triad. + +Note: This recommendation only applies if you let kubelets get their certificates from the API server. In case your kubelet certificates come from an outside authority/tool (e.g. Vault) then you need to take care of rotation yourself. +
+
+**Result:** Not Applicable
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "RotateKubeletServerCertificate"
+```
+
+Verify that the `RotateKubeletServerCertificate` argument exists and is set to true.
+
+**Remediation:**
+By default, K3s implements its own logic for certificate generation and rotation.
+
+
+#### 1.3.7
+Ensure that the `--bind-address` argument is set to `127.0.0.1` (Scored)
+
+Rationale +The Controller Manager API service which runs on port 10252/TCP by default is used for health and metrics information and is available without authentication or encryption. As such it should only be bound to a localhost interface, to minimize the cluster's attack surface. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-controller-manager" | tail -n1 | grep "bind-address" +``` + +Verify that the `--bind-address` argument is set to 127.0.0.1. + +**Remediation:** +By default, K3s sets the `--bind-address` argument to `127.0.0.1`. No manual remediation needed. + + +### 1.4 Scheduler +This section contains recommendations relating to Scheduler configuration flags + + +#### 1.4.1 +Ensure that the `--profiling` argument is set to `false` (Scored) +
+Rationale +Profiling allows for the identification of specific performance bottlenecks. It generates a significant amount of program data that could potentially be exploited to uncover system and program details. If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-scheduler" | tail -n1 | grep "profiling" +``` + +Verify that the `--profiling` argument is set to false. + +**Remediation:** +By default, K3s sets the `--profiling` flag parameter to false. No manual remediation needed. + + +#### 1.4.2 +Ensure that the `--bind-address` argument is set to `127.0.0.1` (Scored) +
+Rationale + +The Scheduler API service which runs on port 10251/TCP by default is used for health and metrics information and is available without authentication or encryption. As such it should only be bound to a localhost interface, to minimize the cluster's attack surface. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kube-scheduler" | tail -n1 | grep "bind-address" +``` + +Verify that the `--bind-address` argument is set to 127.0.0.1. + +**Remediation:** +By default, K3s sets the `--bind-address` argument to `127.0.0.1`. No manual remediation needed. + + +## 2 Etcd Node Configuration +This section covers recommendations for etcd configuration. + +#### 2.1 +Ensure that the `cert-file` and `key-file` fields are set as appropriate (Scored) +
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted in transit. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +grep -E 'cert-file|key-file' /var/lib/rancher/k3s/server/db/etcd/config +``` + +Verify that the `cert-file` and the `key-file` fields are set as appropriate. + +**Remediation:** +By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config`. Server and peer cert and key files are specified. No manual remediation needed. + + +#### 2.2 +Ensure that the `client-cert-auth` field is set to `true` (Scored) +
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients. You should enable the client authentication via valid certificates to secure the access to the etcd service. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +grep 'client-cert-auth' /var/lib/rancher/k3s/server/db/etcd/config +``` + +Verify that the `client-cert-auth` field is set to true. + +**Remediation:** +By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config`. `client-cert-auth` is set to true. No manual remediation needed. + + +#### 2.3 +Ensure that the `auto-tls` field is not set to `true` (Scored) +
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients. You should enable the client authentication via valid certificates to secure the access to the etcd service. +
+
+**Result:** Pass
+
+**Remediation:**
+By default, K3s starts etcd without this flag, leaving it at its default value of `false`. No manual remediation needed.
+
+
+#### 2.4
+Ensure that the `peer-cert-file` and `peer-key-file` fields are set as appropriate (Scored)
+
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be encrypted in transit and also amongst peers in the etcd clusters. +
+
+**Result:** Pass
+
+**Remediation:**
+By default, K3s starts etcd with a config file found at `/var/lib/rancher/k3s/server/db/etcd/config`. The config file contains a `peer-transport-security:` section whose fields specify the peer cert and peer key files.
+
+
+#### 2.5
+Ensure that the `client-cert-auth` field is set to `true` (Scored)
+
+Rationale +etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be accessible only by authenticated etcd peers in the etcd cluster. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +grep 'client-cert-auth' /var/lib/rancher/k3s/server/db/etcd/config +``` + +Verify that the `client-cert-auth` field in the peer section is set to true. + +**Remediation:** +By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config`. Within the file, the `client-cert-auth` field is set. No manual remediation needed. + + +#### 2.6 +Ensure that the `peer-auto-tls` field is not set to `true` (Scored) +
+
+Rationale
+etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be accessible only by authenticated etcd peers in the etcd cluster. Hence, do not use self-signed certificates for authentication.
+
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+grep 'peer-auto-tls' /var/lib/rancher/k3s/server/db/etcd/config
+```
+
+Verify that the `peer-auto-tls` field does not exist.
+
+**Remediation:**
+By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config`. The file does not contain the `peer-auto-tls` field. No manual remediation needed.
+
+
+#### 2.7
+Ensure that a unique Certificate Authority is used for etcd (Not Scored)
+
+Rationale +etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. Its access should be restricted to specifically designated clients and peers only. + +Authentication to etcd is based on whether the certificate presented was issued by a trusted certificate authority. There is no checking of certificate attributes such as common name or subject alternative name. As such, if any attackers were able to gain access to any certificate issued by the trusted certificate authority, they would be able to gain full access to the etcd database. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +# To find the ca file used by etcd: +grep 'trusted-ca-file' /var/lib/rancher/k3s/server/db/etcd/config +# To find the kube-apiserver process: +journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 +``` + +Verify that the file referenced by the `client-ca-file` flag in the apiserver process is different from the file referenced by the `trusted-ca-file` parameter in the etcd configuration file. + +**Remediation:** +By default, K3s uses a config file for etcd that can be found at `/var/lib/rancher/k3s/server/db/etcd/config` and the `trusted-ca-file` parameters in it are set to unique values specific to etcd. No manual remediation needed. + + + +## 3 Control Plane Configuration + + +### 3.1 Authentication and Authorization + + +#### 3.1.1 +Client certificate authentication should not be used for users (Not Scored) +
+Rationale +With any authentication mechanism the ability to revoke credentials if they are compromised or no longer required, is a key control. Kubernetes client certificate authentication does not allow for this due to a lack of support for certificate revocation. +
+ +**Result:** Not Scored - Operator Dependent + +**Audit:** +Review user access to the cluster and ensure that users are not making use of Kubernetes client certificate authentication. + +**Remediation:** +Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented in place of client certificates. + +### 3.2 Logging + + +#### 3.2.1 +Ensure that a minimal audit policy is created (Scored) +
+Rationale +Logging is an important detective control for all systems, to detect potential unauthorized access. +
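+A hedged sketch of what creating and wiring up a minimal audit policy might look like; the policy path and the catch-all `Metadata` rule are illustrative assumptions, not K3s defaults, and the log path mirrors the remediation below:
+
+```bash
+# Sketch: write a minimal audit policy and hand it to the embedded kube-apiserver.
+mkdir -p /var/lib/rancher/k3s/server/logs
+cat > /var/lib/rancher/k3s/server/audit-policy.yaml <<'EOF'
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+  - level: Metadata
+EOF
+k3s server \
+  --kube-apiserver-arg='audit-policy-file=/var/lib/rancher/k3s/server/audit-policy.yaml' \
+  --kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log'
+```
+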
+
+**Result:** Does not pass. See the [Hardening Guide](../hardening_guide/) for details.
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "audit-policy-file"
+```
+
+Verify that the `--audit-policy-file` is set. Review the contents of the file specified and ensure that it contains a valid audit policy.
+
+**Remediation:**
+Create an audit policy file for your cluster and pass it to k3s, e.g. `--kube-apiserver-arg='audit-policy-file=/path/to/audit-policy.yaml'`, along with a log location such as `--kube-apiserver-arg='audit-log-path=/var/lib/rancher/k3s/server/logs/audit-log'`, as sketched above.
+
+
+#### 3.2.2
+Ensure that the audit policy covers key security concerns (Not Scored)
+
+Rationale +Security audit logs should cover access and modification of key resources in the cluster, to enable them to form an effective part of a security environment. +
+
+**Result:** Not Scored - Operator Dependent
+
+**Remediation:**
+Operators should review the audit policy in use on their cluster and ensure that it covers key security concerns, such as access to Secrets and modification of workload and RBAC objects. See the CIS Benchmark guide for more details.
+
+
+## 4 Worker Node Security Configuration
+
+
+### 4.1 Worker Node Configuration Files
+
+
+#### 4.1.1
+Ensure that the kubelet service file permissions are set to `644` or more restrictive (Scored)
+
+Rationale +The `kubelet` service file controls various parameters that set the behavior of the kubelet service in the worker node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Not Applicable + +**Remediation:** +K3s doesn’t launch the kubelet as a service. It is launched and managed by the K3s supervisor process. All configuration is passed to it as command line arguments at run time. + + +#### 4.1.2 +Ensure that the kubelet service file ownership is set to `root:root` (Scored) +
+Rationale +The `kubelet` service file controls various parameters that set the behavior of the kubelet service in the worker node. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. +
+ +**Result:** Not Applicable + +**Remediation:** +K3s doesn’t launch the kubelet as a service. It is launched and managed by the K3s supervisor process. All configuration is passed to it as command line arguments at run time. + + +#### 4.1.3 +Ensure that the proxy kubeconfig file permissions are set to `644` or more restrictive (Scored) +
+Rationale +The `kube-proxy` kubeconfig file controls various parameters of the `kube-proxy` service in the worker node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. + +It is possible to run `kube-proxy` with the kubeconfig parameters configured as a Kubernetes ConfigMap instead of a file. In this case, there is no proxy kubeconfig file. +
+ +**Result:** Not Applicable + +**Audit:** +Run the below command on the worker node. + +```bash +stat -c %a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig +644 +``` + +Verify that if a file is specified and it exists, the permissions are 644 or more restrictive. + +**Remediation:** +K3s runs `kube-proxy` in process and does not use a config file. + + +#### 4.1.4 +Ensure that the proxy kubeconfig file ownership is set to `root:root` (Scored) +
+Rationale +The kubeconfig file for `kube-proxy` controls various parameters for the `kube-proxy` service in the worker node. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. +
+
+**Result:** Not Applicable
+
+**Audit:**
+Run the below command on the worker node.
+
+```bash
+stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
+root:root
+```
+
+Verify that if a file is specified and it exists, its ownership is set to `root:root`.
+
+**Remediation:**
+K3s runs `kube-proxy` in process and does not use a config file.
+
+
+#### 4.1.5
+Ensure that the kubelet.conf file permissions are set to `644` or more restrictive (Scored)
+
+Rationale +The `kubelet.conf` file is the kubeconfig file for the node, and controls various parameters that set the behavior and identity of the worker node. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the worker node. + +```bash +stat -c %a /var/lib/rancher/k3s/agent/kubelet.kubeconfig +644 +``` + +**Remediation:** +By default, K3s creates `kubelet.kubeconfig` with `644` permissions. No manual remediation needed. + +#### 4.1.6 +Ensure that the kubelet.conf file ownership is set to `root:root` (Scored) +
+Rationale +The `kubelet.conf` file is the kubeconfig file for the node, and controls various parameters that set the behavior and identity of the worker node. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. +
+
+**Result:** Not Applicable
+
+**Audit:**
+Run the below command on the worker node.
+
+```bash
+stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig
+root:root
+```
+
+**Remediation:**
+By default, K3s creates `kubelet.kubeconfig` with `root:root` ownership. No manual remediation needed.
+
+
+#### 4.1.7
+Ensure that the certificate authorities file permissions are set to `644` or more restrictive (Scored)
+
+Rationale +The certificate authorities file controls the authorities used to validate API requests. You should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +stat -c %a /var/lib/rancher/k3s/server/tls/server-ca.crt +644 +``` + +Verify that the permissions are 644. + +**Remediation:** +By default, K3s creates `/var/lib/rancher/k3s/server/tls/server-ca.crt` with `644` permissions. + + +#### 4.1.8 +Ensure that the client certificate authorities file ownership is set to `root:root` (Scored) +
+Rationale +The certificate authorities file controls the authorities used to validate API requests. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +stat -c %U:%G /var/lib/rancher/k3s/server/tls/client-ca.crt +root:root +``` + +**Remediation:** +By default, K3s creates `/var/lib/rancher/k3s/server/tls/client-ca.crt` with `root:root` ownership. + + +#### 4.1.9 +Ensure that the kubelet configuration file has permissions set to `644` or more restrictive (Scored) +
+Rationale +The kubelet reads various parameters, including security settings, from a config file specified by the `--config` argument. If this file is specified you should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. +
+ +**Result:** Not Applicable + +**Remediation:** +K3s doesn’t require or maintain a configuration file for the kubelet process. All configuration is passed to it as command line arguments at run time. + + +#### 4.1.10 +Ensure that the kubelet configuration file ownership is set to `root:root` (Scored) +
+
+Rationale
+The kubelet reads various parameters, including security settings, from a config file specified by the `--config` argument. If this file is specified, you should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`.
+
+ +**Result:** Not Applicable + +**Remediation:** +K3s doesn’t require or maintain a configuration file for the kubelet process. All configuration is passed to it as command line arguments at run time. + + +### 4.2 Kubelet +This section contains recommendations for kubelet configuration. + + +#### 4.2.1 +Ensure that the `--anonymous-auth` argument is set to false (Scored) +
+Rationale +When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests. These requests are then served by the Kubelet server. You should rely on authentication to authorize access and disallow anonymous requests. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "anonymous-auth"
+```
+
+Verify that the value for `--anonymous-auth` is false.
+
+**Remediation:**
+By default, K3s starts kubelet with `--anonymous-auth` set to false. No manual remediation needed.
+
+#### 4.2.2
+Ensure that the `--authorization-mode` argument is not set to `AlwaysAllow` (Scored)
+
+Rationale +Kubelets, by default, allow all authenticated requests (even anonymous ones) without needing explicit authorization checks from the apiserver. You should restrict this behavior and only allow explicitly authorized requests. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "authorization-mode"
+```
+
+Verify that `AlwaysAllow` is not present.
+
+**Remediation:**
+K3s starts kubelet with `Webhook` as the value for the `--authorization-mode` argument. No manual remediation needed.
+
+
+#### 4.2.3
+Ensure that the `--client-ca-file` argument is set as appropriate (Scored)
+
+Rationale +The connections from the apiserver to the kubelet are used for fetching logs for pods, attaching (through kubectl) to running pods, and using the kubelet’s port-forwarding functionality. These connections terminate at the kubelet’s HTTPS endpoint. By default, the apiserver does not verify the kubelet’s serving certificate, which makes the connection subject to man-in-the-middle attacks, and unsafe to run over untrusted and/or public networks. Enabling Kubelet certificate authentication ensures that the apiserver could authenticate the Kubelet before submitting any requests. +
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "client-ca-file"
+```
+
+Verify that the `--client-ca-file` argument has a CA file associated.
+
+**Remediation:**
+By default, K3s starts the kubelet process with the `--client-ca-file`. No manual remediation needed.
+
+
+#### 4.2.4
+Ensure that the `--read-only-port` argument is set to `0` (Scored)
+
+Rationale +The Kubelet process provides a read-only API in addition to the main Kubelet API. Unauthenticated access is provided to this read-only API which could possibly retrieve potentially sensitive information about the cluster. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "read-only-port" +``` +Verify that the `--read-only-port` argument is set to 0. + +**Remediation:** +By default, K3s starts the kubelet process with the `--read-only-port` argument set to `0`. + + +#### 4.2.5 +Ensure that the `--streaming-connection-idle-timeout` argument is not set to `0` (Scored) +
+Rationale +Setting idle timeouts ensures that you are protected against Denial-of-Service attacks, inactive connections and running out of ephemeral ports. + +**Note:** By default, `--streaming-connection-idle-timeout` is set to 4 hours which might be too high for your environment. Setting this as appropriate would additionally ensure that such streaming connections are timed out after serving legitimate use cases. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "streaming-connection-idle-timeout" +``` + +Verify that there's nothing returned. + +**Remediation:** +By default, K3s does not set `--streaming-connection-idle-timeout` when starting kubelet. + + +#### 4.2.6 +Ensure that the `--protect-kernel-defaults` argument is set to `true` (Scored) +
+Rationale +Kernel parameters are usually tuned and hardened by the system administrators before putting the systems into production. These parameters protect the kernel and the system. Your kubelet kernel defaults that rely on such parameters should be appropriately set to match the desired secured system state. Ignoring this could potentially lead to running pods with undesired kernel behavior. +
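+A hedged sketch of one way to line the host up with this control; the sysctl values and file path shown here are example settings the kubelet commonly checks, so confirm them against the [Hardening Guide](../hardening_guide/) before applying them:
+
+```bash
+# Sketch: set the kernel parameters the kubelet expects, then start K3s with protect-kernel-defaults.
+cat > /etc/sysctl.d/90-kubelet.conf <<'EOF'
+vm.panic_on_oom=0
+vm.overcommit_memory=1
+kernel.panic=10
+kernel.panic_on_oops=1
+EOF
+sysctl -p /etc/sysctl.d/90-kubelet.conf
+k3s server --protect-kernel-defaults=true
+```
+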
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "protect-kernel-defaults" +``` + +**Remediation:** +K3s server needs to be started with the following, `--protect-kernel-defaults=true`. + + +#### 4.2.7 +Ensure that the `--make-iptables-util-chains` argument is set to `true` (Scored) +
+Rationale +Kubelets can automatically manage the required changes to iptables based on how you choose your networking options for the pods. It is recommended to let kubelets manage the changes to iptables. This ensures that the iptables configuration remains in sync with pods networking configuration. Manually configuring iptables with dynamic pod network configuration changes might hamper the communication between pods/containers and to the outside world. You might have iptables rules too restrictive or too open. +
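+Since `make-iptables-util-chains` is a kubelet flag, a hedged sketch of the remediation below passes it through `--kubelet-arg`:
+
+```bash
+# Sketch: keep the kubelet in charge of its iptables utility chains.
+k3s server --kubelet-arg='make-iptables-util-chains=true'
+```
+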
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep "make-iptables-util-chains"
+```
+
+Verify that there are no results returned.
+
+**Remediation:**
+K3s server needs to be run with the following, `--kubelet-arg='make-iptables-util-chains=true'`, since `make-iptables-util-chains` is a kubelet flag.
+
+
+#### 4.2.8
+Ensure that the `--hostname-override` argument is not set (Not Scored)
+
+Rationale +Overriding hostnames could potentially break TLS setup between the kubelet and the apiserver. Additionally, with overridden hostnames, it becomes increasingly difficult to associate logs with a particular node and process them for security analytics. Hence, you should setup your kubelet nodes with resolvable FQDNs and avoid overriding the hostnames with IPs. +
+ +**Result:** Not Applicable + +**Remediation:** +K3s does set this parameter for each host, but K3s also manages all certificates in the cluster. It ensures the hostname-override is included as a subject alternative name (SAN) in the kubelet's certificate. + + +#### 4.2.9 +Ensure that the `--event-qps` argument is set to 0 or a level which ensures appropriate event capture (Not Scored) +
+Rationale +It is important to capture all events and not restrict event creation. Events are an important source of security information and analytics that ensure that your environment is consistently monitored using the event data. +
+ +**Result:** Not Scored - Operator Dependent + +**Remediation:** +See CIS Benchmark guide for further details on configuring this. + +#### 4.2.10 +Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored) +
+Rationale +Kubelet communication contains sensitive parameters that should remain encrypted in transit. Configure the Kubelets to serve only HTTPS traffic. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +journalctl -u k3s | grep "Running kubelet" | tail -n1 | grep -E 'tls-cert-file|tls-private-key-file' +``` + +Verify the `--tls-cert-file` and `--tls-private-key-file` arguments are present and set appropriately. + +**Remediation:** +By default, K3s sets the `--tls-cert-file` and `--tls-private-key-file` arguments when executing the kubelet process. + + +#### 4.2.11 +Ensure that the `--rotate-certificates` argument is not set to `false` (Scored) +
+
+Rationale
+
+The `--rotate-certificates` setting causes the kubelet to rotate its client certificates by creating new CSRs as its existing credentials expire. This automated periodic rotation ensures that there is no downtime due to expired certificates and thus addressing availability in the CIA security triad.
+
+**Note:** This recommendation only applies if you let kubelets get their certificates from the API server. In case your kubelet certificates come from an outside authority/tool (e.g. Vault) then you need to take care of rotation yourself.
+
+**Note:** This feature also requires the `RotateKubeletClientCertificate` feature gate to be enabled (which is the default since Kubernetes v1.7).
+
+ +**Result:** Not Applicable + +**Remediation:** +By default, K3s implements its own logic for certificate generation and rotation. + + +#### 4.2.12 +Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored) +
+Rationale +`RotateKubeletServerCertificate` causes the kubelet to both request a serving certificate after bootstrapping its client credentials and rotate the certificate as its existing credentials expire. This automated periodic rotation ensures that there are no downtimes due to expired certificates and thus addressing availability in the CIA security triad. + +Note: This recommendation only applies if you let kubelets get their certificates from the API server. In case your kubelet certificates come from an outside authority/tool (e.g. Vault) then you need to take care of rotation yourself. +
+ +**Result:** Not Applicable + +**Remediation:** +By default, K3s implements its own logic for certificate generation and rotation. + + +#### 4.2.13 +Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Not Scored) +
+Rationale +TLS ciphers have had a number of known vulnerabilities and weaknesses, which can reduce the protection provided by them. By default Kubernetes supports a number of TLS ciphersuites including some that have security concerns, weakening the protection provided. +
+ +**Result:** Not Scored - Operator Dependent + +**Remediation:** +Configuration of the parameter is dependent on your use case. Please see the CIS Kubernetes Benchmark for suggestions on configuring this for your use-case. + + +## 5 Kubernetes Policies + + +### 5.1 RBAC and Service Accounts + + +#### 5.1.1 +Ensure that the cluster-admin role is only used where required (Not Scored) +
+Rationale +Kubernetes provides a set of default roles where RBAC is used. Some of these roles such as `cluster-admin` provide wide-ranging privileges which should only be applied where absolutely necessary. Roles such as `cluster-admin` allow super-user access to perform any action on any resource. When used in a `ClusterRoleBinding`, it gives full control over every resource in the cluster and in all namespaces. When used in a `RoleBinding`, it gives full control over every resource in the rolebinding's namespace, including the namespace itself. +
+
+**Result:** Pass
+
+**Remediation:**
+K3s does not make inappropriate use of the cluster-admin role. Operators must audit their workloads for additional usage. See the CIS Benchmark guide for more details.
+
+#### 5.1.2
+Minimize access to secrets (Not Scored)
+
+Rationale +Inappropriate access to secrets stored within the Kubernetes cluster can allow for an attacker to gain additional access to the Kubernetes cluster or external resources whose credentials are stored as secrets. +
+ +**Result:** Not Scored - Operator Dependent + +**Remediation:** +K3s limits its use of secrets for the system components appropriately, but operators must audit the use of secrets by their workloads. See the CIS Benchmark guide for more details. + +#### 5.1.3 +Minimize wildcard use in Roles and ClusterRoles (Not Scored) +
+Rationale +The principle of least privilege recommends that users are provided only the access required for their role and nothing more. The use of wildcard rights grants is likely to provide excessive rights to the Kubernetes API. +
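+In addition to the full review described in the audit below, a quick (and admittedly rough) way to surface candidate wildcard rules for closer inspection might be:
+
+```bash
+# Sketch: flag lines in Role and ClusterRole definitions that use a wildcard verb or resource.
+kubectl get roles --all-namespaces -o yaml | grep -n "'\*'"
+kubectl get clusterroles -o yaml | grep -n "'\*'"
+```
+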
+
+**Result:** Not Scored - Operator Dependent
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+# Retrieve the roles defined across each namespace in the cluster and review for wildcards
+kubectl get roles --all-namespaces -o yaml
+
+# Retrieve the cluster roles defined in the cluster and review for wildcards
+kubectl get clusterroles -o yaml
+```
+
+Verify that there are no wildcards in use.
+
+**Remediation:**
+Operators should review their workloads for proper role usage. See the CIS Benchmark guide for more details.
+
+#### 5.1.4
+Minimize access to create pods (Not Scored)
+
+Rationale +The ability to create pods in a cluster opens up possibilities for privilege escalation and should be restricted, where possible. +
+ +**Result:** Not Scored - Operator Dependent + +**Remediation:** +Operators should review who has access to create pods in their cluster. See the CIS Benchmark guide for more details. + +#### 5.1.5 +Ensure that default service accounts are not actively used. (Scored) +
+Rationale +Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. + +Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. + +The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments. +
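+One possible way to apply the remediation below across every namespace is sketched here; it assumes `kubectl` access with sufficient rights, so review the output before relying on it:
+
+```bash
+# Sketch: disable token automounting on the default service account in each namespace.
+for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
+  kubectl -n "$ns" patch serviceaccount default \
+    -p '{"automountServiceAccountToken": false}'
+done
+```
+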
+
+**Result:** Fail. This currently requires operator intervention. See the [Hardening Guide](../hardening_guide/) for details.
+
+**Audit:**
+For each namespace in the cluster, review the rights assigned to the default service account and ensure that it has no roles or cluster roles bound to it apart from the defaults. Additionally, ensure that the `automountServiceAccountToken: false` setting is in place for each default service account.
+
+**Remediation:**
+Create explicit service accounts wherever a Kubernetes workload requires specific access
+to the Kubernetes API server.
+Modify the configuration of each default service account to include this value:
+
+```yaml
+automountServiceAccountToken: false
+```
+
+
+#### 5.1.6
+Ensure that Service Account Tokens are only mounted where necessary (Not Scored)
+
+Rationale +Mounting service account tokens inside pods can provide an avenue for privilege escalation attacks where an attacker is able to compromise a single pod in the cluster. + +Avoiding mounting these tokens removes this attack avenue. +
+
+**Result:** Not Scored - Operator Dependent
+
+**Remediation:**
+The pods launched by K3s are part of the control plane and generally need access to communicate with the API server, thus this control does not apply to them. Operators should review their workloads and take steps to modify the definition of pods and service accounts which do not need to mount service account tokens to disable it.
+
+### 5.2 Pod Security Policies
+
+
+#### 5.2.1
+Minimize the admission of privileged containers (Scored)
+
+Rationale +Privileged containers have access to all Linux Kernel capabilities and devices. A container running with full privileges can do almost everything that the host can do. This flag exists to allow special use-cases, like manipulating the network stack and accessing devices. + +There should be at least one PodSecurityPolicy (PSP) defined which does not permit privileged containers. + +If you need to run privileged containers, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +kubectl describe psp | grep MustRunAsNonRoot +``` + +Verify that the result is `Rule: MustRunAsNonRoot`. + +**Remediation:** +An operator should apply a PodSecurityPolicy that sets the `Rule` value to `MustRunAsNonRoot`. An example of this can be found in the [Hardening Guide](../hardening_guide/). + + +#### 5.2.2 +Minimize the admission of containers wishing to share the host process ID namespace (Scored) +
+Rationale +A container running in the host's PID namespace can inspect processes running outside the container. If the container also has access to ptrace capabilities this can be used to escalate privileges outside of the container. + +There should be at least one PodSecurityPolicy (PSP) defined which does not permit containers to share the host PID namespace. + +If you need to run containers which require hostPID, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +Verify that the returned count is 1. + +**Remediation:** +An operator should apply a PodSecurityPolicy that sets the `hostPID` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/). + + +#### 5.2.3 +Minimize the admission of containers wishing to share the host IPC namespace (Scored) +
+Rationale
+
+A container running in the host's IPC namespace can use IPC to interact with processes outside the container.
+
+There should be at least one PodSecurityPolicy (PSP) defined which does not permit containers to share the host IPC namespace.
+
+If you need to run containers which require hostIPC, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP.
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +Verify that the returned count is 1. + +**Remediation:** +An operator should apply a PodSecurityPolicy that sets the `HostIPC` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/). + + +#### 5.2.4 +Minimize the admission of containers wishing to share the host network namespace (Scored) +
+Rationale +A container running in the host's network namespace could access the local loopback device, and could access network traffic to and from other pods. + +There should be at least one PodSecurityPolicy (PSP) defined which does not permit containers to share the host network namespace. + +If you have need to run containers which require hostNetwork, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +Verify that the returned count is 1. + +**Remediation:** +An operator should apply a PodSecurityPolicy that sets the `HostNetwork` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/). + + +#### 5.2.5 +Minimize the admission of containers with `allowPrivilegeEscalation` (Scored) +
+Rationale +A container running with the `allowPrivilegeEscalation` flag set to true may have processes that can gain more privileges than their parent. + +There should be at least one PodSecurityPolicy (PSP) defined which does not permit containers to allow privilege escalation. The option exists (and is defaulted to true) to permit setuid binaries to run. + +If you have need to run containers which use setuid binaries or require privilege escalation, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' +``` + +Verify that the returned count is 1. + +**Remediation:** +An operator should apply a PodSecurityPolicy that sets the `allowPrivilegeEscalation` value to false explicitly for the PSP it creates. An example of this can be found in the [Hardening Guide](../hardening_guide/). + + +#### 5.2.6 +Minimize the admission of root containers (Not Scored) +
+Rationale +Containers may run as any Linux user. Containers which run as the root user, whilst constrained by Container Runtime security features still have an escalated likelihood of container breakout. + +Ideally, all containers should run as a defined non-UID 0 user. + +There should be at least one PodSecurityPolicy (PSP) defined which does not permit root users in a container. + +If you need to run root containers, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. +
+
+**Result:** Not Scored
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+kubectl describe psp | grep MustRunAsNonRoot
+```
+
+Verify that the result is `Rule: MustRunAsNonRoot`.
+
+**Remediation:**
+An operator should apply a PodSecurityPolicy that sets the `runAsUser.Rule` value to `MustRunAsNonRoot`. An example of this can be found in the [Hardening Guide](../hardening_guide/).
+
+
+#### 5.2.7
+Minimize the admission of containers with the NET_RAW capability (Not Scored)
+Rationale +Containers run with a default set of capabilities as assigned by the Container Runtime. By default this can include potentially dangerous capabilities. With Docker as the container runtime the NET_RAW capability is enabled which may be misused by malicious containers. + +Ideally, all containers should drop this capability. + +There should be at least one PodSecurityPolicy (PSP) defined which prevents containers with the NET_RAW capability from launching. + +If you need to run containers with this capability, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. +
+
+**Result:** Not Scored
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+kubectl get psp -o json | jq '.items[].spec.requiredDropCapabilities[]'
+```
+
+Verify that every value returned is `"ALL"`.
+
+**Remediation:**
+An operator should apply a PodSecurityPolicy that sets `requiredDropCapabilities` to `ALL`. An example of this can be found in the [Hardening Guide](../hardening_guide/).
+
+
+#### 5.2.8
+Minimize the admission of containers with added capabilities (Not Scored)
+Rationale +Containers run with a default set of capabilities as assigned by the Container Runtime. Capabilities outside this set can be added to containers which could expose them to risks of container breakout attacks. + +There should be at least one PodSecurityPolicy (PSP) defined which prevents containers with capabilities beyond the default set from launching. + +If you need to run containers with additional capabilities, this should be defined in a separate PSP and you should carefully check RBAC controls to ensure that only limited service accounts and users are given permission to access that PSP. +
+
+**Result:** Not Scored
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+kubectl get psp
+```
+
+Verify that there are no PSPs present which have `allowedCapabilities` set to anything other than an empty array.
+
+**Remediation:**
+An operator should apply a PodSecurityPolicy that does not set `allowedCapabilities` to anything other than an empty array. An example of this can be found in the [Hardening Guide](../hardening_guide/).
+
+
+#### 5.2.9
+Minimize the admission of containers with capabilities assigned (Not Scored)
+Rationale +Containers run with a default set of capabilities as assigned by the Container Runtime. Capabilities are parts of the rights generally granted on a Linux system to the root user. + +In many cases applications running in containers do not require any capabilities to operate, so from the perspective of the principle of least privilege use of capabilities should be minimized. +
+ +**Result:** Not Scored + +**Audit:** +Run the below command on the master node. + +```bash +kubectl get psp +``` + +**Remediation:** +An operator should apply a PodSecurityPolicy that sets `requiredDropCapabilities` to `ALL`. An example of this can be found in the [Hardening Guide](../hardening_guide/). + + +### 5.3 Network Policies and CNI + + +#### 5.3.1 +Ensure that the CNI in use supports Network Policies (Not Scored) +
+Rationale +Kubernetes network policies are enforced by the CNI plugin in use. As such it is important to ensure that the CNI plugin supports both Ingress and Egress network policies. +
+
+**Result:** Pass
+
+**Audit:**
+Review the documentation of the CNI plugin in use by the cluster, and confirm that it supports Ingress and Egress network policies.
+
+**Remediation:**
+By default, K3s uses Flannel for CNI and runs an embedded network policy controller, so network policies are fully supported.
+
+
+#### 5.3.2
+Ensure that all Namespaces have Network Policies defined (Scored)
+Rationale +Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how selections of pods are allowed to communicate with each other and other network endpoints. + +Network Policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace all traffic will be allowed into and out of the pods in that namespace. +
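+
+As an illustration only, a minimal NetworkPolicy of the kind this control expects might look like the sketch below: it selects every pod in a namespace and allows ingress only from pods in the same namespace. The name and namespace are placeholders; the policies applied by the [Hardening Guide](../hardening_guide/) remain the authoritative example:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-allow-same-namespace   # placeholder name
+  namespace: example-ns                # placeholder namespace
+spec:
+  podSelector: {}            # selects every pod in the namespace
+  ingress:
+    - from:
+        - podSelector: {}    # allows traffic only from pods in this namespace
+  policyTypes:
+    - Ingress
+```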
+
+**Result:** Pass
+
+**Audit:**
+Run the below command on the master node.
+
+```bash
+for i in kube-system kube-public default; do
+  kubectl get networkpolicies -n $i;
+done
+```
+
+Verify that there are network policies applied to each of the namespaces.
+
+**Remediation:**
+An operator should apply NetworkPolicies that prevent unneeded traffic from traversing the network. An example of applying a NetworkPolicy can be found in the [Hardening Guide](../hardening_guide/).
+
+### 5.4 Secrets Management
+
+
+#### 5.4.1
+Prefer using secrets as files over secrets as environment variables (Not Scored)
+Rationale +It is reasonably common for application code to log out its environment (particularly in the event of an error). This will include any secret values passed in as environment variables, so secrets can easily be exposed to any user or entity who has access to the logs. +
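+
+For illustration only, the sketch below mounts a secret as files through a volume instead of exposing it as environment variables. The names and image are placeholders:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-as-file              # placeholder name
+spec:
+  containers:
+    - name: app
+      image: nginx:1.19             # placeholder image
+      volumeMounts:
+        - name: db-credentials
+          mountPath: /etc/secrets   # application reads files here instead of env vars
+          readOnly: true
+  volumes:
+    - name: db-credentials
+      secret:
+        secretName: db-credentials  # placeholder secret name
+```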
+ +**Result:** Not Scored + +**Audit:** +Run the following command to find references to objects which use environment variables defined from secrets. + +```bash +kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A +``` + +**Remediation:** +If possible, rewrite application code to read secrets from mounted secret files, rather than from environment variables. + + +#### 5.4.2 +Consider external secret storage (Not Scored) +
+Rationale +Kubernetes supports secrets as first-class objects, but care needs to be taken to ensure that access to secrets is carefully limited. Using an external secrets provider can ease the management of access to secrets, especially where secrets are used across both Kubernetes and non-Kubernetes environments. +
+ +**Result:** Not Scored + +**Audit:** +Review your secrets management implementation. + +**Remediation:** +Refer to the secrets management options offered by your cloud provider or a third-party secrets management solution. + + +### 5.5 Extensible Admission Control + + +#### 5.5.1 +Configure Image Provenance using ImagePolicyWebhook admission controller (Not Scored) +
+Rationale +Kubernetes supports plugging in provenance rules to accept or reject the images in your deployments. You could configure such rules to ensure that only approved images are deployed in the cluster. +
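+
+For illustration only, one way to configure this is to enable the `ImagePolicyWebhook` admission plugin on the API server and point it at an admission configuration file in the upstream Kubernetes format, similar to the sketch below; on K3s the relevant API server flags can be passed with `--kube-apiserver-arg`. The file path and TTL values are placeholders, not recommendations:
+
+```yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: AdmissionConfiguration
+plugins:
+  - name: ImagePolicyWebhook
+    configuration:
+      imagePolicy:
+        kubeConfigFile: /etc/kubernetes/image-policy/webhook-kubeconfig.yaml  # placeholder path to the webhook kubeconfig
+        allowTTL: 50          # seconds to cache approvals
+        denyTTL: 50           # seconds to cache denials
+        retryBackoff: 500     # milliseconds between retries
+        defaultAllow: false   # fail closed if the webhook is unreachable
+```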
+
+**Result:** Not Scored
+
+**Audit:**
+Review the pod definitions in your cluster and verify that image _provenance_ is configured as appropriate.
+
+**Remediation:**
+Follow the Kubernetes documentation and set up image provenance.
+
+
+### 5.6 Omitted
+The v1.5.1 Benchmark skips 5.6 and goes from 5.5 to 5.7. We are including it here merely for explanation.
+
+
+### 5.7 General Policies
+These policies relate to general cluster management topics, like namespace best practices and policies applied to pod objects in the cluster.
+
+
+#### 5.7.1
+Create administrative boundaries between resources using namespaces (Not Scored)
+Rationale
+Limiting the scope of user permissions can reduce the impact of mistakes or malicious activities. A Kubernetes namespace allows you to partition created resources into logically named groups. Resources created in one namespace can be hidden from other namespaces. By default, each resource created by a user in a Kubernetes cluster runs in the default namespace, called `default`. You can create additional namespaces and attach resources and users to them. You can use Kubernetes Authorization plugins to create policies that segregate access to namespace resources between different users.
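+
+For illustration only, the sketch below creates a namespace and grants a group the built-in `edit` role within it; the namespace, binding, and group names are placeholders:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: team-a              # placeholder namespace
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: team-a-editors      # placeholder binding name
+  namespace: team-a
+subjects:
+  - kind: Group
+    name: team-a-devs       # placeholder group
+    apiGroup: rbac.authorization.k8s.io
+roleRef:
+  kind: ClusterRole
+  name: edit                # built-in role granting read/write inside the namespace
+  apiGroup: rbac.authorization.k8s.io
+```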
+ +**Result:** Not Scored + +**Audit:** +Run the below command and review the namespaces created in the cluster. + +```bash +kubectl get namespaces +``` + +Ensure that these namespaces are the ones you need and are adequately administered as per your requirements. + +**Remediation:** +Follow the documentation and create namespaces for objects in your deployment as you need them. + + +#### 5.7.2 +Ensure that the seccomp profile is set to `docker/default` in your pod definitions (Not Scored) +
+Rationale +Seccomp (secure computing mode) is used to restrict the set of system calls applications can make, allowing cluster administrators greater control over the security of workloads running in the cluster. Kubernetes disables seccomp profiles by default for historical reasons. You should enable it to ensure that the workloads have restricted actions available within the container. +
+
+**Result:** Not Scored
+
+**Audit:**
+Review the pod definitions in your cluster and verify that they include an annotation like the one below:
+
+```yaml
+annotations:
+  seccomp.security.alpha.kubernetes.io/pod: docker/default
+```
+
+**Remediation:**
+Review the Kubernetes documentation and, if needed, apply a relevant PodSecurityPolicy.
+
+#### 5.7.3
+Apply Security Context to Your Pods and Containers (Not Scored)
+Rationale
+A security context defines the operating system security settings (uid, gid, capabilities, SELinux role, etc.) applied to a container. When designing your containers and pods, make sure that you configure the security context for your pods, containers, and volumes. A security context is a property defined in the deployment YAML. It controls the security parameters that will be assigned to the pod/container/volume. There are two levels of security context: pod-level security context and container-level security context.
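+
+For illustration only, the sketch below sets a security context at both the pod level and the container level; the name, image, and ID values are placeholders:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: security-context-example    # placeholder name
+spec:
+  securityContext:                  # pod-level security context
+    runAsNonRoot: true
+    runAsUser: 1000
+    fsGroup: 2000
+  containers:
+    - name: app
+      image: nginx:1.19             # placeholder image
+      securityContext:              # container-level security context
+        allowPrivilegeEscalation: false
+        readOnlyRootFilesystem: true
+        capabilities:
+          drop:
+            - ALL
+```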
+ +**Result:** Not Scored + +**Audit:** +Review the pod definitions in your cluster and verify that you have security contexts defined as appropriate. + +**Remediation:** +Follow the Kubernetes documentation and apply security contexts to your pods. For a suggested list of security contexts, you may refer to the CIS Security Benchmark. + + +#### 5.7.4 +The default namespace should not be used (Scored) +
+Rationale +Resources in a Kubernetes cluster should be segregated by namespace, to allow for security controls to be applied at that level and to make it easier to manage resources. +
+ +**Result:** Pass + +**Audit:** +Run the below command on the master node. + +```bash +kubectl get all -n default +``` + +The only entries there should be system-managed resources such as the kubernetes service. + +**Remediation:** +By default, K3s does not utilize the default namespace. diff --git a/content/k3s/latest/en/storage/_index.md b/content/k3s/latest/en/storage/_index.md index 760bd893fff..fd0dcba1168 100644 --- a/content/k3s/latest/en/storage/_index.md +++ b/content/k3s/latest/en/storage/_index.md @@ -75,7 +75,7 @@ The status should be Bound for each. [comment]: <> (pending change - longhorn may support arm64 and armhf in the future.) -> **Note:** At this time Longhorn only supports amd64. +> **Note:** At this time Longhorn only supports amd64 and arm64 (experimental). K3s supports [Longhorn](https://github.com/longhorn/longhorn). Longhorn is an open-source distributed block storage system for Kubernetes. diff --git a/content/os/v1.x/en/installation/amazon-ecs/_index.md b/content/os/v1.x/en/installation/amazon-ecs/_index.md index bc642bee97d..1379784c5bf 100644 --- a/content/os/v1.x/en/installation/amazon-ecs/_index.md +++ b/content/os/v1.x/en/installation/amazon-ecs/_index.md @@ -7,7 +7,7 @@ weight: 190 ### Pre-Requisites -Prior to launching RancherOS EC2 instances, the [ECS Container Instance IAM Role](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html) will need to have been created. This `ecsInstanceRole` will need to be used when launching EC2 instances. If you have been using ECS, you created this role if you followed the ECS "Get Started" interactive guide. +Before launching RancherOS EC2 instances, the [ECS Container Instance IAM Role](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html) will need to have been created. This `ecsInstanceRole` will need to be used when launching EC2 instances. If you have been using ECS, you created this role if you followed the ECS "Get Started" interactive guide. ### Launching an instance with ECS diff --git a/content/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/v2.0-v2.1/_index.md b/content/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/v2.0-v2.1/_index.md index 97e3968fdc0..4838d0942a8 100644 --- a/content/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/v2.0-v2.1/_index.md +++ b/content/rancher/v2.x/en/backups/v2.0.x-v2.4.x/restore/rke-restore/v2.0-v2.1/_index.md @@ -72,4 +72,4 @@ Because of the changes necessary to address [CVE-2018-20321](https://cve.mitre.o ``` After a few moments the clusters will go from `Unavailable` back to `Available`. -6. Continue using Rancher as normal. \ No newline at end of file +6. Continue using Rancher as normal. diff --git a/content/rancher/v2.x/en/backups/v2.5/configuration/backup-config/_index.md b/content/rancher/v2.x/en/backups/v2.5/configuration/backup-config/_index.md index db1fa9b2881..839c6d93029 100644 --- a/content/rancher/v2.x/en/backups/v2.5/configuration/backup-config/_index.md +++ b/content/rancher/v2.x/en/backups/v2.5/configuration/backup-config/_index.md @@ -142,7 +142,6 @@ data: Make sure to encode the keys to base64 in YAML file. Run the following command to encode the keys. 
- ``` echo -n "your_key" |base64 ``` @@ -190,4 +189,4 @@ After the role is created, and you have attached the corresponding instance prof # Examples -For example Backup custom resources, refer to [this page.](../../examples/#backup) \ No newline at end of file +For example Backup custom resources, refer to [this page.](../../examples/#backup) diff --git a/content/rancher/v2.x/en/backups/v2.5/restoring-rancher/_index.md b/content/rancher/v2.x/en/backups/v2.5/restoring-rancher/_index.md index ce965a74636..baf91b4ec95 100644 --- a/content/rancher/v2.x/en/backups/v2.5/restoring-rancher/_index.md +++ b/content/rancher/v2.x/en/backups/v2.5/restoring-rancher/_index.md @@ -59,4 +59,4 @@ kubectl logs -n cattle-resources-system -f ### Cleanup -If you created the restore resource with kubectl, remove the resource to prevent a naming conflict with future restores. \ No newline at end of file +If you created the restore resource with kubectl, remove the resource to prevent a naming conflict with future restores. diff --git a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md index 6d14ae9baf6..ad1bfc9e0e7 100644 --- a/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md +++ b/content/rancher/v2.x/en/cluster-admin/volumes-and-storage/provisioning-new-storage/_index.md @@ -44,7 +44,7 @@ To use a storage provisioner that is not on the above list, you will need to use These steps describe how to set up a storage class at the cluster level. -1. Go to the cluster for which you want to dynamically provision persistent storage volumes. +1. Go to the **Cluster Explorer** of the cluster for which you want to dynamically provision persistent storage volumes. 1. From the cluster view, select `Storage > Storage Classes`. Click `Add Class`. @@ -64,7 +64,7 @@ For full information about the storage class parameters, refer to the official [ These steps describe how to set up a PVC in the namespace where your stateful workload will be deployed. -1. Go to the project containing a workload that you want to add a PVC to. +1. Go to the **Cluster Manager** to the project containing a workload that you want to add a PVC to. 1. From the main navigation bar, choose **Resources > Workloads.** (In versions before v2.3.0, choose **Workloads** on the main navigation bar.) Then select the **Volumes** tab. Click **Add Volume**. @@ -94,7 +94,7 @@ To attach the PVC to a new workload, 1. Create a workload as you would in [Deploying Workloads]({{}}/rancher/v2.x/en/k8s-in-rancher/workloads/deploy-workloads/). 1. For **Workload Type**, select **Stateful set of 1 pod**. -1. Expand the **Volumes** section and click **Add Volume > Add a New Persistent Volume (Claim).** +1. Expand the **Volumes** section and click **Add Volume > Use an Existing Persistent Volume (Claim).** 1. In the **Persistent Volume Claim** section, select the newly created persistent volume claim that is attached to the storage class. 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Launch.** @@ -105,9 +105,9 @@ To attach the PVC to an existing workload, 1. Go to the project that has the workload that will have the PVC attached. 1. Go to the workload that will have persistent storage and click **⋮ > Edit.** -1. Expand the **Volumes** section and click **Add Volume > Add a New Persistent Volume (Claim).** +1. 
Expand the **Volumes** section and click **Add Volume > Use an Existing Persistent Volume (Claim).** 1. In the **Persistent Volume Claim** section, select the newly created persistent volume claim that is attached to the storage class. 1. In the **Mount Point** field, enter the path that the workload will use to access the volume. 1. Click **Save.** -**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage. \ No newline at end of file +**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage. diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md index b94c788f356..bddd5ddd319 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/provisioning-vsphere-clusters/_index.md @@ -142,4 +142,4 @@ After creating your cluster, you can access it through the Rancher UI. As a best - **Access your cluster with the kubectl CLI:** Follow [these steps]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#accessing-clusters-with-kubectl-on-your-workstation) to access clusters with kubectl on your workstation. In this case, you will be authenticated through the Rancher server’s authentication proxy, then Rancher will connect you to the downstream cluster. This method lets you manage the cluster without the Rancher UI. - **Access your cluster with the kubectl CLI, using the authorized cluster endpoint:** Follow [these steps]({{}}/rancher/v2.x/en/cluster-admin/cluster-access/kubectl/#authenticating-directly-with-a-downstream-cluster) to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative method to access your cluster so that in case you can’t connect to Rancher, you can still access the cluster. 
-- **Provision Storage:** For an example of how to provision storage in vSphere using Rancher, refer to [this section.]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere) In order to dynamically provision storage in vSphere, the vSphere provider must be [enabled.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere) \ No newline at end of file +- **Provision Storage:** For an example of how to provision storage in vSphere using Rancher, refer to [this section.]({{}}/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere) In order to dynamically provision storage in vSphere, the vSphere provider must be [enabled.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/vsphere) diff --git a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md index 65dec16562d..d660f823fca 100644 --- a/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md +++ b/content/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/_index.md @@ -13,4 +13,4 @@ The vSphere node templates in Rancher were updated in the following Rancher vers - [v2.2.0](./v2.2.0) - [v2.0.4](./v2.0.4) -For Rancher versions before v2.0.4, refer to [this version.](./prior-to-2.0.4) \ No newline at end of file +For Rancher versions before v2.0.4, refer to [this version.](./prior-to-2.0.4) diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md index 8b391fdc81b..b8d07b51c62 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/_index.md @@ -1,7 +1,7 @@ --- -title: Install Rancher on a Kubernetes Cluster +title: Install/Upgrade Rancher on a Kubernetes Cluster description: Learn how to install Rancher in development and production environments. Read about single node and high availability installation -weight: 3 +weight: 2 aliases: - /rancher/v2.x/en/installation/k8s-install/ - /rancher/v2.x/en/installation/k8s-install/helm-rancher diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md index 1fe92cee4b6..4037760c9aa 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-k8s/upgrades/helm2/_index.md @@ -118,8 +118,10 @@ If you are currently running the cert-manger whose version is older than v0.11, 1. Uninstall Rancher ``` - helm delete rancher -n cattle-system + helm delete rancher ``` + + In case this results in an error that the release "rancher" was not found, make sure you are using the correct deployment name. Use `helm list` to list the helm-deployed releases. 2. Uninstall and reinstall `cert-manager` according to the instructions on the [Upgrading Cert-Manager]({{}}/rancher/v2.x/en/installation/options/upgrading-cert-manager/helm-2-instructions) page. 
diff --git a/content/rancher/v2.x/en/installation/install-rancher-on-linux/_index.md b/content/rancher/v2.x/en/installation/install-rancher-on-linux/_index.md index 487f6058742..c7c69757429 100644 --- a/content/rancher/v2.x/en/installation/install-rancher-on-linux/_index.md +++ b/content/rancher/v2.x/en/installation/install-rancher-on-linux/_index.md @@ -1,6 +1,6 @@ --- -title: Install Rancher on a Linux OS -weight: 2 +title: Install/Upgrade Rancher on a Linux OS +weight: 3 --- _Available as of Rancher v2.5.4_ diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/launch-kubernetes/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/launch-kubernetes/_index.md index 5556ed41ad0..0393ea433b9 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/launch-kubernetes/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/behind-proxy/launch-kubernetes/_index.md @@ -15,7 +15,7 @@ For convenience export the IP address and port of your proxy into an environment export proxy_host="10.0.0.5:8888" export HTTP_PROXY=http://${proxy_host} export HTTPS_PROXY=http://${proxy_host} -export NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16 +export NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,cattle-system.svc ``` Next configure apt to use this proxy when installing packages. If you are not using Ubuntu, you have to adapt this step accordingly: diff --git a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks/_index.md b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks/_index.md index 60238ff422c..589c8781f4d 100644 --- a/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks/_index.md +++ b/content/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/single-node-rollbacks/_index.md @@ -44,7 +44,7 @@ If you have issues upgrading Rancher, roll it back to its latest known healthy s 1. Using a remote Terminal connection, log into the node running your Rancher Server. -1. Pull the version of Rancher that you were running before upgrade. Replace the `` with [that version](#before-you-start). +1. Pull the version of Rancher that you were running before upgrade. Replace the `` with that version. For example, if you were running Rancher v2.0.5 before upgrade, pull v2.0.5. diff --git a/content/rancher/v2.x/en/installation/requirements/ports/_index.md b/content/rancher/v2.x/en/installation/requirements/ports/_index.md index 29340f95674..aaa0c2d6cbb 100644 --- a/content/rancher/v2.x/en/installation/requirements/ports/_index.md +++ b/content/rancher/v2.x/en/installation/requirements/ports/_index.md @@ -168,6 +168,14 @@ The following tables break down the port requirements for Rancher nodes, for inb {{% /accordion %}} +### Ports for Rancher Server in GCP GKE + +When deploying Rancher into a Google Kubernetes Engine [private cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters), the nodes where Rancher runs must be accessible from the control plane: + +| Protocol | Port | Source | Description | +|-----|-----|----------------|---| +| TCP | 9443 | The GKE master `/28` range | Rancher webhooks | + # Downstream Kubernetes Cluster Nodes Downstream Kubernetes clusters run your apps and services. 
This section describes what ports need to be opened on the nodes in downstream clusters so that Rancher can communicate with them. diff --git a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md index fafbf80e4e4..564ccdb49fb 100644 --- a/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md +++ b/content/rancher/v2.x/en/installation/resources/k8s-tutorials/infrastructure-tutorials/ec2-node/_index.md @@ -35,9 +35,11 @@ If the Rancher server is installed in a single Docker container, you only need o 1. Choose a new or existing key pair that you will use to connect to your instance later. If you are using an existing key pair, make sure you already have access to the private key. 1. Click **Launch Instances.** -**Result:** You have created Rancher nodes that satisfy the requirements for OS, hardware, and networking. Next, you will install Docker on each node. +**Result:** You have created Rancher nodes that satisfy the requirements for OS, hardware, and networking. -### 3. Install Docker and Create User +**Note:** If the nodes are being used for an RKE Kubernetes cluster, install Docker on each node in the next step. For a K3s Kubernetes cluster, the nodes are now ready to install K3s. + +### 3. Install Docker and Create User for RKE Kubernetes Cluster Nodes 1. From the [AWS EC2 console,](https://console.aws.amazon.com/ec2/) click **Instances** in the left panel. 1. Go to the instance that you want to install Docker on. Select the instance and click **Actions > Connect.** diff --git a/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md b/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md index a55a0b89f25..39819061e7a 100644 --- a/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md +++ b/content/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/_index.md @@ -76,7 +76,7 @@ In this task, you can use the versatile **Custom** option. This option lets you 1. From the **Clusters** page, click **Add Cluster**. -2. Choose **Custom**. +2. Choose **Existing Nodes**. 3. Enter a **Cluster Name**. diff --git a/content/rke/latest/en/config-options/add-ons/_index.md b/content/rke/latest/en/config-options/add-ons/_index.md index 89695c786c3..a24a6b2d72a 100644 --- a/content/rke/latest/en/config-options/add-ons/_index.md +++ b/content/rke/latest/en/config-options/add-ons/_index.md @@ -16,7 +16,7 @@ There are a few things worth noting: * In addition to these pluggable add-ons, you can specify an add-on that you want deployed after the cluster deployment is complete. * As of v0.1.8, RKE will update an add-on if it is the same name. -* Prior to v0.1.8, update any add-ons by using `kubectl edit`. +* Before v0.1.8, update any add-ons by using `kubectl edit`. ## Critical and Non-Critical Add-ons diff --git a/content/rke/latest/en/config-options/add-ons/ingress-controllers/_index.md b/content/rke/latest/en/config-options/add-ons/ingress-controllers/_index.md index ad70ea165a4..62d29580145 100644 --- a/content/rke/latest/en/config-options/add-ons/ingress-controllers/_index.md +++ b/content/rke/latest/en/config-options/add-ons/ingress-controllers/_index.md @@ -6,7 +6,7 @@ weight: 262 By default, RKE deploys the NGINX ingress controller on all schedulable nodes. 
-> **Note:** As of v0.1.8, only workers are considered schedulable nodes, but prior to v0.1.8, worker and controlplane nodes were considered schedulable nodes. +> **Note:** As of v0.1.8, only workers are considered schedulable nodes, but before v0.1.8, worker and controlplane nodes were considered schedulable nodes. RKE will deploy the ingress controller as a DaemonSet with `hostnetwork: true`, so ports `80`, and `443` will be opened on each node where the controller is deployed. diff --git a/content/rke/latest/en/config-options/add-ons/user-defined-add-ons/_index.md b/content/rke/latest/en/config-options/add-ons/user-defined-add-ons/_index.md index 72808d38936..fb874b9b134 100644 --- a/content/rke/latest/en/config-options/add-ons/user-defined-add-ons/_index.md +++ b/content/rke/latest/en/config-options/add-ons/user-defined-add-ons/_index.md @@ -18,7 +18,7 @@ RKE only adds additional add-ons when using `rke up` multiple times. RKE does ** As of v0.1.8, RKE will update an add-on if it is the same name. -Prior to v0.1.8, update any add-ons by using `kubectl edit`. +Before v0.1.8, update any add-ons by using `kubectl edit`. ## In-line Add-ons diff --git a/content/rke/latest/en/config-options/cloud-providers/vsphere/enabling-uuid/_index.md b/content/rke/latest/en/config-options/cloud-providers/vsphere/enabling-uuid/_index.md index 15b8c0e8946..df0278d5088 100644 --- a/content/rke/latest/en/config-options/cloud-providers/vsphere/enabling-uuid/_index.md +++ b/content/rke/latest/en/config-options/cloud-providers/vsphere/enabling-uuid/_index.md @@ -32,4 +32,4 @@ $ govc vm.change -vm -e disk.enableUUID=TRUE In Rancher v2.0.4+, disk UUIDs are enabled in vSphere node templates by default. -If you are using Rancher prior to v2.0.4, refer to the [vSphere node template documentation.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/prior-to-2.0.4/#disk-uuids) for details on how to enable a UUID with a Rancher node template. +If you are using Rancher before v2.0.4, refer to the [vSphere node template documentation.]({{}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/vsphere/vsphere-node-template-config/before-2.0.4/#disk-uuids) for details on how to enable a UUID with a Rancher node template. diff --git a/content/rke/latest/en/config-options/nodes/_index.md b/content/rke/latest/en/config-options/nodes/_index.md index 68c315a332d..fad9ee1409b 100644 --- a/content/rke/latest/en/config-options/nodes/_index.md +++ b/content/rke/latest/en/config-options/nodes/_index.md @@ -78,7 +78,7 @@ nodes: You can specify the list of roles that you want the node to be as part of the Kubernetes cluster. Three roles are supported: `controlplane`, `etcd` and `worker`. Node roles are not mutually exclusive. It's possible to assign any combination of roles to any node. It's also possible to change a node's role using the upgrade process. -> **Note:** Prior to v0.1.8, workloads/pods might have run on any nodes with `worker` or `controlplane` roles, but as of v0.1.8, they will only be deployed to any `worker` nodes. +> **Note:** Before v0.1.8, workloads/pods might have run on any nodes with `worker` or `controlplane` roles, but as of v0.1.8, they will only be deployed to any `worker` nodes. 
### etcd diff --git a/content/rke/latest/en/config-options/private-registries/_index.md b/content/rke/latest/en/config-options/private-registries/_index.md index 2f448920312..1fe91b8b182 100644 --- a/content/rke/latest/en/config-options/private-registries/_index.md +++ b/content/rke/latest/en/config-options/private-registries/_index.md @@ -35,5 +35,5 @@ By default, all system images are being pulled from DockerHub. If you are on a s As of v0.1.10, you have to configure your private registry credentials, but you can specify this registry as a default registry so that all [system images]({{}}/rke/latest/en/config-options/system-images/) are pulled from the designated private registry. You can use the command `rke config --system-images` to get the list of default system images to populate your private registry. -Prior to v0.1.10, you had to configure your private registry credentials **and** update the names of all the [system images]({{}}/rke/latest/en/config-options/system-images/) in the `cluster.yml` so that the image names would have the private registry URL appended before each image name. +Before v0.1.10, you had to configure your private registry credentials **and** update the names of all the [system images]({{}}/rke/latest/en/config-options/system-images/) in the `cluster.yml` so that the image names would have the private registry URL appended before each image name. diff --git a/content/rke/latest/en/config-options/services/services-extras/_index.md b/content/rke/latest/en/config-options/services/services-extras/_index.md index 57f623800ab..8c86d64de56 100644 --- a/content/rke/latest/en/config-options/services/services-extras/_index.md +++ b/content/rke/latest/en/config-options/services/services-extras/_index.md @@ -11,13 +11,13 @@ For any of the Kubernetes services, you can update the `extra_args` to change th As of `v0.1.3`, using `extra_args` will add new arguments and **override** any existing defaults. For example, if you need to modify the default admission plugins list, you need to include the default list and edit it with your changes so all changes are included. -Prior to `v0.1.3`, using `extra_args` would only add new arguments to the list and there was no ability to change the default list. +Before `v0.1.3`, using `extra_args` would only add new arguments to the list and there was no ability to change the default list. All service defaults and parameters are defined per [`kubernetes_version`]({{}}/rke/latest/en/config-options/#kubernetes-version): - For RKE v0.3.0+, the service defaults and parameters are defined per [`kubernetes_version`]({{}}/rke/latest/en/config-options/#kubernetes-version). The service defaults are located [here](https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_service_options.go). The default list of admissions plugins is the same for all Kubernetes versions and is located [here](https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_service_options.go#L11). -- For RKE prior to v0.3.0, the service defaults and admission plugins are defined per [`kubernetes_version`]({{}}/rke/latest/en/config-options/#kubernetes-version) and located [here](https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go). 
+- For RKE before v0.3.0, the service defaults and admission plugins are defined per [`kubernetes_version`]({{}}/rke/latest/en/config-options/#kubernetes-version) and located [here](https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go). ```yaml services: diff --git a/content/rke/latest/en/config-options/system-images/_index.md b/content/rke/latest/en/config-options/system-images/_index.md index 041a99a186e..148168a5821 100644 --- a/content/rke/latest/en/config-options/system-images/_index.md +++ b/content/rke/latest/en/config-options/system-images/_index.md @@ -63,7 +63,7 @@ system_images: metrics_server: rancher/metrics-server-amd64:v0.3.1 ``` -Prior to `v0.1.6`, instead of using the `rancher/rke-tools` image, we used the following images: +Before `v0.1.6`, instead of using the `rancher/rke-tools` image, we used the following images: ```yaml system_images: diff --git a/content/rke/latest/en/etcd-snapshots/example-scenarios/_index.md b/content/rke/latest/en/etcd-snapshots/example-scenarios/_index.md index abf4f768cbc..3cae808ab71 100644 --- a/content/rke/latest/en/etcd-snapshots/example-scenarios/_index.md +++ b/content/rke/latest/en/etcd-snapshots/example-scenarios/_index.md @@ -100,18 +100,18 @@ nginx-65899c769f-qkhml 1/1 Running 0 17s ``` {{% /tab %}} -{{% tab "RKE prior to v0.2.0" %}} +{{% tab "RKE before v0.2.0" %}} This walkthrough will demonstrate how to restore an etcd cluster from a local snapshot with the following steps: -1. [Take a local snapshot of the cluster](#take-a-local-snapshot-of-the-cluster-rke-prior-to-v0.2.0) -1. [Store the snapshot externally](#store-the-snapshot-externally-rke-prior-to-v0.2.0) -1. [Simulate a node failure](#simulate-a-node-failure-rke-prior-to-v0.2.0) -1. [Remove the Kubernetes cluster and clean the nodes](#remove-the-kubernetes-cluster-and-clean-the-nodes-rke-prior-to-v0.2.0) -1. [Retrieve the backup and place it on a new node](#retrieve-the-backup-and-place-it-on-a-new-node-rke-prior-to-v0.2.0) -1. [Add a new etcd node to the Kubernetes cluster](#add-a-new-etcd-node-to-the-kubernetes-cluster-rke-prior-to-v0.2.0) -1. [Restore etcd on the new node from the backup](#restore-etcd-on-the-new-node-from-the-backup-rke-prior-to-v0.2.0) -1. [Restore Operations on the Cluster](#restore-operations-on-the-cluster-rke-prior-to-v0.2.0) +1. [Take a local snapshot of the cluster](#take-a-local-snapshot-of-the-cluster-rke-before-v0.2.0) +1. [Store the snapshot externally](#store-the-snapshot-externally-rke-before-v0.2.0) +1. [Simulate a node failure](#simulate-a-node-failure-rke-before-v0.2.0) +1. [Remove the Kubernetes cluster and clean the nodes](#remove-the-kubernetes-cluster-and-clean-the-nodes-rke-before-v0.2.0) +1. [Retrieve the backup and place it on a new node](#retrieve-the-backup-and-place-it-on-a-new-node-rke-before-v0.2.0) +1. [Add a new etcd node to the Kubernetes cluster](#add-a-new-etcd-node-to-the-kubernetes-cluster-rke-before-v0.2.0) +1. [Restore etcd on the new node from the backup](#restore-etcd-on-the-new-node-from-the-backup-rke-before-v0.2.0) +1. [Restore Operations on the Cluster](#restore-operations-on-the-cluster-rke-before-v0.2.0) ### Example Scenario of restoring from a Local Snapshot @@ -122,7 +122,7 @@ In this example, the Kubernetes cluster was deployed on two AWS nodes. | node1 | 10.0.0.1 | [controlplane, worker] | | node2 | 10.0.0.2 | [etcd] | - + ### 1. 
Take a Local Snapshot of the Cluster Back up the Kubernetes cluster by taking a local snapshot: @@ -131,7 +131,7 @@ Back up the Kubernetes cluster by taking a local snapshot: $ rke etcd snapshot-save --name snapshot.db --config cluster.yml ``` - + ### 2. Store the Snapshot Externally After taking the etcd snapshot on `node2`, we recommend saving this backup in a persistent place. One of the options is to save the backup and `pki.bundle.tar.gz` file on an S3 bucket or tape backup. @@ -145,7 +145,7 @@ root@node2:~# s3cmd \ s3://rke-etcd-backup/ ``` - + ### 3. Simulate a Node Failure To simulate the failure, let's power down `node2`. @@ -159,7 +159,7 @@ root@node2:~# poweroff | node1 | 10.0.0.1 | [controlplane, worker] | | ~~node2~~ | ~~10.0.0.2~~ | ~~[etcd]~~ | - + ### 4. Remove the Kubernetes Cluster and Clean the Nodes The following command removes your cluster and cleans the nodes so that the cluster can be restored without any conflicts: @@ -168,7 +168,7 @@ The following command removes your cluster and cleans the nodes so that the clus rke remove --config rancher-cluster.yml ``` - + ### 5. Retrieve the Backup and Place it On a New Node Before restoring etcd and running `rke up`, we need to retrieve the backup saved on S3 to a new node, e.g. `node3`. @@ -190,7 +190,7 @@ root@node3:~# s3cmd get \ > **Note:** If you had multiple etcd nodes, you would have to manually sync the snapshot and `pki.bundle.tar.gz` across all of the etcd nodes in the cluster. - + ### 6. Add a New etcd Node to the Kubernetes Cluster Before updating and restoring etcd, you will need to add the new node into the Kubernetes cluster with the `etcd` role. In the `cluster.yml`, comment out the old node and add in the new node. ` @@ -215,7 +215,7 @@ nodes: - etcd ``` - + ### 7. Restore etcd on the New Node from the Backup After the new node is added to the `cluster.yml`, run the `rke etcd snapshot-restore` command to launch `etcd` from the backup: @@ -226,7 +226,7 @@ $ rke etcd snapshot-restore --name snapshot.db --config cluster.yml The snapshot and `pki.bundle.tar.gz` file are expected to be saved at `/opt/rke/etcd-snapshots` on each etcd node. - + ### 8. Restore Operations on the Cluster Finally, we need to restore the operations on the cluster. We will make the Kubernetes API point to the new `etcd` by running `rke up` again using the new `cluster.yml`. diff --git a/content/rke/latest/en/etcd-snapshots/one-time-snapshots/_index.md b/content/rke/latest/en/etcd-snapshots/one-time-snapshots/_index.md index f1cfefeec90..ea37b69f44a 100644 --- a/content/rke/latest/en/etcd-snapshots/one-time-snapshots/_index.md +++ b/content/rke/latest/en/etcd-snapshots/one-time-snapshots/_index.md @@ -94,7 +94,7 @@ Below is an [example IAM policy](https://docs.aws.amazon.com/IAM/latest/UserGuid For details on giving an application access to S3, refer to the AWS documentation on [Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html) {{% /tab %}} -{{% tab "RKE prior to v0.2.0" %}} +{{% tab "RKE before v0.2.0" %}} To save a snapshot of etcd from each etcd node in the cluster config file, run the `rke etcd snapshot-save` command. 
diff --git a/content/rke/latest/en/etcd-snapshots/recurring-snapshots/_index.md b/content/rke/latest/en/etcd-snapshots/recurring-snapshots/_index.md index 145f3466510..4e4cd3fdee6 100644 --- a/content/rke/latest/en/etcd-snapshots/recurring-snapshots/_index.md +++ b/content/rke/latest/en/etcd-snapshots/recurring-snapshots/_index.md @@ -30,8 +30,8 @@ time="2018-05-04T18:43:16Z" level=info msg="Created backup" name="2018-05-04T18: |Option|Description| S3 Specific | |---|---| --- | -|**interval_hours**| The duration in hours between recurring backups. This supercedes the `creation` option (which was used in RKE prior to v0.2.0) and will override it if both are specified.| | -|**retention**| The number of snapshots to retain before rotation. If the retention is configured in both `etcd.retention` (time period to keep snapshots in hours), which was required in RKE prior to v0.2.0, and at `etcd.backup_config.retention` (number of snapshots), the latter will be used. | | +|**interval_hours**| The duration in hours between recurring backups. This supercedes the `creation` option (which was used in RKE before v0.2.0) and will override it if both are specified.| | +|**retention**| The number of snapshots to retain before rotation. If the retention is configured in both `etcd.retention` (time period to keep snapshots in hours), which was required in RKE before v0.2.0, and at `etcd.backup_config.retention` (number of snapshots), the latter will be used. | | |**bucket_name**| S3 bucket name where backups will be stored| * | |**folder**| Folder inside S3 bucket where backups will be stored. This is optional. _Available as of v0.3.0_ | * | |**access_key**| S3 access key with permission to access the backup bucket.| * | @@ -96,11 +96,11 @@ services: ``` {{% /tab %}} -{{% tab "RKE prior to v0.2.0"%}} +{{% tab "RKE before v0.2.0"%}} To schedule automatic recurring etcd snapshots, you can enable the `etcd-snapshot` service with [extra configuration options](#options-for-the-local-etcd-snapshot-service). `etcd-snapshot` runs in a service container alongside the `etcd` container. By default, the `etcd-snapshot` service takes a snapshot for every node that has the `etcd` role and stores them to local disk in `/opt/rke/etcd-snapshots`. -RKE saves a backup of the certificates, i.e. a file named `pki.bundle.tar.gz`, in the same location. The snapshot and pki bundle file are required for the restore process in versions prior to v0.2.0. +RKE saves a backup of the certificates, i.e. a file named `pki.bundle.tar.gz`, in the same location. The snapshot and pki bundle file are required for the restore process in versions before v0.2.0. ### Snapshot Service Logging diff --git a/content/rke/latest/en/etcd-snapshots/restoring-from-backup/_index.md b/content/rke/latest/en/etcd-snapshots/restoring-from-backup/_index.md index 7291c3605bd..50f22e8692e 100644 --- a/content/rke/latest/en/etcd-snapshots/restoring-from-backup/_index.md +++ b/content/rke/latest/en/etcd-snapshots/restoring-from-backup/_index.md @@ -74,7 +74,7 @@ $ rke etcd snapshot-restore \ | `--ignore-docker-version` | [Disable Docker version check]({{}}/rke/latest/en/config-options/#supported-docker-versions) | {{% /tab %}} -{{% tab "RKE prior to v0.2.0"%}} +{{% tab "RKE before v0.2.0"%}} If there is a disaster with your Kubernetes cluster, you can use `rke etcd snapshot-restore` to recover your etcd. This command reverts etcd to a specific snapshot and should be run on an etcd node of the the specific cluster that has suffered the disaster. 
diff --git a/content/rke/latest/en/installation/_index.md b/content/rke/latest/en/installation/_index.md index 44adb3e9509..b96e58af4b7 100644 --- a/content/rke/latest/en/installation/_index.md +++ b/content/rke/latest/en/installation/_index.md @@ -178,7 +178,7 @@ The Kubernetes cluster state, which consists of the cluster configuration file ` As of v0.2.0, RKE creates a `.rkestate` file in the same directory that has the cluster configuration file `cluster.yml`. The `.rkestate` file contains the current state of the cluster including the RKE configuration and the certificates. It is required to keep this file in order to update the cluster or perform any operation on it through RKE. -Prior to v0.2.0, RKE saved the Kubernetes cluster state as a secret. When updating the state, RKE pulls the secret, updates/changes the state and saves a new secret. +Before v0.2.0, RKE saved the Kubernetes cluster state as a secret. When updating the state, RKE pulls the secret, updates/changes the state and saves a new secret. ## Interacting with your Kubernetes cluster diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index 1c44104e1fe..da210b14422 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -5,23 +5,31 @@ weight: 5 **In this section:** - - [Operating System](#operating-system) - - [General Linux Requirements](#general-linux-requirements) - - [Red Hat Enterprise Linux (RHEL) / Oracle Linux (OL) / CentOS](#red-hat-enterprise-linux-rhel-oracle-enterprise-linux-ol-centos) - - - [Using upstream Docker](#using-upstream-docker) - - [Using RHEL/CentOS packaged Docker](#using-rhel-centos-packaged-docker) - - [Notes about Atomic Nodes](#red-hat-atomic) - - - [OpenSSH version](#openssh-version) - - [Creating a Docker Group](#creating-a-docker-group) - - [Flatcar Container Linux](#flatcar-container-linux) + - [General Linux Requirements](#general-linux-requirements) + - [SUSE Linux Enterprise Server (SLES) / openSUSE](#suse-linux-enterprise-server-sles--opensuse) + - [Using Upstream Docker](#using-upstream-docker) + - [Using SUSE/openSUSE packaged Docker](#using-suseopensuse-packaged-docker) + - [Adding the Software Repository for Docker](#adding-the-software-repository-for-docker) + - [openSUSE MicroOS/Kubic (Atomic)](#opensuse-microoskubic-atomic) + - [openSUSE MicroOS](#opensuse-microos) + - [openSUSE Kubic](#opensuse-kubic) + - [Red Hat Enterprise Linux (RHEL) / Oracle Linux (OL) / CentOS](#red-hat-enterprise-linux-rhel--oracle-linux-ol--centos) + - [Using upstream Docker](#using-upstream-docker-1) + - [Using RHEL/CentOS packaged Docker](#using-rhelcentos-packaged-docker) + - [Red Hat Atomic](#red-hat-atomic) + - [OpenSSH version](#openssh-version) + - [Creating a Docker Group](#creating-a-docker-group) + - [Flatcar Container Linux](#flatcar-container-linux) - [Software](#software) + - [OpenSSH](#openssh) + - [Kubernetes](#kubernetes) + - [Docker](#docker) + - [Installing Docker](#installing-docker) + - [Checking the Installed Docker Version](#checking-the-installed-docker-version) - [Ports](#ports) - - - [Opening port TCP/6443 using `iptables`](#opening-port-tcp-6443-using-iptables) - - [Opening port TCP/6443 using `firewalld`](#opening-port-tcp-6443-using-firewalld) + - [Opening port TCP/6443 using `iptables`](#opening-port-tcp6443-using-iptables) + - [Opening port TCP/6443 using `firewalld`](#opening-port-tcp6443-using-firewalld) - [SSH Server Configuration](#ssh-server-configuration) @@ -99,6 +107,80 @@ xt_tcpudp | 
net.bridge.bridge-nf-call-iptables=1
```
+### SUSE Linux Enterprise Server (SLES) / openSUSE
+
+If you are using SUSE Linux Enterprise Server or openSUSE, follow the instructions below.
+
+#### Using upstream Docker
+If you are using upstream Docker, the package name is `docker-ce` or `docker-ee`. You can check the installed package by executing:
+
+```
+rpm -q docker-ce
+```
+
+When using the upstream Docker packages, please follow [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user).
+
+#### Using SUSE/openSUSE packaged Docker
+If you are using the Docker package supplied by SUSE/openSUSE, the package name is `docker`. You can check the installed package by executing:
+
+```
+rpm -q docker
+```
+
+#### Adding the Software Repository for Docker
+In SUSE Linux Enterprise Server 15 SP2, Docker is found in the Containers module.
+This module needs to be added before installing Docker.
+
+To list the available modules, you can run SUSEConnect to show the extensions and the corresponding activation commands:
+```
+node:~ # SUSEConnect --list-extensions
+AVAILABLE EXTENSIONS AND MODULES
+
+    Basesystem Module 15 SP2 x86_64 (Activated)
+    Deactivate with: SUSEConnect -d -p sle-module-basesystem/15.2/x86_64
+
+    Containers Module 15 SP2 x86_64
+    Activate with: SUSEConnect -p sle-module-containers/15.2/x86_64
+```
+Run this SUSEConnect command to activate the Containers module:
+```
+node:~ # SUSEConnect -p sle-module-containers/15.2/x86_64
+Registering system to registration proxy https://rmt.seader.us
+
+Updating system details on https://rmt.seader.us ...
+
+Activating sle-module-containers 15.2 x86_64 ...
+-> Adding service to system ...
+-> Installing release package ...
+
+Successfully registered system
+```
+In order to run Docker CLI commands with your user, you need to add this user to the `docker` group.
+It is preferred not to use the root user for this.
+
+```
+usermod -aG docker <user_name>
+```
+
+To verify that the user is correctly configured, log out of the node, log back in using SSH or your preferred method, and execute `docker ps`:
+
+```
+ssh user@node
+user@node:~> docker ps
+CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
+user@node:~>
+```
+### openSUSE MicroOS/Kubic (Atomic)
+Consult the project pages for openSUSE MicroOS and Kubic for installation instructions.
+#### openSUSE MicroOS
+Designed to host container workloads with automated administration and patching. Installing openSUSE MicroOS gives you a quick, small environment for deploying containers, or any other workload that benefits from transactional updates. As a rolling release distribution, the software is always up-to-date.
+https://microos.opensuse.org
+#### openSUSE Kubic
+Based on MicroOS, but not a rolling release distribution. It is designed with the same things in mind, but is also a Certified Kubernetes Distribution.
+https://kubic.opensuse.org
+Installation instructions:
+https://kubic.opensuse.org/blog/2021-02-08-MicroOS-Kubic-Rancher-RKE/
+
 ### Red Hat Enterprise Linux (RHEL) / Oracle Linux (OL) / CentOS
 
 If using Red Hat Enterprise Linux, Oracle Linux or CentOS, you cannot use the `root` user as [SSH user]({{}}/rke/latest/en/config-options/nodes/#ssh-user) due to [Bugzilla 1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). Please follow the instructions below how to setup Docker correctly, based on the way you installed Docker on the node.
diff --git a/content/rke/latest/en/upgrades/_index.md b/content/rke/latest/en/upgrades/_index.md index fdbabdda02f..f64c128afdd 100644 --- a/content/rke/latest/en/upgrades/_index.md +++ b/content/rke/latest/en/upgrades/_index.md @@ -46,7 +46,7 @@ This file is created in the same directory that has the cluster configuration fi It is required to keep the `cluster.rkestate` file to perform any operation on the cluster through RKE, or when upgrading a cluster last managed via RKE v0.2.0 or later. {{% /tab %}} -{{% tab "RKE prior to v0.2.0" %}} +{{% tab "RKE before v0.2.0" %}} Ensure that the `kube_config_cluster.yml` file is present in the working directory. RKE saves the Kubernetes cluster state as a secret. When updating the state, RKE pulls the secret, updates or changes the state, and saves a new secret. The `kube_config_cluster.yml` file is required for upgrading a cluster last managed via RKE v0.1.x. @@ -103,7 +103,7 @@ In addition, if neither `kubernetes_version` nor `system_images` are configured As of v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, then RKE will error out. -Prior to v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, the default version from the supported list is used. +Before v0.2.0, if a version is defined in `kubernetes_version` and is not found in the specific list of supported Kubernetes versions, the default version from the supported list is used. If you want to use a different version from the supported list, please use the [system images]({{}}/rke/latest/en/config-options/system-images/) option. @@ -113,7 +113,7 @@ In RKE, `kubernetes_version` is used to map the version of Kubernetes to the def For RKE v0.3.0+, the service defaults are located [here](https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_service_options.go). -For RKE prior to v0.3.0, the service defaults are located [here](https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go). Note: The version in the path of the service defaults file corresponds to a Rancher version. Therefore, for Rancher v2.1.x, [this file](https://github.com/rancher/types/blob/release/v2.1/apis/management.cattle.io/v3/k8s_defaults.go) should be used. +For RKE before v0.3.0, the service defaults are located [here](https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go). Note: The version in the path of the service defaults file corresponds to a Rancher version. Therefore, for Rancher v2.1.x, [this file](https://github.com/rancher/types/blob/release/v2.1/apis/management.cattle.io/v3/k8s_defaults.go) should be used. ### Service Upgrades diff --git a/content/rke/latest/en/upgrades/how-upgrades-work/_index.md b/content/rke/latest/en/upgrades/how-upgrades-work/_index.md index 77ef6cfec1f..c7eb6fa7390 100644 --- a/content/rke/latest/en/upgrades/how-upgrades-work/_index.md +++ b/content/rke/latest/en/upgrades/how-upgrades-work/_index.md @@ -65,7 +65,7 @@ For more information on configuring the number of replicas for each addon, refer For an example showing how to configure the addons, refer to the [example cluster.yml.]({{}}/rke/latest/en/upgrades/configuring-strategy/#example-cluster-yml) {{% /tab %}} -{{% tab "RKE prior to v1.1.0" %}} +{{% tab "RKE before v1.1.0" %}} When a cluster is upgraded with `rke up`, using the default options, the following process is used: