diff --git a/content/k3s/latest/en/upgrades/_index.md b/content/k3s/latest/en/upgrades/_index.md index 1b9c86805ad..fad09759854 100644 --- a/content/k3s/latest/en/upgrades/_index.md +++ b/content/k3s/latest/en/upgrades/_index.md @@ -3,12 +3,22 @@ title: "Upgrades" weight: 25 --- -This section describes how to upgrade your K3s cluster. +### Upgrading your K3s cluster [Upgrade basics]({{< baseurl >}}/k3s/latest/en/upgrades/basic/) describes several techniques for upgrading your cluster manually. It can also be used as a basis for upgrading through third-party Infrastructure-as-Code tools like [Terraform](https://www.terraform.io/). [Automated upgrades]({{< baseurl >}}/k3s/latest/en/upgrades/automated/) describes how to perform Kubernetes-native automated upgrades using Rancher's [system-upgrade-controller](https://github.com/rancher/system-upgrade-controller). -> If Traefik is not disabled K3s versions 1.20 and earlier will have installed Traefik v1, while K3s versions 1.21 and later will install Traefik v2 if v1 is not already present. To upgrade Traefik, please refer to the [Traefik documentation](https://doc.traefik.io/traefik/migration/v1-to-v2/) and use the [migration tool](https://github.com/traefik/traefik-migration-tool) to migrate from the older Traefik v1 to Traefik v2. +### Version-specific caveats -> The experimental embedded Dqlite data store was deprecated in K3s v1.19.1. Please note that upgrades from experimental Dqlite to experimental embedded etcd are not supported. If you attempt an upgrade it will not succeed and data will be lost. +- **Traefik:** If Traefik is not disabled, K3s versions 1.20 and earlier will install Traefik v1, while K3s versions 1.21 and later will install Traefik v2, if v1 is not already present. To upgrade from the older Traefik v1 to Traefik v2, please refer to the [Traefik documentation](https://doc.traefik.io/traefik/migration/v1-to-v2/) and use the [migration tool](https://github.com/traefik/traefik-migration-tool). 
+ +- **K3s bootstrap data:** If you are using K3s in an HA configuration with an external SQL datastore, and your server (control-plane) nodes were not started with the `--token` CLI flag, you will no longer be able to add additional K3s servers to the cluster without specifying the token. Ensure that you retain a copy of this token, as it is required when restoring from backup. Previously, K3s did not enforce the use of a token when using external SQL datastores. +    - The affected versions are v1.19.12+k3s1, v1.20.8+k3s1, and v1.21.2+k3s1 and earlier; the patched versions are v1.19.13+k3s1, v1.20.9+k3s1, and v1.21.3+k3s1. + +    - You may retrieve the token value from any server already joined to the cluster as follows: +``` +cat /var/lib/rancher/k3s/server/token +``` + +- **Experimental Dqlite:** The experimental embedded Dqlite data store was deprecated in K3s v1.19.1. Please note that upgrades from experimental Dqlite to experimental embedded etcd are not supported. If you attempt an upgrade, it will not succeed, and data will be lost. 
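With the token in hand, joining an additional server to an HA cluster with an external SQL datastore looks roughly like the following. This is a sketch only: the token value and the datastore endpoint are placeholders that must be replaced with your own, per the K3s server documentation for the `--token` and `--datastore-endpoint` flags.

```
# Placeholder values -- substitute your own token and datastore endpoint.
curl -sfL https://get.k3s.io | sh -s - server \
  --token "<token from /var/lib/rancher/k3s/server/token>" \
  --datastore-endpoint "mysql://username:password@tcp(hostname:3306)/k3s"
```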
diff --git a/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/_index.md index 98c8db4a6e8..b98a08e0085 100644 --- a/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.0-v2.4/en/installation/install-rancher-on-k8s/_index.md @@ -231,7 +231,7 @@ helm install rancher rancher-/rancher \ If you are using a Private CA signed certificate , add `--set privateCA=true` to the command: ``` -helm install rancher rancher-latest/rancher \ +helm install rancher rancher-/rancher \ --namespace cattle-system \ --set hostname=rancher.my.org \ --set ingress.tls.source=secret \ diff --git a/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md b/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md index 0567e110d02..8f4539e1c6f 100644 --- a/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md +++ b/content/rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md @@ -24,6 +24,7 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b "ec2:RunInstances", "ec2:RevokeSecurityGroupIngress", "ec2:RevokeSecurityGroupEgress", + "ec2:DescribeRegions", "ec2:DescribeVpcs", "ec2:DescribeTags", "ec2:DescribeSubnets", @@ -123,31 +124,6 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b ### Service Role Permissions -Rancher will create a service role with the following trust policy: - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Action": "sts:AssumeRole", - "Principal": { - "Service": "eks.amazonaws.com" - }, - "Effect": "Allow", - "Sid": "" - } - ] -} -``` - -This role will also have two role policy attachments with the following policies ARNs: - -``` -arn:aws:iam::aws:policy/AmazonEKSClusterPolicy 
-arn:aws:iam::aws:policy/AmazonEKSServicePolicy -``` - Permissions required for Rancher to create service role on users behalf during the EKS cluster creation process. ```json @@ -182,36 +158,66 @@ Permissions required for Rancher to create service role on users behalf during t } ``` +When an EKS cluster is created, Rancher will create a service role with the following trust policy: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Action": "sts:AssumeRole", + "Principal": { + "Service": "eks.amazonaws.com" + }, + "Effect": "Allow", + "Sid": "" + } + ] +} +``` + +This role will also have two role policy attachments with the following policies ARNs: + +``` +arn:aws:iam::aws:policy/AmazonEKSClusterPolicy +arn:aws:iam::aws:policy/AmazonEKSServicePolicy +``` + ### VPC Permissions Permissions required for Rancher to create VPC and associated resources. ```json { - "Sid": "VPCPermissions", - "Effect": "Allow", - "Action": [ - "ec2:ReplaceRoute", - "ec2:ModifyVpcAttribute", - "ec2:ModifySubnetAttribute", - "ec2:DisassociateRouteTable", - "ec2:DetachInternetGateway", - "ec2:DescribeVpcs", - "ec2:DeleteVpc", - "ec2:DeleteTags", - "ec2:DeleteSubnet", - "ec2:DeleteRouteTable", - "ec2:DeleteRoute", - "ec2:DeleteInternetGateway", - "ec2:CreateVpc", - "ec2:CreateSubnet", - "ec2:CreateSecurityGroup", - "ec2:CreateRouteTable", - "ec2:CreateRoute", - "ec2:CreateInternetGateway", - "ec2:AttachInternetGateway", - "ec2:AssociateRouteTable" - ], - "Resource": "*" + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "VPCPermissions", + "Effect": "Allow", + "Action": [ + "ec2:ReplaceRoute", + "ec2:ModifyVpcAttribute", + "ec2:ModifySubnetAttribute", + "ec2:DisassociateRouteTable", + "ec2:DetachInternetGateway", + "ec2:DescribeVpcs", + "ec2:DeleteVpc", + "ec2:DeleteTags", + "ec2:DeleteSubnet", + "ec2:DeleteRouteTable", + "ec2:DeleteRoute", + "ec2:DeleteInternetGateway", + "ec2:CreateVpc", + "ec2:CreateSubnet", + "ec2:CreateSecurityGroup", + "ec2:CreateRouteTable", + 
"ec2:CreateRoute", + "ec2:CreateInternetGateway", + "ec2:AttachInternetGateway", + "ec2:AssociateRouteTable" + ], + "Resource": "*" + } + ] } -``` \ No newline at end of file +``` diff --git a/content/rancher/v2.5/en/helm-charts/_index.md b/content/rancher/v2.5/en/helm-charts/_index.md index 9dc759f7e3b..b74682a0c62 100644 --- a/content/rancher/v2.5/en/helm-charts/_index.md +++ b/content/rancher/v2.5/en/helm-charts/_index.md @@ -50,6 +50,27 @@ From the left sidebar select _"Repositories"_. These items represent helm repositories, and can be either traditional helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository. +To add a private CA for Helm Chart repositories: + +- **HTTP-based chart repositories**: You must add a base64 encoded copy of the CA certificate in DER format to the `spec.caBundle` field of the chart repo. You can generate this value with `openssl x509 -outform der -in ca.pem | base64 -w0`. Click **Edit YAML** for the chart repo and set the field as in the following example:
+ ``` + [...] + spec: + caBundle: + MIIFXzCCA0egAwIBAgIUWNy8WrvSkgNzV0zdWRP79j9cVcEwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRQwEgYDVQQKDAtNeU9yZywgSW5jLjENMAsGA1UEAwwEcm9vdDAeFw0yMTEyMTQwODMyMTdaFw0yNDEwMDMwODMyMT + ... + nDxZ/tNXt/WPJr/PgEB3hQdInDWYMg7vGO0Oz00G5kWg0sJ0ZTSoA10ZwdjIdGEeKlj1NlPyAqpQ+uDnmx6DW+zqfYtLnc/g6GuLLVPamraqN+gyU8CHwAWPNjZonFN9Vpg0PIk1I2zuOc4EHifoTAXSpnjfzfyAxCaZsnTptimlPFJJqAMj+FfDArGmr4= + [...] + ``` + +- **Git-based chart repositories**: It is not currently possible to add a private CA. For git-based chart repositories with a certificate signed by a private CA, you must disable TLS verification. Click **Edit YAML** for the chart repo, and add the key/value pair as follows: + ``` + [...] + spec: + insecureSkipTLSVerify: true + [...] + ``` + > **Note:** Helm chart repositories with authentication > > As of Rancher v2.5.12, a new value `disableSameOriginCheck` has been added to the Repo.Spec. This allows users to bypass the same origin checks, sending the repository Authentication information as a Basic Auth Header with all API calls. This is not recommended but can be used as a temporary solution in cases of non-standard Helm chart repositories such as those that have redirects to a different origin URL. @@ -61,7 +82,7 @@ These items represent helm repositories, and can be either traditional helm endp spec: disableSameOriginCheck: true [...] 
-``` +``` ### Helm Compatibility diff --git a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md index 94497592e86..cd3d7cef001 100644 --- a/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.5/en/installation/install-rancher-on-k8s/_index.md @@ -245,7 +245,7 @@ helm install rancher rancher-/rancher \ If you are using a Private CA signed certificate, add `--set privateCA=true` to the command: ``` -helm install rancher rancher-latest/rancher \ +helm install rancher rancher-/rancher \ --namespace cattle-system \ --set hostname=rancher.my.org \ --set ingress.tls.source=secret \ diff --git a/content/rancher/v2.5/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md b/content/rancher/v2.5/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md index 79f3da27930..39ddfd2b5a0 100644 --- a/content/rancher/v2.5/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md +++ b/content/rancher/v2.5/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md @@ -26,6 +26,6 @@ For more information about how ServiceMonitors work, refer to the [Prometheus Op This pseudo-CRD maps to a section of the Prometheus custom resource configuration. It declaratively specifies how groups of pods should be monitored. -When a PodMonitor is created, the Prometheus Operator updates the Prometheus scrape configuration to include the PodMonitor configuration. Then Prometheus begins scraping metrics from the endpoint defined in the ServiceMonitor. +When a PodMonitor is created, the Prometheus Operator updates the Prometheus scrape configuration to include the PodMonitor configuration. Then Prometheus begins scraping metrics from the endpoint defined in the PodMonitor. 
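For illustration, a minimal PodMonitor manifest might look like the following; the name, namespace, label, and port are placeholders, while `selector` and `podMetricsEndpoints` are the fields described here:

```
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app        # placeholder name
  namespace: default
spec:
  selector:
    matchLabels:
      app: example-app     # pods carrying this label are scraped
  podMetricsEndpoints:
    - port: metrics        # named container port serving /metrics
      interval: 30s
```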
Any Pods in your cluster that match the labels located within the PodMonitor `selector` field will be monitored based on the `podMetricsEndpoints` specified on the PodMonitor. For more information on what fields can be specified, please look at the [spec](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#podmonitorspec) provided by Prometheus Operator. diff --git a/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md b/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md index 0567e110d02..8f4539e1c6f 100644 --- a/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md +++ b/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/eks/permissions/_index.md @@ -24,6 +24,7 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b "ec2:RunInstances", "ec2:RevokeSecurityGroupIngress", "ec2:RevokeSecurityGroupEgress", + "ec2:DescribeRegions", "ec2:DescribeVpcs", "ec2:DescribeTags", "ec2:DescribeSubnets", @@ -123,31 +124,6 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b ### Service Role Permissions -Rancher will create a service role with the following trust policy: - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Action": "sts:AssumeRole", - "Principal": { - "Service": "eks.amazonaws.com" - }, - "Effect": "Allow", - "Sid": "" - } - ] -} -``` - -This role will also have two role policy attachments with the following policies ARNs: - -``` -arn:aws:iam::aws:policy/AmazonEKSClusterPolicy -arn:aws:iam::aws:policy/AmazonEKSServicePolicy -``` - Permissions required for Rancher to create service role on users behalf during the EKS cluster creation process. 
```json @@ -182,36 +158,66 @@ Permissions required for Rancher to create service role on users behalf during t } ``` +When an EKS cluster is created, Rancher will create a service role with the following trust policy: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Action": "sts:AssumeRole", + "Principal": { + "Service": "eks.amazonaws.com" + }, + "Effect": "Allow", + "Sid": "" + } + ] +} +``` + +This role will also have two role policy attachments with the following policies ARNs: + +``` +arn:aws:iam::aws:policy/AmazonEKSClusterPolicy +arn:aws:iam::aws:policy/AmazonEKSServicePolicy +``` + ### VPC Permissions Permissions required for Rancher to create VPC and associated resources. ```json { - "Sid": "VPCPermissions", - "Effect": "Allow", - "Action": [ - "ec2:ReplaceRoute", - "ec2:ModifyVpcAttribute", - "ec2:ModifySubnetAttribute", - "ec2:DisassociateRouteTable", - "ec2:DetachInternetGateway", - "ec2:DescribeVpcs", - "ec2:DeleteVpc", - "ec2:DeleteTags", - "ec2:DeleteSubnet", - "ec2:DeleteRouteTable", - "ec2:DeleteRoute", - "ec2:DeleteInternetGateway", - "ec2:CreateVpc", - "ec2:CreateSubnet", - "ec2:CreateSecurityGroup", - "ec2:CreateRouteTable", - "ec2:CreateRoute", - "ec2:CreateInternetGateway", - "ec2:AttachInternetGateway", - "ec2:AssociateRouteTable" - ], - "Resource": "*" + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "VPCPermissions", + "Effect": "Allow", + "Action": [ + "ec2:ReplaceRoute", + "ec2:ModifyVpcAttribute", + "ec2:ModifySubnetAttribute", + "ec2:DisassociateRouteTable", + "ec2:DetachInternetGateway", + "ec2:DescribeVpcs", + "ec2:DeleteVpc", + "ec2:DeleteTags", + "ec2:DeleteSubnet", + "ec2:DeleteRouteTable", + "ec2:DeleteRoute", + "ec2:DeleteInternetGateway", + "ec2:CreateVpc", + "ec2:CreateSubnet", + "ec2:CreateSecurityGroup", + "ec2:CreateRouteTable", + "ec2:CreateRoute", + "ec2:CreateInternetGateway", + "ec2:AttachInternetGateway", + "ec2:AssociateRouteTable" + ], + "Resource": "*" + } + ] } -``` \ No newline at end of 
file +``` diff --git a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md index 498e63ad5f1..189c4d95a9b 100644 --- a/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md +++ b/content/rancher/v2.6/en/faq/networking/cni-providers/_index.md @@ -14,11 +14,11 @@ Kubernetes uses CNI as an interface between network providers and Kubernetes pod For more information visit [CNI GitHub project](https://github.com/containernetworking/cni). -### What Network Models are Used in CNI? +## What Network Models are Used in CNI? -CNI network providers implement their network fabric using either an encapsulated network model such as Virtual Extensible Lan ([VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan)) or an unencapsulated network model such as Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)). +CNI network providers implement their network fabric using either an encapsulated network model such as Virtual Extensible Lan ([VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan)) or an unencapsulated network model such as Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)). -#### What is an Encapsulated Network? +### What is an Encapsulated Network? This network model provides a logical Layer 2 (L2) network encapsulated over the existing Layer 3 (L3) network topology that spans the Kubernetes cluster nodes. With this model you have an isolated L2 network for containers without needing routing distribution, all at the cost of minimal overhead in terms of processing and increased IP package size, which comes from an IP header generated by overlay encapsulation. Encapsulation information is distributed by UDP ports between Kubernetes workers, interchanging network control plane information about how MAC addresses can be reached. 
Common encapsulation used in this kind of network model is VXLAN, Internet Protocol Security (IPSec), and IP-in-IP. @@ -26,11 +26,11 @@ In simple terms, this network model generates a kind of network bridge extended This network model is used when an extended L2 bridge is preferred. This network model is sensitive to L3 network latencies of the Kubernetes workers. If datacenters are in distinct geolocations, be sure to have low latencies between them to avoid eventual network segmentation. -CNI network providers using this network model include Flannel, Canal, and Weave. +CNI network providers using this network model include Flannel, Canal, Weave, and Cilium. By default, Calico does not use this model, but it can be configured to do so. ![Encapsulated Network]({{}}/img/rancher/encapsulated-network.png) -#### What is an Unencapsulated Network? +### What is an Unencapsulated Network? This network model provides an L3 network to route packets between containers. This model doesn't generate an isolated L2 network, nor generates overhead. These benefits come at the cost of Kubernetes workers having to manage any route distribution that's needed. Instead of using IP headers for encapsulation, this network model uses a network protocol between Kubernetes workers to distribute routing information to reach pods, such as [BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol). @@ -38,13 +38,17 @@ In simple terms, this network model generates a kind of network router extended This network model is used when a routed L3 network is preferred. This mode dynamically updates routes at the OS level for Kubernetes workers. It's less sensitive to latency. -CNI network providers using this network model include Calico and Romana. +CNI network providers using this network model include Calico and Cilium. Cilium can be configured to use this model, although it is not its default mode.
![Unencapsulated Network]({{}}/img/rancher/unencapsulated-network.png) -### What CNI Providers are Provided by Rancher? +## What CNI Providers are Provided by Rancher? -Out-of-the-box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico and Weave. You can choose your CNI network provider when you create new Kubernetes clusters from Rancher. +### RKE Kubernetes clusters + +Out-of-the-box, Rancher provides the following CNI network providers for RKE Kubernetes clusters: Canal, Flannel, and Weave. + +You can choose your CNI network provider when you create new Kubernetes clusters from Rancher. #### Canal @@ -64,33 +68,18 @@ For more information, see the [Canal GitHub Page.](https://github.com/projectcal ![Flannel Logo]({{}}/img/rancher/flannel-logo.png) -Flannel is a simple and easy way to configure L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms, with the default encapsulation being [VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan). +Flannel is a simple and easy way to configure L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). 
Packets are forwarded using one of several backend mechanisms, with the default encapsulation being [VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan). -Encapsulated traffic is unencrypted by default. Therefore, flannel provides an experimental backend for encryption, [IPSec](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#ipsec), which makes use of [strongSwan](https://www.strongswan.org/) to establish encrypted IPSec tunnels between Kubernetes workers. +Encapsulated traffic is unencrypted by default. Flannel provides two solutions for encryption: + +* [IPSec](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#ipsec), which makes use of [strongSwan](https://www.strongswan.org/) to establish encrypted IPSec tunnels between Kubernetes workers. It is an experimental backend for encryption. +* [WireGuard](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#wireguard), which is a faster-performing alternative to strongSwan. Kubernetes workers should open UDP port `8472` (VXLAN) and TCP port `9099` (healthcheck). See [the port requirements for user clusters]({{}}/rancher/v2.6/en/cluster-provisioning/node-requirements/#networking-requirements) for more details. ![Flannel Diagram]({{}}/img/rancher/flannel-diagram.png) -For more information, see the [Flannel GitHub Page](https://github.com/coreos/flannel). - -#### Calico - -![Calico Logo]({{}}/img/rancher/calico-logo.png) - -Calico enables networking and network policy in Kubernetes clusters across the cloud. Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads are able to communicate over both cloud infrastructure and on-prem using BGP. - -Calico also provides a stateless IP-in-IP encapsulation mode that can be used, if necessary. 
Calico also offers policy isolation, allowing you to secure and govern your Kubernetes workloads using advanced ingress and egress policies. - -Kubernetes workers should open TCP port `179` (BGP). See [the port requirements for user clusters]({{}}/rancher/v2.6/en/cluster-provisioning/node-requirements/#networking-requirements) for more details. - -![Calico Diagram]({{}}/img/rancher/calico-diagram.svg) - -For more information, see the following pages: - -- [Project Calico Official Site](https://www.projectcalico.org/) -- [Project Calico GitHub Page](https://github.com/projectcalico/calico) - +For more information, see the [Flannel GitHub Page](https://github.com/flannel-io/flannel). #### Weave @@ -104,16 +93,48 @@ For more information, see the following pages: - [Weave Net Official Site](https://www.weave.works/) -### CNI Features by Provider +### RKE2 Kubernetes clusters + +Out-of-the-box, Rancher provides the following CNI network providers for RKE2 Kubernetes clusters: [Canal](#canal) (see above section), Calico, and Cilium. + +You can choose your CNI network provider when you create new Kubernetes clusters from Rancher. + +#### Calico + +![Calico Logo]({{}}/img/rancher/calico-logo.png) + +Calico enables networking and network policy in Kubernetes clusters across the cloud. By default, Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads are able to communicate over both cloud infrastructure and on-prem using BGP. + +Calico also provides a stateless IP-in-IP or VXLAN encapsulation mode that can be used, if necessary. Calico also offers policy isolation, allowing you to secure and govern your Kubernetes workloads using advanced ingress and egress policies. + +Kubernetes workers should open TCP port `179` (BGP). See [the port requirements for user clusters]({{}}/rancher/v2.6/en/cluster-provisioning/node-requirements/#networking-requirements) for more details. 
+ +![Calico Diagram]({{}}/img/rancher/calico-diagram.svg) + +For more information, see the following pages: + +- [Project Calico Official Site](https://www.projectcalico.org/) +- [Project Calico GitHub Page](https://github.com/projectcalico/calico) + +#### Cilium + +![Cilium Logo]({{}}/img/rancher/cilium-logo.png) + +Cilium enables networking and network policies (L3, L4, and L7) in Kubernetes. By default, Cilium uses eBPF technologies to route packets inside the node and VXLAN to send packets to other nodes. Unencapsulated techniques can also be configured. + +Cilium recommends a kernel version of 5.2 or newer to leverage the full potential of eBPF. Kubernetes workers should open UDP port `8472` for VXLAN and TCP port `4240` for health checks. In addition, ICMP 8/0 must be enabled for health checks. For more information, check [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements). + +## CNI Features by Provider The following table summarizes the different features available for each CNI network provider provided by Rancher. 
| Provider | Network Model | Route Distribution | Network Policies | Mesh | External Datastore | Encryption | Ingress/Egress Policies | | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | -| Canal | Encapsulated (VXLAN) | No | Yes | No | K8S API | No | Yes | -| Flannel | Encapsulated (VXLAN) | No | No | No | K8S API | No | No | -| Calico | Encapsulated (VXLAN,IPIP) OR Unencapsulated | Yes | Yes | Yes | Etcd and K8S API | No | Yes | +| Canal | Encapsulated (VXLAN) | No | Yes | No | K8s API | Yes | Yes | +| Flannel | Encapsulated (VXLAN) | No | No | No | K8s API | Yes | No | +| Calico | Encapsulated (VXLAN,IPIP) OR Unencapsulated | Yes | Yes | Yes | Etcd and K8s API | Yes | Yes | | Weave | Encapsulated | Yes | Yes | Yes | No | Yes | Yes | +| Cilium | Encapsulated (VXLAN) | Yes | Yes | Yes | Etcd and K8s API | Yes | Yes | - Network Model: Encapsulated or unencapsulated. For more information, see [What Network Models are Used in CNI?](#what-network-models-are-used-in-cni) @@ -129,25 +150,27 @@ The following table summarizes the different features available for each CNI net - Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications. -#### CNI Community Popularity -The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2020. +## CNI Community Popularity + +The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2022. 
| Provider | Project | Stars | Forks | Contributors | | ---- | ---- | ---- | ---- | ---- | -| Canal | https://github.com/projectcalico/canal | 614 | 89 | 19 | -| flannel | https://github.com/coreos/flannel | 4977 | 1.4k | 140 | -| Calico | https://github.com/projectcalico/calico | 1534 | 429 | 135 | -| Weave | https://github.com/weaveworks/weave/ | 5737 | 559 | 73 | +| Canal | https://github.com/projectcalico/canal | 679 | 100 | 21 | +| Flannel | https://github.com/flannel-io/flannel | 7k | 2.5k | 185 | +| Calico | https://github.com/projectcalico/calico | 3.1k | 741 | 224 | +| Weave | https://github.com/weaveworks/weave/ | 6.2k | 635 | 84 | +| Cilium | https://github.com/cilium/cilium | 10.6k | 1.3k | 352 |
-### Which CNI Provider Should I Use? +## Which CNI Provider Should I Use? It depends on your project needs. There are many different providers, which each have various features and options. There isn't one provider that meets everyone's needs. Canal is the default CNI network provider. We recommend it for most use cases. It provides encapsulated networking for containers with Flannel, while adding Calico network policies that can provide project/namespace isolation in terms of networking. -### How can I configure a CNI network provider? +## How can I configure a CNI network provider? Please see [Cluster Options]({{}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/) on how to configure a network provider for your cluster. For more advanced configuration options, please see how to configure your cluster using a [Config File]({{}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/#cluster-config-file) and the options for [Network Plug-ins]({{}}/rke/latest/en/config-options/add-ons/network-plugins/). diff --git a/content/rancher/v2.6/en/helm-charts/_index.md b/content/rancher/v2.6/en/helm-charts/_index.md index bbc402a0683..11704efac48 100644 --- a/content/rancher/v2.6/en/helm-charts/_index.md +++ b/content/rancher/v2.6/en/helm-charts/_index.md @@ -5,6 +5,42 @@ weight: 11 In this section, you'll learn how to manage Helm chart repositories and applications in Rancher. Helm chart repositories are managed using **Apps & Marketplace**. It uses a catalog-like system to import bundles of charts from repositories and then uses those charts to either deploy custom Helm applications or Rancher's tools such as Monitoring or Istio. Rancher tools come as pre-loaded repositories which deploy as standalone Helm charts. Any additional repositories are only added to the current cluster. +### Changes in Rancher v2.6 + +Starting in Rancher v2.6.0, a new versioning scheme for Rancher feature charts was implemented. 
The changes are centered around the major version of the charts and the +up annotation for upstream charts, where applicable. + +**Major Version:** The major version of the charts is tied to Rancher minor versions. When you upgrade to a new Rancher minor version, you should ensure that all of your **Apps & Marketplace** charts are also upgraded to the correct release line for the chart. + +>**Note:** Any major versions that are less than the ones mentioned in the table below are meant for 2.5 and below only. For example, you are advised to not use <100.x.x versions of Monitoring in 2.6.x+. + +**Feature Charts:** + +| **Name** | **Supported Minimum Version** | **Supported Maximum Version** | +| ---------------- | ------------ | ------------ | +| external-ip-webhook | 100.0.0+up1.0.0 | 100.0.1+up1.0.1 | +| harvester-cloud-provider | 100.0.0+up0.1.8 | 100.0.0+up0.1.8 | +| harvester-csi-driver | 100.0.0+up0.1.9 | 100.0.0+up0.1.9 | +| rancher-alerting-drivers | 100.0.0 | 100.0.1 | +| rancher-backups | 2.0.0 | 2.1.0 | +| rancher-cis-benchmark | 2.0.0 | 2.0.2 | +| rancher-gatekeeper | 100.0.0+up3.5.1 | 100.0.1+up3.6.0 | +| rancher-istio | 100.0.0+up1.10.4 | 100.1.0+up1.11.4 | +| rancher-logging | 100.0.0+up3.12.0 | 100.0.1+up3.15.0 | +| rancher-longhorn | 100.0.0+up1.1.2 | 100.1.1+up1.2.3 | +| rancher-monitoring | 100.0.0+up16.6.0 | 100.1.0+up19.0.3 | +| rancher-sriov (experimental) | 100.0.0+up0.1.0 | 100.0.1+up0.1.0 | +| rancher-vsphere-cpi | 100.0.0 | 100.1.0+up1.0.100 | +| rancher-vsphere-csi | 100.0.0 | 100.1.0+up2.3.0 | +| rancher-wins-upgrader | 100.0.0+up0.0.1 | 100.0.0+up0.0.1 | + +
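The `+up` convention in the versions above can be split mechanically into the Rancher chart version and the tracked upstream version. A minimal sketch (the helper name is illustrative, not part of Rancher):

```python
def split_chart_version(version):
    """Split a feature-chart version such as '100.0.0+up16.6.0' into the
    Rancher chart version and the upstream version it tracks (or None)."""
    rancher, sep, upstream = version.partition("+up")
    return (rancher, upstream if sep else None)

print(split_chart_version("100.0.0+up16.6.0"))  # → ('100.0.0', '16.6.0')
print(split_chart_version("2.1.0"))             # → ('2.1.0', None)
```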
+**Charts based on upstream:** For charts that are based on upstreams, the +up annotation should inform you of what upstream version the Rancher chart is tracking. Also check the upstream version's compatibility with Rancher during upgrades. + +- As an example, `100.x.x+up16.6.0` for Monitoring tracks upstream kube-prometheus-stack `16.6.0` with some Rancher patches added to it. + +- On upgrades, ensure that you are not downgrading the version of the chart that you are using. For example, if you are using a version of Monitoring > `16.6.0` in Rancher 2.5, you should not upgrade to `100.x.x+up16.6.0`. Instead, you should upgrade to the appropriate version in the next release. + + ### Charts From the top-left menu select _"Apps & Marketplace"_ and you will be taken to the Charts page. @@ -25,6 +61,27 @@ From the left sidebar select _"Repositories"_. These items represent helm repositories, and can be either traditional helm endpoints which have an index.yaml, or git repositories which will be cloned and can point to a specific branch. In order to use custom charts, simply add your repository here and they will become available in the Charts tab under the name of the repository. +To add a private CA for Helm Chart repositories: + +- **HTTP-based chart repositories**: You must add a base64 encoded copy of the CA certificate in DER format to the `spec.caBundle` field of the chart repo. You can generate this value with `openssl x509 -outform der -in ca.pem | base64 -w0`. Click **Edit YAML** for the chart repo and set the field as in the following example:
+ ``` + [...] + spec: + caBundle: + MIIFXzCCA0egAwIBAgIUWNy8WrvSkgNzV0zdWRP79j9cVcEwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRQwEgYDVQQKDAtNeU9yZywgSW5jLjENMAsGA1UEAwwEcm9vdDAeFw0yMTEyMTQwODMyMTdaFw0yNDEwMDMwODMyMT + ... + nDxZ/tNXt/WPJr/PgEB3hQdInDWYMg7vGO0Oz00G5kWg0sJ0ZTSoA10ZwdjIdGEeKlj1NlPyAqpQ+uDnmx6DW+zqfYtLnc/g6GuLLVPamraqN+gyU8CHwAWPNjZonFN9Vpg0PIk1I2zuOc4EHifoTAXSpnjfzfyAxCaZsnTptimlPFJJqAMj+FfDArGmr4= + [...] + ``` + +- **Git-based chart repositories**: It is not currently possible to add a private CA. For git-based chart repositories with a certificate signed by a private CA, you must disable TLS verification. Click **Edit YAML** for the chart repo, and add the key/value pair as follows: + ``` + [...] + spec: + insecureSkipTLSVerify: true + [...] + ``` + > **Note:** Helm chart repositories with authentication > > As of Rancher v2.6.3, a new value `disableSameOriginCheck` has been added to the Repo.Spec. This allows users to bypass the same origin checks, sending the repository Authentication information as a Basic Auth Header with all API calls. This is not recommended but can be used as a temporary solution in cases of non-standard Helm chart repositories such as those that have redirects to a different origin URL. 
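The `caBundle` value shown above is simply the DER form of the CA certificate, base64-encoded onto a single line. As a sketch, the snippet below generates a throwaway self-signed certificate purely for illustration (your real CA's PEM file path will differ) and produces the value to paste into the repo's **Edit YAML** view:

```shell
# Sketch: produce a spec.caBundle value as described above.
# A throwaway self-signed CA is generated here purely for illustration;
# in practice, point openssl at your real CA certificate (PEM format).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-private-ca" \
  -keyout /tmp/demo-ca-key.pem -out /tmp/demo-ca.pem 2>/dev/null

# Convert PEM -> DER, then base64-encode without line wrapping (-w0).
ca_bundle="$(openssl x509 -outform der -in /tmp/demo-ca.pem | base64 -w0)"

# This single-line value is what goes into the chart repo's spec.caBundle field.
echo "${ca_bundle}"
```

Note that `base64 -w0` is GNU coreutils syntax; on macOS, plain `base64` already emits an unwrapped line.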
diff --git a/content/rancher/v2.6/en/installation/install-rancher-on-k8s/_index.md b/content/rancher/v2.6/en/installation/install-rancher-on-k8s/_index.md index 11b5f6c2def..5f9e50cc198 100644 --- a/content/rancher/v2.6/en/installation/install-rancher-on-k8s/_index.md +++ b/content/rancher/v2.6/en/installation/install-rancher-on-k8s/_index.md @@ -234,7 +234,7 @@ helm install rancher rancher-/rancher \ If you are using a Private CA signed certificate , add `--set privateCA=true` to the command: ``` -helm install rancher rancher-latest/rancher \ +helm install rancher rancher-/rancher \ --namespace cattle-system \ --set hostname=rancher.my.org \ --set bootstrapPassword=admin \ diff --git a/content/rancher/v2.6/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md b/content/rancher/v2.6/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md index be8c09f7792..f6804adb2ea 100644 --- a/content/rancher/v2.6/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md +++ b/content/rancher/v2.6/en/k8s-in-rancher/workloads/upgrade-workloads/_index.md @@ -16,6 +16,6 @@ When a new version of an application image is released on Docker Hub, you can up These options control how the upgrade rolls out to containers that are currently running. For example, for scalable deployments, you can choose whether you want to stop old pods before deploying new ones, or vice versa, as well as the upgrade batch size. -1. Click **Upgrade**. +1. Click **Save**. **Result:** The workload begins upgrading its containers, per your specifications. Note that scaling up the deployment or updating the upgrade/scaling policy won't result in the pods recreation. 
diff --git a/content/rancher/v2.6/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md b/content/rancher/v2.6/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md index 20dc901da31..d2f848b6e67 100644 --- a/content/rancher/v2.6/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md +++ b/content/rancher/v2.6/en/monitoring-alerting/configuration/servicemonitor-podmonitor/_index.md @@ -26,6 +26,6 @@ For more information about how ServiceMonitors work, refer to the [Prometheus Op This pseudo-CRD maps to a section of the Prometheus custom resource configuration. It declaratively specifies how group of pods should be monitored. -When a PodMonitor is created, the Prometheus Operator updates the Prometheus scrape configuration to include the PodMonitor configuration. Then Prometheus begins scraping metrics from the endpoint defined in the ServiceMonitor. +When a PodMonitor is created, the Prometheus Operator updates the Prometheus scrape configuration to include the PodMonitor configuration. Then Prometheus begins scraping metrics from the endpoint defined in the PodMonitor. Any Pods in your cluster that match the labels located within the PodMonitor `selector` field will be monitored based on the `podMetricsEndpoints` specified on the PodMonitor. For more information on what fields can be specified, please look at the [spec](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#podmonitorspec) provided by Prometheus Operator. diff --git a/content/rancher/v2.6/en/security/_index.md b/content/rancher/v2.6/en/security/_index.md index 871203d2102..1f121f11ed2 100644 --- a/content/rancher/v2.6/en/security/_index.md +++ b/content/rancher/v2.6/en/security/_index.md @@ -11,7 +11,7 @@ weight: 20

Reporting process

-

Please submit possible security issues by emailing security@rancher.com

+

Please submit possible security issues by emailing security@rancher.com.

Announcements

@@ -20,25 +20,25 @@ weight: 20 -Security is at the heart of all Rancher features. From integrating with all the popular authentication tools and services, to an enterprise grade [RBAC capability,]({{}}/rancher/v2.6/en/admin-settings/rbac) Rancher makes your Kubernetes clusters even more secure. +Security is at the heart of all Rancher features. From integrating with all the popular authentication tools and services, to an enterprise-grade [RBAC capability]({{}}/rancher/v2.6/en/admin-settings/rbac), Rancher makes your Kubernetes clusters even more secure. -On this page, we provide security-related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters: +On this page, we provide security-related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters: - [Running a CIS security scan on a Kubernetes cluster](#running-a-cis-security-scan-on-a-kubernetes-cluster) - [SELinux RPM](#selinux-rpm) - [Guide to hardening Rancher installations](#rancher-hardening-guide) - [The CIS Benchmark and self-assessment](#the-cis-benchmark-and-self-assessment) - [Third-party penetration test reports](#third-party-penetration-test-reports) -- [Rancher CVEs and resolutions](#rancher-cves-and-resolutions) +- [Rancher Security Advisories and CVEs](#rancher-security-advisories-and-cves) - [Kubernetes Security Best Practices](#kubernetes-security-best-practices) ### Running a CIS Security Scan on a Kubernetes Cluster -Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS (Center for Internet Security) Kubernetes Benchmark.
+Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark. The CIS Kubernetes Benchmark is a reference document that can be used to establish a secure configuration baseline for Kubernetes. -The Center for Internet Security (CIS) is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace." +The Center for Internet Security (CIS) is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace". CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team. @@ -46,13 +46,13 @@ The Benchmark provides recommendations of two types: Automated and Manual. We ru When Rancher runs a CIS security scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests. -For details, refer to the section on [security scans.]({{}}/rancher/v2.6/en/cis-scans) +For details, refer to the section on [security scans]({{}}/rancher/v2.6/en/cis-scans). ### SELinux RPM [Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. 
After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8. -We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page.]({{}}/rancher/v2.6/en/security/selinux) +We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page]({{}}/rancher/v2.6/en/security/selinux). ### Rancher Hardening Guide @@ -78,13 +78,13 @@ Rancher periodically hires third parties to perform security audits and penetrat Results: -- [Cure53 Pen Test - 7/2019](https://releases.rancher.com/documents/security/pen-tests/2019/RAN-01-cure53-report.final.pdf) -- [Untamed Theory Pen Test- 3/2019](https://releases.rancher.com/documents/security/pen-tests/2019/UntamedTheory-Rancher_SecurityAssessment-20190712_v5.pdf) +- [Cure53 Pen Test - July 2019](https://releases.rancher.com/documents/security/pen-tests/2019/RAN-01-cure53-report.final.pdf) +- [Untamed Theory Pen Test - March 2019](https://releases.rancher.com/documents/security/pen-tests/2019/UntamedTheory-Rancher_SecurityAssessment-20190712_v5.pdf) -### Rancher CVEs and Resolutions +### Rancher Security Advisories and CVEs Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](./cve) ### Kubernetes Security Best Practices -For recommendations on securing your Kubernetes cluster, refer to the [Best Practices](./best-practices) guide. +For recommendations on securing your Kubernetes cluster, refer to the [Kubernetes Security Best Practices](./best-practices) guide.
diff --git a/content/rancher/v2.6/en/security/best-practices/_index.md b/content/rancher/v2.6/en/security/best-practices/_index.md index 1b207551e35..4dc70b3d510 100644 --- a/content/rancher/v2.6/en/security/best-practices/_index.md +++ b/content/rancher/v2.6/en/security/best-practices/_index.md @@ -3,6 +3,10 @@ title: Kubernetes Security Best Practices weight: 5 --- -# Restricting cloud metadata API access +### Restricting cloud metadata API access -Cloud providers such as AWS, Azure, or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets. +Cloud providers such as AWS, Azure, DigitalOcean or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS, DigitalOcean Kubernetes or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets. 
+ +It is advised to consult your cloud provider's security best practices for further recommendations and specific details on how to restrict access to the cloud instance metadata API. + +Further reference: the MITRE ATT&CK knowledge base entry on [Unsecured Credentials: Cloud Instance Metadata API](https://attack.mitre.org/techniques/T1552/005/). diff --git a/content/rancher/v2.6/en/security/cve/_index.md b/content/rancher/v2.6/en/security/cve/_index.md index b97cf1a59c5..174acb15bf0 100644 --- a/content/rancher/v2.6/en/security/cve/_index.md +++ b/content/rancher/v2.6/en/security/cve/_index.md @@ -1,9 +1,9 @@ --- -title: Rancher CVEs and Resolutions +title: Security Advisories and CVEs weight: 300 --- -Rancher is committed to informing the community of security issues in our products. Rancher will publish CVEs (Common Vulnerabilities and Exposures) for issues we have resolved. +Rancher is committed to informing the community of security issues in our products. Rancher will publish security advisories and CVEs (Common Vulnerabilities and Exposures) for issues we have resolved. New security advisories are also published on Rancher's GitHub [security page](https://github.com/rancher/rancher/security/advisories). | ID | Description | Date | Resolution | |----|-------------|------|------------| @@ -18,4 +18,4 @@ Rancher is committed to informing the community of security issues in our produc | [CVE-2019-12274](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12274) | Nodes using the built-in node drivers using a file path option allows the machine to read arbitrary files including sensitive ones from inside the Rancher server container.
| 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) | | [CVE-2019-11202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11202) | The default admin, that is shipped with Rancher, will be re-created upon restart of Rancher despite being explicitly deleted. | 16 Apr 2019 | [Rancher v2.2.2](https://github.com/rancher/rancher/releases/tag/v2.2.2), [Rancher v2.1.9](https://github.com/rancher/rancher/releases/tag/v2.1.9) and [Rancher v2.0.14](https://github.com/rancher/rancher/releases/tag/v2.0.14) | | [CVE-2019-6287](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6287) | Project members continue to get access to namespaces from projects that they were removed from if they were added to more than one project. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) | -| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater have specific [instructions]({{}}/rancher/v2.6/en/installation/install-rancher-on-k8s/rollbacks). 
| \ No newline at end of file +| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater has specific [instructions]({{}}/rancher/v2.6/en/installation/install-rancher-on-k8s/rollbacks). | diff --git a/content/rancher/v2.6/en/security/hardening-guides/_index.md b/content/rancher/v2.6/en/security/hardening-guides/_index.md new file mode 100644 index 00000000000..a3635419be5 --- /dev/null +++ b/content/rancher/v2.6/en/security/hardening-guides/_index.md @@ -0,0 +1,13 @@ +--- +title: Self-Assessment and Hardening Guides for Rancher v2.6 +shortTitle: Rancher v2.6 Guides +weight: 1 +aliases: + - /rancher/v2.6/en/security/rancher-2.5/ + - /rancher/v2.6/en/security/rancher-2.5/1.5-hardening-2.5/ + - /rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/ + - /rancher/v2.6/en/security/rancher-2.5/1.6-hardening-2.5/ + - /rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/ +--- + +Rancher v2.6 hardening guides are currently being updated. For the time being, please consult the [Rancher v2.5 self-assessment and hardening guides]({{}}/rancher/v2.5/en/security/rancher-2.5) for more information.
diff --git a/content/rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/_index.md b/content/rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/_index.md deleted file mode 100644 index 463446b78a0..00000000000 --- a/content/rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/_index.md +++ /dev/null @@ -1,2265 +0,0 @@ ---- -title: CIS 1.5 Benchmark - Self-Assessment Guide - Rancher v2.5 -weight: 201 ---- - -### CIS v1.5 Kubernetes Benchmark - Rancher v2.5 with Kubernetes v1.15 - -[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.5/Rancher_1.5_Benchmark_Assessment.pdf) - -#### Overview - -This document is a companion to the Rancher v2.5 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark. - -This guide corresponds to specific versions of the hardening guide, Rancher, CIS Benchmark, and Kubernetes: - -Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version ----------------------------|----------|---------|------- -Hardening Guide with CIS 1.5 Benchmark | Rancher v2.5 | CIS v1.5| Kubernetes v1.15 - -Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply and will have a result of `Not Applicable`. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher-created clusters. - -This document is to be used by Rancher operators, security teams, auditors and decision makers. - -For more detail about each audit, including rationales and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark v1.5. 
You can download the benchmark after logging in to [CISecurity.org]( https://www.cisecurity.org/benchmark/kubernetes/). - -#### Testing controls methodology - -Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files. - -Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher Labs are provided for testing. -When performing the tests, you will need access to the Docker command line on the hosts of all three RKE roles. The commands also make use of the the [jq](https://stedolan.github.io/jq/) and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) (with valid config) tools to and are required in the testing and evaluation of test results. - -> NOTE: only scored tests are covered in this guide. - -### Controls - ---- -## 1 Master Node Security Configuration -### 1.1 Master Node Configuration Files - -#### 1.1.1 Ensure that the API server pod specification file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the API server. All configuration is passed in as arguments at container run time. - -#### 1.1.2 Ensure that the API server pod specification file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the API server. All configuration is passed in as arguments at container run time. - -#### 1.1.3 Ensure that the controller manager pod specification file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. 
- -#### 1.1.4 Ensure that the controller manager pod specification file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. - -#### 1.1.5 Ensure that the scheduler pod specification file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. - -#### 1.1.6 Ensure that the scheduler pod specification file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. - -#### 1.1.7 Ensure that the etcd pod specification file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. - -#### 1.1.8 Ensure that the etcd pod specification file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for etcd. All configuration is passed in as arguments at container run time. - -#### 1.1.11 Ensure that the etcd data directory permissions are set to `700` or more restrictive (Scored) - -**Result:** PASS - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument `--data-dir`, -from the below command: - -``` bash -ps -ef | grep etcd -``` - -Run the below command (based on the etcd data directory found above). 
For example, - -``` bash -chmod 700 /var/lib/etcd -``` - -**Audit Script:** 1.1.11.sh - -``` -#!/bin/bash -e - -etcd_bin=${1} - -test_dir=$(ps -ef | grep ${etcd_bin} | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%') - -docker inspect etcd | jq -r '.[].HostConfig.Binds[]' | grep "${test_dir}" | cut -d ":" -f 1 | xargs stat -c %a -``` - -**Audit Execution:** - -``` -./1.1.11.sh etcd -``` - -**Expected result**: - -``` -'700' is equal to '700' -``` - -#### 1.1.12 Ensure that the etcd data directory ownership is set to `etcd:etcd` (Scored) - -**Result:** PASS - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument `--data-dir`, -from the below command: - -``` bash -ps -ef | grep etcd -``` - -Run the below command (based on the etcd data directory found above). -For example, -``` bash -chown etcd:etcd /var/lib/etcd -``` - -**Audit Script:** 1.1.12.sh - -``` -#!/bin/bash -e - -etcd_bin=${1} - -test_dir=$(ps -ef | grep ${etcd_bin} | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%') - -docker inspect etcd | jq -r '.[].HostConfig.Binds[]' | grep "${test_dir}" | cut -d ":" -f 1 | xargs stat -c %U:%G -``` - -**Audit Execution:** - -``` -./1.1.12.sh etcd -``` - -**Expected result**: - -``` -'etcd:etcd' is present -``` - -#### 1.1.13 Ensure that the `admin.conf` file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE does not store the kubernetes default kubeconfig credentials file on the nodes. It’s presented to user where RKE is run. -We recommend that this `kube_config_cluster.yml` file be kept in secure store. - -#### 1.1.14 Ensure that the admin.conf file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE does not store the kubernetes default kubeconfig credentials file on the nodes. It’s presented to user where RKE is run. -We recommend that this `kube_config_cluster.yml` file be kept in secure store. 
- -#### 1.1.15 Ensure that the `scheduler.conf` file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. - -#### 1.1.16 Ensure that the `scheduler.conf` file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the scheduler. All configuration is passed in as arguments at container run time. - -#### 1.1.17 Ensure that the `controller-manager.conf` file permissions are set to `644` or more restrictive (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. - -#### 1.1.18 Ensure that the `controller-manager.conf` file ownership is set to `root:root` (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the controller manager. All configuration is passed in as arguments at container run time. - -#### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to `root:root` (Scored) - -**Result:** PASS - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, - -``` bash -chown -R root:root /etc/kubernetes/ssl -``` - -**Audit:** - -``` -stat -c %U:%G /etc/kubernetes/ssl -``` - -**Expected result**: - -``` -'root:root' is present -``` - -#### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to `644` or more restrictive (Scored) - -**Result:** PASS - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. 
-For example, - -``` bash -chmod -R 644 /etc/kubernetes/ssl -``` - -**Audit Script:** check_files_permissions.sh - -``` -#!/usr/bin/env bash - -# This script is used to ensure the file permissions are set to 644 or -# more restrictive for all files in a given directory or a wildcard -# selection of files -# -# inputs: -# $1 = /full/path/to/directory or /path/to/fileswithpattern -# ex: !(*key).pem -# -# $2 (optional) = permission (ex: 600) -# -# outputs: -# true/false - -# Turn on "extended glob" for use of '!' in wildcard -shopt -s extglob - -# Turn off history to avoid surprises when using '!' -set -H - -USER_INPUT=$1 - -if [[ "${USER_INPUT}" == "" ]]; then - echo "false" - exit -fi - - -if [[ -d ${USER_INPUT} ]]; then - PATTERN="${USER_INPUT}/*" -else - PATTERN="${USER_INPUT}" -fi - -PERMISSION="" -if [[ "$2" != "" ]]; then - PERMISSION=$2 -fi - -FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN}) - -while read -r fileInfo; do - p=$(echo ${fileInfo} | cut -d' ' -f2) - - if [[ "${PERMISSION}" != "" ]]; then - if [[ "$p" != "${PERMISSION}" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then - echo "false" - exit - fi - fi -done <<< "${FILES_PERMISSIONS}" - - -echo "true" -exit -``` - -**Audit Execution:** - -``` -./check_files_permissions.sh '/etc/kubernetes/ssl/*.pem' -``` - -**Expected result**: - -``` -'true' is present -``` - -#### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to `600` (Scored) - -**Result:** PASS - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. 
-For example, - -``` bash -chmod -R 600 /etc/kubernetes/ssl/certs/serverca -``` - -**Audit Script:** 1.1.21.sh - -``` -#!/bin/bash -e -check_dir=${1:-/etc/kubernetes/ssl} - -for file in $(find ${check_dir} -name "*key.pem"); do - file_permission=$(stat -c %a ${file}) - if [[ "${file_permission}" == "600" ]]; then - continue - else - echo "FAIL: ${file} ${file_permission}" - exit 1 - fi -done - -echo "pass" -``` - -**Audit Execution:** - -``` -./1.1.21.sh /etc/kubernetes/ssl -``` - -**Expected result**: - -``` -'pass' is present -``` - -### 1.2 API Server - -#### 1.2.2 Ensure that the `--basic-auth-file` argument is not set (Scored) - -**Result:** PASS - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and remove the `--basic-auth-file=` parameter. - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--basic-auth-file' is not present -``` - -#### 1.2.3 Ensure that the `--token-auth-file` parameter is not set (Scored) - -**Result:** PASS - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and remove the `--token-auth-file=` parameter. - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--token-auth-file' is not present -``` - -#### 1.2.4 Ensure that the `--kubelet-https` argument is set to true (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and remove the `--kubelet-https` parameter. 
- -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--kubelet-https' is present OR '--kubelet-https' is not present -``` - -#### 1.2.5 Ensure that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments are set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the -apiserver and kubelets. Then, edit API server pod specification file -`/etc/kubernetes/manifests/kube-apiserver.yaml` on the master node and set the -kubelet client certificate and key parameters as below. - -``` bash ---kubelet-client-certificate= ---kubelet-client-key= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present -``` - -#### 1.2.6 Ensure that the `--kubelet-certificate-authority` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and setup the TLS connection between -the apiserver and kubelets. Then, edit the API server pod specification file -`/etc/kubernetes/manifests/kube-apiserver.yaml` on the master node and set the -`--kubelet-certificate-authority` parameter to the path to the cert file for the certificate authority. -`--kubelet-certificate-authority=` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--kubelet-certificate-authority' is present -``` - -#### 1.2.7 Ensure that the `--authorization-mode` argument is not set to `AlwaysAllow` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--authorization-mode` parameter to values other than `AlwaysAllow`. -One such example could be as below. 
- -``` bash ---authorization-mode=RBAC -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'Node,RBAC' not have 'AlwaysAllow' -``` - -#### 1.2.8 Ensure that the `--authorization-mode` argument includes `Node` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--authorization-mode` parameter to a value that includes `Node`. - -``` bash ---authorization-mode=Node,RBAC -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'Node,RBAC' has 'Node' -``` - -#### 1.2.9 Ensure that the `--authorization-mode` argument includes `RBAC` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--authorization-mode` parameter to a value that includes RBAC, -for example: - -``` bash ---authorization-mode=Node,RBAC -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'Node,RBAC' has 'RBAC' -``` - -#### 1.2.11 Ensure that the admission control plugin `AlwaysAdmit` is not set (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and either remove the `--enable-admission-plugins` parameter, or set it to a -value that does not include `AlwaysAdmit`. 
- -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present -``` - -#### 1.2.14 Ensure that the admission control plugin `ServiceAccount` is set (Scored) - -**Result:** PASS - -**Remediation:** -Follow the documentation and create ServiceAccount objects as per your environment. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and ensure that the `--disable-admission-plugins` parameter is set to a -value that does not include `ServiceAccount`. - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'ServiceAccount' OR '--enable-admission-plugins' is not present -``` - -#### 1.2.15 Ensure that the admission control plugin `NamespaceLifecycle` is set (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--disable-admission-plugins` parameter to -ensure it does not include `NamespaceLifecycle`. 
- -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--disable-admission-plugins' is present OR '--disable-admission-plugins' is not present -``` - -#### 1.2.16 Ensure that the admission control plugin `PodSecurityPolicy` is set (Scored) - -**Result:** PASS - -**Remediation:** -Follow the documentation and create Pod Security Policy objects as per your environment. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--enable-admission-plugins` parameter to a -value that includes `PodSecurityPolicy`: - -``` bash ---enable-admission-plugins=...,PodSecurityPolicy,... -``` - -Then restart the API Server. - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'PodSecurityPolicy' -``` - -#### 1.2.17 Ensure that the admission control plugin `NodeRestriction` is set (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and configure `NodeRestriction` plug-in on kubelets. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--enable-admission-plugins` parameter to a -value that includes `NodeRestriction`. - -``` bash ---enable-admission-plugins=...,NodeRestriction,... 
-``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'NodeRestriction' -``` - -#### 1.2.18 Ensure that the `--insecure-bind-address` argument is not set (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and remove the `--insecure-bind-address` parameter. - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--insecure-bind-address' is not present -``` - -#### 1.2.19 Ensure that the `--insecure-port` argument is set to `0` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the below parameter. - -``` bash ---insecure-port=0 -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'0' is equal to '0' -``` - -#### 1.2.20 Ensure that the `--secure-port` argument is not set to `0` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and either remove the `--secure-port` parameter or -set it to a different **(non-zero)** desired port. 
- -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -6443 is greater than 0 OR '--secure-port' is not present -``` - -#### 1.2.21 Ensure that the `--profiling` argument is set to `false` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the below parameter. - -``` bash ---profiling=false -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'false' is equal to 'false' -``` - -#### 1.2.22 Ensure that the `--audit-log-path` argument is set (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--audit-log-path` parameter to a suitable path and -file where you would like audit logs to be written, for example: - -``` bash ---audit-log-path=/var/log/apiserver/audit.log -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--audit-log-path' is present -``` - -#### 1.2.23 Ensure that the `--audit-log-maxage` argument is set to `30` or as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--audit-log-maxage` parameter to `30` or as an appropriate number of days: - -``` bash ---audit-log-maxage=30 -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -30 is greater or equal to 30 -``` - -#### 1.2.24 Ensure that the `--audit-log-maxbackup` argument is set to `10` or as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--audit-log-maxbackup` 
parameter to `10` or to an appropriate -value. - -``` bash ---audit-log-maxbackup=10 -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -10 is greater or equal to 10 -``` - -#### 1.2.25 Ensure that the `--audit-log-maxsize` argument is set to `100` or as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--audit-log-maxsize` parameter to an appropriate size in **MB**. -For example, to set it as `100` **MB**: - -``` bash ---audit-log-maxsize=100 -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -100 is greater or equal to 100 -``` - -#### 1.2.26 Ensure that the `--request-timeout` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -and set the below parameter as appropriate and if needed. -For example, - -``` bash ---request-timeout=300s -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--request-timeout' is not present OR '--request-timeout' is present -``` - -#### 1.2.27 Ensure that the `--service-account-lookup` argument is set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the below parameter. - -``` bash ---service-account-lookup=true -``` - -Alternatively, you can delete the `--service-account-lookup` parameter from this file so -that the default takes effect. 
- -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--service-account-lookup' is not present OR 'true' is equal to 'true' -``` - -#### 1.2.28 Ensure that the `--service-account-key-file` argument is set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the `--service-account-key-file` parameter -to the public key file for service accounts: - -``` bash ---service-account-key-file= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--service-account-key-file' is present -``` - -#### 1.2.29 Ensure that the `--etcd-certfile` and `--etcd-keyfile` arguments are set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the **etcd** certificate and **key** file parameters. - -``` bash ---etcd-certfile= ---etcd-keyfile= -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected result**: - -``` -'--etcd-certfile' is present AND '--etcd-keyfile' is present -``` - -#### 1.2.30 Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection on the apiserver. -Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml` -on the master node and set the TLS certificate and private key file parameters. 
-
-``` bash
---tls-cert-file=
---tls-private-key-file=
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected result**:
-
-```
-'--tls-cert-file' is present AND '--tls-private-key-file' is present
-```
-
-#### 1.2.31 Ensure that the `--client-ca-file` argument is set as appropriate (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml`
-on the master node and set the client certificate authority file.
-
-``` bash
---client-ca-file=
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected result**:
-
-```
-'--client-ca-file' is present
-```
-
-#### 1.2.32 Ensure that the `--etcd-cafile` argument is set as appropriate (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml`
-on the master node and set the etcd certificate authority file parameter.
-
-``` bash
---etcd-cafile=
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected result**:
-
-```
-'--etcd-cafile' is present
-```
-
-#### 1.2.33 Ensure that the `--encryption-provider-config` argument is set as appropriate (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an `EncryptionConfig` file.
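For orientation, an `EncryptionConfig` file has roughly the following shape. This is an illustrative sketch only, not the configuration RKE generates; the key name and the use of `aescbc` with an `identity` fallback are assumptions, and the secret placeholder must be replaced with your own key.

``` yaml
apiVersion: v1
kind: EncryptionConfig
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1                          # hypothetical key name
              secret: <base64-encoded 32-byte key> # supply your own key
      - identity: {}                               # fallback for reading unencrypted data
```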
-Then, edit the API server pod specification file `/etc/kubernetes/manifests/kube-apiserver.yaml`
-on the master node and set the `--encryption-provider-config` parameter to the path of that file:
-
-``` bash
---encryption-provider-config=
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected result**:
-
-```
-'--encryption-provider-config' is present
-```
-
-#### 1.2.34 Ensure that encryption providers are appropriately configured (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an `EncryptionConfig` file.
-In this file, choose **aescbc**, **kms** or **secretbox** as the encryption provider.
-
-**Audit Script:** 1.2.34.sh
-
-```
-#!/bin/bash -e
-
-check_file=${1}
-
-grep -q -E 'aescbc|kms|secretbox' ${check_file}
-if [ $? -eq 0 ]; then
-  echo "--pass"
-  exit 0
-else
-  echo "fail: no encryption provider found in ${check_file}"
-  exit 1
-fi
-```
-
-**Audit Execution:**
-
-```
-./1.2.34.sh /etc/kubernetes/ssl/encryption.yaml
-```
-
-**Expected result**:
-
-```
-'--pass' is present
-```
-
-### 1.3 Controller Manager
-
-#### 1.3.1 Ensure that the `--terminated-pod-gc-threshold` argument is set as appropriate (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml`
-on the master node and set the `--terminated-pod-gc-threshold` to an appropriate threshold,
-for example:
-
-``` bash
---terminated-pod-gc-threshold=10
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected result**:
-
-```
-'--terminated-pod-gc-threshold' is present
-```
-
-#### 1.3.2 Ensure that the `--profiling` argument is set to `false` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml`
-on the master node and set the below parameter.
-
-``` bash
---profiling=false
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected result**:
-
-```
-'false' is equal to 'false'
-```
-
-#### 1.3.3 Ensure that the `--use-service-account-credentials` argument is set to `true` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml`
-on the master node to set the below parameter.
-
-``` bash
---use-service-account-credentials=true
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected result**:
-
-```
-'true' is not equal to 'false'
-```
-
-#### 1.3.4 Ensure that the `--service-account-private-key-file` argument is set as appropriate (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml`
-on the master node and set the `--service-account-private-key-file` parameter
-to the private key file for service accounts.
-
-``` bash
---service-account-private-key-file=
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected result**:
-
-```
-'--service-account-private-key-file' is present
-```
-
-#### 1.3.5 Ensure that the `--root-ca-file` argument is set as appropriate (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml`
-on the master node and set the `--root-ca-file` parameter to the certificate bundle file.
-
-``` bash
---root-ca-file=
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected result**:
-
-```
-'--root-ca-file' is present
-```
-
-#### 1.3.6 Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml`
-on the master node and set the `--feature-gates` parameter to include `RotateKubeletServerCertificate=true`.
-
-``` bash
---feature-gates=RotateKubeletServerCertificate=true
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected result**:
-
-```
-'RotateKubeletServerCertificate=true' is equal to 'RotateKubeletServerCertificate=true'
-```
-
-#### 1.3.7 Ensure that the `--bind-address` argument is set to `127.0.0.1` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Edit the Controller Manager pod specification file `/etc/kubernetes/manifests/kube-controller-manager.yaml`
-on the master node and ensure the correct value for the `--bind-address` parameter.
-
-**Audit:**
-
-```
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected result**:
-
-```
-'--bind-address' is present OR '--bind-address' is not present
-```
-
-### 1.4 Scheduler
-
-#### 1.4.1 Ensure that the `--profiling` argument is set to `false` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Edit the Scheduler pod specification file `/etc/kubernetes/manifests/kube-scheduler.yaml`
-on the master node and set the below parameter.
- -``` bash ---profiling=false -``` - -**Audit:** - -``` -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected result**: - -``` -'false' is equal to 'false' -``` - -#### 1.4.2 Ensure that the `--bind-address` argument is set to `127.0.0.1` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the Scheduler pod specification file `/etc/kubernetes/manifests/kube-scheduler.yaml` -on the master node and ensure the correct value for the `--bind-address` parameter. - -**Audit:** - -``` -/bin/ps -ef | grep kube-scheduler | grep -v grep -``` - -**Expected result**: - -``` -'--bind-address' is present OR '--bind-address' is not present -``` - -## 2 Etcd Node Configuration -### 2 Etcd Node Configuration Files - -#### 2.1 Ensure that the `--cert-file` and `--key-file` arguments are set as appropriate (Scored) - -**Result:** PASS - -**Remediation:** -Follow the etcd service documentation and configure TLS encryption. -Then, edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` -on the master node and set the below parameters. - -``` bash ---cert-file= ---key-file= -``` - -**Audit:** - -``` -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected result**: - -``` -'--cert-file' is present AND '--key-file' is present -``` - -#### 2.2 Ensure that the `--client-cert-auth` argument is set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the master -node and set the below parameter. - -``` bash ---client-cert-auth="true" -``` - -**Audit:** - -``` -/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep -``` - -**Expected result**: - -``` -'true' is equal to 'true' -``` - -#### 2.3 Ensure that the `--auto-tls` argument is not set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the master -node and either remove the `--auto-tls` parameter or set it to `false`. 
-
-``` bash
---auto-tls=false
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected result**:
-
-```
-'--auto-tls' is not present OR '--auto-tls' is not present
-```
-
-#### 2.4 Ensure that the `--peer-cert-file` and `--peer-key-file` arguments are set as appropriate (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Follow the etcd service documentation and configure peer TLS encryption as appropriate
-for your etcd cluster. Then, edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the
-master node and set the below parameters.
-
-``` bash
---peer-cert-file=
---peer-key-file=
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected result**:
-
-```
-'--peer-cert-file' is present AND '--peer-key-file' is present
-```
-
-#### 2.5 Ensure that the `--peer-client-cert-auth` argument is set to `true` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the master
-node and set the below parameter.
-
-``` bash
---peer-client-cert-auth=true
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected result**:
-
-```
-'true' is equal to 'true'
-```
-
-#### 2.6 Ensure that the `--peer-auto-tls` argument is not set to `true` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Edit the etcd pod specification file `/etc/kubernetes/manifests/etcd.yaml` on the master
-node and either remove the `--peer-auto-tls` parameter or set it to `false`.
-
-``` bash
---peer-auto-tls=false
-```
-
-**Audit:**
-
-```
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected result**:
-
-```
-'--peer-auto-tls' is not present OR '--peer-auto-tls' is present
-```
-
-## 3 Control Plane Configuration
-### 3.2 Logging
-
-#### 3.2.1 Ensure that a minimal audit policy is created (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Create an audit policy file for your cluster.
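As a starting point, a minimal audit policy that logs every request at the `Metadata` level could look like the following. This is an illustrative sketch, not the policy RKE ships; tune the rules for your environment.

``` yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log request metadata (user, verb, resource) for all requests,
  # without recording request or response bodies.
  - level: Metadata
```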
-
-**Audit Script:** 3.2.1.sh
-
-```
-#!/bin/bash -e
-
-api_server_bin=${1}
-
-/bin/ps -ef | /bin/grep ${api_server_bin} | /bin/grep -v ${0} | /bin/grep -v grep
-```
-
-**Audit Execution:**
-
-```
-./3.2.1.sh kube-apiserver
-```
-
-**Expected result**:
-
-```
-'--audit-policy-file' is present
-```
-
-## 4 Worker Node Security Configuration
-### 4.1 Worker Node Configuration Files
-
-#### 4.1.1 Ensure that the kubelet service file permissions are set to `644` or more restrictive (Scored)
-
-**Result:** Not Applicable
-
-**Remediation:**
-RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time.
-
-#### 4.1.2 Ensure that the kubelet service file ownership is set to `root:root` (Scored)
-
-**Result:** Not Applicable
-
-**Remediation:**
-RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time.
-
-#### 4.1.3 Ensure that the proxy kubeconfig file permissions are set to `644` or more restrictive (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-
-``` bash
-chmod 644 /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-```
-
-**Audit:**
-
-```
-/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %a /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected result**:
-
-```
-'644' is present OR '640' is present OR '600' is equal to '600' OR '444' is present OR '440' is present OR '400' is present OR '000' is present
-```
-
-#### 4.1.4 Ensure that the proxy kubeconfig file ownership is set to `root:root` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-
-``` bash
-chown root:root /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-```
-
-**Audit:**
-
-```
-/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected result**:
-
-```
-'root:root' is present
-```
-
-#### 4.1.5 Ensure that the kubelet.conf file permissions are set to `644` or more restrictive (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-
-``` bash
-chmod 644 /etc/kubernetes/ssl/kubecfg-kube-node.yaml
-```
-
-**Audit:**
-
-```
-/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %a /etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected result**:
-
-```
-'644' is present OR '640' is present OR '600' is equal to '600' OR '444' is present OR '440' is present OR '400' is present OR '000' is present
-```
-
-#### 4.1.6 Ensure that the kubelet.conf file ownership is set to `root:root` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-
-``` bash
-chown root:root /etc/kubernetes/ssl/kubecfg-kube-node.yaml
-```
-
-**Audit:**
-
-```
-/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected result**:
-
-```
-'root:root' is equal to 'root:root'
-```
-
-#### 4.1.7 Ensure that the certificate authorities file permissions are set to `644` or more restrictive (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Run the following command to modify the file permissions of the `--client-ca-file`. For example,
-
-``` bash
-chmod 644 /etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Audit:**
-
-```
-stat -c %a /etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Expected result**:
-
-```
-'644' is equal to '644' OR '640' is present OR '600' is present
-```
-
-#### 4.1.8 Ensure that the client certificate authorities file ownership is set to `root:root` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-Run the following command to modify the ownership of the `--client-ca-file`. For example,
-
-``` bash
-chown root:root /etc/kubernetes/ssl/kube-ca.pem
-```
-
-**Audit:**
-
-```
-/bin/sh -c 'if test -e /etc/kubernetes/ssl/kube-ca.pem; then stat -c %U:%G /etc/kubernetes/ssl/kube-ca.pem; fi'
-```
-
-**Expected result**:
-
-```
-'root:root' is equal to 'root:root'
-```
-
-#### 4.1.9 Ensure that the kubelet configuration file has permissions set to `644` or more restrictive (Scored)
-
-**Result:** Not Applicable
-
-**Remediation:**
-RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time.
-
-#### 4.1.10 Ensure that the kubelet configuration file ownership is set to `root:root` (Scored)
-
-**Result:** Not Applicable
-
-**Remediation:**
-RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time.
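The per-file checks in section 4.1 can be batched. The sketch below is a convenience helper (not part of the benchmark) that flags any kubeconfig under a given directory whose permissions are looser than the audits above accept; the directory default and file glob are assumptions based on the paths audited in this section.

``` bash
check_kubecfg_perms() {
  # Directory to scan; defaults to the RKE certificate directory used above.
  dir=${1:-/etc/kubernetes/ssl}
  rc=0
  for f in "$dir"/kubecfg-*.yaml; do
    [ -e "$f" ] || continue
    p=$(stat -c %a "$f")
    case "$p" in
      644|640|600|444|440|400|000) ;;  # permission values the 4.1.x audits accept
      *) echo "FAIL: $f $p"; rc=1 ;;
    esac
  done
  [ "$rc" -eq 0 ] && echo "pass"
  return $rc
}
```

Ownership (`root:root`) still needs to be verified separately, as in the audits for 4.1.4 and 4.1.6.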
-
-### 4.2 Kubelet
-
-#### 4.2.1 Ensure that the `--anonymous-auth` argument is set to `false` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
-`false`.
-If using executable arguments, edit the kubelet service file
-`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and
-set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable.
-
-``` bash
---anonymous-auth=false
-```
-
-Based on your system, restart the kubelet service. For example:
-
-``` bash
-systemctl daemon-reload
-systemctl restart kubelet.service
-```
-
-**Audit:**
-
-```
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```
-/bin/cat /var/lib/kubelet/config.yaml
-```
-
-**Expected result**:
-
-```
-'false' is equal to 'false'
-```
-
-#### 4.2.2 Ensure that the `--authorization-mode` argument is not set to `AlwaysAllow` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authorization: mode` to `Webhook`. If
-using executable arguments, edit the kubelet service file
-`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and
-set the below parameter in `KUBELET_AUTHZ_ARGS` variable.
-
-``` bash
---authorization-mode=Webhook
-```
-
-Based on your system, restart the kubelet service. For example:
-
-``` bash
-systemctl daemon-reload
-systemctl restart kubelet.service
-```
-
-**Audit:**
-
-```
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```
-/bin/cat /var/lib/kubelet/config.yaml
-```
-
-**Expected result**:
-
-```
-'Webhook' not have 'AlwaysAllow'
-```
-
-#### 4.2.3 Ensure that the `--client-ca-file` argument is set as appropriate (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `authentication: x509: clientCAFile` to
-the location of the client CA file.
-If using command line arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -set the below parameter in `KUBELET_AUTHZ_ARGS` variable. - -``` bash ---client-ca-file= -``` - -Based on your system, restart the kubelet service. For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'--client-ca-file' is present -``` - -#### 4.2.4 Ensure that the `--read-only-port` argument is set to `0` (Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to set `readOnlyPort` to `0`. -If using command line arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable. - -``` bash ---read-only-port=0 -``` - -Based on your system, restart the kubelet service. For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'0' is equal to '0' -``` - -#### 4.2.5 Ensure that the `--streaming-connection-idle-timeout` argument is not set to `0` (Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a -value other than `0`. -If using command line arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable. - -``` bash ---streaming-connection-idle-timeout=5m -``` - -Based on your system, restart the kubelet service. 
For example:
-
-``` bash
-systemctl daemon-reload
-systemctl restart kubelet.service
-```
-
-**Audit:**
-
-```
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```
-/bin/cat /var/lib/kubelet/config.yaml
-```
-
-**Expected result**:
-
-```
-'30m' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present
-```
-
-#### 4.2.6 Ensure that the `--protect-kernel-defaults` argument is set to `true` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
-If using command line arguments, edit the kubelet service file
-`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and
-set the below parameter in `KUBELET_SYSTEM_PODS_ARGS` variable.
-
-``` bash
---protect-kernel-defaults=true
-```
-
-Based on your system, restart the kubelet service. For example:
-
-``` bash
-systemctl daemon-reload
-systemctl restart kubelet.service
-```
-
-**Audit:**
-
-```
-/bin/ps -fC kubelet
-```
-
-**Audit Config:**
-
-```
-/bin/cat /var/lib/kubelet/config.yaml
-```
-
-**Expected result**:
-
-```
-'true' is equal to 'true'
-```
-
-#### 4.2.7 Ensure that the `--make-iptables-util-chains` argument is set to `true` (Scored)
-
-**Result:** PASS
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
-If using command line arguments, edit the kubelet service file
-`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and
-remove the `--make-iptables-util-chains` argument from the
-`KUBELET_SYSTEM_PODS_ARGS` variable.
-Based on your system, restart the kubelet service.
For example: - -```bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'true' is equal to 'true' OR '--make-iptables-util-chains' is not present -``` - -#### 4.2.10 Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate (Scored) - -**Result:** Not Applicable - -**Remediation:** -RKE doesn’t require or maintain a configuration file for the kubelet service. All configuration is passed in as arguments at container run time. - -#### 4.2.11 Ensure that the `--rotate-certificates` argument is not set to `false` (Scored) - -**Result:** PASS - -**Remediation:** -If using a Kubelet config file, edit the file to add the line `rotateCertificates`: `true` or -remove it altogether to use the default value. -If using command line arguments, edit the kubelet service file -`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` on each worker node and -remove `--rotate-certificates=false` argument from the `KUBELET_CERTIFICATE_ARGS` -variable. -Based on your system, restart the kubelet service. For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'--rotate-certificates' is present OR '--rotate-certificates' is not present -``` - -#### 4.2.12 Ensure that the `RotateKubeletServerCertificate` argument is set to `true` (Scored) - -**Result:** PASS - -**Remediation:** -Edit the kubelet service file `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` -on each worker node and set the below parameter in `KUBELET_CERTIFICATE_ARGS` variable. - -``` bash ---feature-gates=RotateKubeletServerCertificate=true -``` - -Based on your system, restart the kubelet service. 
For example: - -``` bash -systemctl daemon-reload -systemctl restart kubelet.service -``` - -**Audit:** - -``` -/bin/ps -fC kubelet -``` - -**Audit Config:** - -``` -/bin/cat /var/lib/kubelet/config.yaml -``` - -**Expected result**: - -``` -'true' is equal to 'true' -``` - -## 5 Kubernetes Policies -### 5.1 RBAC and Service Accounts - -#### 5.1.5 Ensure that default service accounts are not actively used. (Scored) - -**Result:** PASS - -**Remediation:** -Create explicit service accounts wherever a Kubernetes workload requires specific access -to the Kubernetes API server. -Modify the configuration of each default service account to include this value - -``` bash -automountServiceAccountToken: false -``` - -**Audit Script:** 5.1.5.sh - -``` -#!/bin/bash - -export KUBECONFIG=${KUBECONFIG:-/root/.kube/config} - -kubectl version > /dev/null -if [ $? -ne 0 ]; then - echo "fail: kubectl failed" - exit 1 -fi - -accounts="$(kubectl --kubeconfig=${KUBECONFIG} get serviceaccounts -A -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true)) | "fail \(.metadata.name) \(.metadata.namespace)"')" - -if [[ "${accounts}" != "" ]]; then - echo "fail: automountServiceAccountToken not false for accounts: ${accounts}" - exit 1 -fi - -default_binding="$(kubectl get rolebindings,clusterrolebindings -A -o json | jq -r '.items[] | select(.subjects[].kind=="ServiceAccount" and .subjects[].name=="default" and .metadata.name=="default").metadata.uid' | wc -l)" - -if [[ "${default_binding}" -gt 0 ]]; then - echo "fail: default service accounts have non default bindings" - exit 1 -fi - -echo "--pass" -exit 0 -``` - -**Audit Execution:** - -``` -./5.1.5.sh -``` - -**Expected result**: - -``` -'--pass' is present -``` - -### 5.2 Pod Security Policies - -#### 5.2.2 Minimize the admission of containers wishing to share the host process ID namespace (Scored) - -**Result:** PASS - -**Remediation:** 
-Create a PSP as described in the Kubernetes documentation, ensuring that the -`.spec.hostPID` field is omitted or set to `false`. - -**Audit:** - -``` -kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' -``` - -**Expected result**: - -``` -1 is greater than 0 -``` - -#### 5.2.3 Minimize the admission of containers wishing to share the host IPC namespace (Scored) - -**Result:** PASS - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -`.spec.hostIPC` field is omitted or set to `false`. - -**Audit:** - -``` -kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' -``` - -**Expected result**: - -``` -1 is greater than 0 -``` - -#### 5.2.4 Minimize the admission of containers wishing to share the host network namespace (Scored) - -**Result:** PASS - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -`.spec.hostNetwork` field is omitted or set to `false`. - -**Audit:** - -``` -kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' -``` - -**Expected result**: - -``` -1 is greater than 0 -``` - -#### 5.2.5 Minimize the admission of containers with `allowPrivilegeEscalation` (Scored) - -**Result:** PASS - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -`.spec.allowPrivilegeEscalation` field is omitted or set to `false`. 
- -**Audit:** - -``` -kubectl --kubeconfig=/root/.kube/config get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' -``` - -**Expected result**: - -``` -1 is greater than 0 -``` - -### 5.3 Network Policies and CNI - -#### 5.3.2 Ensure that all Namespaces have Network Policies defined (Scored) - -**Result:** PASS - -**Remediation:** -Follow the documentation and create `NetworkPolicy` objects as you need them. - -**Audit Script:** 5.3.2.sh - -``` -#!/bin/bash -e - -export KUBECONFIG=${KUBECONFIG:-"/root/.kube/config"} - -kubectl version > /dev/null -if [ $? -ne 0 ]; then - echo "fail: kubectl failed" - exit 1 -fi - -for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do - policy_count=$(kubectl get networkpolicy -n ${namespace} -o json | jq '.items | length') - if [ ${policy_count} -eq 0 ]; then - echo "fail: ${namespace}" - exit 1 - fi -done - -echo "pass" -``` - -**Audit Execution:** - -``` -./5.3.2.sh -``` - -**Expected result**: - -``` -'pass' is present -``` - -### 5.6 General Policies - -#### 5.6.4 The default namespace should not be used (Scored) - -**Result:** PASS - -**Remediation:** -Ensure that namespaces are created to allow for appropriate segregation of Kubernetes -resources and that all new resources are created in a specific namespace. - -**Audit Script:** 5.6.4.sh - -``` -#!/bin/bash -e - -export KUBECONFIG=${KUBECONFIG:-/root/.kube/config} - -kubectl version > /dev/null -if [[ $? 
-gt 0 ]]; then - echo "fail: kubectl failed" - exit 1 -fi - -default_resources=$(kubectl get all -o json | jq --compact-output '.items[] | select((.kind == "Service") and (.metadata.name == "kubernetes") and (.metadata.namespace == "default") | not)' | wc -l) - -echo "--count=${default_resources}" -``` - -**Audit Execution:** - -``` -./5.6.4.sh -``` - -**Expected result**: - -``` -'0' is equal to '0' -``` diff --git a/content/rancher/v2.6/en/security/rancher-2.5/1.5-hardening-2.5/_index.md b/content/rancher/v2.6/en/security/rancher-2.5/1.5-hardening-2.5/_index.md deleted file mode 100644 index 7f64bbbc6a9..00000000000 --- a/content/rancher/v2.6/en/security/rancher-2.5/1.5-hardening-2.5/_index.md +++ /dev/null @@ -1,720 +0,0 @@ ---- -title: Hardening Guide with CIS 1.5 Benchmark -weight: 200 ---- - -This document provides prescriptive guidance for hardening a production installation of a RKE cluster to be used with Rancher v2.5. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Information Security (CIS). - -> This hardening guide describes how to secure the nodes in your cluster, and it is recommended to follow this guide before installing Kubernetes. - -This hardening guide is intended to be used for RKE clusters and associated with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher: - - Rancher Version | CIS Benchmark Version | Kubernetes Version -----------------|-----------------------|------------------ - Rancher v2.5 | Benchmark v1.5 | Kubernetes 1.15 - -[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.5/Rancher_Hardening_Guide_CIS_1.5.pdf) - -### Overview - -This document provides prescriptive guidance for hardening a RKE cluster to be used for installing Rancher v2.5 with Kubernetes v1.15 or provisioning a RKE cluster with Kubernetes 1.15 to be used within Rancher v2.5. 
It outlines the configurations required to address Kubernetes benchmark controls from the Center for Information Security (CIS).
-
-For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS 1.5 Benchmark - Self-Assessment Guide - Rancher v2.5]({{< baseurl >}}/rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/).
-
-#### Known Issues
-
-- Rancher **exec shell** and **view logs** for pods are **not** functional in a CIS 1.5 hardened setup when only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes.
-- When setting the `default_pod_security_policy_template_id:` to `restricted`, Rancher creates **RoleBindings** and **ClusterRoleBindings** on the default service accounts. The CIS 1.5 5.1.5 check requires that the default service accounts have no roles or cluster roles bound to them apart from the defaults. In addition, the default service accounts should be configured such that they do not provide a service account token and do not have any explicit rights assignments.
-
-### Configure Kernel Runtime Parameters
-
-The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:
-
-```
-vm.overcommit_memory=1
-vm.panic_on_oom=0
-kernel.panic=10
-kernel.panic_on_oops=1
-kernel.keys.root_maxbytes=25000000
-```
-
-Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
-
-### Configure `etcd` user and group
-A user account and group for the **etcd** service must be set up before installing RKE. The **uid** and **gid** of the **etcd** user are used in the RKE **config.yml** to set the proper permissions for files and directories at installation time.
-
-#### Create the `etcd` user and group
-To create the **etcd** user and group, run the following console commands.
-
-The commands below use `52034` for the **uid** and **gid** for example purposes. Any valid unused **uid** or **gid** could also be used in lieu of `52034`.
-
-```
-groupadd --gid 52034 etcd
-useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
-```
-
-Update the RKE **config.yml** with the **uid** and **gid** of the **etcd** user:
-
-``` yaml
-services:
-  etcd:
-    gid: 52034
-    uid: 52034
-```
-
-#### Set `automountServiceAccountToken` to `false` for `default` service accounts
-Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.
-
-For each namespace, including **default** and **kube-system** on a standard RKE install, the **default** service account must include this value:
-
-```
-automountServiceAccountToken: false
-```
-
-Save the following yaml to a file called `account_update.yaml`:
-
-``` yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: default
-automountServiceAccountToken: false
-```
-
-Create a bash script file called `account_update.sh`. Be sure to `chmod +x account_update.sh` so the script has execute permissions.
-
-```
-#!/bin/bash -e
-
-for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
-  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
-done
-```
-
-### Ensure that all Namespaces have Network Policies defined
-
-Running different applications on the same Kubernetes cluster creates a risk of one
-compromised application attacking a neighboring application.
Network segmentation is
-important to ensure that containers can communicate only with those they are supposed
-to. A network policy is a specification of how selections of pods are allowed to
-communicate with each other and other network endpoints.
-
-Network Policies are namespace scoped. When a network policy is introduced to a given
-namespace, all traffic not allowed by the policy is denied. However, if there are no network
-policies in a namespace, all traffic will be allowed into and out of the pods in that
-namespace. To enforce network policies, a CNI (container network interface) plugin must be enabled.
-This guide uses [canal](https://github.com/projectcalico/canal) to provide the policy enforcement.
-Additional information about CNI providers can be found
-[here](https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/).
-
-Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a
-**permissive** example is provided below. If you want to allow all traffic to all pods in a namespace
-(even if policies are added that cause some pods to be treated as “isolated”),
-you can create a policy that explicitly allows all traffic in that namespace. Save the following `yaml` as
-`default-allow-all.yaml`. Additional [documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
-about network policies can be found on the Kubernetes site.
-
-> This `NetworkPolicy` is not recommended for production use.
-
-``` yaml
----
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: default-allow-all
-spec:
-  podSelector: {}
-  ingress:
-  - {}
-  egress:
-  - {}
-  policyTypes:
-  - Ingress
-  - Egress
-```
-
-Create a bash script file called `apply_networkPolicy_to_all_ns.sh`. Be sure to
-`chmod +x apply_networkPolicy_to_all_ns.sh` so the script has execute permissions.
-
-```
-#!/bin/bash -e
-
-for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
-  kubectl apply -f default-allow-all.yaml -n ${namespace}
-done
-```
-
-Execute this script to apply the **permissive** `NetworkPolicy` in `default-allow-all.yaml` to all namespaces.
-
-### Reference Hardened RKE `cluster.yml` configuration
-
-The reference `cluster.yml` is used by the RKE CLI and provides the configuration needed to achieve a hardened install
-of Rancher Kubernetes Engine (RKE). Install [documentation](https://rancher.com/docs/rke/latest/en/installation/) is
-provided with additional details about the configuration items. This reference `cluster.yml` does not include the required **nodes** directive, which will vary depending on your environment. Documentation for node configuration can be found here: https://rancher.com/docs/rke/latest/en/config-options/nodes
-
-
-``` yaml
-# If you intend to deploy Kubernetes in an air-gapped environment,
-# please consult the documentation on how to configure custom RKE images.
-kubernetes_version: "v1.15.9-rancher1-1" -enable_network_policy: true -default_pod_security_policy_template_id: "restricted" -# the nodes directive is required and will vary depending on your environment -# documentation for node configuration can be found here: -# https://rancher.com/docs/rke/latest/en/config-options/nodes -nodes: -services: - etcd: - uid: 52034 - gid: 52034 - kube-api: - pod_security_policy: true - secrets_encryption_config: - enabled: true - audit_log: - enabled: true - admission_configuration: - event_rate_limit: - enabled: true - kube-controller: - extra_args: - feature-gates: "RotateKubeletServerCertificate=true" - scheduler: - image: "" - extra_args: {} - extra_binds: [] - extra_env: [] - kubelet: - generate_serving_certificate: true - extra_args: - feature-gates: "RotateKubeletServerCertificate=true" - protect-kernel-defaults: "true" - tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256" - extra_binds: [] - extra_env: [] - cluster_domain: "" - infra_container_image: "" - cluster_dns_server: "" - fail_swap_on: false - kubeproxy: - image: "" - extra_args: {} - extra_binds: [] - extra_env: [] -network: - plugin: "" - options: {} - mtu: 0 - node_selector: {} -authentication: - strategy: "" - sans: [] - webhook: null -addons: | - --- - apiVersion: v1 - kind: Namespace - metadata: - name: ingress-nginx - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: Role - metadata: - name: default-psp-role - namespace: ingress-nginx - rules: - - apiGroups: - - extensions - resourceNames: - - default-psp - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: default-psp-rolebinding - namespace: ingress-nginx 
- roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: default-psp-role - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: v1 - kind: Namespace - metadata: - name: cattle-system - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: Role - metadata: - name: default-psp-role - namespace: cattle-system - rules: - - apiGroups: - - extensions - resourceNames: - - default-psp - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: default-psp-rolebinding - namespace: cattle-system - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: default-psp-role - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: - name: restricted - spec: - requiredDropCapabilities: - - NET_RAW - privileged: false - allowPrivilegeEscalation: false - defaultAllowPrivilegeEscalation: false - fsGroup: - rule: RunAsAny - runAsUser: - rule: MustRunAsNonRoot - seLinux: - rule: RunAsAny - supplementalGroups: - rule: RunAsAny - volumes: - - emptyDir - - secret - - persistentVolumeClaim - - downwardAPI - - configMap - - projected - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: psp:restricted - rules: - - apiGroups: - - extensions - resourceNames: - - restricted - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: psp:restricted - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: psp:restricted - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: 
system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: tiller - namespace: kube-system - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: tiller - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin - subjects: - - kind: ServiceAccount - name: tiller - namespace: kube-system - -addons_include: [] -system_images: - etcd: "" - alpine: "" - nginx_proxy: "" - cert_downloader: "" - kubernetes_services_sidecar: "" - kubedns: "" - dnsmasq: "" - kubedns_sidecar: "" - kubedns_autoscaler: "" - coredns: "" - coredns_autoscaler: "" - kubernetes: "" - flannel: "" - flannel_cni: "" - calico_node: "" - calico_cni: "" - calico_controllers: "" - calico_ctl: "" - calico_flexvol: "" - canal_node: "" - canal_cni: "" - canal_flannel: "" - canal_flexvol: "" - weave_node: "" - weave_cni: "" - pod_infra_container: "" - ingress: "" - ingress_backend: "" - metrics_server: "" - windows_pod_infra_container: "" -ssh_key_path: "" -ssh_cert_path: "" -ssh_agent_auth: false -authorization: - mode: "" - options: {} -ignore_docker_version: false -private_registries: [] -ingress: - provider: "" - options: {} - node_selector: {} - extra_args: {} - dns_policy: "" - extra_envs: [] - extra_volumes: [] - extra_volume_mounts: [] -cluster_name: "" -prefix_path: "" -addon_job_timeout: 0 -bastion_host: - address: "" - port: "" - user: "" - ssh_key: "" - ssh_key_path: "" - ssh_cert: "" - ssh_cert_path: "" -monitoring: - provider: "" - options: {} - node_selector: {} -restore: - restore: false - snapshot_name: "" -dns: null -``` - -### Reference Hardened RKE Template configuration - -The reference RKE Template provides the configuration needed to achieve a hardened install of Kubenetes. -RKE Templates are used to provision Kubernetes and define Rancher settings. 
Follow the Rancher -[documentaion](https://rancher.com/docs/rancher/v2.6/en/installation) for additional installation and RKE Template details. - -``` yaml -# -# Cluster Config -# -default_pod_security_policy_template_id: restricted -docker_root_dir: /var/lib/docker -enable_cluster_alerting: false -enable_cluster_monitoring: false -enable_network_policy: true -# -# Rancher Config -# -rancher_kubernetes_engine_config: - addon_job_timeout: 30 - addons: |- - --- - apiVersion: v1 - kind: Namespace - metadata: - name: ingress-nginx - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: Role - metadata: - name: default-psp-role - namespace: ingress-nginx - rules: - - apiGroups: - - extensions - resourceNames: - - default-psp - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: default-psp-rolebinding - namespace: ingress-nginx - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: default-psp-role - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: v1 - kind: Namespace - metadata: - name: cattle-system - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: Role - metadata: - name: default-psp-role - namespace: cattle-system - rules: - - apiGroups: - - extensions - resourceNames: - - default-psp - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: RoleBinding - metadata: - name: default-psp-rolebinding - namespace: cattle-system - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: default-psp-role - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: 
- name: restricted - spec: - requiredDropCapabilities: - - NET_RAW - privileged: false - allowPrivilegeEscalation: false - defaultAllowPrivilegeEscalation: false - fsGroup: - rule: RunAsAny - runAsUser: - rule: MustRunAsNonRoot - seLinux: - rule: RunAsAny - supplementalGroups: - rule: RunAsAny - volumes: - - emptyDir - - secret - - persistentVolumeClaim - - downwardAPI - - configMap - - projected - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: psp:restricted - rules: - - apiGroups: - - extensions - resourceNames: - - restricted - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: psp:restricted - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: psp:restricted - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: tiller - namespace: kube-system - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: tiller - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin - subjects: - - kind: ServiceAccount - name: tiller - namespace: kube-system - ignore_docker_version: true - kubernetes_version: v1.15.9-rancher1-1 -# -# If you are using calico on AWS -# -# network: -# plugin: calico -# calico_network_provider: -# cloud_provider: aws -# -# # To specify flannel interface -# -# network: -# plugin: flannel -# flannel_network_provider: -# iface: eth1 -# -# # To specify flannel interface for canal plugin -# -# network: -# plugin: canal -# canal_network_provider: -# iface: eth1 -# - network: - mtu: 0 - plugin: canal -# -# services: -# kube-api: -# service_cluster_ip_range: 10.43.0.0/16 -# kube-controller: -# cluster_cidr: 10.42.0.0/16 -# 
service_cluster_ip_range: 10.43.0.0/16 -# kubelet: -# cluster_domain: cluster.local -# cluster_dns_server: 10.43.0.10 -# - services: - etcd: - backup_config: - enabled: false - interval_hours: 12 - retention: 6 - safe_timestamp: false - creation: 12h - extra_args: - election-timeout: '5000' - heartbeat-interval: '500' - gid: 52034 - retention: 72h - snapshot: false - uid: 52034 - kube_api: - always_pull_images: false - audit_log: - enabled: true - event_rate_limit: - enabled: true - pod_security_policy: true - secrets_encryption_config: - enabled: true - service_node_port_range: 30000-32767 - kube_controller: - extra_args: - address: 127.0.0.1 - feature-gates: RotateKubeletServerCertificate=true - profiling: 'false' - terminated-pod-gc-threshold: '1000' - kubelet: - extra_args: - anonymous-auth: 'false' - event-qps: '0' - feature-gates: RotateKubeletServerCertificate=true - make-iptables-util-chains: 'true' - protect-kernel-defaults: 'true' - streaming-connection-idle-timeout: 1800s - tls-cipher-suites: >- - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - fail_swap_on: false - generate_serving_certificate: true - scheduler: - extra_args: - address: 127.0.0.1 - profiling: 'false' - ssh_agent_auth: false -windows_prefered_cluster: false -``` - -### Hardened Reference Ubuntu 18.04 LTS **cloud-config**: - -The reference **cloud-config** is generally used in cloud infrastructure environments to allow for -configuration management of compute instances. The reference config configures Ubuntu operating system level settings -needed before installing kubernetes. 
- -``` yaml -#cloud-config -packages: - - curl - - jq -runcmd: - - sysctl -w vm.overcommit_memory=1 - - sysctl -w kernel.panic=10 - - sysctl -w kernel.panic_on_oops=1 - - curl https://releases.rancher.com/install-docker/18.09.sh | sh - - usermod -aG docker ubuntu - - return=1; while [ $return != 0 ]; do sleep 2; docker ps; return=$?; done - - addgroup --gid 52034 etcd - - useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd -write_files: - - path: /etc/sysctl.d/kubelet.conf - owner: root:root - permissions: "0644" - content: | - vm.overcommit_memory=1 - kernel.panic=10 - kernel.panic_on_oops=1 -``` diff --git a/content/rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/_index.md b/content/rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/_index.md deleted file mode 100644 index d7803779eb6..00000000000 --- a/content/rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/_index.md +++ /dev/null @@ -1,3317 +0,0 @@ ---- -title: CIS 1.6 Benchmark - Self-Assessment Guide - Rancher v2.5.4 -weight: 101 ---- - -### CIS 1.6 Kubernetes Benchmark - Rancher v2.5.4 with Kubernetes v1.18 - -[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.5/Rancher_1.6_Benchmark_Assessment.pdf) - -#### Overview - -This document is a companion to the Rancher v2.5.4 security hardening guide. The hardening guide provides prescriptive guidance for hardening a production installation of Rancher, and this benchmark guide is meant to help you evaluate the level of security of the hardened cluster against each control in the benchmark. 
-
-This guide corresponds to specific versions of the hardening guide, Rancher, CIS Benchmark, and Kubernetes:
-
-Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version
----------------------------|----------|---------|-------
-Hardening Guide with CIS 1.6 Benchmark | Rancher v2.5.4 | CIS 1.6 | Kubernetes v1.18
-
-Because Rancher and RKE install Kubernetes services as Docker containers, many of the control verification checks in the CIS Kubernetes Benchmark don't apply and will have a result of `Not Applicable`. This guide will walk through the various controls and provide updated example commands to audit compliance in Rancher-created clusters.
-
-This document is to be used by Rancher operators, security teams, auditors, and decision makers.
-
-For more detail about each audit, including rationales and remediations for failing tests, you can refer to the corresponding section of the CIS Kubernetes Benchmark 1.6. You can download the benchmark after logging in to [CISecurity.org](https://www.cisecurity.org/benchmark/kubernetes/).
-
-#### Testing controls methodology
-
-Rancher and RKE install Kubernetes services via Docker containers. Configuration is defined by arguments passed to the container at the time of initialization, not via configuration files.
-
-Where control audits differ from the original CIS benchmark, the audit commands specific to Rancher Labs are provided for testing.
-When performing the tests, you will need access to the Docker command line on the hosts of all three RKE roles. The commands also make use of the [jq](https://stedolan.github.io/jq/) and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) (with valid config) tools, which are required for testing and evaluating the results.
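
The audits that follow share one underlying pattern: capture the effective arguments of a service, then check that an individual flag is present or set to the expected value. The snippet below is a minimal, self-contained sketch of that pattern in plain shell. The argument list is a hypothetical stand-in for output you would capture on a real node (for example from `docker inspect <container>` or `/bin/ps -fC kubelet`).

```shell
# Hypothetical argument list; on a real node this would come from
# inspecting the service container or process.
args='--anonymous-auth=false --profiling=false --protect-kernel-defaults=true'

# Check for a single flag, mirroring the "'--flag' is present"
# style of the expected results in this guide.
result="fail"
case " ${args} " in
  *" --protect-kernel-defaults=true "*) result="pass" ;;
esac
echo "${result}"
```

The surrounding spaces in the `case` pattern ensure the flag is matched as a whole word rather than as a substring of another argument.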
- -### Controls - -## 1.1 Etcd Node Configuration Files -### 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated) - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the below command: -ps -ef | grep etcd Run the below command (based on the etcd data directory found above). For example, -chmod 700 /var/lib/etcd - - -**Audit:** - -```bash -stat -c %a /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'700' is equal to '700' -``` - -**Returned Value**: - -```console -700 - -``` -### 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) - -**Result:** pass - -**Remediation:** -On the etcd server node, get the etcd data directory, passed as an argument --data-dir, -from the below command: -ps -ef | grep etcd -Run the below command (based on the etcd data directory found above). -For example, chown etcd:etcd /var/lib/etcd - -A system service account is required for etcd data directory ownership. -Refer to Rancher's hardening guide for more details on how to configure this ownership. - - -**Audit:** - -```bash -stat -c %U:%G /node/var/lib/etcd -``` - -**Expected Result**: - -```console -'etcd:etcd' is present -``` - -**Returned Value**: - -```console -etcd:etcd - -``` -### 1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated) - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. 
-For example, -chown -R root:root /etc/kubernetes/pki/ - - -**Audit:** - -```bash -check_files_owner_in_dir.sh /node/etc/kubernetes/ssl -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Audit Script:** -```bash -#!/usr/bin/env bash - -# This script is used to ensure the owner is set to root:root for -# the given directory and all the files in it -# -# inputs: -# $1 = /full/path/to/directory -# -# outputs: -# true/false - -INPUT_DIR=$1 - -if [[ "${INPUT_DIR}" == "" ]]; then - echo "false" - exit -fi - -if [[ $(stat -c %U:%G ${INPUT_DIR}) != "root:root" ]]; then - echo "false" - exit -fi - -statInfoLines=$(stat -c "%n %U:%G" ${INPUT_DIR}/*) -while read -r statInfoLine; do - f=$(echo ${statInfoLine} | cut -d' ' -f1) - p=$(echo ${statInfoLine} | cut -d' ' -f2) - - if [[ $(basename "$f" .pem) == "kube-etcd-"* ]]; then - if [[ "$p" != "root:root" && "$p" != "etcd:etcd" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "root:root" ]]; then - echo "false" - exit - fi - fi -done <<< "${statInfoLines}" - - -echo "true" -exit - -``` -**Returned Value**: - -```console -true - -``` -### 1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Automated) - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. 
-For example, -chmod -R 644 /etc/kubernetes/pki/*.crt - - -**Audit:** - -```bash -check_files_permissions.sh /node/etc/kubernetes/ssl/!(*key).pem -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Audit Script:** -```bash -#!/usr/bin/env bash - -# This script is used to ensure the file permissions are set to 644 or -# more restrictive for all files in a given directory or a wildcard -# selection of files -# -# inputs: -# $1 = /full/path/to/directory or /path/to/fileswithpattern -# ex: !(*key).pem -# -# $2 (optional) = permission (ex: 600) -# -# outputs: -# true/false - -# Turn on "extended glob" for use of '!' in wildcard -shopt -s extglob - -# Turn off history to avoid surprises when using '!' -set -H - -USER_INPUT=$1 - -if [[ "${USER_INPUT}" == "" ]]; then - echo "false" - exit -fi - - -if [[ -d ${USER_INPUT} ]]; then - PATTERN="${USER_INPUT}/*" -else - PATTERN="${USER_INPUT}" -fi - -PERMISSION="" -if [[ "$2" != "" ]]; then - PERMISSION=$2 -fi - -FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN}) - -while read -r fileInfo; do - p=$(echo ${fileInfo} | cut -d' ' -f2) - - if [[ "${PERMISSION}" != "" ]]; then - if [[ "$p" != "${PERMISSION}" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then - echo "false" - exit - fi - fi -done <<< "${FILES_PERMISSIONS}" - - -echo "true" -exit - -``` -**Returned Value**: - -```console -true - -``` -### 1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated) - -**Result:** pass - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. 
-For example, -chmod -R 600 /etc/kubernetes/ssl/*key.pem - - -**Audit:** - -```bash -check_files_permissions.sh /node/etc/kubernetes/ssl/*key.pem 600 -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Audit Script:** -```bash -#!/usr/bin/env bash - -# This script is used to ensure the file permissions are set to 644 or -# more restrictive for all files in a given directory or a wildcard -# selection of files -# -# inputs: -# $1 = /full/path/to/directory or /path/to/fileswithpattern -# ex: !(*key).pem -# -# $2 (optional) = permission (ex: 600) -# -# outputs: -# true/false - -# Turn on "extended glob" for use of '!' in wildcard -shopt -s extglob - -# Turn off history to avoid surprises when using '!' -set -H - -USER_INPUT=$1 - -if [[ "${USER_INPUT}" == "" ]]; then - echo "false" - exit -fi - - -if [[ -d ${USER_INPUT} ]]; then - PATTERN="${USER_INPUT}/*" -else - PATTERN="${USER_INPUT}" -fi - -PERMISSION="" -if [[ "$2" != "" ]]; then - PERMISSION=$2 -fi - -FILES_PERMISSIONS=$(stat -c %n\ %a ${PATTERN}) - -while read -r fileInfo; do - p=$(echo ${fileInfo} | cut -d' ' -f2) - - if [[ "${PERMISSION}" != "" ]]; then - if [[ "$p" != "${PERMISSION}" ]]; then - echo "false" - exit - fi - else - if [[ "$p" != "644" && "$p" != "640" && "$p" != "600" ]]; then - echo "false" - exit - fi - fi -done <<< "${FILES_PERMISSIONS}" - - -echo "true" -exit - -``` -**Returned Value**: - -```console -true - -``` -### 1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. 
- - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-apiserver.yaml; then stat -c permissions=%a /etc/kubernetes/manifests/kube-apiserver.yaml; fi' -``` - - -### 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for kube-apiserver. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-apiserver.yaml; then stat -c %U:%G /etc/kubernetes/manifests/kube-apiserver.yaml; fi' -``` - - -### 1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-controller-manager.yaml; then stat -c permissions=%a /etc/kubernetes/manifests/kube-controller-manager.yaml; fi' -``` - - -### 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. 
- - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-controller-manager.yaml; then stat -c %U:%G /etc/kubernetes/manifests/kube-controller-manager.yaml; fi' -``` - - -### 1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-scheduler.yaml; then stat -c permissions=%a /etc/kubernetes/manifests/kube-scheduler.yaml; fi' -``` - - -### 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-scheduler.yaml; then stat -c %U:%G /etc/kubernetes/manifests/kube-scheduler.yaml; fi' -``` - - -### 1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/etcd.yaml; then stat -c permissions=%a /etc/kubernetes/manifests/etcd.yaml; fi' -``` - - -### 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for etcd. -All configuration is passed in as arguments at container run time. 
- - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/manifests/etcd.yaml; then stat -c %U:%G /etc/kubernetes/manifests/etcd.yaml; fi' -``` - - -### 1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual) - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, -chmod 644 - - -**Audit:** - -```bash -stat -c permissions=%a -``` - - -### 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual) - -**Result:** warn - -**Remediation:** -Run the below command (based on the file location on your system) on the master node. -For example, -chown root:root - - -**Audit:** - -```bash -stat -c %U:%G -``` - - -### 1.1.13 Ensure that the admin.conf file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/admin.conf; then stat -c permissions=%a /etc/kubernetes/admin.conf; fi' -``` - - -### 1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE does not store the kubernetes default kubeconfig credentials file on the nodes. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e /etc/kubernetes/admin.conf; then stat -c %U:%G /etc/kubernetes/admin.conf; fi' -``` - - -### 1.1.15 Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. 
- - -**Audit:** - -```bash -/bin/sh -c 'if test -e scheduler; then stat -c permissions=%a scheduler; fi' -``` - - -### 1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for scheduler. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e scheduler; then stat -c %U:%G scheduler; fi' -``` - - -### 1.1.17 Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e controllermanager; then stat -c permissions=%a controllermanager; fi' -``` - - -### 1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated) - -**Result:** notApplicable - -**Remediation:** -Cluster provisioned by RKE doesn't require or maintain a configuration file for controller-manager. -All configuration is passed in as arguments at container run time. - - -**Audit:** - -```bash -/bin/sh -c 'if test -e controllermanager; then stat -c %U:%G controllermanager; fi' -``` - - -## 1.2 API Server -### 1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the below parameter. ---anonymous-auth=false - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'false' is equal to 'false' -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.2 Ensure that the --basic-auth-file argument is not set (Automated) - -**Result:** pass - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and remove the --basic-auth-file= parameter. - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--basic-auth-file' is not present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.3 Ensure that the --token-auth-file parameter is not set (Automated) - -**Result:** pass - -**Remediation:** -Follow the documentation and configure alternate mechanisms for authentication. Then, -edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and remove the --token-auth-file= parameter. - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--token-auth-file' is not present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.4 Ensure that the --kubelet-https argument is set to true (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and remove the --kubelet-https parameter. - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-https' is not present OR '--kubelet-https' is not present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.5 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and set up the TLS connection between the -apiserver and kubelets. Then, edit API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the -kubelet client certificate and key parameters as below. ---kubelet-client-certificate= ---kubelet-client-key= - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-client-certificate' is present AND '--kubelet-client-key' is present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -Follow the Kubernetes documentation and setup the TLS connection between -the apiserver and kubelets. Then, edit the API server pod specification file -/etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the ---kubelet-certificate-authority parameter to the path to the cert file for the certificate authority. ---kubelet-certificate-authority= - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console -'--kubelet-certificate-authority' is present -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User - -``` -### 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) - -**Result:** pass - -**Remediation:** -Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml -on the master node and set the --authorization-mode parameter to values other than AlwaysAllow. -One such example could be as below. ---authorization-mode=RBAC - - -**Audit:** - -```bash -/bin/ps -ef | grep kube-apiserver | grep -v grep -``` - -**Expected Result**: - -```console - 'Node,RBAC' not have 'AlwaysAllow' -``` - -**Returned Value**: - -```console -root 4643 4626 22 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.8 Ensure that the --authorization-mode argument includes Node (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and set the --authorization-mode parameter to a value that includes Node.
---authorization-mode=Node,RBAC
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'Node,RBAC' has 'Node'
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.9 Ensure that the --authorization-mode argument includes RBAC (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and set the --authorization-mode parameter to a value that includes RBAC,
-for example:
---authorization-mode=Node,RBAC
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'Node,RBAC' has 'RBAC'
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.10 Ensure that the admission control plugin EventRateLimit is set (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set the desired limits in a configuration file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-and set the below parameters.
---enable-admission-plugins=...,EventRateLimit,...
---admission-control-config-file=
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'EventRateLimit'
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.11 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and either remove the --enable-admission-plugins parameter, or set it to a
-value that does not include AlwaysAdmit.
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
- 'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' not have 'AlwaysAdmit' OR '--enable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.12 Ensure that the admission control plugin AlwaysPullImages is set (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and set the --enable-admission-plugins parameter to include
-AlwaysPullImages.
---enable-admission-plugins=...,AlwaysPullImages,...
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-
-### 1.2.13 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and set the --enable-admission-plugins parameter to include
-SecurityContextDeny, unless PodSecurityPolicy is already in place.
---enable-admission-plugins=...,SecurityContextDeny,...
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-
-### 1.2.14 Ensure that the admission control plugin ServiceAccount is set (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create ServiceAccount objects as per your environment.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and ensure that the --disable-admission-plugins parameter is set to a
-value that does not include ServiceAccount.
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is not present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.15 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and set the --disable-admission-plugins parameter to
-ensure it does not include NamespaceLifecycle.
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--disable-admission-plugins' is not present OR '--disable-admission-plugins' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.16 Ensure that the admission control plugin PodSecurityPolicy is set (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create Pod Security Policy objects as per your environment.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and set the --enable-admission-plugins parameter to a
-value that includes PodSecurityPolicy:
---enable-admission-plugins=...,PodSecurityPolicy,...
-Then restart the API Server.
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'PodSecurityPolicy'
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.17 Ensure that the admission control plugin NodeRestriction is set (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and set the --enable-admission-plugins parameter to a
-value that includes NodeRestriction.
---enable-admission-plugins=...,NodeRestriction,...
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit' has 'NodeRestriction'
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.18 Ensure that the --insecure-bind-address argument is not set (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and remove the --insecure-bind-address parameter.
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--insecure-bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.19 Ensure that the --insecure-port argument is set to 0 (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and set the below parameter.
---insecure-port=0
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'0' is equal to '0'
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.20 Ensure that the --secure-port argument is not set to 0 (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and either remove the --secure-port parameter or
set it to a different (non-zero) desired port.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
6443 is greater than 0 OR '--secure-port' is not present
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.21 Ensure that the --profiling argument is set to false (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the below parameter.
--profiling=false

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'false' is equal to 'false'
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.22 Ensure that the --audit-log-path argument is set (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example:
--audit-log-path=/var/log/apiserver/audit.log

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--audit-log-path' is present
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.23 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --audit-log-maxage parameter to 30 or as an appropriate number of days:
--audit-log-maxage=30

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
30 is greater or equal to 30
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.24 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --audit-log-maxbackup parameter to 10 or to an appropriate value.
--audit-log-maxbackup=10

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
10 is greater or equal to 10
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.25 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB:
--audit-log-maxsize=100

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
100 is greater or equal to 100
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.26 Ensure that the --request-timeout argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
and set the below parameter as appropriate and if needed.
For example,
--request-timeout=300s

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--request-timeout' is not present OR '--request-timeout' is not present
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.27 Ensure that the --service-account-lookup argument is set to true (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-lookup' is not present OR 'true' is equal to 'true'
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.28 Ensure that the --service-account-key-file argument is set as appropriate (Automated)

**Result:** pass

**Remediation:**
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the --service-account-key-file parameter
to the public key file for service accounts:
--service-account-key-file=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--service-account-key-file' is present
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.29 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the etcd certificate and key file parameters.
--etcd-certfile=
--etcd-keyfile=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--etcd-certfile' is present AND '--etcd-keyfile' is present
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
```

### 1.2.30 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)

**Result:** pass

**Remediation:**
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the master node and set the TLS certificate and private key file parameters.
--tls-cert-file=
--tls-private-key-file=

**Audit:**

```bash
/bin/ps -ef | grep kube-apiserver | grep -v grep
```

**Expected Result**:

```console
'--tls-cert-file' is present AND '--tls-private-key-file' is present
```

**Returned Value**:

```console
root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.31 Ensure that the --client-ca-file argument is set as appropriate (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and set the client certificate authority file.
---client-ca-file=
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.32 Ensure that the --etcd-cafile argument is set as appropriate (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and set the etcd certificate authority file parameter.
---etcd-cafile=
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--etcd-cafile' is present
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.33 Ensure that the --encryption-provider-config argument is set as appropriate (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and set the --encryption-provider-config parameter to the path of that file: --encryption-provider-config=
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--encryption-provider-config' is present
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 1.2.34 Ensure that encryption providers are appropriately configured (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Follow the Kubernetes documentation and configure an EncryptionConfig file.
-In this file, choose aescbc, kms or secretbox as the encryption provider.
-
-
-**Audit:**
-
-```bash
-check_encryption_provider_config.sh aescbc kms secretbox
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Audit Script:**
-```bash
-#!/usr/bin/env bash
-
-# This script is used to check that the encryption provider config is set to aescbc
-#
-# outputs:
-#   true/false
-
-# TODO: Figure out the file location from the kube-apiserver commandline args
-ENCRYPTION_CONFIG_FILE="/node/etc/kubernetes/ssl/encryption.yaml"
-
-if [[ ! -f "${ENCRYPTION_CONFIG_FILE}" ]]; then
-  echo "false"
-  exit
-fi
-
-for provider in "$@"
-do
-  if grep "$provider" "${ENCRYPTION_CONFIG_FILE}"; then
-    echo "true"
-    exit
-  fi
-done
-
-echo "false"
-exit
-
-```
-**Returned Value**:
-
-```console
-
-  - aescbc:
-true
-
-```
-### 1.2.35 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)
-
-**Result:** warn
-
-**Remediation:**
-Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
-on the master node and set the below parameter.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-
-## 1.3 Controller Manager
-### 1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the master node and set the --terminated-pod-gc-threshold to an appropriate threshold,
-for example:
---terminated-pod-gc-threshold=10
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--terminated-pod-gc-threshold' is present
-```
-
-**Returned Value**:
-
-```console
-root 4788 4773 4 16:16 ? 00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-
-```
-### 1.3.2 Ensure that the --profiling argument is set to false (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the master node and set the below parameter.
---profiling=false
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'false' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4788 4773 4 16:16 ? 00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-
-```
-### 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the master node to set the below parameter.
---use-service-account-credentials=true
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'true' is not equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4788 4773 4 16:16 ?
00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-
-```
-### 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the master node and set the --service-account-private-key-file parameter
-to the private key file for service accounts.
---service-account-private-key-file=
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--service-account-private-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4788 4773 4 16:16 ?
00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-
-```
-### 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the master node and set the --root-ca-file parameter to the certificate bundle file.
---root-ca-file=
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--root-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4788 4773 4 16:16 ?
00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-
-```
-### 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
-
-**Result:** notApplicable
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the master node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
---feature-gates=RotateKubeletServerCertificate=true
-
-Clusters provisioned by RKE handle certificate rotation directly through RKE.
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-
-### 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
-on the master node and ensure the correct value for the --bind-address parameter.
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-controller-manager | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is not present OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4788 4773 4 16:16 ?
00:00:09 kube-controller-manager --configure-cloud-routes=false --cloud-provider= --service-cluster-ip-range=10.43.0.0/16 --v=2 --pod-eviction-timeout=5m0s --leader-elect=true --cluster-cidr=10.42.0.0/16 --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --profiling=false --node-monitor-grace-period=40s --service-account-private-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --address=0.0.0.0 --allow-untagged-cloud=true --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --allocate-node-cidrs=true --enable-hostpath-provisioner=false --terminated-pod-gc-threshold=1000 --feature-gates=RotateKubeletServerCertificate=true --use-service-account-credentials=true
-
-```
-## 1.4 Scheduler
-### 1.4.1 Ensure that the --profiling argument is set to false (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
-on the master node and set the below parameter.
---profiling=false
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'false' is equal to 'false'
-```
-
-**Returned Value**:
-
-```console
-root 4947 4930 1 16:16 ? 00:00:02 kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --address=0.0.0.0
-
-```
-### 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml
-on the master node and ensure the correct value for the --bind-address parameter.
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-scheduler | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--bind-address' is not present OR '--bind-address' is not present
-```
-
-**Returned Value**:
-
-```console
-root 4947 4930 1 16:16 ?
00:00:02 kube-scheduler --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml --leader-elect=true --profiling=false --v=2 --address=0.0.0.0
-
-```
-## 2 Etcd Node Configuration Files
-### 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure TLS encryption.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
-on the master node and set the below parameters.
---cert-file=
---key-file=
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--cert-file' is present AND '--key-file' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 4318 4301 6 16:15 ? 00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem
-root 4366 4349 0 16:15 ?
00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h
-root 4643 4626 23 16:15 ? 00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem
--service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json
-
-```
-### 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and set the below parameter.
---client-cert-auth="true"
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--client-cert-auth' is present OR 'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-etcd 4318 4301 6 16:15 ?
00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem
-root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h
-root 4643 4626 23 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json
-
-```
-### 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and either remove the --auto-tls parameter or set it to false.
- --auto-tls=false
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--auto-tls' is not present OR '--auto-tls' is not present
-```
-
-**Returned Value**:
-
-```console
-etcd 4318 4301 6 16:15 ?
00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem -root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h -root 4643 4626 23 16:15 ? 
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json
-
-```
-### 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Follow the etcd service documentation and configure peer TLS encryption as appropriate
-for your etcd cluster.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
-master node and set the below parameters.
---peer-cert-file=
---peer-key-file=
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--peer-cert-file' is present AND '--peer-key-file' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 4318 4301 6 16:15 ? 00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem
-root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h
-root 4643 4626 23 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json
-
-```
-### 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and set the below parameter.
---peer-client-cert-auth=true
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--peer-client-cert-auth' is present OR 'true' is equal to 'true'
-```
-
-**Returned Value**:
-
-```console
-etcd 4318 4301 6 16:15 ? 00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem
-root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h
-root 4643 4626 23 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json
-
-```
-### 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the master
-node and either remove the --peer-auto-tls parameter or set it to false.
---peer-auto-tls=false
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--peer-auto-tls' is not present OR '--peer-auto-tls' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 4318 4301 6 16:15 ? 00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem
-root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h
-root 4643 4626 23 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json
-
-```
-### 2.7 Ensure that a unique Certificate Authority is used for etcd (Automated)
-
-**Result:** pass
-
-**Remediation:**
-[Manual test]
-Follow the etcd documentation and create a dedicated certificate authority setup for the
-etcd service.
-Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml on the
-master node and set the below parameter.
---trusted-ca-file=
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | /bin/grep etcd | /bin/grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--trusted-ca-file' is present
-```
-
-**Returned Value**:
-
-```console
-etcd 4318 4301 6 16:15 ? 00:00:14 /usr/local/bin/etcd --listen-peer-urls=https://0.0.0.0:2380 --cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --advertise-client-urls=https://192.168.1.225:2379,https://192.168.1.225:4001 --election-timeout=5000 --data-dir=/var/lib/rancher/etcd/ --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225.pem --enable-v2=true --initial-cluster=etcd-cis-aio-0=https://192.168.1.225:2380 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --client-cert-auth=true --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --name=etcd-cis-aio-0 --listen-client-urls=https://0.0.0.0:2379 --peer-key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem --peer-client-cert-auth=true --initial-advertise-peer-urls=https://192.168.1.225:2380 --initial-cluster-state=new --key-file=/etc/kubernetes/ssl/kube-etcd-192-168-1-225-key.pem
-root 4366 4349 0 16:15 ? 00:00:00 /opt/rke-tools/rke-etcd-backup etcd-backup save --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-node.pem --key /etc/kubernetes/ssl/kube-node-key.pem --name etcd-rolling-snapshots --endpoints=192.168.1.225:2379 --retention=72h --creation=12h
-root 4643 4626 23 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-root 14998 14985 0 16:19 ? 00:00:00 kube-bench run --targets etcd --scored --nosummary --noremediations --v=5 --config-dir=/etc/kube-bench/cfg --benchmark rke-cis-1.6-hardened --json --log_dir /tmp/results/logs --outputfile /tmp/results/etcd.json
-
-```
-## 3.1 Authentication and Authorization
-### 3.1.1 Client certificate authentication should not be used for users (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
-implemented in place of client certificates.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-## 3.2 Logging
-### 3.2.1 Ensure that a minimal audit policy is created (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Create an audit policy file for your cluster.
-
-
-**Audit:**
-
-```bash
-/bin/ps -ef | grep kube-apiserver | grep -v grep
-```
-
-**Expected Result**:
-
-```console
-'--audit-policy-file' is present
-```
-
-**Returned Value**:
-
-```console
-root 4643 4626 22 16:15 ?
00:00:46 kube-apiserver --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --proxy-client-cert-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client.pem --service-cluster-ip-range=10.43.0.0/16 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --authorization-mode=Node,RBAC --audit-log-maxsize=100 --audit-log-format=json --requestheader-allowed-names=kube-apiserver-proxy-client --cloud-provider= --etcd-prefix=/registry --proxy-client-key-file=/etc/kubernetes/ssl/kube-apiserver-proxy-client-key.pem --allow-privileged=true --service-account-lookup=true --admission-control-config-file=/etc/kubernetes/admission.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --requestheader-client-ca-file=/etc/kubernetes/ssl/kube-apiserver-requestheader-ca.pem --service-node-port-range=30000-32767 --kubelet-certificate-authority=/etc/kubernetes/ssl/kube-ca.pem --storage-backend=etcd3 --anonymous-auth=false --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --advertise-address=192.168.1.225 --audit-log-maxage=30 --etcd-servers=https://192.168.1.225:2379 --runtime-config=policy/v1beta1/podsecuritypolicy=true --bind-address=0.0.0.0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --audit-log-path=/var/log/kube-audit/audit-log.json --audit-log-maxbackup=10 --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --service-account-key-file=/etc/kubernetes/ssl/kube-service-account-token-key.pem --insecure-port=0 --requestheader-group-headers=X-Remote-Group --secure-port=6443 
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority,TaintNodesByCondition,PersistentVolumeClaimResize,PodSecurityPolicy,EventRateLimit --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml --requestheader-extra-headers-prefix=X-Remote-Extra- --profiling=false --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --requestheader-username-headers=X-Remote-User
-
-```
-### 3.2.2 Ensure that the audit policy covers key security concerns (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Consider modification of the audit policy in use on the cluster to include these items, at a
-minimum.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-## 4.1 Worker Node Configuration Files
-### 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
-
-**Result:** notApplicable
-
-**Remediation:**
-Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; then stat -c permissions=%a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; fi'
-```
-
-
-### 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
-
-**Result:** notApplicable
-
-**Remediation:**
-Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet service.
-All configuration is passed in as arguments at container run time.
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; then stat -c %U:%G /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; fi'
-```
-
-
-### 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 $proxykubeconfig
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %a /node/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'644' is present OR '640' is present OR '600' is equal to '600' OR '444' is present OR '440' is present OR '400' is present OR '000' is present
-```
-
-**Returned Value**:
-
-```console
-600
-
-```
-### 4.1.4 Ensure that the proxy kubeconfig file ownership is set to root:root (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example, chown root:root /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; then stat -c %U:%G /etc/kubernetes/ssl/kubecfg-kube-proxy.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is not present OR '/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml' is not present
-```
-
-### 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chmod 644 /etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c permissions=%a /etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'permissions' is not present
-```
-
-### 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Run the below command (based on the file location on your system) on each worker node.
-For example,
-chown root:root /etc/kubernetes/ssl/kubecfg-kube-node.yaml
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; then stat -c %U:%G /node/etc/kubernetes/ssl/kubecfg-kube-node.yaml; fi'
-```
-
-**Expected Result**:
-
-```console
-'root:root' is equal to 'root:root'
-```
-
-**Returned Value**:
-
-```console
-root:root
-
-```
-### 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the file permissions of the
---client-ca-file chmod 644
-
-
-**Audit:**
-
-```bash
-check_cafile_permissions.sh
-```
-
-**Expected Result**:
-
-```console
-'permissions' is not present
-```
-
-**Audit Script:**
-```bash
-#!/usr/bin/env bash
-
-CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
-if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
-if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
-
-```
-### 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Run the following command to modify the ownership of the --client-ca-file.
-chown root:root
-
-
-**Audit:**
-
-```bash
-check_cafile_ownership.sh
-```
-
-**Expected Result**:
-
-```console
-'root:root' is not present
-```
-
-**Audit Script:**
-```bash
-#!/usr/bin/env bash
-
-CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
-if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
-if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
-
-```
-### 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
-
-**Result:** notApplicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chmod 644 /var/lib/kubelet/config.yaml
-
-Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
-All configuration is passed in as arguments at container run time.
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then stat -c permissions=%a /var/lib/kubelet/config.yaml; fi'
-```
-
-
-### 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
-
-**Result:** notApplicable
-
-**Remediation:**
-Run the following command (using the config file location identified in the Audit step)
-chown root:root /var/lib/kubelet/config.yaml
-
-Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
-All configuration is passed in as arguments at container run time.
-
-
-**Audit:**
-
-```bash
-/bin/sh -c 'if test -e /var/lib/kubelet/config.yaml; then stat -c %U:%G /var/lib/kubelet/config.yaml; fi'
-```
-
-
-## 4.2 Kubelet
-### 4.2.1 Ensure that the anonymous-auth argument is set to false (Automated)
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
-false.
-If using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---anonymous-auth=false -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present -``` - -### 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If -using executable arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---authorization-mode=Webhook -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present -``` - -### 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to -the location of the client CA file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_AUTHZ_ARGS variable. ---client-ca-file= -Based on your system, restart the kubelet service. 
For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present -``` - -### 4.2.4 Ensure that the --read-only-port argument is set to 0 (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set readOnlyPort to 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---read-only-port=0 -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present OR '' is not present -``` - -### 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a -value other than 0. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---streaming-connection-idle-timeout=5m -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'30m' is not equal to '0' OR '--streaming-connection-idle-timeout' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD -root 5103 5086 7 16:16 ? 
00:00:12 kubelet --resolv-conf=/etc/resolv.conf --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --make-iptables-util-chains=true --streaming-connection-idle-timeout=30m --cluster-dns=10.43.0.10 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-192-168-1-225-key.pem --address=0.0.0.0 --cni-bin-dir=/opt/cni/bin --anonymous-auth=false --protect-kernel-defaults=true --cloud-provider= --hostname-override=cis-aio-0 --fail-swap-on=false --cgroups-per-qos=True --authentication-token-webhook=true --event-qps=0 --v=2 --pod-infra-container-image=rancher/pause:3.1 --authorization-mode=Webhook --network-plugin=cni --cluster-domain=cluster.local --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cni-conf-dir=/etc/cni/net.d --root-dir=/var/lib/kubelet --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-192-168-1-225.pem --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf - -``` -### 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set protectKernelDefaults: true. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. ---protect-kernel-defaults=true -Based on your system, restart the kubelet service. 
For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present -``` - -### 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove the --make-iptables-util-chains argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present OR '' is not present -``` - -### 4.2.8 Ensure that the --hostname-override argument is not set (Manual) - -**Result:** notApplicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and remove the --hostname-override argument from the -KUBELET_SYSTEM_PODS_ARGS variable. -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - -Clusters provisioned by RKE set the --hostname-override to avoid any hostname configuration errors - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - - -### 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable. 
---event-qps=0
-Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present -``` - -### 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to set tlsCertFile to the location -of the certificate file to use to identify this Kubelet, and tlsPrivateKeyFile -to the location of the corresponding private key file. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -set the below parameters in KUBELET_CERTIFICATE_ARGS variable. ---tls-cert-file= ---tls-private-key-file= -Based on your system, restart the kubelet service. For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'' is not present AND '' is not present -``` - -### 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Automated) - -**Result:** pass - -**Remediation:** -If using a Kubelet config file, edit the file to add the line rotateCertificates: true or -remove it altogether to use the default value. -If using command line arguments, edit the kubelet service file -/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and -remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS -variable. -Based on your system, restart the kubelet service. 
For example: -systemctl daemon-reload -systemctl restart kubelet.service - - -**Audit:** - -```bash -/bin/ps -fC kubelet -``` - -**Expected Result**: - -```console -'--rotate-certificates' is not present OR '--rotate-certificates' is not present -``` - -**Returned Value**: - -```console -UID PID PPID C STIME TTY TIME CMD -root 5103 5086 6 16:16 ? 00:00:12 kubelet --resolv-conf=/etc/resolv.conf --read-only-port=0 --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 --feature-gates=RotateKubeletServerCertificate=true --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --make-iptables-util-chains=true --streaming-connection-idle-timeout=30m --cluster-dns=10.43.0.10 --tls-private-key-file=/etc/kubernetes/ssl/kube-kubelet-192-168-1-225-key.pem --address=0.0.0.0 --cni-bin-dir=/opt/cni/bin --anonymous-auth=false --protect-kernel-defaults=true --cloud-provider= --hostname-override=cis-aio-0 --fail-swap-on=false --cgroups-per-qos=True --authentication-token-webhook=true --event-qps=0 --v=2 --pod-infra-container-image=rancher/pause:3.1 --authorization-mode=Webhook --network-plugin=cni --cluster-domain=cluster.local --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --volume-plugin-dir=/var/lib/kubelet/volumeplugins --cni-conf-dir=/etc/cni/net.d --root-dir=/var/lib/kubelet --tls-cert-file=/etc/kubernetes/ssl/kube-kubelet-192-168-1-225.pem --cgroup-driver=cgroupfs --resolv-conf=/run/systemd/resolve/resolv.conf - -``` -### 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Automated) - -**Result:** notApplicable - -**Remediation:** -Edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS 
variable.
---feature-gates=RotateKubeletServerCertificate=true
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-Clusters provisioned by RKE handle certificate rotation directly through RKE.
-
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-
-### 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)
-
-**Result:** pass
-
-**Remediation:**
-If using a Kubelet config file, edit the file to set TLSCipherSuites: to
-TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-or to a subset of these values.
-If using executable arguments, edit the kubelet service file
-/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and
-set the --tls-cipher-suites parameter as follows, or to a subset of these values.
---tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-Based on your system, restart the kubelet service. For example:
-systemctl daemon-reload
-systemctl restart kubelet.service
-
-
-**Audit:**
-
-```bash
-/bin/ps -fC kubelet
-```
-
-**Expected Result**:
-
-```console
-'' is not present
-```
-
-## 5.1 RBAC and Service Accounts
-### 5.1.1 Ensure that the cluster-admin role is only used where required (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
-if they need this role or if they could use a role with fewer privileges.
-Where possible, first bind users to a lower privileged role and then remove the
-clusterrolebinding to the cluster-admin role:
-kubectl delete clusterrolebinding [name]
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.1.2 Minimize access to secrets (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove get, list and watch access to secret objects in the cluster.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Where possible, replace any use of wildcards in clusterroles and roles with specific
-objects or actions.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.1.4 Minimize access to create pods (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Where possible, remove create access to pod objects in the cluster.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.1.5 Ensure that default service accounts are not actively used. (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Create explicit service accounts wherever a Kubernetes workload requires specific access
-to the Kubernetes API server.
-Modify the configuration of each default service account to include this value -automountServiceAccountToken: false - - -**Audit:** - -```bash -check_for_default_sa.sh -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Audit Script:** -```bash -#!/bin/bash - -set -eE - -handle_error() { - echo "false" -} - -trap 'handle_error' ERR - -count_sa=$(kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name=="default") | select((.automountServiceAccountToken == null) or (.automountServiceAccountToken == true))' | jq .metadata.namespace | wc -l) -if [[ ${count_sa} -gt 0 ]]; then - echo "false" - exit -fi - -for ns in $(kubectl get ns --no-headers -o custom-columns=":metadata.name") -do - for result in $(kubectl get clusterrolebinding,rolebinding -n $ns -o json | jq -r '.items[] | select((.subjects[].kind=="ServiceAccount" and .subjects[].name=="default") or (.subjects[].kind=="Group" and .subjects[].name=="system:serviceaccounts"))' | jq -r '"\(.roleRef.kind),\(.roleRef.name)"') - do - read kind name <<<$(IFS=","; echo $result) - resource_count=$(kubectl get $kind $name -n $ns -o json | jq -r '.rules[] | select(.resources[] != "podsecuritypolicies")' | wc -l) - if [[ ${resource_count} -gt 0 ]]; then - echo "false" - exit - fi - done -done - - -echo "true" -``` -**Returned Value**: - -```console -true - -``` -### 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Manual) - -**Result:** warn - -**Remediation:** -Modify the definition of pods and service accounts which do not need to mount service -account tokens to disable it. - - -**Audit:** - -```bash - -``` - - -## 5.2 Pod Security Policies -### 5.2.1 Minimize the admission of privileged containers (Manual) - -**Result:** warn - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that -the .spec.privileged field is omitted or set to false. 
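-
-As a hedged illustration (resource name invented), a PSP that blocks privileged containers could look like the sketch below; the last four rules are required PSP fields, shown here with permissive placeholder values:
-
-```yaml
-apiVersion: policy/v1beta1
-kind: PodSecurityPolicy
-metadata:
-  name: no-privileged   # illustrative name
-spec:
-  privileged: false     # the field this check audits
-  seLinux:
-    rule: RunAsAny
-  runAsUser:
-    rule: RunAsAny
-  supplementalGroups:
-    rule: RunAsAny
-  fsGroup:
-    rule: RunAsAny
-```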
- - -**Audit:** - -```bash - -``` - - -### 5.2.2 Minimize the admission of containers wishing to share the host process ID namespace (Automated) - -**Result:** pass - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -.spec.hostPID field is omitted or set to false. - - -**Audit:** - -```bash -kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' -``` - -**Expected Result**: - -```console -1 is greater than 0 -``` - -**Returned Value**: - -```console ---count=1 - -``` -### 5.2.3 Minimize the admission of containers wishing to share the host IPC namespace (Automated) - -**Result:** pass - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -.spec.hostIPC field is omitted or set to false. - - -**Audit:** - -```bash -kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' -``` - -**Expected Result**: - -```console -1 is greater than 0 -``` - -**Returned Value**: - -```console ---count=1 - -``` -### 5.2.4 Minimize the admission of containers wishing to share the host network namespace (Automated) - -**Result:** pass - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -.spec.hostNetwork field is omitted or set to false. 
- - -**Audit:** - -```bash -kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' -``` - -**Expected Result**: - -```console -1 is greater than 0 -``` - -**Returned Value**: - -```console ---count=1 - -``` -### 5.2.5 Minimize the admission of containers with allowPrivilegeEscalation (Automated) - -**Result:** pass - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -.spec.allowPrivilegeEscalation field is omitted or set to false. - - -**Audit:** - -```bash -kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}' -``` - -**Expected Result**: - -```console -1 is greater than 0 -``` - -**Returned Value**: - -```console ---count=1 - -``` -### 5.2.6 Minimize the admission of root containers (Manual) - -**Result:** warn - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -.spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of -UIDs not including 0. - - -**Audit:** - -```bash - -``` - - -### 5.2.7 Minimize the admission of containers with the NET_RAW capability (Manual) - -**Result:** warn - -**Remediation:** -Create a PSP as described in the Kubernetes documentation, ensuring that the -.spec.requiredDropCapabilities is set to include either NET_RAW or ALL. - - -**Audit:** - -```bash - -``` - - -### 5.2.8 Minimize the admission of containers with added capabilities (Manual) - -**Result:** warn - -**Remediation:** -Ensure that allowedCapabilities is not present in PSPs for the cluster unless -it is set to an empty array. 
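-
-For example, a PSP `spec` fragment that satisfies this check either omits `allowedCapabilities` or pins it to an empty array (partial sketch, not a complete PSP):
-
-```yaml
-spec:
-  allowedCapabilities: []   # or omit the field entirely
-```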
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.2.9 Minimize the admission of containers with capabilities assigned (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Review the use of capabilities in applications running on your cluster. Where a namespace
-contains applications which do not require any Linux capabilities to operate, consider adding
-a PSP which forbids the admission of containers which do not drop all capabilities.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-## 5.3 Network Policies and CNI
-### 5.3.1 Ensure that the CNI in use supports Network Policies (Manual)
-
-**Result:** warn
-
-**Remediation:**
-If the CNI plugin in use does not support network policies, consideration should be given to
-making use of a different plugin, or finding an alternate mechanism for restricting traffic
-in the Kubernetes cluster.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.3.2 Ensure that all Namespaces have Network Policies defined (Automated)
-
-**Result:** pass
-
-**Remediation:**
-Follow the documentation and create NetworkPolicy objects as you need them.
-
-
-**Audit:**
-
-```bash
-check_for_network_policies.sh
-```
-
-**Expected Result**:
-
-```console
-'true' is equal to 'true'
-```
-
-**Audit Script:**
-```bash
-#!/bin/bash
-
-set -eE
-
-handle_error() {
-  echo "false"
-}
-
-trap 'handle_error' ERR
-
-for namespace in $(kubectl get namespaces --all-namespaces -o json | jq -r '.items[].metadata.name'); do
-  policy_count=$(kubectl get networkpolicy -n ${namespace} -o json | jq '.items | length')
-  if [[ ${policy_count} -eq 0 ]]; then
-    echo "false"
-    exit
-  fi
-done
-
-echo "true"
-
-```
-**Returned Value**:
-
-```console
-true
-
-```
-## 5.4 Secrets Management
-### 5.4.1 Prefer using secrets as files over secrets as environment variables (Manual)
-
-**Result:** warn
-
-**Remediation:**
-If possible, rewrite application code to read secrets from mounted secret files, rather than
-from environment variables.
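-
-For illustration, a pod spec fragment (names hypothetical) that surfaces a secret as a mounted file rather than an environment variable might look like:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: secret-as-file        # hypothetical pod name
-spec:
-  containers:
-  - name: app
-    image: registry.example.com/app:latest   # placeholder image
-    volumeMounts:
-    - name: app-secret
-      mountPath: /etc/app-secret
-      readOnly: true
-  volumes:
-  - name: app-secret
-    secret:
-      secretName: app-secret  # hypothetical Secret name
-```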
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.4.2 Consider external secret storage (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Refer to the secrets management options offered by your cloud provider or a third-party
-secrets management solution.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-## 5.5 Extensible Admission Control
-### 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Follow the Kubernetes documentation and set up image provenance.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-## 5.7 General Policies
-### 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Follow the documentation and create namespaces for objects in your deployment as you need
-them.
-
-
-**Audit:**
-
-```bash
-
-```
-
-
-### 5.7.2 Ensure that the seccomp profile is set to docker/default in your pod definitions (Manual)
-
-**Result:** warn
-
-**Remediation:**
-Seccomp is an alpha feature currently. By default, all alpha features are disabled. So, you
-would need to enable alpha features in the apiserver by passing the "--feature-gates=AllAlpha=true" argument.
-Edit the /etc/kubernetes/apiserver file on the master node and set the KUBE_API_ARGS
-parameter to "--feature-gates=AllAlpha=true"
-KUBE_API_ARGS="--feature-gates=AllAlpha=true"
-Based on your system, restart the kube-apiserver service. For example:
-systemctl restart kube-apiserver.service
-Use annotations to enable the docker/default seccomp profile in your pod definitions. 
An -example is as below: -apiVersion: v1 -kind: Pod -metadata: - name: trustworthy-pod - annotations: - seccomp.security.alpha.kubernetes.io/pod: docker/default -spec: - containers: - - name: trustworthy-container - image: sotrustworthy:latest - - -**Audit:** - -```bash - -``` - - -### 5.7.3 Apply Security Context to Your Pods and Containers (Manual) - -**Result:** warn - -**Remediation:** -Follow the Kubernetes documentation and apply security contexts to your pods. For a -suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker -Containers. - - -**Audit:** - -```bash - -``` - - -### 5.7.4 The default namespace should not be used (Automated) - -**Result:** pass - -**Remediation:** -Ensure that namespaces are created to allow for appropriate segregation of Kubernetes -resources and that all new resources are created in a specific namespace. - - -**Audit:** - -```bash -check_for_default_ns.sh -``` - -**Expected Result**: - -```console -'true' is equal to 'true' -``` - -**Audit Script:** -```bash -#!/bin/bash - -set -eE - -handle_error() { - echo "false" -} - -trap 'handle_error' ERR - -count=$(kubectl get all -n default -o json | jq .items[] | jq -r 'select((.metadata.name!="kubernetes"))' | jq .metadata.name | wc -l) -if [[ ${count} -gt 0 ]]; then - echo "false" - exit -fi - -echo "true" - - -``` -**Returned Value**: - -```console -true - -``` diff --git a/content/rancher/v2.6/en/security/rancher-2.5/1.6-hardening-2.5/_index.md b/content/rancher/v2.6/en/security/rancher-2.5/1.6-hardening-2.5/_index.md deleted file mode 100644 index 78f2763e57c..00000000000 --- a/content/rancher/v2.6/en/security/rancher-2.5/1.6-hardening-2.5/_index.md +++ /dev/null @@ -1,570 +0,0 @@ ---- -title: Hardening Guide with CIS 1.6 Benchmark -weight: 100 ---- - -This document provides prescriptive guidance for hardening a production installation of a RKE cluster to be used with Rancher v2.5.4. 
It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).
-
-> This hardening guide describes how to secure the nodes in your cluster, and it is recommended to follow this guide before installing Kubernetes.
-
-This hardening guide is intended to be used for RKE clusters and associated with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:
-
- Rancher Version | CIS Benchmark Version | Kubernetes Version
-----------------|-----------------------|------------------
- Rancher v2.5.4 | Benchmark 1.6 | Kubernetes v1.18
-
-[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.5/Rancher_Hardening_Guide_CIS_1.6.pdf)
-
-### Overview
-
-This document provides prescriptive guidance for hardening an RKE cluster to be used for installing Rancher v2.5.4 with Kubernetes v1.18 or provisioning an RKE cluster with Kubernetes v1.18 to be used within Rancher v2.5.4. It outlines the configurations required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).
-
-For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS 1.6 Benchmark - Self-Assessment Guide - Rancher v2.5.4]({{< baseurl >}}/rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/).
-
-#### Known Issues
-
-- Rancher **exec shell** and **view logs** for pods are **not** functional in a CIS 1.6 hardened setup when only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes.
-- When setting the `default_pod_security_policy_template_id:` to `restricted`, Rancher creates **RoleBindings** and **ClusterRoleBindings** on the default service accounts. The CIS 1.6 5.1.5 check requires that the default service accounts have no roles or cluster roles bound to them apart from the defaults. 
In addition, the default service accounts should be configured such that they do not provide a service account token and do not have any explicit rights assignments.
-
-- **Migration from Rancher v2.4 to v2.5:** Add-ons were removed in the v2.5 hardening guide, and therefore namespaces may not be created on the downstream clusters during migration. Pods may fail to run because of missing namespaces such as `ingress-nginx` and `cattle-system`.
-
-### Configure Kernel Runtime Parameters
-
-The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:
-
-```ini
-vm.overcommit_memory=1
-vm.panic_on_oom=0
-kernel.panic=10
-kernel.panic_on_oops=1
-kernel.keys.root_maxbytes=25000000
-```
-
-Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
-
-### Configure `etcd` user and group
-A user account and group for the **etcd** service must be set up before installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.
-
-#### Create `etcd` user and group
-To create the **etcd** user and group, run the following console commands.
-
-The commands below use `52034` for the **uid** and **gid** for example purposes. Any valid unused **uid** or **gid** could also be used in lieu of `52034`.
-
-```bash
-groupadd --gid 52034 etcd
-useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
-```
-
-Update the RKE **config.yml** with the **uid** and **gid** of the **etcd** user:
-
-```yaml
-services:
-  etcd:
-    gid: 52034
-    uid: 52034
-```
-
-#### Set `automountServiceAccountToken` to `false` for `default` service accounts
-Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. 
Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.
-
-For each namespace on a standard RKE install, including **default** and **kube-system**, the **default** service account must include this value:
-
-```yaml
-automountServiceAccountToken: false
-```
-
-Save the following yaml to a file called `account_update.yaml`.
-
-```yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: default
-automountServiceAccountToken: false
-```
-
-Create a bash script file called `account_update.sh`. Be sure to `chmod +x account_update.sh` so the script has execute permissions.
-
-```bash
-#!/bin/bash -e
-
-for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
-  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
-done
-```
-
-Execute this script to patch the **default** service account in each namespace with `account_update.yaml`.
-
-### Ensure that all Namespaces have Network Policies defined
-
-Running different applications on the same Kubernetes cluster creates a risk of one
-compromised application attacking a neighboring application. Network segmentation is
-important to ensure that containers can communicate only with those they are supposed
-to. A network policy is a specification of how selections of pods are allowed to
-communicate with each other and other network endpoints.
-
-Network Policies are namespace scoped. When a network policy is introduced to a given
-namespace, all traffic not allowed by the policy is denied. However, if there are no network
-policies in a namespace, all traffic will be allowed into and out of the pods in that
-namespace. To enforce network policies, a CNI (container network interface) plugin must be enabled.
-This guide uses [canal](https://github.com/projectcalico/canal) to provide the policy enforcement.
-Additional information about CNI providers can be found
-[here](https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/).
-
-Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a
-**permissive** example is provided below. If you want to allow all traffic to all pods in a namespace
-(even if policies are added that cause some pods to be treated as “isolated”),
-you can create a policy that explicitly allows all traffic in that namespace. Save the following `yaml` as
-`default-allow-all.yaml`. Additional [documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
-about network policies can be found on the Kubernetes site.
-
-> This `NetworkPolicy` is not recommended for production use
-
-```yaml
----
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: default-allow-all
-spec:
-  podSelector: {}
-  ingress:
-  - {}
-  egress:
-  - {}
-  policyTypes:
-  - Ingress
-  - Egress
-```
-
-Create a bash script file called `apply_networkPolicy_to_all_ns.sh`. Be sure to
-`chmod +x apply_networkPolicy_to_all_ns.sh` so the script has execute permissions.
-
-```bash
-#!/bin/bash -e
-
-for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
-  kubectl apply -f default-allow-all.yaml -n ${namespace}
-done
-```
-
-Execute this script to apply the **permissive** `default-allow-all.yaml` `NetworkPolicy` to all namespaces.
-
-### Reference Hardened RKE `cluster.yml` configuration
-
-The reference `cluster.yml`, used by the RKE CLI, provides the configuration needed to achieve a hardened install
-of Rancher Kubernetes Engine (RKE). Install [documentation](https://rancher.com/docs/rke/latest/en/installation/) is
-provided with additional details about the configuration items. 
This reference `cluster.yml` does not include the required **nodes** directive, which will vary depending on your environment. Documentation for node configuration can be found here: https://rancher.com/docs/rke/latest/en/config-options/nodes
-
-
-```yaml
-# If you intend to deploy Kubernetes in an air-gapped environment,
-# please consult the documentation on how to configure custom RKE images.
-# https://rancher.com/docs/rke/latest/en/installation/
-
-# the nodes directive is required and will vary depending on your environment
-# documentation for node configuration can be found here:
-# https://rancher.com/docs/rke/latest/en/config-options/nodes
-nodes: []
-services:
-  etcd:
-    image: ""
-    extra_args: {}
-    extra_binds: []
-    extra_env: []
-    win_extra_args: {}
-    win_extra_binds: []
-    win_extra_env: []
-    external_urls: []
-    ca_cert: ""
-    cert: ""
-    key: ""
-    path: ""
-    uid: 52034
-    gid: 52034
-    snapshot: false
-    retention: ""
-    creation: ""
-    backup_config: null
-  kube-api:
-    image: ""
-    extra_args: {}
-    extra_binds: []
-    extra_env: []
-    win_extra_args: {}
-    win_extra_binds: []
-    win_extra_env: []
-    service_cluster_ip_range: ""
-    service_node_port_range: ""
-    pod_security_policy: true
-    always_pull_images: false
-    secrets_encryption_config:
-      enabled: true
-      custom_config: null
-    audit_log:
-      enabled: true
-      configuration: null
-    admission_configuration: null
-    event_rate_limit:
-      enabled: true
-      configuration: null
-  kube-controller:
-    image: ""
-    extra_args:
-      feature-gates: RotateKubeletServerCertificate=true
-    extra_binds: []
-    extra_env: []
-    win_extra_args: {}
-    win_extra_binds: []
-    win_extra_env: []
-    cluster_cidr: ""
-    service_cluster_ip_range: ""
-  scheduler:
-    image: ""
-    extra_args: {}
-    extra_binds: []
-    extra_env: []
-    win_extra_args: {}
-    win_extra_binds: []
-    win_extra_env: []
-  kubelet:
-    image: ""
-    extra_args:
-      feature-gates: RotateKubeletServerCertificate=true
-      protect-kernel-defaults: "true"
-      tls-cipher-suites: 
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 - extra_binds: [] - extra_env: [] - win_extra_args: {} - win_extra_binds: [] - win_extra_env: [] - cluster_domain: cluster.local - infra_container_image: "" - cluster_dns_server: "" - fail_swap_on: false - generate_serving_certificate: true - kubeproxy: - image: "" - extra_args: {} - extra_binds: [] - extra_env: [] - win_extra_args: {} - win_extra_binds: [] - win_extra_env: [] -network: - plugin: "" - options: {} - mtu: 0 - node_selector: {} - update_strategy: null -authentication: - strategy: "" - sans: [] - webhook: null -addons: | - apiVersion: policy/v1beta1 - kind: PodSecurityPolicy - metadata: - name: restricted - spec: - requiredDropCapabilities: - - NET_RAW - privileged: false - allowPrivilegeEscalation: false - defaultAllowPrivilegeEscalation: false - fsGroup: - rule: RunAsAny - runAsUser: - rule: MustRunAsNonRoot - seLinux: - rule: RunAsAny - supplementalGroups: - rule: RunAsAny - volumes: - - emptyDir - - secret - - persistentVolumeClaim - - downwardAPI - - configMap - - projected - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRole - metadata: - name: psp:restricted - rules: - - apiGroups: - - extensions - resourceNames: - - restricted - resources: - - podsecuritypolicies - verbs: - - use - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: psp:restricted - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: psp:restricted - subjects: - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:serviceaccounts - - apiGroup: rbac.authorization.k8s.io - kind: Group - name: system:authenticated - --- - apiVersion: networking.k8s.io/v1 - kind: NetworkPolicy - metadata: - name: 
default-allow-all
-  spec:
-    podSelector: {}
-    ingress:
-    - {}
-    egress:
-    - {}
-    policyTypes:
-    - Ingress
-    - Egress
-  ---
-  apiVersion: v1
-  kind: ServiceAccount
-  metadata:
-    name: default
-  automountServiceAccountToken: false
-addons_include: []
-system_images:
-  etcd: ""
-  alpine: ""
-  nginx_proxy: ""
-  cert_downloader: ""
-  kubernetes_services_sidecar: ""
-  kubedns: ""
-  dnsmasq: ""
-  kubedns_sidecar: ""
-  kubedns_autoscaler: ""
-  coredns: ""
-  coredns_autoscaler: ""
-  nodelocal: ""
-  kubernetes: ""
-  flannel: ""
-  flannel_cni: ""
-  calico_node: ""
-  calico_cni: ""
-  calico_controllers: ""
-  calico_ctl: ""
-  calico_flexvol: ""
-  canal_node: ""
-  canal_cni: ""
-  canal_controllers: ""
-  canal_flannel: ""
-  canal_flexvol: ""
-  weave_node: ""
-  weave_cni: ""
-  pod_infra_container: ""
-  ingress: ""
-  ingress_backend: ""
-  metrics_server: ""
-  windows_pod_infra_container: ""
-ssh_key_path: ""
-ssh_cert_path: ""
-ssh_agent_auth: false
-authorization:
-  mode: ""
-  options: {}
-ignore_docker_version: false
-kubernetes_version: v1.18.12-rancher1-1
-private_registries: []
-ingress:
-  provider: ""
-  options: {}
-  node_selector: {}
-  extra_args: {}
-  dns_policy: ""
-  extra_envs: []
-  extra_volumes: []
-  extra_volume_mounts: []
-  update_strategy: null
-  http_port: 0
-  https_port: 0
-  network_mode: ""
-cluster_name:
-cloud_provider:
-  name: ""
-prefix_path: ""
-win_prefix_path: ""
-addon_job_timeout: 0
-bastion_host:
-  address: ""
-  port: ""
-  user: ""
-  ssh_key: ""
-  ssh_key_path: ""
-  ssh_cert: ""
-  ssh_cert_path: ""
-monitoring:
-  provider: ""
-  options: {}
-  node_selector: {}
-  update_strategy: null
-  replicas: null
-restore:
-  restore: false
-  snapshot_name: ""
-dns: null
-upgrade_strategy:
-  max_unavailable_worker: ""
-  max_unavailable_controlplane: ""
-  drain: null
-  node_drain_input: null
-```
-
-### Reference Hardened RKE Template configuration
-
-The reference RKE Template provides the configuration needed to achieve a hardened install of Kubernetes.
-RKE Templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher
-[documentation](https://rancher.com/docs/rancher/v2.6/en/installation) for additional installation and RKE Template details.
-
-```yaml
-#
-# Cluster Config
-#
-default_pod_security_policy_template_id: restricted
-docker_root_dir: /var/lib/docker
-enable_cluster_alerting: false
-enable_cluster_monitoring: false
-enable_network_policy: true
-#
-# Rancher Config
-#
-rancher_kubernetes_engine_config:
-  addon_job_timeout: 45
-  ignore_docker_version: true
-  kubernetes_version: v1.18.12-rancher1-1
-#
-# If you are using calico on AWS
-#
-# network:
-#   plugin: calico
-#   calico_network_provider:
-#     cloud_provider: aws
-#
-# # To specify flannel interface
-#
-# network:
-#   plugin: flannel
-#   flannel_network_provider:
-#     iface: eth1
-#
-# # To specify flannel interface for canal plugin
-#
-# network:
-#   plugin: canal
-#   canal_network_provider:
-#     iface: eth1
-#
-  network:
-    mtu: 0
-    plugin: canal
-  rotate_encryption_key: false
-#
-#   services:
-#     kube-api:
-#       service_cluster_ip_range: 10.43.0.0/16
-#     kube-controller:
-#       cluster_cidr: 10.42.0.0/16
-#       service_cluster_ip_range: 10.43.0.0/16
-#     kubelet:
-#       cluster_domain: cluster.local
-#       cluster_dns_server: 10.43.0.10
-#
-  services:
-    etcd:
-      backup_config:
-        enabled: false
-        interval_hours: 12
-        retention: 6
-        safe_timestamp: false
-      creation: 12h
-      extra_args:
-        election-timeout: '5000'
-        heartbeat-interval: '500'
-      gid: 52034
-      retention: 72h
-      snapshot: false
-      uid: 52034
-    kube_api:
-      always_pull_images: false
-      audit_log:
-        enabled: true
-      event_rate_limit:
-        enabled: true
-      pod_security_policy: true
-      secrets_encryption_config:
-        enabled: true
-      service_node_port_range: 30000-32767
-    kube_controller:
-      extra_args:
-        feature-gates: RotateKubeletServerCertificate=true
-    kubelet:
-      extra_args:
-        feature-gates: RotateKubeletServerCertificate=true
-        protect-kernel-defaults: 'true'
-        tls-cipher-suites: >-
-          
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
-      fail_swap_on: false
-      generate_serving_certificate: true
-  ssh_agent_auth: false
-  upgrade_strategy:
-    max_unavailable_controlplane: '1'
-    max_unavailable_worker: 10%
-windows_prefered_cluster: false
-```
-
-### Hardened Reference Ubuntu 20.04 LTS **cloud-config**:
-
-The reference **cloud-config** is generally used in cloud infrastructure environments to allow for
-configuration management of compute instances. The reference config configures Ubuntu operating-system-level settings
-needed before installing Kubernetes.
-
-```yaml
-#cloud-config
-apt:
-  sources:
-    docker.list:
-      source: deb [arch=amd64] http://download.docker.com/linux/ubuntu $RELEASE stable
-      keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
-system_info:
-  default_user:
-    groups:
-    - docker
-write_files:
-- path: "/etc/apt/preferences.d/docker"
-  owner: root:root
-  permissions: '0600'
-  content: |
-    Package: docker-ce
-    Pin: version 5:19*
-    Pin-Priority: 800
-- path: "/etc/sysctl.d/90-kubelet.conf"
-  owner: root:root
-  permissions: '0644'
-  content: |
-    vm.overcommit_memory=1
-    vm.panic_on_oom=0
-    kernel.panic=10
-    kernel.panic_on_oops=1
-    kernel.keys.root_maxbytes=25000000
-package_update: true
-packages:
-- docker-ce
-- docker-ce-cli
-- containerd.io
-runcmd:
-- sysctl -p /etc/sysctl.d/90-kubelet.conf
-- groupadd --gid 52034 etcd
-- useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
-```
diff --git a/content/rancher/v2.6/en/security/rancher-2.5/_index.md b/content/rancher/v2.6/en/security/rancher-2.5/_index.md
deleted file mode 100644
index 7282a7d813a..00000000000
--- a/content/rancher/v2.6/en/security/rancher-2.5/_index.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-title: Self-Assessment
and Hardening Guides for Rancher v2.5
-shortTitle: Rancher v2.5 Guides
-weight: 1
----
-
-Rancher v2.5 introduced the capability to deploy Rancher on any Kubernetes cluster. For that reason, we now provide separate security hardening guides for Rancher deployments on each of Rancher's Kubernetes distributions.
-
-- [Rancher Kubernetes Distributions](#rancher-kubernetes-distributions)
-- [Hardening Guides and Benchmark Versions](#hardening-guides-and-benchmark-versions)
-  - [RKE Guides](#rke-guides)
-  - [RKE2 Guides](#rke2-guides)
-  - [K3s Guides](#k3s-guides)
-- [Rancher with SELinux](#rancher-with-selinux)
-
-# Rancher Kubernetes Distributions
-
-Rancher has the following Kubernetes distributions:
-
-- [**RKE,**]({{< baseurl >}}/rke/latest/en/) Rancher Kubernetes Engine, is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
-- [**K3s,**]({{< baseurl >}}/k3s/latest/en/) is a fully conformant, lightweight Kubernetes distribution. It is easy to install, with half the memory of upstream Kubernetes, all in a binary of less than 100 MB.
-- [**RKE2**](https://docs.rke2.io/) is a fully conformant Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector.
-
-To harden a Kubernetes cluster outside of Rancher's distributions, refer to your Kubernetes provider docs.
-
-# Hardening Guides and Benchmark Versions
-
-These guides have been tested along with the Rancher v2.5 release. Each self-assessment guide is accompanied by a hardening guide and tested on a specific Kubernetes version and CIS benchmark version. If a CIS benchmark has not been validated for your Kubernetes version, you can choose to use the existing guides until a newer version is added.
-
-### RKE Guides
-
-Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides
----|---|---|---
-Kubernetes v1.15+ | CIS v1.5 | [Link](./1.5-benchmark-2.5) | [Link](./1.5-hardening-2.5)
-Kubernetes v1.18+ | CIS v1.6 | [Link](./1.6-benchmark-2.5) | [Link](./1.6-hardening-2.5)
-
-### RKE2 Guides
-
-Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guides
----|---|---|---
-Kubernetes v1.18 | CIS v1.5 | [Link](https://docs.rke2.io/security/cis_self_assessment15/) | [Link](https://docs.rke2.io/security/hardening_guide/)
-Kubernetes v1.20 | CIS v1.6 | [Link](https://docs.rke2.io/security/cis_self_assessment16/) | [Link](https://docs.rke2.io/security/hardening_guide/)
-
-### K3s Guides
-
-Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guide
----|---|---|---
-Kubernetes v1.17, v1.18, & v1.19 | CIS v1.5 | [Link]({{< baseurl >}}/k3s/latest/en/security/self_assessment/) | [Link]({{< baseurl >}}/k3s/latest/en/security/hardening_guide/)
-
-
-# Rancher with SELinux
-
-[Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. Historically used by government agencies, SELinux is now an industry standard and is enabled by default on CentOS 7 and 8.
-
-To use Rancher with SELinux, we recommend installing the `rancher-selinux` RPM according to the instructions on [this page]({{< baseurl >}}/rancher/v2.6/en/security/selinux/#installing-the-rancher-selinux-rpm).
diff --git a/static/img/rancher/cilium-logo.png b/static/img/rancher/cilium-logo.png
new file mode 100644
index 00000000000..681a0b3c530
Binary files /dev/null and b/static/img/rancher/cilium-logo.png differ