mirror of
https://github.com/rancher/rancher-docs.git
synced 2026-05-16 18:13:17 +00:00
Merge branch 'staging' of https://github.com/rancher/docs into staging-to-master
---
title: "Upgrades"
weight: 25
---

This section describes how to upgrade your K3s cluster.

### Upgrading your K3s cluster

[Upgrade basics]({{< baseurl >}}/k3s/latest/en/upgrades/basic/) describes several techniques for upgrading your cluster manually. It can also be used as a basis for upgrading through third-party Infrastructure-as-Code tools like [Terraform](https://www.terraform.io/).

[Automated upgrades]({{< baseurl >}}/k3s/latest/en/upgrades/automated/) describes how to perform Kubernetes-native automated upgrades using Rancher's [system-upgrade-controller](https://github.com/rancher/system-upgrade-controller).

### Version-specific caveats

- **Traefik:** If Traefik is not disabled, K3s versions 1.20 and earlier will install Traefik v1, while K3s versions 1.21 and later will install Traefik v2, if v1 is not already present. To upgrade from the older Traefik v1 to Traefik v2, please refer to the [Traefik documentation](https://doc.traefik.io/traefik/migration/v1-to-v2/) and use the [migration tool](https://github.com/traefik/traefik-migration-tool).

- **K3s bootstrap data:** If you are using K3s in an HA configuration with an external SQL datastore, and your server (control-plane) nodes were not started with the `--token` CLI flag, you will no longer be able to add additional K3s servers to the cluster without specifying the token. Ensure that you retain a copy of this token, as it is required when restoring from backup. Previously, K3s did not enforce the use of a token when using external SQL datastores.

  - The affected versions are <= v1.19.12+k3s1, v1.20.8+k3s1, and v1.21.2+k3s1; the patched versions are v1.19.13+k3s1, v1.20.9+k3s1, and v1.21.3+k3s1.

  - You may retrieve the token value from any server already joined to the cluster as follows:

    ```
    cat /var/lib/rancher/k3s/server/token
    ```
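As a sketch of how the retrieved token is then used, the following joins an additional server in an HA setup with an external datastore. The MySQL endpoint is a hypothetical example, and the join command itself is only echoed, not executed:

```shell
#!/bin/sh
# On a real K3s server node the token lives at this path (reading it requires root):
TOKEN_FILE="/var/lib/rancher/k3s/server/token"

# Read the token if the file is present; otherwise use a placeholder so
# this sketch is safe to run anywhere.
if [ -r "$TOKEN_FILE" ]; then
  TOKEN=$(cat "$TOKEN_FILE")
else
  TOKEN="<paste-token-here>"
fi

# The join command you would run on the new server node (shown, not executed):
echo "k3s server --token $TOKEN --datastore-endpoint='mysql://user:pass@tcp(db.example.com:3306)/k3s'"
```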
- **Experimental Dqlite:** The experimental embedded Dqlite data store was deprecated in K3s v1.19.1. Upgrades from experimental Dqlite to experimental embedded etcd are not supported. If you attempt such an upgrade, it will not succeed, and data will be lost.
If you are using a Private CA signed certificate, add `--set privateCA=true` to the command:

```
helm install rancher rancher-<CHART_REPO>/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set ingress.tls.source=secret \
  --set privateCA=true
```
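Note that `privateCA=true` assumes the CA certificate has already been provided to Rancher as a secret named `tls-ca` in the `cattle-system` namespace, containing a file named `cacerts.pem`. A minimal sketch (the certificate content below is a placeholder, and the `kubectl` call is shown but not executed):

```shell
#!/bin/sh
# Placeholder CA certificate; in practice this is your real CA in PEM format.
CA_FILE="cacerts.pem"
printf -- '-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n' > "$CA_FILE"

# Create the secret Rancher expects when privateCA=true (shown, not executed):
#   kubectl -n cattle-system create secret generic tls-ca \
#     --from-file=cacerts.pem=./cacerts.pem
echo "would create secret tls-ca from $CA_FILE"
```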
Resource targeting uses `*`, as the ARN of many of the resources created cannot be known in advance. Among the EC2 actions required are:

```
"ec2:RunInstances",
"ec2:RevokeSecurityGroupIngress",
"ec2:RevokeSecurityGroupEgress",
"ec2:DescribeRegions",
"ec2:DescribeVpcs",
"ec2:DescribeTags",
"ec2:DescribeSubnets",
```

### Service Role Permissions

When an EKS cluster is created, Rancher will create a service role with the following trust policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
```

This role will also have two role policy attachments with the following policy ARNs:

```
arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
arn:aws:iam::aws:policy/AmazonEKSServicePolicy
```

These permissions are required for Rancher to create the service role on the user's behalf during the EKS cluster creation process.

### VPC Permissions

Permissions required for Rancher to create a VPC and associated resources:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VPCPermissions",
      "Effect": "Allow",
      "Action": [
        "ec2:ReplaceRoute",
        "ec2:ModifyVpcAttribute",
        "ec2:ModifySubnetAttribute",
        "ec2:DisassociateRouteTable",
        "ec2:DetachInternetGateway",
        "ec2:DescribeVpcs",
        "ec2:DeleteVpc",
        "ec2:DeleteTags",
        "ec2:DeleteSubnet",
        "ec2:DeleteRouteTable",
        "ec2:DeleteRoute",
        "ec2:DeleteInternetGateway",
        "ec2:CreateVpc",
        "ec2:CreateSubnet",
        "ec2:CreateSecurityGroup",
        "ec2:CreateRouteTable",
        "ec2:CreateRoute",
        "ec2:CreateInternetGateway",
        "ec2:AttachInternetGateway",
        "ec2:AssociateRouteTable"
      ],
      "Resource": "*"
    }
  ]
}
```
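If you prefer to create the service role yourself rather than have Rancher do it, the same trust policy can be used with the AWS CLI. A sketch — the role name is illustrative, the AWS calls are shown but commented out, and the JSON is only validated locally:

```shell
#!/bin/sh
# Write out the trust policy shown above.
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF

# Sanity-check the JSON before handing it to AWS.
python3 -c "import json; json.load(open('eks-trust-policy.json'))" || exit 1

# AWS calls (shown, not executed here):
#   aws iam create-role --role-name rancher-eks-service-role \
#     --assume-role-policy-document file://eks-trust-policy.json
#   aws iam attach-role-policy --role-name rancher-eks-service-role \
#     --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
#   aws iam attach-role-policy --role-name rancher-eks-service-role \
#     --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy
echo "trust policy written and validated"
```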
From the left sidebar, select _"Repositories"_.

These items represent Helm repositories, and can be either traditional Helm endpoints which have an index.yaml, or Git repositories which will be cloned and can point to a specific branch. To use custom charts, simply add your repository here and its charts will become available in the Charts tab under the name of the repository.

To add a private CA for Helm chart repositories:

- **HTTP-based chart repositories**: You must add a base64 encoded copy of the CA certificate in DER format to the `spec.caBundle` field of the chart repo, for example with `openssl x509 -outform der -in ca.pem | base64 -w0`. Click **Edit YAML** for the chart repo and set it, as in the following example:

  ```
  [...]
  spec:
    caBundle:
      MIIFXzCCA0egAwIBAgIUWNy8WrvSkgNzV0zdWRP79j9cVcEwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRQwEgYDVQQKDAtNeU9yZywgSW5jLjENMAsGA1UEAwwEcm9vdDAeFw0yMTEyMTQwODMyMTdaFw0yNDEwMDMwODMyMT
      ...
      nDxZ/tNXt/WPJr/PgEB3hQdInDWYMg7vGO0Oz00G5kWg0sJ0ZTSoA10ZwdjIdGEeKlj1NlPyAqpQ+uDnmx6DW+zqfYtLnc/g6GuLLVPamraqN+gyU8CHwAWPNjZonFN9Vpg0PIk1I2zuOc4EHifoTAXSpnjfzfyAxCaZsnTptimlPFJJqAMj+FfDArGmr4=
  [...]
  ```

- **Git-based chart repositories**: It is not currently possible to add a private CA. For Git-based chart repositories with a certificate signed by a private CA, you must disable TLS verification. Click **Edit YAML** for the chart repo, and add the key/value pair as follows:

  ```
  [...]
  spec:
    insecureSkipTLSVerify: true
  [...]
  ```
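End to end, producing the `caBundle` value looks like the following. A throwaway self-signed CA is generated here purely for demonstration; substitute your real `ca.pem` in practice:

```shell
#!/bin/sh
# Generate a throwaway self-signed CA (demo only; use your real ca.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout ca.key -out ca.pem 2>/dev/null

# DER-encode the certificate and base64 it on a single line, as expected
# by the spec.caBundle field.
CA_BUNDLE=$(openssl x509 -outform der -in ca.pem | base64 -w0)

# Show the first characters; paste the full value under spec.caBundle.
echo "$CA_BUNDLE" | cut -c1-40
```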
> **Note:** Helm chart repositories with authentication
>
> As of Rancher v2.5.12, a new value, `disableSameOriginCheck`, has been added to `Repo.Spec`. This allows users to bypass the same-origin checks, sending the repository authentication information as a Basic Auth header with all API calls. This is not recommended, but can be used as a temporary solution in cases of non-standard Helm chart repositories, such as those that redirect to a different origin URL. For example:
>
> ```
> [...]
> spec:
>   disableSameOriginCheck: true
> [...]
> ```

### Helm Compatibility
This pseudo-CRD maps to a section of the Prometheus custom resource configuration. It declaratively specifies how groups of pods should be monitored.

When a PodMonitor is created, the Prometheus Operator updates the Prometheus scrape configuration to include the PodMonitor configuration. Then Prometheus begins scraping metrics from the endpoint defined in the PodMonitor.

Any Pods in your cluster that match the labels located within the PodMonitor `selector` field will be monitored based on the `podMetricsEndpoints` specified on the PodMonitor. For more information on which fields can be specified, please see the [spec](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#podmonitorspec) provided by Prometheus Operator.
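A minimal PodMonitor might look like the following sketch; the name, labels, and port are illustrative assumptions, not taken from the text above:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app
  namespace: default
spec:
  selector:
    matchLabels:
      app: example-app          # Pods carrying this label are selected
  podMetricsEndpoints:
    - port: metrics             # named container port serving /metrics
      interval: 30s
```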
Kubernetes uses CNI as an interface between network providers and Kubernetes pod networking.

For more information, visit the [CNI GitHub project](https://github.com/containernetworking/cni).

## What Network Models are Used in CNI?

CNI network providers implement their network fabric using either an encapsulated network model such as Virtual Extensible LAN ([VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan)) or an unencapsulated network model such as Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)).

### What is an Encapsulated Network?

This network model provides a logical Layer 2 (L2) network encapsulated over the existing Layer 3 (L3) network topology that spans the Kubernetes cluster nodes. With this model you have an isolated L2 network for containers without needing routing distribution, all at the cost of minimal processing overhead and increased IP packet size, which comes from an IP header generated by overlay encapsulation. Encapsulation information is distributed by UDP ports between Kubernetes workers, interchanging network control plane information about how MAC addresses can be reached. Common encapsulations used in this kind of network model are VXLAN, Internet Protocol Security (IPSec), and IP-in-IP.

This network model is used when an extended L2 bridge is preferred. It is sensitive to L3 network latencies of the Kubernetes workers. If datacenters are in distinct geolocations, be sure to have low latencies between them to avoid eventual network segmentation.

CNI network providers using this network model include Flannel, Canal, Weave, and Cilium. Calico does not use this model by default, but it can be configured to do so.



### What is an Unencapsulated Network?

This network model provides an L3 network to route packets between containers. This model doesn't generate an isolated L2 network, nor does it generate overhead. These benefits come at the cost of Kubernetes workers having to manage any route distribution that's needed. Instead of using IP headers for encapsulation, this network model uses a network protocol between Kubernetes workers to distribute routing information to reach pods, such as [BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol).

This network model is used when a routed L3 network is preferred. This mode dynamically updates routes at the OS level for Kubernetes workers. It's less sensitive to latency.

CNI network providers using this network model include Calico and Cilium. Cilium may be configured with this model, although it is not its default mode.



## What CNI Providers are Provided by Rancher?

### RKE Kubernetes clusters

Out-of-the-box, Rancher provides the following CNI network providers for RKE Kubernetes clusters: Canal, Flannel, and Weave.

You can choose your CNI network provider when you create new Kubernetes clusters from Rancher.

#### Canal

For more information, see the [Canal GitHub Page](https://github.com/projectcalico/canal).

#### Flannel



Flannel is a simple and easy way to configure L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms, with the default encapsulation being [VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan).

Encapsulated traffic is unencrypted by default. Flannel provides two solutions for encryption:

* [IPSec](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#ipsec), which makes use of [strongSwan](https://www.strongswan.org/) to establish encrypted IPSec tunnels between Kubernetes workers. It is an experimental backend.
* [WireGuard](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#wireguard), which is a faster-performing alternative to strongSwan.

Kubernetes workers should open UDP port `8472` (VXLAN) and TCP port `9099` (healthcheck). See [the port requirements for user clusters]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/node-requirements/#networking-requirements) for more details.



For more information, see the [Flannel GitHub Page](https://github.com/flannel-io/flannel).

#### Weave

For more information, see the following pages:

- [Weave Net Official Site](https://www.weave.works/)

### RKE2 Kubernetes clusters

Out-of-the-box, Rancher provides the following CNI network providers for RKE2 Kubernetes clusters: [Canal](#canal) (see the section above), Calico, and Cilium.

You can choose your CNI network provider when you create new Kubernetes clusters from Rancher.

#### Calico



Calico enables networking and network policy in Kubernetes clusters across the cloud. By default, Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads are able to communicate over both cloud infrastructure and on-prem using BGP.

Calico also provides a stateless IP-in-IP or VXLAN encapsulation mode that can be used, if necessary. Calico also offers policy isolation, allowing you to secure and govern your Kubernetes workloads using advanced ingress and egress policies.

Kubernetes workers should open TCP port `179` (BGP). See [the port requirements for user clusters]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/node-requirements/#networking-requirements) for more details.



For more information, see the following pages:

- [Project Calico Official Site](https://www.projectcalico.org/)
- [Project Calico GitHub Page](https://github.com/projectcalico/calico)

#### Cilium



Cilium enables networking and network policies (L3, L4, and L7) in Kubernetes. By default, Cilium uses eBPF technologies to route packets inside the node and VXLAN to send packets to other nodes. Unencapsulated techniques can also be configured.

Cilium recommends kernel versions greater than 5.2 to leverage the full potential of eBPF. Kubernetes workers should open UDP port `8472` for VXLAN and TCP port `4240` for health checks. In addition, ICMP 8/0 must be enabled for health checks. For more information, check the [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements).

## CNI Features by Provider

The following table summarizes the different features available for each CNI network provider provided by Rancher.

| Provider | Network Model | Route Distribution | Network Policies | Mesh | External Datastore | Encryption | Ingress/Egress Policies |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Canal | Encapsulated (VXLAN) | No | Yes | No | K8s API | Yes | Yes |
| Flannel | Encapsulated (VXLAN) | No | No | No | K8s API | Yes | No |
| Calico | Encapsulated (VXLAN, IPIP) or Unencapsulated | Yes | Yes | Yes | Etcd and K8s API | Yes | Yes |
| Weave | Encapsulated | Yes | Yes | Yes | No | Yes | Yes |
| Cilium | Encapsulated (VXLAN) | Yes | Yes | Yes | Etcd and K8s API | Yes | Yes |

- Network Model: Encapsulated or unencapsulated. For more information, see [What Network Models are Used in CNI?](#what-network-models-are-used-in-cni)

- Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications.

## CNI Community Popularity

The following table summarizes different GitHub metrics to give you an idea of each project's popularity and activity. This data was collected in January 2022.

| Provider | Project | Stars | Forks | Contributors |
| ---- | ---- | ---- | ---- | ---- |
| Canal | https://github.com/projectcalico/canal | 679 | 100 | 21 |
| Flannel | https://github.com/flannel-io/flannel | 7k | 2.5k | 185 |
| Calico | https://github.com/projectcalico/calico | 3.1k | 741 | 224 |
| Weave | https://github.com/weaveworks/weave/ | 6.2k | 635 | 84 |
| Cilium | https://github.com/cilium/cilium | 10.6k | 1.3k | 352 |

<br/>

## Which CNI Provider Should I Use?

It depends on your project needs. There are many different providers, each with various features and options. There isn't one provider that meets everyone's needs.

Canal is the default CNI network provider. We recommend it for most use cases. It provides encapsulated networking for containers with Flannel, while adding Calico network policies that can provide project/namespace isolation in terms of networking.

## How Can I Configure a CNI Network Provider?

Please see [Cluster Options]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/) for how to configure a network provider for your cluster. For more advanced configuration options, please see how to configure your cluster using a [Config File]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/#cluster-config-file) and the options for [Network Plug-ins]({{<baseurl>}}/rke/latest/en/config-options/add-ons/network-plugins/).
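For reference, in an RKE cluster config file the provider is selected under the `network` key. A minimal sketch; the option shown is an illustrative example:

```yaml
# Fragment of an RKE cluster.yml; plugin can be canal, flannel, calico, or weave.
network:
  plugin: canal
  options:
    canal_flannel_backend_type: vxlan   # provider-specific option (example)
```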
@@ -5,6 +5,42 @@ weight: 11
|
||||
|
||||
In this section, you'll learn how to manage Helm chart repositories and applications in Rancher. Helm chart repositories are managed using **Apps & Marketplace**. It uses a catalog-like system to import bundles of charts from repositories and then uses those charts to either deploy custom Helm applications or Rancher's tools such as Monitoring or Istio. Rancher tools come as pre-loaded repositories which deploy as standalone Helm charts. Any additional repositories are only added to the current cluster.
|
||||
|
||||
### Changes in Rancher v2.6
|
||||
|
||||
Starting in Rancher v2.6.0, a new versioning scheme for Rancher feature charts was implemented. The changes are centered around the major version of the charts and the +up annotation for upstream charts, where applicable.
|
||||
|
||||
**Major Version:** The major version of the charts is tied to Rancher minor versions. When you upgrade to a new Rancher minor version, you should ensure that all of your **Apps & Marketplace** charts are also upgraded to the correct release line for the chart.
|
||||
|
||||
>**Note:** Any major versions that are less than the ones mentioned in the table below are meant for 2.5 and below only. For example, you are advised to not use <100.x.x versions of Monitoring in 2.6.x+.
|
||||
|
||||
**Feature Charts:**
|
||||
|
||||
| **Name** | **Supported Minimum Version** | **Supported Maximum Version** |
|
||||
| ---------------- | ------------ | ------------ |
|
||||
| external-ip-webhook | 100.0.0+up1.0.0 | 100.0.1+up1.0.1 |
|
||||
| harvester-cloud-provider | 100.0.0+up0.1.8 | 100.0.0+up0.1.8 |
|
||||
| harvester-csi-driver | 100.0.0+up0.1.9 | 100.0.0+up0.1.9 |
|
||||
| rancher-alerting-drivers | 100.0.0 | 100.0.1 |
|
||||
| rancher-backups | 2.0.0 | 2.1.0 |
|
||||
| rancher-cis-benchmark | 2.0.0 | 2.0.2 |
|
||||
| rancher-gatekeeper | 100.0.0+up3.5.1 | 100.0.1+up3.6.0 |
|
||||
| rancher-istio | 100.0.0+up1.10.4 | 100.1.0+up1.11.4 |
|
||||
| rancher-logging | 100.0.0+up3.12.0 | 100.0.1+up3.15.0 |
|
||||
| rancher-longhorn | 100.0.0+up1.1.2 | 100.1.1+up1.2.3 |
|
||||
| rancher-monitoring | 100.0.0+up16.6.0 | 100.1.0+up19.0.3
|
||||
| rancher-sriov (experimental) | 100.0.0+up0.1.0 | 100.0.1+up0.1.0 |
|
||||
| rancher-vsphere-cpi | 100.0.0 | 100.1.0+up1.0.100
|
||||
| rancher-vsphere-csi | 100.0.0 | 100.1.0+up2.3.0 |
|
||||
| rancher-wins-upgrader | 100.0.0+up0.0.1 | 100.0.0+up0.0.1 |
|
||||
|
||||
</br>
|
||||
**Charts based on upstream:** For charts that are based on an upstream chart, the `+up` annotation indicates which upstream version the Rancher chart is tracking. Also check the upstream version's compatibility with Rancher during upgrades.

- As an example, `100.x.x+up16.6.0` for Monitoring tracks upstream kube-prometheus-stack `16.6.0` with some Rancher patches added to it.

- On upgrades, ensure that you are not downgrading the version of the chart that you are using. For example, if you are using a version of Monitoring greater than `16.6.0` in Rancher 2.5, you should not upgrade to `100.x.x+up16.6.0`. Instead, upgrade to the appropriate version in the next release.
### Charts

From the top-left menu, select _"Apps & Marketplace"_ and you will be taken to the Charts page.
@@ -25,6 +61,27 @@ From the left sidebar select _"Repositories"_.

These items represent Helm repositories, and can be either traditional Helm endpoints which have an `index.yaml`, or Git repositories, which will be cloned and can point to a specific branch. To use custom charts, simply add your repository here and the charts will become available in the Charts tab under the name of the repository.

To add a private CA for Helm chart repositories:

- **HTTP-based chart repositories**: You must add a base64 encoded copy of the CA certificate in DER format to the `spec.caBundle` field of the chart repo, for example with `openssl x509 -outform der -in ca.pem | base64 -w0`. Click **Edit YAML** for the chart repo and set the field, as in the following example:

```
[...]
spec:
  caBundle:
    MIIFXzCCA0egAwIBAgIUWNy8WrvSkgNzV0zdWRP79j9cVcEwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNBMRQwEgYDVQQKDAtNeU9yZywgSW5jLjENMAsGA1UEAwwEcm9vdDAeFw0yMTEyMTQwODMyMTdaFw0yNDEwMDMwODMyMT
    ...
    nDxZ/tNXt/WPJr/PgEB3hQdInDWYMg7vGO0Oz00G5kWg0sJ0ZTSoA10ZwdjIdGEeKlj1NlPyAqpQ+uDnmx6DW+zqfYtLnc/g6GuLLVPamraqN+gyU8CHwAWPNjZonFN9Vpg0PIk1I2zuOc4EHifoTAXSpnjfzfyAxCaZsnTptimlPFJJqAMj+FfDArGmr4=
[...]
```
- **Git-based chart repositories**: It is not currently possible to add a private CA. For Git-based chart repositories with a certificate signed by a private CA, you must disable TLS verification. Click **Edit YAML** for the chart repo, and add the key/value pair as follows:

```
[...]
spec:
  insecureSkipTLSVerify: true
[...]
```
> **Note:** Helm chart repositories with authentication
>
> As of Rancher v2.6.3, a new value `disableSameOriginCheck` has been added to the Repo.Spec. This allows users to bypass same-origin checks, sending the repository authentication information as a basic auth header with all API calls. This is not recommended, but can be used as a temporary solution in cases of non-standard Helm chart repositories, such as those that redirect to a different origin URL.
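For illustration, enabling this value follows the same **Edit YAML** pattern as the examples above. This is a sketch only, with the surrounding repo spec fields elided:

```
[...]
spec:
  disableSameOriginCheck: true
[...]
```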
@@ -234,7 +234,7 @@ helm install rancher rancher-<CHART_REPO>/rancher \

If you are using a Private CA signed certificate, add `--set privateCA=true` to the command:

```
helm install rancher rancher-latest/rancher \
helm install rancher rancher-<CHART_REPO>/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set bootstrapPassword=admin \
```
@@ -16,6 +16,6 @@ When a new version of an application image is released on Docker Hub, you can up

These options control how the upgrade rolls out to containers that are currently running. For example, for scalable deployments, you can choose whether you want to stop old pods before deploying new ones, or vice versa, as well as the upgrade batch size.

1. Click **Upgrade**.
1. Click **Save**.

**Result:** The workload begins upgrading its containers, per your specifications. Note that scaling up the deployment or updating the upgrade/scaling policy won't cause the pods to be recreated.
@@ -26,6 +26,6 @@ For more information about how ServiceMonitors work, refer to the [Prometheus Op

This pseudo-CRD maps to a section of the Prometheus custom resource configuration. It declaratively specifies how a group of pods should be monitored.

When a PodMonitor is created, the Prometheus Operator updates the Prometheus scrape configuration to include the PodMonitor configuration. Then Prometheus begins scraping metrics from the endpoint defined in the ServiceMonitor.
When a PodMonitor is created, the Prometheus Operator updates the Prometheus scrape configuration to include the PodMonitor configuration. Then Prometheus begins scraping metrics from the endpoint defined in the PodMonitor.

Any Pods in your cluster that match the labels located within the PodMonitor `selector` field will be monitored based on the `podMetricsEndpoints` specified on the PodMonitor. For more information on what fields can be specified, please look at the [spec](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#podmonitorspec) provided by Prometheus Operator.
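As a sketch of what such a resource can look like (the name, labels, and port below are hypothetical, not from the Rancher docs):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app        # hypothetical name
  namespace: default
spec:
  selector:
    matchLabels:
      app: example-app     # pods carrying this label are selected
  podMetricsEndpoints:
  - port: metrics          # named container port that exposes Prometheus metrics
```

Pods labeled `app: example-app` would then be scraped on their `metrics` port once the Prometheus Operator picks up this resource.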
@@ -11,7 +11,7 @@ weight: 20

</td>
<td width="30%" style="border: none;">
<h4>Reporting process</h4>
<p style="padding: 8px">Please submit possible security issues by emailing <a href="mailto:security@rancher.com">security@rancher.com</a></p>
<p style="padding: 8px">Please submit possible security issues by emailing <a href="mailto:security@rancher.com">security@rancher.com</a>.</p>
</td>
<td width="30%" style="border: none;">
<h4>Announcements</h4>
@@ -20,25 +20,25 @@ weight: 20

</tr>
</table>

Security is at the heart of all Rancher features. From integrating with all the popular authentication tools and services, to an enterprise grade [RBAC capability,]({{<baseurl>}}/rancher/v2.6/en/admin-settings/rbac) Rancher makes your Kubernetes clusters even more secure.
Security is at the heart of all Rancher features. From integrating with all the popular authentication tools and services, to an enterprise grade [RBAC capability]({{<baseurl>}}/rancher/v2.6/en/admin-settings/rbac), Rancher makes your Kubernetes clusters even more secure.

On this page, we provide security-related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters:
On this page, we provide security related documentation along with resources to help you secure your Rancher installation and your downstream Kubernetes clusters:

- [Running a CIS security scan on a Kubernetes cluster](#running-a-cis-security-scan-on-a-kubernetes-cluster)
- [SELinux RPM](#selinux-rpm)
- [Guide to hardening Rancher installations](#rancher-hardening-guide)
- [The CIS Benchmark and self-assessment](#the-cis-benchmark-and-self-assessment)
- [Third-party penetration test reports](#third-party-penetration-test-reports)
- [Rancher CVEs and resolutions](#rancher-cves-and-resolutions)
- [Rancher Security Advisories and CVEs](#rancher-security-advisories-and-cves)
- [Kubernetes Security Best Practices](#kubernetes-security-best-practices)
### Running a CIS Security Scan on a Kubernetes Cluster

Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS (Center for Internet Security) Kubernetes Benchmark.
Rancher leverages [kube-bench](https://github.com/aquasecurity/kube-bench) to run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the [CIS](https://www.cisecurity.org/cis-benchmarks/) (Center for Internet Security) Kubernetes Benchmark.

The CIS Kubernetes Benchmark is a reference document that can be used to establish a secure configuration baseline for Kubernetes.

The Center for Internet Security (CIS) is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace."
The Center for Internet Security (CIS) is a 501(c\)(3) non-profit organization, formed in October 2000, with a mission to "identify, develop, validate, promote, and sustain best practice solutions for cyber defense and build and lead communities to enable an environment of trust in cyberspace".

CIS Benchmarks are best practices for the secure configuration of a target system. CIS Benchmarks are developed through the generous volunteer efforts of subject matter experts, technology vendors, public and private community members, and the CIS Benchmark Development team.
@@ -46,13 +46,13 @@ The Benchmark provides recommendations of two types: Automated and Manual. We ru

When Rancher runs a CIS security scan on a cluster, it generates a report showing the results of each test, including a summary with the number of passed, skipped and failed tests. The report also includes remediation steps for any failed tests.

For details, refer to the section on [security scans.]({{<baseurl>}}/rancher/v2.6/en/cis-scans)
For details, refer to the section on [security scans]({{<baseurl>}}/rancher/v2.6/en/cis-scans).

### SELinux RPM

[Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. After being historically used by government agencies, SELinux is now industry standard and is enabled by default on CentOS 7 and 8.

We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page.]({{<baseurl>}}/rancher/v2.6/en/security/selinux)
We provide two RPMs (Red Hat packages) that enable Rancher products to function properly on SELinux-enforcing hosts: `rancher-selinux` and `rke2-selinux`. For details, see [this page]({{<baseurl>}}/rancher/v2.6/en/security/selinux).

### Rancher Hardening Guide
@@ -78,13 +78,13 @@ Rancher periodically hires third parties to perform security audits and penetrat

Results:

- [Cure53 Pen Test - 7/2019](https://releases.rancher.com/documents/security/pen-tests/2019/RAN-01-cure53-report.final.pdf)
- [Untamed Theory Pen Test - 3/2019](https://releases.rancher.com/documents/security/pen-tests/2019/UntamedTheory-Rancher_SecurityAssessment-20190712_v5.pdf)
- [Cure53 Pen Test - July 2019](https://releases.rancher.com/documents/security/pen-tests/2019/RAN-01-cure53-report.final.pdf)
- [Untamed Theory Pen Test - March 2019](https://releases.rancher.com/documents/security/pen-tests/2019/UntamedTheory-Rancher_SecurityAssessment-20190712_v5.pdf)
### Rancher CVEs and Resolutions
### Rancher Security Advisories and CVEs

Rancher is committed to informing the community of security issues in our products. For the list of CVEs (Common Vulnerabilities and Exposures) for issues we have resolved, refer to [this page.](./cve)

### Kubernetes Security Best Practices

For recommendations on securing your Kubernetes cluster, refer to the [Best Practices](./best-practices) guide.
For recommendations on securing your Kubernetes cluster, refer to the [Kubernetes Security Best Practices](./best-practices) guide.
@@ -3,6 +3,10 @@ title: Kubernetes Security Best Practices

weight: 5
---

# Restricting cloud metadata API access
### Restricting cloud metadata API access

Cloud providers such as AWS, Azure, or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets.
Cloud providers such as AWS, Azure, DigitalOcean or GCP often expose metadata services locally to instances. By default, this endpoint is accessible by pods running on a cloud instance, including pods in hosted Kubernetes providers such as EKS, AKS, DigitalOcean Kubernetes or GKE, and can contain cloud credentials for that node, provisioning data such as kubelet credentials, or other sensitive data. To mitigate this risk when running on a cloud platform, follow the [Kubernetes security recommendations](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/#restricting-cloud-metadata-api-access): limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets.

It is advised to consult your cloud provider's security best practices for further recommendations and specific details on how to restrict access to the cloud instance metadata API.

Further references: MITRE ATT&CK knowledge base on [Unsecured Credentials: Cloud Instance Metadata API](https://attack.mitre.org/techniques/T1552/005/).
@@ -1,9 +1,9 @@

---
title: Rancher CVEs and Resolutions
title: Security Advisories and CVEs
weight: 300
---

Rancher is committed to informing the community of security issues in our products. Rancher will publish CVEs (Common Vulnerabilities and Exposures) for issues we have resolved.
Rancher is committed to informing the community of security issues in our products. Rancher will publish security advisories and CVEs (Common Vulnerabilities and Exposures) for issues we have resolved. New security advisories are also published in Rancher's GitHub [security page](https://github.com/rancher/rancher/security/advisories).

| ID | Description | Date | Resolution |
|----|-------------|------|------------|

@@ -18,4 +18,4 @@ Rancher is committed to informing the community of security issues in our produc

| [CVE-2019-12274](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12274) | Nodes using the built-in node drivers using a file path option allows the machine to read arbitrary files including sensitive ones from inside the Rancher server container. | 5 Jun 2019 | [Rancher v2.2.4](https://github.com/rancher/rancher/releases/tag/v2.2.4), [Rancher v2.1.10](https://github.com/rancher/rancher/releases/tag/v2.1.10) and [Rancher v2.0.15](https://github.com/rancher/rancher/releases/tag/v2.0.15) |
| [CVE-2019-11202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11202) | The default admin, that is shipped with Rancher, will be re-created upon restart of Rancher despite being explicitly deleted. | 16 Apr 2019 | [Rancher v2.2.2](https://github.com/rancher/rancher/releases/tag/v2.2.2), [Rancher v2.1.9](https://github.com/rancher/rancher/releases/tag/v2.1.9) and [Rancher v2.0.14](https://github.com/rancher/rancher/releases/tag/v2.0.14) |
| [CVE-2019-6287](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6287) | Project members continue to get access to namespaces from projects that they were removed from if they were added to more than one project. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) |
| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater have specific [instructions]({{<baseurl>}}/rancher/v2.6/en/installation/install-rancher-on-k8s/rollbacks). |
| [CVE-2018-20321](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20321) | Any project member with access to the `default` namespace can mount the `netes-default` service account in a pod and then use that pod to execute administrative privileged commands against the Kubernetes cluster. | 29 Jan 2019 | [Rancher v2.1.6](https://github.com/rancher/rancher/releases/tag/v2.1.6) and [Rancher v2.0.11](https://github.com/rancher/rancher/releases/tag/v2.0.11) - Rolling back from these versions or greater have specific [instructions]({{<baseurl>}}/rancher/v2.6/en/installation/install-rancher-on-k8s/rollbacks). |
@@ -0,0 +1,13 @@

---
title: Self-Assessment and Hardening Guides for Rancher v2.6
shortTitle: Rancher v2.6 Guides
weight: 1
aliases:
  - /rancher/v2.6/en/security/rancher-2.5/
  - /rancher/v2.6/en/security/rancher-2.5/1.5-hardening-2.5/
  - /rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/
  - /rancher/v2.6/en/security/rancher-2.5/1.6-hardening-2.5/
  - /rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/
---

Rancher v2.6 hardening guides are currently being updated. For the time being, please consult the [Rancher v2.5 self-assessment and hardening guides]({{<baseurl>}}/rancher/v2.5/en/security/rancher-2.5) for more information.
@@ -1,720 +0,0 @@

---
title: Hardening Guide with CIS 1.5 Benchmark
weight: 200
---

This document provides prescriptive guidance for hardening a production installation of an RKE cluster to be used with Rancher v2.5. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).

> This hardening guide describes how to secure the nodes in your cluster, and it is recommended to follow this guide before installing Kubernetes.

This hardening guide is intended to be used for RKE clusters and is associated with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:

Rancher Version | CIS Benchmark Version | Kubernetes Version
----------------|-----------------------|------------------
Rancher v2.5 | Benchmark v1.5 | Kubernetes 1.15

[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.5/Rancher_Hardening_Guide_CIS_1.5.pdf)

### Overview

This document provides prescriptive guidance for hardening an RKE cluster to be used for installing Rancher v2.5 with Kubernetes v1.15, or for provisioning an RKE cluster with Kubernetes 1.15 to be used within Rancher v2.5. It outlines the configurations required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).

For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS 1.5 Benchmark - Self-Assessment Guide - Rancher v2.5]({{< baseurl >}}/rancher/v2.6/en/security/rancher-2.5/1.5-benchmark-2.5/).

#### Known Issues

- Rancher **exec shell** and **view logs** for pods are **not** functional in a CIS 1.5 hardened setup when only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes.
- When setting `default_pod_security_policy_template_id:` to `restricted`, Rancher creates **RoleBindings** and **ClusterRoleBindings** on the default service accounts. The CIS 1.5 check 5.1.5 requires that the default service accounts have no roles or cluster roles bound to them apart from the defaults. In addition, the default service accounts should be configured so that they do not provide a service account token and have no explicit rights assignments.
### Configure Kernel Runtime Parameters

The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:

```
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
kernel.keys.root_maxbytes=25000000
```

Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
### Configure `etcd` user and group

A user account and group for the **etcd** service must be set up before installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation.

#### Create the `etcd` user and group

To create the **etcd** group, run the following console commands.

The commands below use `52034` for the **uid** and **gid** for example purposes. Any valid unused **uid** or **gid** could be used in lieu of `52034`.

```
groupadd --gid 52034 etcd
useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
```

Update the RKE **config.yml** with the **uid** and **gid** of the **etcd** user:

``` yaml
services:
  etcd:
    gid: 52034
    uid: 52034
```
#### Set `automountServiceAccountToken` to `false` for `default` service accounts

Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.

For each namespace, including **default** and **kube-system** on a standard RKE install, the **default** service account must include this value:

```
automountServiceAccountToken: false
```

Save the following yaml to a file called `account_update.yaml`:

``` yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
```

Create a bash script file called `account_update.sh`. Be sure to `chmod +x account_update.sh` so the script has execute permissions.

```
#!/bin/bash -e

for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
done
```
### Ensure that all Namespaces have Network Policies defined

Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how selections of pods are allowed to communicate with each other and other network endpoints.

Network Policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace, all traffic will be allowed into and out of the pods in that namespace. To enforce network policies, a CNI (container network interface) plugin must be enabled. This guide uses [canal](https://github.com/projectcalico/canal) to provide the policy enforcement. Additional information about CNI providers can be found [here](https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/).

Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a **permissive** example is provided below. If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all traffic in that namespace. Save the following `yaml` as `default-allow-all.yaml`. Additional [documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) about network policies can be found on the Kubernetes site.

> This `NetworkPolicy` is not recommended for production use.

``` yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-allow-all
spec:
  podSelector: {}
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress
```

Create a bash script file called `apply_networkPolicy_to_all_ns.sh`. Be sure to `chmod +x apply_networkPolicy_to_all_ns.sh` so the script has execute permissions.

```
#!/bin/bash -e

for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
  kubectl apply -f default-allow-all.yaml -n ${namespace}
done
```

Execute this script to apply the `default-allow-all.yaml` **permissive** `NetworkPolicy` to all namespaces.
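By contrast, a common hardened starting point (a sketch, not part of the original guide) is a per-namespace default-deny policy, which blocks all ingress and egress for pods unless another policy explicitly allows the traffic:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}      # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```

Applied per namespace (for example with a loop like the script above), this denies traffic by default; individual workloads then need explicit allow policies.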
### Reference Hardened RKE `cluster.yml` configuration

The reference `cluster.yml` used by the RKE CLI provides the configuration needed to achieve a hardened install of Rancher Kubernetes Engine (RKE). The install [documentation](https://rancher.com/docs/rke/latest/en/installation/) provides additional details about the configuration items. This reference `cluster.yml` does not include the required **nodes** directive, which will vary depending on your environment. Documentation for node configuration can be found here: https://rancher.com/docs/rke/latest/en/config-options/nodes

``` yaml
# If you intend to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
kubernetes_version: "v1.15.9-rancher1-1"
enable_network_policy: true
default_pod_security_policy_template_id: "restricted"
# the nodes directive is required and will vary depending on your environment
# documentation for node configuration can be found here:
# https://rancher.com/docs/rke/latest/en/config-options/nodes
nodes:
services:
  etcd:
    uid: 52034
    gid: 52034
  kube-api:
    pod_security_policy: true
    secrets_encryption_config:
      enabled: true
    audit_log:
      enabled: true
    admission_configuration:
    event_rate_limit:
      enabled: true
  kube-controller:
    extra_args:
      feature-gates: "RotateKubeletServerCertificate=true"
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    generate_serving_certificate: true
    extra_args:
      feature-gates: "RotateKubeletServerCertificate=true"
      protect-kernel-defaults: "true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    extra_binds: []
    extra_env: []
    cluster_domain: ""
    infra_container_image: ""
    cluster_dns_server: ""
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: ""
  options: {}
  mtu: 0
  node_selector: {}
authentication:
  strategy: ""
  sans: []
  webhook: null
addons: |
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: ingress-nginx
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: ingress-nginx
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - default-psp
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: ingress-nginx
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: cattle-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: cattle-system
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - default-psp
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: cattle-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: policy/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restricted
  spec:
    requiredDropCapabilities:
    - NET_RAW
    privileged: false
    allowPrivilegeEscalation: false
    defaultAllowPrivilegeEscalation: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: MustRunAsNonRoot
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - emptyDir
    - secret
    - persistentVolumeClaim
    - downwardAPI
    - configMap
    - projected
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: psp:restricted
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - restricted
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: psp:restricted
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: psp:restricted
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: tiller
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: tiller
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

addons_include: []
system_images:
  etcd: ""
  alpine: ""
  nginx_proxy: ""
  cert_downloader: ""
  kubernetes_services_sidecar: ""
  kubedns: ""
  dnsmasq: ""
  kubedns_sidecar: ""
  kubedns_autoscaler: ""
  coredns: ""
  coredns_autoscaler: ""
  kubernetes: ""
  flannel: ""
  flannel_cni: ""
  calico_node: ""
  calico_cni: ""
  calico_controllers: ""
  calico_ctl: ""
  calico_flexvol: ""
  canal_node: ""
  canal_cni: ""
  canal_flannel: ""
  canal_flexvol: ""
  weave_node: ""
  weave_cni: ""
  pod_infra_container: ""
  ingress: ""
  ingress_backend: ""
  metrics_server: ""
  windows_pod_infra_container: ""
ssh_key_path: ""
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: ""
  options: {}
ignore_docker_version: false
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
cluster_name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
```
|
||||
node_selector: {}
|
||||
restore:
|
||||
restore: false
|
||||
snapshot_name: ""
|
||||
dns: null
|
||||
```
|
||||
|
||||
### Reference Hardened RKE Template configuration

The reference RKE Template provides the configuration needed to achieve a hardened install of Kubernetes. RKE Templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher [documentation](https://rancher.com/docs/rancher/v2.6/en/installation) for additional installation and RKE Template details.

```yaml
#
# Cluster Config
#
default_pod_security_policy_template_id: restricted
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: true
#
# Rancher Config
#
rancher_kubernetes_engine_config:
  addon_job_timeout: 30
  addons: |-
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: ingress-nginx
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: default-psp-role
      namespace: ingress-nginx
    rules:
    - apiGroups:
      - extensions
      resourceNames:
      - default-psp
      resources:
      - podsecuritypolicies
      verbs:
      - use
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: default-psp-rolebinding
      namespace: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: default-psp-role
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: cattle-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: default-psp-role
      namespace: cattle-system
    rules:
    - apiGroups:
      - extensions
      resourceNames:
      - default-psp
      resources:
      - podsecuritypolicies
      verbs:
      - use
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: default-psp-rolebinding
      namespace: cattle-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: default-psp-role
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restricted
    spec:
      requiredDropCapabilities:
      - NET_RAW
      privileged: false
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      fsGroup:
        rule: RunAsAny
      runAsUser:
        rule: MustRunAsNonRoot
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      volumes:
      - emptyDir
      - secret
      - persistentVolumeClaim
      - downwardAPI
      - configMap
      - projected
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: psp:restricted
    rules:
    - apiGroups:
      - extensions
      resourceNames:
      - restricted
      resources:
      - podsecuritypolicies
      verbs:
      - use
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: psp:restricted
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: psp:restricted
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: tiller
      namespace: kube-system
  ignore_docker_version: true
  kubernetes_version: v1.15.9-rancher1-1
  #
  # If you are using calico on AWS
  #
  # network:
  #   plugin: calico
  #   calico_network_provider:
  #     cloud_provider: aws
  #
  # # To specify flannel interface
  #
  # network:
  #   plugin: flannel
  #   flannel_network_provider:
  #     iface: eth1
  #
  # # To specify flannel interface for canal plugin
  #
  # network:
  #   plugin: canal
  #   canal_network_provider:
  #     iface: eth1
  #
  network:
    mtu: 0
    plugin: canal
  #
  # services:
  #   kube-api:
  #     service_cluster_ip_range: 10.43.0.0/16
  #   kube-controller:
  #     cluster_cidr: 10.42.0.0/16
  #     service_cluster_ip_range: 10.43.0.0/16
  #   kubelet:
  #     cluster_domain: cluster.local
  #     cluster_dns_server: 10.43.0.10
  #
  services:
    etcd:
      backup_config:
        enabled: false
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: '5000'
        heartbeat-interval: '500'
      gid: 52034
      retention: 72h
      snapshot: false
      uid: 52034
    kube_api:
      always_pull_images: false
      audit_log:
        enabled: true
      event_rate_limit:
        enabled: true
      pod_security_policy: true
      secrets_encryption_config:
        enabled: true
      service_node_port_range: 30000-32767
    kube_controller:
      extra_args:
        address: 127.0.0.1
        feature-gates: RotateKubeletServerCertificate=true
        profiling: 'false'
        terminated-pod-gc-threshold: '1000'
    kubelet:
      extra_args:
        anonymous-auth: 'false'
        event-qps: '0'
        feature-gates: RotateKubeletServerCertificate=true
        make-iptables-util-chains: 'true'
        protect-kernel-defaults: 'true'
        streaming-connection-idle-timeout: 1800s
        tls-cipher-suites: >-
          TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
      fail_swap_on: false
      generate_serving_certificate: true
    scheduler:
      extra_args:
        address: 127.0.0.1
        profiling: 'false'
  ssh_agent_auth: false
  windows_prefered_cluster: false
```

### Hardened Reference Ubuntu 18.04 LTS **cloud-config**

The reference **cloud-config** is generally used in cloud infrastructure environments to allow for configuration management of compute instances. The reference config sets the Ubuntu operating system level settings needed before installing Kubernetes.

```yaml
#cloud-config
packages:
  - curl
  - jq
runcmd:
  - sysctl -w vm.overcommit_memory=1
  - sysctl -w kernel.panic=10
  - sysctl -w kernel.panic_on_oops=1
  - curl https://releases.rancher.com/install-docker/18.09.sh | sh
  - usermod -aG docker ubuntu
  - return=1; while [ $return != 0 ]; do sleep 2; docker ps; return=$?; done
  - addgroup --gid 52034 etcd
  - useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
write_files:
  - path: /etc/sysctl.d/kubelet.conf
    owner: root:root
    permissions: "0644"
    content: |
      vm.overcommit_memory=1
      kernel.panic=10
      kernel.panic_on_oops=1
```

---
title: Hardening Guide with CIS 1.6 Benchmark
weight: 100
---

This document provides prescriptive guidance for hardening a production installation of an RKE cluster to be used with Rancher v2.5.4. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).

> This hardening guide describes how to secure the nodes in your cluster, and it is recommended to follow this guide before installing Kubernetes.

This hardening guide is intended to be used for RKE clusters and is associated with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:

Rancher Version | CIS Benchmark Version | Kubernetes Version
----------------|-----------------------|------------------
Rancher v2.5.4 | Benchmark 1.6 | Kubernetes v1.18

[Click here to download a PDF version of this document](https://releases.rancher.com/documents/security/2.5/Rancher_Hardening_Guide_CIS_1.6.pdf)

### Overview

This document provides prescriptive guidance for hardening an RKE cluster to be used for installing Rancher v2.5.4 with Kubernetes v1.18, or for provisioning an RKE cluster with Kubernetes v1.18 to be used within Rancher v2.5.4. It outlines the configurations required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).

For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the [CIS 1.6 Benchmark - Self-Assessment Guide - Rancher v2.5.4]({{< baseurl >}}/rancher/v2.6/en/security/rancher-2.5/1.6-benchmark-2.5/).

#### Known Issues

- Rancher **exec shell** and **view logs** for pods are **not** functional in a CIS 1.6 hardened setup when only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes.
- When setting `default_pod_security_policy_template_id:` to `restricted`, Rancher creates **RoleBindings** and **ClusterRoleBindings** on the default service accounts. The CIS 1.6 check 5.1.5 requires that the default service accounts have no roles or cluster roles bound to them apart from the defaults. In addition, the default service accounts should be configured so that they do not provide a service account token and do not have any explicit rights assignments.

When migrating Rancher from v2.4 to v2.5, note that addons were removed from the v2.5 hardening guide. Namespaces such as `ingress-nginx` and `cattle-system` may therefore not be created on the downstream clusters during migration, and pods may fail to run because of the missing namespaces.

### Configure Kernel Runtime Parameters

The following `sysctl` configuration is recommended for all node types in the cluster. Set the following parameters in `/etc/sysctl.d/90-kubelet.conf`:

```ini
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
kernel.keys.root_maxbytes=25000000
```

Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
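To confirm the parameters are active, you can read each one back. This is a quick check, not part of the official guide, and assumes a standard Linux host where these keys exist:

```shell
# Read back each parameter set in /etc/sysctl.d/90-kubelet.conf.
# Expected values per the file above: 1, 0, 10, 1, 25000000.
for param in vm.overcommit_memory vm.panic_on_oom kernel.panic kernel.panic_on_oops kernel.keys.root_maxbytes; do
  printf '%s = %s\n' "$param" "$(sysctl -n "$param" 2>/dev/null)"
done
```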
### Configure `etcd` user and group

A user account and group for the **etcd** service are required to be set up before installing RKE. The **uid** and **gid** for the **etcd** user will be used in the RKE **config.yml** to set the proper permissions for files and directories during installation time.

#### Create `etcd` user and group

To create the **etcd** user and group, run the following console commands.

The commands below use `52034` for the **uid** and **gid** for example purposes. Any valid unused **uid** or **gid** could also be used in lieu of `52034`.

```bash
groupadd --gid 52034 etcd
useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
```

Update the RKE **config.yml** with the **uid** and **gid** of the **etcd** user:

```yaml
services:
  etcd:
    gid: 52034
    uid: 52034
```

#### Set `automountServiceAccountToken` to `false` for `default` service accounts

Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.

For each namespace, including **default** and **kube-system** on a standard RKE install, the **default** service account must include this value:

```yaml
automountServiceAccountToken: false
```

Save the following yaml to a file called `account_update.yaml`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
```

Create a bash script file called `account_update.sh`. Be sure to `chmod +x account_update.sh` so the script has execute permissions.

```bash
#!/bin/bash -e

for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
done
```
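One possible way to run and spot-check the script, assuming `kubectl` is configured against the cluster and `account_update.yaml` is in the working directory:

```shell
# Patch the default service account in every namespace, then spot-check one.
# The output should include "automountServiceAccountToken: false".
./account_update.sh
kubectl get serviceaccount default -n kube-system -o yaml | grep automountServiceAccountToken
```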
### Ensure that all Namespaces have Network Policies defined

Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how selections of pods are allowed to communicate with each other and other network endpoints.

Network Policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace, all traffic will be allowed into and out of the pods in that namespace. To enforce network policies, a CNI (container network interface) plugin must be enabled. This guide uses [canal](https://github.com/projectcalico/canal) to provide the policy enforcement. Additional information about CNI providers can be found [here](https://rancher.com/blog/2019/2019-03-21-comparing-kubernetes-cni-providers-flannel-calico-canal-and-weave/).

Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a **permissive** example is provided below. If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all traffic in that namespace. Save the following `yaml` as `default-allow-all.yaml`. Additional [documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) about network policies can be found on the Kubernetes site.

> This `NetworkPolicy` is not recommended for production use.

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-allow-all
spec:
  podSelector: {}
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress
```

Create a bash script file called `apply_networkPolicy_to_all_ns.sh`. Be sure to `chmod +x apply_networkPolicy_to_all_ns.sh` so the script has execute permissions.

```bash
#!/bin/bash -e

for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
  kubectl apply -f default-allow-all.yaml -n ${namespace}
done
```

Execute this script to apply the **permissive** `default-allow-all.yaml` `NetworkPolicy` to all namespaces.
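A possible run-and-verify sequence, assuming `kubectl` access to the cluster and the files created above in the working directory:

```shell
# Apply the permissive policy to every namespace, then list policies.
# Each namespace should show a "default-allow-all" entry.
./apply_networkPolicy_to_all_ns.sh
kubectl get networkpolicy --all-namespaces
```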
### Reference Hardened RKE `cluster.yml` configuration

The reference `cluster.yml` is used by the RKE CLI and provides the configuration needed to achieve a hardened install of Rancher Kubernetes Engine (RKE). Install [documentation](https://rancher.com/docs/rke/latest/en/installation/) is provided with additional details about the configuration items. This reference `cluster.yml` does not include the required **nodes** directive, which will vary depending on your environment. Documentation for node configuration can be found here: https://rancher.com/docs/rke/latest/en/config-options/nodes

```yaml
# If you intend to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
# https://rancher.com/docs/rke/latest/en/installation/

# the nodes directive is required and will vary depending on your environment
# documentation for node configuration can be found here:
# https://rancher.com/docs/rke/latest/en/config-options/nodes
nodes: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 52034
    gid: 52034
    snapshot: false
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    service_cluster_ip_range: ""
    service_node_port_range: ""
    pod_security_policy: true
    always_pull_images: false
    secrets_encryption_config:
      enabled: true
      custom_config: null
    audit_log:
      enabled: true
      configuration: null
    admission_configuration: null
    event_rate_limit:
      enabled: true
      configuration: null
  kube-controller:
    image: ""
    extra_args:
      feature-gates: RotateKubeletServerCertificate=true
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_cidr: ""
    service_cluster_ip_range: ""
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
  kubelet:
    image: ""
    extra_args:
      feature-gates: RotateKubeletServerCertificate=true
      protect-kernel-defaults: "true"
      tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: ""
    fail_swap_on: false
    generate_serving_certificate: true
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_binds: []
    win_extra_env: []
network:
  plugin: ""
  options: {}
  mtu: 0
  node_selector: {}
  update_strategy: null
authentication:
  strategy: ""
  sans: []
  webhook: null
addons: |
  apiVersion: policy/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restricted
  spec:
    requiredDropCapabilities:
    - NET_RAW
    privileged: false
    allowPrivilegeEscalation: false
    defaultAllowPrivilegeEscalation: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: MustRunAsNonRoot
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - emptyDir
    - secret
    - persistentVolumeClaim
    - downwardAPI
    - configMap
    - projected
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: psp:restricted
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - restricted
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: psp:restricted
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: psp:restricted
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-allow-all
  spec:
    podSelector: {}
    ingress:
    - {}
    egress:
    - {}
    policyTypes:
    - Ingress
    - Egress
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: default
  automountServiceAccountToken: false
addons_include: []
system_images:
  etcd: ""
  alpine: ""
  nginx_proxy: ""
  cert_downloader: ""
  kubernetes_services_sidecar: ""
  kubedns: ""
  dnsmasq: ""
  kubedns_sidecar: ""
  kubedns_autoscaler: ""
  coredns: ""
  coredns_autoscaler: ""
  nodelocal: ""
  kubernetes: ""
  flannel: ""
  flannel_cni: ""
  calico_node: ""
  calico_cni: ""
  calico_controllers: ""
  calico_ctl: ""
  calico_flexvol: ""
  canal_node: ""
  canal_cni: ""
  canal_controllers: ""
  canal_flannel: ""
  canal_flexvol: ""
  weave_node: ""
  weave_cni: ""
  pod_infra_container: ""
  ingress: ""
  ingress_backend: ""
  metrics_server: ""
  windows_pod_infra_container: ""
ssh_key_path: ""
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: ""
  options: {}
ignore_docker_version: false
kubernetes_version: v1.18.12-rancher1-1
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
  update_strategy: null
  http_port: 0
  https_port: 0
  network_mode: ""
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
win_prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
  update_strategy: null
  replicas: null
restore:
  restore: false
  snapshot_name: ""
dns: null
upgrade_strategy:
  max_unavailable_worker: ""
  max_unavailable_controlplane: ""
  drain: null
  node_drain_input: null
```

### Reference Hardened RKE Template configuration

The reference RKE Template provides the configuration needed to achieve a hardened install of Kubernetes. RKE Templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher [documentation](https://rancher.com/docs/rancher/v2.6/en/installation) for additional installation and RKE Template details.

```yaml
#
# Cluster Config
#
default_pod_security_policy_template_id: restricted
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: true
#
# Rancher Config
#
rancher_kubernetes_engine_config:
  addon_job_timeout: 45
  ignore_docker_version: true
  kubernetes_version: v1.18.12-rancher1-1
  #
  # If you are using calico on AWS
  #
  # network:
  #   plugin: calico
  #   calico_network_provider:
  #     cloud_provider: aws
  #
  # # To specify flannel interface
  #
  # network:
  #   plugin: flannel
  #   flannel_network_provider:
  #     iface: eth1
  #
  # # To specify flannel interface for canal plugin
  #
  # network:
  #   plugin: canal
  #   canal_network_provider:
  #     iface: eth1
  #
  network:
    mtu: 0
    plugin: canal
  rotate_encryption_key: false
  #
  # services:
  #   kube-api:
  #     service_cluster_ip_range: 10.43.0.0/16
  #   kube-controller:
  #     cluster_cidr: 10.42.0.0/16
  #     service_cluster_ip_range: 10.43.0.0/16
  #   kubelet:
  #     cluster_domain: cluster.local
  #     cluster_dns_server: 10.43.0.10
  #
  services:
    etcd:
      backup_config:
        enabled: false
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: '5000'
        heartbeat-interval: '500'
      gid: 52034
      retention: 72h
      snapshot: false
      uid: 52034
    kube_api:
      always_pull_images: false
      audit_log:
        enabled: true
      event_rate_limit:
        enabled: true
      pod_security_policy: true
      secrets_encryption_config:
        enabled: true
      service_node_port_range: 30000-32767
    kube_controller:
      extra_args:
        feature-gates: RotateKubeletServerCertificate=true
    kubelet:
      extra_args:
        feature-gates: RotateKubeletServerCertificate=true
        protect-kernel-defaults: 'true'
        tls-cipher-suites: >-
          TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
      fail_swap_on: false
      generate_serving_certificate: true
  ssh_agent_auth: false
  upgrade_strategy:
    max_unavailable_controlplane: '1'
    max_unavailable_worker: 10%
  windows_prefered_cluster: false
```

### Hardened Reference Ubuntu 20.04 LTS **cloud-config**

The reference **cloud-config** is generally used in cloud infrastructure environments to allow for configuration management of compute instances. The reference config sets the Ubuntu operating system level settings needed before installing Kubernetes.

```yaml
#cloud-config
apt:
  sources:
    docker.list:
      source: deb [arch=amd64] http://download.docker.com/linux/ubuntu $RELEASE stable
      keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
system_info:
  default_user:
    groups:
      - docker
write_files:
  - path: "/etc/apt/preferences.d/docker"
    owner: root:root
    permissions: '0600'
    content: |
      Package: docker-ce
      Pin: version 5:19*
      Pin-Priority: 800
  - path: "/etc/sysctl.d/90-kubelet.conf"
    owner: root:root
    permissions: '0644'
    content: |
      vm.overcommit_memory=1
      vm.panic_on_oom=0
      kernel.panic=10
      kernel.panic_on_oops=1
      kernel.keys.root_maxbytes=25000000
package_update: true
packages:
  - docker-ce
  - docker-ce-cli
  - containerd.io
runcmd:
  - sysctl -p /etc/sysctl.d/90-kubelet.conf
  - groupadd --gid 52034 etcd
  - useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
```

---
|
||||
title: Self-Assessment and Hardening Guides for Rancher v2.5
|
||||
shortTitle: Rancher v2.5 Guides
|
||||
weight: 1
|
||||
---
|
||||
|
||||
Rancher v2.5 introduced the capability to deploy Rancher on any Kubernetes cluster. For that reason, we now provide separate security hardening guides for Rancher deployments on each of Rancher's Kubernetes distributions.
|
||||
|
||||
- [Rancher Kubernetes Distributions](#rancher-kubernetes-distributions)
|
||||
- [Hardening Guides and Benchmark Versions](#hardening-guides-and-benchmark-versions)
|
||||
- [RKE Guides](#rke-guides)
|
||||
- [RKE2 Guides](#rke2-guides)
|
||||
- [K3s Guides](#k3s)
|
||||
- [Rancher with SELinux](#rancher-with-selinux)
|
||||
|
||||
# Rancher Kubernetes Distributions
|
||||
|

Rancher has the following Kubernetes distributions:

- [**RKE**]({{<baseurl>}}/rke/latest/en/) (Rancher Kubernetes Engine) is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.
- [**K3s**]({{<baseurl>}}/k3s/latest/en/) is a fully conformant, lightweight Kubernetes distribution. It is easy to install, uses half the memory of upstream Kubernetes, and ships as a single binary of less than 100 MB.
- [**RKE2**](https://docs.rke2.io/) is a fully conformant Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector.

To harden a Kubernetes cluster outside of Rancher's distributions, refer to your Kubernetes provider's docs.

# Hardening Guides and Benchmark Versions

These guides have been tested with the Rancher v2.5 release. Each self-assessment guide is accompanied by a hardening guide and is tested against a specific Kubernetes version and CIS Benchmark version. If a CIS Benchmark has not been validated for your Kubernetes version, you can use the existing guides until a newer version is added.

### RKE Guides

Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guide
---|---|---|---
Kubernetes v1.15+ | CIS v1.5 | [Link](./1.5-benchmark-2.5) | [Link](./1.5-hardening-2.5)
Kubernetes v1.18+ | CIS v1.6 | [Link](./1.6-benchmark-2.5) | [Link](./1.6-hardening-2.5)

### RKE2 Guides

Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guide
---|---|---|---
Kubernetes v1.18 | CIS v1.5 | [Link](https://docs.rke2.io/security/cis_self_assessment15/) | [Link](https://docs.rke2.io/security/hardening_guide/)
Kubernetes v1.20 | CIS v1.6 | [Link](https://docs.rke2.io/security/cis_self_assessment16/) | [Link](https://docs.rke2.io/security/hardening_guide/)

### K3s Guides

Kubernetes Version | CIS Benchmark Version | Self Assessment Guide | Hardening Guide
---|---|---|---
Kubernetes v1.17, v1.18, & v1.19 | CIS v1.5 | [Link]({{<baseurl>}}/k3s/latest/en/security/self_assessment/) | [Link]({{<baseurl>}}/k3s/latest/en/security/hardening_guide/)

# Rancher with SELinux

[Security-Enhanced Linux (SELinux)](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is a security enhancement to Linux. Historically used mainly by government agencies, SELinux is now an industry standard and is enabled by default on CentOS 7 and 8.

To use Rancher with SELinux, we recommend installing the `rancher-selinux` RPM according to the instructions on [this page.]({{<baseurl>}}/rancher/v2.6/en/security/selinux/#installing-the-rancher-selinux-rpm)
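Before installing the RPM, it can be useful to confirm what mode SELinux is running in on the host. A minimal sketch, assuming a CentOS/RHEL-style host where `getenforce` ships with the SELinux userspace tools (the `selinux_mode` helper name is hypothetical):

```shell
#!/bin/sh
# Report the current SELinux mode, degrading gracefully on hosts
# where the SELinux userspace tools are not installed.
selinux_mode() {
  if command -v getenforce >/dev/null 2>&1; then
    getenforce  # prints Enforcing, Permissive, or Disabled
  else
    echo "unknown (getenforce not found)"
  fi
}

selinux_mode
```

On a hardened node this should print `Enforcing`; `Permissive` logs denials without blocking them, which is useful while validating the `rancher-selinux` policy.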
Binary file not shown. (Image added; size: 14 KiB.)