diff --git a/content/rancher/v2.6/en/admin-settings/rbac/global-permissions/_index.md b/content/rancher/v2.6/en/admin-settings/rbac/global-permissions/_index.md index 360bebf8ea5..7ebbefbbe5e 100644 --- a/content/rancher/v2.6/en/admin-settings/rbac/global-permissions/_index.md +++ b/content/rancher/v2.6/en/admin-settings/rbac/global-permissions/_index.md @@ -43,41 +43,12 @@ CATTLE_RESTRICTED_DEFAULT_ADMIN=true ``` ### List of `restricted-admin` Permissions -The permissions for the `restricted-admin` role differ based on the Rancher version. - -{{% tabs %}} -{{% tab "v2.5.6" %}} - The `restricted-admin` permissions are as follows: - Has full admin access to all downstream clusters managed by Rancher. - Can add other users and assign them to clusters outside of the local cluster. - Can create other restricted admins. -{{% /tab %}} -{{% tab "v2.5.0-v2.5.5" %}} - -The `restricted-admin` permissions are as follows: - -- Has full admin access to all downstream clusters managed by Rancher. -- Has very limited access to the local Kubernetes cluster. Can access Rancher custom resource definitions, but has no access to any Kubernetes native types. -- Can add other users and assign them to clusters outside of the local cluster. -- Can create other restricted admins. -- Cannot grant any permissions in the local cluster they don't currently have. (This is how Kubernetes normally operates) - -{{% /tab %}} -{{% /tabs %}} - -### Upgrading from Rancher with a Hidden Local Cluster - -Before Rancher v2.5, it was possible to run the Rancher server using this flag to hide the local cluster: - -``` ---add-local=false -``` - -You will need to drop this flag when upgrading to Rancher v2.5. Otherwise, Rancher will not start. The `restricted-admin` role can be used to continue restricting access to the local cluster. 
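For Helm-based installs, the bootstrap administrator can be created with the `restricted-admin` role by setting the corresponding chart value, which wires the `CATTLE_RESTRICTED_DEFAULT_ADMIN` environment variable above into the Rancher deployment. A minimal sketch, assuming the chart value is named `restrictedAdmin` and using a placeholder hostname:

```shell
# Sketch: install (or upgrade) Rancher with the bootstrap admin created as a
# restricted admin. "rancher.example.com" is a placeholder hostname, and the
# restrictedAdmin value name is assumed from the Rancher Helm chart.
helm upgrade --install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set restrictedAdmin=true
```

This only affects how the default admin is created; existing global administrators keep their current role until changed as described below.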
- ### Changing Global Administrators to Restricted Admins If Rancher already has a global administrator, they should change all global administrators over to the new `restricted-admin` role. diff --git a/content/rancher/v2.6/en/cis-scans/_index.md b/content/rancher/v2.6/en/cis-scans/_index.md index d54e85e2b91..98a8b6a9a8e 100644 --- a/content/rancher/v2.6/en/cis-scans/_index.md +++ b/content/rancher/v2.6/en/cis-scans/_index.md @@ -2,7 +2,7 @@ title: CIS Scans weight: 17 aliases: - - /rancher/v2.5/en/cis-scans/v2.5 + - /rancher/v2.6/en/cis-scans/v2.6 --- Rancher can run a security scan to check whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark. @@ -39,8 +39,6 @@ Support for alerting for the cluster scan results is now also available from Ran In Rancher v2.4, permissive and hardened profiles were included. In Rancher v2.5.0 and in v2.5.4, more profiles were included. -{{% tabs %}} -{{% tab "Profiles in v2.5.4" %}} - Generic CIS 1.5 - Generic CIS 1.6 - RKE permissive 1.5 @@ -51,22 +49,10 @@ In Rancher v2.4, permissive and hardened profiles were included. In Rancher v2.5 - GKE - RKE2 permissive 1.5 - RKE2 hardened 1.5 -{{% /tab %}} -{{% tab "Profiles in v2.5.0-v2.5.3" %}} -- Generic CIS 1.5 -- RKE permissive -- RKE hardened -- EKS -- GKE -{{% /tab %}} -{{% /tabs %}}
-The default profile and the supported CIS benchmark version depends on the type of cluster that will be scanned and the Rancher version: - -{{% tabs %}} -{{% tab "v2.5.4" %}} +The default profile and the supported CIS benchmark version depend on the type of cluster that will be scanned: The `rancher-cis-benchmark` supports the CIS 1.6 Benchmark version. @@ -75,18 +61,6 @@ The `rancher-cis-benchmark` supports the CIS 1.6 Benchmark version. - For RKE2 Kubernetes clusters, the RKE2 Permissive 1.5 profile is the default. - For cluster types other than RKE, RKE2, EKS and GKE, the Generic CIS 1.5 profile will be used by default. -{{% /tab %}} -{{% tab "v2.5.0-v2.5.3" %}} - -The `rancher-cis-benchmark` supports the CIS 1.5 Benchmark version. - -- For RKE Kubernetes clusters, the RKE permissive profile is the default. -- EKS and GKE have their own CIS Benchmarks published by `kube-bench`. The corresponding test profiles are used by default for those clusters. -- For cluster types other than RKE, EKS and GKE, the Generic CIS 1.5 profile will be used by default. - -{{% /tab %}} -{{% /tabs %}} - > **Note:** CIS v1 cannot run on a cluster when CIS v2 is deployed. In other words, after `rancher-cis-benchmark` is installed, you can't run scans by going to the Cluster Manager view in the Rancher UI and clicking Tools > CIS Scans. # About the CIS Benchmark @@ -133,8 +107,6 @@ Refer to the t The following profiles are available: -{{% tabs %}} -{{% tab "Profiles in v2.5.4" %}} - Generic CIS 1.5 - Generic CIS 1.6 - RKE permissive 1.5 @@ -145,15 +117,6 @@ The following profiles are available: - GKE - RKE2 permissive 1.5 - RKE2 hardened 1.5 -{{% /tab %}} -{{% tab "Profiles in v2.5.0-v2.5.3" %}} -- Generic CIS 1.5 -- RKE permissive -- RKE hardened -- EKS -- GKE -{{% /tab %}} -{{% /tabs %}} You also have the ability to customize a profile by saving a set of tests to skip.
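The defaults above apply when a scan is launched without choosing a profile; to pin a specific profile, a `ClusterScan` resource can also be created directly. A sketch, assuming the CRD shipped by the `rancher-cis-benchmark` chart and an illustrative profile name:

```shell
# Sketch: run an on-demand scan with an explicit profile by applying a
# ClusterScan resource (CRD installed by the rancher-cis-benchmark chart).
# The metadata name and profile name below are illustrative.
kubectl apply -f - <<EOF
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: rke-hardened-scan
spec:
  scanProfileName: rke-profile-hardened-1.6
EOF
```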
@@ -348,4 +311,4 @@ There could be some Kubernetes cluster setups that require custom configurations It is now possible to create a custom Benchmark Version for running a cluster scan using the `rancher-cis-benchmark` application. -For details, see [this page.](./custom-benchmark) \ No newline at end of file +For details, see [this page.](./custom-benchmark) diff --git a/content/rancher/v2.6/en/cluster-admin/editing-clusters/eks-config-reference/_index.md b/content/rancher/v2.6/en/cluster-admin/editing-clusters/eks-config-reference/_index.md index 9fa6ea0ce89..67d3e46d94f 100644 --- a/content/rancher/v2.6/en/cluster-admin/editing-clusters/eks-config-reference/_index.md +++ b/content/rancher/v2.6/en/cluster-admin/editing-clusters/eks-config-reference/_index.md @@ -4,13 +4,8 @@ shortTitle: EKS Cluster Configuration weight: 2 --- -{{% tabs %}} -{{% tab "Rancher v2.5.6+" %}} - ### Account Access - - Complete each drop-down and field using the information obtained for your IAM policy. | Setting | Description | @@ -20,8 +15,6 @@ Complete each drop-down and field using the information obtained for your IAM po ### Service Role - - Choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html). Service Role | Description @@ -31,14 +24,10 @@ Custom: Choose from your existing service roles | If you choose this role, Ranch ### Secrets Encryption - - Optional: To encrypt secrets, select or enter a key created in [AWS Key Management Service (KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) ### API Server Endpoint Access - - Configuring Public/Private API access is an advanced use case. 
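Outside of Rancher, the same public/private endpoint settings correspond to the EKS `update-cluster-config` API. A sketch with the AWS CLI, where the cluster name and CIDR block are placeholders:

```shell
# Sketch: enable private endpoint access and restrict the public endpoint
# to a single CIDR block ("my-cluster" and the CIDR are placeholders).
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config \
  endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs="203.0.113.0/24"
```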
For details, refer to the EKS cluster endpoint access control [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) ### Private-only API Endpoints @@ -51,8 +40,6 @@ There are two ways to avoid this extra manual step: ### Public Access Endpoints - - Optionally limit access to the public endpoint via explicit CIDR blocks. If you limit access to specific CIDR blocks, then it is recommended that you also enable the private access to avoid losing network communication to the cluster. @@ -65,8 +52,6 @@ For more information about public and private access to the cluster endpoint, re ### Subnet - - | Option | Description | | ------- | ------------ | | Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC with 3 public subnets. | @@ -79,8 +64,6 @@ For more information about public and private access to the cluster endpoint, re ### Security Group - - Amazon Documentation: - [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) @@ -89,8 +72,6 @@ Amazon Documentation: ### Logging - - Configure control plane logs to send to Amazon CloudWatch. You are charged the standard CloudWatch Logs data ingestion and storage costs for any logs sent to CloudWatch Logs from your clusters. Each log type corresponds to a component of the Kubernetes control plane. To learn more about these components, see [Kubernetes Components](https://kubernetes.io/docs/concepts/overview/components/) in the Kubernetes documentation. @@ -99,8 +80,6 @@ For more information on EKS control plane logging, refer to the official [docume ### Managed Node Groups - - Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters. 
For more information about how node groups work and how they are configured, refer to the [EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) @@ -152,254 +131,8 @@ The following settings are also configurable. All of these except for the "Node | Tags | These are tags for the managed node group and do not propagate to any of the associated resources. | -{{% /tab %}} -{{% tab "Rancher v2.5.0-v2.5.5" %}} - -### Changes in Rancher v2.5 - -More EKS options can be configured when you create an EKS cluster in Rancher, including the following: - -- Managed node groups -- Desired size, minimum size, maximum size (requires the Cluster Autoscaler to be installed) -- Control plane logging -- Secrets encryption with KMS - -The following capabilities have been added for configuring EKS clusters in Rancher: - -- GPU support -- Exclusively use managed nodegroups that come with the most up-to-date AMIs -- Add new nodes -- Upgrade nodes -- Add and remove node groups -- Disable and enable private access -- Add restrictions to public access -- Use your cloud credentials to create the EKS cluster instead of passing in your access key and secret key - -Due to the way that the cluster data is synced with EKS, if the cluster is modified from another source, such as in the EKS console, and in Rancher within five minutes, it could cause some changes to be overwritten. For information about how the sync works and how to configure it, refer to [this section](#syncing). - -### Account Access - - - -Complete each drop-down and field using the information obtained for your IAM policy. - -| Setting | Description | -| ---------- | -------------------------------------------------------------------------------------------------------------------- | -| Region | From the drop-down choose the geographical region in which to build your cluster. | -| Cloud Credentials | Select the cloud credentials that you created for your IAM policy. 
For more information on creating cloud credentials in Rancher, refer to [this page.]({{}}/rancher/v2.x/en/user-settings/cloud-credentials/) | - -### Service Role - - - -Choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html). - -Service Role | Description --------------|--------------------------- -Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster. -Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you're already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role). - -### Secrets Encryption - - - -Optional: To encrypt secrets, select or enter a key created in [AWS Key Management Service (KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) - -### API Server Endpoint Access - - - -Configuring Public/Private API access is an advanced use case. For details, refer to the EKS cluster endpoint access control [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) - -### Private-only API Endpoints - -If you enable private and disable public API endpoint access when creating a cluster, then there is an extra step you must take in order for Rancher to connect to the cluster successfully. In this case, a pop-up will be displayed with a command that you will run on the cluster to register it with Rancher. Once the cluster is provisioned, you can run the displayed command anywhere you can connect to the cluster's Kubernetes API. - -There are two ways to avoid this extra manual step: -- You can create the cluster with both private and public API endpoint access on cluster creation. 
You can disable public access after the cluster is created and in an active state and Rancher will continue to communicate with the EKS cluster. -- You can ensure that Rancher shares a subnet with the EKS cluster. Then security groups can be used to enable Rancher to communicate with the cluster's API endpoint. In this case, the command to register the cluster is not needed, and Rancher will be able to communicate with your cluster. For more information on configuring security groups, refer to the [security groups documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html). - -### Public Access Endpoints - - - -Optionally limit access to the public endpoint via explicit CIDR blocks. - -If you limit access to specific CIDR blocks, then it is recommended that you also enable the private access to avoid losing network communication to the cluster. - -One of the following is required to enable private access: -- Rancher's IP must be part of an allowed CIDR block -- Private access should be enabled, and Rancher must share a subnet with the cluster and have network access to the cluster, which can be configured with a security group - -For more information about public and private access to the cluster endpoint, refer to the [Amazon EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) - -### Subnet - - - -| Option | Description | -| ------- | ------------ | -| Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC with 3 public subnets. | -| Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your Control Plane and nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html). 
| - - For more information, refer to the AWS documentation for [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). Follow one of the sets of instructions below based on your selection from the previous step. - -- [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) -- [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html) - -### Security Group - - - -Amazon Documentation: - -- [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) -- [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) -- [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group) - -### Logging - - - -Configure control plane logs to send to Amazon CloudWatch. You are charged the standard CloudWatch Logs data ingestion and storage costs for any logs sent to CloudWatch Logs from your clusters. - -Each log type corresponds to a component of the Kubernetes control plane. To learn more about these components, see [Kubernetes Components](https://kubernetes.io/docs/concepts/overview/components/) in the Kubernetes documentation. - -For more information on EKS control plane logging, refer to the official [documentation.](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) - -### Managed Node Groups - - - -Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters. - -For more information about how node groups work and how they are configured, refer to the [EKS documentation.](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html) - -Amazon will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) for the Kubernetes version. 
You can configure whether the AMI has GPU enabled. - -| Option | Description | -| ------- | ------------ | -| Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning. | -| Maximum ASG Size | The maximum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. | -| Minimum ASG Size | The minimum number of instances. This setting won't take effect until the [Cluster Autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html) is installed. | - -{{% /tab %}} -{{% tab "Rancher prior to v2.5" %}} - - -### Account Access - - - -Complete each drop-down and field using the information obtained for your IAM policy. - -| Setting | Description | -| ---------- | -------------------------------------------------------------------------------------------------------------------- | -| Region | From the drop-down choose the geographical region in which to build your cluster. | -| Access Key | Enter the access key that you created for your IAM policy. | -| Secret Key | Enter the secret key that you created for your IAM policy. | - -### Service Role - - - -Choose a [service role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html). - -Service Role | Description --------------|--------------------------- -Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster. -Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you're already created within AWS. For more information on creating a custom service role in AWS, see the [Amazon documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#create-service-linked-role). 
- -### Public IP for Worker Nodes - - - -Your selection for this option determines what options are available for **VPC & Subnet**. - -Option | Description --------|------------ -Yes | When your cluster nodes are provisioned, they're assigned a both a private and public IP address. -No: Private IPs only | When your cluster nodes are provisioned, they're assigned only a private IP address.

If you choose this option, you must also choose a **VPC & Subnet** that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane. - -### VPC & Subnet - - - -The available options depend on the [public IP for worker nodes.](#public-ip-for-worker-nodes) - -Option | Description - -------|------------ - Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC and Subnet. - Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your nodes to use a VPC and Subnet that you've already [created in AWS](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html). If you choose this option, complete the remaining steps below. - - For more information, refer to the AWS documentation for [Cluster VPC Considerations](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). Follow one of the sets of instructions below based on your selection from the previous step. - -- [What Is Amazon VPC?](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html) -- [VPCs and Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html) - - -If you choose to assign a public IP address to your cluster's worker nodes, you have the option of choosing between a VPC that's automatically generated by Rancher (i.e., **Standard: Rancher generated VPC and Subnet**), or a VPC that you've already created with AWS (i.e., **Custom: Choose from your existing VPC and Subnets**). Choose the option that best fits your use case. - -{{% accordion id="yes" label="Click to expand" %}} - -If you're using **Custom: Choose from your existing VPC and Subnets**: - -(If you're using **Standard**, skip to the [instance options.)](#select-instance-options-2-4) - -1. Make sure **Custom: Choose from your existing VPC and Subnets** is selected. - -1. 
From the drop-down that displays, choose a VPC. - -1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays. - -1. Click **Next: Select Security Group**. -{{% /accordion %}} - -If your worker nodes have Private IPs only, you must also choose a **VPC & Subnet** that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane. -{{% accordion id="no" label="Click to expand" %}} -Follow the steps below. - ->**Tip:** When using only private IP addresses, you can provide your nodes internet access by creating a VPC constructed with two subnets, a private set and a public set. The private set should have its route tables configured to point toward a NAT in the public set. For more information on routing traffic from private subnets, please see the [official AWS documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html). - -1. From the drop-down that displays, choose a VPC. - -1. Click **Next: Select Subnets**. Then choose one of the **Subnets** that displays. - -{{% /accordion %}} - -### Security Group - - - -Amazon Documentation: - -- [Cluster Security Group Considerations](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) -- [Security Groups for Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) -- [Create a Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/getting-started-ipv4.html#getting-started-create-security-group) - -### Instance Options - - - -Instance type and size of your worker nodes affects how many IP addresses each worker node will have available. See this [documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) for more information. - -Option | Description --------|------------ -Instance Type | Choose the [hardware specs](https://aws.amazon.com/ec2/instance-types/) for the instance you're provisioning. 
-Custom AMI Override | If you want to use a custom [Amazon Machine Image](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html#creating-an-ami) (AMI), specify it here. By default, Rancher will use the [EKS-optimized AMI](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html) for the EKS version that you chose. -Desired ASG Size | The number of instances that your cluster will provision. -User Data | Custom commands can to be passed to perform automated configuration tasks **WARNING: Modifying this may cause your nodes to be unable to join the cluster.** _Note: Available as of v2.2.0_ - -{{% /tab %}} -{{% /tabs %}} - - - ### Configuring the Refresh Interval -{{% tabs %}} -{{% tab "Rancher v2.5.8+" %}} - The `eks-refresh-cron` setting is deprecated. It has been migrated to the `eks-refresh` setting, which is an integer representing seconds. The default value is 300 seconds. @@ -410,12 +143,3 @@ If the `eks-refresh-cron` setting was previously set, the migration will happen The shorter the refresh window, the less likely any race conditions will occur, but it does increase the likelihood of encountering request limits that may be in place for AWS APIs. -{{% /tab %}} -{{% tab "Before v2.5.8" %}} - -It is possible to change the refresh interval through the setting `eks-refresh-cron`. This setting accepts values in the Cron format. The default is `*/5 * * * *`. - -The shorter the refresh window, the less likely any race conditions will occur, but it does increase the likelihood of encountering request limits that may be in place for AWS APIs. 
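The `eks-refresh` value can be adjusted on the local (Rancher) cluster with `kubectl`, since Rancher settings are exposed as `management.cattle.io` resources. A sketch, with an illustrative 600-second interval:

```shell
# Sketch: raise the EKS sync interval from the 300-second default to 10 minutes.
# The Setting resource stores its value in a top-level "value" field.
kubectl patch settings.management.cattle.io eks-refresh \
  --type=merge -p '{"value": "600"}'
```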
- -{{% /tab %}} -{{% /tabs %}} diff --git a/content/rancher/v2.6/en/cluster-admin/editing-clusters/gke-config-reference/_index.md b/content/rancher/v2.6/en/cluster-admin/editing-clusters/gke-config-reference/_index.md index 1738c2aef5f..6b452ead8d8 100644 --- a/content/rancher/v2.6/en/cluster-admin/editing-clusters/gke-config-reference/_index.md +++ b/content/rancher/v2.6/en/cluster-admin/editing-clusters/gke-config-reference/_index.md @@ -4,23 +4,6 @@ shortTitle: GKE Cluster Configuration weight: 3 --- -{{% tabs %}} -{{% tab "v2.5.8" %}} - -# Changes in v2.5.8 - -- We now support private GKE clusters. Note: This advanced setup can require more steps during the cluster provisioning process. For details, see [this section.](./private-clusters) -- [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc) are now supported. -- We now support more configuration options for Rancher managed GKE clusters: - - Project - - Network policy - - Network policy config - - Node pools and node configuration options: - - More image types are available for the nodes - - The maximum number of pods per node can be configured - - Node pools can be added while configuring the GKE cluster -- When provisioning a GKE cluster, you can now use reusable cloud credentials instead of using a service account token directly to create the cluster. - # Cluster Location | Value | Description | @@ -301,148 +284,3 @@ The syncing interval can be changed by running `kubectl edit setting gke-refresh The shorter the refresh window, the less likely any race conditions will occur, but it does increase the likelihood of encountering request limits that may be in place for GCP APIs. -{{% /tab %}} -{{% tab "Rancher before v2.5.8" %}} - - -# Labels & Annotations - -Add Kubernetes [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) or [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) to the cluster. 
- -# Kubernetes Options - -### Location Type -Zonal or Regional. With GKE, you can create a cluster tailored to the availability requirements of your workload and your budget. By default, a cluster's nodes run in a single compute zone. When multiple zones are selected, the cluster's nodes will span multiple compute zones, while the controlplane is located in a single zone. Regional clusters increase the availability of the controlplane as well. For help choosing the type of cluster availability, refer to [these docs.](https://cloud.google.com/kubernetes-engine/docs/best-practices/scalability#choosing_a_regional_or_zonal_control_plane) - -For [regional clusters,](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#regional_clusters) you can select a region. For more information about available regions and zones, refer to [this section](https://cloud.google.com/compute/docs/regions-zones#available). The first part of each zone name is the name of the region. - -The location type can't be changed after the cluster is created. - -### Zone -Each region in Compute engine contains a number of zones. - -For more information about available regions and zones, refer to [these docs.](https://cloud.google.com/compute/docs/regions-zones#available) - -### Additional Zones -For zonal clusters, you can select additional zones to create a [multi-zone cluster.](https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#multi-zonal_clusters) - -### Kubernetes Version -Link to list of GKE kubernetes versions - -### Container Address Range - -The IP address range for pods in the cluster. Must be a valid CIDR range, e.g. 10.42.0.0/16. If not specified, a random range is automatically chosen from 10.0.0.0/8 and will exclude ranges already allocated to VMs, other clusters, or routes. Automatically chosen ranges may conflict with reserved IP addresses, dynamic routes, or routes within VPCs peering with the cluster. 
- -### Alpha Features - -Turns on all Kubernetes alpha API groups and features for the cluster. When enabled, the cluster cannot be upgraded and will be deleted automatically after 30 days. Alpha clusters are not recommended for production use as they are not covered by the GKE SLA. For more information, refer to [this page](https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters). - -### Legacy Authorization - -This option is deprecated and it is recommended to leave it disabled. For more information, see [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster#leave_abac_disabled) -### Stackdriver Logging - -Enable logging with Google Cloud's Operations Suite, formerly called Stackdriver. For details, see the [documentation.](https://cloud.google.com/logging/docs/basic-concepts) -### Stackdriver Monitoring - -Enable monitoring with Google Cloud's Operations Suite, formerly called Stackdriver. For details, see the [documentation.](https://cloud.google.com/monitoring/docs/monitoring-overview) -### Kubernetes Dashboard - -Enable the [Kubernetes dashboard add-on.](https://cloud.google.com/kubernetes-engine/docs/concepts/dashboards#kubernetes_dashboard) Starting with GKE v1.15, you will no longer be able to enable the Kubernetes Dashboard by using the add-on API. -### Http Load Balancing - -Set up [HTTP(S) load balancing.](https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer) To use Ingress, you must have the HTTP(S) Load Balancing add-on enabled. -### Horizontal Pod Autoscaling - -The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload's CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics from sources outside of your cluster. 
For more information, see the [documentation.](https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler) -### Maintenance Window - -Set the start time for a 4 hour maintenance window. The time is specified in the UTC time zone using the HH:MM format. For more information, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/maintenance-windows-and-exclusions) - -### Network - -The Compute Engine Network that the cluster connects to. Routes and firewalls will be created using this network. If using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc), the VPC networks that are shared to your project will appear here. will be available to select in this field. For more information, refer to [this page](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets). - -### Node Subnet / Subnet - -The Compute Engine subnetwork that the cluster connects to. This subnetwork must belong to the network specified in the **Network** field. Select an existing subnetwork, or select "Auto Create Subnetwork" to have one automatically created. If not using an existing network, **Subnetwork Name** is required to generate one. If using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc), the VPC subnets that are shared to your project will appear here. If using a Shared VPC network, you cannot select "Auto Create Subnetwork". For more information, refer to [this page.](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets) -### Ip Aliases - -Enable [alias IPs](https://cloud.google.com/vpc/docs/alias-ip). This enables VPC-native traffic routing. Required if using [Shared VPCs](https://cloud.google.com/vpc/docs/shared-vpc). - -### Pod address range - -When you create a VPC-native cluster, you specify a subnet in a VPC network. The cluster uses three unique subnet IP address ranges for nodes, pods, and services. 
For more information on IP address ranges, see [this section.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing) - -### Service address range - -When you create a VPC-native cluster, you specify a subnet in a VPC network. The cluster uses three unique subnet IP address ranges for nodes, pods, and services. For more information on IP address ranges, see [this section.](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing) -### Cluster Labels - -A [cluster label](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-managing-labels) is a key-value pair that helps you organize your Google Cloud clusters. You can attach a label to each resource, then filter the resources based on their labels. Information about labels is forwarded to the billing system, so you can break down your billing charges by label. - -## Node Options - -### Node Count -Integer for the starting number of nodes in the node pool. - -### Machine Type -For more information on Google Cloud machine types, refer to [this page.](https://cloud.google.com/compute/docs/machine-types#machine_types) - -### Image Type -Ubuntu or Container-Optimized OS images are available. - -For more information about GKE node image options, refer to [this page.](https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#available_node_images) - -### Root Disk Type - -Standard persistent disks are backed by standard hard disk drives (HDD), while SSD persistent disks are backed by solid state drives (SSD). For more information, refer to [this section.](https://cloud.google.com/compute/docs/disks) - -### Root Disk Size -The size in GB of the [root disk.](https://cloud.google.com/compute/docs/disks) - -### Local SSD disks -Configure each node's local SSD disk storage in GB. - -Local SSDs are physically attached to the server that hosts your VM instance. 
Local SSDs have higher throughput and lower latency than standard persistent disks or SSD persistent disks. The data that you store on a local SSD persists only until the instance is stopped or deleted. For more information, see [this section.](https://cloud.google.com/compute/docs/disks#localssds) - -### Preemptible nodes (beta) - -Preemptible nodes, also called preemptible VMs, are Compute Engine VM instances that last a maximum of 24 hours in general, and provide no availability guarantees. For more information, see [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms) - -### Auto Upgrade - -> Note: Enabling the Auto Upgrade feature for Nodes is not recommended. - -When enabled, the auto-upgrade feature keeps the nodes in your cluster up-to-date with the cluster control plane (master) version when your control plane is [updated on your behalf.](https://cloud.google.com/kubernetes-engine/upgrades#automatic_cp_upgrades) For more information about auto-upgrading nodes, see [this page.](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades) - -### Auto Repair - -GKE's node auto-repair feature helps you keep the nodes in your cluster in a healthy, running state. When enabled, GKE makes periodic checks on the health state of each node in your cluster. If a node fails consecutive health checks over an extended time period, GKE initiates a repair process for that node. For more information, see the section on [auto-repairing nodes.](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair) - -### Node Pool Autoscaling - -Enable node pool autoscaling based on cluster load. For more information, see the documentation on [adding a node pool with autoscaling.](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler#adding_a_node_pool_with_autoscaling) - -### Taints -When you apply a taint to a node, only Pods that tolerate the taint are allowed to run on the node. 
In a GKE cluster, you can apply a taint to a node pool, which applies the taint to all nodes in the pool. -### Node Labels -You can apply labels to the node pool, which applies the labels to all nodes in the pool. - -## Security Options - -### Service Account - -Create a [Service Account](https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts) with a JSON private key and provide the JSON here. See [Google Cloud docs](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances) for more info about creating a service account. These IAM roles are required: Compute Viewer (`roles/compute.viewer`), (Project) Viewer (`roles/viewer`), Kubernetes Engine Admin (`roles/container.admin`), Service Account User (`roles/iam.serviceAccountUser`). More info on roles can be found [here.](https://cloud.google.com/kubernetes-engine/docs/how-to/iam-integration) - -### Access Scopes - -Access scopes are the legacy method of specifying permissions for your nodes. - -- **Allow default access:** The default access for new clusters is the [Compute Engine default service account.](https://cloud.google.com/compute/docs/access/service-accounts?hl=en_US#default_service_account) -- **Allow full access to all Cloud APIs:** Generally, you can just set the cloud-platform access scope to allow full access to all Cloud APIs, then grant the service account only relevant IAM roles. The combination of access scopes granted to the virtual machine instance and the IAM roles granted to the service account determines the amount of access the service account has for that instance. -- **Set access for each API:** Alternatively, you can choose to set specific scopes that permit access to the particular API methods that the service will call. 
- -For more information, see the [section about enabling service accounts for a VM.](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances) -{{% /tab %}} -{{% /tabs %}} diff --git a/content/rancher/v2.6/en/cluster-provisioning/cluster-capabilities-table/index.md b/content/rancher/v2.6/en/cluster-provisioning/cluster-capabilities-table/index.md index 5dd2664c8b7..469dd74318a 100644 --- a/content/rancher/v2.6/en/cluster-provisioning/cluster-capabilities-table/index.md +++ b/content/rancher/v2.6/en/cluster-provisioning/cluster-capabilities-table/index.md @@ -2,9 +2,6 @@ headless: true --- -{{% tabs %}} -{{% tab "Rancher v2.5.8" %}} - | Action | Rancher Launched Kubernetes Clusters | EKS and GKE Clusters* | Other Hosted Kubernetes Clusters | Non-EKS or GKE Registered Clusters | | --- | --- | ---| ---|----| | [Using kubectl and a kubeconfig file to Access a Cluster]({{}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/) | ✓ | ✓ | ✓ | ✓ | @@ -29,31 +26,3 @@ headless: true \* \* \* For registered cluster nodes, the Rancher UI exposes the ability to cordon drain, and edit the node. 
-{{% /tab %}} -{{% tab "Rancher v2.5.0-v2.5.7" %}} - -| Action | Rancher Launched Kubernetes Clusters | Hosted Kubernetes Clusters | Registered EKS Clusters | All Other Registered Clusters | -| --- | --- | ---| ---|----| -| [Using kubectl and a kubeconfig file to Access a Cluster]({{}}/rancher/v2.5/en/cluster-admin/cluster-access/kubectl/) | ✓ | ✓ | ✓ | ✓ | -| [Managing Cluster Members]({{}}/rancher/v2.5/en/cluster-admin/cluster-access/cluster-members/) | ✓ | ✓ | ✓ | ✓ | -| [Editing and Upgrading Clusters]({{}}/rancher/v2.5/en/cluster-admin/editing-clusters/) | ✓ | ✓ | ✓ | * | -| [Managing Nodes]({{}}/rancher/v2.5/en/cluster-admin/nodes) | ✓ | ✓ | ✓ | ✓ ** | -| [Managing Persistent Volumes and Storage Classes]({{}}/rancher/v2.5/en/cluster-admin/volumes-and-storage/) | ✓ | ✓ | ✓ | ✓ | -| [Managing Projects, Namespaces and Workloads]({{}}/rancher/v2.5/en/cluster-admin/projects-and-namespaces/) | ✓ | ✓ | ✓ | ✓ | -| [Using App Catalogs]({{}}/rancher/v2.5/en/catalog/) | ✓ | ✓ | ✓ | ✓ | -| [Configuring Tools (Alerts, Notifiers, Logging, Monitoring, Istio)]({{}}/rancher/v2.5/en/cluster-admin/tools/) | ✓ | ✓ | ✓ | ✓ | -| [Running Security Scans]({{}}/rancher/v2.5/en/security/security-scan/) | ✓ | ✓ | ✓ | ✓ | -| [Cloning Clusters]({{}}/rancher/v2.5/en/cluster-admin/cloning-clusters/)| ✓ | ✓ |✓ | | -| [Ability to rotate certificates]({{}}/rancher/v2.5/en/cluster-admin/certificate-rotation/) | ✓ | | ✓ | | -| [Ability to back up your Kubernetes Clusters]({{}}/rancher/v2.5/en/cluster-admin/backing-up-etcd/) | ✓ | | ✓ | | -| [Ability to recover and restore etcd]({{}}/rancher/v2.5/en/cluster-admin/restoring-etcd/) | ✓ | | ✓ | | -| [Cleaning Kubernetes components when clusters are no longer reachable from Rancher]({{}}/rancher/v2.5/en/cluster-admin/cleaning-cluster-nodes/) | ✓ | | | | -| [Configuring Pod Security Policies]({{}}/rancher/v2.5/en/cluster-admin/pod-security-policy/) | ✓ | | ✓ || - -\* Cluster configuration options can't be edited for imported clusters, except for [K3s 
and RKE2 clusters.]({{}}/rancher/v2.5/en/cluster-provisioning/imported-clusters/) - -\* \* For registered cluster nodes, the Rancher UI exposes the ability to cordon drain, and edit the node. - - -{{% /tab %}} -{{% /tabs %}} \ No newline at end of file diff --git a/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md b/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md index d58fee74333..6e3d2d0bfc3 100644 --- a/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md +++ b/content/rancher/v2.6/en/cluster-provisioning/hosted-kubernetes-clusters/gke/_index.md @@ -7,9 +7,6 @@ aliases: - /rancher/v2.5/en/cluster-provisioning/hosted-kubernetes-clusters/gke --- -{{% tabs %}} -{{% tab "Rancher v2.5.8+" %}} - - [Prerequisites](#prerequisites) - [Provisioning a GKE Cluster](#provisioning-a-gke-cluster) - [Private Clusters](#private-clusters) @@ -103,60 +100,3 @@ The GKE provisioner can synchronize the state of a GKE cluster between Rancher a For information on configuring the refresh interval, see [this section.]({{}}/rancher/v2.5/en/cluster-admin/editing-clusters/gke-config-reference/#configuring-the-refresh-interval) - -{{% /tab %}} -{{% tab "Rancher before v2.5.8" %}} - -# Prerequisites - -Some setup in Google Kubernetes Engine is required. - -### Service Account Token - -Create a service account using [Google Kubernetes Engine](https://console.cloud.google.com/projectselector/iam-admin/serviceaccounts). GKE uses this account to operate your cluster. Creating this account also generates a private key used for authentication. 
- -The service account requires the following roles: - -- **Compute Viewer:** `roles/compute.viewer` -- **Project Viewer:** `roles/viewer` -- **Kubernetes Engine Admin:** `roles/container.admin` -- **Service Account User:** `roles/iam.serviceAccountUser` - -[Google Documentation: Creating and Enabling Service Accounts](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances) - - ->**Note** ->Deploying to GKE will incur charges. - -# Create the GKE Cluster - -Use Rancher to set up and configure your Kubernetes cluster. - -1. From the **Clusters** page, click **Add Cluster**. - -2. Choose **Google Kubernetes Engine**. - -3. Enter a **Cluster Name**. - -4. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user. - -5. Either paste your service account private key in the **Service Account** text box or **Read from a file**. Then click **Next: Configure Nodes**. - - >**Note:** After submitting your private key, you may have to enable the Google Kubernetes Engine API. If prompted, browse to the URL displayed in the Rancher UI to enable the API. - -6. Select your cluster options, node options, and security options. For help, refer to the [GKE Cluster Configuration Reference.](#gke-before-v2-5-8) -7. Review your options to confirm they're correct. Then click **Create**. - -**Result:** You have successfully deployed a GKE cluster. - -Your cluster is created and assigned a state of **Provisioning.** Rancher is standing up your cluster.
- -You can access your cluster after its state is updated to **Active.** - -**Active** clusters are assigned two Projects: - -- `Default`, containing the `default` namespace -- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces - -{{% /tab %}} -{{% /tabs %}} diff --git a/content/rancher/v2.6/en/cluster-provisioning/registered-clusters/_index.md b/content/rancher/v2.6/en/cluster-provisioning/registered-clusters/_index.md index f8e7a29a2a8..9eb8edcf1ff 100644 --- a/content/rancher/v2.6/en/cluster-provisioning/registered-clusters/_index.md +++ b/content/rancher/v2.6/en/cluster-provisioning/registered-clusters/_index.md @@ -78,19 +78,10 @@ $ curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - The control that Rancher has to manage a registered cluster depends on the type of cluster. -{{% tabs %}} -{{% tab "Rancher v2.5.8+" %}} - -- [Changes in v2.5.8](#changes-in-v2-5-8) - [Features for All Registered Clusters](#2-5-8-features-for-all-registered-clusters) - [Additional Features for Registered K3s Clusters](#2-5-8-additional-features-for-registered-k3s-clusters) - [Additional Features for Registered EKS and GKE Clusters](#additional-features-for-registered-eks-and-gke-clusters) -### Changes in v2.5.8 - -Greater management capabilities are now available for [registered GKE clusters.](#additional-features-for-registered-eks-and-gke-clusters) The same configuration options are available for registered GKE clusters as for the GKE clusters created through the Rancher UI. - - ### Features for All Registered Clusters After registering a cluster, the cluster owner can: @@ -102,7 +93,6 @@ After registering a cluster, the cluster owner can: - Use [pipelines]({{}}/rancher/v2.5/en/project-admin/pipelines/) - Manage projects and workloads - ### Additional Features for Registered K3s Clusters [K3s]({{}}/k3s/latest/en/) is a lightweight, fully compliant Kubernetes distribution. 
@@ -123,51 +113,6 @@ When you delete an EKS cluster or GKE cluster that was created in Rancher, the c The capabilities for registered clusters are listed in the table on [this page.]({{}}/rancher/v2.5/en/cluster-provisioning/) - -{{% /tab %}} -{{% tab "Rancher v2.5.0-v2.5.8" %}} - -- [Features for All Registered Clusters](#before-2-5-8-features-for-all-registered-clusters) -- [Additional Features for Registered K3s Clusters](#before-2-5-8-additional-features-for-registered-k3s-clusters) -- [Additional Features for Registered EKS Clusters](#additional-features-for-registered-eks-clusters) - - -### Features for All Registered Clusters - -After registering a cluster, the cluster owner can: - -- [Manage cluster access]({{}}/rancher/v2.5/en/admin-settings/rbac/cluster-project-roles/) through role-based access control -- Enable [monitoring, alerts and notifiers]({{}}/rancher/v2.5/en/monitoring-alerting/v2.5/) -- Enable [logging]({{}}/rancher/v2.5/en/logging/v2.5/) -- Enable [Istio]({{}}/rancher/v2.5/en/istio/v2.5/) -- Use [pipelines]({{}}/rancher/v2.5/en/project-admin/pipelines/) -- Manage projects and workloads - - -### Additional Features for Registered K3s Clusters - -[K3s]({{}}/k3s/latest/en/) is a lightweight, fully compliant Kubernetes distribution. - -When a K3s cluster is registered in Rancher, Rancher will recognize it as K3s. 
The Rancher UI will expose the features for [all registered clusters,](#features-for-all-registered-clusters) in addition to the following features for editing and upgrading the cluster: - -- The ability to [upgrade the K3s version]({{}}/rancher/v2.5/en/cluster-admin/upgrading-kubernetes/) -- The ability to configure the maximum number of nodes that will be upgraded concurrently -- The ability to see a read-only version of the K3s cluster's configuration arguments and environment variables used to launch each node in the cluster - -### Additional Features for Registered EKS Clusters - -Registering an Amazon EKS cluster allows Rancher to treat it as though it were created in Rancher. - -Amazon EKS clusters can now be registered in Rancher. For the most part, registered EKS clusters and EKS clusters created in Rancher are treated the same way in the Rancher UI, except for deletion. - -When you delete an EKS cluster that was created in Rancher, the cluster is destroyed. When you delete an EKS cluster that was registered in Rancher, it is disconnected from the Rancher server, but it still exists and you can still access it in the same way you did before it was registered in Rancher. - -The capabilities for registered EKS clusters are listed in the table on [this page.]({{}}/rancher/v2.5/en/cluster-provisioning/) -{{% /tab %}} -{{% /tabs %}} - - - # Configuring K3s Cluster Upgrades > It is a Kubernetes best practice to back up the cluster before upgrading. When upgrading a high-availability K3s cluster with an external database, back up the database in whichever way is recommended by the relational database provider. 
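As a hypothetical illustration of the backup recommendation above (the host, user, and database names below are placeholders, not values from the Rancher docs), a K3s cluster backed by an external MySQL datastore could be dumped with the vendor's standard tool before upgrading:

```plain
# Example only: back up an external MySQL datastore used by K3s.
# Substitute your own host, user, and database name.
mysqldump -h mysql.example.com -u k3s -p k3s > k3s-datastore-backup.sql
```

Consult your database provider's documentation for the equivalent procedure on other datastores.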
diff --git a/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/_index.md b/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/_index.md index aef26507b12..48cc8dd929b 100644 --- a/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/_index.md +++ b/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/_index.md @@ -68,19 +68,8 @@ When Weave is selected as network provider, Rancher will automatically enable en Project network isolation is used to enable or disable communication between pods in different projects. -{{% tabs %}} -{{% tab "Rancher v2.5.8+" %}} - To enable project network isolation as a cluster option, you will need to use any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin. -{{% /tab %}} -{{% tab "Rancher before v2.5.8" %}} - -To enable project network isolation as a cluster option, you will need to use Canal as the CNI. - -{{% /tab %}} -{{% /tabs %}} - ### Kubernetes Cloud Providers You can configure a [Kubernetes cloud provider]({{}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/options/cloud-providers). If you want to use [volumes and storage]({{}}/rancher/v2.5/en/k8s-in-rancher/volumes-and-storage/) in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the `aws` cloud provider. 
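As a sketch of what selecting a cloud provider looks like in an RKE `cluster.yml` (the `cloud_provider` directive follows the RKE configuration schema; treat this as an illustrative fragment rather than a complete configuration):

```yaml
# Illustrative excerpt of an RKE cluster.yml selecting the AWS
# cloud provider so that Amazon EBS volumes can be provisioned.
cloud_provider:
  name: aws
```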
diff --git a/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md b/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md index d4cab37b33d..06419998b31 100644 --- a/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md +++ b/content/rancher/v2.6/en/cluster-provisioning/rke-clusters/windows-clusters/_index.md @@ -33,25 +33,9 @@ The general node requirements for networking, operating systems, and Docker are ### OS and Docker Requirements -{{% tabs %}} -{{% tab "Rancher v2.5.8+" %}} - Our support for Windows Server and Windows containers match the Microsoft official lifecycle for LTSC (Long-Term Servicing Channel) and SAC (Semi-Annual Channel). For the support lifecycle dates for Windows Server, see the [Microsoft Documentation.](https://docs.microsoft.com/en-us/windows-server/get-started/windows-server-release-info) -{{% /tab %}} -{{% tab "Rancher before v2.5.8" %}} -In order to add Windows worker nodes to a cluster, the node must be running one of the following Windows Server versions and the corresponding version of Docker Engine - Enterprise Edition (EE): - -- Nodes with Windows Server core version 1809 should use Docker EE-basic 18.09 or Docker EE-basic 19.03. -- Nodes with Windows Server core version 1903 should use Docker EE-basic 19.03. - -> **Notes:** -> -> - If you are using AWS, Rancher recommends _Microsoft Windows Server 2019 Base with Containers_ as the Amazon Machine Image (AMI). -> - If you are using GCE, Rancher recommends _Windows Server 2019 Datacenter for Containers_ as the OS image. -{{% /tab %}} -{{% /tabs %}} ### Kubernetes Version @@ -247,4 +231,4 @@ After creating your cluster, you can access it through the Rancher UI. As a best # Configuration for Storage Classes in Azure -If you are using Azure VMs for your nodes, you can use [Azure files](https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv) as a StorageClass for the cluster. 
For details, refer to [this section.]({{}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/windows-clusters/azure-storageclass) \ No newline at end of file +If you are using Azure VMs for your nodes, you can use [Azure files](https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv) as a StorageClass for the cluster. For details, refer to [this section.]({{}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/windows-clusters/azure-storageclass) diff --git a/content/rancher/v2.6/en/installation/install-rancher-on-k8s/upgrades/air-gap-upgrade/_index.md b/content/rancher/v2.6/en/installation/install-rancher-on-k8s/upgrades/air-gap-upgrade/_index.md index 2d591e83ab4..5e69f3ea54e 100644 --- a/content/rancher/v2.6/en/installation/install-rancher-on-k8s/upgrades/air-gap-upgrade/_index.md +++ b/content/rancher/v2.6/en/installation/install-rancher-on-k8s/upgrades/air-gap-upgrade/_index.md @@ -22,9 +22,6 @@ Placeholder | Description ### Option A: Default Self-signed Certificate -{{% tabs %}} -{{% tab "Rancher v2.5.8+" %}} - ``` helm template rancher ./rancher-.tgz --output-dir . \ --no-hooks \ # prevent files for Helm hooks from being generated @@ -36,31 +33,8 @@ helm template rancher ./rancher-.tgz --output-dir . \ --set useBundledSystemChart=true # Use the packaged Rancher system charts ``` -{{% /tab %}} -{{% tab "Rancher before v2.5.8" %}} - - ```plain -helm template rancher ./rancher-.tgz --output-dir . \ - --namespace cattle-system \ - --set hostname= \ - --set certmanager.version= \ - --set rancherImage=/rancher/rancher \ - --set systemDefaultRegistry= \ # Set a default private registry to be used in Rancher - --set useBundledSystemChart=true # Use the packaged Rancher system charts -``` - -{{% /tab %}} -{{% /tabs %}} - - - ### Option B: Certificates from Files using Kubernetes Secrets - -{{% tabs %}} -{{% tab "Rancher v2.5.8+" %}} - - ```plain helm template rancher ./rancher-.tgz --output-dir . 
\ --no-hooks \ # prevent files for Helm hooks from being generated @@ -86,36 +60,6 @@ helm template rancher ./rancher-.tgz --output-dir . \ --set useBundledSystemChart=true # Use the packaged Rancher system charts ``` -{{% /tab %}} -{{% tab "Rancher before v2.5.8" %}} - - -```plain -helm template rancher ./rancher-.tgz --output-dir . \ ---namespace cattle-system \ ---set hostname= \ ---set rancherImage=/rancher/rancher \ ---set ingress.tls.source=secret \ ---set systemDefaultRegistry= \ # Set a default private registry to be used in Rancher ---set useBundledSystemChart=true # Use the packaged Rancher system charts -``` - -If you are using a Private CA signed cert, add `--set privateCA=true` following `--set ingress.tls.source=secret`: - -```plain -helm template rancher ./rancher-.tgz --output-dir . \ ---namespace cattle-system \ ---set hostname= \ ---set rancherImage=/rancher/rancher \ ---set ingress.tls.source=secret \ ---set privateCA=true \ ---set systemDefaultRegistry= \ # Set a default private registry to be used in Rancher ---set useBundledSystemChart=true # Use the packaged Rancher system charts -``` -{{% /tab %}} -{{% /tabs %}} - - ### Apply the Rendered Templates Copy the rendered manifest directories to a system with access to the Rancher server cluster and apply the rendered templates. diff --git a/content/rancher/v2.6/en/installation/other-installation-methods/air-gap/install-rancher/_index.md b/content/rancher/v2.6/en/installation/other-installation-methods/air-gap/install-rancher/_index.md index dda9e56d1ea..f70e8ada91e 100644 --- a/content/rancher/v2.6/en/installation/other-installation-methods/air-gap/install-rancher/_index.md +++ b/content/rancher/v2.6/en/installation/other-installation-methods/air-gap/install-rancher/_index.md @@ -136,8 +136,6 @@ Placeholder | Description `` | The DNS name for your private registry. `` | Cert-manager version running on k8s cluster. 
-{{% tabs %}} -{{% tab "Rancher v2.5.8" %}} ```plain helm template rancher ./rancher-.tgz --output-dir . \ --no-hooks \ # prevent files for Helm hooks from being generated @@ -150,24 +148,6 @@ helm template rancher ./rancher-.tgz --output-dir . \ ``` **Optional**: To install a specific Rancher version, set the `rancherImageTag` value, example: `--set rancherImageTag=v2.5.8` -{{% /tab %}} -{{% tab "Rancher before v2.5.8" %}} - -```plain -helm template rancher ./rancher-.tgz --output-dir . \ - --namespace cattle-system \ - --set hostname= \ - --set certmanager.version= \ - --set rancherImage=/rancher/rancher \ - --set systemDefaultRegistry= \ # Set a default private registry to be used in Rancher - --set useBundledSystemChart=true # Use the packaged Rancher system charts -``` - -**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, example: `--set rancherImageTag=v2.5.6` -{{% /tab %}} -{{% /tabs %}} - - # Option B: Certificates From Files using Kubernetes Secrets @@ -186,9 +166,6 @@ Render the Rancher template, declaring your chosen options. Use the reference ta | `` | The DNS name you pointed at your load balancer. | | `` | The DNS name for your private registry. | -{{% tabs %}} -{{% tab "Rancher v2.5.8+" %}} - ```plain helm template rancher ./rancher-.tgz --output-dir . \ --no-hooks \ # prevent files for Helm hooks from being generated @@ -217,40 +194,6 @@ If you are using a Private CA signed cert, add `--set privateCA=true` following **Optional**: To install a specific Rancher version, set the `rancherImageTag` value, example: `--set rancherImageTag=v2.3.6` Then refer to [Adding TLS Secrets]({{}}/rancher/v2.5/en/installation/resources/encryption/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them. -{{% /tab %}} -{{% tab "Rancher before v2.5.8" %}} - - -```plain - helm template rancher ./rancher-.tgz --output-dir . 
\ - --namespace cattle-system \ - --set hostname= \ - --set rancherImage=/rancher/rancher \ - --set ingress.tls.source=secret \ - --set systemDefaultRegistry= \ # Set a default private registry to be used in Rancher - --set useBundledSystemChart=true # Use the packaged Rancher system charts -``` - -If you are using a Private CA signed cert, add `--set privateCA=true` following `--set ingress.tls.source=secret`: - -```plain - helm template rancher ./rancher-.tgz --output-dir . \ - --namespace cattle-system \ - --set hostname= \ - --set rancherImage=/rancher/rancher \ - --set ingress.tls.source=secret \ - --set privateCA=true \ - --set systemDefaultRegistry= \ # Set a default private registry to be used in Rancher - --set useBundledSystemChart=true # Use the packaged Rancher system charts -``` - -**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, example: `--set rancherImageTag=v2.3.6` - -Then refer to [Adding TLS Secrets]({{}}/rancher/v2.5/en/installation/resources/encryption/tls-secrets/) to publish the certificate files so Rancher and the ingress controller can use them. -{{% /tab %}} -{{% /tabs %}} - - # 4. Install Rancher diff --git a/content/rancher/v2.6/en/installation/resources/feature-flags/_index.md b/content/rancher/v2.6/en/installation/resources/feature-flags/_index.md index 62f14aa8c15..5aaa4e2c7fa 100644 --- a/content/rancher/v2.6/en/installation/resources/feature-flags/_index.md +++ b/content/rancher/v2.6/en/installation/resources/feature-flags/_index.md @@ -76,9 +76,6 @@ Here is an example of a command for passing in the feature flag names when rende The Helm 3 command is as follows: -{{% tabs %}} -{{% tab "Rancher v2.5.8" %}} - ``` helm template rancher ./rancher-.tgz --output-dir . \ --no-hooks \ # prevent files for Helm hooks from being generated @@ -91,22 +88,6 @@ helm template rancher ./rancher-.tgz --output-dir . 
\ --set 'extraEnv[0].name=CATTLE_FEATURES' --set 'extraEnv[0].value==true,=true' ``` -{{% /tab %}} -{{% tab "Rancher before v2.5.8" %}} - -``` -helm template rancher ./rancher-.tgz --output-dir . \ - --namespace cattle-system \ - --set hostname= \ - --set rancherImage=/rancher/rancher \ - --set ingress.tls.source=secret \ - --set systemDefaultRegistry= \ # Set a default private registry to be used in Rancher - --set useBundledSystemChart=true # Use the packaged Rancher system charts - --set 'extraEnv[0].name=CATTLE_FEATURES' - --set 'extraEnv[0].value==true,=true' -``` -{{% /tab %}} -{{% /tabs %}} The Helm 2 command is as follows: diff --git a/content/rancher/v2.6/en/istio/configuration-reference/enable-istio-with-psp/_index.md b/content/rancher/v2.6/en/istio/configuration-reference/enable-istio-with-psp/_index.md index d3d2b0809a9..7af6ae450b7 100644 --- a/content/rancher/v2.6/en/istio/configuration-reference/enable-istio-with-psp/_index.md +++ b/content/rancher/v2.6/en/istio/configuration-reference/enable-istio-with-psp/_index.md @@ -12,11 +12,6 @@ If you have restrictive Pod Security Policies enabled, then Istio may not be abl The Istio CNI plugin removes the need for each application pod to have a privileged `NET_ADMIN` container. For further information, see the [Istio CNI Plugin docs](https://istio.io/docs/setup/additional-setup/cni). Please note that the [Istio CNI Plugin is in alpha](https://istio.io/about/feature-stages/). -The steps differ based on the Rancher version. - -{{% tabs %}} -{{% tab "v2.5.4+" %}} - > **Prerequisites:** > > - The cluster must be an RKE Kubernetes cluster. @@ -58,53 +53,3 @@ Istio should install successfully with the CNI enabled in the cluster. Verify that the CNI is working by deploying a [sample application](https://istio.io/latest/docs/examples/bookinfo/) or deploying one of your own applications. -{{% /tab %}} -{{% tab "v2.5.0-v2.5.3" %}} - -> **Prerequisites:** -> -> - The cluster must be an RKE Kubernetes cluster. 
-> - The cluster must have been created with a default PodSecurityPolicy. -> -> To enable pod security policy support when creating a Kubernetes cluster in the Rancher UI, go to Advanced Options. In the Pod Security Policy Support section, click Enabled. Then select a default pod security policy. - -1. [Configure the System Project Policy to allow Istio install.](#1-configure-the-system-project-policy-to-allow-istio-install) -2. [Install the CNI plugin in the System project.](#2-install-the-cni-plugin-in-the-system-project) -3. [Install Istio.](#3-install-istio) - -### 1. Configure the System Project Policy to allow Istio install - -1. From the cluster view of the **Cluster Manager,** select **Projects/Namespaces.** -1. Find the **Project: System** and select the **⋮ > Edit**. -1. Change the Pod Security Policy option to be unrestricted, then click Save. - -### 2. Install the CNI Plugin in the System Project - -1. From the main menu of the **Dashboard**, select **Projects/Namespaces**. -1. Select the **Project: System** project. -1. Choose **Tools > Catalogs** in the navigation bar. -1. Add a catalog with the following: - 1. Name: istio-cni - 1. Catalog URL: https://github.com/istio/cni - 1. Branch: The branch that matches your current release, for example: `release-1.4`. -1. From the main menu select **Apps** -1. Click Launch and select istio-cni -1. Update the namespace to be "kube-system" -1. In the answers section, click "Edit as YAML" and paste in the following, then click launch: - -``` ---- - logLevel: "info" - excludeNamespaces: - - "istio-system" - - "kube-system" -``` - -### 3. Install Istio - -Follow the [primary instructions]({{}}/rancher/v2.5/en/cluster-admin/tools/istio/setup/enable-istio-in-cluster/), adding a custom answer: `istio_cni.enabled: true`. - -After Istio has finished installing, the Apps page in System Projects should show both istio and `istio-cni` applications deployed successfully. Sidecar injection will now be functional. 
- -{{% /tab %}} -{{% /tabs %}} \ No newline at end of file diff --git a/content/rancher/v2.6/en/istio/resources/_index.md b/content/rancher/v2.6/en/istio/resources/_index.md index d4fbc4f3777..c9cf095969a 100644 --- a/content/rancher/v2.6/en/istio/resources/_index.md +++ b/content/rancher/v2.6/en/istio/resources/_index.md @@ -20,9 +20,6 @@ The table below shows a summary of the minimum recommended resource requests and In Kubernetes, the resource request indicates that the workload will not be deployed on a node unless the node has at least the specified amount of memory and CPU available. If the workload surpasses the limit for CPU or memory, it can be terminated or evicted from the node. For more information on managing resource limits for containers, refer to the [Kubernetes documentation.](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) -{{% tabs %}} -{{% tab "v2.5.6+" %}} - | Workload | CPU - Request | Memory - Request | CPU - Limit | Memory - Limit | |----------------------|---------------|------------|-----------------|-------------------| | ingress gateway | 100m | 128Mi | 2000m | 1024Mi | @@ -31,23 +28,6 @@ In Kubernetes, the resource request indicates that the workload will not deploye | proxy | 10m | 10Mi | 2000m | 1024Mi | | **Totals:** | **710m** | **2314Mi** | **6000m** | **3072Mi** | -{{% /tab %}} -{{% tab "v2.5.0-v2.5.5" %}} - -Workload | CPU - Request | Memory - Request | CPU - Limit | Mem - Limit | Configurable ---------:|---------------:|---------------:|-------------:|-------------:|-------------: -Istiod | 500m | 2048Mi | No limit | No limit | Y | -Istio-Mixer | 1000m | 1000Mi | 4800m | 4000Mi | Y | -Istio-ingressgateway | 100m | 128Mi | 2000m | 1024Mi | Y | -Others | 10m | - | - | - | Y | -Totals: | 1710m | 3304Mi | >8800m | >6048Mi | - - -{{% /tab %}} -{{% /tabs %}} - - - - # Configuring Resource Allocations You can individually configure the resource allocation for each type of Istio component.
This section includes the default resource allocations for each component. @@ -78,4 +58,4 @@ In the example overlay file provided with the Istio application, the following s # resources: # requests: # cpu: 200m -``` \ No newline at end of file +``` diff --git a/content/rancher/v2.6/en/logging/_index.md b/content/rancher/v2.6/en/logging/_index.md index 043f8d5d1d4..a1d205f91d8 100644 --- a/content/rancher/v2.6/en/logging/_index.md +++ b/content/rancher/v2.6/en/logging/_index.md @@ -78,22 +78,10 @@ For a list of options that can be configured when the logging application is ins ### Windows Support -{{% tabs %}} -{{% tab "Rancher v2.5.8" %}} As of Rancher v2.5.8, logging support for Windows clusters has been added and logs can be collected from Windows nodes. For details on how to enable or disable Windows node logging, see [this section.](./helm-chart-options/#enable-disable-windows-node-logging) -{{% /tab %}} -{{% tab "Rancher v2.5.0-2.5.7" %}} -Clusters with Windows workers support exporting logs from Linux nodes, but Windows node logs are currently unable to be exported. -Only Linux node logs are able to be exported. - -To allow the logging pods to be scheduled on Linux nodes, tolerations must be added to the pods. Refer to the [Working with Taints and Tolerations]({{}}/rancher/v2.5/en/logging/taints-tolerations/) section for details and an example. 
-{{% /tab %}} -{{% /tabs %}} - - ### Working with a Custom Docker Root Directory For details on using a custom Docker root directory, see [this section.](./helm-chart-options/#working-with-a-custom-docker-root-directory) diff --git a/content/rancher/v2.6/en/logging/custom-resource-config/flows/_index.md b/content/rancher/v2.6/en/logging/custom-resource-config/flows/_index.md index a2d9489b218..7b422e2cbb7 100644 --- a/content/rancher/v2.6/en/logging/custom-resource-config/flows/_index.md +++ b/content/rancher/v2.6/en/logging/custom-resource-config/flows/_index.md @@ -10,9 +10,6 @@ For the full details on configuring `Flows` and `ClusterFlows`, see the [Banzai # Configuration -{{% tabs %}} -{{% tab "Rancher v2.5.8+" %}} - - [Flows](#flows-2-5-8) - [Matches](#matches-2-5-8) - [Filters](#filters-2-5-8) @@ -73,67 +70,6 @@ Matches, filters and `Outputs` are configured for `ClusterFlows` in the same way After `ClusterFlow` selects logs from all namespaces in the cluster, logs from the cluster will be collected and logged to the selected `ClusterOutput`. -{{% /tab %}} -{{% tab "Rancher v2.5.0-v2.5.7" %}} - -- [Flows](#flows-2-5-0) - - [Matches](#matches-2-5-0) - - [Filters](#filters-2-5-0) - - [Outputs](#outputs-2-5-0) -- [ClusterFlows](#clusterflows-2-5-0) - - - - -# Flows - -A `Flow` defines which logs to collect and filter and which `Output` to send the logs to. The `Flow` is a namespaced resource, which means logs will only be collected from the namespace that the `Flow` is deployed in. - -`Flows` need to be defined in YAML. - -For more details about the `Flow` custom resource, see [FlowSpec.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/flow_types/) - - - - -### Matches - -Match statements are used to select which containers to pull logs from. - -You can specify match statements to select or exclude logs according to Kubernetes labels, container and host names. 
Match statements are evaluated in the order they are defined and processed only until the first matching select or exclude rule applies. - -For detailed examples on using the match statement, see the [official documentation on log routing.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/log-routing/) - - - -### Filters - -You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records. The filters in the `Flow` are applied in the order in the definition. - -For a list of filters supported by the Banzai Cloud Logging operator, see [this page.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/filters/) - - - -### Outputs - -This `Output` will receive logs from the `Flow`. - -Because the `Flow` is a namespaced resource, the `Output` must reside in same namespace as the `Flow`. - - - -# ClusterFlows - -Matches, filters and `Outputs` are also configured for `ClusterFlows`. The only difference is that the `ClusterFlow` is scoped at the cluster level and can configure log collection across all namespaces. - -`ClusterFlow` selects logs from all namespaces in the cluster. Logs from the cluster will be collected and logged to the selected `ClusterOutput`. - -`ClusterFlows` need to be defined in YAML. 
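Putting the pieces above together, a `Flow` combining match statements, a filter, and an `Output` reference might be sketched as follows. All names are hypothetical; the field names follow the Banzai Cloud `FlowSpec` linked above:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: example-flow        # hypothetical name
  namespace: default        # logs are collected only from this namespace
spec:
  match:
    - exclude:              # evaluated in order; processing stops at the first matching rule
        labels:
          app: noisy-app    # hypothetical label
    - select: {}            # select everything else
  filters:
    - tag_normaliser: {}    # an example filter applied before the logs reach the Output
  localOutputRefs:
    - example-output        # hypothetical Output in the same namespace
```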
- -{{% /tab %}} -{{% /tabs %}} - - # YAML Example The following example `Flow` transforms the log messages from the default namespace and sends them to an S3 `Output`: diff --git a/content/rancher/v2.6/en/logging/custom-resource-config/outputs/_index.md b/content/rancher/v2.6/en/logging/custom-resource-config/outputs/_index.md index c64e9ba7040..96d80ece8b3 100644 --- a/content/rancher/v2.6/en/logging/custom-resource-config/outputs/_index.md +++ b/content/rancher/v2.6/en/logging/custom-resource-config/outputs/_index.md @@ -14,9 +14,6 @@ For the full details on configuring `Outputs` and `ClusterOutputs`, see the [Ban # Configuration -{{% tabs %}} -{{% tab "v2.5.8+" %}} - - [Outputs](#outputs-2-5-8) - [ClusterOutputs](#clusteroutputs-2-5-8) @@ -68,43 +65,6 @@ For example configuration for each logging plugin supported by the logging opera For the details of the `ClusterOutput` custom resource, see [ClusterOutput.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/clusteroutput_types/) -{{% /tab %}} -{{% tab "v2.5.0-v2.5.7" %}} - - -- [Outputs](#outputs-2-5-0) -- [ClusterOutputs](#clusteroutputs-2-5-0) - - - -# Outputs - -The `Output` resource defines where your `Flows` can send the log messages. `Outputs` are the final stage for a logging `Flow`. - -The `Output` is a namespaced resource, which means only a `Flow` within the same namespace can access it. - -You can use secrets in these definitions, but they must also be in the same namespace. - -`Outputs` are configured in YAML. 
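As a sketch, an `Output` writing logs to a file might look like the following. The name and path are hypothetical; the plugin fields follow the logging operator's output plugin documentation:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: example-output      # hypothetical; referenced from a Flow in the same namespace
  namespace: default
spec:
  file:
    path: /tmp/logs/${tag}  # hypothetical destination path
```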
For the details of `Output` custom resource, see [OutputSpec.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/output_types/) - -For examples of configuration for each logging plugin supported by the logging operator, see the [logging operator documentation.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/) - - - -# ClusterOutputs - -`ClusterOutput` defines an `Output` without namespace restrictions. It is only effective when deployed in the same namespace as the logging operator. - -The Rancher UI provides forms for configuring the `ClusterOutput` type, target, and access credentials if applicable. - -`ClusterOutputs` are configured in YAML. For the details of `ClusterOutput` custom resource, see [ClusterOutput.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/clusteroutput_types/) - -For example configuration for each logging plugin supported by the logging operator, see the [logging operator documentation.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/) - -{{% /tab %}} -{{% /tabs %}} - - # YAML Examples Once logging is installed, you can use these examples to help craft your own logging pipeline. @@ -343,4 +303,4 @@ spec: ignore_network_errors_at_startup: false ``` -Let's break down what is happening here. First, we create a deployment of a container that has the additional `syslog` plugin and accepts logs forwarded from another `fluentd`. Next we create an `Output` configured as a forwarder to our deployment. The deployment `fluentd` will then forward all logs to the configured `syslog` destination. \ No newline at end of file +Let's break down what is happening here. First, we create a deployment of a container that has the additional `syslog` plugin and accepts logs forwarded from another `fluentd`. Next we create an `Output` configured as a forwarder to our deployment. 
The deployment `fluentd` will then forward all logs to the configured `syslog` destination. diff --git a/content/rancher/v2.6/en/logging/taints-tolerations/_index.md b/content/rancher/v2.6/en/logging/taints-tolerations/_index.md index c851f4bc296..00cee550a8d 100644 --- a/content/rancher/v2.6/en/logging/taints-tolerations/_index.md +++ b/content/rancher/v2.6/en/logging/taints-tolerations/_index.md @@ -19,21 +19,10 @@ Both provide choice for the what node(s) the pod will run on. ### Default Implementation in Rancher's Logging Stack -{{% tabs %}} -{{% tab "Rancher v2.5.8" %}} By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes. The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes. Moreover, most logging stack pods run on Linux only and have a `nodeSelector` added to ensure they run on Linux nodes. -{{% /tab %}} -{{% tab "Rancher v2.5.0-2.5.7" %}} -By default, Rancher taints all Linux nodes with `cattle.io/os=linux`, and does not taint Windows nodes. -The logging stack pods have `tolerations` for this taint, which enables them to run on Linux nodes. -Moreover, we can populate the `nodeSelector` to ensure that our pods *only* run on Linux nodes. - -{{% /tab %}} -{{% /tabs %}} - This example Pod YAML file shows a nodeSelector being used with a toleration: ```yaml @@ -74,4 +63,4 @@ However, if you would like to add tolerations for *only* the `fluentbit` contain ```yaml fluentbit_tolerations: # insert tolerations list for fluentbit containers only... 
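  # A sketch of what such a list might contain. The toleration below matches the
  # cattle.io/os=linux taint described earlier (the effect is an assumption):
  # - key: cattle.io/os
  #   operator: Equal
  #   value: linux
  #   effect: NoSchedule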
-``` \ No newline at end of file +``` diff --git a/content/rancher/v2.6/en/monitoring-alerting/_index.md b/content/rancher/v2.6/en/monitoring-alerting/_index.md index e45f6b8a830..f9682dbc192 100644 --- a/content/rancher/v2.6/en/monitoring-alerting/_index.md +++ b/content/rancher/v2.6/en/monitoring-alerting/_index.md @@ -62,9 +62,6 @@ As an [administrator]({{}}/rancher/v2.5/en/admin-settings/rbac/global-p > - Make sure your cluster fulfills the resource requirements. The cluster should have at least 1950Mi memory available, 2700m CPU, and 50Gi storage. A breakdown of the resource limits and requests is [here.](#setting-resource-limits-and-requests) > - When installing monitoring on an RKE cluster using RancherOS or Flatcar Linux nodes, change the etcd node certificate directory to `/opt/rke/etc/kubernetes/ssl`. -{{% tabs %}} -{{% tab "Rancher v2.5.8" %}} - ### Enable Monitoring for use without SSL 1. In the Rancher UI, go to the cluster where you want to install monitoring and click **Cluster Explorer.** @@ -100,21 +97,6 @@ key.pfx=`base64-content` Then **Cert File Path** would be set to `/etc/alertmanager/secrets/cert.pem`. -{{% /tab %}} -{{% tab "Rancher v2.5.0-2.5.7" %}} - -1. In the Rancher UI, go to the cluster where you want to install monitoring and click **Cluster Explorer.** -1. Click **Apps.** -1. Click the `rancher-monitoring` app. -1. Optional: Click **Chart Options** and configure alerting, Prometheus and Grafana. For help, refer to the [configuration reference.](./configuration) -1. Scroll to the bottom of the Helm chart README and click **Install.** - -**Result:** The monitoring app is deployed in the `cattle-monitoring-system` namespace. 
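The SSL steps above rely on a Secret holding the certificate files. A sketch of such a Secret — the name is hypothetical, and the base64 payloads are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-tls            # hypothetical name
  namespace: cattle-monitoring-system
type: Opaque
data:
  cert.pem: <base64-content>        # placeholder, not valid base64
  key.pfx: <base64-content>
```

With a secret like this mounted under `/etc/alertmanager/secrets/`, the **Cert File Path** resolves to `/etc/alertmanager/secrets/cert.pem` as described above.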
- -{{% /tab %}} - -{{% /tabs %}} - ### Default Alerts, Targets, and Grafana Dashboards By default, Rancher Monitoring deploys exporters (such as [node-exporter](https://github.com/prometheus/node_exporter) and [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)) as well as default Prometheus alerts and Grafana dashboards (curated by the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) project) onto a cluster. diff --git a/content/rancher/v2.6/en/monitoring-alerting/configuration/alertmanager/_index.md b/content/rancher/v2.6/en/monitoring-alerting/configuration/alertmanager/_index.md index d2414bfd56b..6d49c373b26 100644 --- a/content/rancher/v2.6/en/monitoring-alerting/configuration/alertmanager/_index.md +++ b/content/rancher/v2.6/en/monitoring-alerting/configuration/alertmanager/_index.md @@ -90,9 +90,6 @@ Rancher v2.5.8 added Microsoft Teams and SMS as configurable receivers in the Ra Rancher v2.5.4 introduced the capability to configure receivers by filling out forms in the Rancher UI. -{{% tabs %}} -{{% tab "Rancher v2.5.8" %}} - The following types of receivers can be configured in the Rancher UI: - Slack @@ -245,97 +242,8 @@ name: telegram-receiver-1 url http://rancher-alerting-drivers-sachet.ns-1.svc:9876/alert ``` - - -{{% /tab %}} - -{{% tab "Rancher v2.5.4-2.5.7" %}} - -The following types of receivers can be configured in the Rancher UI: - -- Slack -- Email -- PagerDuty -- Opsgenie -- Webhook -- Custom - -The custom receiver option can be used to configure any receiver in YAML that cannot be configured by filling out the other forms in the Rancher UI. - -### Slack {#slack-254-257} - -| Field | Type | Description | -|------|--------------|------| -| URL | String | Enter your Slack webhook URL. 
For instructions to create a Slack webhook, see the [Slack documentation.](https://get.slack.help/hc/en-us/articles/115005265063-Incoming-WebHooks-for-Slack) | -| Default Channel | String | Enter the name of the channel that you want to send alert notifications in the following format: `#`. | -| Proxy URL | String | Proxy for the webhook notifications. | -| Enable Send Resolved Alerts | Bool | Whether to send a follow-up notification if an alert has been resolved (e.g. [Resolved] High CPU Usage). | - -### Email {#email-254-257} - -| Field | Type | Description | -|------|--------------|------| -| Default Recipient Address | String | The email address that will receive notifications. | -| Enable Send Resolved Alerts | Bool | Whether to send a follow-up notification if an alert has been resolved (e.g. [Resolved] High CPU Usage). | - -SMTP options: - -| Field | Type | Description | -|------|--------------|------| -| Sender | String | Enter an email address available on your SMTP mail server that you want to send the notification from. | -| Host | String | Enter the IP address or hostname for your SMTP server. Example: `smtp.email.com`. | -| Use TLS | Bool | Use TLS for encryption. | -| Username | String | Enter a username to authenticate with the SMTP server. | -| Password | String | Enter a password to authenticate with the SMTP server. | - -### PagerDuty {#pagerduty-254-257} - -| Field | Type | Description | -|------|------|-------| -| Integration Type | String | `Events API v2` or `Prometheus`. | -| Default Integration Key | String | For instructions to get an integration key, see the [PagerDuty documentation.](https://www.pagerduty.com/docs/guides/prometheus-integration-guide/) | -| Proxy URL | String | Proxy for the PagerDuty notifications. | -| Enable Send Resolved Alerts | Bool | Whether to send a follow-up notification if an alert has been resolved (e.g. [Resolved] High CPU Usage). 
| - -### Opsgenie {#opsgenie-254-257} - -| Field | Description | -|------|-------------| -| API Key | For instructions to get an API key, refer to the [Opsgenie documentation.](https://docs.opsgenie.com/docs/api-key-management) | -| Proxy URL | Proxy for the Opsgenie notifications. | -| Enable Send Resolved Alerts | Whether to send a follow-up notification if an alert has been resolved (e.g. [Resolved] High CPU Usage). | - -Opsgenie Responders: - -| Field | Type | Description | -|-------|------|--------| -| Type | String | Schedule, Team, User, or Escalation. For more information on alert responders, refer to the [Opsgenie documentation.](https://docs.opsgenie.com/docs/alert-recipients-and-teams) | -| Send To | String | Id, Name, or Username of the Opsgenie recipient. | - -### Webhook {#webhook-1} - -| Field | Description | -|-------|--------------| -| URL | Webhook URL for the app of your choice. | -| Proxy URL | Proxy for the webhook notification. | -| Enable Send Resolved Alerts | Whether to send a follow-up notification if an alert has been resolved (e.g. [Resolved] High CPU Usage). | - -### Custom {#custom-254-257} - -The YAML provided here will be directly appended to your receiver within the Alertmanager Config Secret. - -{{% /tab %}} -{{% tab "Rancher v2.5.0-2.5.3" %}} -The Alertmanager must be configured in YAML, as shown in these [examples.](#example-alertmanager-configs) -{{% /tab %}} -{{% /tabs %}} - - # Route Configuration -{{% tabs %}} -{{% tab "Rancher v2.5.4+" %}} - ### Receiver The route needs to refer to a [receiver](#receiver-configuration) that has already been configured. @@ -364,12 +272,6 @@ match_re: [ : , ... 
] ``` -{{% /tab %}} -{{% tab "Rancher v2.5.0-2.5.3" %}} -The Alertmanager must be configured in YAML, as shown in these [examples.](#example-alertmanager-configs) -{{% /tab %}} -{{% /tabs %}} - # Example Alertmanager Configs ### Slack diff --git a/content/rancher/v2.6/en/monitoring-alerting/configuration/prometheusrules/_index.md b/content/rancher/v2.6/en/monitoring-alerting/configuration/prometheusrules/_index.md index eef2549284b..c2715edaab6 100644 --- a/content/rancher/v2.6/en/monitoring-alerting/configuration/prometheusrules/_index.md +++ b/content/rancher/v2.6/en/monitoring-alerting/configuration/prometheusrules/_index.md @@ -56,8 +56,6 @@ To create rule groups in the Rancher UI, # Configuration -{{% tabs %}} -{{% tab "Rancher v2.5.4" %}} Rancher v2.5.4 introduced the capability to configure PrometheusRules by filling out forms in the Rancher UI. @@ -93,8 +91,3 @@ Rancher v2.5.4 introduced the capability to configure PrometheusRules by filling | PromQL Expression | The PromQL expression to evaluate. Prometheus will evaluate the current value of this PromQL expression on every evaluation cycle and the result will be recorded as a new set of time series with the metric name as given by 'record'. For more information about expressions, refer to the [Prometheus documentation](https://prometheus.io/docs/prometheus/latest/querying/basics/) or our [example PromQL expressions.](../expression) | | Labels | Labels to add or overwrite before storing the result. | -{{% /tab %}} -{{% tab "Rancher v2.5.0-v2.5.3" %}} -For Rancher v2.5.0-v2.5.3, PrometheusRules must be configured in YAML. 
For examples, refer to the Prometheus documentation on [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [alerting rules.](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) -{{% /tab %}} -{{% /tabs %}} \ No newline at end of file diff --git a/content/rancher/v2.6/en/monitoring-alerting/persist-grafana/_index.md b/content/rancher/v2.6/en/monitoring-alerting/persist-grafana/_index.md index 95d45176ce5..d1c9984928e 100644 --- a/content/rancher/v2.6/en/monitoring-alerting/persist-grafana/_index.md +++ b/content/rancher/v2.6/en/monitoring-alerting/persist-grafana/_index.md @@ -12,9 +12,6 @@ To allow the Grafana dashboard to persist after the Grafana instance restarts, a # Creating a Persistent Grafana Dashboard -{{% tabs %}} -{{% tab "Rancher v2.5.8+" %}} - > **Prerequisites:** > > - The monitoring application needs to be installed. @@ -83,50 +80,3 @@ grafana.sidecar.dashboards.searchNamespace=ALL Note that the RBAC roles exposed by the Monitoring chart to add Grafana Dashboards are still restricted to giving permissions for users to add dashboards in the namespace defined in `grafana.dashboards.namespace`, which defaults to `cattle-dashboards`. -{{% /tab %}} -{{% tab "Rancher v2.5.0-v2.5.8" %}} -> **Prerequisites:** -> -> - The monitoring application needs to be installed. -> - You must have the cluster-admin ClusterRole permission. - -1. Open the Grafana dashboard. From the **Cluster Explorer,** click **Cluster Explorer > Monitoring.** -1. Log in to Grafana. Note: The default Admin username and password for the Grafana instance is `admin/prom-operator`. Alternative credentials can also be supplied on deploying or upgrading the chart. - - > **Note:** Regardless of who has the password, cluster administrator permission in Rancher is still required to access the Grafana instance. -1. Go to the dashboard that you want to persist. 
In the top navigation menu, go to the dashboard settings by clicking the gear icon. -1. In the left navigation menu, click **JSON Model.** -1. Copy the JSON data structure that appears. -1. Create a ConfigMap in the `cattle-dashboards` namespace. The ConfigMap needs to have the label `grafana_dashboard: "1"`. Paste the JSON into the ConfigMap in the format shown in the example below: - - ```yaml - apiVersion: v1 - kind: ConfigMap - metadata: - labels: - grafana_dashboard: "1" - name: - namespace: cattle-dashboards - data: - .json: |- - - ``` - -**Result:** After the ConfigMap is created, it should show up on the Grafana UI and be persisted even if the Grafana pod is restarted. - -Dashboards that are persisted using ConfigMaps cannot be deleted from the Grafana UI. If you attempt to delete the dashboard in the Grafana UI, you will see the error message "Dashboard cannot be deleted because it was provisioned." To delete the dashboard, you will need to delete the ConfigMap. - -To prevent the persistent dashboard from being deleted when Monitoring v2 is uninstalled, add the following annotation to the `cattle-dashboards` namespace: - -``` -helm.sh/resource-policy: "keep" -``` - -{{% /tab %}} -{{% /tabs %}} - -# Known Issues - -For users who are using Monitoring V2 v9.4.203 or below, uninstalling the Monitoring chart will delete the `cattle-dashboards` namespace, which will delete all persisted dashboards, unless the namespace is marked with the annotation `helm.sh/resource-policy: "keep"`. - -This annotation will be added by default in the new monitoring chart released by Rancher v2.5.8, but it still needs to be manually applied for users of earlier Rancher versions.
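The `helm.sh/resource-policy: "keep"` annotation described above can be expressed directly on the namespace; a minimal sketch:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cattle-dashboards
  annotations:
    # Tells Helm to leave this namespace (and the dashboard ConfigMaps in it)
    # in place when the Monitoring chart is uninstalled.
    helm.sh/resource-policy: "keep"
```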