---
title: Launching Kubernetes on Existing Custom Nodes
description: To create a cluster with custom nodes, you'll need to access servers in your cluster and provision them according to Rancher requirements
---
When you create a custom cluster, Rancher can use RKE2/K3s to create a Kubernetes cluster on on-prem bare-metal servers, on-prem virtual machines, or on any nodes hosted by an infrastructure provider.
To use this option, you need access to the servers that will be part of your Kubernetes cluster. Provision each server according to the requirements. Then, run the command provided in the Rancher UI on each server to convert it into a Kubernetes node.
This section describes how to set up a custom cluster.
## Creating a Cluster with Custom Nodes
:::note Want to use Windows hosts as Kubernetes workers?
See Configuring Custom Clusters for Windows before you start.
:::
### 1. Provision a Linux Host
Begin creation of a custom cluster by provisioning a Linux host. Your host can be:
- A cloud-hosted virtual machine (VM)
- An on-prem VM
- A bare-metal server
If you want to reuse a node from a previous custom cluster, clean the node before using it in a cluster again. If you reuse a node that hasn't been cleaned, cluster provisioning may fail.
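A common way to clean a previously-used node is to stop the Rancher agent and run the uninstall script that ships with the Kubernetes distribution. The sketch below assumes an RKE2 node; K3s nodes install `/usr/local/bin/k3s-uninstall.sh` instead. See the node cleanup documentation for the full procedure.

```bash
# Sketch: clean a node that previously ran RKE2 before reusing it.
# Stop the Rancher agent if it is still running.
sudo systemctl stop rancher-system-agent.service || true

# Remove the RKE2 installation and its data.
sudo /usr/local/bin/rke2-uninstall.sh

# Clear any leftover state so provisioning starts from a clean slate.
sudo rm -rf /etc/rancher /var/lib/rancher
```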
Provision the host according to the installation requirements and the checklist for production-ready clusters.
:::note IPv6-only cluster

For an IPv6-only cluster, ensure that your operating system correctly configures the `/etc/hosts` file:

```
::1 localhost
```

:::
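To confirm the mapping is present, or add it if missing, you can run something like the following (a sketch, assuming a standard `/etc/hosts` layout):

```bash
# Append the IPv6 loopback entry only if it is not already there.
grep -qE '^::1[[:space:]]' /etc/hosts || \
  echo '::1 localhost' | sudo tee -a /etc/hosts
```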
### 2. Create the Custom Cluster
1. Click ☰ > Cluster Management.

2. On the Clusters page, click Create.

3. Click Custom.

4. Enter a Cluster Name.

5. Use the Cluster Configuration section to set up the cluster. For more information, see RKE2 Cluster Configuration Reference and K3s Cluster Configuration Reference.

    :::note Windows nodes

    To learn more about using Windows nodes as Kubernetes workers, see Launching Kubernetes on Windows Clusters.

    :::

6. Click Create.

    **Result:** The UI redirects to the Registration page, where you can generate the registration command for your nodes.
7. From Node Role, select the roles you want a cluster node to fill. You must provision at least one node for each role: etcd, worker, and control plane. A custom cluster requires all three roles to finish provisioning. For more information on roles, see Roles for Nodes in Kubernetes Clusters.

    :::note Bare-Metal Server

    If you plan to dedicate bare-metal servers to each role, you must provision a bare-metal server for each role (i.e., provision multiple bare-metal servers).

    :::

8. Optional: Click Show Advanced to configure additional settings, such as specifying the IP address(es), overriding the node hostname, or adding labels or taints to the node.

    :::note

    The Node Public IP and Node Private IP fields can accept either a single address or a comma-separated list of addresses (for example, `10.0.0.5,2001:db8::1`).

    :::

    :::note IPv6-only or Dual-stack Cluster

    In both IPv6-only and dual-stack clusters, you should specify the node's IPv6 address as the Node Private IP.

    :::
9. Copy the command displayed on screen to your clipboard.

10. Log in to your Linux host using your preferred shell, such as PuTTY or a remote terminal connection, and run the command copied to your clipboard. A representative example of the generated command is shown after these steps.

    :::note

    Repeat steps 7-10 as many times as needed if you want to dedicate specific hosts to specific node roles.

    :::
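For reference, the command generated on the Registration page follows this general shape. The server URL and token below are placeholders, and the exact flags (roles, addresses, CA checksum) depend on your selections in the UI, so always copy the real command from the Registration page:

```bash
# Illustrative only: the actual command comes from the Registration page.
curl -fL https://rancher.example.com/system-agent-install.sh | sudo sh -s - \
  --server https://rancher.example.com \
  --token <registration-token> \
  --etcd --controlplane --worker
```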
**Result:**

The cluster is created and transitions to the Updating state while Rancher initializes and provisions cluster components. You can access your cluster after its state is updated to Active.

Active clusters are assigned two Projects:

- Default, containing the `default` namespace
- System, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
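Once the cluster is Active, you can spot-check it from your workstation. This is a minimal sketch, assuming you downloaded the cluster's kubeconfig from the Rancher UI to a hypothetical path:

```bash
# Point kubectl at the downloaded kubeconfig (the path is an assumption).
export KUBECONFIG=~/Downloads/custom-cluster.yaml

# Every node you registered should appear and report Ready.
kubectl get nodes -o wide
```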
### 3. Amazon Only: Tag Resources
If you have configured your cluster to use Amazon as its cloud provider, tag your AWS resources with a cluster ID.
Amazon Documentation: Tagging Your Amazon EC2 Resources
:::note
You can use Amazon EC2 instances without configuring a cloud provider in Kubernetes. You only have to configure the cloud provider if you want to use specific Kubernetes cloud provider functionality. For more information, see Kubernetes Cloud Providers
:::
The following resources need to be tagged with a `ClusterID`:

- Nodes: All hosts added in Rancher.
- Subnet: The subnet used for your cluster.
- Security Group: The security group used for your cluster.
:::note
Do not tag multiple security groups. Tagging multiple groups generates an error when creating an Elastic Load Balancer.
:::
The tag that should be used is:

```
Key=kubernetes.io/cluster/<CLUSTERID>, Value=owned
```

`<CLUSTERID>` can be any string you choose. However, the same string must be used on every resource you tag. Setting the tag value to `owned` informs the cluster that all resources tagged with the `<CLUSTERID>` are owned and managed by this cluster.

If you share resources between clusters, you can change the tag to:

```
Key=kubernetes.io/cluster/<CLUSTERID>, Value=shared
```
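As an illustration, the tags can be applied with the AWS CLI. The resource IDs and the cluster ID `my-cluster` below are placeholders for your own values:

```bash
# Tag a node instance, the cluster subnet, and the security group in one call.
aws ec2 create-tags \
  --resources i-0123456789abcdef0 subnet-0123456789abcdef0 sg-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=owned
```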
## Optional Next Steps
After creating your cluster, you can access it through the Rancher UI. As a best practice, we recommend setting up these alternate ways of accessing your cluster:
- Access your cluster with the kubectl CLI: Follow these steps to access clusters with kubectl on your workstation. In this case, you authenticate through the Rancher server's authentication proxy, and Rancher then connects you to the downstream cluster. This method lets you manage the cluster without the Rancher UI.
- Access your cluster with the kubectl CLI, using the authorized cluster endpoint: Follow these steps to access your cluster with kubectl directly, without authenticating through Rancher. We recommend setting up this alternative so that you can still access the cluster even if you can't connect to Rancher.
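The kubeconfig that Rancher generates typically includes a context that proxies through the Rancher server and, when the authorized cluster endpoint is enabled, an additional context that connects to the cluster directly. The context names below are illustrative:

```bash
# List the contexts in the kubeconfig downloaded from Rancher.
kubectl config get-contexts

# Switch to the hypothetical direct (authorized cluster endpoint) context.
kubectl config use-context my-cluster-fqdn
kubectl get nodes
```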