Mirror of https://github.com/rancher/rancher-docs.git (synced 2026-04-16 03:15:39 +00:00)

Commit: Merge branch 'rancher:main' into windows_clusters
.github/dependabot.yml — vendored, new file (8 lines)
@@ -0,0 +1,8 @@
+version: 2
+
+updates:
+  - package-ecosystem: gitsubmodule
+    schedule:
+      interval: "daily"
+    directory: /
+
.github/pull_request_template.md — vendored (4 lines changed)
@@ -10,7 +10,7 @@ Fixes #[issue_number]

 - Verify if changes pertain to other versions of Rancher. If they do, finalize the edits on one version of the page, then apply the edits to the other versions.

-- If the pull request is dependent on an upcoming release, make sure to target the release branch instead of `main`.
+- If the pull request is dependent on an upcoming release, remember to add a "MERGE ON RELEASE" label and set the proper milestone.

 ## Description

@@ -24,4 +24,4 @@ Fixes #[issue_number]

 <!--
 Any additional notes a reviewer should know before we review.
--->
+-->
.github/styles/suse-vale-styleguide — vendored submodule (2 lines changed)
Submodule .github/styles/suse-vale-styleguide updated: 06f144fdfc...45136e8ea1
.github/workflows/deploy.yml — vendored (45 lines changed)
@@ -4,16 +4,18 @@ on:
   push:
     branches:
       - main
     paths-ignore:
       - '**/README.md'

 jobs:
-  deploy:
-    name: Deploy to GitHub Pages
+  build:
+    name: Build Docusaurus
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
-      - uses: actions/setup-node@v3
+      - uses: actions/setup-node@v4
         with:
           node-version: 18
           cache: yarn

@@ -25,18 +27,25 @@ jobs:
           NODE_OPTIONS: "--max_old_space_size=7168"
         run: yarn build --no-minify

-      # Popular action to deploy to GitHub Pages:
-      # Docs: https://github.com/peaceiris/actions-gh-pages#%EF%B8%8F-docusaurus
-      - name: Deploy to GitHub Pages
-        uses: peaceiris/actions-gh-pages@v3
+      - name: Upload Build Artifact
+        uses: actions/upload-pages-artifact@v3
         with:
-          github_token: ${{ secrets.GITHUB_TOKEN }}
-          # Build output to publish to the `gh-pages` branch:
-          publish_dir: ./build
-          # The following lines assign commit authorship to the official
-          # GH-Actions bot for deploys to `gh-pages` branch:
-          # https://github.com/actions/checkout/issues/13#issuecomment-724415212
-          # The GH actions bot is used by default if you didn't specify the two fields.
-          # You can swap them out with your own user credentials.
-          user_name: github-actions[bot]
-          user_email: 41898282+github-actions[bot]@users.noreply.github.com
+          path: build
+
+  deploy:
+    name: Deploy to GitHub Pages
+    needs: build
+
+    permissions:
+      pages: write
+      id-token: write
+
+    environment:
+      name: github-pages
+      url: ${{ steps.deployment.outputs.page_url }}
+
+    runs-on: ubuntu-latest
+    steps:
+      - name: Deploy to GitHub Pages
+        id: deployment
+        uses: actions/deploy-pages@v4
.github/workflows/test-deploy.yml — vendored (10 lines changed)
@@ -2,16 +2,18 @@ name: Test deployment

 on:
   pull_request:
     branches:
       - main
     paths-ignore:
       - '**/README.md'

 jobs:
   test-deploy:
     name: Test deployment
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
-      - uses: actions/setup-node@v3
+      - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+      - uses: actions/setup-node@v4
         with:
           node-version: 18
           cache: yarn
.github/workflows/vale.yml — vendored (5 lines changed)
@@ -5,7 +5,10 @@
 # It uses Vale (https://vale.sh/docs/vale-cli/installation/) to provide feedback base off the SUSE Style Guide / OpenSUSE style rules (https://github.com/openSUSE/suse-vale-styleguide)

 name: Style check
-on: [pull_request]
+on:
+  pull_request:
+    paths-ignore:
+      - '**/README.md'

 jobs:
   vale-lint:
@@ -1,7 +1,7 @@
-StylesPath = .github/styles
+StylesPath = .github/styles/suse-vale-styleguide

 [formtats]
 mdx = md

 [*.md]
-BasedOnStyles = suse-vale-styleguide
+BasedOnStyles = common
@@ -15,9 +15,9 @@ To get started, [fork](https://github.com/rancher/rancher-docs/fork) and clone t

 Our repository doesn't allow you to make changes directly to the `main` branch. Create a working branch and make pull requests from your fork to [rancher/rancher-docs](https://github.com/rancher/rancher-docs).

-For most updates, you'll need to edit a file in the `/docs` directory, which represents the ["Latest"](https://ranchermanager.docs.rancher.com/) version of our published documentation. The "Latest" version is a mirror of the most recently released version of Rancher. As of December 2023, the most recently released version of Rancher is 2.8.
+For most updates, you'll need to edit a file in the `/docs` directory, which represents the ["Latest"](https://ranchermanager.docs.rancher.com/) version of our published documentation. The "Latest" version is a mirror of the most recently released version of Rancher. As of August 2024, the most recently released version of Rancher is 2.9.

-Whenever an update is made to `/docs`, you should apply the same change to the corresponding file in `/versioned_docs/version-2.8`. If a change only affects older versions, you don't need to mirror it to the `/docs` directory.
+Whenever an update is made to `/docs`, you should apply the same change to the corresponding file in `/versioned_docs/version-2.9`. If a change only affects older versions, you don't need to mirror it to the `/docs` directory.

 If a file is moved or renamed, you'll also need to edit the `sidebars.js` files for each affected version, as well as the list of redirects in `docusaurus.config.js`. See [Moving or Renaming Docs](./moving-or-renaming-docs.md).
@@ -1,5 +1,6 @@
 ---
 title: API Reference
+hide_table_of_contents: true
 ---

 <head>
@@ -1,11 +1,13 @@
 ---
-title: API Tokens
+title: Using API Tokens
 ---

 <head>
-<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/about-the-api/api-tokens"/>
+<link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/api-tokens"/>
 </head>

+Rancher v2.8.0 introduced the [Rancher Kubernetes API](./api-reference.mdx) which can be used to manage Rancher resources through `kubectl`. This page covers information on API tokens used with the [Rancher CLI](../reference-guides/cli-with-rancher/cli-with-rancher.md), [kubeconfig files](../how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md#about-the-kubeconfig-file), Terraform and the [v3 API browser](./v3-rancher-api-guide.md#enable-view-in-api).
+
 By default, some cluster-level API tokens are generated with infinite time-to-live (`ttl=0`). In other words, API tokens with `ttl=0` never expire unless you invalidate them. Tokens are not invalidated by changing a password.

 You can deactivate API tokens by deleting them or by deactivating the user account.
@@ -43,13 +45,11 @@ This setting is used by all kubeconfig tokens except those created by the CLI to

 ## Disable Tokens in Generated Kubeconfigs

-Set the `kubeconfig-generate-token` setting to `false`. This setting instructs Rancher to no longer automatically generate a token when a user clicks on download a kubeconfig file. When this setting is deactivated, a generated kubeconfig references the [Rancher CLI](../cli-with-rancher/kubectl-utility.md#authentication-with-kubectl-and-kubeconfig-tokens-with-ttl) to retrieve a short-lived token for the cluster. When this kubeconfig is used in a client, such as `kubectl`, the Rancher CLI needs to be installed to complete the log in request.
+Set the `kubeconfig-generate-token` setting to `false`. This setting instructs Rancher to no longer automatically generate a token when a user clicks on download a kubeconfig file. When this setting is deactivated, a generated kubeconfig references the [Rancher CLI](../reference-guides/cli-with-rancher/kubectl-utility.md#authentication-with-kubectl-and-kubeconfig-tokens-with-ttl) to retrieve a short-lived token for the cluster. When this kubeconfig is used in a client, such as `kubectl`, the Rancher CLI needs to be installed to complete the log in request.

 ## Token Hashing

-Users can enable token hashing, where tokens undergo a one-way hash using the SHA256 algorithm. This is a non-reversible process: once enabled, this feature cannot be disabled. It is advisable to take backups prior to enabling and/or evaluating in a test environment first.
-
-To enable token hashing, refer to [this section](../../how-to-guides/advanced-user-guides/enable-experimental-features/enable-experimental-features.md).
+You can [enable token hashing](../how-to-guides/advanced-user-guides/enable-experimental-features/enable-experimental-features.md), where tokens undergo a one-way hash using the SHA256 algorithm. This is a non-reversible process: once enabled, this feature cannot be disabled. You should first evaluate this setting in a test environment, and/or take backups before enabling.

 This feature affects all tokens which include, but are not limited to, the following:

@@ -82,4 +82,4 @@ Maximum Time to Live (TTL) in minutes allowed for auth tokens. If a user attempt

 ### kubeconfig-generate-token

-When true, kubeconfigs requested through the UI contain a valid token. When false, kubeconfigs contain a command that uses the Rancher CLI to prompt the user to log in. [The CLI then retrieves and caches a token for the user](../cli-with-rancher/kubectl-utility.md#authentication-with-kubectl-and-kubeconfig-tokens-with-ttl).
+When true, kubeconfigs requested through the UI contain a valid token. When false, kubeconfigs contain a command that uses the Rancher CLI to prompt the user to log in. [The CLI then retrieves and caches a token for the user](../reference-guides/cli-with-rancher/kubectl-utility.md#authentication-with-kubectl-and-kubeconfig-tokens-with-ttl).
@@ -1,12 +1,12 @@
 ---
-title: API Quick Start Guide
+title: RK-API Quick Start Guide
 ---

 <head>
 <link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/quickstart"/>
 </head>

-You can access Rancher's resources through the Kubernetes API. This guide will help you get started on using this API as a Rancher user.
+You can access Rancher's resources through the Kubernetes API. This guide helps you get started on using this API as a Rancher user.

 1. In the upper left corner, click **☰ > Global Settings**.
 2. Find and copy the address in the `server-url` field.

@@ -129,7 +129,7 @@ To ensure that your tools can recognize Rancher's CA certificates, most setups r
 If your Rancher instance is proxied by another service, you must extract the certificate that the service is using, and add it to the kubeconfig file, as demonstrated in step 5.
 :::

-4. The following commands will convert `rancher.crt` to base64 output, trim all new-lines, and update the cluster in the kubeconfig with the certificate, then finishing by removing the `rancher.crt` file:
+4. The following commands convert `rancher.crt` to base64 output, trim all new-lines, and update the cluster in the kubeconfig with the certificate, then finish by removing the `rancher.crt` file:

   ```bash
   export KUBECONFIG=$PATH_TO_RANCHER_KUBECONFIG
   ```
docs/api/v3-rancher-api-guide.md — new file (94 lines)
@@ -0,0 +1,94 @@
---
title: Previous v3 Rancher API Guide
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/v3-rancher-api-guide"/>
</head>

Rancher v2.8.0 introduced the Rancher Kubernetes API (RK-API). The previous v3 Rancher API is still available. This page describes the v3 API. For more information on RK-API, see the [RK-API quickstart](./quickstart.md) and [reference guide](./api-reference.mdx).

## How to Use the API

The previous v3 API has its own user interface accessible from a [web browser](#enable-view-in-api). This is an easy way to see resources, perform actions, and see the equivalent `curl` or HTTP request & response. To access it:

<Tabs>
<TabItem value="Rancher v2.6.4+">

1. Click your user avatar in the upper right corner.
1. Click **Account & API Keys**.
1. Under the **API Keys** section, find the **API Endpoint** field and click the link. The link looks something like `https://<RANCHER_FQDN>/v3`, where `<RANCHER_FQDN>` is the fully qualified domain name of your Rancher deployment.

</TabItem>
<TabItem value="Rancher before v2.6.4">

Go to the URL endpoint at `https://<RANCHER_FQDN>/v3`, where `<RANCHER_FQDN>` is the fully qualified domain name of your Rancher deployment.

</TabItem>
</Tabs>

## Authentication

API requests must include authentication information. Authentication is done with HTTP basic authentication using [API keys](../reference-guides/user-settings/api-keys.md). API keys can create new clusters and have access to multiple clusters via `/v3/clusters/`. [Cluster and project roles](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md) apply to these keys and restrict what clusters and projects the account can see and what actions they can take.

By default, certain cluster-level API tokens are generated with infinite time-to-live (`ttl=0`). In other words, API tokens with `ttl=0` never expire unless you invalidate them. For details on how to invalidate them, refer to the [API tokens page](api-tokens.md).
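An editor's aside: the basic-auth scheme described above can be sketched in shell with placeholder credentials (the key names, the secret, and the `rancher.example.com` host are hypothetical, not taken from this commit):

```shell
# Hypothetical API key from the Rancher UI (Account & API Keys) -- placeholders only.
ACCESS_KEY="token-abcde"   # token name (placeholder)
SECRET_KEY="secret"        # token secret (placeholder)

# HTTP basic auth sends base64("<token name>:<secret>") in the Authorization header.
AUTH=$(printf '%s' "${ACCESS_KEY}:${SECRET_KEY}" | base64)
echo "Authorization: Basic ${AUTH}"

# Equivalent request with curl, which builds the same header via -u
# (commented out; requires a live Rancher server):
# curl -s -u "${ACCESS_KEY}:${SECRET_KEY}" "https://rancher.example.com/v3/clusters"
```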

## Making Requests

The API is generally RESTful but has several features to make the definition of everything discoverable by a client, so that generic clients can be written instead of having to write specific code for every type of resource. For detailed info about the generic API spec, [see further documentation](https://github.com/rancher/api-spec/blob/master/specification.md).

- Every type has a Schema which describes:
  - The URL to get to the collection of this type of resource.
  - Every field the resource can have, along with their type, basic validation rules, whether they are required or optional, etc.
  - Every action that is possible on this type of resource, with their inputs and outputs (also as schemas).
  - Every field that allows filtering.
  - What HTTP verb methods are available for the collection itself, or for individual resources in the collection.

The design allows you to load just the list of schemas and access everything about the API. The UI for the API contains no code specific to Rancher itself. The URL to get Schemas is sent in every HTTP response as a `X-Api-Schemas` header. From there you can follow the `collection` link on each schema to know where to list resources, and follow other `links` inside of the returned resources to get any other information.

In practice, you may just want to construct URL strings. We highly suggest limiting this to the top level: list a collection (`/v3/<type>`) or get a specific resource (`/v3/<type>/<id>`). Anything deeper than that is subject to change in future releases.

Resources have relationships between each other called links. Each resource includes a map of `links` with the name of the link and the URL where you can retrieve that information. Again, you should `GET` the resource and then follow the URL in the `links` map, not construct these strings yourself.

Most resources have actions, which do something or change the state of the resource. To use them, send an HTTP `POST` to the URL in the `actions` map of the action you want. Certain actions require input or produce output. See the individual documentation for each type or the schemas for specific information.

To edit a resource, send an HTTP `PUT` to the `links.update` link on the resource with the fields that you want to change. If the link is missing, you don't have permission to update the resource. Unknown fields and fields that are not editable are ignored.

To delete a resource, send an HTTP `DELETE` to the `links.remove` link on the resource. If the link is missing, you don't have permission to delete the resource.

To create a new resource, send an HTTP `POST` to the collection URL in the schema (which is `/v3/<type>`).
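The URL conventions above can be illustrated with a short, hedged sketch; the host, type, and ID below are placeholders, and the commented curl calls assume the `ACCESS_KEY`/`SECRET_KEY` pair from the Authentication section:

```shell
# Placeholders -- substitute your Rancher FQDN, resource type, and resource ID.
RANCHER="https://rancher.example.com"
TYPE="clusters"
ID="c-abcde"

# The two top-level URL shapes the text recommends constructing yourself:
echo "list:   ${RANCHER}/v3/${TYPE}"
echo "single: ${RANCHER}/v3/${TYPE}/${ID}"

# Edit, delete, and create use the verbs described above. The links.update and
# links.remove URLs come from a prior GET; never hand-build them.
# (Commented out; these need a live server and credentials.)
# curl -s -u "$ACCESS_KEY:$SECRET_KEY" -X PUT    "<links.update URL>" -d '{"description":"x"}'
# curl -s -u "$ACCESS_KEY:$SECRET_KEY" -X DELETE "<links.remove URL>"
# curl -s -u "$ACCESS_KEY:$SECRET_KEY" -X POST   "${RANCHER}/v3/${TYPE}" -d '{"name":"example"}'
```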

## Filtering

Most collections can be filtered on the server side by common fields using HTTP query parameters. The `filters` map shows you what fields can be filtered on and what the filter values were for the request you made. The API UI has controls to set up filtering and show you the appropriate request. For simple "equals" matches it's just `field=value`. Modifiers can be added to the field name, for example, `field_gt=42` for "field is greater than 42." See the [API spec](https://github.com/rancher/api-spec/blob/master/specification.md#filtering) for full details.
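A quick sketch of the two filter forms named above; the host and the `nodeCount` field are hypothetical examples, not fields guaranteed by any schema (check the `filters` map for what your collection actually supports):

```shell
RANCHER="https://rancher.example.com"   # placeholder host

# Simple "equals" match: field=value
EQ_URL="${RANCHER}/v3/projects?name=Default"

# Modifier match: field_<modifier>=value ("greater than" here; field name is hypothetical)
GT_URL="${RANCHER}/v3/clusters?nodeCount_gt=3"

echo "$EQ_URL"
echo "$GT_URL"
```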

## Sorting

Most collections can be sorted on the server-side by common fields using HTTP query parameters. The `sortLinks` map shows you what sorts are available, along with the URL to get the collection sorted by that. It also includes info about what the current response was sorted by, if specified.

## Pagination

API responses are paginated with a limit of 100 resources per page by default. This can be changed with the `limit` query parameter, up to a maximum of 1000, for example, `/v3/pods?limit=1000`. The `pagination` map in collection responses tells you whether or not you have the full result set and has a link to the next page if you do not.
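The pagination behavior above can be sketched as follows. Only the `limit` parameter is shown executably; the page-walking loop is left commented as an assumption about the response shape (`data`, `pagination.next`) and would need `curl`, `jq`, credentials, and a live server:

```shell
RANCHER="https://rancher.example.com"   # placeholder host

# Raise the page size from the default 100 to the maximum 1000:
PAGE_URL="${RANCHER}/v3/pods?limit=1000"
echo "$PAGE_URL"

# Sketch: follow pagination.next until no next page is returned (commented out; untested):
# url="$PAGE_URL"
# while [ -n "$url" ] && [ "$url" != "null" ]; do
#   page=$(curl -s -u "$ACCESS_KEY:$SECRET_KEY" "$url")
#   printf '%s\n' "$page" | jq -r '.data[].id'
#   url=$(printf '%s' "$page" | jq -r '.pagination.next')
# done
```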

## Capturing v3 API Calls

You can use browser developer tools to capture how the v3 API is called. For example, you could follow these steps to use the Chrome developer tools to get the API call for provisioning an RKE cluster:

1. In the Rancher UI, go to **Cluster Management** and click **Create**.
1. Click one of the cluster types. This example uses DigitalOcean.
1. Fill out the form with a cluster name and node template, but don't click **Create**.
1. You need to open the developer tools before creating the cluster, so that the API call is recorded. To open the tools, right-click the Rancher UI and click **Inspect**.
1. In the developer tools, click the **Network** tab.
1. On the **Network** tab, make sure **Fetch/XHR** is selected.
1. In the Rancher UI, click **Create**. In the developer tools, you should see a new network request with the name `cluster?_replace=true`.
1. Right-click `cluster?_replace=true` and click **Copy > Copy as cURL**.
1. Paste the result into any text editor. You can see the POST request, including the URL it was sent to, all headers, and the full body of the request. This command can be used to create a cluster from the command line. Note: store the request in a safe place, because it contains credentials.

### Enable View in API

You can also view captured v3 API calls for your respective clusters and resources. This feature is not enabled by default. To enable it:

1. Click your **User Tile** in the top right corner of the UI and select **Preferences** from the drop-down menu.
2. Under the **Advanced Features** section, click **Enable "View in API"**.

Once checked, the **View in API** link is displayed under the **⋮** sub-menu on resource pages in the UI.
@@ -1,9 +0,0 @@ (file removed)
---
title: RKE Cluster Configuration
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration"/>
</head>

This page has moved [here.](../../../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md)
@@ -96,7 +96,7 @@ Kubernetes workers should open TCP port `6783` (control port), UDP port `6783` a

 For more information, see the following pages:

-- [Weave Net Official Site](https://www.weave.works/)
+- [Weave Net Official Site](https://github.com/weaveworks/weave/blob/master/site/overview.md)

 ### RKE2 Kubernetes clusters
@@ -6,21 +6,20 @@ title: Deprecated Features in Rancher
 <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/deprecated-features"/>
 </head>

-### What is Rancher's deprecation policy?
+## What is Rancher's deprecation policy?

 We have published our official deprecation policy in the support [terms of service](https://rancher.com/support-maintenance-terms).

-### Where can I find out which features have been deprecated in Rancher?
+## Where can I find out which features have been deprecated in Rancher?

 Rancher will publish deprecated features as part of the [release notes](https://github.com/rancher/rancher/releases) for Rancher found on GitHub. Please consult the following patch releases for deprecated features:

 | Patch Version | Release Date |
 |---------------|---------------|
-| [2.8.3](https://github.com/rancher/rancher/releases/tag/v2.8.3) | Mar 28, 2024 |
-| [2.8.2](https://github.com/rancher/rancher/releases/tag/v2.8.2) | Feb 8, 2024 |
-| [2.8.1](https://github.com/rancher/rancher/releases/tag/v2.8.1) | Jan 22, 2024 |
-| [2.8.0](https://github.com/rancher/rancher/releases/tag/v2.8.0) | Dec 6, 2023 |
+| [2.9.2](https://github.com/rancher/rancher/releases/tag/v2.9.2) | Sep 19, 2024 |
+| [2.9.1](https://github.com/rancher/rancher/releases/tag/v2.9.1) | Aug 26, 2024 |
+| [2.9.0](https://github.com/rancher/rancher/releases/tag/v2.9.0) | Jul 31, 2024 |

-### What can I expect when a feature is marked for deprecation?
+## What can I expect when a feature is marked for deprecation?

 In the release where functionality is marked as "Deprecated", it will still be available and supported, allowing upgrades to follow the usual procedure. Once upgraded, users/admins should start planning to move away from the deprecated functionality before upgrading to the release where it is marked as removed. The recommendation for new deployments is to not use the deprecated feature.
@@ -18,19 +18,19 @@ enable_cri_dockerd: true

 For users looking to use another container runtime, Rancher has the edge-focused K3s and datacenter-focused RKE2 Kubernetes distributions that use containerd as the default runtime. Imported RKE2 and K3s Kubernetes clusters can then be upgraded and managed through Rancher even after the removal of in-tree Dockershim in Kubernetes 1.24.

-### FAQ
+## FAQ

 <br/>

-Q. Do I have to upgrade Rancher to get Rancher’s support of the upstream Dockershim?
+Q: Do I have to upgrade Rancher to get Rancher’s support of the upstream Dockershim?

 The upstream support of Dockershim begins for RKE in Kubernetes 1.21. You will need to be on Rancher 2.6 or above to have support for RKE with Kubernetes 1.21. See our [support matrix](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.6.0/) for details.

 <br/>

-Q. I am currently on RKE with Kubernetes 1.20. Do I need to upgrade to RKE with Kubernetes 1.21 sooner to avoid being out of support for Dockershim?
+Q: I am currently on RKE with Kubernetes 1.20. Do I need to upgrade to RKE with Kubernetes 1.21 sooner to avoid being out of support for Dockershim?

-A. The version of Dockershim in RKE with Kubernetes 1.20 will continue to work and is not scheduled for removal upstream until Kubernetes 1.24. It will only emit a warning of its future deprecation, which Rancher has mitigated in RKE with Kubernetes 1.21. You can plan your upgrade to Kubernetes 1.21 as you would normally, but should consider enabling the external Dockershim by Kubernetes 1.22. The external Dockershim will need to be enabled before upgrading to Kubernetes 1.24, at which point the existing implementation will be removed.
+A: The version of Dockershim in RKE with Kubernetes 1.20 will continue to work and is not scheduled for removal upstream until Kubernetes 1.24. It will only emit a warning of its future deprecation, which Rancher has mitigated in RKE with Kubernetes 1.21. You can plan your upgrade to Kubernetes 1.21 as you would normally, but should consider enabling the external Dockershim by Kubernetes 1.22. The external Dockershim will need to be enabled before upgrading to Kubernetes 1.24, at which point the existing implementation will be removed.

 For more information on the deprecation and its timeline, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed).
@@ -10,10 +10,6 @@ This FAQ is a work in progress designed to answer the questions most frequently

 See the [Technical FAQ](technical-items.md) for frequently asked technical questions.

-## Does Rancher v2.x support Docker Swarm and Mesos as environment types?
-
-Swarm and Mesos are no longer selectable options when you create a new environment in Rancher v2.x. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make but, in the end, it came down to adoption. For example, out of more than 15,000 clusters, only about 200 were running Swarm.
-
 ## Is it possible to manage Azure Kubernetes Services with Rancher v2.x?

 Yes. See our [Cluster Administration](../how-to-guides/new-user-guides/manage-clusters/manage-clusters.md) guide for what Rancher features are available on AKS, as well as our [documentation on AKS](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-aks.md).
@@ -8,11 +8,11 @@ title: Installing and Configuring kubectl

 `kubectl` is a CLI utility for running commands against Kubernetes clusters. It's required for many maintenance and administrative tasks in Rancher 2.x.

-### Installation
+## Installation

 See [kubectl Installation](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for installation on your operating system.

-### Configuration
+## Configuration

 When you create a Kubernetes cluster with RKE, RKE creates a `kube_config_cluster.yml` in the local directory that contains credentials to connect to your new cluster with tools like `kubectl` or `helm`.
@@ -9,11 +9,11 @@ title: Rancher is No Longer Needed
|
||||
This page is intended to answer questions about what happens if you don't want Rancher anymore, if you don't want a cluster to be managed by Rancher anymore, or if the Rancher server is deleted.
|
||||
|
||||
|
||||
### If the Rancher server is deleted, what happens to the workloads in my downstream clusters?
|
||||
## If the Rancher server is deleted, what happens to the workloads in my downstream clusters?
|
||||
|
||||
If Rancher is ever deleted or unrecoverable, all workloads in the downstream Kubernetes clusters managed by Rancher will continue to function as normal.
|
||||
|
||||
### If the Rancher server is deleted, how do I access my downstream clusters?
|
||||
## If the Rancher server is deleted, how do I access my downstream clusters?
|
||||
|
||||
The capability to access a downstream cluster without Rancher depends on the type of cluster and the way that the cluster was created. To summarize:
|
||||
|
||||
@@ -21,7 +21,7 @@ The capability to access a downstream cluster without Rancher depends on the typ
|
||||
- **Hosted Kubernetes clusters:** If you created the cluster in a cloud-hosted Kubernetes provider such as EKS, GKE, or AKS, you can continue to manage the cluster using your provider's cloud credentials.
|
||||
- **RKE clusters:** To access an [RKE cluster,](../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) the cluster must have the [authorized cluster endpoint](../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) enabled, and you must have already downloaded the cluster's kubeconfig file from the Rancher UI. (The authorized cluster endpoint is enabled by default for RKE clusters.) With this endpoint, you can access your cluster with kubectl directly instead of communicating through the Rancher server's [authentication proxy.](../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#1-the-authentication-proxy) For instructions on how to configure kubectl to use the authorized cluster endpoint, refer to the section about directly accessing clusters with [kubectl and the kubeconfig file.](../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) These clusters will use a snapshot of the authentication as it was configured when Rancher was removed.
|
||||
|
||||
### What if I don't want Rancher anymore?
|
||||
## What if I don't want Rancher anymore?
|
||||
|
||||
:::note
|
||||
|
||||
@@ -44,7 +44,7 @@ If you installed Rancher with Docker, you can uninstall Rancher by removing the
|
||||
|
||||
Imported clusters will not be affected by Rancher being removed. For other types of clusters, refer to the section on [accessing downstream clusters when Rancher is removed.](#if-the-rancher-server-is-deleted-how-do-i-access-my-downstream-clusters)
|
||||
|
||||
### What if I don't want my registered cluster managed by Rancher?
|
||||
## What if I don't want my registered cluster managed by Rancher?
|
||||
|
||||
If a registered cluster is deleted from the Rancher UI, the cluster is detached from Rancher, leaving it intact and accessible by the same methods that were used to access it before it was registered in Rancher.
|
||||
|
||||
@@ -56,7 +56,7 @@ To detach the cluster,
|
||||
|
||||
**Result:** The registered cluster is detached from Rancher and functions normally outside of Rancher.

## What if I don't want my RKE cluster or hosted Kubernetes cluster managed by Rancher?

At this time, there is no functionality to detach these clusters from Rancher. In this context, "detach" is defined as the ability to remove Rancher components from the cluster and manage access to the cluster independently of Rancher.

---
title: Security FAQ
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/security"/>
</head>

## Is there a Hardening Guide?

The Hardening Guide is located in the main [Security](../reference-guides/rancher-security/rancher-security.md) section.

## Have hardened Rancher Kubernetes clusters been evaluated by the CIS Kubernetes Benchmark? Where can I find the results?

We have run the CIS Kubernetes benchmark against a hardened Rancher Kubernetes cluster. The results of that assessment can be found in the main [Security](../reference-guides/rancher-security/rancher-security.md) section.

## How does Rancher verify communication with downstream clusters, and what are some associated security concerns?

Communication between the Rancher server and downstream clusters is performed through agents. Rancher uses either a registered certificate authority (CA) bundle or the local trust store to verify communication between Rancher agents and the Rancher server. Using a CA bundle for verification is stricter, as only certificates based on that bundle are trusted. If TLS verification against an explicit CA bundle fails, Rancher may fall back to using the local trust store to verify future communication. Any CA within the local trust store can then be used to generate a valid certificate.

---
title: Technical FAQ
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/technical-items"/>
</head>

## How can I reset the administrator password?

Docker install:

```
$ docker exec -ti <container_id> reset-password
New password for default administrator (user-xxxxx):
<new_password>
```

Kubernetes install (Helm):

```
$ KUBECONFIG=./kube_config_cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher --no-headers | head -1 | awk '{ print $1 }') -c rancher -- reset-password
New password for default administrator (user-xxxxx):
<new_password>
```

## I deleted/deactivated the last admin, how can I fix it?

Docker install:

```
$ docker exec -ti <container_id> ensure-default-admin
New default administrator (user-xxxxx)
New password for default administrator (user-xxxxx):
<new_password>
```

Kubernetes install (Helm):

```
$ KUBECONFIG=./kube_config_cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- ensure-default-admin
New password for default administrator (user-xxxxx):
<new_password>
```

## How can I enable debug logging?

See [Troubleshooting: Logging](../troubleshooting/other-troubleshooting-tips/logging.md).

## My ClusterIP does not respond to ping

ClusterIP is a virtual IP, which will not respond to ping. The best way to test whether a ClusterIP is configured correctly is to use `curl` against the IP and port and see if it responds.
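For example, a minimal sketch of such a check, assuming a hypothetical service named `my-service` in the `default` namespace (adjust names, IP, and port to your setup):

```shell
# Look up the ClusterIP and port of the service (example: "my-service" in "default")
kubectl -n default get service my-service

# curl the ClusterIP:port from a temporary pod inside the cluster;
# a correctly configured ClusterIP answers curl even though it ignores ping
kubectl -n default run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -m 5 http://<cluster-ip>:<port>
```

Running `curl` from a pod rather than a node avoids any dependency on the node's routing to the service network.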

## Where can I manage Node Templates?

Node Templates can be accessed by opening your account menu (top right) and selecting `Node Templates`.

## Why is my Layer-4 Load Balancer in `Pending` state?

The Layer-4 Load Balancer is created as `type: LoadBalancer`. In Kubernetes, this needs a cloud provider or controller that can satisfy these requests, otherwise these will be in `Pending` state forever. More information can be found on [Cloud Providers](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md) or [Create External Load Balancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/).

## Where is the state of Rancher stored?

- Docker Install: in the embedded etcd of the `rancher/rancher` container, located at `/var/lib/rancher`.
- Kubernetes install: in the etcd of the RKE cluster created to run Rancher.

## How are the supported Docker versions determined?

We follow the validated Docker versions for upstream Kubernetes releases. The validated versions can be found under [External Dependencies](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md#external-dependencies) in the Kubernetes release CHANGELOG.md.

## How can I access nodes created by Rancher?

SSH keys to access the nodes created by Rancher can be downloaded via the **Nodes** view. Choose the node which you want to access and click on the vertical ⋮ button at the end of the row, and choose **Download Keys** as shown in the picture below.

Unzip the downloaded zip file, and use the file `id_rsa` to connect to your host:

```
$ ssh -i id_rsa user@ip_of_node
```


## How can I automate task X in Rancher?

The UI consists of static files and works based on responses from the API. This means that every action or task you can execute in the UI can be automated via the API. There are two ways to do this:
* Visit `https://your_rancher_ip/v3` and browse the API options.
* Capture the API calls when using the UI. (The most commonly used tool for this is [Chrome Developer Tools](https://developers.google.com/web/tools/chrome-devtools/#network), but you can use anything you like.)
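As an illustration, a sketch of scripting one such task with `curl`, assuming a hypothetical API token created under your account and a reachable Rancher hostname (both placeholders here):

```shell
# List clusters through the Rancher v3 API (hypothetical token and hostname)
TOKEN="token-xxxxx:secret"
RANCHER="https://your_rancher_ip"

# -u passes the API token as basic auth; -k skips TLS verification for self-signed certs
curl -sk -u "$TOKEN" "$RANCHER/v3/clusters"
```

The JSON response mirrors what the UI renders, so any field you see in the UI can be read or modified the same way.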

## The IP address of a node changed, how can I recover?

A node is required to have a static IP configured (or a reserved IP via DHCP). If the IP of a node has changed, you must remove it from the cluster and re-add it. After it is removed, Rancher will update the cluster to the correct state. If the cluster is no longer in the `Provisioning` state, the node has been removed from the cluster.
When the node has been removed from the cluster and cleaned, you can re-add it to the cluster.

## How can I add more arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster?

You can add more arguments/binds/environment variables via the [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke1-cluster-configuration.md#rke-cluster-config-file-reference) option in Cluster Options. For more information, see the [Extra Args, Extra Binds, and Extra Environment Variables](https://rancher.com/docs/rke/latest/en/config-options/services/services-extras/) in the RKE documentation or browse the [Example Cluster.ymls](https://rancher.com/docs/rke/latest/en/example-yamls/).

## How do I check if my certificate chain is valid?

Use the `openssl verify` command to validate your certificate chain:

```
subject= /C=GB/ST=England/O=Alice Ltd/CN=rancher.yourdomain.com
issuer= /C=GB/ST=England/O=Alice Ltd/CN=Alice Intermediate CA
```

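If you want to see what a passing check looks like before running it against your real chain, you can generate a throwaway CA and leaf certificate locally (the names below are examples only) and verify one against the other:

```shell
# Create a throwaway CA and a leaf certificate signed by it (example names only)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.pem \
  -days 1 -subj "/CN=Example Root CA"
openssl req -newkey rsa:2048 -nodes -keyout /tmp/leaf.key -out /tmp/leaf.csr \
  -subj "/CN=rancher.example.com"
openssl x509 -req -in /tmp/leaf.csr -CA /tmp/ca.pem -CAkey /tmp/ca.key \
  -CAcreateserial -days 1 -out /tmp/leaf.pem

# A valid chain prints "/tmp/leaf.pem: OK"
openssl verify -CAfile /tmp/ca.pem /tmp/leaf.pem
```

For your own certificate, point `-CAfile` at the CA bundle and pass the server certificate as the last argument.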
## How do I check `Common Name` and `Subject Alternative Names` in my server certificate?

Although technically only an entry in `Subject Alternative Names` is required, having the hostname both in `Common Name` and as an entry in `Subject Alternative Names` gives you maximum compatibility with older browsers and applications.

```
openssl x509 -noout -in cert.pem -text | grep DNS
DNS:rancher.my.org
```

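To see both fields on a certificate you control, you can generate a short-lived self-signed certificate locally (hypothetical hostname; `-addext` requires OpenSSL 1.1.1 or later) and inspect it the same way:

```shell
# Self-signed certificate with the hostname in both CN and SAN (example hostname)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/san.key -out /tmp/san.pem \
  -days 1 -subj "/CN=rancher.example.com" \
  -addext "subjectAltName=DNS:rancher.example.com"

# Check Common Name
openssl x509 -noout -in /tmp/san.pem -subject

# Check Subject Alternative Names
openssl x509 -noout -in /tmp/san.pem -text | grep DNS
```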
## Why does it take 5+ minutes for a pod to be rescheduled when a node has failed?

This is due to a combination of the following default Kubernetes settings:

In Kubernetes v1.13, the `TaintBasedEvictions` feature is enabled by default.

* `default-not-ready-toleration-seconds`: Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
* `default-unreachable-toleration-seconds`: Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
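You can see the tolerations these flags add, including their `tolerationSeconds` values, on any running pod (hypothetical pod name):

```shell
# Print each toleration key with its tolerationSeconds; the automatically added
# not-ready and unreachable tolerations default to 300 seconds
kubectl get pod <pod-name> -o jsonpath='{range .spec.tolerations[*]}{.key}={.tolerationSeconds}{"\n"}{end}'
```

Lowering these values (or setting per-pod tolerations) shortens the delay before rescheduling, at the cost of more aggressive evictions during brief network blips.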

## Can I use keyboard shortcuts in the UI?

Yes, most parts of the UI can be reached using keyboard shortcuts. For an overview of the available shortcuts, press `?` anywhere in the UI.

---
title: Telemetry FAQ
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/telemetry"/>
</head>

## What is Telemetry?

Telemetry collects aggregate information about the size of Rancher installations, the versions of components used, and which features are used. Rancher Labs uses this information to help improve the product; it is not shared with third parties.

## What information is collected?

No specific identifying information like usernames, passwords, or the names or addresses of user resources will ever be collected.

The primary things collected include:

- The image name & version of Rancher that is running.
- A unique randomly-generated identifier for this installation.

## Can I see the information that is being sent?

If Telemetry is enabled, you can go to `https://<your rancher server>/v1-telemetry` in your installation to see the current data.
If Telemetry is not enabled, the process that collects the data is not running, so there is nothing being collected to look at.
## How do I turn it on or off?

After initial setup, an administrator can go to the `Settings` page in the `Global` section of the UI and click Edit to change the `telemetry-opt` setting to either `in` or `out`.

:::

## Rancher Helm Upgrade Options

To upgrade with Helm, apply the same options that you used when installing Rancher. Refer to the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher launched Kubernetes clusters or Rancher tools.

:::note

If you want to externally terminate SSL/TLS, see [TLS termination on an External Load Balancer](../installation-references/helm-chart-options.md#external-tls-termination). As outlined on that page, this option does have additional requirements for TLS verification.

:::
There are three recommended options for the source of the certificate used for TLS termination at the Rancher server:

- **Rancher-generated TLS certificate:** In this case, you will need to install `cert-manager` into the cluster. Rancher utilizes `cert-manager` to issue and maintain its certificates. Rancher will generate a CA certificate of its own, and sign a cert using that CA. `cert-manager` is then responsible for managing that certificate. No extra action is needed when `agent-tls-mode` is set to strict. More information can be found on this setting in [Agent TLS Enforcement](../installation-references/tls-settings.md#agent-tls-enforcement).
- **Let's Encrypt:** The Let's Encrypt option also uses `cert-manager`. However, in this case, cert-manager is combined with a special Issuer for Let's Encrypt that performs all actions (including request and validation) necessary for getting a Let's Encrypt issued cert. This configuration uses HTTP validation (`HTTP-01`), so the load balancer must have a public DNS record and be accessible from the internet. When setting `agent-tls-mode` to `strict`, you must also specify `--privateCA=true` and upload the Let's Encrypt CA as described in [Adding TLS Secrets](../resources/add-tls-secrets.md). More information can be found on this setting in [Agent TLS Enforcement](../installation-references/tls-settings.md#agent-tls-enforcement).
- **Bring your own certificate:** This option allows you to bring your own public- or private-CA signed certificate. Rancher will use that certificate to secure websocket and HTTPS traffic. In this case, you must upload this certificate (and associated key) as PEM-encoded files with the name `tls.crt` and `tls.key`. If you are using a private CA, you must also upload that certificate. This is due to the fact that this private CA may not be trusted by your nodes. Rancher will take that CA certificate, and generate a checksum from it, which the various Rancher components will use to validate their connection to Rancher. If `agent-tls-mode` is set to `strict`, the CA must be uploaded, so that downstream clusters can successfully connect. More information can be found on this setting in [Agent TLS Enforcement](../installation-references/tls-settings.md#agent-tls-enforcement).

| Configuration | Helm Chart Option | Requires cert-manager |

:::

```
# If you have installed the CRDs manually, instead of setting `installCRDs` or `crds.enabled` to `true` in your Helm install command, you should upgrade your CRD resources before upgrading the Helm chart:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/<VERSION>/cert-manager.crds.yaml

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true
```

Once you’ve installed cert-manager, you can verify it is deployed correctly by checking the cert-manager namespace for running pods:

In the following command,

- Set `letsEncrypt.ingress.class` to whatever your ingress controller is, e.g., `traefik`, `nginx`, `haproxy`, etc.
- For Kubernetes v1.25 or later, set `global.cattle.psp.enabled` to `false` when using Rancher v2.7.2-v2.7.4. This is not necessary for Rancher v2.7.5 and above, but you can still manually set the option if you choose.
:::warning
When `agent-tls-mode` is set to `strict` (the default value for new installs of Rancher starting with v2.9.0), you must supply the `privateCA=true` chart value (e.g., via `--set privateCA=true`) and upload the Let's Encrypt Certificate Authority as outlined in [Adding TLS Secrets](../resources/add-tls-secrets.md). Information on identifying the Let's Encrypt root CA can be found in the Let's Encrypt [docs](https://letsencrypt.org/certificates/). If you don't upload the CA, Rancher may fail to connect to new or existing downstream clusters.

:::

```
helm install rancher rancher-<CHART_REPO>/rancher \
--namespace cattle-system \
```

### Step 2: Restore the Backup and Bring Up Rancher
At this point, there should be no Rancher-related resources on the upstream cluster. Therefore, the next step will be the same as if you were migrating Rancher to a new cluster that contains no Rancher resources.
Follow these [instructions](../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md) to install the Rancher-Backup Helm chart and restore Rancher to its previous state.
Please keep in mind that:
1. Step 3 can be skipped, because the Cert-Manager app should still exist on the upstream (local) cluster if it was installed before.
### Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
The node is not reachable on the configured `address` and `port`.
### Agent reports TLS errors
When using Rancher, you may encounter error messages from the `fleet-agent`, `system-agent`, or `cluster-agent`, such as the message below:

```
tls: failed to verify certificate: x509: failed to load system roots and no roots provided; readdirent /dev/null: not a directory
```

This occurs when Rancher is configured with `agent-tls-mode` set to `strict` but no certificate authority can be found in the `cacerts` setting. To resolve the issue, set `agent-tls-mode` to `system-store`, or upload the CA for Rancher as described in [Adding TLS Secrets](../resources/add-tls-secrets.md).
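A sketch of both checks against the local (upstream) cluster, assuming an admin kubeconfig with access to Rancher's management CRDs:

```shell
# Inspect what CA, if any, Rancher has recorded; empty output means agents
# in strict mode have nothing to verify against
kubectl get setting cacerts -o jsonpath='{.value}'

# Option: fall back to the operating system trust store
kubectl patch setting agent-tls-mode --type=merge -p '{"value":"system-store"}'
```

Uploading the CA is the stricter fix; falling back to `system-store` trades that strictness for broader trust.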
### New Cluster Deployment is stuck in "Waiting for Agent to check in"
When Rancher has `agent-tls-mode` set to `strict`, new clusters may fail to provision and report a generic "Waiting for Agent to check in" error message. The root cause is similar to the TLS errors above: Rancher's agent can't determine which CA Rancher is using, or can't verify that Rancher's certificate is actually signed by the specified certificate authority.
To resolve the issue, set the `agent-tls-mode` to `system-store` or upload the CA for Rancher as described in [Adding TLS Secrets](../resources/add-tls-secrets.md).
Follow the steps to upgrade Rancher server:
### 1. Back up Your Kubernetes Cluster that is Running Rancher Server
Use the [backup application](../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md) to back up Rancher.

:::

Get the values, which were passed with `--set`, from the current Rancher Helm chart that is installed.

```
helm get values rancher -n cattle-system
```


The following is a list of feature flags available in Rancher.

- `multi-cluster-management`: Allows multi-cluster provisioning and management of Kubernetes clusters. This flag can only be set at install time. It can't be enabled or disabled later.
- `rke1-custom-node-cleanup`: Enables cleanup of deleted RKE1 custom nodes. We recommend that you keep this flag enabled, to prevent removed nodes from attempting to rejoin the cluster.
- `rke2`: Enables provisioning RKE2 clusters. This flag is enabled by default.
- `token-hashing`: Enables token hashing. Once enabled, existing tokens are hashed and all new tokens are hashed automatically with the SHA256 algorithm. Hashing a token can't be undone, and this flag can't be disabled after it's enabled. See [API Tokens](../../../api/api-tokens.md#token-hashing) for more information.
- `uiextension`: Enables UI extensions. This flag is enabled by default. Enabling or disabling the flag forces the Rancher pod to restart. The first time this flag is set to `true`, it creates a CRD and enables the controllers and endpoints necessary for the feature to work. If set to `false`, it disables the previously mentioned controllers and endpoints. Setting `uiextension` to `false` has no effect on the CRD -- it does not create a CRD if it does not yet exist, nor does it delete the CRD if it already exists.
- `unsupported-storage-drivers`: Enables types for storage providers and provisioners that aren't enabled by default. See [Allow Unsupported Storage Drivers](../../../how-to-guides/advanced-user-guides/enable-experimental-features/unsupported-storage-drivers.md) for more information.
- `ui-sql-cache`: Enables a SQLite-based cache for UI tables. See [UI Server-Side Pagination](../../../how-to-guides/advanced-user-guides/enable-experimental-features/ui-server-side-pagination.md) for more information.
The following table shows the availability and default values for some feature flags in Rancher. Features marked "GA" are generally available:
| Feature Flag Name | Default Value | Status | Available As Of | Additional Information |
| ----------------------------- | ------------- | ------------ | --------------- | ---------------------- |
| `continuous-delivery` | `true` | GA | v2.6.0 | |
| `external-rules` | v2.7.14: `false`, v2.8.5: `true` | Removed | v2.7.14, v2.8.5 | This flag affected [external `RoleTemplate` behavior](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#external-roletemplate-behavior). It is removed in Rancher v2.9.0 and later as the behavior is enabled by default. |
| `fleet` | `true` | Can no longer be disabled | v2.6.0 | |
| `fleet` | `true` | GA | v2.5.0 | |
| `harvester` | `true` | Experimental | v2.6.1 | |
| `legacy` | `false` for new installs, `true` for upgrades | GA | v2.6.0 | |
| `rke1-custom-node-cleanup` | `true` | GA | v2.6.0 | |
| `rke2` | `true` | Experimental | v2.6.0 | |
| `token-hashing` | `false` for new installs, `true` for upgrades | GA | v2.6.0 | |
| `uiextension` | `true` | GA | v2.9.0 | |
| `ui-sql-cache` | `false` | Highly experimental | v2.9.0 | |


| Option | Default Value | Description |
| ------------------------------ | ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| `additionalTrustedCAs` | false | `bool` - See [Additional Trusted CAs](#additional-trusted-cas) |
| `addLocal` | "true" | `string` - Have Rancher detect and import the "local" (upstream) Rancher server cluster. _Note: This option is no longer available in v2.5.0. Consider using the `restrictedAdmin` option to prevent users from modifying the local cluster._ |
| `agentTLSMode` | "" | `string` - either `system-store` or `strict`. See [Agent TLS Enforcement](./tls-settings.md#agent-tls-enforcement) |
| `antiAffinity` | "preferred" | `string` - AntiAffinity rule for Rancher pods - "preferred, required" |
| `auditLog.destination` | "sidecar" | `string` - Stream to sidecar container console or hostPath volume - "sidecar, hostPath" |
| `auditLog.hostPath` | "/var/log/rancher/audit" | `string` - log file destination on host (only applies when `auditLog.destination` is set to `hostPath`) |

@@ -206,7 +207,7 @@ You may terminate the SSL/TLS on a L7 load balancer external to the Rancher clus
|
||||
|
||||
:::note
If you are using a Private CA signed certificate (or if `agent-tls-mode` is set to `strict`), add `--set privateCA=true` and see [Adding TLS Secrets - Using a Private CA Signed Certificate](../../../getting-started/installation-and-upgrade/resources/add-tls-secrets.md) to add the CA cert for Rancher.
:::

The default TLS configuration only accepts TLS 1.2 and secure TLS cipher suites.

| Parameter | Description | Default | Available options |
|-----|-----|-----|-----|
| `CATTLE_TLS_MIN_VERSION` | Minimum TLS version | `1.2` | `1.0`, `1.1`, `1.2`, `1.3` |
| `CATTLE_TLS_CIPHERS` | Allowed TLS cipher suites | `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`,<br/>`TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`,<br/>`TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305`,<br/>`TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`,<br/>`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`,<br/>`TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305` | See [Golang tls constants](https://golang.org/pkg/crypto/tls/#pkg-constants) |

## Agent TLS Enforcement
|
||||
|
||||
The `agent-tls-mode` setting controls how Rancher's agents (`cluster-agent`, `fleet-agent`, and `system-agent`) validate Rancher's certificate.
|
||||
|
||||
When the value is set to `strict`, Rancher's agents only trust certificates generated by the Certificate Authority contained in the `cacerts` setting.
|
||||
When the value is set to `system-store`, Rancher's agents trust any certificate generated by a public Certificate Authority contained in the operating system's trust store including those signed by authorities such as Let's Encrypt. This can be a security risk, since any certificate generated by these external authorities, which are outside the user's control, are considered valid in this state.
|
||||
|
||||
While the `strict` option enables a higher level of security, it requires Rancher to have access to the CA which generated the certificate visible to the agents. In the case of certain certificate configurations (notably, external certificates), this is not automatic, and extra configuration is needed. See the [installation guide](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md#3-choose-your-ssl-configuration) for more information on which scenarios require extra configuration.
|
||||
|
||||
In Rancher v2.9.0 and later, this setting defaults to `strict` on new installs. For users installing or upgrading from a prior Rancher version, it is set to `system-store`.
|
||||
|
||||
### Preparing for the Setting Change
|
||||
|
||||
Each cluster contains a condition in the status field called `AgentTlsStrictCheck`. If `AgentTlsStrictCheck` is set to `"True"`, this indicates that the agents for the cluster are ready to operate in `strict` mode. You can manually inspect each cluster to see if they are ready using the Rancher UI or a kubectl command such as the following:
|
||||
|
||||
```bash
|
||||
## the below command skips ouputs $CLUSTER_NAME,$STATUS for all non-local clusters
|
||||
kubectl get cluster.management.cattle.io -o jsonpath='{range .items[?(@.metadata.name!="local")]}{.metadata.name},{.status.conditions[?(@.type=="AgentTlsStrictCheck")].status}{"\n"}{end}'
|
||||
```
|
||||
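To act on that output, you can filter for clusters whose agents are not yet ready for `strict` mode. A small sketch (the cluster names in the sample output below are hypothetical):

```bash
# Hypothetical sample output from the kubectl command above: CLUSTER_NAME,STATUS
output='c-m-abcde,True
c-m-fghij,False'

# Print only the clusters that are NOT yet ready for strict mode
printf '%s\n' "$output" | awk -F, '$2 != "True" {print $1}'
```

Here only `c-m-fghij` is printed, indicating that cluster still needs attention before the setting is changed.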
|
||||
### Changing the Setting
|
||||
|
||||
You can change the setting using the Rancher UI or the `agentTLSMode` [helm chart option](./helm-chart-options.md).
|
||||
|
||||
:::note
|
||||
|
||||
If you specify the value through the Helm chart, you may only modify the value with Helm.
|
||||
|
||||
:::
|
||||
|
||||
:::warning
|
||||
|
||||
Depending on your cert setup, additional action may be required, such as uploading the Certificate Authority which signed your certs. Review the [installation guide](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md#3-choose-your-ssl-configuration) before changing the setting to see if any additional requirements apply to your setup.
|
||||
|
||||
:::
|
||||
|
||||
To change the setting's value through the UI, navigate to the **Global Settings** page, and find the `agent-tls-mode` setting near the bottom of the page. When you change the setting through the UI, Rancher first checks that all downstream clusters have the condition `AgentTlsStrictCheck` set to `"True"` before allowing the request. This prevents outages from a certificate mismatch.
|
||||
|
||||
|
||||
#### Overriding the Setting Validation Checks
|
||||
|
||||
In some cases, you may want to override the check ensuring all agents can accept the new TLS configuration:
|
||||
|
||||
:::warning
|
||||
|
||||
Rancher checks the status of all downstream clusters to prevent outages. Overriding this check is not recommended, and should be done with great caution.
|
||||
|
||||
:::
|
||||
|
||||
1. As an admin, generate a kubeconfig for the local cluster. In the examples below, it is saved to the `local_kubeconfig.yaml` file.
|
||||
2. Retrieve the current setting and save it to `setting.yaml`:
|
||||
```bash
|
||||
kubectl get setting agent-tls-mode -o yaml --kubeconfig=local_kubeconfig.yaml > setting.yaml
|
||||
```
|
||||
3. Update the `setting.yaml` file, setting `value` to `strict`. Adding the `cattle.io/force: "true"` annotation overrides the cluster condition check, and should only be done with great care:
|
||||
|
||||
:::warning
|
||||
|
||||
Including the `cattle.io/force` annotation with any value (including, for example, `"false"`) overrides the cluster condition check.
|
||||
|
||||
:::
|
||||
|
||||
```yaml
apiVersion: management.cattle.io/v3
customized: false
default: strict
kind: Setting
metadata:
  name: agent-tls-mode
  annotations:
    cattle.io/force: "true"
source: ""
value: strict
```
|
||||
4. Apply the new version of the setting:
|
||||
```bash
|
||||
kubectl apply -f setting.yaml --kubeconfig=local_kubeconfig.yaml
|
||||
```
|
||||
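As an optional sanity check before the apply, you can verify locally that the file requests `strict` mode. This sketch writes an illustrative copy to `/tmp` so it is self-contained; in practice you would grep your own `setting.yaml`:

```bash
# Write an illustrative setting file (placeholder contents mirroring step 3)
cat > /tmp/setting-check.yaml <<'EOF'
apiVersion: management.cattle.io/v3
kind: Setting
metadata:
  name: agent-tls-mode
  annotations:
    cattle.io/force: "true"
value: strict
EOF

# Fail fast if the value is not strict
grep -q '^value: strict' /tmp/setting-check.yaml && echo "ready to apply"
```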
|
||||
@@ -22,7 +22,7 @@ Starting with version 1.24, the above defaults to true.
|
||||
|
||||
For users looking to use another container runtime, Rancher has the edge-focused K3s and datacenter-focused RKE2 Kubernetes distributions that use containerd as the default runtime. Imported RKE2 and K3s Kubernetes clusters can then be upgraded and managed through Rancher going forward.
|
||||
|
||||
### FAQ
|
||||
## FAQ
|
||||
|
||||
<br/>
|
||||
|
||||
@@ -46,6 +46,6 @@ A: You can use a runtime like containerd with Kubernetes that does not require D
|
||||
|
||||
Q: If I am already using RKE1 and want to switch to RKE2, what are my migration options?
|
||||
|
||||
A: Today, you can stand up a new cluster and migrate workloads to a new RKE2 cluster that uses containerd. Rancher is exploring the possibility of an in-place upgrade path.
|
||||
A: Today, you can stand up a new cluster and migrate workloads to a new RKE2 cluster that uses containerd. For details, see the [RKE to RKE2 Replatforming Guide](https://links.imagerelay.com/cdn/3404/ql/5606a3da2365422ab2250d348aa07112/rke_to_rke2_replatforming_guide.pdf).
|
||||
|
||||
<br/>
|
||||
|
||||
@@ -216,6 +216,14 @@ Each node used should have a static IP configured, regardless of whether you are
|
||||
|
||||
To operate properly, Rancher requires a number of ports to be open on Rancher nodes and on downstream Kubernetes cluster nodes. [Port Requirements](port-requirements.md) lists all the necessary ports for Rancher and Downstream Clusters for the different cluster types.
|
||||
|
||||
### Load Balancer Requirements
|
||||
|
||||
If you use a load balancer, it should be HTTP/2 compatible.
|
||||
|
||||
To receive help from SUSE Support, Rancher Prime customers who use load balancers (or other middleboxes, such as firewalls) must use one that is HTTP/2 compatible.
|
||||
|
||||
When HTTP/2 is not available, Rancher falls back to HTTP/1.1. However, since HTTP/2 offers improved web application performance, using HTTP/1.1 can create performance issues.
|
||||
|
||||
## Dockershim Support
|
||||
|
||||
For more information on Dockershim support, refer to [this page](dockershim.md).
|
||||
|
||||
@@ -28,7 +28,7 @@ For security purposes, SSL (Secure Sockets Layer) is required when using Rancher
|
||||
|
||||
Choose from the following options:
|
||||
|
||||
### Option A: Default Self-Signed Certificate
|
||||
## Option A: Default Self-Signed Certificate
|
||||
|
||||
<details id="option-a">
|
||||
<summary>Click to expand</summary>
|
||||
@@ -55,7 +55,7 @@ docker run -d --restart=unless-stopped \
|
||||
|
||||
</details>
|
||||
|
||||
### Option B: Bring Your Own Certificate: Self-Signed
|
||||
## Option B: Bring Your Own Certificate: Self-Signed
|
||||
|
||||
<details id="option-b">
|
||||
<summary>Click to expand</summary>
|
||||
@@ -98,7 +98,7 @@ docker run -d --restart=unless-stopped \
|
||||
|
||||
</details>
|
||||
|
||||
### Option C: Bring Your Own Certificate: Signed by Recognized CA
|
||||
## Option C: Bring Your Own Certificate: Signed by Recognized CA
|
||||
|
||||
<details id="option-c">
|
||||
<summary>Click to expand</summary>
|
||||
@@ -143,8 +143,6 @@ docker run -d --restart=unless-stopped \
|
||||
|
||||
</details>
|
||||
|
||||
|
||||
|
||||
:::note
|
||||
|
||||
If you don't intend to send telemetry data, opt out of [telemetry](../../../../faq/telemetry.md) during the initial login.
|
||||
|
||||
@@ -25,7 +25,7 @@ We recommend setting up the following infrastructure for a high-availability ins
|
||||
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
|
||||
- **A private image registry** to distribute container images to your machines.
|
||||
|
||||
### 1. Set up Linux Nodes
|
||||
## 1. Set up Linux Nodes
|
||||
|
||||
These hosts will be disconnected from the internet, but require being able to connect with your private registry.
|
||||
|
||||
@@ -33,7 +33,7 @@ Make sure that your nodes fulfill the general installation requirements for [OS,
|
||||
|
||||
For an example of one way to set up Linux nodes, refer to this [tutorial](../../../../how-to-guides/new-user-guides/infrastructure-setup/nodes-in-amazon-ec2.md) for setting up nodes as instances in Amazon EC2.
|
||||
|
||||
### 2. Set up External Datastore
|
||||
## 2. Set up External Datastore
|
||||
|
||||
The ability to run Kubernetes using a datastore other than etcd sets K3s apart from other Kubernetes distributions. This feature provides flexibility to Kubernetes operators. The available options allow you to select a datastore that best fits your use case.
|
||||
|
||||
@@ -49,7 +49,7 @@ For an example of one way to set up the database, refer to this [tutorial](../..
|
||||
|
||||
For the complete list of options that are available for configuring a K3s cluster datastore, refer to the [K3s documentation.](https://rancher.com/docs/k3s/latest/en/installation/datastore/)
|
||||
|
||||
### 3. Set up the Load Balancer
|
||||
## 3. Set up the Load Balancer
|
||||
|
||||
You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
|
||||
|
||||
@@ -72,7 +72,7 @@ Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance
|
||||
|
||||
:::
|
||||
|
||||
### 4. Set up the DNS Record
|
||||
## 4. Set up the DNS Record
|
||||
|
||||
Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.
|
||||
|
||||
@@ -82,7 +82,7 @@ You will need to specify this hostname in a later step when you install Rancher,
|
||||
|
||||
For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
|
||||
|
||||
### 5. Set up a Private Image Registry
|
||||
## 5. Set up a Private Image Registry
|
||||
|
||||
Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing container images to your machines.
|
||||
|
||||
@@ -106,13 +106,13 @@ To install the Rancher management server on a high-availability RKE cluster, we
|
||||
|
||||
These nodes must be in the same region/data center. You may place these servers in separate availability zones.
|
||||
|
||||
### Why three nodes?
|
||||
## Why Three Nodes?
|
||||
|
||||
In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes.
|
||||
|
||||
The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes.
|
||||
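The majority arithmetic above can be sketched in a few lines (an illustration, not part of any Rancher tooling): the quorum is a strict majority of members, and the number of tolerable failures is whatever remains above it.

```bash
# Quorum arithmetic: members that can fail while a majority remains
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "$n nodes: quorum $quorum, tolerates $(( n - quorum )) failure(s)"
done
```

Three nodes tolerate one failure, while a fourth node adds no extra tolerance, which is why odd node counts are used.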
|
||||
### 1. Set up Linux Nodes
|
||||
## 1. Set up Linux Nodes
|
||||
|
||||
These hosts will be disconnected from the internet, but require being able to connect with your private registry.
|
||||
|
||||
@@ -120,7 +120,7 @@ Make sure that your nodes fulfill the general installation requirements for [OS,
|
||||
|
||||
For an example of one way to set up Linux nodes, refer to this [tutorial](../../../../how-to-guides/new-user-guides/infrastructure-setup/nodes-in-amazon-ec2.md) for setting up nodes as instances in Amazon EC2.
|
||||
|
||||
### 2. Set up the Load Balancer
|
||||
## 2. Set up the Load Balancer
|
||||
|
||||
You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
|
||||
|
||||
@@ -143,7 +143,7 @@ Do not use this load balancer (i.e, the `local` cluster Ingress) to load balance
|
||||
|
||||
:::
|
||||
|
||||
### 3. Set up the DNS Record
|
||||
## 3. Set up the DNS Record
|
||||
|
||||
Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.
|
||||
|
||||
@@ -153,7 +153,7 @@ You will need to specify this hostname in a later step when you install Rancher,
|
||||
|
||||
For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)
|
||||
|
||||
### 4. Set up a Private Image Registry
|
||||
## 4. Set up a Private Image Registry
|
||||
|
||||
Rancher supports air gap installs using a secure private registry. You must have your own private registry or other means of distributing container images to your machines.
|
||||
|
||||
@@ -176,7 +176,7 @@ If you need to create a private registry, refer to the documentation pages for y
|
||||
|
||||
:::
|
||||
|
||||
### 1. Set up a Linux Node
|
||||
## 1. Set up a Linux Node
|
||||
|
||||
This host will be disconnected from the Internet, but needs to be able to connect to your private registry.
|
||||
|
||||
@@ -184,7 +184,7 @@ Make sure that your node fulfills the general installation requirements for [OS,
|
||||
|
||||
For an example of one way to set up Linux nodes, refer to this [tutorial](../../../../how-to-guides/new-user-guides/infrastructure-setup/nodes-in-amazon-ec2.md) for setting up nodes as instances in Amazon EC2.
|
||||
|
||||
### 2. Set up a Private Docker Registry
|
||||
## 2. Set up a Private Docker Registry
|
||||
|
||||
Rancher supports air gap installs using a private registry on your bastion server. You must have your own private registry or other means of distributing container images to your machines.
|
||||
|
||||
@@ -193,4 +193,4 @@ If you need help with creating a private registry, please refer to the [official
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### [Next: Collect and Publish Images to your Private Registry](publish-images.md)
|
||||
## [Next: Collect and Publish Images to your Private Registry](publish-images.md)
|
||||
|
||||
@@ -23,14 +23,15 @@ The steps to set up an air-gapped Kubernetes cluster on RKE, RKE2, or K3s are sh
|
||||
|
||||
In this guide, we are assuming you have created your nodes in your air-gapped environment and have a secure Docker private registry on your bastion server.
|
||||
|
||||
### Installation Outline
|
||||
## Installation Outline
|
||||
|
||||
1. [Prepare Images Directory](#1-prepare-images-directory)
|
||||
2. [Create Registry YAML](#2-create-registry-yaml)
|
||||
3. [Install K3s](#3-install-k3s)
|
||||
4. [Save and Start Using the kubeconfig File](#4-save-and-start-using-the-kubeconfig-file)
|
||||
|
||||
### 1. Prepare Images Directory
|
||||
## 1. Prepare Images Directory
|
||||
|
||||
Obtain the images tar file for your architecture from the [releases](https://github.com/k3s-io/k3s/releases) page for the version of K3s you will be running.
|
||||
|
||||
Place the tar file in the `images` directory before starting K3s on each node, for example:
|
||||
@@ -40,7 +41,8 @@ sudo mkdir -p /var/lib/rancher/k3s/agent/images/
|
||||
sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/
|
||||
```
|
||||
|
||||
### 2. Create Registry YAML
|
||||
## 2. Create Registry YAML
|
||||
|
||||
Create the registries.yaml file at `/etc/rancher/k3s/registries.yaml`. This will tell K3s the necessary details to connect to your private registry.
|
||||
|
||||
The registries.yaml file should look like this before plugging in the necessary information:
|
||||
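As a hedged illustration (the registry address, credentials, and CA path here are placeholders; the authoritative schema is in the K3s private-registry documentation linked below), a populated file might look like:

```yaml
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"   # placeholder registry address
configs:
  "registry.example.com:5000":
    auth:
      username: registry-user                 # placeholder
      password: registry-pass                 # placeholder
    tls:
      ca_file: /etc/ssl/certs/registry-ca.pem # CA that signed the registry cert
```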
@@ -66,7 +68,7 @@ Note, at this time only secure registries are supported with K3s (SSL with custo
|
||||
|
||||
For more information on private registries configuration file for K3s, refer to the [K3s documentation.](https://rancher.com/docs/k3s/latest/en/installation/private-registry/)
|
||||
|
||||
### 3. Install K3s
|
||||
## 3. Install K3s
|
||||
|
||||
Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [Rancher Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/).
|
||||
|
||||
@@ -98,7 +100,7 @@ K3s additionally provides a `--resolv-conf` flag for kubelets, which may help wi
|
||||
|
||||
:::
|
||||
|
||||
### 4. Save and Start Using the kubeconfig File
|
||||
## 4. Save and Start Using the kubeconfig File
|
||||
|
||||
When you installed K3s on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/k3s/k3s.yaml`. This file contains credentials for full access to the cluster, and you should save this file in a secure location.
|
||||
|
||||
@@ -138,7 +140,7 @@ kubectl --kubeconfig ~/.kube/config/k3s.yaml get pods --all-namespaces
|
||||
|
||||
For more information about the `kubeconfig` file, refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/cluster-access/) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files.
|
||||
|
||||
### Note on Upgrading
|
||||
## Note on Upgrading
|
||||
|
||||
Upgrading an air-gap environment can be accomplished in the following manner:
|
||||
|
||||
@@ -151,14 +153,15 @@ Upgrading an air-gap environment can be accomplished in the following manner:
|
||||
|
||||
In this guide, we are assuming you have created your nodes in your air-gapped environment and have a secure Docker private registry on your bastion server.
|
||||
|
||||
### Installation Outline
|
||||
## Installation Outline
|
||||
|
||||
1. [Create RKE2 configuration](#1-create-rke2-configuration)
|
||||
2. [Create Registry YAML](#2-create-registry-yaml)
|
||||
3. [Install RKE2](#3-install-rke2)
|
||||
4. [Save and Start Using the kubeconfig File](#4-save-and-start-using-the-kubeconfig-file)
|
||||
|
||||
### 1. Create RKE2 configuration
|
||||
## 1. Create RKE2 configuration
|
||||
|
||||
Create the config.yaml file at `/etc/rancher/rke2/config.yaml`. This will contain all the configuration options necessary to create a highly available RKE2 cluster.
|
||||
|
||||
On the first server the minimum config is:
|
||||
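As a sketch (the token and hostname are placeholders; see the RKE2 documentation for the full option list), a minimal first-server config might look like:

```yaml
token: my-shared-secret      # placeholder; a secret shared by all servers
tls-san:
  - rancher.example.com      # placeholder; the fixed registration address
```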
@@ -186,7 +189,8 @@ RKE2 additionally provides a `resolv-conf` option for kubelets, which may help w
|
||||
|
||||
:::
|
||||
|
||||
### 2. Create Registry YAML
|
||||
## 2. Create Registry YAML
|
||||
|
||||
Create the registries.yaml file at `/etc/rancher/rke2/registries.yaml`. This will tell RKE2 the necessary details to connect to your private registry.
|
||||
|
||||
The registries.yaml file should look like this before plugging in the necessary information:
|
||||
@@ -210,7 +214,7 @@ configs:
|
||||
|
||||
For more information on private registries configuration file for RKE2, refer to the [RKE2 documentation.](https://docs.rke2.io/install/containerd_registry_configuration)
|
||||
|
||||
### 3. Install RKE2
|
||||
## 3. Install RKE2
|
||||
|
||||
Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)
|
||||
|
||||
@@ -239,7 +243,7 @@ systemctl start rke2-server.service
|
||||
|
||||
For more information, refer to the [RKE2 documentation](https://docs.rke2.io/install/airgap).
|
||||
|
||||
### 4. Save and Start Using the kubeconfig File
|
||||
## 4. Save and Start Using the kubeconfig File
|
||||
|
||||
When you installed RKE2 on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/rke2/rke2.yaml`. This file contains credentials for full access to the cluster, and you should save this file in a secure location.
|
||||
|
||||
@@ -279,7 +283,7 @@ kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces
|
||||
|
||||
For more information about the `kubeconfig` file, refer to the [RKE2 documentation](https://docs.rke2.io/cluster_access) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files.
|
||||
|
||||
### Note on Upgrading
|
||||
## Note on Upgrading
|
||||
|
||||
Upgrading an air-gap environment can be accomplished in the following manner:
|
||||
|
||||
@@ -291,7 +295,7 @@ Upgrading an air-gap environment can be accomplished in the following manner:
|
||||
<TabItem value="RKE">
|
||||
We will create a Kubernetes cluster using Rancher Kubernetes Engine (RKE). Before you can start your Kubernetes cluster, you'll need to install RKE and create an RKE config file.
|
||||
|
||||
### 1. Install RKE
|
||||
## 1. Install RKE
|
||||
|
||||
Install RKE by following the instructions in the [RKE documentation.](https://rancher.com/docs/rke/latest/en/installation/)
|
||||
|
||||
@@ -301,7 +305,7 @@ Certified version(s) of RKE based on the Rancher version can be found in the [Ra
|
||||
|
||||
:::
|
||||
|
||||
### 2. Create an RKE Config File
|
||||
## 2. Create an RKE Config File
|
||||
|
||||
From a system that can access ports 22/TCP and 6443/TCP on the Linux host node(s) that you set up in a previous step, use the sample below to create a new file named `rancher-cluster.yml`.
|
||||
|
||||
@@ -352,7 +356,7 @@ private_registries:
|
||||
is_default: true
|
||||
```
|
||||
|
||||
### 3. Run RKE
|
||||
## 3. Run RKE
|
||||
|
||||
After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:
|
||||
|
||||
@@ -360,7 +364,7 @@ After configuring `rancher-cluster.yml`, bring up your Kubernetes cluster:
|
||||
rke up --config ./rancher-cluster.yml
|
||||
```
|
||||
|
||||
### 4. Save Your Files
|
||||
## 4. Save Your Files
|
||||
|
||||
:::note Important:
|
||||
|
||||
@@ -383,8 +387,8 @@ The "rancher-cluster" parts of the two latter file names are dependent on how yo
|
||||
|
||||
:::
|
||||
|
||||
### Issues or errors?
|
||||
## Issues or Errors?
|
||||
|
||||
See the [Troubleshooting](../../install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) page.
|
||||
|
||||
### [Next: Install Rancher](install-rancher-ha.md)
|
||||
## [Next: Install Rancher](install-rancher-ha.md)
|
||||
|
||||
@@ -8,7 +8,7 @@ title: 4. Install Rancher
|
||||
|
||||
This section describes how to deploy Rancher in a high-availability Kubernetes installation for an air-gapped environment. An air-gapped environment is one where the Rancher server is installed offline, behind a firewall, or behind a proxy.
|
||||
|
||||
### Privileged Access for Rancher
|
||||
## Privileged Access for Rancher
|
||||
|
||||
When the Rancher server is deployed in the Docker container, a local Kubernetes cluster is installed within the container for Rancher to use. Because many features of Rancher run as deployments, and privileged mode is required to run containers within containers, you will need to install Rancher with the `--privileged` option.
|
||||
|
||||
@@ -92,7 +92,7 @@ Recent changes to cert-manager require an upgrade. If you are upgrading Rancher
|
||||
|
||||
:::
|
||||
|
||||
##### 1. Add the cert-manager repo
|
||||
##### 1. Add the cert-manager Repo
|
||||
|
||||
From a system connected to the internet, add the cert-manager repo to Helm:
|
||||
|
||||
@@ -101,7 +101,7 @@ helm repo add jetstack https://charts.jetstack.io
|
||||
helm repo update
|
||||
```
|
||||
|
||||
##### 2. Fetch the cert-manager chart
|
||||
##### 2. Fetch the cert-manager Chart
|
||||
|
||||
Fetch the latest cert-manager chart available from the [Helm chart repository](https://artifacthub.io/packages/helm/cert-manager/cert-manager).
|
||||
|
||||
@@ -109,7 +109,7 @@ Fetch the latest cert-manager chart available from the [Helm chart repository](h
|
||||
helm fetch jetstack/cert-manager --version v1.11.0
|
||||
```
|
||||
|
||||
##### 3. Retrieve the Cert-Manager CRDs
|
||||
##### 3. Retrieve the cert-manager CRDs
|
||||
|
||||
Download the required CRD file for cert-manager:
|
||||
```plain
|
||||
@@ -120,7 +120,7 @@ Download the required CRD file for cert-manager:
|
||||
|
||||
Copy the fetched charts to a system that has access to the Rancher server cluster to complete installation.
|
||||
|
||||
##### 1. Install Cert-Manager
|
||||
#### 1. Install cert-manager
|
||||
|
||||
Install cert-manager with the same options you would use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry.
|
||||
|
||||
@@ -160,7 +160,8 @@ If you are using self-signed certificates, install cert-manager:
|
||||
|
||||
</details>
|
||||
|
||||
##### 2. Install Rancher
|
||||
#### 2. Install Rancher
|
||||
|
||||
First, refer to [Adding TLS Secrets](../../resources/add-tls-secrets.md) to publish the certificate files so Rancher and the ingress controller can use them.
|
||||
|
||||
Then, create the namespace for Rancher using kubectl:
|
||||
@@ -192,9 +193,9 @@ Placeholder | Description
|
||||
|
||||
**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, example: `--set rancherImageTag=v2.5.8`
|
||||
|
||||
#### Option B: Certificates From Files using Kubernetes Secrets
|
||||
#### Option B: Certificates From Files Using Kubernetes Secrets
|
||||
|
||||
##### 1. Create secrets
|
||||
##### 1. Create Secrets
|
||||
|
||||
Create Kubernetes secrets from your own certificates for Rancher to use. The common name for the cert will need to match the `hostname` option in the command below, or the ingress controller will fail to provision the site for Rancher.
|
||||
|
||||
|
||||
@@ -27,7 +27,7 @@ First configure the HTTP proxy settings on the K3s systemd service, so that K3s'
|
||||
```
|
||||
cat <<'EOF' | sudo tee /etc/default/k3s > /dev/null
|
||||
HTTP_PROXY=http://${proxy_host}
|
||||
HTTPS_PROXY=http://${proxy_host}"
|
||||
HTTPS_PROXY=http://${proxy_host}
|
||||
NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
|
||||
EOF
|
||||
```
|
||||
@@ -71,7 +71,7 @@ Then you have to configure the HTTP proxy settings on the RKE2 systemd service,
|
||||
```
|
||||
cat <<'EOF' | sudo tee /etc/default/rke2-server > /dev/null
|
||||
HTTP_PROXY=http://${proxy_host}
|
||||
HTTPS_PROXY=http://${proxy_host}"
|
||||
HTTPS_PROXY=http://${proxy_host}
|
||||
NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
|
||||
EOF
|
||||
```
|
||||
|
||||
@@ -109,7 +109,7 @@ Rancher Server is distributed as a Docker image, which have tags attached to the
|
||||
| -------------------------- | ------ |
|
||||
| `rancher/rancher:latest` | Our latest development release. These builds are validated through our CI automation framework. These releases are not recommended for production environments. |
|
||||
| `rancher/rancher:stable` | Our newest stable release. This tag is recommended for production. |
|
||||
| `rancher/rancher:<v2.X.X>` | You can install specific versions of Rancher by using the tag from a previous release. See what's available at DockerHub. |
|
||||
| `rancher/rancher:<v2.X.X>` | You can install specific versions of Rancher by using the tag from a previous release. See what's available at Docker Hub. |
|
||||
|
||||
:::note
|
||||
|
||||
|
||||
@@ -102,8 +102,6 @@ There is a [known issue](https://github.com/rancher/rancher/issues/25478) in whi
|
||||
|
||||
### Maintaining Availability for Applications During Upgrades
|
||||
|
||||
_Available as of RKE v1.1.0_
|
||||
|
||||
In [this section of the RKE documentation,](https://rancher.com/docs/rke/latest/en/upgrades/maintaining-availability/) you'll learn the requirements to prevent downtime for your applications when upgrading the cluster.
|
||||
|
||||
### Configuring the Upgrade Strategy in the cluster.yml
|
||||
|
||||
@@ -36,7 +36,7 @@ Administrators might configure the RKE metadata settings to do the following:
|
||||
- Change the metadata URL that Rancher uses to sync the metadata, which is useful for air gap setups if you need to sync Rancher locally instead of with GitHub
|
||||
- Prevent Rancher from auto-syncing the metadata, which is one way to prevent new and unsupported Kubernetes versions from being available in Rancher
|
||||
|
||||
### Refresh Kubernetes Metadata
|
||||
## Refresh Kubernetes Metadata
|
||||
|
||||
The option to refresh the Kubernetes metadata is available for administrators by default, or for any user who has the **Manage Cluster Drivers** [global role.](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md)
|
||||
|
||||
@@ -74,7 +74,7 @@ If you don't have an air gap setup, you don't need to specify the URL where Ranc
|
||||
|
||||
However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL to point to the new location of the JSON file.
|
||||
|
||||
### Air Gap Setups
|
||||
## Air Gap Setups
|
||||
|
||||
Rancher relies on a periodic refresh of the `rke-metadata-config` to download new Kubernetes version metadata if it is supported with the current version of the Rancher server. For a table of compatible Kubernetes and Rancher versions, refer to the [service terms section.](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.2.8/)
|
||||
|
||||
|
||||
17
docs/glossary.md
Normal file
@@ -0,0 +1,17 @@
|
||||
---
|
||||
title: Glossary
|
||||
---
|
||||
|
||||
<head>
|
||||
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/glossary"/>
|
||||
</head>
|
||||
|
||||
This page covers Rancher-specific terminology and symbols which might be unfamiliar, or which differ between Rancher versions.
|
||||
|
||||
```mdx-code-block
|
||||
import Glossary, {toc as GlossaryTOC} from "/shared-files/_glossary.md"
|
||||
|
||||
<Glossary />
|
||||
|
||||
export const toc = GlossaryTOC;
|
||||
```
|
||||
@@ -80,11 +80,11 @@ If you use a certificate signed by a recognized CA, installing your certificate
|
||||
|
||||
1. Enter the following command.
|
||||
|
||||
```
|
||||
docker run -d --restart=unless-stopped \
|
||||
-p 80:80 -p 443:443 \
|
||||
rancher/rancher:latest --no-cacerts
|
||||
```
|
||||
```
|
||||
docker run -d --restart=unless-stopped \
|
||||
-p 80:80 -p 443:443 \
|
||||
rancher/rancher:latest --no-cacerts
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
|
||||
@@ -28,14 +28,14 @@ spec:
|
||||
rkeConfig:
|
||||
machineGlobalConfig:
|
||||
audit-policy-file: |
|
||||
apiVersion: audit.k8s.io/v1
|
||||
kind: Policy
|
||||
rules:
|
||||
- level: RequestResponse
|
||||
resources:
|
||||
- group: ""
|
||||
resources:
|
||||
- pods
|
||||
apiVersion: audit.k8s.io/v1
|
||||
kind: Policy
|
||||
rules:
|
||||
- level: RequestResponse
|
||||
resources:
|
||||
- group: ""
|
||||
resources:
|
||||
- pods
|
||||
```
|
||||
|
||||
### Method 2: Use the Directives, `machineSelectorFiles` and `machineGlobalConfig`
|
||||
|
||||
@@ -36,12 +36,12 @@ The usage below defines rules about what the audit log should record and what da
|
||||
|
||||
The following table displays what parts of API transactions are logged for each [`AUDIT_LEVEL`](#api-audit-log-options) setting.
|
||||
|
||||
| `AUDIT_LEVEL` Setting | Request Metadata | Request Body | Response Metadata | Response Body |
|
||||
| --------------------- | ---------------- | ------------ | ----------------- | ------------- |
|
||||
| `0` | | | | |
|
||||
| `1` | ✓ | | | |
|
||||
| `2` | ✓ | ✓ | | |
|
||||
| `3` | ✓ | ✓ | ✓ | ✓ |
|
||||
| `AUDIT_LEVEL` Setting | Metadata | Request Body | Response Body |
|
||||
| --------------------- | -------- | ------------ | ------------- |
|
||||
| `0` | | | |
|
||||
| `1` | ✓ | | |
|
||||
| `2` | ✓ | ✓ | |
|
||||
| `3` | ✓ | ✓ | ✓ |
|
||||
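Read as a lookup, the new table maps each audit level to the parts of the transaction that are logged; each level is a superset of the one below it. A toy sketch:

```bash
# AUDIT_LEVEL -> logged parts, per the table above
audit_parts() {
  case "$1" in
    0) echo "" ;;
    1) echo "metadata" ;;
    2) echo "metadata,request-body" ;;
    3) echo "metadata,request-body,response-body" ;;
  esac
}

audit_parts 2
```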
|
||||
## Viewing API Audit Logs
|
||||
|
||||
|
||||
@@ -0,0 +1,41 @@
|
||||
---
|
||||
title: UI Server-Side Pagination
|
||||
---
|
||||
|
||||
<head>
|
||||
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/ui-server-side-pagination"/>
|
||||
</head>
|
||||
|
||||
:::caution
|
||||
UI server-side pagination is not intended for use in production at this time. This feature is considered highly experimental. SUSE customers should consult SUSE Support before activating this feature.
|
||||
:::
|
||||
|
||||
|
||||
UI server-side pagination caching provides an optional SQLite-backed cache of Kubernetes objects to improve performance. This unlocks the sorting, filtering, and pagination features used by the UI to restrict the amount of resources it fetches and stores in browser memory. These features are primarily used to improve list performance for resources with high counts.
|
||||
|
||||
This feature creates file system based caches in the `rancher` pods of the upstream cluster, and in the `cattle-cluster-agent` pods of the downstream clusters. In most environments, disk usage and I/O should not be significant. However, you should monitor activity after you enable caching.
|
||||
|
||||
SQLite-backed caching persists copies of any cached Kubernetes objects to disk. See [Encrypting SQLite-backed Caching](#encrypting-sqlite-backed-caches) if this is a security concern.
|
||||
|
||||
## Enabling UI Server-Side Pagination
|
||||
|
||||
1. In the upper left corner, click **☰ > Global Settings > Feature Flags**.
|
||||
1. Find **`ui-sql-cache`** and select **⋮ > Activate > Activate**.
|
||||
1. Wait for Rancher to restart. This also restarts agents on all downstream clusters.
|
||||
1. In the upper left corner, click **☰ > Global Settings > Performance**.
|
||||
1. Go to **Server-side Pagination** and check the **Enable Server-side Pagination** option.
|
||||
1. Click **Apply**.
|
||||
1. Reload the page with the browser button (or the equivalent keyboard combination, typically `CTRL + R` on Windows and Linux, and `⌘ + R` on macOS).
|
||||
|
||||
|
||||
## Encrypting SQLite-backed Caches
|
||||
|
||||
UI server-side pagination persists copies of any cached Kubernetes objects to disk. If you're concerned about the safety of this data, you can encrypt all objects before they are persisted to disk, by setting the environment variable `CATTLE_ENCRYPT_CACHE_ALL` to `true` in `rancher` pods in the upstream cluster and `cattle-cluster-agent` pods in the downstream clusters.
|
||||
|
||||
Secrets and security Tokens are always encrypted regardless of the above setting.
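One way to set the variable on the upstream cluster is through the Rancher Helm chart's `extraEnv` values. A minimal sketch (the `cattle-cluster-agent` pods in downstream clusters must be configured separately):

```yaml
# Rancher chart values fragment (sketch): encrypt all cached objects
# before they are persisted to the SQLite-backed cache
extraEnv:
  - name: "CATTLE_ENCRYPT_CACHE_ALL"
    value: "true"
```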

## Known Limitations of UI Server-Side Pagination

This initial release improves the performance of Pods, Secrets, Nodes and ConfigMaps in the Cluster Explorer pages, and most resources in the Explorer's **More Resources** section.

Pages can't be automatically refreshed. You can manually refresh table contents by clicking the **Refresh** button.
@@ -0,0 +1,62 @@
---
title: Enabling User Retention
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-user-retention"/>
</head>

In Rancher v2.8.5 and later, you can enable user retention to automatically disable or delete inactive user accounts after a configurable time period.

The user retention feature is off by default.

## Enabling User Retention with kubectl

To enable user retention, you must set `user-retention-cron`. You must also set at least one of `disable-inactive-user-after` or `delete-inactive-user-after`. You can use `kubectl edit setting <name-of-setting>` to open your editor of choice and set these values.
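For example, `kubectl edit setting user-retention-cron` opens a manifest similar to the following sketch (field names follow Rancher's `management.cattle.io/v3` Setting resource; verify against your installation):

```yaml
# Sketch of the resource opened by `kubectl edit setting user-retention-cron`
apiVersion: management.cattle.io/v3
kind: Setting
metadata:
  name: user-retention-cron
value: "0 * * * *"  # run the user retention process once an hour
```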

## Configuring Rancher to Delete Users, Disable Users, or Combine Operations

Rancher uses two global user retention settings to determine if and when users are disabled or deleted after a certain period of inactivity. Disabled accounts must be re-enabled before users can log in again. If an account is deleted without being disabled, users may be able to log in through external authentication, and the deleted account will be recreated.

The global settings, `disable-inactive-user-after` and `delete-inactive-user-after`, do not block one another from running.

For example, you can set both operations to run. If you give `disable-inactive-user-after` a shorter duration than `delete-inactive-user-after`, the user retention process disables inactive accounts before deleting them.

You can also edit some user retention settings on a specific user's `UserAttribute`. Setting these values overrides the global settings. See [User-specific User Retention Overrides](#user-specific-user-retention-overrides) for more details.

### Required User Retention Settings

The following are global settings:

- `user-retention-cron`: Describes how often the user retention process runs. The value is a cron expression (for example, `0 * * * *` for every hour).
- `disable-inactive-user-after`: The amount of time that a user account can be inactive before the process disables the account. Disabling an account forces the user to request that an administrator re-enable the account before they can log in to use it. Values are expressed in [time.Duration units](https://pkg.go.dev/time#ParseDuration) (for example, `720h` for 720 hours or 30 days). The value must be greater than `auth-user-session-ttl-minutes`, which is `16h` by default. If the value is not set, is set to the empty string, or is equal to 0, the process does not disable any inactive accounts.
- `delete-inactive-user-after`: The amount of time that a user account can be inactive before the process deletes the account. Values are expressed in [time.Duration units](https://pkg.go.dev/time#ParseDuration) (for example, `720h` for 720 hours or 30 days). The value must be greater than `auth-user-session-ttl-minutes`, which is `16h` by default. The value should also be greater than `336h` (14 days); otherwise, the Rancher webhook rejects it. If you need the value to be lower than 14 days, you can [bypass the webhook](../../reference-guides/rancher-webhook.md#bypassing-the-webhook). If the value is not set, is set to the empty string, or is equal to 0, the process does not delete any inactive accounts.
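As a quick sanity check on the duration format, `720h` is indeed 30 days. A small Python illustration (the settings themselves use Go's `time.Duration` syntax, which accepts the same `h`, `m`, and `s` units):

```python
from datetime import timedelta

# "720h" in time.Duration units corresponds to 720 hours
inactive_window = timedelta(hours=720)
print(inactive_window.days)  # 30
```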

### Optional User Retention Settings

The following are global settings:

- `user-retention-dry-run`: If set to `true`, the user retention process runs without actually deleting or disabling any user accounts. This can help you test user retention behavior before allowing the process to disable or delete user accounts in a production environment.
- `user-last-login-default`: If a user does not have `UserAttribute.LastLogin` set on their account, this setting is used instead. The value is expressed as an [RFC 3339 date-time](https://datatracker.ietf.org/doc/html/rfc3339#section-5.6) truncated to the last second; for example, `2023-03-01T00:00:00Z`. If the value is set to the empty string or is equal to 0, this setting is not used.

#### User-specific User Retention Overrides

The following are user-specific overrides to the global settings for special cases. These settings are applied by editing the `UserAttribute` associated with a given account:

```bash
kubectl edit userattribute <user-name>
```

- `disableAfter`: The user-specific override for `disable-inactive-user-after`. The value is expressed in [time.Duration units](https://pkg.go.dev/time#ParseDuration) and truncated to the second. If the value is set to `0s`, the account won't be subject to disabling.
- `deleteAfter`: The user-specific override for `delete-inactive-user-after`. The value is expressed in [time.Duration units](https://pkg.go.dev/time#ParseDuration) and truncated to the second. If the value is set to `0s`, the account won't be subject to deletion.
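A sketch of what the edited `UserAttribute` might contain; the `disableAfter` and `deleteAfter` fields come from the list above, while the metadata values are purely illustrative:

```yaml
# Hypothetical excerpt of the resource opened by `kubectl edit userattribute <user-name>`
apiVersion: management.cattle.io/v3
kind: UserAttribute
metadata:
  name: u-abc123       # illustrative user name
disableAfter: 720h     # disable this account after 30 days of inactivity
deleteAfter: 8760h     # delete this account after roughly a year of inactivity
```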

## Viewing User Retention Settings in the Rancher UI

You can see which user retention settings are applied to which users.

1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, select **Users**.

The **Disable After** and **Delete After** columns for each user account indicate how long the account can be inactive before it is disabled or deleted from Rancher. There is also a **Last Login** column that roughly indicates when the account was last active.

The same information is available if you click a user's name in the **Users** table and select the **Detail** tab.
@@ -6,19 +6,21 @@ title: Generate and View Traffic from Istio
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/istio-setup-guide/generate-and-view-traffic"/>
</head>

This section describes how to view the traffic that is being managed by Istio.

## The Kiali Traffic Graph

The Istio overview page provides a link to the Kiali dashboard. From the Kiali dashboard, you are able to view graphs for each namespace. The Kiali graph provides a powerful way to visualize the topology of your Istio service mesh. It shows you which services communicate with each other.
The Istio overview page provides a link to the Kiali dashboard. From the Kiali dashboard, you can view graphs for each namespace. The Kiali graph provides a powerful way to visualize the topology of your Istio service mesh. It shows you which services communicate with each other.

:::note Prerequisites:
## Prerequisites

To enable traffic to show up in the graph, ensure you have prometheus installed in the cluster. Rancher-istio installs Kiali configured by default to work with the rancher-monitoring chart. You can use rancher-monitoring or install your own monitoring solution. Optional: you can change configuration on how data scraping occurs by setting the [Selectors & Scrape Configs](../../../integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md) options.
To enable traffic to show up in the graph, ensure that you have Prometheus installed in the cluster. `Rancher-istio` installs Kiali, and configures it by default to work with the `rancher-monitoring` chart. You can use `rancher-monitoring` or install your own monitoring solution.

:::
Additionally, for Istio installations version `103.1.0+up1.19.6` and later, Kiali uses a token value for its authentication strategy. If you are trying to generate or retrieve the token (e.g. for login), note that the name of the Kiali service account in Rancher is `kiali`. For more information, refer to the [Kiali token authentication FAQ](https://kiali.io/docs/faq/authentication/).

To see the traffic graph,
Optional: You can configure which namespaces data scraping occurs in by setting the Helm chart options described in [Selectors & Scrape Configs](../../../integrations-in-rancher/istio/configuration-options/selectors-and-scrape-configurations.md).

## Traffic Visualization

To see the traffic graph, follow the steps below:

1. In the cluster where Istio is installed, click **Istio** in the left navigation bar.
1. Click the **Kiali** link.
@@ -42,7 +42,7 @@ For more information about the default limits, see [this page.](../../../referen

### Enable Monitoring for use without SSL

1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. Click **Cluster Tools** (bottom left corner).
1. Click **Install** by Monitoring.
@@ -77,3 +77,79 @@ key.pfx=`base64-content`
```

Then **Cert File Path** would be set to `/etc/alertmanager/secrets/cert.pem`.

## Rancher Performance Dashboard

When monitoring is installed on the upstream (local) cluster, you are given basic health metrics about the Rancher pods, such as CPU and memory data. To get advanced metrics for your local Rancher server, you must also enable the Rancher Performance Dashboard for Grafana.

This dashboard provides access to the following advanced metrics:

- Handler Average Execution Times Over Last 5 Minutes
- Rancher API Average Request Times Over Last 5 Minutes
- Subscribe Average Request Times Over Last 5 Minutes
- Lasso Controller Work Queue Depth (Top 20)
- Number of Rancher Requests (Top 20)
- Number of Failed Rancher API Requests (Top 20)
- K8s Proxy Store Average Request Times Over Last 5 Minutes (Top 20)
- K8s Proxy Client Average Request Times Over Last 5 Minutes (Top 20)
- Cached Objects by GroupVersionKind (Top 20)
- Lasso Handler Executions (Top 20)
- Handler Executions Over Last 2 Minutes (Top 20)
- Total Handler Executions with Error (Top 20)
- Data Transmitted by Remote Dialer Sessions (Top 20)
- Errors for Remote Dialer Sessions (Top 20)
- Remote Dialer Connections Removed (Top 20)
- Remote Dialer Connections Added by Client (Top 20)

:::note

Profiling data (such as advanced memory or CPU analysis) is not present, as it is a context-dependent technique that's meant for debugging rather than normal observation.

:::

### Enabling the Rancher Performance Dashboard

To enable the Rancher Performance Dashboard:

<Tabs groupId="UIorCLI">
<TabItem value="Helm">

Use the following options with the Helm CLI:

```bash
--set extraEnv\[0\].name="CATTLE_PROMETHEUS_METRICS" --set-string extraEnv\[0\].value=true
```

You can also include the following snippet in your Rancher Helm chart's `values.yaml` file:

```yaml
extraEnv:
  - name: "CATTLE_PROMETHEUS_METRICS"
    value: "true"
```

</TabItem>
<TabItem value="UI">

1. Click **☰ > Cluster Management**.
1. Go to the row of the `local` cluster and click **Explore**.
1. Click **Workloads > Deployments**.
1. Use the dropdown menu at the top to filter for **All Namespaces**.
1. Under the `cattle-system` namespace, go to the `rancher` row and click **⋮ > Edit Config**.
1. Under **Environment Variables**, click **Add Variable**.
1. For **Type**, select `Key/Value Pair`.
1. For **Variable Name**, enter `CATTLE_PROMETHEUS_METRICS`.
1. For **Value**, enter `true`.
1. Click **Save** to apply the change.

</TabItem>
</Tabs>

### Accessing the Rancher Performance Dashboard

1. Click **☰ > Cluster Management**.
1. Go to the row of the `local` cluster and click **Explore**.
1. Click **Monitoring**.
1. Select the **Grafana** dashboard.
1. From the sidebar, click **Search dashboards**.
1. Enter `Rancher Performance Debugging` and select it.

@@ -6,7 +6,17 @@ title: Opening Ports with firewalld
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/open-ports-with-firewalld"/>
</head>

> We recommend disabling firewalld. For Kubernetes 1.19.x and higher, firewalld must be turned off.
:::danger

Enabling firewalld can cause serious network communication problems.

For proper network function, firewalld must be disabled on systems running RKE2. [Firewalld conflicts with Canal](https://docs.rke2.io/known_issues#firewalld-conflicts-with-default-networking), RKE2's default networking stack.

Firewalld must also be disabled on systems running Kubernetes 1.19 and later.

If you enable firewalld on systems running Kubernetes 1.18 or earlier, understand that this may cause networking issues. CNIs in Kubernetes dynamically update iptables and networking rules independently of any external firewalls, such as firewalld. This can cause unexpected behavior when the CNI and the external firewall conflict.

:::

Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm.

@@ -8,9 +8,9 @@ title: Tuning etcd for Large Installations

When Rancher is used to manage [a large infrastructure](../../getting-started/installation-and-upgrade/installation-requirements/installation-requirements.md) it is recommended to increase the default keyspace for etcd from the default 2 GB. The maximum setting is 8 GB and the host should have enough RAM to keep the entire dataset in memory. When increasing this value you should also increase the size of the host. The keyspace size can also be adjusted in smaller installations if you anticipate a high rate of change of pods during the garbage collection interval.

The etcd data set is automatically cleaned up on a five minute interval by Kubernetes. There are situations, e.g. deployment thrashing, where enough events could be written to etcd and deleted before garbage collection occurs and cleans things up causing the keyspace to fill up. If you see `mvcc: database space exceeded` errors, in the etcd logs or Kubernetes API server logs, you should consider increasing the keyspace size. This can be accomplished by setting the [quota-backend-bytes](https://etcd.io/docs/v3.4.0/op-guide/maintenance/#space-quota) setting on the etcd servers.
The etcd data set is automatically cleaned up on a five-minute interval by Kubernetes. There are situations, e.g. deployment thrashing, where enough events could be written to etcd and deleted before garbage collection occurs and cleans things up, causing the keyspace to fill up. If you see `mvcc: database space exceeded` errors in the etcd logs or Kubernetes API server logs, you should consider increasing the keyspace size. This can be accomplished by setting the [quota-backend-bytes](https://etcd.io/docs/v3.5/op-guide/maintenance/#space-quota) setting on the etcd servers.

### Example: This snippet of the RKE cluster.yml file increases the keyspace size to 5GB
## Example: This Snippet of the RKE `cluster.yml` File Increases the Keyspace Size to 5 GB

```yaml
# RKE cluster.yml
@@ -21,9 +21,9 @@ services:
quota-backend-bytes: 5368709120
```
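The `quota-backend-bytes` value above is simply 5 GiB expressed in bytes:

```python
# 5 GiB in bytes, matching the quota-backend-bytes value above
print(5 * 1024 ** 3)  # 5368709120
```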

## Scaling etcd disk performance
## Scaling etcd Disk Performance

You can follow the recommendations from [the etcd docs](https://etcd.io/docs/v3.4.0/tuning/#disk) on how to tune the disk priority on the host.
You can follow the recommendations from [the etcd docs](https://etcd.io/docs/v3.5/tuning/#disk) on how to tune the disk priority on the host.

Additionally, to reduce IO contention on the disks for etcd, you can use a dedicated device for the data and wal directory. Based on etcd best practices, mirroring RAID configurations are unnecessary because etcd replicates data between the nodes in the cluster. You can use striping RAID configurations to increase available IOPS.

@@ -16,11 +16,11 @@ Want to provide a user with access to _all_ projects within a cluster? See [Addi

:::

### Adding Members to a New Project
## Adding Members to a New Project

You can add members to a project as you create it (recommended if possible). For details on creating a new project, refer to the [cluster administration section.](../../how-to-guides/new-user-guides/manage-clusters/projects-and-namespaces.md)

### Adding Members to an Existing Project
## Adding Members to an Existing Project

Following project creation, you can add users as project members so that they can access its resources.

@@ -56,4 +56,6 @@ If you want to use a node driver that Rancher doesn't support out-of-the-box, yo

### Developing Your Own Node Drivers

Node drivers are implemented with [Docker Machine](https://docs.docker.com/machine/).
Node drivers are implemented with [Rancher Machine](https://github.com/rancher/machine), a fork of [Docker Machine](https://github.com/docker/machine). Docker Machine is no longer under active development.

Refer to the original [Docker Machine documentation](https://github.com/docker/docs/blob/vnext-engine/machine/overview.md) for details on how to develop your own node drivers.

@@ -60,4 +60,4 @@ To convert an existing cluster to use an RKE template,

- A new RKE template is created.
- The cluster is converted to use the new template.
- New clusters can be [created from the new template.](apply-templates.md#creating-a-cluster-from-an-rke-template)
- New clusters can be [created from the new template.](#creating-a-cluster-from-an-rke-template)
@@ -21,20 +21,21 @@ The account used to enable the external provider will be granted admin permissio

The Rancher authentication proxy integrates with the following external authentication services.

| Auth Service |
| ------------------------------------------------------------------------------------------------ |
| [Microsoft Active Directory](configure-active-directory.md) |
| [GitHub](configure-github.md) |
| [Microsoft Azure AD](configure-azure-ad.md) |
| [FreeIPA](configure-freeipa.md) |
| [OpenLDAP](../configure-openldap/configure-openldap.md) |
| Auth Service |
|------------------------------------------------------------------------------------------------------------------------|
| [Microsoft Active Directory](configure-active-directory.md) |
| [GitHub](configure-github.md) |
| [Microsoft Azure AD](configure-azure-ad.md) |
| [FreeIPA](configure-freeipa.md) |
| [OpenLDAP](../configure-openldap/configure-openldap.md) |
| [Microsoft AD FS](../configure-microsoft-ad-federation-service-saml/configure-microsoft-ad-federation-service-saml.md) |
| [PingIdentity](configure-pingidentity.md) |
| [Keycloak (OIDC)](configure-keycloak-oidc.md) |
| [Keycloak (SAML)](configure-keycloak-saml.md) |
| [Okta](configure-okta-saml.md) |
| [Google OAuth](configure-google-oauth.md) |
| [Shibboleth](../configure-shibboleth-saml/configure-shibboleth-saml.md) |
| [PingIdentity](configure-pingidentity.md) |
| [Keycloak (OIDC)](configure-keycloak-oidc.md) |
| [Keycloak (SAML)](configure-keycloak-saml.md) |
| [Okta](configure-okta-saml.md) |
| [Google OAuth](configure-google-oauth.md) |
| [Shibboleth](../configure-shibboleth-saml/configure-shibboleth-saml.md) |
| [Generic (OIDC)](configure-generic-oidc.md) |

However, Rancher also provides [local authentication](create-local-users.md).

@@ -62,6 +63,12 @@ After you configure Rancher to allow sign on using an external authentication se
| Allow members of Clusters, Projects, plus Authorized Users and Organizations | Any user in the authorization service and any group added as a **Cluster Member** or **Project Member** can log in to Rancher. Additionally, any user in the authentication service or group you add to the **Authorized Users and Organizations** list may log in to Rancher. |
| Restrict access to only Authorized Users and Organizations | Only users in the authentication service or groups added to the Authorized Users and Organizations can log in to Rancher. |

:::warning

Only trusted admin-level users should have access to the local cluster, which manages all of the other clusters in a Rancher instance. Rancher is directly installed on the local cluster, and Rancher's management features allow admins on the local cluster to provision, modify, connect to, and view details about downstream clusters. Since the local cluster is key to a Rancher instance's architecture, inappropriate access carries security risks.

:::

To set the Rancher access level for users in the authorization service, follow these steps:

1. In the upper left corner, click **☰ > Users & Authentication**.

@@ -133,7 +133,17 @@ Here are a few examples of permission combinations that satisfy Rancher's needs:

:::

#### 4. Copy Azure Application Data
#### 4. Allow Public Client Flows

To log in from the Rancher CLI, you must allow public client flows:

1. From the left navigation menu, select **Authentication**.

1. Under **Advanced Settings**, select **Yes** on the toggle next to **Allow public client flows**.



#### 5. Copy Azure Application Data



@@ -167,7 +177,7 @@ Custom Endpoints are not tested or fully supported by Rancher.

You'll also need to manually enter the Graph, Token, and Auth Endpoints.

- From <b>App registrations</b>, click <b>Endpoints</b>:
- From **App registrations**, click **Endpoints**:



@@ -176,7 +186,7 @@ You'll also need to manually enter the Graph, Token, and Auth Endpoints.
- **OAuth 2.0 token endpoint (v1)** (Token Endpoint)
- **OAuth 2.0 authorization endpoint (v1)** (Auth Endpoint)

#### 5. Configure Azure AD in Rancher
#### 6. Configure Azure AD in Rancher

To complete configuration, enter information about your AD instance in the Rancher UI.

@@ -188,7 +198,7 @@ To complete configuration, enter information about your AD instance in the Ranch

1. Click **AzureAD**.

1. Complete the **Configure Azure AD Account** form using the information you copied while completing [Copy Azure Application Data](#4-copy-azure-application-data).
1. Complete the **Configure Azure AD Account** form using the information you copied while completing [Copy Azure Application Data](#5-copy-azure-application-data).

:::caution

@@ -221,6 +231,8 @@ To complete configuration, enter information about your AD instance in the Ranch

<code>http<span>s://g</span>raph.microsoft.com<del>/abb5adde-bee8-4821-8b03-e63efdc7701c</del></code>

1. (Optional) In Rancher v2.9.0 and later, you can filter users' group memberships in Azure AD to reduce the amount of log data generated. See steps 4–5 of [Filtering Users by Azure AD Auth Group Memberships](#filtering-users-by-azure-ad-auth-group-memberships) for full instructions.

1. Click **Enable**.

**Result:** Azure Active Directory authentication is configured.
@@ -314,6 +326,29 @@ Endpoint | https://login.partner.microsoftonline.cn/
Graph Endpoint | https://microsoftgraph.chinacloudapi.cn
Token Endpoint | https://login.partner.microsoftonline.cn/{tenantID}/oauth2/v2.0/token

## Filtering Users by Azure AD Auth Group Memberships

In Rancher v2.9.0 and later, you can filter users' group memberships from Azure AD to reduce the amount of log data generated. If you did not filter group memberships during initial setup, you can still add filters to an existing Azure AD configuration.

:::warning

Filtering out a user group membership affects more than just logging.

Since the filter prevents Rancher from seeing that the user belongs to an excluded group, Rancher also does not see any permissions from that group. This means that filtering out a group can have the side effect of denying users permissions they should have.

:::

1. In Rancher, in the top left corner, click **☰ > Users & Authentication**.

1. In the left navigation menu, click **Auth Provider**.

1. Click **AzureAD**.

1. Click the checkbox next to **Limit users by group membership**.

1. Enter an [OData filter clause](https://learn.microsoft.com/en-us/odata/concepts/queryoptions-overview#filter) into the **Group Membership Filter** field. For example, if you want to limit logging to group memberships whose name starts with `test`, click the checkbox and enter `startswith(displayName,'test')`.



## Deprecated Azure AD Graph API

@@ -328,4 +363,3 @@ Token Endpoint | https://login.partner.microsoftonline.cn/{tenantID}/oauth2/v2
>- If you don't wish to upgrade to v2.7.0+ after the Azure AD Graph API is retired, you'll need to either:
   - Use the built-in Rancher auth or
   - Use another third-party auth system and set that up in Rancher. Please see the [authentication docs](authentication-config.md) to learn how to configure other open authentication providers.

@@ -0,0 +1,110 @@
---
title: Configure Generic OIDC
description: Create an OpenID Connect (OIDC) client and configure Rancher to work with your authentication provider. Your users can then sign into Rancher using their login from the authentication provider.
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-generic-oidc"/>
</head>

If your organization uses an OIDC provider for user authentication, you can configure Rancher to allow login using Identity Provider (IdP) credentials. Rancher supports integration with the OpenID Connect (OIDC) protocol and the SAML protocol. Both implementations are functionally equivalent when used with Rancher. The following instructions describe how to configure Rancher to work using the OIDC protocol.

## Prerequisites

- In Rancher:
  - Generic OIDC is disabled.

:::note
Consult the documentation for your specific IdP to complete the listed prerequisites.
:::

- In your IdP:
  - Create a new client with the settings below:

    Setting | Value
    ------------|------------
    `Client ID` | `<CLIENT_ID>` (e.g. `rancher`)
    `Name` | `<CLIENT_NAME>` (e.g. `rancher`)
    `Client Protocol` | `openid-connect`
    `Access Type` | `confidential`
    `Valid Redirect URI` | `https://yourRancherHostURL/verify-auth`

- In the new OIDC client, create mappers to expose the user's fields.
  - Create a new Groups Mapper with the settings below:

    Setting | Value
    ------------|------------
    `Name` | `Groups Mapper`
    `Mapper Type` | `Group Membership`
    `Token Claim Name` | `groups`
    `Add to ID token` | `OFF`
    `Add to access token` | `OFF`
    `Add to user info` | `ON`

  - Create a new Client Audience with the settings below:

    Setting | Value
    ------------|------------
    `Name` | `Client Audience`
    `Mapper Type` | `Audience`
    `Included Client Audience` | `<CLIENT_NAME>`
    `Add to access token` | `ON`

  - Create a new "Groups Path" with the settings below:

    Setting | Value
    ------------|------------
    `Name` | `Group Path`
    `Mapper Type` | `Group Membership`
    `Token Claim Name` | `full_group_path`
    `Full group path` | `ON`
    `Add to user info` | `ON`

- Important: Rancher uses the value of the `sub` claim to form the PrincipalID, which is the unique user identifier in Rancher. Make sure this value is unique and immutable.

## Configuring Generic OIDC in Rancher

1. In the upper left corner of the Rancher UI, click **☰ > Users & Authentication**.
1. In the left navigation bar, click **Auth Provider**.
1. Select **Generic OIDC**.
1. Complete the **Configure an OIDC account** form. For help with filling the form, see the [configuration reference](#configuration-reference).
1. Click **Enable**.

Rancher redirects you to the IdP login page. Enter your IdP credentials to validate your Rancher OIDC configuration.

:::note

You may need to disable your popup blocker to see the IdP login page.

:::

**Result:** Rancher is configured to work with your provider using the OIDC protocol. Your users can now sign into Rancher using their IdP logins.

## Configuration Reference

| Field | Description |
| ------------------------- |----------------------------------------------------------------------------------------------------------------------------------------------------|
| Client ID | The Client ID of your OIDC client. |
| Client Secret | The generated secret of your OIDC client. |
| Private Key/Certificate | A key and certificate pair used to establish a secure connection between Rancher and your IdP. Required if HTTPS/SSL is enabled on your OIDC server. |
| Endpoints | Choose whether to use the generated values for the Rancher URL, Issuer, and Auth Endpoint fields, or to provide manual overrides if they are incorrect. |
| Rancher URL | The URL for your Rancher server. |
| Issuer | The URL of your IdP. If your provider has discovery enabled, Rancher uses the Issuer URL to fetch all of the required URLs. |
| Auth Endpoint | The URL where users are redirected to authenticate. |

## Troubleshooting

If you experience issues while testing the connection to the OIDC server, first double-check the configuration options of your OIDC client. You can also inspect the Rancher logs to help pinpoint the cause. Debug logs may contain more detailed information about the error. Refer to [How can I enable debug logging](../../../../faq/technical-items.md#how-can-i-enable-debug-logging) in this documentation.

All Generic OIDC related log entries are prefixed with either `[generic oidc]` or `[oidc]`.

### You are not redirected to your authentication provider

If you fill out the **Configure a Generic OIDC account** form, click **Enable**, and are not redirected to your IdP, verify your OIDC client configuration.

### The generated `Issuer` and `Auth Endpoint` are incorrect

If the `Issuer` and `Auth Endpoint` are generated incorrectly, open the **Configure an OIDC account** form, change **Endpoints** to `Specify (advanced)`, and override the `Issuer` value.

### Error: "Invalid grant_type"

In some cases, the "Invalid grant_type" error message may be misleading and is actually caused by setting the `Valid Redirect URI` incorrectly.

@@ -51,7 +51,6 @@ You can integrate Okta with Rancher, so that authenticated users can access Ranc

:::

1. After you complete the **Configure Okta Account** form, click **Enable**.

   Rancher redirects you to the IdP login page. Enter credentials that authenticate with Okta IdP to validate your Rancher Okta configuration.

@@ -30,6 +30,14 @@ Within Rancher, each person authenticates as a _user_, which is a login that gra

For more information about how authorization works and how to customize roles, see [Role-Based Access Control (RBAC)](manage-role-based-access-control-rbac/manage-role-based-access-control-rbac.md).

## User Retention

In Rancher v2.8.5 and later, you can enable user retention. This feature automatically removes inactive users after a configurable period of time.

The user retention feature is disabled by default.

For more information, see [Enabling User Retention](../../advanced-user-guides/enable-user-retention.md).

## Pod Security Policies

_Pod Security Policies_ (PSPs) are objects that control security-sensitive aspects of the pod specification, such as root privileges. If a pod does not meet the conditions specified in the PSP, Kubernetes does not allow it to start, and Rancher displays an error message.

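As a hypothetical illustration of what such a policy looks like (PSPs were removed in Kubernetes v1.25; the resource name below is a placeholder, not one shipped with Rancher):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example        # placeholder name
spec:
  privileged: false               # disallow privileged containers
  runAsUser:
    rule: MustRunAsNonRoot        # pods that run as root fail to start
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                        # only these volume types are permitted
    - configMap
    - secret
    - emptyDir
```

A pod requesting `privileged: true` or running as UID 0 under this policy is rejected by the admission controller, and Rancher surfaces the resulting error.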
@@ -82,4 +90,4 @@ The following features are available under **Global Configuration**:

- **Global DNS Entries**
- **Global DNS Providers**

As these are legacy features, please see the Rancher v2.0—v2.4 docs on [catalogs](/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/helm-charts-in-rancher.md), [global DNS entries](/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/globaldns.md#adding-a-global-dns-entry), and [global DNS providers](/versioned_docs/version-2.0-2.4/how-to-guides/new-user-guides/helm-charts-in-rancher/globaldns.md#editing-a-global-dns-provider) for more details.

@@ -23,7 +23,7 @@ This option replaces "Rancher" with the value you provide in most places. Files

### Support Links

Use a URL to send new "File an Issue" reports instead of sending users to the GitHub issues page. Optionally show Rancher community support links.

### Logo

@@ -54,8 +54,20 @@ Since the private registry cannot be configured after the cluster is created, yo

1. Select **☰ > Cluster Management**.
1. On the **Clusters** page, click **Create**.
1. Choose a cluster type.
1. In the **Cluster Configuration**, go to the **Registries** tab.
1. Check the box next to **Enable cluster scoped container registry for Rancher system container images**.
1. Enter the registry hostname.
1. Under **Authentication**, select **Create a HTTP Basic Auth Secret** and fill in the credential fields.
1. Click **Create**.

**Result:** The new cluster pulls images from the private registry.

### Working with Private Registry Credentials

When working with private registries, it is important to ensure that any secrets created for these registries are properly backed up. When you add a private registry credential secret through the Rancher UI and select **Create a HTTP Basic Auth Secret**, the secret is included in backup operations performed by Rancher Backups.

However, if you create a credential secret outside of the Rancher UI, such as with kubectl or Terraform, you must add the `fleet.cattle.io/managed=true` label to indicate that the secret should be included in backups created by Rancher Backups.

For example, if you have a custom private registry named "my-private-registry" and create a secret called "my-reg-creds" for it, apply the `fleet.cattle.io/managed=true` label to this secret. This ensures that your backup process captures the secret, so it can easily be restored if needed.

By following this guidance, you can ensure that all of your private registry credentials are backed up and remain accessible in the event of a restore or migration.

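Created outside the Rancher UI, the "my-reg-creds" secret from the example above might look like the following sketch (the namespace and credential values are placeholders; adjust them to your deployment):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-reg-creds
  namespace: fleet-default            # placeholder; use the namespace your cluster expects
  labels:
    fleet.cattle.io/managed: "true"   # include this secret in Rancher Backups
type: kubernetes.io/basic-auth
stringData:
  username: <registry-username>
  password: <registry-password>
```

The `fleet.cattle.io/managed: "true"` label is the piece that opts the secret into backups; without it, secrets created via kubectl or Terraform are skipped.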
@@ -0,0 +1,17 @@

---
title: JSON Web Token (JWT) Authentication
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/jwt-authentication"/>
</head>

Many third-party integrations available for Kubernetes, such as GitLab and HashiCorp Vault, involve giving an external process access to the Kubernetes API using a native Kubernetes service account token for authentication.

In Rancher v2.9.0 and later, service accounts on downstream clusters can authenticate through a JSON Web Token (JWT) using the Rancher authentication proxy. In Rancher versions earlier than v2.9.0, only Rancher-issued tokens were supported.

To enable this feature, follow these steps:

1. In the upper left corner, click **☰ > Cluster Management**.
1. Click **Advanced** to open the dropdown menu.
1. Select **JWT Authentication**.
1. Click the checkbox for the cluster you want to enable JWT authentication for, and click **Enable**. Alternatively, you can click **⋮ > Enable**.

@@ -238,3 +238,9 @@ When you revoke the cluster membership for a standard user that's explicitly ass

- Exercise any [individual project roles](#project-role-reference) they are assigned.

If you want to completely revoke a user's access within a cluster, revoke both their cluster and project memberships.

### External `RoleTemplate` Behavior

In Rancher v2.9.0 and later, external `RoleTemplate` objects can only be created if the backing `ClusterRole` exists in the local cluster or `ExternalRules` is set in your configuration.

For context, the backing `ClusterRole` holds the cluster rules and privileges, and shares the same `metadata.name` as the `RoleTemplate` referenced by the `ClusterRoleTemplateBinding` or `ProjectRoleTemplateBinding` in your cluster. Additionally, note that `escalate` permissions on `RoleTemplates` are required to create external `RoleTemplates` with `ExternalRules`.

@@ -62,21 +62,6 @@ Install the [`rancher-backup chart`](https://github.com/rancher/backup-restore-o

### 2. Restore from backup using a Restore custom resource

:::note Important:

Kubernetes v1.22, available as an experimental feature of v2.6.3, does not support restoring from backup files containing CRDs with the apiVersion `apiextensions.k8s.io/v1beta1`. In v1.22, the default `resourceSet` in the rancher-backup app is updated to collect only CRDs that use `apiextensions.k8s.io/v1`. There are currently two ways to work around this issue:

1. Update the default `resourceSet` to collect the CRDs with the apiVersion v1.
1. Update the default `resourceSet` and the client to use the new APIs internally, with `apiextensions.k8s.io/v1` as the replacement.

:::note

When making or restoring backups for v1.22, the Rancher version and the local cluster's Kubernetes version should be the same. The Kubernetes version should be considered when restoring a backup, since the supported apiVersion in the cluster and in the backup file could be different.

:::

:::

1. When using S3 object storage as the backup source for a restore that requires credentials, create a `Secret` object in this cluster to add the S3 credentials. The secret data must have two keys, `accessKey` and `secretKey`, that contain the S3 credentials.

   The secret can be created in any namespace; this example uses the default namespace.

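The page's original example is not included in this diff; a minimal sketch of such a secret (the name and credential values are placeholders) could look like:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-creds           # placeholder name
  namespace: default
type: Opaque
stringData:
  accessKey: <S3_ACCESS_KEY>   # the two keys must be named exactly accessKey and secretKey
  secretKey: <S3_SECRET_KEY>
```

The `Restore` custom resource then references this secret by name and namespace when pulling the backup file from S3.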
@@ -79,15 +79,20 @@ If you are using [local snapshots](./back-up-rancher-launched-kubernetes-cluster

1. In the **Clusters** page, go to the cluster where you want to remove nodes.
1. In the **Machines** tab, click **⋮ > Delete** on each node you want to delete. Initially, the nodes hang in a `deleting` state, but once all etcd nodes are deleting, they are removed together. This is because Rancher sees all etcd nodes deleting and proceeds to "short circuit" the etcd safe-removal logic.

1. After all etcd nodes are removed, add the new etcd node that you are planning to restore from. Assign the new node the role of `all` (etcd, controlplane, and worker).

   - If the node was previously in a cluster, [clean the node](../manage-clusters/clean-cluster-nodes.md#cleaning-up-nodes) first.
   - For custom clusters, go to the **Registration** tab and check the box for `etcd, controlplane, and worker`. Then copy and run the registration command on your node.
   - For node driver clusters, a new node is provisioned automatically.

   At this point, Rancher indicates that restoration from an etcd snapshot is required.

1. Restore from an etcd snapshot.

   :::note
   Because the etcd node is a clean node, you may need to manually create the `/var/lib/rancher/<k3s/rke2>/server/db/snapshots/` path.
   :::

   - For S3 snapshots, restore using the UI.
     1. Click the **Snapshots** tab to view the list of saved snapshots.
     1. Go to the snapshot you want to restore and click **⋮ > Restore**.
@@ -95,7 +100,15 @@ If you are using [local snapshots](./back-up-rancher-launched-kubernetes-cluster
     1. Click **Restore**.
   - For local snapshots, restoring through the UI is **not** available.
     1. In the upper right corner, click **⋮ > Edit YAML**.
     1. Define `spec.cluster.rkeConfig.etcdSnapshotRestore.name` as the filename of the snapshot on disk in `/var/lib/rancher/<k3s/rke2>/server/db/snapshots/`.
     1. The example YAML below can be added under your `rkeConfig` to configure the etcd restore:

        ```yaml
        ...
        rkeConfig:
          etcdSnapshotRestore:
            name: <string> # This field is required. It refers to the filename of the associated etcdsnapshot object.
        ...
        ```

1. After restoration is successful, you can scale your etcd nodes back up to the desired redundancy.

@@ -58,7 +58,7 @@ To display prerelease versions:

| rancher-logging | 100.0.0+up3.12.0 | 100.1.2+up3.17.4 |
| rancher-longhorn | 100.0.0+up1.1.2 | 100.1.2+up1.2.4 |
| rancher-monitoring | 100.0.0+up16.6.0 | 100.1.2+up19.0.3 |
| rancher-sriov<sup>[1](#sriov-chart-deprecation-and-migration)</sup> | 100.0.0+up0.1.0 | 100.0.3+up0.1.0 |
| rancher-vsphere-cpi | 100.3.0+up1.2.1 | 100.3.0+up1.2.1 |
| rancher-vsphere-csi | 100.3.0+up2.5.1-rancher1 | 100.3.0+up2.5.1-rancher1 |
| rancher-wins-upgrader | 0.0.100 | 100.0.1+up0.0.1 |

@@ -163,10 +163,37 @@ spec:

:::

### Add Custom OCI Chart Repositories

:::caution

This feature is currently experimental and is not officially supported in Rancher.

:::

Helm v3 introduced storing Helm charts as [Open Container Initiative (OCI)](https://opencontainers.org/about/overview/) artifacts in container registries. With Rancher v2.9.0, you can add [OCI-based Helm chart repositories](https://helm.sh/docs/topics/registries/) alongside HTTP-based and Git-based repositories. This means you can deploy apps that are stored as OCI artifacts. For more information, see [Using OCI Helm Chart Repositories](./oci-repositories.md).

### Helm Compatibility

Only Helm 3 compatible charts are supported.

### Refresh Chart Repositories

Use the **Refresh** button to sync changes from selected Helm chart repositories on the **Repositories** page.

To refresh a chart repository:

1. Click **☰ > Cluster Management**.
1. Find the name of the cluster whose repositories you want to access. Click **Explore** at the end of the cluster's row.
1. In the left navigation menu on the **Cluster Dashboard**, click **Apps > Repositories**.
1. Use the toggle next to the **State** field to select all repositories, or toggle specific chart repositories to sync changes.
1. Click **Refresh**.
1. The **⋮** at the end of each chart repository row also includes a **Refresh** option, which you can click to refresh that repository.

Upon refresh, non-airgapped Rancher installations reflect any chart repository changes immediately. The **State** field for updated repositories moves from `In Progress` to `Active` once the action is completed.

Airgapped installations where Rancher is configured to use the packaged copy of Helm system charts ([`useBundledSystemChart=true`](../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md#helm-chart-options-for-air-gap-installations)) only refer to the bundled [system-chart](https://github.com/rancher/system-charts) repository, which can't be refreshed or synced.

## Deploy and Upgrade Charts

To install and deploy a chart:
@@ -212,6 +239,31 @@ To upgrade legacy multi-cluster apps:

1. Click **☰**.
1. Under **Legacy Apps**, click **Multi-cluster Apps**.

### Chart-Specific Information

#### sriov Chart Deprecation and Migration

The `sriov` (SR-IOV network operator) chart from the Rancher Charts repository is deprecated and will be removed in Rancher v2.10. Migrate to the `sriov-network-operator` chart from the [SUSE Edge repository](https://github.com/suse-edge/charts) instead.

To migrate, follow these steps:

1. Add the SUSE Edge repository to your cluster by following the steps in [Add Custom Git Repositories](#add-custom-git-repositories).
1. For the **Git Repo URL** field, enter `https://github.com/suse-edge/charts`.
1. Click **Create**.
1. In the left navigation menu on the **Cluster Dashboard**, click **Apps > Charts**.
1. Find the `sriov-network-operator` chart and click it.
1. Click **Install**.
1. In the **Name** field, enter the same name you used for your existing `sriov` chart installation.
1. Click **Next**.
1. Click **Install**.

**Result:** Rancher redirects you to the **Installed Apps** page, where your existing installation enters the **Updating** state. The migration is complete when it enters the **Deployed** state.

## Limitations

- Dashboard apps or Rancher feature charts can't be installed using the Rancher CLI.

- When determining the most recent version to display in the **Upgradable** column on the **Apps > Installed Apps** page, Rancher considers versions of the Helm chart from all repositories on the cluster, rather than only the repository it was installed from.

  For example, suppose you install `cert-manager` v1.13.0 from repository A, where v1.14.0 is now the most recent version available. In this case, you expect **Upgradable** to display v1.14.0. However, if the cluster also has access to repository B, where v1.15.0 of `cert-manager` is available, **Upgradable** displays v1.15.0 even though the original installation used repository A.

@@ -0,0 +1,115 @@

---
title: Using OCI-Based Helm Chart Repositories
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/helm-charts-in-rancher/oci-registries"/>
</head>

:::caution

This feature is currently experimental and is not officially supported in Rancher.

:::

Helm v3 introduced storing Helm charts as [Open Container Initiative (OCI)](https://opencontainers.org/about/overview/) artifacts in container registries. With Rancher v2.9.0, you can add [OCI-based Helm chart repositories](https://helm.sh/docs/topics/registries/) alongside HTTP-based and Git-based repositories. This means that you can deploy apps that are stored as OCI artifacts.

## Add an OCI-Based Helm Chart Repository

To add an OCI-based Helm chart repository through the Rancher UI:

1. Click **☰ > Cluster Management**.
2. Find the name of the cluster whose repositories you want to access. Click **Explore** at the end of the cluster's row.
3. In the left navigation bar, select **Apps > Repositories**.
4. Click **Create**.
5. Enter a **Name** for the registry. Select **OCI Repository** as the target.
6. Enter the **OCI Repository Host URL** for the registry. The registry endpoint must not contain anything besides OCI Helm chart artifacts, and the artifacts must all have unique names. If you attempt to add an endpoint that contains any other kinds of files or artifacts, the OCI repository is not added.

   :::note

   You can use the **OCI URL** field to fine-tune how many charts from the registry are available for installation on Rancher. More generic endpoints target more charts, as the following examples demonstrate:

   - `oci://<registry-host>`: Every chart in the registry becomes available for installation, regardless of namespace or tag.
   - `oci://<registry-host>/<namespace>`: Every chart in the specified namespace within the registry becomes available for installation.
   - `oci://<registry-host>/<namespace>/<chart-name>`: Only the specified chart and any associated tags or versions of that chart become available for installation.
   - `oci://<registry-host>/<namespace>/<chart-name>:<tag>`: Only the chart with the specified tag becomes available for installation.

   :::

7. Set up authentication. Select **Basicauth** from the authentication field and enter a username and password as required. Otherwise, create or select an **Authentication** secret. See [Authentication](#authentication-for-oci-based-helm-chart-repositories) for a full description.
8. (Optional) Enter a base64-encoded DER certificate in the **CA Cert Bundle** field. This field is for cases where you have a private OCI-based Helm chart repository and need Rancher to trust its certificates.
9. (Optional) To allow insecure connections without performing an SSL check, select **Skip TLS Verification**. To force Rancher to use HTTP instead of HTTPS to send requests to the repository, select **Insecure Plain Http**.
10. (Optional) If your repository has a rate limiting policy and may respond with status code `429 Too Many Requests`, you may want to fill out the fields under **Exponential Back Off**:
    - **Min Wait**: The minimum duration in seconds that Rancher should wait before retrying. The default is 1 second.
    - **Max Wait**: The maximum duration in seconds that Rancher should wait before retrying. The default is 5 seconds.
    - **Max Number of Retries**: The default is 5 retries.

    Once these values are set, Rancher responds to the `429` status code by staggering requests based on the minimum and maximum wait values. The wait time between retries increases exponentially until Rancher has sent the maximum number of retries. See [Rate Limiting](#rate-limiting-of-oci-based-helm-chart-repositories) for more details.
11. Add any labels and annotations.
12. Click **Create**.

It may take some time for the OCI repository to activate, particularly if the OCI endpoint contains multiple namespaces.

## Authentication for OCI-Based Helm Chart Repositories

Rancher supports BasicAuth for OCI registries. You must create a [**BasicAuth** Kubernetes secret](https://kubernetes.io/docs/concepts/configuration/secret/#basic-authentication-secret). You can also [create the secret through the Rancher UI](../kubernetes-resources-setup/secrets.md).

The CRD that is linked to the OCI-based Helm repository is `ClusterRepo`.

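A minimal sketch of such a BasicAuth secret (the name, namespace, and credential values below are placeholders, not values mandated by Rancher):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oci-repo-creds            # placeholder name
  namespace: cattle-system        # placeholder; use the namespace your setup expects
type: kubernetes.io/basic-auth    # the standard BasicAuth secret type
stringData:
  username: <registry-username>
  password: <registry-password>
```

The OCI repository's `ClusterRepo` configuration then references this secret so Rancher can authenticate to the registry.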
## View Helm Charts in OCI-Based Helm Chart Repositories

To view Helm charts in the OCI-based Helm chart repository after it reaches an `Active` state:

1. Click **☰**. Under **Explore Cluster** in the left navigation menu, select a cluster.
1. Click **Apps > Charts**.
1. Select the OCI-based Helm chart repository from the dropdown.

## Refresh an OCI-Based Helm Chart Repository

Rancher automatically refreshes the OCI-based Helm chart repository every 6 hours.

If you need to update immediately, you can [perform a manual refresh](./helm-charts-in-rancher.md#refresh-chart-repositories).

## Update an OCI-Based Helm Chart Repository Configuration

1. Click **☰ > Cluster Management**.
1. Find the name of the cluster whose repositories you want to access. Click **Explore** at the end of the cluster's row.
1. In the left navigation bar, select **Apps > Repositories**.
1. Find the row associated with the OCI-based Helm chart repository, and click **⋮**.
1. From the submenu, select **Edit Config**.

## Delete an OCI-Based Helm Chart Repository

1. Click **☰ > Cluster Management**.
1. Find the name of the cluster whose repositories you want to access. Click **Explore** at the end of the cluster's row.
1. In the left navigation bar, select **Apps > Repositories**.
1. Select the row associated with the OCI-based Helm chart repository, and click **Delete**.

## Size Limitations of OCI-Based Helm Chart Repositories in Rancher

Due to security concerns, there are limits on how large a Helm chart you can deploy through an OCI-based repository, and how much metadata you can use to describe the Helm charts within a single OCI endpoint.

Rancher can deploy OCI Helm charts up to 20 MB in size.

## Rate Limiting of OCI-Based Helm Chart Repositories

Different OCI registries implement rate limiting in different ways.

Most servers return a `Retry-After` header, indicating how long to wait before rate limiting is lifted.

Docker Hub returns a `429` status code once all allocated requests have been used. It also returns a `RateLimit-Remaining` header, which describes the rate limiting policy.

Rancher currently checks for the `Retry-After` header. It also handles Docker Hub-style responses (status code `429` and the `RateLimit-Remaining` header) and automatically waits before making a new request. When handling `Retry-After` or Docker Hub-style responses, Rancher ignores `ExponentialBackOff` values.

If you have an OCI-based Helm chart repository that doesn't implement the `Retry-After` or `RateLimit-Remaining` headers, and think you may be rate-limited at some point, fill out the fields under **Exponential Back Off** when you add the repository.

For example, if you have an OCI-based Helm chart repository that doesn't return a `Retry-After` header, but you know that the server allows 50 requests in 24 hours, you can give Rancher a **Min Wait** value of **86400** seconds, a **Max Wait** value of **90000** seconds, and a **Max Number of Retries** value of **1**. Then, if Rancher gets rate limited by the server, Rancher waits for 24 hours before trying again. The request should succeed, since Rancher hasn't sent any other requests in the previous 24 hours.

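The staggered-retry behavior described above can be sketched as follows. This is a simplified illustration of exponential back off between a minimum and maximum wait, not Rancher's actual implementation:

```python
def backoff_waits(min_wait: float, max_wait: float, max_retries: int) -> list[float]:
    """Return the wait (in seconds) before each retry: doubling, capped at max_wait."""
    waits = []
    wait = min_wait
    for _ in range(max_retries):
        waits.append(wait)
        wait = min(wait * 2, max_wait)  # exponential growth, capped
    return waits

# With the defaults mentioned above (min 1 s, max 5 s, 5 retries):
print(backoff_waits(1, 5, 5))      # [1, 2, 4, 5, 5]
# With the 50-requests-per-24-hours example (min 86400 s, max 90000 s, 1 retry):
print(backoff_waits(86400, 90000, 1))  # [86400]
```

The single-retry case shows why the 24-hour example works: one retry after waiting out the full rate-limit window.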
## Troubleshooting OCI-based Helm Registries

- To get more detailed logging information, [enable the debug option](../../../troubleshooting/other-troubleshooting-tips/logging.md#kubernetes-install) while deploying Rancher.

- If there is any discrepancy between the repository contents and Rancher, refresh the cluster repository first. If the discrepancy persists, delete the OCI-based Helm chart repository from Rancher and add it again. Deleting the repository won't delete any Helm charts that are already installed.

- Apps installed through OCI-based Helm chart repositories are subject to a known issue with how Rancher displays upgradable version information. See the [Limitations](./helm-charts-in-rancher.md#limitations) section of **Helm Charts and Apps** for more details.

@@ -19,7 +19,7 @@ These nodes must be in the same region. You may place these servers in separate

To install the Rancher management server on a high-availability RKE2 cluster, we recommend setting up the following infrastructure:

- **Three Linux nodes,** typically virtual machines, in the infrastructure provider of your choice.
- **A load balancer** to direct traffic to the nodes.
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.

### 1. Set up Linux Nodes
@@ -51,7 +51,7 @@ Do not use this load balancer (i.e., the `local` cluster Ingress) to load balance

:::

### 3. Set up the DNS Record

Once you have set up your load balancer, you need to create a DNS record to send traffic to this load balancer.

@@ -59,4 +59,4 @@ Depending on your environment, this may be an A record pointing to the load bala

You need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.

For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer).

@@ -49,5 +49,5 @@ number of nodes for each Kubernetes role, refer to the section on [recommended a

### Networking

* Minimize network latency. Rancher recommends minimizing latency between the etcd nodes. The default setting for `heartbeat-interval` is `500`, and the default setting for `election-timeout` is `5000`. These [settings for etcd tuning](https://etcd.io/docs/v3.5/tuning/) allow etcd to run in most networks (except very high latency networks).
* Cluster nodes should be located within a single region. Most cloud providers offer multiple availability zones within a region, which can be used to create higher availability for your cluster. Using multiple availability zones is fine for nodes with any role. If you are using [Kubernetes Cloud Provider](../set-up-cloud-providers/set-up-cloud-providers.md) resources, consult the documentation for any restrictions (for example, zone storage restrictions).

@@ -57,7 +57,7 @@ The number of nodes that you can lose at once while maintaining cluster availabi
|
||||
|
||||
References:
* [Official etcd documentation on optimal etcd cluster size](https://etcd.io/docs/v3.5/faq/#what-is-failure-tolerance)

* [Official Kubernetes documentation on operating etcd clusters for Kubernetes](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/)

### Number of Worker Nodes

---
title: Migrating Azure In-tree to Out-of-tree
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-azure"/>
</head>

Kubernetes is moving away from maintaining cloud providers in-tree.

Starting with Kubernetes 1.29, in-tree cloud providers are disabled by default. To keep using the in-tree Azure cloud provider, or to migrate from the in-tree to the out-of-tree provider, you must set the `DisableCloudProviders` feature gate to `false` by adding `feature-gates=DisableCloudProviders=false` as an additional argument for the cluster's Kubelet, Controller Manager, and API Server in the advanced cluster configuration. Additionally, set `DisableKubeletCloudCredentialProvider=false` in the Kubelet's arguments to keep in-tree authentication to Azure container registries for image pull credentials. See the [upstream docs](https://github.com/kubernetes/kubernetes/pull/117503) for more details.
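
As a sketch, the feature-gate arguments described above might look like the following in an RKE2 cluster's configuration (field names follow the RKE2 provisioning schema used elsewhere on this page; adjust for your distribution and setup):

```yaml
spec:
  rkeConfig:
    machineGlobalConfig:
      kube-apiserver-arg:
        - feature-gates=DisableCloudProviders=false
      kube-controller-manager-arg:
        - feature-gates=DisableCloudProviders=false
    machineSelectorConfig:
      - config:
          kubelet-arg:
            - feature-gates=DisableCloudProviders=false,DisableKubeletCloudCredentialProvider=false
```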
In Kubernetes v1.30 and later, the in-tree cloud providers have been removed. Rancher allows you to upgrade to Kubernetes v1.30 when you migrate from an in-tree to out-of-tree provider.

To migrate from the in-tree cloud provider to the out-of-tree Azure cloud provider, you must stop the existing cluster's kube controller manager and install the Azure cloud controller manager.

If some downtime during migration is acceptable, follow the instructions to [set up an external cloud provider](../set-up-cloud-providers/azure.md#using-the-out-of-tree-azure-cloud-provider). These instructions outline how to configure the out-of-tree cloud provider for a newly provisioned cluster. During setup, there is some downtime, as there is a time gap between when the old cloud provider stops running and when the new cloud provider starts to run.

If your setup can't tolerate any control plane downtime, you must enable leader migration. This facilitates a smooth transition from the controllers in the kube controller manager to their counterparts in the cloud controller manager.

:::note Important:
The Kubernetes [cloud controller migration documentation](https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/#before-you-begin) states that it's possible to migrate with the same Kubernetes version, but assumes that the migration is part of a Kubernetes upgrade. Refer to the Kubernetes documentation on [migrating to use the cloud controller manager](https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/) to see if you need to customize your setup before migrating. Confirm your [migration configuration values](https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/#default-configuration). If your cloud provider provides an implementation of the Node IPAM controller, you also need to [migrate the IPAM controller](https://kubernetes.io/docs/tasks/administer-cluster/controller-manager-leader-migration/#node-ipam-controller-migration).

Starting with Kubernetes v1.26, the in-tree persistent volume types `kubernetes.io/azure-disk` and `kubernetes.io/azure-file` are deprecated and no longer supported. There are no plans to remove these drivers following their deprecation; however, you should migrate to the corresponding CSI drivers, `disk.csi.azure.com` and `file.csi.azure.com`. To review the migration options for your storage classes and upgrade your cluster to use the Azure Disks and Azure Files CSI drivers, see [Migrate from in-tree to CSI drivers](https://learn.microsoft.com/en-us/azure/aks/csi-migrate-in-tree-volumes).
:::
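
Before planning the CSI migration mentioned in the note above, it can help to list which provisioner each storage class currently uses; classes still showing `kubernetes.io/azure-disk` or `kubernetes.io/azure-file` are migration candidates (standard kubectl, run against your cluster):

```shell
kubectl get storageclass -o custom-columns='NAME:.metadata.name,PROVISIONER:.provisioner'
```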

<Tabs groupId="k8s-distro">
<TabItem value="RKE2">

1. Update the cluster config to enable leader migration:

   ```yaml
   spec:
     rkeConfig:
       machineSelectorConfig:
         - config:
             kube-controller-manager-arg:
               - enable-leader-migration
           machineLabelSelector:
             matchExpressions:
               - key: rke.cattle.io/control-plane-role
                 operator: In
                 values:
                   - 'true'
   ```

   Note that the cloud provider is still `azure` at this step:

   ```yaml
   spec:
     rkeConfig:
       machineGlobalConfig:
         cloud-provider-name: azure
   ```

2. Cordon control plane nodes so that Azure cloud controller pods run on nodes only after upgrading to the external cloud provider:

   ```shell
   kubectl cordon -l "node-role.kubernetes.io/control-plane=true"
   ```

3. To deploy the Azure cloud controller manager, use any of the available options:
   - UI: Follow steps 1-10 of [Helm chart installation from UI](../set-up-cloud-providers/azure.md#helm-chart-installation-from-ui) to install the cloud controller manager chart.
   - CLI: Follow steps 1-4 of [Helm chart installation from CLI](../set-up-cloud-providers/azure.md#helm-chart-installation-from-cli).
   - Update the cluster's additional manifest: Follow steps 2-3 to [install the cloud controller manager chart](../set-up-cloud-providers/azure.md#using-the-out-of-tree-azure-cloud-provider).

   Confirm that the chart is installed, but that the new pods aren't running yet because the control plane nodes are cordoned.
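
   For example, you can check that the workloads created by the chart exist but have no ready pods yet while the control plane nodes remain cordoned (the resource names below match those used elsewhere in this guide):

   ```shell
   kubectl -n kube-system get deployment cloud-controller-manager
   kubectl -n kube-system get daemonset cloud-node-manager
   ```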

4. To enable leader migration, add `--enable-leader-migration` to the container arguments of `cloud-controller-manager`:

   ```shell
   kubectl -n kube-system patch deployment cloud-controller-manager \
     --type=json \
     -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-leader-migration"}]'
   ```

5. Update the provisioning cluster to change the cloud provider and remove the leader migration arguments from the kube controller manager. If you're upgrading the Kubernetes version, also set `spec.kubernetesVersion` in the cluster YAML file.

   ```yaml
   spec:
     rkeConfig:
       machineGlobalConfig:
         cloud-provider-name: external
   ```

   Remove `enable-leader-migration` from the kube controller manager:

   ```yaml
   spec:
     rkeConfig:
       machineSelectorConfig:
         - config:
             kube-controller-manager-arg:
               - enable-leader-migration
           machineLabelSelector:
             matchExpressions:
               - key: rke.cattle.io/control-plane-role
                 operator: In
                 values:
                   - 'true'
   ```

6. Uncordon the control plane nodes so that the Azure cloud controller pods can now run on them:

   ```shell
   kubectl uncordon -l "node-role.kubernetes.io/control-plane=true"
   ```

7. Update the cluster. The `cloud-controller-manager` pods should now be running:

   ```shell
   kubectl rollout status deployment -n kube-system cloud-controller-manager
   kubectl rollout status daemonset -n kube-system cloud-node-manager
   ```

8. The cloud provider is responsible for setting the ProviderID of each node. Check that all nodes are initialized with a ProviderID:

   ```shell
   kubectl describe nodes | grep "ProviderID"
   ```

9. (Optional) Disable leader migration after the upgrade; leader migration isn't required once only one cloud controller manager is running. Update the `cloud-controller-manager` deployment to remove this flag from the container arguments:

   ```yaml
   - --enable-leader-migration=true
   ```

</TabItem>

<TabItem value="RKE">

1. Update the cluster config to enable leader migration in `cluster.yml`:

   ```yaml
   services:
     kube-controller:
       extra_args:
         enable-leader-migration: "true"
   ```

   Note that the cloud provider is still `azure` at this step:

   ```yaml
   cloud_provider:
     name: azure
   ```

2. Cordon the control plane nodes, so that Azure cloud controller pods run on nodes only after upgrading to the external cloud provider:

   ```shell
   kubectl cordon -l "node-role.kubernetes.io/controlplane=true"
   ```

3. To install the Azure cloud controller manager, follow the same steps as when installing the Azure cloud provider on a new cluster:
   - UI: Follow steps 1-10 of [Helm chart installation from UI](../set-up-cloud-providers/azure.md#helm-chart-installation-from-ui) to install the cloud controller manager chart.
   - CLI: Follow steps 1-4 of [Helm chart installation from CLI](../set-up-cloud-providers/azure.md#helm-chart-installation-from-cli) to install the cloud controller manager chart.

4. Confirm that the chart is installed, but that the new pods aren't running yet because the control plane nodes are cordoned. After you update the cluster in the next step, RKE upgrades and uncordons each node, and schedules the `cloud-controller-manager` pods.

5. To enable leader migration, add `--enable-leader-migration` to the container arguments of `cloud-controller-manager`:

   ```shell
   kubectl -n kube-system patch deployment cloud-controller-manager \
     --type=json \
     -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-leader-migration"}]'
   ```

6. Update `cluster.yml` to change the cloud provider to `external` and remove the leader migration arguments from the kube-controller:

   ```yaml
   rancher_kubernetes_engine_config:
     cloud_provider:
       name: external
   ```

   Remove `enable-leader-migration` if you don't want it enabled in your cluster:

   ```yaml
   services:
     kube-controller:
       extra_args:
         enable-leader-migration: "true"
   ```

7. If you're upgrading the cluster's Kubernetes version, set the Kubernetes version as well.
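
   For example, in `cluster.yml` (the version string below is a placeholder; use a Kubernetes version supported by your Rancher release):

   ```yaml
   kubernetes_version: v1.26.x-rancher1-1
   ```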
8. Update the cluster. The `cloud-controller-manager` pods should now be running:

   ```shell
   kubectl rollout status deployment -n kube-system cloud-controller-manager
   kubectl rollout status daemonset -n kube-system cloud-node-manager
   ```

9. The cloud provider is responsible for setting the ProviderID of each node. Verify that all nodes are initialized with a ProviderID:

   ```shell
   kubectl describe nodes | grep "ProviderID"
   ```

10. (Optional) Disable leader migration after the upgrade; leader migration isn't required once only one cloud controller manager is running. Update the `cloud-controller-manager` deployment to remove this flag from the container arguments:

    ```yaml
    - --enable-leader-migration=true
    ```

</TabItem>
</Tabs>

For hardware recommendations for large Kubernetes clusters, refer to the official Kubernetes documentation on [building large clusters](https://kubernetes.io/docs/setup/best-practices/cluster-large/).

For hardware recommendations for etcd clusters in production, refer to the official [etcd documentation](https://etcd.io/docs/v3.5/op-guide/hardware/).

## Networking Requirements
## Authorized Cluster Endpoint Support for RKE2 and K3s Clusters

Rancher supports Authorized Cluster Endpoints (ACE) for registered RKE2 and K3s clusters. This support includes manual steps that you perform on the downstream cluster to enable the ACE. For additional information, see [Authorized Cluster Endpoint](../manage-clusters/access-clusters/authorized-cluster-endpoint.md).

:::note Notes:

<Tabs groupId="k8s-distro">
<TabItem value="RKE2">

Official upstream docs for [Helm chart installation](https://github.com/kubernetes/cloud-provider-aws/tree/master/charts/aws-cloud-controller-manager) can be found on GitHub.

1. Add the Helm repository:

</TabItem>

<TabItem value="RKE">

Official upstream docs for [Helm chart installation](https://github.com/kubernetes/cloud-provider-aws/tree/master/charts/aws-cloud-controller-manager) can be found on GitHub.

1. Add the Helm repository:

10. Install the chart and confirm that the DaemonSet `aws-cloud-controller-manager` deploys successfully:

    ```shell
    kubectl rollout status daemonset -n kube-system aws-cloud-controller-manager
    kubectl rollout status deployment -n kube-system aws-cloud-controller-manager
    ```

</TabItem>


  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/azure"/>
</head>

:::note Important:

In Kubernetes 1.30 and later, you must use an out-of-tree Azure cloud provider. The in-tree Azure cloud provider has been [removed completely](https://github.com/kubernetes/kubernetes/pull/122857), and won't work after an upgrade to Kubernetes 1.30. The steps listed below are still required to set up an Azure cloud provider. You can [set up an out-of-tree cloud provider](#using-the-out-of-tree-azure-cloud-provider) after completing the prerequisites for Azure.

You can also [migrate from an in-tree to an out-of-tree Azure cloud provider](../migrate-to-an-out-of-tree-cloud-provider/migrate-to-out-of-tree-azure.md) on Kubernetes 1.29 and earlier. All existing clusters must migrate prior to upgrading to v1.30 in order to stay functional.

Starting with Kubernetes 1.29, in-tree cloud providers are disabled by default. To use the in-tree Azure cloud provider, you must set the `DisableCloudProviders` feature gate to `false` by adding `feature-gates=DisableCloudProviders=false` as an additional argument for the cluster's Kubelet, Controller Manager, and API Server in the advanced cluster configuration. Additionally, set `DisableKubeletCloudCredentialProvider=false` in the Kubelet's arguments to keep in-tree authentication to Azure container registries for image pull credentials. See the [upstream docs](https://github.com/kubernetes/kubernetes/pull/117503) for more details.

Starting with Kubernetes version 1.26, in-tree persistent volume types `kubernetes.io/azure-disk` and `kubernetes.io/azure-file` are deprecated and will no longer be supported. For new clusters, [install the CSI drivers](#installing-csi-drivers), or migrate to the corresponding CSI drivers `disk.csi.azure.com` and `file.csi.azure.com` by following the [upstream migration documentation](https://learn.microsoft.com/en-us/azure/aks/csi-migrate-in-tree-volumes).
:::

When using the `Azure` cloud provider, you can leverage the following capabilities:

- **Load Balancers:** Launches an Azure Load Balancer within a specific Network Security Group.

## RKE2 Cluster Set-up in Rancher

:::note Important:
This section is valid only for creating clusters with the in-tree cloud provider.
:::

1. Choose **Azure** from the **Cloud Provider** drop-down in the **Cluster Configuration** section.

2. Supply the Cloud Provider Configuration. Note that Rancher automatically creates a new Network Security Group, Resource Group, Availability Set, Subnet, and Virtual Network. If you already have some or all of these created, you must specify them before creating the cluster.
   * Click **Show Advanced** to view or edit these automatically generated names. Your Cloud Provider Configuration **must** match the fields in the **Machine Pools** section. If you have multiple pools, they must all use the same Resource Group, Availability Set, Subnet, Virtual Network, and Network Security Group.
   * An example is provided below. Modify it as needed.

<details id="v2.6.0-cloud-provider-config-file">
<summary>Example Cloud Provider Config</summary>

</details>

3. Under the **Cluster Configuration > Advanced** section, click **Add** under **Additional Controller Manager Args** and add this flag: `--configure-cloud-routes=false`

4. Click **Create** to submit the form and create the cluster.

## Cloud Provider Configuration

Rancher automatically creates a new Network Security Group, Resource Group, Availability Set, Subnet, and Virtual Network. If you already have some or all of these created, you must specify them before creating the cluster. You can check **RKE1 Node Templates** or **RKE2 Machine Pools** to view or edit these automatically generated names.

**Refer to the full list of configuration options in the [upstream docs](https://cloud-provider-azure.sigs.k8s.io/install/configs/).**

:::note
1. `useInstanceMetadata` must be set to `true` for the cloud provider to correctly configure `providerID`.
2. `excludeMasterFromStandardLB` must be set to `false` if you need to add nodes labeled `node-role.kubernetes.io/master` to the backend of the Azure Load Balancer (ALB).
3. `loadBalancerSku` can be set to `basic` or `standard`. The Basic SKU is scheduled for retirement in September 2025. Refer to the [Azure upstream docs](https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/public-ip-basic-upgrade-guidance#basic-sku-vs-standard-sku) for more information.
:::

Azure supports reading the cloud config from a Kubernetes secret. The secret is a serialized version of the `azure.json` file. When the secret changes, the cloud controller manager reloads the configuration without restarting the pod. It is recommended that the Helm chart read the Cloud Provider Config from this secret.

Note that the chart reads the Cloud Provider Config from a given secret name in the `kube-system` namespace. Since Azure reads Kubernetes secrets, RBAC also needs to be configured. An example secret for the Cloud Provider Config is shown below. Modify it as needed and create the secret.

```yaml
# azure-cloud-config.yaml
apiVersion: v1
kind: Secret
metadata:
  name: azure-cloud-config
  namespace: kube-system
type: Opaque
stringData:
  cloud-config: |-
    {
      "cloud": "AzurePublicCloud",
      "tenantId": "<tenant-id>",
      "subscriptionId": "<subscription-id>",
      "aadClientId": "<client-id>",
      "aadClientSecret": "<client-secret>",
      "resourceGroup": "docker-machine",
      "location": "westus",
      "subnetName": "docker-machine",
      "securityGroupName": "rancher-managed-kqmtsjgJ",
      "securityGroupResourceGroup": "docker-machine",
      "vnetName": "docker-machine-vnet",
      "vnetResourceGroup": "docker-machine",
      "primaryAvailabilitySetName": "docker-machine",
      "routeTableResourceGroup": "docker-machine",
      "cloudProviderBackoff": false,
      "useManagedIdentityExtension": false,
      "useInstanceMetadata": true,
      "loadBalancerSku": "standard",
      "excludeMasterFromStandardLB": false
    }
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/cluster-service: "true"
  name: system:azure-cloud-provider-secret-getter
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["azure-cloud-config"]
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    kubernetes.io/cluster-service: "true"
  name: system:azure-cloud-provider-secret-getter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:azure-cloud-provider-secret-getter
subjects:
  - kind: ServiceAccount
    name: azure-cloud-config
    namespace: kube-system
```

## Using the Out-of-tree Azure Cloud Provider

<Tabs groupId="k8s-distro">
<TabItem value="RKE2">

1. Select **External** from the **Cloud Provider** drop-down in the **Cluster Configuration** section.

2. Prepare the Cloud Provider Configuration, which you set in the next step. Note that Rancher automatically creates a new Network Security Group, Resource Group, Availability Set, Subnet, and Virtual Network. If you already have some or all of these created, you must specify them before creating the cluster.
   - Click **Show Advanced** to view or edit these automatically generated names. Your Cloud Provider Configuration **must** match the fields in the **Machine Pools** section. If you have multiple pools, they must all use the same Resource Group, Availability Set, Subnet, Virtual Network, and Network Security Group.

3. Under **Cluster Configuration > Advanced**, click **Add** under **Additional Controller Manager Args** and add this flag: `--configure-cloud-routes=false`.

Note that the chart reads the Cloud Provider Config from the secret in the `kube-system` namespace. An example secret for the Cloud Provider Config is shown below. Modify it as needed. Refer to the full list of configuration options in the [upstream docs](https://cloud-provider-azure.sigs.k8s.io/install/configs/).
```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: azure-cloud-controller-manager
  namespace: kube-system
spec:
  chart: cloud-provider-azure
  repo: https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo
  targetNamespace: kube-system
  bootstrap: true
  valuesContent: |-
    infra:
      clusterName: <cluster-name>
    cloudControllerManager:
      cloudConfigSecretName: azure-cloud-config
      cloudConfig: null
      clusterCIDR: null
      enableDynamicReloading: 'true'
      nodeSelector:
        node-role.kubernetes.io/control-plane: 'true'
      allocateNodeCidrs: 'false'
      hostNetworking: true
      caCertDir: /etc/ssl
      configureCloudRoutes: 'false'
      enabled: true
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
          value: 'true'
        - effect: NoSchedule
          key: node.cloudprovider.kubernetes.io/uninitialized
          value: 'true'
---
apiVersion: v1
kind: Secret
metadata:
  name: azure-cloud-config
  namespace: kube-system
type: Opaque
stringData:
  cloud-config: |-
    {
      "cloud": "AzurePublicCloud",
      "tenantId": "<tenant-id>",
      "subscriptionId": "<subscription-id>",
      "aadClientId": "<client-id>",
      "aadClientSecret": "<client-secret>",
      "resourceGroup": "docker-machine",
      "location": "westus",
      "subnetName": "docker-machine",
      "securityGroupName": "rancher-managed-kqmtsjgJ",
      "securityGroupResourceGroup": "docker-machine",
      "vnetName": "docker-machine-vnet",
      "vnetResourceGroup": "docker-machine",
      "primaryAvailabilitySetName": "docker-machine",
      "routeTableResourceGroup": "docker-machine",
      "cloudProviderBackoff": false,
      "useManagedIdentityExtension": false,
      "useInstanceMetadata": true,
      "loadBalancerSku": "standard",
      "excludeMasterFromStandardLB": false
    }
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/cluster-service: "true"
  name: system:azure-cloud-provider-secret-getter
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["azure-cloud-config"]
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    kubernetes.io/cluster-service: "true"
  name: system:azure-cloud-provider-secret-getter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:azure-cloud-provider-secret-getter
subjects:
  - kind: ServiceAccount
    name: azure-cloud-config
    namespace: kube-system
```

4. Click **Create** to submit the form and create the cluster.

</TabItem>

<TabItem value="RKE1">

1. Choose **External** from the **Cloud Provider** drop-down in the **Cluster Options** section. This sets `--cloud-provider=external` for Kubernetes components.

2. Install the `cloud-provider-azure` chart after the cluster finishes provisioning. Note that the cluster isn't fully provisioned, and nodes remain in an `uninitialized` state, until you deploy the cloud controller manager. You can do this [manually using the CLI](#helm-chart-installation-from-cli) or via [Helm charts in the UI](#helm-chart-installation-from-ui).

Refer to the [official Azure upstream documentation](https://cloud-provider-azure.sigs.k8s.io/install/azure-ccm/) for more details on deploying the Cloud Controller Manager.
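
Until the cloud controller manager is running, nodes started with `--cloud-provider=external` carry the `node.cloudprovider.kubernetes.io/uninitialized` taint. As a quick check (standard kubectl, no Rancher-specific assumptions), list each node with its taint keys:

```shell
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'
```

Nodes that still list the `uninitialized` taint are waiting for the cloud controller manager to initialize them.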

</TabItem>
</Tabs>

### Helm Chart Installation from CLI

Official upstream docs for [Helm chart installation](https://github.com/kubernetes-sigs/cloud-provider-azure/tree/master/helm/cloud-provider-azure) can be found on GitHub.

1. Create an `azure-cloud-config` secret with the required [cloud provider config](#cloud-provider-configuration):

   ```shell
   kubectl apply -f azure-cloud-config.yaml
   ```

2. Add the Helm repository:

   ```shell
   helm repo add azure-cloud-controller-manager https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo
   helm repo update
   ```

3. Create a `values.yaml` file with the following contents to override the default `values.yaml`:

<Tabs groupId="k8s-distro">
<TabItem value="RKE2">

```yaml
# values.yaml
infra:
  clusterName: <cluster-name>
cloudControllerManager:
  cloudConfigSecretName: azure-cloud-config
  cloudConfig: null
  clusterCIDR: null
  enableDynamicReloading: 'true'
  configureCloudRoutes: 'false'
  allocateNodeCidrs: 'false'
  caCertDir: /etc/ssl
  enabled: true
  replicas: 1
  hostNetworking: true
  nodeSelector:
    node-role.kubernetes.io/control-plane: 'true'
  tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
    - effect: NoSchedule
      key: node-role.kubernetes.io/control-plane
      value: 'true'
    - effect: NoSchedule
      key: node.cloudprovider.kubernetes.io/uninitialized
      value: 'true'
```

</TabItem>

<TabItem value="RKE">

```yaml
# values.yaml
cloudControllerManager:
  cloudConfigSecretName: azure-cloud-config
  cloudConfig: null
  clusterCIDR: null
  enableDynamicReloading: 'true'
  configureCloudRoutes: 'false'
  allocateNodeCidrs: 'false'
  caCertDir: /etc/ssl
  enabled: true
  replicas: 1
  hostNetworking: true
  nodeSelector:
    node-role.kubernetes.io/controlplane: 'true'
    node-role.kubernetes.io/control-plane: null
  tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/controlplane
      value: 'true'
    - effect: NoSchedule
      key: node.cloudprovider.kubernetes.io/uninitialized
      value: 'true'
infra:
  clusterName: <cluster-name>
```

</TabItem>
</Tabs>

4. Install the Helm chart:

   ```shell
   helm upgrade --install cloud-provider-azure azure-cloud-controller-manager/cloud-provider-azure -n kube-system --values values.yaml
   ```

   Verify that the Helm chart installed successfully:

   ```shell
   helm status cloud-provider-azure -n kube-system
   ```

5. (Optional) Verify that the cloud controller manager update succeeded:

   ```shell
   kubectl rollout status deployment -n kube-system cloud-controller-manager
   kubectl rollout status daemonset -n kube-system cloud-node-manager
   ```

6. The cloud provider is responsible for setting the ProviderID of the node. Check if all nodes are initialized with the ProviderID:
|
||||
|
||||
```shell
|
||||
kubectl describe nodes | grep "ProviderID"
|
||||
```
|
||||
|
||||
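If you script this check, a small helper can flag any node the cloud provider never initialized. This is a sketch, not part of the official chart; the helper name is illustrative, and the two-column layout assumes `kubectl`'s `custom-columns` output, where an unset field prints as `<none>`:

```shell
# Hypothetical helper: print the names of nodes whose spec.providerID is unset.
# In practice, pipe real kubectl output into it:
#   kubectl get nodes --no-headers \
#     -o custom-columns='NAME:.metadata.name,PROVIDER:.spec.providerID' \
#     | nodes_missing_providerid
nodes_missing_providerid() {
  awk '$2 == "<none>" || $2 == "" {print $1}'
}

# Demo on canned output; node-2 was never initialized:
printf 'node-1 azure:///subscriptions/abc/vm/node-1\nnode-2 <none>\n' | nodes_missing_providerid
```

An empty result means every node received a ProviderID.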
### Helm Chart Installation from UI

1. Click **☰**, then select the name of the cluster from the left navigation.

2. Select **Apps** > **Repositories**.

3. Click the **Create** button.

4. Enter `https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo` in the **Index URL** field.

5. Select **Apps** > **Charts** from the left navigation and install the **cloud-provider-azure** chart.

6. Select the `kube-system` namespace and enable **Customize Helm options before install**.

7. Replace `cloudConfig: /etc/kubernetes/azure.json` with the following values to read from the Cloud Config Secret and enable dynamic reloading:

   ```yaml
   cloudConfigSecretName: azure-cloud-config
   enableDynamicReloading: 'true'
   ```

8. Update the following fields as required:

   ```yaml
   allocateNodeCidrs: 'false'
   configureCloudRoutes: 'false'
   clusterCIDR: null
   ```

<Tabs groupId="k8s-distro">
<TabItem value="RKE2">

9. Rancher-provisioned RKE2 nodes have the selector `node-role.kubernetes.io/control-plane` set to `true`. Update the nodeSelector:

   ```yaml
   nodeSelector:
     node-role.kubernetes.io/control-plane: 'true'
   ```

</TabItem>

<TabItem value="RKE">

10. Rancher-provisioned RKE nodes are tainted with `node-role.kubernetes.io/controlplane`. Update the tolerations and the nodeSelector:

    ```yaml
    tolerations:
      - effect: NoSchedule
        key: node.cloudprovider.kubernetes.io/uninitialized
        value: 'true'
      - effect: NoSchedule
        key: node-role.kubernetes.io/controlplane
        value: 'true'
    ```

    ```yaml
    nodeSelector:
      node-role.kubernetes.io/controlplane: 'true'
    ```

</TabItem>
</Tabs>

11. Install the chart and confirm that the cloud controller manager and cloud node manager deployed successfully:

    ```shell
    kubectl rollout status deployment -n kube-system cloud-controller-manager
    kubectl rollout status daemonset -n kube-system cloud-node-manager
    ```

12. The cloud provider is responsible for setting the ProviderID of the node. Check that all nodes are initialized with the ProviderID:

    ```shell
    kubectl describe nodes | grep "ProviderID"
    ```

### Installing CSI Drivers

Install the [Azure Disk CSI driver](https://github.com/kubernetes-sigs/azuredisk-csi-driver) or the [Azure File CSI driver](https://github.com/kubernetes-sigs/azurefile-csi-driver) to access [Azure Disk](https://azure.microsoft.com/en-us/services/storage/disks/) or [Azure File](https://azure.microsoft.com/en-us/services/storage/files/) volumes, respectively.

The steps to install the Azure Disk CSI driver are shown below. You can install the Azure File CSI driver in a similar manner by following the [Helm installation documentation](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/charts/README.md).

:::note Important:

Clusters must be provisioned using `Managed Disk` to use Azure Disk. You can configure this when creating **RKE1 Node Templates** or **RKE2 Machine Pools**.

:::

Official upstream docs for [Helm chart installation](https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/charts/README.md) can be found on GitHub.

1. Add and update the Helm repository:

   ```shell
   helm repo add azuredisk-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/charts
   helm repo update azuredisk-csi-driver
   ```

2. Install the chart as shown below, updating the `--version` argument as needed. Refer to the full list of latest chart configurations in the [upstream docs](https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/charts/README.md#latest-chart-configuration).

   ```shell
   helm install azuredisk-csi-driver azuredisk-csi-driver/azuredisk-csi-driver --namespace kube-system --version v1.30.1 --set controller.cloudConfigSecretName=azure-cloud-config --set controller.cloudConfigSecretNamespace=kube-system --set controller.runOnControlPlane=true
   ```

3. (Optional) Verify that the azuredisk-csi-driver installation succeeded:

   ```shell
   kubectl --namespace=kube-system get pods --selector="app.kubernetes.io/name=azuredisk-csi-driver" --watch
   ```

4. Provision an example Storage Class:

   ```shell
   cat <<EOF | kubectl create -f -
   kind: StorageClass
   apiVersion: storage.k8s.io/v1
   metadata:
     name: standard
   provisioner: kubernetes.io/azure-disk
   parameters:
     storageaccounttype: Standard_LRS
     kind: Managed
   EOF
   ```

   Verify that the storage class has been provisioned:

   ```shell
   kubectl get storageclasses
   ```

5. Create a PersistentVolumeClaim:

   ```shell
   cat <<EOF | kubectl create -f -
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: azure-disk-pvc
   spec:
     storageClassName: standard
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 5Gi
   EOF
   ```

   Verify that the PersistentVolumeClaim and PersistentVolume have been created:

   ```shell
   kubectl get persistentvolumeclaim
   kubectl get persistentvolume
   ```

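When scripting this, you may want to wait until the claim is actually `Bound` before proceeding. A minimal sketch, assuming the default `kubectl get pvc` column layout (name first, status second); the helper name is illustrative:

```shell
# Hypothetical helper: exit non-zero if any claim in `kubectl get pvc --no-headers`
# output is not yet Bound. In practice, pipe real kubectl output into it:
#   kubectl get pvc --no-headers | pvcs_all_bound
pvcs_all_bound() {
  awk '$2 != "Bound" {print "not bound: " $1; bad=1} END {exit bad}'
}

# Demo on canned output:
printf 'azure-disk-pvc Bound pvc-0001 5Gi RWO standard 1m\n' | pvcs_all_bound && echo "all bound"
```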
6. Attach the new Azure Disk:

   You can now mount the Kubernetes PersistentVolume into a Kubernetes Pod. The disk can be consumed by any Kubernetes object type, including a Deployment, DaemonSet, or StatefulSet. However, the following example simply mounts the PersistentVolume into a standalone Pod.

   ```shell
   cat <<EOF | kubectl create -f -
   kind: Pod
   apiVersion: v1
   metadata:
     name: mypod-dynamic-azuredisk
   spec:
     containers:
       - name: mypod
         image: nginx
         ports:
           - containerPort: 80
             name: "http-server"
         volumeMounts:
           - mountPath: "/usr/share/nginx/html"
             name: storage
     volumes:
       - name: storage
         persistentVolumeClaim:
           claimName: azure-disk-pvc
   EOF
   ```

@@ -21,65 +21,48 @@ To interact with Azure APIs, an AKS cluster requires an Azure Active Directory (

Before creating the service principal, you need to obtain the following information from the [Microsoft Azure Portal](https://portal.azure.com):

- Subscription ID
- Client ID (also known as app ID)
- Client secret

The sections below describe how to set up these prerequisites using either the Azure command line tool or the Azure portal.

### Setting Up the Service Principal with the Azure Command Line Tool

You must assign roles to the service principal so that it has communication privileges with the AKS API. It also needs access to create and list virtual networks.

In the following example, the command creates the service principal and gives it the Contributor role. The Contributor role can manage anything on AKS but cannot give access to others. Note that you must provide `scopes` with a full path to at least one Azure resource:

```
az ad sp create-for-rbac --role Contributor --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>
```

The result should show information about the new service principal:

```
{
  "appId": "xxxx--xxx",
  "displayName": "<service-principal-name>",
  "name": "http://<service-principal-name>",
  "password": "<secret>",
  "tenant": "<tenant-name>"
}
```

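If you are scripting the setup, you can capture the fields from that output for later steps (the `appId` is the Client ID Rancher asks for, and `password` is the Client secret). A minimal sketch; the variable names and canned JSON are illustrative, and `jq -r '.appId'` is a more robust alternative when `jq` is available:

```shell
# Hypothetical sketch: in practice, SP_JSON would be captured from the real
# command, e.g. SP_JSON=$(az ad sp create-for-rbac --role Contributor --scopes ...)
SP_JSON='{"appId": "xxxx--xxx", "displayName": "my-sp", "password": "s3cret", "tenant": "my-tenant"}'

# Pull out the client ID and secret with sed (assumes az's pretty-printed JSON layout).
CLIENT_ID=$(printf '%s' "$SP_JSON" | sed -n 's/.*"appId": *"\([^"]*\)".*/\1/p')
CLIENT_SECRET=$(printf '%s' "$SP_JSON" | sed -n 's/.*"password": *"\([^"]*\)".*/\1/p')
echo "client id: $CLIENT_ID"
```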
The following command creates a [Resource Group](https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-cli) to contain your Azure resources:

```
az group create --location <azure-location-name> --resource-group <resource-group-name>
```

### Setting Up the Service Principal from the Azure Portal

Follow these instructions to set up a service principal and give it role-based access from the Azure Portal.

1. Go to the Microsoft Azure Portal [home page](https://portal.azure.com).
1. Click **Azure Active Directory**.
1. Click **App registrations**.
1. Click **New registration**.
1. Enter a name for your service principal.
1. Optional: Choose which accounts can use the service principal.
1. Click **Register**.
1. You should now see the name of your service principal under **Azure Active Directory > App registrations**.

@@ -101,7 +84,7 @@ To give role-based access to your service principal,

**Result:** Your service principal now has access to AKS.

## Create the AKS Cloud Credentials

1. In the Rancher UI, click **☰ > Cluster Management**.
1. Click **Cloud Credentials**.

@@ -110,7 +93,7 @@ To give role-based access to your service principal,

1. Fill out the form. For help with filling out the form, see the [configuration reference.](../../../../reference-guides/cluster-configuration/rancher-server-configuration/aks-cluster-configuration.md#cloud-credentials)
1. Click **Create**.

## Create the AKS Cluster

Use Rancher to set up and configure your Kubernetes cluster.

@@ -124,7 +107,8 @@ Use Rancher to set up and configure your Kubernetes cluster.

You can access your cluster after its state is updated to **Active**.

## Configure Role-based Access Control

When provisioning an AKS cluster in the Rancher UI, RBAC is not configurable because it is required to be enabled.

RBAC is required for AKS clusters that are registered or imported into Rancher.

@@ -135,8 +119,8 @@ Assign the Rancher AKSv2 role to the service principal with the Azure Command Li

```
az role assignment create \
  --assignee <client-id> \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>" \
  --role "Rancher AKSv2"
```

@@ -95,10 +95,15 @@ This [tutorial](https://aws.amazon.com/blogs/opensource/managing-eks-clusters-ra

These are the minimum set of permissions necessary to access the full functionality of Rancher's EKS driver. You'll need additional permissions for Rancher to provision the `Service Role` and `VPC` resources. If you create these resources **before** you create the cluster, they'll be available when you configure the cluster.

:::note
In EKS v1.23 and above, you must use the out-of-tree drivers for EBS-backed volumes. You need [specific permissions](#ebs-csi-driver-addon-permissions) to enable this add-on.
:::

Resource | Description
---------|------------
Service Role | Provides permissions that allow Kubernetes to manage resources on your behalf. Rancher can create the service role with the following [Service Role Permissions](#service-role-permissions).
VPC | Provides isolated network resources used by EKS and worker nodes. Rancher can create the VPC resources with the following [VPC Permissions](#vpc-permissions).
EBS CSI Driver add-on | Provides permissions that allow Kubernetes to interact with EBS and configure the cluster to enable the add-on (required for EKS v1.23 and above). Rancher can install the add-on with the following [EBS CSI Driver addon Permissions](#ebs-csi-driver-addon-permissions).

Resource targeting uses `*` because the ARN of many of the resources created cannot be known before creating the EKS cluster in Rancher.

@@ -129,6 +134,7 @@ Resource targeting uses `*` as the ARN of many of the resources created cannot b

        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeAccountAttributes",
        "ec2:DeleteTags",
        "ec2:DeleteLaunchTemplateVersions",
        "ec2:DeleteLaunchTemplate",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteKeyPair",

@@ -314,6 +320,43 @@ These are permissions that are needed by Rancher to create a Virtual Private Clo

}
```

### EBS CSI Driver addon Permissions

The following are the required permissions for installing the Amazon EBS CSI Driver add-on.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "eks:DescribeAddonConfiguration",
        "eks:UpdateAddon",
        "eks:ListAddons",
        "iam:CreateRole",
        "iam:AttachRolePolicy",
        "eks:DescribeAddon",
        "iam:CreateOpenIDConnectProvider",
        "iam:PassRole",
        "eks:DescribeIdentityProviderConfig",
        "eks:DeleteAddon",
        "iam:ListOpenIDConnectProviders",
        "iam:ListAttachedRolePolicies",
        "eks:CreateAddon",
        "eks:DescribeCluster",
        "eks:DescribeAddonVersions",
        "sts:AssumeRoleWithWebIdentity",
        "eks:AssociateIdentityProviderConfig",
        "eks:ListIdentityProviderConfigs"
      ],
      "Resource": "*"
    }
  ]
}
```

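Before attaching a policy like this one, it can be useful to review at a glance which AWS service namespaces it grants actions in. A hedged sketch, not an official tool; the helper name is illustrative and it assumes the actions appear as quoted `service:Action` strings, as in the JSON above:

```shell
# Hypothetical helper: print the unique AWS service prefixes a policy's actions
# touch (e.g. eks, iam, sts). Pipe in a saved copy of the policy JSON.
policy_namespaces() {
  grep -o '"[a-z]*:[A-Za-z]*"' | cut -d: -f1 | tr -d '"' | sort -u
}

# Demo on a few actions from the policy above:
printf '%s' '"iam:GetRole","eks:CreateAddon","sts:AssumeRoleWithWebIdentity"' | policy_namespaces
```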
## Syncing

The EKS provisioner can synchronize the state of an EKS cluster between Rancher and the provider. For an in-depth technical explanation of how this works, see [Syncing.](../../../../reference-guides/cluster-configuration/rancher-server-configuration/sync-clusters.md)

@@ -46,7 +46,7 @@ If you need to create a private registry, refer to the documentation pages for y

:::

1. Select a namespace for the registry.
1. Select the website that hosts your private registry. Then enter credentials that authenticate with the registry. For example, if you use Docker Hub, provide your Docker Hub username and password.
1. Click **Save**.

**Result:**

@@ -89,7 +89,7 @@ Before v2.6, secrets were required to be in a project scope. Projects are no lon

:::

1. Select a namespace for the registry.
1. Select the website that hosts your private registry. Then enter credentials that authenticate with the registry. For example, if you use Docker Hub, provide your Docker Hub username and password.
1. Click **Save**.

**Result:**

@@ -0,0 +1,65 @@
---
title: Graceful Shutdown for VMware vSphere Virtual Machines
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/vsphere/shutdown-vm"/>
</head>

In Rancher v2.8.3 and later, you can configure the graceful shutdown of virtual machines (VMs) for VMware vSphere node driver clusters. Graceful shutdown introduces a delay before the VM is forcibly deleted, which allows time for terminating any running processes and open connections.

In RKE2/K3s, you can set up graceful shutdown when you create the cluster, or edit the cluster configuration to add it afterward.

In RKE, you can edit node templates to achieve similar results.

:::note

Since Rancher can't detect the platform of an imported cluster, you cannot enable graceful shutdown on VMware vSphere clusters you have imported.

:::

## Enable Graceful Shutdown During VMware vSphere Cluster Creation

<Tabs>
<TabItem value="RKE2/K3s">

In RKE2/K3s, you can configure new VMware vSphere clusters with graceful shutdown for VMs:

1. Click **☰ > Cluster Management**.
1. Click **Create** and select **VMware vSphere** to provision a new cluster.
1. Under **Machine Pools > Scheduling**, in the **Graceful Shutdown Timeout** field, enter an integer value greater than 0. The value you enter is the amount of time in seconds Rancher waits before deleting VMs on the cluster. If the value is set to `0`, graceful shutdown is disabled.

</TabItem>
<TabItem value="RKE">

In RKE, you can't directly configure a new cluster with graceful shutdown. However, you can configure node templates which automatically create node pools with graceful shutdown enabled. The node template can then be used to provision new VMware vSphere clusters that have a graceful shutdown delay.

1. Click **☰ > Cluster Management**.
1. From the left navigation, select **RKE1 Configuration > Node Templates**.
1. Click **Add Template** and select **vSphere** to create a node template.
1. Under **2. Scheduling**, in the **Graceful Shutdown Timeout** field, enter an integer value greater than 0. The value you enter is the amount of time in seconds Rancher waits before deleting VMs on the cluster. If the value is set to `0`, graceful shutdown is disabled.

When you [use the newly-created node template to create node pools](../use-new-nodes-in-an-infra-provider.md), the VMs shut down gracefully according to the **Graceful Shutdown Timeout** value you have set.

</TabItem>
</Tabs>

## Enable Graceful Shutdown in Existing RKE2/K3s Clusters

In RKE2/K3s, you can edit the configuration of an existing VMware vSphere cluster to enable graceful shutdown, which adds a delay before deleting VMs.

1. Click **☰ > Cluster Management**.
1. On the **Clusters** page, find the VMware vSphere hosted cluster you want to edit. Click **⋮** at the end of the row associated with the cluster. Select **Edit Config**.
1. Under **Machine Pools > Scheduling**, in the **Graceful Shutdown Timeout** field, enter an integer value greater than 0. The value you enter is the amount of time in seconds Rancher waits before deleting VMs on the cluster. If the value is set to `0`, graceful shutdown is disabled.

## Enable Graceful Shutdown in Existing RKE Clusters

In RKE, you can't directly edit an existing cluster's configuration to add graceful shutdown to existing VMware vSphere clusters. However, you can edit the configuration of existing node templates. As noted in [Updating a Node Template](../../../../../reference-guides/user-settings/manage-node-templates.md#updating-a-node-template), all node pools using the node template automatically use the updated information when new nodes are added to the cluster.

To edit an existing node template to enable graceful shutdown:

1. Click **☰ > Cluster Management**.
1. From the left navigation, select **RKE1 Configuration > Node Templates**.
1. Find the VMware vSphere node template you want to edit. Click **⋮** at the end of the row associated with the template. Select **Edit**.
1. Under **2. Scheduling**, in the **Graceful Shutdown Timeout** field, enter an integer value greater than 0. The value you enter is the amount of time in seconds Rancher waits before deleting VMs on the cluster. If the value is set to `0`, graceful shutdown is disabled.
1. Click **Save**.

@@ -15,9 +15,9 @@ Rancher can provision nodes in vSphere and install Kubernetes on them. When crea

A vSphere cluster may consist of multiple groups of VMs with distinct properties, such as the amount of memory or the number of vCPUs. This grouping allows for fine-grained control over the sizing of nodes for each Kubernetes role.

## VMware vSphere Enhancements

The vSphere node templates allow you to bring cloud operations on-premises with the following enhancements:

### Self-healing Node Pools

@@ -39,12 +39,6 @@ For the fields to be populated, your setup needs to fulfill the [prerequisites.]

You can provision VMs with any operating system that supports `cloud-init`. Only YAML format is supported for the [cloud config.](https://cloudinit.readthedocs.io/en/latest/topics/examples.html)

## Creating a VMware vSphere Cluster

In [this section,](provision-kubernetes-clusters-in-vsphere.md) you'll learn how to use Rancher to install an [RKE](https://rancher.com/docs/rke/latest/en/) Kubernetes cluster in vSphere.

@@ -19,11 +19,11 @@ The kubeconfig file and its contents are specific to each cluster. It can be dow

1. Find the cluster whose kubeconfig you want to download, and select **⁝** at the end of the row.
1. Select **Download KubeConfig** from the submenu.

You need a separate kubeconfig file for each cluster that you have access to in Rancher.

After you download the kubeconfig file, you are able to use the kubeconfig file and its Kubernetes [contexts](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration) to access your downstream cluster.

If admins have [kubeconfig token generation turned off](../../../../api/api-tokens.md#disable-tokens-in-generated-kubeconfigs), the kubeconfig file requires the [Rancher CLI](../../../../reference-guides/cli-with-rancher/rancher-cli.md) to be present in your PATH.

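A downloaded kubeconfig can contain more than one context (for example, one per authorized cluster endpoint). If you want to inspect the contexts without `kubectl`, a small text-processing sketch works on the standard kubeconfig layout; the file name `my-cluster.yaml` and the helper name are illustrative, and `kubectl config get-contexts --kubeconfig my-cluster.yaml` is the usual way to do this:

```shell
# Hypothetical helper: print the context names in a kubeconfig file,
# assuming the standard top-level `contexts:` list layout.
list_contexts() {
  awk '/^contexts:/{f=1; next} /^[^ -]/{f=0} f && $1=="name:"{print $2}' "$1"
}

# Demo on a minimal, made-up kubeconfig:
cat > my-cluster.yaml <<'EOF'
apiVersion: v1
kind: Config
contexts:
- context:
    cluster: my-cluster
    user: user-1
  name: my-cluster
- context:
    cluster: my-cluster-fqdn
    user: user-1
  name: my-cluster-fqdn
current-context: my-cluster
EOF
list_contexts my-cluster.yaml
```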
### Two Authentication Methods for RKE Clusters

@@ -36,7 +36,7 @@ For RKE clusters, the kubeconfig file allows you to be authenticated in two ways

This second method, the capability to connect directly to the cluster's Kubernetes API server, is important because it lets you access your downstream cluster if you can't connect to Rancher.

To use the authorized cluster endpoint, you need to configure kubectl to use the extra kubectl context in the kubeconfig file that Rancher generates for you when the RKE cluster is created. This file can be downloaded from the cluster view in the Rancher UI, and the instructions for configuring kubectl are on [this page.](use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster)

These methods of communicating with downstream Kubernetes clusters are also explained in the [architecture page](../../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md) in the larger context of explaining how Rancher works and how Rancher communicates with downstream clusters.

@@ -122,7 +122,7 @@ Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).

## Cleaning up Nodes

<Tabs groupId="k8s-distro" queryString>
<TabItem value="RKE1">

Before you run the following commands, first remove the node through the Rancher UI.

@@ -19,7 +19,7 @@ To provision new storage for your workloads, follow these steps:

1. [Add a storage class and configure it to use your storage.](#1-add-a-storage-class-and-configure-it-to-use-your-storage)
2. [Use the Storage Class for Pods Deployed with a StatefulSet.](#2-use-the-storage-class-for-pods-deployed-with-a-statefulset)

## Prerequisites

- To set up persistent storage, the `Manage Volumes` [role](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference) is required.
- If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider.

@@ -42,7 +42,7 @@ hostPath | `host-path`

To use a storage provisioner that is not on the above list, you will need to use a [feature flag to enable unsupported storage drivers.](../../../../advanced-user-guides/enable-experimental-features/unsupported-storage-drivers.md)

## 1. Add a storage class and configure it to use your storage

These steps describe how to set up a storage class at the cluster level.

@@ -59,7 +59,7 @@ These steps describe how to set up a storage class at the cluster level.

For full information about the storage class parameters, refer to the official [Kubernetes documentation.](https://kubernetes.io/docs/concepts/storage/storage-classes/#parameters)

## 2. Use the Storage Class for Pods Deployed with a StatefulSet

StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the StorageClass that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound to dynamically provisioned storage using the StorageClass defined in its PersistentVolumeClaim.

@@ -70,7 +70,7 @@ StatefulSets manage the deployment and scaling of Pods while maintaining a stick

1. Click **StatefulSet**.
1. In the **Volume Claim Templates** tab, click **Add Claim Template**.
1. Enter a name for the persistent volume.
1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Launch**.

@@ -84,8 +84,8 @@ To attach the PVC to an existing workload,

1. Go to the workload that will use storage provisioned with the StorageClass that you created and click **⋮ > Edit Config**.
1. In the **Volume Claim Templates** section, click **Add Claim Template**.
1. Enter a persistent volume name.
1. In the **StorageClass** field, select the StorageClass that will dynamically provision storage for pods managed by this StatefulSet.
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
1. Click **Save**.

**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC. If not, Rancher will provision new persistent storage.

@@ -20,12 +20,12 @@ To set up storage, follow these steps:
|
||||
2. [Add a PersistentVolume that refers to the persistent storage.](#2-add-a-persistentvolume-that-refers-to-the-persistent-storage)
|
||||
3. [Use the Storage Class for Pods Deployed with a StatefulSet.](#3-use-the-storage-class-for-pods-deployed-with-a-statefulset)
|
||||
|
||||
### Prerequisites
|
||||
## Prerequisites
|
||||
|
||||
- To create a persistent volume as a Kubernetes resource, you must have the `Manage Volumes` [role.](../../../authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-role-reference)
|
||||
- If you are provisioning storage for a cluster hosted in the cloud, the storage and cluster hosts must have the same cloud provider.
|
||||
|
||||
### 1. Set up persistent storage
|
||||
## 1. Set up persistent storage
|
||||
|
||||
Creating a persistent volume in Rancher will not create a storage volume. It only creates a Kubernetes resource that maps to an existing volume. Therefore, before you can create a persistent volume as a Kubernetes resource, you must have storage provisioned.
|
||||
|
||||
@@ -33,7 +33,7 @@ The steps to set up a persistent storage device will differ based on your infras
|
||||
|
||||
If you have a pool of block storage, and you don't want to use a cloud provider, Longhorn could help you provide persistent storage to your Kubernetes cluster. For more information, see [Cloud Native Storage with Longhorn](../../../../../integrations-in-rancher/longhorn/longhorn.md).
|
||||
|
||||
### 2. Add a PersistentVolume that refers to the persistent storage
|
||||
## 2. Add a PersistentVolume that refers to the persistent storage
|
||||
|
||||
These steps describe how to set up a PersistentVolume at the cluster level in Kubernetes.
|
||||
|
||||
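As a rough sketch of the resource these steps create, a PersistentVolume maps a Kubernetes object to existing storage. The volume source below uses NFS purely as an illustration; your plugin and values will differ:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv                 # hypothetical volume name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  # Plugin-specific volume source; NFS shown here as an assumed example.
  nfs:
    server: nfs.example.com
    path: /exports/data
```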
@@ -52,7 +52,7 @@ These steps describe how to set up a PersistentVolume at the cluster level in Ku
|
||||
**Result:** Your new persistent volume is created.
|
||||
|
||||
|
||||
### 3. Use the Storage Class for Pods Deployed with a StatefulSet
|
||||
## 3. Use the Storage Class for Pods Deployed with a StatefulSet
|
||||
|
||||
StatefulSets manage the deployment and scaling of Pods while maintaining a sticky identity for each Pod. In this StatefulSet, we will configure a VolumeClaimTemplate. Each Pod managed by the StatefulSet will be deployed with a PersistentVolumeClaim based on this VolumeClaimTemplate. The PersistentVolumeClaim will refer to the PersistentVolume that we created. Therefore, when each Pod managed by the StatefulSet is deployed, it will be bound to a PersistentVolume as defined in its PersistentVolumeClaim.
|
||||
|
||||
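The StatefulSet configuration described above can be sketched as the following manifest. All names, the image, the mount path, and the StorageClass are illustrative placeholders:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-statefulset          # hypothetical name
spec:
  serviceName: my-statefulset
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx                       # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/data       # the "Mount Point"
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: my-storage-class   # the StorageClass selected above
        resources:
          requests:
            storage: 5Gi
```

Each Pod the StatefulSet creates gets its own PersistentVolumeClaim stamped from this template, which is what binds it to a PersistentVolume.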
@@ -86,4 +86,4 @@ The following steps describe how to assign persistent storage to an existing wor
|
||||
1. In the **Mount Point** field, enter the path that the workload will use to access the volume.
|
||||
1. Click **Launch**.
|
||||
|
||||
**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC.
|
||||
**Result:** The workload will make a request for the specified amount of disk space to the Kubernetes master. If a PV with the specified resources is available when the workload is deployed, the Kubernetes master will bind the PV to the PVC.
|
||||
|
||||
@@ -304,7 +304,7 @@ cloud-provider|-|Cloud provider type|
|
||||
|max-node-provision-time|"15m"|Maximum time CA waits for node to be provisioned|
|
||||
|nodes|-|Sets the min and max size and other configuration data for a node group in a format accepted by the cloud provider. Can be used multiple times. Format: `<min>:<max>:<other...>`|
|
||||
|node-group-auto-discovery|-|One or more definition(s) of node group auto-discovery. A definition is expressed `<name of discoverer>:[<key>[=<value>]]`|
|
||||
|estimator|-|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]|
|
||||
|estimator|"binpacking"|Type of resource estimator to be used in scale up. Available values: ["binpacking"]|
|
||||
|expander|"random"|Type of node group expander to be used in scale up. Available values: `["random","most-pods","least-waste","price","priority"]`|
|
||||
|ignore-daemonsets-utilization|false|Should CA ignore DaemonSet pods when calculating resource utilization for scaling down|
|
||||
|ignore-mirror-pods-utilization|false|Should CA ignore Mirror pods when calculating resource utilization for scaling down|
|
||||
|
||||
@@ -76,7 +76,7 @@ To manage individual nodes, browse to the cluster that you want to manage and th
|
||||
|
||||
## Viewing a Node in the Rancher API
|
||||
|
||||
Select this option to view the node's [API endpoints](../../../reference-guides/about-the-api/about-the-api.md).
|
||||
Select this option to view the node's [API endpoints](../../../api/quickstart.md).
|
||||
|
||||
## Deleting a Node
|
||||
|
||||
@@ -100,7 +100,7 @@ For [nodes hosted by an infrastructure provider](../launch-kubernetes-with-ranch
|
||||
|
||||
1. In the upper left corner, click **☰ > Cluster Management**.
|
||||
1. On the **Clusters** page, go to the cluster where you want to SSH into a node and click the name of the cluster.
|
||||
1. On the **Machine Pools** tab, find the node that you want to remote into and click **⋮ > Download SSH Key**. A ZIP file containing files used for SSH will be downloaded.
|
||||
1. On the **Machine Pools** tab, find the node that you want to remote into and click **⋮ > Download SSH Key**. A ZIP file containing files used for SSH is then downloaded.
|
||||
1. Extract the ZIP file to any location.
|
||||
1. Open Terminal. Change your location to the extracted ZIP file.
|
||||
1. Enter the following command:
|
||||
@@ -111,13 +111,13 @@ For [nodes hosted by an infrastructure provider](../launch-kubernetes-with-ranch
|
||||
|
||||
## Cordoning a Node
|
||||
|
||||
_Cordoning_ a node marks it as unschedulable. This feature is useful for performing short tasks on the node during small maintenance windows, like reboots, upgrades, or decommissions. When you're done, power back on and make the node schedulable again by uncordoning it.
|
||||
_Cordoning_ a node marks it as unschedulable. This feature is useful for performing short tasks on the node during small maintenance windows, like reboots, upgrades or decommissions. When you're done, power back on and make the node schedulable again by uncordoning it.
|
||||
|
||||
## Draining a Node
|
||||
|
||||
_Draining_ is the process of first cordoning the node, and then evicting all its pods. This feature is useful for performing node maintenance (like kernel upgrades or hardware maintenance). It prevents new pods from deploying to the node while redistributing existing pods so that users don't experience service interruption.
|
||||
|
||||
- For pods with a replica set, the pod is replaced by a new pod that will be scheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
|
||||
- For pods with a replica set, the pod is replaced by a new pod that is scheduled to a new node. Additionally, if the pod is part of a service, then clients are automatically redirected to the new pod.
|
||||
|
||||
- For pods with no replica set, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
|
||||
|
||||
@@ -127,20 +127,21 @@ However, you can override the conditions draining when you initiate the drain. Y
|
||||
|
||||
### Aggressive and Safe Draining Options
|
||||
|
||||
When you configure the upgrade strategy for the cluster, you will be able to enable node draining. If node draining is enabled, you will be able to configure how pods are deleted and rescheduled.
|
||||
When you configure the upgrade strategy for the cluster, you can enable node draining. If node draining is enabled, you are able to configure how pods are deleted and rescheduled.
|
||||
|
||||
- **Aggressive Mode**
|
||||
|
||||
In this mode, pods won't get rescheduled to a new node, even if they do not have a controller. Kubernetes expects you to have your own logic that handles the deletion of these pods.
|
||||
|
||||
Kubernetes also expects the implementation to decide what to do with pods using emptyDir. If a pod uses emptyDir to store local data, you might not be able to safely delete it, since the data in the emptyDir will be deleted once the pod is removed from the node. Choosing aggressive mode will delete these pods.
|
||||
Kubernetes also expects the implementation to decide what to do with pods using emptyDir. If a pod uses emptyDir to store local data, you might not be able to safely delete it, since the data in the emptyDir is deleted once the pod is removed from the node. Choosing aggressive mode deletes these pods.
|
||||
|
||||
- **Safe Mode**
|
||||
|
||||
If a node has standalone pods or ephemeral data it will be cordoned but not drained.
|
||||
If a node has stand-alone pods or ephemeral data it is cordoned but not drained.
|
||||
|
||||
### Grace Period
|
||||
|
||||
The timeout given to each pod for cleaning things up, so they will have chance to exit gracefully. For example, when pods might need to finish any outstanding requests, roll back transactions or save state to some external storage. If negative, the default value specified in the pod will be used.
|
||||
The timeout given to each pod for cleaning things up so they have a chance to exit gracefully. For example, when pods might need to finish any outstanding requests, roll back transactions or save state to an external storage. If negative, the default value specified in the pod is used.
|
||||
|
||||
### Timeout
|
||||
|
||||
@@ -156,17 +157,17 @@ The [timeout setting](https://github.com/kubernetes/kubernetes/pull/64378) was n
|
||||
|
||||
If there's any error related to user input, the node enters a `cordoned` state because the drain failed. You can either correct the input and attempt to drain the node again, or you can abort by uncordoning the node.
|
||||
|
||||
If the drain continues without error, the node enters a `draining` state. You'll have the option to stop the drain when the node is in this state, which will stop the drain process and change the node's state to `cordoned`.
|
||||
If the drain continues without error, the node enters a `draining` state. You'll have the option to stop the drain when the node is in this state, which then stops the drain process and changes the node's state to `cordoned`.
|
||||
|
||||
Once drain successfully completes, the node will be in a state of `drained`. You can then power off or delete the node.
|
||||
Once drain successfully completes, the node is in a state of `drained`. You can then power off or delete the node.
|
||||
|
||||
**Want to know more about cordon and drain?** See the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/).
|
||||
|
||||
## Labeling a Node to be Ignored by Rancher
|
||||
|
||||
Some solutions, such as F5's BIG-IP integration, may require creating a node that is never registered to a cluster.
|
||||
Certain solutions, such as F5's BIG-IP integration, may require creating a node that is never registered to a cluster.
|
||||
|
||||
Since the node will never finish registering, it will always be shown as unhealthy in the Rancher UI.
|
||||
Since the node never finishes registering, it is always shown as unhealthy in the Rancher UI.
|
||||
|
||||
In that case, you may want to label the node to be ignored by Rancher so that Rancher only shows nodes as unhealthy when they are actually failing.
|
||||
|
||||
@@ -181,16 +182,16 @@ There is an [open issue](https://github.com/rancher/rancher/issues/24172) in whi
|
||||
|
||||
### Labeling Nodes to be Ignored with kubectl
|
||||
|
||||
To add a node that will be ignored by Rancher, use `kubectl` to create a node that has the following label:
|
||||
To add a node that is ignored by Rancher, use `kubectl` to create a node that has the following label:
|
||||
|
||||
```
|
||||
cattle.rancher.io/node-status: ignore
|
||||
```
|
||||
|
||||
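Applied with `kubectl create -f`, a minimal Node manifest carrying this label might look like the following (the node name is a hypothetical placeholder):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: ignored-node            # hypothetical node name
  labels:
    cattle.rancher.io/node-status: ignore
```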
**Result:** If you add the node to a cluster, Rancher will not attempt to sync with this node. The node can still be part of the cluster and can be listed with `kubectl`.
|
||||
**Result**: If you add the node to a cluster, Rancher skips syncing with this node. The node can still be part of the cluster and can be listed with `kubectl`.
|
||||
|
||||
If the label is added before the node is added to the cluster, the node will not be shown in the Rancher UI.
|
||||
If the label is added before the node is added to the cluster, the node is not shown in the Rancher UI.
|
||||
|
||||
If the label is added after the node is added to a Rancher cluster, the node will not be removed from the UI.
|
||||
If the label is added after the node is added to a Rancher cluster, the node is not removed from the UI.
|
||||
|
||||
If you delete the node from the Rancher server using the Rancher UI or API, the node will not be removed from the cluster if the `nodeName` is listed in the Rancher settings in the Rancher API under `v3/settings/ignore-node-name`.
|
||||
If you delete the node from the Rancher server using the Rancher UI or API, the node is not removed from the cluster if the `nodeName` is listed in the Rancher settings in the Rancher API under `v3/settings/ignore-node-name`.
|
||||
|
||||
@@ -173,12 +173,12 @@ To add members:
|
||||
|
||||
### 4. Optional: Add Resource Quotas
|
||||
|
||||
Resource quotas limit the resources that a project (and its namespaces) can consume. For more information, see [Resource Quotas](projects-and-namespaces.md).
|
||||
Resource quotas limit the resources that a project (and its namespaces) can consume. For more information, see [Resource Quotas](../../advanced-user-guides/manage-projects/manage-project-resource-quotas/manage-project-resource-quotas.md).
|
||||
|
||||
To add a resource quota,
|
||||
|
||||
1. In the **Resource Quotas** tab, click **Add Resource**.
|
||||
1. Select a **Resource Type**. For more information, see [Resource Quotas.](projects-and-namespaces.md).
|
||||
1. Select a **Resource Type**. For more information, see [Resource Quotas](../../advanced-user-guides/manage-projects/manage-project-resource-quotas/manage-project-resource-quotas.md).
|
||||
1. Enter values for the **Project Limit** and the **Namespace Default Limit**.
|
||||
1. **Optional:** Specify a **Container Default Resource Limit**, which is applied to every container started in the project. This parameter is recommended if you have CPU or memory limits set by the resource quota. It can be overridden at the individual namespace or container level. For more information, see [Container Default Resource Limit](../../advanced-user-guides/manage-projects/manage-project-resource-quotas/manage-project-resource-quotas.md).
|
||||
1. Click **Create**.
|
||||
|
||||
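Under the hood, Rancher enforces these settings with standard Kubernetes objects in each namespace. As a rough sketch (all names and values are illustrative), a namespace default limit and a container default resource limit correspond to a `ResourceQuota` and a `LimitRange`:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: default-quota
  namespace: my-namespace       # hypothetical namespace
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: my-namespace
spec:
  limits:
    - type: Container
      default:                  # applied to containers with no explicit limits
        cpu: 500m
        memory: 256Mi
```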
@@ -25,11 +25,11 @@ To manage permissions in a vanilla Kubernetes cluster, cluster admins configure
|
||||
|
||||
:::note
|
||||
|
||||
If you create a namespace with `kubectl`, it may be unusable because `kubectl` doesn't require your new namespace to be scoped within a project that you have access to. If your permissions are restricted to the project level, it is better to [create a namespace through Rancher](manage-namespaces.md) to ensure that you will have permission to access the namespace.
|
||||
If you create a namespace with `kubectl`, it may be unusable because `kubectl` doesn't require your new namespace to be scoped within a project that you have access to. If your permissions are restricted to the project level, it is better to [create a namespace through Rancher](#creating-namespaces) to ensure that you will have permission to access the namespace.
|
||||
|
||||
:::
|
||||
|
||||
### Creating Namespaces
|
||||
## Creating Namespaces
|
||||
|
||||
Create a new namespace to isolate apps and resources in a project.
|
||||
|
||||
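If a namespace is created outside the Rancher UI, it can still be scoped to a project by annotating it with the project ID. A sketch, with placeholder IDs in `<cluster-id>:<project-id>` form:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  annotations:
    # Placeholder project ID; find the real value in the project's details in Rancher.
    field.cattle.io/projectId: c-xxxxx:p-xxxxx
```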
@@ -50,7 +50,7 @@ When working with project resources that you can assign to a namespace (i.e., [w
|
||||
|
||||
**Result:** Your namespace is added to the project. You can begin assigning cluster resources to the namespace.
|
||||
|
||||
### Moving Namespaces to Another Project
|
||||
## Moving Namespaces to Another Project
|
||||
|
||||
Cluster admins and members may occasionally need to move a namespace to another project, such as when you want a different team to start using the application.
|
||||
|
||||
@@ -71,7 +71,7 @@ Cluster admins and members may occasionally need to move a namespace to another
|
||||
|
||||
**Result:** Your namespace is moved to a different project (or is unattached from all projects). If any project resources are attached to the namespace, the namespace releases them and then attaches resources from the new project.
|
||||
|
||||
### Editing Namespace Resource Quotas
|
||||
## Editing Namespace Resource Quotas
|
||||
|
||||
You can always override the namespace default limit to provide a specific namespace with access to more (or less) project resources.
|
||||
|
||||
|
||||
@@ -14,7 +14,7 @@ To configure the custom resources, go to the **Cluster Dashboard** To configure
|
||||
1. On the **Clusters** page, go to the cluster where you want to configure CIS scans and click **Explore**.
|
||||
1. In the left navigation bar, click **CIS Benchmark**.
|
||||
|
||||
### Scans
|
||||
## Scans
|
||||
|
||||
A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed.
|
||||
|
||||
@@ -31,7 +31,7 @@ spec:
|
||||
scanProfileName: rke-profile-hardened
|
||||
```
|
||||
|
||||
### Profiles
|
||||
## Profiles
|
||||
|
||||
A profile contains the configuration for the CIS scan, which includes the benchmark version to use and any specific tests to skip in that benchmark.
|
||||
|
||||
@@ -66,7 +66,7 @@ spec:
|
||||
- "1.1.21"
|
||||
```
|
||||
|
||||
### Benchmark Versions
|
||||
## Benchmark Versions
|
||||
|
||||
A benchmark version is the name of the benchmark to run using `kube-bench`, as well as the valid configuration parameters for that benchmark.
|
||||
|
||||
|
||||
@@ -17,7 +17,7 @@ When a cluster scan is run, you need to select a Profile which points to a speci
|
||||
|
||||
Follow all the steps below to add a custom Benchmark Version and run a scan using it.
|
||||
|
||||
### 1. Prepare the Custom Benchmark Version ConfigMap
|
||||
## 1. Prepare the Custom Benchmark Version ConfigMap
|
||||
|
||||
To create a custom benchmark version, first you need to create a ConfigMap containing the benchmark version's config files and upload it to your Kubernetes cluster where you want to run the scan.
|
||||
|
||||
@@ -42,7 +42,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom
|
||||
kubectl create configmap -n <namespace> foo --from-file=<path to directory foo>
|
||||
```
|
||||
|
||||
### 2. Add a Custom Benchmark Version to a Cluster
|
||||
## 2. Add a Custom Benchmark Version to a Cluster
|
||||
|
||||
1. In the upper left corner, click **☰ > Cluster Management**.
|
||||
1. On the **Clusters** page, go to the cluster where you want to add a custom benchmark and click **Explore**.
|
||||
@@ -54,7 +54,7 @@ To prepare a custom benchmark version ConfigMap, suppose we want to add a custom
|
||||
1. Add the minimum and maximum Kubernetes version limits applicable, if any.
|
||||
1. Click **Create**.
|
||||
|
||||
### 3. Create a New Profile for the Custom Benchmark Version
|
||||
## 3. Create a New Profile for the Custom Benchmark Version
|
||||
|
||||
To run a scan using your custom benchmark version, you need to add a new Profile pointing to this benchmark version.
|
||||
|
||||
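Expressed as a custom resource, such a profile might look like the sketch below. The API group and field names follow the cis-operator CRD conventions used elsewhere in these docs, and the profile name is a hypothetical placeholder:

```yaml
apiVersion: cis.cattle.io/v1
kind: ClusterScanProfile
metadata:
  name: foo-profile             # hypothetical profile name
spec:
  benchmarkVersion: foo         # the custom benchmark version added above
```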
@@ -66,7 +66,7 @@ To run a scan using your custom benchmark version, you need to add a new Profile
|
||||
1. Choose the Benchmark Version from the dropdown.
|
||||
1. Click **Create**.
|
||||
|
||||
### 4. Run a Scan Using the Custom Benchmark Version
|
||||
## 4. Run a Scan Using the Custom Benchmark Version
|
||||
|
||||
Once the Profile pointing to your custom benchmark version `foo` has been created, you can create a new Scan to run the custom test configs in the Benchmark Version.
|
||||
|
||||
|
||||
@@ -18,11 +18,10 @@ In order to deploy and run the adapter successfully, you need to ensure its vers
|
||||
:::
|
||||
|
||||
| Rancher Version | Adapter Version |
|
||||
|-----------------|:----------------:|
|
||||
| v2.8.0 | v103.0.0+up3.0.0 |
|
||||
| v2.8.1 | v103.0.0+up3.0.0 |
|
||||
| v2.8.2 | v103.0.0+up3.0.0 |
|
||||
| v2.8.3 | v103.0.1+up3.0.1 |
|
||||
|-----------------|------------------|
|
||||
| v2.9.2 | v104.0.0+up4.0.0 |
|
||||
| v2.9.1 | v104.0.0+up4.0.0 |
|
||||
| v2.9.0 | v104.0.0+up4.0.0 |
|
||||
|
||||
### 1. Gain Access to the Local Cluster
|
||||
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
---
|
||||
title: Supportconfig bundle
|
||||
title: Supportconfig Bundle
|
||||
---
|
||||
|
||||
<head>
|
||||
@@ -12,7 +12,7 @@ These bundles can be created through Rancher or through direct access to the clu
|
||||
|
||||
> **Note:** Only admin users can generate/download supportconfig bundles, regardless of method.
|
||||
|
||||
### Accessing through Rancher
|
||||
## Accessing Through Rancher
|
||||
|
||||
First, click on the hamburger menu. Then click the `Get Support` button.
|
||||
|
||||
@@ -24,7 +24,7 @@ In the next page, click on the `Generate Support Config` button.
|
||||
|
||||

|
||||
|
||||
### Accessing without rancher
|
||||
## Accessing Without Rancher
|
||||
|
||||
First, generate a kubeconfig for the cluster that Rancher is installed on.
|
||||
|
||||
|
||||
@@ -6,7 +6,7 @@ title: Cluster API (CAPI) with Rancher Turtles
|
||||
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher/cluster-api"/>
|
||||
</head>
|
||||
|
||||
[Rancher Turtles](https://turtles.docs.rancher.com/) is a [Rancher extension](../rancher-extensions.md) that manages the lifecycle of provisioned Kubernetes clusters, by providing integration between your Cluster API (CAPI) and Rancher. With Rancher Turtles, you can:
|
||||
[Rancher Turtles](https://turtles.docs.rancher.com/) is a [Kubernetes Operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/#operators-in-kubernetes) that manages the lifecycle of provisioned Kubernetes clusters, by providing integration between your Cluster API (CAPI) and Rancher. With Rancher Turtles, you can:
|
||||
|
||||
- Import CAPI clusters into Rancher, by installing the Rancher Cluster Agent in CAPI provisioned clusters.
|
||||
- Configure the [CAPI Operator](https://turtles.docs.rancher.com/reference-guides/rancher-turtles-chart/values#cluster-api-operator-values).
|
||||
|
||||
@@ -185,7 +185,7 @@ For detailed information on the values supported by the chart and their usage, r
|
||||
|
||||
:::note
|
||||
|
||||
Remember that if you opt for this installation option, you must manage the CAPI Operator installation yourself. You can follow the [CAPI Operator guide](https://turtles.docs.rancher.com/tasks/capi-operator/intro) in the Rancher Turtles documentation for assistance.
|
||||
Remember that if you opt for this installation option, you must manage the CAPI Operator installation yourself. You can follow the [CAPI Operator guide](https://turtles.docs.rancher.com/contributing/install_capi_operator) in the Rancher Turtles documentation for assistance.
|
||||
|
||||
:::
|
||||
|
||||
|
||||
@@ -63,6 +63,8 @@ The Helm chart in the git repository must include its dependencies in the charts
|
||||
|
||||
- **Temporary Workaround**: By default, user-defined secrets are not backed up in Fleet. You must recreate secrets when performing a disaster recovery restore or migrating Rancher into a fresh cluster. To modify the resourceSet to include extra resources you want to back up, refer to the docs [here](https://github.com/rancher/backup-restore-operator#user-flow).
|
||||
|
||||
- **Debug logging**: To enable debug logging of Fleet components, create a new **fleet** entry in the existing **rancher-config** ConfigMap in the **cattle-system** namespace with the value `{"debug": 1, "debugLevel": 1}`. The Fleet application restarts after you save the ConfigMap.
|
||||
|
||||
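The edited ConfigMap described above would look roughly like this. Only the `fleet` key is being added; any existing keys in `rancher-config` are omitted here for brevity:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rancher-config
  namespace: cattle-system
data:
  fleet: |
    {"debug": 1, "debugLevel": 1}
```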
## Documentation
|
||||
|
||||
The Fleet documentation is at https://fleet.rancher.io/.
|
||||
See the [official Fleet documentation](https://fleet.rancher.io/) to learn more.
|
||||
|
||||
@@ -30,7 +30,20 @@ When adding Fleet agent environment variables for the proxy, replace <PROXY_IP>
|
||||
|
||||
## Setting Environment Variables in the Rancher UI
|
||||
|
||||
To add the environment variable to an existing cluster,
|
||||
To add the environment variable to an existing cluster:
|
||||
|
||||
<Tabs groupId="k8s-distro">
|
||||
<TabItem value="RKE2/K3s" default>
|
||||
|
||||
1. Click **☰ > Cluster Management**.
|
||||
1. Go to the cluster where you want to add environment variables and click **⋮ > Edit Config**.
|
||||
1. Click **Agent Environment Vars** under **Cluster configuration**.
|
||||
1. Click **Add**.
|
||||
1. Enter the [required environment variables](#required-environment-variables).
|
||||
1. Click **Save**.
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="RKE">
|
||||
|
||||
1. Click **☰ > Cluster Management**.
|
||||
1. Go to the cluster where you want to add environment variables and click **⋮ > Edit Config**.
|
||||
@@ -39,6 +52,9 @@ To add the environment variable to an existing cluster,
|
||||
1. Enter the [required environment variables](#required-environment-variables).
|
||||
1. Click **Save**.
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
**Result:** The Fleet agent works behind a proxy.
|
||||
|
||||
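For RKE2/K3s clusters, the UI steps above correspond roughly to an agent environment variable list in the cluster spec. A sketch — the `agentEnvVars` field name follows Rancher's provisioning conventions, and `<PROXY_IP>` is a placeholder to replace with your proxy address:

```yaml
spec:
  agentEnvVars:
    - name: HTTP_PROXY
      value: http://<PROXY_IP>:8888
    - name: HTTPS_PROXY
      value: http://<PROXY_IP>:8888
    - name: NO_PROXY
      value: 127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
```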
## Setting Environment Variables on Private Nodes
|
||||
@@ -55,4 +71,4 @@ export HTTP_PROXY=http://${proxy_private_ip}:8888
|
||||
export HTTPS_PROXY=http://${proxy_private_ip}:8888
|
||||
export NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
|
||||
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
|
||||
```
|
||||
```
|
||||
|
||||
@@ -8,7 +8,7 @@ title: Overview
|
||||
|
||||
Introduced in Rancher v2.6.1, [Harvester](https://docs.harvesterhci.io/) is an open-source hyper-converged infrastructure (HCI) software built on Kubernetes. Harvester installs on bare metal servers and provides integrated virtualization and distributed storage capabilities. Although Harvester operates using Kubernetes, it does not require users to know Kubernetes concepts, making it a more user-friendly application.
|
||||
|
||||
### Feature Flag
|
||||
## Feature Flag
|
||||
|
||||
The Harvester feature flag is used to manage access to the Virtualization Management (VM) page in Rancher where users can navigate directly to Harvester clusters and access the Harvester UI. The Harvester feature flag is enabled by default. Click [here](../../how-to-guides/advanced-user-guides/enable-experimental-features/enable-experimental-features.md) for more information on feature flags in Rancher.
|
||||
|
||||
@@ -22,7 +22,7 @@ To navigate to the Harvester cluster, click **☰ > Virtualization Management**.
|
||||
|
||||
* Users may import a Harvester cluster only on the Virtualization Management page. Importing a cluster on the Cluster Management page is not supported, and a warning will advise you to return to the VM page to do so.
|
||||
|
||||
### Harvester Node Driver
|
||||
## Harvester Node Driver
|
||||
|
||||
The [Harvester node driver](https://docs.harvesterhci.io/v1.1/rancher/node/node-driver/) is generally available for RKE and RKE2 options in Rancher. The node driver is available whether or not the Harvester feature flag is enabled. Note that the node driver is off by default. Users may create RKE or RKE2 clusters on Harvester only from the Cluster Management page.
|
||||
|
||||
@@ -30,7 +30,7 @@ Harvester allows `.ISO` images to be uploaded and displayed through the Harveste
|
||||
|
||||
See [Provisioning Drivers](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/about-provisioning-drivers/about-provisioning-drivers.md#node-drivers) for more information on node drivers in Rancher.
|
||||
|
||||
### Port Requirements
|
||||
## Port Requirements
|
||||
|
||||
The port requirements for the Harvester cluster can be found [here](https://docs.harvesterhci.io/v1.1/install/requirements#networking).
|
||||
|
||||
|
||||
18
docs/integrations-in-rancher/integrations-in-rancher.md
Normal file
@@ -0,0 +1,18 @@
|
||||
---
|
||||
title: Integrations in Rancher
|
||||
---
|
||||
|
||||
<head>
|
||||
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher"/>
|
||||
</head>
|
||||
|
||||
Prime is the Rancher ecosystem’s enterprise offering, with additional security, extended lifecycles, and access to Prime-exclusive documentation. Rancher Prime installation assets are hosted on a trusted SUSE registry, owned and managed by Rancher. The trusted Prime registry includes only stable releases that have been community-tested.
|
||||
|
||||
Prime also offers options for production support, as well as add-ons to your subscription that tailor to your commercial needs.
|
||||
|
||||
To learn more and get started with Rancher Prime, please visit [this page](https://www.rancher.com/quick-start).
|
||||
|
||||
import DocCardList from '@theme/DocCardList';
|
||||
import { useCurrentSidebarCategory } from '@docusaurus/theme-common/internal';
|
||||
|
||||
<DocCardList items={useCurrentSidebarCategory().items.slice(0,9)} />
|
||||
@@ -1,54 +0,0 @@
|
||||
---
|
||||
title: Integrations in Rancher
|
||||
---
|
||||
|
||||
<head>
|
||||
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/integrations-in-rancher"/>
|
||||
</head>
|
||||
|
||||
import {Card, CardSection} from '@site/src/components/CardComponents';
|
||||
import {RocketRegular} from '@fluentui/react-icons';
|
||||
|
||||
Prime is the Rancher ecosystem’s enterprise offering, with additional security, extended lifecycles, and access to Prime-exclusive documentation. Rancher Prime installation assets are hosted on a trusted SUSE registry, owned and managed by Rancher. The trusted Prime registry includes only stable releases that have been community-tested.
|
||||
|
||||
Prime also offers options for production support, as well as add-ons to your subscription that tailor to your commercial needs.
|
||||
|
||||
To learn more and get started with Rancher Prime, please visit [this page](https://www.rancher.com/quick-start).
|
||||
|
||||
<CardSection
|
||||
id="Gettingstarted"
|
||||
icon={<RocketRegular />}
|
||||
>
|
||||
<Card
|
||||
title="Kubernetes Distributions"
|
||||
to="./integrations-in-rancher/kubernetes-distributions"
|
||||
/>
|
||||
<Card
|
||||
title="Virtualization on Kubernetes with Harvester"
|
||||
to="./integrations-in-rancher/harvester"
|
||||
/>
|
||||
<Card
|
||||
title="Cloud Native Storage with Longhorn"
|
||||
to="./integrations-in-rancher/longhorn"
|
||||
/>
|
||||
<Card
|
||||
title="Container Security with NeuVector"
|
||||
to="./integrations-in-rancher/neuvector"
|
||||
/>
|
||||
<Card
|
||||
title="Advanced Policy Management with Kubewarden"
|
||||
to="./integrations-in-rancher/kubewarden"
|
||||
/>
|
||||
<Card
|
||||
title="Operating System Management with Elemental"
|
||||
to="./integrations-in-rancher/elemental"
|
||||
/>
|
||||
<Card
|
||||
title="Continuous Delivery with Fleet"
|
||||
to="./integrations-in-rancher/fleet"
|
||||
/>
|
||||
<Card
|
||||
title="Kubernetes on the Desktop"
|
||||
to="./integrations-in-rancher/rancher-desktop"
|
||||
/>
|
||||
</CardSection>
|
||||
@@ -45,7 +45,7 @@ To configure the resources allocated to an Istio component,
|
||||
1. In the left navigation bar, click **Apps**.
1. Click **Installed Apps**.
1. Go to the `istio-system` namespace. In one of the Istio workloads, such as `rancher-istio`, click **⋮ > Edit/Upgrade**.
1. Click **Upgrade** to edit the base components via changes to the values.yaml or add an [overlay file](configuration-options/configuration-options.md#overlay-file). For more information about editing the overlay file, see [this section.](cpu-and-memory-allocations.md#editing-the-overlay-file)
1. Click **Upgrade** to edit the base components via changes to the values.yaml or add an [overlay file](configuration-options/configuration-options.md#overlay-file). For more information about editing the overlay file, see [this section.](#editing-the-overlay-file)
1. Change the CPU or memory allocations, the nodes where each component is scheduled, or the node tolerations.
1. Click **Upgrade** to roll out the changes.
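
As an illustration of the resource-change step, the edit might look like the following values.yaml fragment. This is a sketch: the key names follow upstream Istio's Helm values (`pilot.resources`, `nodeSelector`, `tolerations`) and should be treated as an assumption — verify the exact schema for your `rancher-istio` chart version.

```yaml
# Hypothetical values.yaml fragment: raise Pilot's CPU/memory requests,
# pin it to Linux nodes, and tolerate a control-plane taint.
# Key names follow upstream Istio Helm values; confirm against your chart.
pilot:
  resources:
    requests:
      cpu: 500m
      memory: 2048Mi
  nodeSelector:
    kubernetes.io/os: linux
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
```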
@@ -43,10 +43,14 @@ It also includes the following:

### Kiali

Kiali is a comprehensive visualization aid used for graphing traffic flow throughout the service mesh. It allows you to see how the services are connected, including the traffic rates and latencies between them.
[Kiali](https://kiali.io/) is a comprehensive visualization aid used for graphing traffic flow throughout the service mesh. It allows you to see how the services are connected, including the traffic rates and latencies between them.

You can check the health of the service mesh, or drill down to see the incoming and outgoing requests to a single component.

:::note
For Istio installations `103.1.0+up1.19.6` and later, Kiali uses a token value for its authentication strategy. The name of the Kiali service account in Rancher is `kiali`. Use this name if you are writing commands that require you to enter the name of the Kiali service account (for example, if you are trying to generate or retrieve a session token). For more information, refer to the [Kiali token authentication FAQ](https://kiali.io/docs/faq/authentication/).
:::
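
Generating a session token for the `kiali` service account mentioned in the note might look like the following. This is a sketch, not part of the Rancher docs: it assumes Kiali runs in the `istio-system` namespace and that your `kubectl` is v1.24 or later (which added `kubectl create token`).

```shell
# Assumption: Kiali's service account is in the istio-system namespace.
# Requires kubectl v1.24+ for the `create token` subcommand.
kubectl -n istio-system create token kiali
# Paste the printed JWT into Kiali's token login prompt.
```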

### Jaeger

Our Istio installer includes a quick-start, all-in-one installation of [Jaeger,](https://www.jaegertracing.io/) a tool used for tracing distributed systems.

@@ -71,6 +75,10 @@ To remove Istio components from a cluster, namespace, or workload, refer to the
> By default, only cluster-admins have access to Kiali. For instructions on how to allow admin, edit, or view roles to access it, see [this section.](rbac-for-istio.md)

:::note
For Istio installations version `103.1.0+up1.19.6` and later, Kiali uses a token value for its authentication strategy. The name of the Kiali service account in Rancher is `kiali`. Use this name if you are writing commands that require you to enter the name of the Kiali service account (for example, if you are trying to generate or retrieve a session token). For more information, refer to the [Kiali token authentication FAQ](https://kiali.io/docs/faq/authentication/).
:::

After Istio is set up in a cluster, Grafana, Prometheus, and Kiali are available in the Rancher UI.
To access the Grafana and Prometheus visualizations,