mirror of
https://github.com/rancher/rancher-docs.git
synced 2026-04-14 10:25:40 +00:00
Add v2.14 preview docs (#2212)
18
versioned_docs/version-2.14/api/api-reference.mdx
Normal file
@@ -0,0 +1,18 @@
---
title: API Reference
hide_table_of_contents: true
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/api-reference"/>
</head>

:::note

At this time, not all Rancher resources are available through the Rancher Kubernetes API.

:::

import ApiDocMdx from '@theme/ApiDocMdx';

<ApiDocMdx id="rancher-api-v2-13" />
90
versioned_docs/version-2.14/api/api-tokens.md
Normal file
@@ -0,0 +1,90 @@
---
title: Using API Tokens
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/api-tokens"/>
</head>

Rancher v2.8.0 introduced the [Rancher Kubernetes API](./api-reference.mdx), which can be used to manage Rancher resources through `kubectl`. This page covers API tokens used with the [Rancher CLI](../reference-guides/cli-with-rancher/cli-with-rancher.md), [kubeconfig files](../how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md#about-the-kubeconfig-file), Terraform, and the [v3 API browser](./v3-rancher-api-guide.md#enable-view-in-api).

By default, some cluster-level API tokens are generated with an infinite time-to-live (`ttl=0`). In other words, API tokens with `ttl=0` never expire unless you invalidate them. Tokens are not invalidated by changing a password.

You can deactivate API tokens by deleting them or by deactivating the user account.

## Deleting Tokens

To delete a token:

1. Go to the list of all tokens in the Rancher API view at `https://<Rancher-Server-IP>/v3/tokens`.
1. Access the token you want to delete by its ID. For example, `https://<Rancher-Server-IP>/v3/tokens/kubectl-shell-user-vqkqt`.
1. Click **Delete**.
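The same deletion can be scripted against the v3 endpoint shown above. A minimal sketch, where `$RANCHER_URL` and `$API_TOKEN` are hypothetical placeholders for your server URL and an API key with permission to manage tokens:

```shell
# Sketch: delete a token by ID via the v3 API.
# RANCHER_URL and API_TOKEN are placeholders you must supply.
curl -sk -u "$API_TOKEN" -X DELETE "$RANCHER_URL/v3/tokens/kubectl-shell-user-vqkqt"
```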
The following is a complete list of tokens generated with `ttl=0`:

| Token | Description |
| ----------------- | ----------- |
| `kubectl-shell-*` | Access to the `kubectl` shell in the browser |
| `agent-*`         | Token for agent deployment |
| `compose-token-*` | Token for compose |
| `helm-token-*`    | Token for Helm chart deployment |
| `drain-node-*`    | Token for draining nodes (Rancher uses `kubectl` for drain because Kubernetes has no native drain API) |

## Setting TTL on Kubeconfig Tokens

Admins can set a global time-to-live (TTL) on kubeconfig tokens. To change the default kubeconfig TTL, navigate to global settings and set [`kubeconfig-default-token-ttl-minutes`](#kubeconfig-default-token-ttl-minutes) to the desired duration in minutes. As of Rancher v2.8, the default value of [`kubeconfig-default-token-ttl-minutes`](#kubeconfig-default-token-ttl-minutes) is `43200`, which means that tokens expire in 30 days.
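As a quick sanity check, the minutes-to-days conversion behind that default works out as follows:

```shell
# 43200 minutes / 60 (minutes per hour) / 24 (hours per day) = 30 days
echo $(( 43200 / 60 / 24 ))
# prints 30
```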
:::note

This setting is used by all kubeconfig tokens except those created by the CLI to [generate kubeconfig tokens](#disable-tokens-in-generated-kubeconfigs).

:::

## Disable Tokens in Generated Kubeconfigs

Set the `kubeconfig-generate-token` setting to `false`. This setting instructs Rancher to no longer automatically generate a token when a user downloads a kubeconfig file. When this setting is deactivated, a generated kubeconfig references the [Rancher CLI](../reference-guides/cli-with-rancher/kubectl-utility.md#authentication-with-kubectl-and-kubeconfig-tokens-with-ttl) to retrieve a short-lived token for the cluster. When this kubeconfig is used in a client, such as `kubectl`, the Rancher CLI needs to be installed to complete the login request.

## Token Hashing

You can [enable token hashing](../how-to-guides/advanced-user-guides/enable-experimental-features/enable-experimental-features.md), where tokens undergo a one-way hash using the SHA256 algorithm. This is a non-reversible process: once enabled, this feature cannot be disabled. You should first evaluate this setting in a test environment and take backups before enabling it.

This feature affects all tokens, which include, but are not limited to, the following:

- Kubeconfig tokens
- Bearer tokens for API keys and calls
- Tokens used by internal operations

## Token Settings

These global settings affect Rancher token behavior.

| Setting | Description |
| ------- | ----------- |
| [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) | TTL in minutes on a user auth session token. |
| [`auth-user-session-idle-ttl-minutes`](#auth-user-session-idle-ttl-minutes) | TTL in minutes on a user auth session token without user activity. |
| [`kubeconfig-default-token-ttl-minutes`](#kubeconfig-default-token-ttl-minutes) | Default TTL applied to all kubeconfig tokens except for tokens [generated by the Rancher CLI](#disable-tokens-in-generated-kubeconfigs). |
| [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes) | Max TTL for all tokens except those controlled by [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes). |
| [`kubeconfig-generate-token`](#kubeconfig-generate-token) | If true, automatically generate tokens when a user downloads a kubeconfig. |

### auth-user-session-ttl-minutes

Time-to-live (TTL) duration in minutes, used to determine when a user auth session token expires. When it expires, the user must log in again and obtain a new token. This setting is not affected by [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes). Session tokens are created when a user logs into Rancher.

### auth-user-session-idle-ttl-minutes

Time-to-live (TTL) without user activity for login session tokens, in minutes. By default, `auth-user-session-idle-ttl-minutes` is set to the same value as [`auth-user-session-ttl-minutes`](#auth-user-session-ttl-minutes) for backward compatibility. It must never exceed the value of `auth-user-session-ttl-minutes`.

### kubeconfig-default-token-ttl-minutes

Time-to-live (TTL) duration in minutes, used to determine when a kubeconfig token expires. When the token has expired, the API rejects it. This setting can't be larger than [`auth-token-max-ttl-minutes`](#auth-token-max-ttl-minutes). This setting applies to tokens generated in a requested kubeconfig file, except for tokens [generated by the Rancher CLI](#disable-tokens-in-generated-kubeconfigs). As of Rancher v2.8, the default duration is `43200`, which means that tokens expire in 30 days.

### auth-token-max-ttl-minutes

Maximum time-to-live (TTL) in minutes allowed for auth tokens. If a user attempts to create a token with a TTL greater than `auth-token-max-ttl-minutes`, Rancher sets the token TTL to the value of `auth-token-max-ttl-minutes`. Applies to all kubeconfig tokens and API tokens. As of Rancher v2.8, the default duration is `129600`, which means that tokens expire in 90 days.
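The clamping behavior described above is effectively `min(requested, max)`. A small illustration with the v2.8 default:

```shell
max_ttl=129600                      # auth-token-max-ttl-minutes default
requested=200000                    # a requested TTL above the maximum
effective=$(( requested > max_ttl ? max_ttl : requested ))
echo "$effective"                   # prints 129600 (clamped to the maximum)
echo $(( max_ttl / 60 / 24 ))       # prints 90 (the default expressed in days)
```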
### kubeconfig-generate-token

When true, kubeconfigs requested through the UI contain a valid token. When false, kubeconfigs contain a command that uses the Rancher CLI to prompt the user to log in. [The CLI then retrieves and caches a token for the user](../reference-guides/cli-with-rancher/kubectl-utility.md#authentication-with-kubectl-and-kubeconfig-tokens-with-ttl).
20
versioned_docs/version-2.14/api/extension-apiserver.md
Normal file
@@ -0,0 +1,20 @@
---
title: Extension API Server
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/extension-apiserver"/>
</head>

Rancher extends Kubernetes with additional APIs by registering an extension API server using the [Kubernetes API Aggregation Layer](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).

## Aggregation Layer is Required

The API aggregation layer must be configured on the local Kubernetes cluster for the `v1.ext.cattle.io` `APIService` to work correctly. If the `APIService` does not receive a registration request after the Rancher server starts, the pod crashes with a log entry indicating the error. If your pods consistently fail to detect registration despite a correctly configured cluster, you can increase the timeout by setting `.Values.aggregationRegistrationTimeout` in Helm.
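For example, the timeout could be raised when upgrading Rancher with Helm. This is a sketch only: the release name, namespace, chart repository, and the value's format are assumptions you should adjust for your installation.

```shell
# Sketch: raise the aggregation registration timeout via the chart value
# named above. Names and the example value are placeholders.
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --reuse-values \
  --set aggregationRegistrationTimeout=120
```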
All Kubernetes versions supported by this Rancher version's Kubernetes distributions (RKE2/K3s) have the aggregation layer configured and enabled by default. However, if you suspect that your cluster configuration is incorrect, refer to the [Kubernetes Aggregation Layer documentation](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/) for information on configuring the aggregation layer.

:::note
If the underlying Kubernetes distribution does not support the aggregation layer, you must migrate to a Kubernetes distribution that does before upgrading.
:::
152
versioned_docs/version-2.14/api/quickstart.md
Normal file
@@ -0,0 +1,152 @@
---
title: RK-API Quick Start Guide
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/quickstart"/>
</head>

You can access Rancher's resources through the Kubernetes API. This guide helps you get started with this API as a Rancher user.

1. In the upper left corner, click **☰ > Global Settings**.
2. Find and copy the address in the `server-url` field.
3. [Create](../reference-guides/user-settings/api-keys.md#creating-an-api-key) a Rancher API key with no scope.

   :::danger

   A Rancher API key with no scope grants unrestricted access to all resources that the user can access. To prevent unauthorized use, this key should be stored securely and rotated frequently.

   :::

4. Create a `kubeconfig.yaml` file. Replace `$SERVER_URL` with the server URL and `$API_KEY` with your Rancher API key:

   ```yaml
   apiVersion: v1
   kind: Config
   clusters:
   - name: "rancher"
     cluster:
       server: "$SERVER_URL"

   users:
   - name: "rancher"
     user:
       token: "$API_KEY"

   contexts:
   - name: "rancher"
     context:
       user: "rancher"
       cluster: "rancher"

   current-context: "rancher"
   ```
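One way to substitute the placeholders is with `sed`. This is only a sketch: the template file name and the two example values are assumptions for illustration.

```shell
# Write the template shown above, then substitute the two placeholders.
cat > kubeconfig-template.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: "rancher"
  cluster:
    server: "$SERVER_URL"
users:
- name: "rancher"
  user:
    token: "$API_KEY"
contexts:
- name: "rancher"
  context:
    user: "rancher"
    cluster: "rancher"
current-context: "rancher"
EOF

SERVER_URL="https://rancher.example.com"   # your server-url value (example)
API_KEY="token-abcde:examplesecret"        # your API key (example)
sed -e "s|\$SERVER_URL|$SERVER_URL|" -e "s|\$API_KEY|$API_KEY|" \
  kubeconfig-template.yaml > kubeconfig.yaml
```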
You can use this file with any compatible tool, such as `kubectl` or [client-go](https://github.com/kubernetes/client-go). For a quick demo, see the [kubectl example](#api-kubectl-example).

For more information on handling more complex certificate setups, see [Specifying CA Certs](#specifying-ca-certs).

For more information on available kubeconfig options, see the [upstream documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).

## API kubectl Example

This example shows how to use `kubectl` to create a project, then delete it. For a list of other available Rancher resources, refer to the [API Reference](./api-reference.mdx) page.

:::note

At this time, not all Rancher resources are available through the Rancher Kubernetes API.

:::

1. Set your `KUBECONFIG` environment variable to the kubeconfig file you just created:

   ```bash
   export KUBECONFIG=$(pwd)/kubeconfig.yaml
   ```

2. Use `kubectl explain` to view the available fields for projects, or complex sub-fields of resources:

   ```bash
   kubectl explain projects
   kubectl explain projects.spec
   ```

   Not all resources may have detailed output.

3. Add the following content to a file named `project.yaml`:

   ```yaml
   apiVersion: management.cattle.io/v3
   kind: Project
   metadata:
     # name should be unique across all projects in every cluster
     name: p-abc123
     # generateName can be used instead of `name` to randomly generate a name.
     # generateName: p-
     # namespace should match spec.clusterName.
     namespace: local
   spec:
     # clusterName should match `metadata.name` of the target cluster.
     clusterName: local
     description: Example Project
     # displayName is the human-readable name and is visible from the UI.
     displayName: Example
   ```

4. Create the project:

   ```bash
   kubectl create -f project.yaml
   ```

5. Delete the project:

   How you delete the project depends on how you set the project name.

   **A. If you used `name` when creating the project**:

   ```bash
   kubectl delete -f project.yaml
   ```

   **B. If you used `generateName`**:

   Replace `$PROJECT_NAME` with the randomly generated name of the project displayed by `kubectl` after you created the project.

   ```bash
   kubectl delete project $PROJECT_NAME -n local
   ```

## Specifying CA Certs

Most setups require additional modifications to the above template so that your tools recognize Rancher's CA certificates.

1. In the upper left corner, click **☰ > Global Settings**.
2. Find and copy the value in the `ca-certs` field.
3. Save the value in a file named `rancher.crt`.

   :::note
   If your Rancher instance is proxied by another service, you must extract the certificate that the service is using and add it to the kubeconfig file, as demonstrated in step 5.
   :::

4. The following commands convert `rancher.crt` to base64 output, trim all newlines, update the cluster in the kubeconfig with the certificate, and finish by removing the `rancher.crt` file:

   ```bash
   export KUBECONFIG=$PATH_TO_RANCHER_KUBECONFIG
   kubectl config set clusters.rancher.certificate-authority-data $(base64 < rancher.crt | tr -d '\n')
   rm rancher.crt
   ```

5. (Optional) If you use self-signed certificates that aren't trusted by your system, you can set the insecure option in your kubeconfig with `kubectl`:

   :::danger

   This option shouldn't be used in production because it is a security risk.

   :::

   ```bash
   export KUBECONFIG=$PATH_TO_RANCHER_KUBECONFIG
   kubectl config set clusters.rancher.insecure-skip-tls-verify true
   ```

If your Rancher instance is proxied by another service, you must extract the certificate that the service is using and add it to the kubeconfig file, as demonstrated above.
94
versioned_docs/version-2.14/api/v3-rancher-api-guide.md
Normal file
@@ -0,0 +1,94 @@
---
title: Previous v3 Rancher API Guide
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/v3-rancher-api-guide"/>
</head>

Rancher v2.8.0 introduced the Rancher Kubernetes API (RK-API). The previous v3 Rancher API is still available. This page describes the v3 API. For more information on the RK-API, see the [RK-API quickstart](./quickstart.md) and [reference guide](./api-reference.mdx).

## How to Use the API

The previous v3 API has its own user interface accessible from a [web browser](#enable-view-in-api). This is an easy way to see resources, perform actions, and see the equivalent `curl` or HTTP request and response. To access it:

<Tabs>
<TabItem value="Rancher v2.6.4+">

1. Click your user avatar in the upper right corner.
1. Click **Account & API Keys**.
1. Under the **API Keys** section, find the **API Endpoint** field and click the link. The link looks something like `https://<RANCHER_FQDN>/v3`, where `<RANCHER_FQDN>` is the fully qualified domain name of your Rancher deployment.

</TabItem>
<TabItem value="Rancher before v2.6.4">

Go to the URL endpoint at `https://<RANCHER_FQDN>/v3`, where `<RANCHER_FQDN>` is the fully qualified domain name of your Rancher deployment.

</TabItem>
</Tabs>

## Authentication

API requests must include authentication information. Authentication is done with HTTP basic authentication using [API keys](../reference-guides/user-settings/api-keys.md). API keys can create new clusters and have access to multiple clusters via `/v3/clusters/`. [Cluster and project roles](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md) apply to these keys and restrict which clusters and projects the account can see and which actions it can take.
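As an illustrative sketch, a v3 request using an API key as basic-auth credentials might look like the following; the FQDN and the key value are hypothetical placeholders:

```shell
# Sketch: list clusters via the v3 API with HTTP basic auth.
# <RANCHER_FQDN> and the token value are placeholders.
curl -sk -u "token-abcde:examplesecret" "https://<RANCHER_FQDN>/v3/clusters"
```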
By default, certain cluster-level API tokens are generated with an infinite time-to-live (`ttl=0`). In other words, API tokens with `ttl=0` never expire unless you invalidate them. For details on how to invalidate them, refer to the [API tokens page](api-tokens.md).

## Making Requests

The API is generally RESTful, but has several features that make the definition of everything discoverable by a client, so that generic clients can be written instead of writing specific code for every type of resource. For detailed information about the generic API spec, [see the further documentation](https://github.com/rancher/api-spec/blob/master/specification.md).

- Every type has a schema which describes:
  - The URL to get to the collection of this type of resource.
  - Every field the resource can have, along with its type, basic validation rules, whether it is required or optional, etc.
  - Every action that is possible on this type of resource, with its inputs and outputs (also as schemas).
  - Every field that allows filtering.
  - Which HTTP verb methods are available for the collection itself, or for individual resources in the collection.

The design allows you to load just the list of schemas and access everything about the API. The UI for the API contains no code specific to Rancher itself. The URL to get schemas is sent in every HTTP response as an `X-Api-Schemas` header. From there you can follow the `collection` link on each schema to know where to list resources, and follow other `links` inside the returned resources to get any other information.

In practice, you may just want to construct URL strings. We highly suggest limiting this to the top level, that is, listing a collection (`/v3/<type>`) or getting a specific resource (`/v3/<type>/<id>`). Anything deeper than that is subject to change in future releases.

Resources have relationships between each other called links. Each resource includes a map of `links` with the name of the link and the URL where you can retrieve that information. Again, you should `GET` the resource and then follow the URL in the `links` map, not construct these strings yourself.

Most resources have actions, which do something or change the state of the resource. To use them, send an HTTP `POST` to the URL in the `actions` map for the action you want. Certain actions require input or produce output. See the individual documentation for each type, or the schemas, for specific information.

To edit a resource, send an HTTP `PUT` to the `links.update` link on the resource with the fields that you want to change. If the link is missing, you don't have permission to update the resource. Unknown fields and fields that are not editable are ignored.

To delete a resource, send an HTTP `DELETE` to the `links.remove` link on the resource. If the link is missing, you don't have permission to delete the resource.

To create a new resource, send an HTTP `POST` to the collection URL in the schema (which is `/v3/<type>`).

## Filtering

Most collections can be filtered server-side on common fields using HTTP query parameters. The `filters` map shows you which fields can be filtered on and what the filter values were for the request you made. The API UI has controls to set up filtering and show you the appropriate request. For simple "equals" matches, it's just `field=value`. Modifiers can be added to the field name, for example, `field_gt=42` for "field is greater than 42". See the [API spec](https://github.com/rancher/api-spec/blob/master/specification.md#filtering) for full details.

## Sorting

Most collections can be sorted server-side on common fields using HTTP query parameters. The `sortLinks` map shows you which sorts are available, along with the URL to get the collection sorted that way. It also includes information about what the current response was sorted by, if specified.

## Pagination

API responses are paginated with a limit of 100 resources per page by default. This can be changed with the `limit` query parameter, up to a maximum of 1000, for example, `/v3/pods?limit=1000`. The `pagination` map in collection responses tells you whether or not you have the full result set and has a link to the next page if you do not.

## Capturing v3 API Calls

You can use browser developer tools to capture how the v3 API is called. For example, you could follow these steps to use the Chrome developer tools to get the API call for provisioning a Rancher Kubernetes distribution cluster:

1. In the Rancher UI, go to **Cluster Management** and click **Create**.
1. Click one of the cluster types. This example uses DigitalOcean.
1. Fill out the form with a cluster name and node template, but don't click **Create**.
1. Open the developer tools before creating the cluster, so that the API call is recorded. To open the tools, right-click the Rancher UI and click **Inspect**.
1. In the developer tools, click the **Network** tab.
1. On the **Network** tab, make sure **Fetch/XHR** is selected.
1. In the Rancher UI, click **Create**. In the developer tools, you should see a new network request with the name `cluster?_replace=true`.
1. Right-click `cluster?_replace=true` and click **Copy > Copy as cURL**.
1. Paste the result into any text editor. You can see the POST request, including the URL it was sent to, all headers, and the full body of the request. This command can be used to create a cluster from the command line. Note: the request should be stored in a safe place because it contains credentials.

### Enable View in API

You can also view captured v3 API calls for your respective clusters and resources. This feature is not enabled by default. To enable it:

1. Click your **User Tile** in the top right corner of the UI and select **Preferences** from the drop-down menu.
2. Under the **Advanced Features** section, click **Enable "View in API"**.

Once checked, the **View in API** link is displayed under the **⋮** sub-menu on resource pages in the UI.
197
versioned_docs/version-2.14/api/workflows/kubeconfigs.md
Normal file
@@ -0,0 +1,197 @@
---
title: Kubeconfigs
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/workflows/kubeconfigs"/>
</head>

## Kubeconfig Resource

Kubeconfig is a Rancher resource (`kubeconfigs.ext.cattle.io`) that generates `v1.Config` kubeconfig files for interacting with Rancher and the clusters managed by Rancher. To confirm that the resource is available, run:

```sh
kubectl api-resources --api-group=ext.cattle.io
```

To get a description of the fields and structure of the Kubeconfig resource, run:

```sh
kubectl explain kubeconfigs.ext.cattle.io
```

## Creating a Kubeconfig

Only a **valid and active** Rancher user can create a Kubeconfig. For example, trying to create a Kubeconfig using a `system:admin` service account leads to an error:

```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Kubeconfig
EOF
Error from server (Forbidden): error when creating "STDIN": kubeconfigs.ext.cattle.io is forbidden: user system:admin is not a Rancher user
```

:::warning Important

The kubeconfig content is generated and returned in the `.status.value` field **only once**, when the Kubeconfig is successfully created, because it contains the secret values of the created tokens. It therefore has to be captured with an appropriate output option, such as `-o jsonpath='{.status.value}'` or `-o yaml`.

:::

A kubeconfig can be created for more than one cluster at a time by specifying a list of cluster names in the `spec.clusters` field. You can look up cluster names by listing `clusters.management.cattle.io` resources:

```sh
kubectl get clusters.management.cattle.io -o=jsonpath="{.items[*]['metadata.name', 'spec.displayName']}{'\n'}"
local local
c-m-p66cdvlj downstream1
```

The `metadata.name` and `metadata.generateName` fields are ignored, and the name of the new Kubeconfig is automatically generated with the prefix `kubeconfig-`.

Use the `spec.currentContext` field to choose which cluster becomes the current context in the generated kubeconfig. If you do not set `spec.currentContext`, the first cluster in the `spec.clusters` list is used as the current context. For ACE-enabled clusters that don't have an FQDN set, the first control plane node is used as the current context.

For ACE-enabled clusters, if the FQDN is set, it is used as a cluster entry in the kubeconfig; otherwise, entries for all control plane nodes are created.

```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Kubeconfig
spec:
  clusters: [c-m-p66cdvlj, c-m-fcd3g5h]
  description: My Kubeconfig
  currentContext: c-m-p66cdvlj
EOF
```

If `"*"` is specified as the first item in the `spec.clusters` field, the kubeconfig is created for all clusters that the user has access to, if any.

```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Kubeconfig
spec:
  clusters: ["*"]
  description: My Kubeconfig
EOF
```

If `spec.ttl` is not specified, the Kubeconfig's tokens are created with the expiration defined by the `kubeconfig-default-token-ttl-minutes` setting, which is 30 days by default. If `spec.ttl` is specified, it must be greater than 0 and less than or equal to the value of the `kubeconfig-default-token-ttl-minutes` setting converted to seconds.

```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Kubeconfig
spec:
  clusters: [c-m-p66cdvlj] # Downstream cluster
  ttl: 7200 # 2 hours
EOF
```
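Because the setting is expressed in minutes while `spec.ttl` is in seconds, a requested TTL can be validated with shell arithmetic before creating the Kubeconfig. An illustrative sketch:

```shell
ttl_seconds=7200                 # the requested spec.ttl (2 hours)
default_ttl_minutes=43200        # kubeconfig-default-token-ttl-minutes (30 days)
max_ttl_seconds=$(( default_ttl_minutes * 60 ))
if [ "$ttl_seconds" -gt 0 ] && [ "$ttl_seconds" -le "$max_ttl_seconds" ]; then
  echo "ttl accepted"            # prints "ttl accepted" for these values
else
  echo "ttl out of range"
fi
```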
## Listing Kubeconfigs

Listing previously generated Kubeconfigs can be useful for cleaning up backing tokens when a Kubeconfig is no longer needed (e.g., it was issued temporarily). Admins can list all Kubeconfigs, while regular users can only view their own.

```sh
kubectl get kubeconfig
NAME               TTL   TOKENS   STATUS     AGE
kubeconfig-zp786   30d   2/2      Complete   18d
kubeconfig-7zvzp   30d   1/1      Complete   12d
kubeconfig-jznml   30d   1/1      Complete   12d
```

Use `-o wide` to get more details:

```sh
kubectl get kubeconfig -o wide
NAME               TTL   TOKENS   STATUS     AGE   USER         CLUSTERS   DESCRIPTION
kubeconfig-zp786   30d   2/2      Complete   18d   user-w5gcf   *          all clusters
kubeconfig-7zvzp   30d   1/1      Complete   12d   u-w7drc      *
kubeconfig-jznml   30d   1/1      Complete   12d   u-w7drc      *
```

## Viewing a Kubeconfig

Admins can get any Kubeconfig, while regular users can only get their own.

```sh
kubectl get kubeconfig kubeconfig-zp786
NAME               TTL   TOKENS   STATUS     AGE
kubeconfig-zp786   30d   2/2      Complete   18d
```

Use `-o wide` to get more details:

```sh
kubectl get kubeconfig kubeconfig-zp786 -o wide
NAME               TTL   TOKENS   STATUS     AGE   USER         CLUSTERS   DESCRIPTION
kubeconfig-zp786   30d   2/2      Complete   18d   user-w5gcf   *          all clusters
```

## Deleting a Kubeconfig

Admins can delete any Kubeconfig, while regular users can only delete their own. When a Kubeconfig is deleted, its kubeconfig tokens are also deleted.

```sh
kubectl delete kubeconfig kubeconfig-zp786
kubeconfig.ext.cattle.io "kubeconfig-zp786" deleted
```

To delete a Kubeconfig using preconditions:

```sh
cat <<EOF | kubectl delete --raw /apis/ext.cattle.io/v1/kubeconfigs/kubeconfig-zp786 -f -
{
  "apiVersion": "v1",
  "kind": "DeleteOptions",
  "preconditions": {
    "uid": "52183e05-d382-47d2-b4b9-d0735823ce90",
    "resourceVersion": "31331505"
  }
}
EOF
```

## Deleting a Collection of Kubeconfigs

Admins can delete any Kubeconfig, while regular users can only delete their own.

To delete all Kubeconfigs:

```sh
kubectl delete --raw /apis/ext.cattle.io/v1/kubeconfigs
```

To delete a collection of Kubeconfigs by label:

```sh
kubectl delete --raw "/apis/ext.cattle.io/v1/kubeconfigs?labelSelector=foo%3Dbar"
```

## Updating a Kubeconfig

Only the `metadata` fields (e.g., adding a label or an annotation) and the `spec.description` field can be updated. All other `spec` fields are immutable.

To edit a Kubeconfig:

```sh
kubectl edit kubeconfig kubeconfig-zp786
```

To patch a Kubeconfig and update its description:

```sh
kubectl patch kubeconfig kubeconfig-zp786 --type merge -p '{"spec":{"description":"Updated description"}}'
kubeconfig.ext.cattle.io/kubeconfig-zp786 patched

kubectl get kubeconfig kubeconfig-zp786 -o jsonpath='{.spec.description}'
Updated description
```

To patch a Kubeconfig and add a label:

```sh
kubectl patch kubeconfig kubeconfig-zp786 --type merge -p '{"metadata":{"labels":{"foo":"bar"}}}'
kubeconfig.ext.cattle.io/kubeconfig-zp786 patched

kubectl get kubeconfig kubeconfig-zp786 -o jsonpath='{.metadata.labels.foo}'
bar
```
219 versioned_docs/version-2.14/api/workflows/projects.md Normal file
@@ -0,0 +1,219 @@
---
title: Projects
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/workflows/projects"/>
</head>

## Creating a Project

Project resources may only be created on the management cluster. See below for [creating namespaces under projects in a managed cluster](#creating-a-namespace-in-a-project).
### Creating a Basic Project

```bash
kubectl create -f - <<EOF
apiVersion: management.cattle.io/v3
kind: Project
metadata:
  generateName: p-
  namespace: c-m-abcde
spec:
  clusterName: c-m-abcde
  displayName: myproject
EOF
```

When creating a new project, you have two primary options for setting the name:

- **Automatic Generation:** Use `metadata.generateName` to ensure a unique project ID. Note that you must use `kubectl create` (instead of `kubectl apply`) with this option, as `kubectl apply` does not support it.
- **Manual Naming:** You can explicitly set the project ID using `metadata.name`. If a project with that exact name already exists, the request is denied.

The display name shown in the UI is set by `spec.displayName`. If `spec.displayName` is not provided, `metadata.name` is used instead.

Set `metadata.namespace` and `spec.clusterName` to the ID of the cluster the project belongs to.
If you create a project through a cluster member account and want that account to be able to access the project, you must include the annotation `field.cattle.io/creatorId`, set to the cluster member account's user ID.

```bash
kubectl create -f - <<EOF
apiVersion: management.cattle.io/v3
kind: Project
metadata:
  annotations:
    field.cattle.io/creatorId: user-id
  generateName: p-
  namespace: c-m-abcde
spec:
  clusterName: c-m-abcde
  displayName: myproject
EOF
```

Setting the `field.cattle.io/creatorId` annotation creates a `ProjectRoleTemplateBinding` that grants the specified user the ability to see project resources with the `get` command and view the project in the Rancher UI. Cluster owner and admin accounts don't need to set this annotation to perform these tasks.

Setting the `field.cattle.io/creator-principal-name` annotation to the user's principal preserves it in the `ProjectRoleTemplateBinding` that is automatically created for the project owner.

If you don't want the creator to be added to the project as the owner member (e.g., if the creator is a cluster administrator), set the `field.cattle.io/no-creator-rbac` annotation to `true`, which prevents the corresponding `ProjectRoleTemplateBinding` from being created.
### Creating a Project With a Resource Quota

Refer to [Kubernetes Resource Quota](https://kubernetes.io/docs/concepts/policy/resource-quotas/).

```bash
kubectl create -f - <<EOF
apiVersion: management.cattle.io/v3
kind: Project
metadata:
  generateName: p-
  namespace: c-m-abcde
spec:
  clusterName: c-m-abcde
  displayName: myproject
  resourceQuota:
    limit:
      limitsCpu: 1000m
  namespaceDefaultResourceQuota:
    limit:
      limitsCpu: 50m
EOF
```
### Creating a Project With Container Limit Ranges

Refer to [Kubernetes Limit Ranges](https://kubernetes.io/docs/concepts/policy/limit-range/).

```bash
kubectl create -f - <<EOF
apiVersion: management.cattle.io/v3
kind: Project
metadata:
  generateName: p-
  namespace: c-m-abcde
spec:
  clusterName: c-m-abcde
  displayName: myproject
  containerDefaultResourceLimit:
    limitsCpu: 100m
    limitsMemory: 100Mi
    requestsCpu: 50m
    requestsMemory: 50Mi
EOF
```
### Backing Namespace

After the project is created, the `status.backingNamespace` field is populated. This is the namespace in the management cluster that is created to hold project-related resources. Examples of resources stored in the backing namespace are [project scoped secrets](../../how-to-guides/new-user-guides/kubernetes-resources-setup/secrets.md#creating-secrets-in-projects) and [project role template bindings](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#project-roles).
## Adding a Member to a Project

Look up the project's [backing namespace](#backing-namespace) to use as the `metadata.namespace` field value, and look up the project's ID to use as the `projectName` field value:

```bash
kubectl --namespace c-m-abcde get projects
```

Look up the role template ID to use as the `roleTemplateName` field value (e.g., `project-member` or `project-owner`):

```bash
kubectl get roletemplates
```

When adding a user member, specify the `userPrincipalName` field:

```bash
kubectl create -f - <<EOF
apiVersion: management.cattle.io/v3
kind: ProjectRoleTemplateBinding
metadata:
  generateName: prtb-
  namespace: c-m-abcde-p-vwxyz
projectName: c-m-abcde:p-vwxyz
roleTemplateName: project-member
userPrincipalName: keycloak_user://user
EOF
```

When adding a group member, specify the `groupPrincipalName` field instead:

```bash
kubectl create -f - <<EOF
apiVersion: management.cattle.io/v3
kind: ProjectRoleTemplateBinding
metadata:
  generateName: prtb-
  namespace: c-m-abcde-p-vwxyz
projectName: c-m-abcde:p-vwxyz
roleTemplateName: project-member
groupPrincipalName: keycloak_group://group
EOF
```

Create a `ProjectRoleTemplateBinding` for each role you want to assign to the project member.
## Listing Project Members

Look up the project's backing namespace:

```bash
kubectl --namespace c-m-abcde get projects
```

To list the `ProjectRoleTemplateBinding` resources in the project's backing namespace:

```bash
kubectl --namespace c-m-abcde-p-vwxyz get projectroletemplatebindings
```
## Deleting a Member From a Project

Look up the IDs of the `ProjectRoleTemplateBinding` resources containing the member in the project's namespace, as described in the [Listing Project Members](#listing-project-members) section.

Delete the `ProjectRoleTemplateBinding` resources from the project's namespace:

```bash
kubectl --namespace c-m-abcde-p-vwxyz delete projectroletemplatebindings prtb-qx874 prtb-7zw7s
```
## Creating a Namespace in a Project

The Project resource resides in the management cluster, even if the project is for a managed cluster. The namespaces under the project reside in the managed cluster.

On the management cluster, look up the project ID (if it was generated using `metadata.generateName`) for the cluster you are administering:

```bash
kubectl --namespace c-m-abcde get projects
```

On the managed cluster, create a namespace with a project annotation:

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: mynamespace
  annotations:
    field.cattle.io/projectId: c-m-abcde:p-vwxyz
EOF
```

Note the format, `<cluster ID>:<project ID>`.
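As a small sketch of that format, the annotation value can be assembled from the two IDs looked up above (the IDs shown are placeholders):

```shell
# Compose the field.cattle.io/projectId annotation value
# from the cluster ID and the project ID.
CLUSTER_ID="c-m-abcde"
PROJECT_ID="p-vwxyz"
echo "${CLUSTER_ID}:${PROJECT_ID}"  # prints c-m-abcde:p-vwxyz
```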
## Deleting a Project

Look up the project to delete in the cluster namespace:

```bash
kubectl --namespace c-m-abcde get projects
```

Delete the project under the cluster namespace:

```bash
kubectl --namespace c-m-abcde delete project p-vwxyz
```

Note that this command doesn't delete the namespaces and resources that formerly belonged to the project.

It does, however, delete all project role template bindings for the project, so recreating the project does not restore its members; you must add users as members again.
126 versioned_docs/version-2.14/api/workflows/tokens.md Normal file
@@ -0,0 +1,126 @@
---
title: Tokens
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/api/workflows/tokens"/>
</head>
## Token Resource

Rancher provides an imperative API resource, `tokens.ext.cattle.io`, that allows you to generate tokens for authenticating with Rancher. To confirm the resource is available, list the resources in its API group:

```sh
kubectl api-resources --api-group=ext.cattle.io
```

To get a description of the fields and structure of the Token resource, run:

```sh
kubectl explain tokens.ext.cattle.io
```
## Creating a Token

:::caution

The Token value is only returned once, in the `status.value` field.

:::

Since Rancher v2.13.0, the `status.bearerToken` field contains a fully formed, ready-to-use bearer token that can be used to authenticate with the [Rancher API](../v3-rancher-api-guide.md).

Only a **valid and active** Rancher user can create a Token. Otherwise, an error (`Error from server (Forbidden)...`) is displayed when you attempt to create one:
```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Token
EOF
Error from server (Forbidden): error when creating "STDIN": tokens.ext.cattle.io is forbidden: user system:admin is not a Rancher user
```

A Token is always created for the user making the request. Attempting to create a Token for a different user, by specifying a different `spec.userID`, is forbidden and will fail.

- The `spec.description` field can be set to an arbitrary human-readable description of the Token's purpose. The default value is empty.

- The `spec.kind` field can be set to the kind of Token. The value `session` indicates a login Token. All other values, including the default empty string, indicate a derived Token.

- The `metadata.name` and `metadata.generateName` fields are ignored; the name of the new Token is automatically generated with the prefix `token-`.
```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Token
spec:
  description: My Token
EOF
```
- If `spec.ttl` is not specified, the Token is created with the expiration time defined in the `auth-token-max-ttl-minutes` setting. The default expiration time is 90 days. If `spec.ttl` is specified, it must be greater than 0 and less than or equal to the value of the `auth-token-max-ttl-minutes` setting expressed in milliseconds.
```bash
kubectl create -o jsonpath='{.status.value}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: Token
spec:
  ttl: 7200000 # 2 hours
EOF
```
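Because `spec.ttl` is expressed in milliseconds, it is easy to get the unit wrong. A quick sanity check for the 2-hour value used above:

```shell
# 2 hours in milliseconds: hours * 60 min * 60 s * 1000 ms.
echo $((2 * 60 * 60 * 1000))  # prints 7200000
```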
## Listing Tokens

Listing previously generated Tokens can help you clean up tokens that are no longer needed (e.g., tokens that were issued temporarily). Admins can list all Tokens, while regular users can only see their own.

```sh
kubectl get tokens.ext.cattle.io
NAME          KIND   TTL   AGE
token-chjc9          90d   18s
token-6fzgj          90d   16s
token-8nbrm          90d   14s
```

Use `-o wide` to get more details:

```sh
kubectl get tokens.ext.cattle.io -o wide
NAME          USER         KIND   TTL   AGE   DESCRIPTION
token-chjc9   user-jtghh          90d   24s   example
token-6fzgj   user-jtghh          90d   22s   box
token-8nbrm   user-jtghh          90d   20s   jinx
```
## Viewing a Token

Admins can get any Token, while regular users can only get their own.

```sh
kubectl get tokens.ext.cattle.io token-chjc9
NAME          KIND   TTL   AGE
token-chjc9          90d   18s
```

Use `-o wide` to get more details:

```sh
kubectl get tokens.ext.cattle.io token-chjc9 -o wide
NAME          USER         KIND   TTL   AGE   DESCRIPTION
token-chjc9   user-jtghh          90d   24s   example
```
## Deleting a Token

Admins can delete any Token, while regular users can only delete their own.

```sh
kubectl delete tokens.ext.cattle.io token-chjc9
token.ext.cattle.io "token-chjc9" deleted
```
## Updating a Token

Only the `spec.description`, `spec.ttl`, and `spec.enabled` fields can be updated. All other `spec` fields are immutable. Admins can extend the `spec.ttl` value, while regular users can only reduce it.

An example `kubectl` command to edit a Token:

```sh
kubectl edit tokens.ext.cattle.io token-zp786
```
187 versioned_docs/version-2.14/api/workflows/users.md Normal file
@@ -0,0 +1,187 @@
---
title: Users
---

## User Resource

The `User` resource (`users.management.cattle.io`) represents a user account in Rancher.

To get a description of the fields and structure of the `User` resource, run:

```sh
kubectl explain users.management.cattle.io
```
## Creating a User

Creating a local user is a two-step process: you must create the `User` resource, then provide a password via a Kubernetes `Secret`.

Only a user with sufficient permissions can create a `User` resource.

```bash
kubectl create -f -<<EOF
apiVersion: management.cattle.io/v3
kind: User
metadata:
  name: testuser
displayName: "Test User"
username: "testuser"
EOF
```
The user's password must be provided in a `Secret` object in the `cattle-local-user-passwords` namespace. The Rancher webhook automatically hashes the password and updates the `Secret`.

:::important

The `Secret` must have the same name as the `metadata.name` (and `username`) of the `User` resource.

:::

```bash
kubectl create -f -<<EOF
apiVersion: v1
kind: Secret
metadata:
  name: testuser
  namespace: cattle-local-user-passwords
type: Opaque
stringData:
  password: Pass1234567!
EOF
```
After the plaintext password is submitted, the Rancher webhook automatically hashes it, replacing the content of the `Secret` and ensuring that the plaintext password is never stored:

```yaml
apiVersion: v1
data:
  password: 1c1Y4CdjlehGWFz26F414x2qoj4gch5L5OXsx35MAa8=
  salt: m8Co+CfMDo5XwVl0FqYzGcRIOTgRrwFSqW8yurh5DcE=
kind: Secret
metadata:
  annotations:
    cattle.io/password-hash: pbkdf2sha3512
  name: testuser
  namespace: cattle-local-user-passwords
  ownerReferences:
  - apiVersion: management.cattle.io/v3
    kind: User
    name: testuser
    uid: 663ffb4f-8178-46c8-85a3-337f4d5cbc2e
  uid: bade9f0a-b06f-4a77-9a39-4284dc2349c5
type: Opaque
```
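As a quick check (assuming a Linux workstation with `base64` from GNU coreutils), the stored `password` value decodes to a fixed-length binary digest rather than the original plaintext, here a 32-byte hash:

```shell
# The hashed password is base64-encoded binary, not the plaintext.
echo '1c1Y4CdjlehGWFz26F414x2qoj4gch5L5OXsx35MAa8=' | base64 -d | wc -c  # prints 32
```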
## Updating a User's Password

To change a user's password, use the `PasswordChangeRequest` resource, which handles secure password updates.

```bash
kubectl create -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: PasswordChangeRequest
spec:
  userID: "testuser"
  currentPassword: "Pass1234567!"
  newPassword: "NewPass1234567!"
EOF
```
## Listing Users

List all `User` resources in the cluster:

```sh
kubectl get users
NAME         AGE
testuser     3m54s
user-4n5ws   12m
```
## Viewing a User

View a specific `User` resource by name:

```sh
kubectl get user testuser
NAME       AGE
testuser   3m54s
```
## Deleting a User

Deleting a user automatically deletes the corresponding password `Secret`.

```sh
kubectl delete user testuser
user.management.cattle.io "testuser" deleted
```
## Get a Current User's Information

A client uses the `SelfUser` resource to retrieve information about the currently authenticated user without knowing their ID. The user ID is returned in the `.status.userID` field.

```bash
kubectl create -o jsonpath='{.status.userID}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: SelfUser
EOF

testuser
```
## Refreshing a User's Group Membership

Updates to user group memberships are triggered by the `GroupMembershipRefreshRequest` resource.

:::note

Group membership is only supported for external authentication providers.

:::
### For a Single User

```bash
kubectl create -o jsonpath='{.status}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: GroupMembershipRefreshRequest
spec:
  userId: testuser
EOF

{
  "conditions": [
    {
      "lastTransitionTime": "2025-11-10T12:01:03Z",
      "message": "",
      "reason": "",
      "status": "True",
      "type": "UserRefreshInitiated"
    }
  ],
  "summary": "Completed"
}
```
### For All Users

```bash
kubectl create -o jsonpath='{.status}' -f -<<EOF
apiVersion: ext.cattle.io/v1
kind: GroupMembershipRefreshRequest
spec:
  userId: "*"
EOF

{
  "conditions": [
    {
      "lastTransitionTime": "2025-11-10T12:01:59Z",
      "message": "",
      "reason": "",
      "status": "True",
      "type": "UserRefreshInitiated"
    }
  ],
  "summary": "Completed"
}
```
117 versioned_docs/version-2.14/contribute-to-rancher.md Normal file
@@ -0,0 +1,117 @@
---
title: Contributing to Rancher
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/contribute-to-rancher"/>
</head>

Learn about the repositories used for Rancher and Rancher docs, how to build Rancher repositories, and what information to include when you file an issue.

For more detailed information on how to contribute to the development of Rancher projects, refer to the [Rancher Developer Wiki](https://github.com/rancher/rancher/wiki). The wiki has resources on many topics, including the following:

- How to set up the Rancher development environment and run tests
- The typical flow of an issue through the development lifecycle
- Coding guidelines and development best practices
- Debugging and troubleshooting
- Developing the Rancher API

On the Rancher Users Slack, the channel for developers is **#developer**.
## Rancher Docs

If you have suggestions for the documentation on this website, [open](https://github.com/rancher/rancher-docs/issues/new/choose) an issue in the main [Rancher docs](https://github.com/rancher/rancher-docs) repository. This repo contains documentation for Rancher v2.0 and later.

See the [Rancher docs README](https://github.com/rancher/rancher-docs#readme) for more details on contributing to and building the Rancher v2.x docs repo.

For documentation describing Rancher v1.6 and earlier, see the [Rancher 1.x docs](https://github.com/rancher/rancher.github.io) repo, which contains source files for https://rancher.com/docs/rancher/v1.6/en/.
## Rancher Repositories

All repositories are located within our main GitHub organization. There are many repositories used for Rancher; the main ones are described below.

Repository | URL | Description
-----------|-----|-------------
Rancher | https://github.com/rancher/rancher | The main source code for Rancher 2.x.
Types | https://github.com/rancher/types | All the API types for Rancher 2.x.
API Framework | https://github.com/rancher/norman | An API framework for building Rancher-style APIs backed by Kubernetes Custom Resources.
User Interface | https://github.com/rancher/dashboard/ | The source of the Dashboard UI.
(Rancher) Docker Machine | https://github.com/rancher/machine | The source of the Docker Machine binary used with Node Drivers. A fork of the `docker/machine` repository.
machine-package | https://github.com/rancher/machine-package | Used to build the Rancher Docker Machine binary.
kontainer-engine | https://github.com/rancher/kontainer-engine | The source of kontainer-engine, the tool to provision hosted Kubernetes clusters.
CLI | https://github.com/rancher/cli | The source code for the Rancher CLI used in Rancher 2.x.
(Rancher) Helm repository | https://github.com/rancher/helm | The source of the packaged Helm binary. A fork of the `helm/helm` repository.
loglevel repository | https://github.com/rancher/loglevel | The source of the loglevel binary, used to dynamically change log levels.

To see all libraries/projects used in Rancher, see the [`go.mod` file](https://github.com/rancher/rancher/blob/master/go.mod) in the `rancher/rancher` repository.

<br/>
<sup>Rancher components used for provisioning/managing Kubernetes clusters.</sup>
### Building Rancher Repositories

Every repository has a Makefile and can be built using the `make` command. The `make` targets are based on the scripts in the `/scripts` directory of the repository, and each target uses [Dapper](https://github.com/rancher/dapper) to run in an isolated environment. The `Dockerfile.dapper` file is used for this process and includes all the necessary build tooling.

The default target is `ci`, which runs `./scripts/validate`, `./scripts/build`, `./scripts/test`, and `./scripts/package`. The resulting binaries are placed in `./build/bin` and are usually also packaged in a Docker image.
### Rancher Bugs, Issues or Questions

If you find any bugs or are having any trouble, please search the [reported issues](https://github.com/rancher/rancher/issues) first, as someone may have experienced the same issue or we may already be working on a solution.

If you can't find anything related to your issue, contact us by [filing an issue](https://github.com/rancher/rancher/issues/new). Though we have many repositories related to Rancher, we want bugs filed in the main Rancher repository so we won't miss them! If you want to ask a question or ask fellow users about a use case, we suggest creating a post on the [Rancher Forums](https://forums.rancher.com).
#### Checklist for Filing Issues

Please follow this checklist when filing an issue; it helps us investigate and fix the issue. More info means more data we can use to determine the root cause or what might be related to the issue.

:::note

For large amounts of data, please use [GitHub Gist](https://gist.github.com/) or similar and link the created resource in the issue.

:::

:::note Important:

Please remove any sensitive data as it will be publicly viewable.

:::
- **Resources:** Provide as much detail as possible on the resources used. As the source of the issue can be many things, including as much detail as possible helps determine the root cause. Some examples:
  - **Hosts:** What specifications does the host have (CPU/memory/disk)? What cloud does it happen on? What Amazon Machine Image or DigitalOcean droplet are you using? What image are you provisioning that we can rebuild or use when we try to reproduce?
  - **Operating System:** What operating system are you using? Specifics help here, like the output of `cat /etc/os-release` for the exact OS release and `uname -r` for the exact kernel used.
  - **Docker:** What Docker version are you using, and how did you install it? Most of the details of Docker can be found by supplying the output of `docker version` and `docker info`.
  - **Environment:** Are you in a proxy environment? Are you using recognized CA or self-signed certificates? Are you using an external load balancer?
  - **Rancher:** What version of Rancher are you using? This can be found on the bottom left of the UI or retrieved from the image tag you are running on the host.
  - **Clusters:** What kind of cluster did you create, how did you create it, and what did you specify when creating it?
- **Steps to reproduce the issue:** Provide as much detail as possible on how you got into the reported situation, so we can reproduce it.
  - Provide manual steps or automation scripts used to get from a newly created setup to the situation you reported.
- **Logs:** Provide data/logs from the used resources.
  - Rancher
    - Docker install

      ```
      docker logs \
      --timestamps \
      $(docker ps | grep -E "rancher/rancher:|rancher/rancher " | awk '{ print $1 }')
      ```
    - Kubernetes install using `kubectl`

      :::note

      Make sure you configured the correct kubeconfig (for example, `export KUBECONFIG=$PWD/kube_config_cluster.yml` if Rancher is installed on a Kubernetes cluster) or are using the embedded kubectl via the UI.

      :::

      ```
      kubectl -n cattle-system \
      logs \
      -l app=rancher \
      --timestamps=true
      ```
  - System logging (these might not all exist, depending on the operating system)
    - `/var/log/messages`
    - `/var/log/syslog`
    - `/var/log/kern.log`
  - Docker daemon logging (these might not all exist, depending on the operating system)
    - `/var/log/docker.log`
- **Metrics:** If you are experiencing performance issues, provide as much metrics data (files or screenshots) as possible to help determine what is going on. If the issue is related to a machine, supplying the output of `top`, `free -m`, and `df` (showing process, memory, and disk usage) helps.
@@ -0,0 +1,179 @@

---
title: Container Network Interface (CNI) Providers
description: Learn about Container Network Interface (CNI), the CNI providers Rancher provides, the features they offer, and how to choose a provider for you
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/container-network-interface-providers"/>
</head>

## What is CNI?

CNI (Container Network Interface), a [Cloud Native Computing Foundation project](https://cncf.io/), consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of plugins. CNI concerns itself only with the network connectivity of containers and with removing allocated resources when a container is deleted.

Kubernetes uses CNI as an interface between network providers and Kubernetes pod networking.



For more information, visit the [CNI GitHub project](https://github.com/containernetworking/cni).
## What Network Models are Used in CNI?

CNI network providers implement their network fabric using either an encapsulated network model such as Virtual Extensible LAN ([VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan)) or an unencapsulated network model such as Border Gateway Protocol ([BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol)).

### What is an Encapsulated Network?

This network model provides a logical Layer 2 (L2) network, encapsulated over the existing Layer 3 (L3) network topology that spans the Kubernetes cluster nodes. With this model you get an isolated L2 network for containers without needing routing distribution, at the cost of minimal processing overhead and increased IP packet size, which comes from the IP header added by overlay encapsulation. Encapsulation information is distributed over UDP ports between Kubernetes workers, interchanging network control plane information about how MAC addresses can be reached. Common encapsulations used in this kind of network model are VXLAN, Internet Protocol Security (IPsec), and IP-in-IP.

In simple terms, this network model generates a kind of network bridge extended between Kubernetes workers, where pods are connected.

This network model is used when an extended L2 bridge is preferred. It is sensitive to the L3 network latencies of the Kubernetes workers. If datacenters are in distinct geolocations, be sure to have low latencies between them to avoid eventual network segmentation.

CNI network providers using this network model include Flannel, Canal, Weave, and Cilium. By default, Calico does not use this model, but it can be configured to do so.


### What is an Unencapsulated Network?
|
||||
|
||||
This network model provides an L3 network to route packets between containers. This model doesn't generate an isolated l2 network, nor generates overhead. These benefits come at the cost of Kubernetes workers having to manage any route distribution that's needed. Instead of using IP headers for encapsulation, this network model uses a network protocol between Kubernetes workers to distribute routing information to reach pods, such as [BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol).
|
||||
|
||||
In simple terms, this network model generates a kind of network router extended between Kubernetes workers, which provides information about how to reach pods.
|
||||
|
||||
This network model is used when a routed L3 network is preferred. This mode dynamically updates routes at the OS level for Kubernetes workers. It's less sensitive to latency.
|
||||
|
||||
CNI network providers using this network model include Calico and Cilium. Cilium may be configured with this model although it is not the default mode.
|
||||
|
||||


## What CNI Providers are Provided by Rancher?

### RKE2 Kubernetes clusters

Out-of-the-box, Rancher provides the following CNI network providers for RKE2 Kubernetes clusters: Calico, Canal, Cilium, and Flannel.

You can choose your CNI network provider when you create new Kubernetes clusters from Rancher.

#### Calico



Calico enables networking and network policy in Kubernetes clusters across the cloud. By default, Calico uses a pure, unencapsulated IP network fabric and policy engine to provide networking for your Kubernetes workloads. Workloads are able to communicate over both cloud infrastructure and on-prem using BGP.

Calico also provides a stateless IP-in-IP or VXLAN encapsulation mode that can be used if necessary, as well as policy isolation, allowing you to secure and govern your Kubernetes workloads using advanced ingress and egress policies.

Kubernetes workers should open TCP port `179` if using BGP or UDP port `4789` if using VXLAN encapsulation. In addition, TCP port `5473` is needed when using Typha. See [the port requirements for user clusters](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md#networking-requirements) for more details.

:::note Important:

In Rancher v2.6.3, Calico probes fail on Windows nodes upon RKE2 installation. <b>Note that this issue is resolved in v2.6.4.</b>

- To work around this issue, first navigate to `https://<rancherserverurl>/v3/settings/windows-rke2-install-script`.
- There, change the current setting, `https://raw.githubusercontent.com/rancher/wins/v0.1.3/install.ps1`, to this new setting: `https://raw.githubusercontent.com/rancher/rke2/master/windows/rke2-install.ps1`.

:::



For more information, see the following pages:

- [Project Calico Official Site](https://www.projectcalico.org/)
- [Project Calico GitHub Page](https://github.com/projectcalico/calico)

#### Canal



Canal is a CNI network provider that gives you the best of Flannel and Calico. It lets you deploy Calico and Flannel together as a unified networking solution, combining Calico's network policy enforcement with the rich set of Calico (unencapsulated) and/or Flannel (encapsulated) network connectivity options.

In Rancher, Canal is the default CNI network provider, combining Flannel with VXLAN encapsulation.

Kubernetes workers should open UDP port `8472` (VXLAN) and TCP port `9099` (health checks). If using WireGuard, you should also open UDP ports `51820` and `51821`. For more details, refer to [the port requirements for user clusters](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md).



For more information, refer to the [Rancher maintained Canal source](https://github.com/rancher/rke2-charts/tree/main-source/packages/rke2-canal) and the [Canal GitHub Page](https://github.com/projectcalico/canal).

#### Cilium



Cilium enables networking and network policies (L3, L4, and L7) in Kubernetes. By default, Cilium uses eBPF technologies to route packets inside the node and VXLAN to send packets to other nodes. Unencapsulated techniques can also be configured.

Cilium recommends kernel versions greater than 5.2 to leverage the full potential of eBPF. Kubernetes workers should open UDP port `8472` for VXLAN and TCP port `4240` for health checks. In addition, ICMP 8/0 must be enabled for health checks. For more information, check [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements).
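As a quick sanity check (a sketch, assuming a Linux host), you can compare the running kernel version against the 5.2 recommendation using `uname` and a version-aware sort:

```shell
# Compare the running kernel against the version Cilium recommends for full eBPF support.
required="5.2"
current="$(uname -r | cut -d- -f1)"   # strip distro suffixes such as "-generic"
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "kernel $current: full eBPF feature set should be available"
else
  echo "kernel $current: older than $required, some eBPF features may be unavailable"
fi
```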

##### Ingress Routing Across Nodes in Cilium

<br/>

By default, Cilium does not allow pods to contact pods on other nodes. To work around this, enable the ingress controller to route requests across nodes with a `CiliumNetworkPolicy`.

After selecting the Cilium CNI and enabling Project Network Isolation for your new cluster, configure it as follows:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: hn-nodes
  namespace: default
spec:
  endpointSelector: {}
  ingress:
    - fromEntities:
        - remote-node
```
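For example (a sketch; the file name `hn-nodes.yaml` is arbitrary, and applying it assumes `kubectl` access to the Cilium-enabled cluster), you could save the policy to a file and apply it:

```shell
# Write the CiliumNetworkPolicy shown above to a local file.
cat > hn-nodes.yaml <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: hn-nodes
  namespace: default
spec:
  endpointSelector: {}
  ingress:
    - fromEntities:
        - remote-node
EOF
# Apply it against your cluster (requires kubectl configured for that cluster):
# kubectl apply -f hn-nodes.yaml
```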

#### Flannel



Flannel is a simple and easy way to configure an L3 network fabric designed for Kubernetes. Flannel runs a single binary agent named `flanneld` on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms, with the default encapsulation being [VXLAN](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#vxlan).

Encapsulated traffic is unencrypted by default. Flannel provides two solutions for encryption:

* [IPSec](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#ipsec), which makes use of [strongSwan](https://www.strongswan.org/) to establish encrypted IPSec tunnels between Kubernetes workers. It is an experimental backend for encryption.
* [WireGuard](https://github.com/flannel-io/flannel/blob/master/Documentation/backends.md#wireguard), which is a faster-performing alternative to strongSwan.

Kubernetes workers should open UDP port `8472` (VXLAN). See [the port requirements for user clusters](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md#networking-requirements) for more details.



For more information, see the [Flannel GitHub Page](https://github.com/flannel-io/flannel).

## CNI Features by Provider

The following table summarizes the different features available for each CNI network provider provided by Rancher.

| Provider | Network Model | Route Distribution | Network Policies | Mesh | External Datastore | Encryption | Ingress/Egress Policies |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Canal | Encapsulated (VXLAN) | No | Yes | No | K8s API | Yes | Yes |
| Flannel | Encapsulated (VXLAN) | No | No | No | K8s API | Yes | No |
| Calico | Encapsulated (VXLAN, IPIP) or Unencapsulated | Yes | Yes | Yes | Etcd and K8s API | Yes | Yes |
| Weave | Encapsulated | Yes | Yes | Yes | No | Yes | Yes |
| Cilium | Encapsulated (VXLAN) | Yes | Yes | Yes | Etcd and K8s API | Yes | Yes |

- Network Model: Encapsulated or unencapsulated. For more information, see [What Network Models are Used in CNI?](#what-network-models-are-used-in-cni)

- Route Distribution: The exchange of routing and reachability information, typically done with an exterior gateway protocol such as BGP. BGP can also assist with pod-to-pod networking between clusters. This feature is a must for unencapsulated CNI network providers. If you plan to build clusters split across network segments, route distribution is nice to have.

- Network Policies: Kubernetes offers functionality to enforce rules about which services can communicate with each other using network policies. This feature is stable as of Kubernetes v1.7 and is ready to use with certain networking plugins.

- Mesh: This feature allows service-to-service networking communication between distinct Kubernetes clusters.

- External Datastore: CNI network providers with this feature need an external datastore for their data.

- Encryption: This feature allows encrypted and secure network control and data planes.

- Ingress/Egress Policies: This feature allows you to manage routing control for both Kubernetes and non-Kubernetes communications.

## CNI Community Popularity

<CNIPopularityTable />

## Which CNI Provider Should I Use?

It depends on your project needs. There are many different providers, each with various features and options. There isn't one provider that meets everyone's needs.

Canal is the default CNI network provider. We recommend it for most use cases. It provides encapsulated networking for containers with Flannel, while adding Calico network policies that can provide project/namespace isolation in terms of networking.

## How can I configure a CNI network provider?

Please see [Cluster Options](../reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md) on how to configure a network provider for your cluster. For more advanced configuration options, please see how to configure your cluster using a [Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md#cluster-config-file-reference).

versioned_docs/version-2.14/faq/deprecated-features.md
@@ -0,0 +1,22 @@

---
title: Deprecated Features in Rancher
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/deprecated-features"/>
</head>

## Where can I find out which features have been deprecated in Rancher?

Rancher publishes deprecated features as part of the [release notes](https://github.com/rancher/rancher/releases) for Rancher found on GitHub. Please consult the following patch releases for deprecated features:

| Patch Version | Release Date |
|---------------|---------------|
| [2.13.3](https://github.com/rancher/rancher/releases/tag/v2.13.3) | February 25, 2026 |
| [2.13.2](https://github.com/rancher/rancher/releases/tag/v2.13.2) | January 29, 2026 |
| [2.13.1](https://github.com/rancher/rancher/releases/tag/v2.13.1) | December 18, 2025 |
| [2.13.0](https://github.com/rancher/rancher/releases/tag/v2.13.0) | November 25, 2025 |

## What can I expect when a feature is marked for deprecation?

In the release where functionality is marked as "Deprecated", it is still available and supported, allowing upgrades to follow the usual procedure. Once upgraded, users and admins should start planning to move away from the deprecated functionality before upgrading to the release in which it is marked as removed. The recommendation for new deployments is not to use the deprecated feature.

versioned_docs/version-2.14/faq/general-faq.md
@@ -0,0 +1,47 @@

---
title: General FAQ
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/general-faq"/>
</head>

This FAQ is a work in progress designed to answer the questions most frequently asked about Rancher v2.x.

See the [Technical FAQ](technical-items.md) for frequently asked technical questions.

## Is it possible to manage Azure Kubernetes Services with Rancher v2.x?

Yes. See our [Cluster Administration](../how-to-guides/new-user-guides/manage-clusters/manage-clusters.md) guide for what Rancher features are available on AKS, as well as our [documentation on AKS](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-aks.md).

## Does Rancher support Windows?

Yes. Rancher supports Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/use-windows-clusters.md)

## Does Rancher support Istio?

Yes. Rancher supports [Istio](../integrations-in-rancher/istio/istio.md).

## Will Rancher v2.x support Hashicorp's Vault for storing secrets?

As of Rancher v2.9, Rancher [supports authentication with service account tokens](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/jwt-authentication.md), which is used by Vault and other integrations.

## Does Rancher v2.x support RKT containers as well?

At this time, we only support Docker.

## Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and registered Kubernetes?

Out-of-the-box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico, and Weave. Always refer to the [Rancher Support Matrix](https://rancher.com/support-maintenance-terms/) for details about what is officially supported.

## Are you planning on supporting Traefik for existing setups?

We don't currently plan on providing embedded Traefik support, but we're still exploring load-balancing approaches.

## Can I import OpenShift Kubernetes clusters into v2.x?

Our goal is to run any Kubernetes cluster. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet.

## Is Longhorn integrated with Rancher?

Yes. Longhorn is integrated with Rancher v2.5 and later.

@@ -0,0 +1,27 @@

---
title: Installing and Configuring kubectl
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/install-and-configure-kubectl"/>
</head>

`kubectl` is a CLI utility for running commands against Kubernetes clusters. It's required for many maintenance and administrative tasks in Rancher 2.x.

## Installation

See [kubectl Installation](https://kubernetes.io/docs/tasks/tools/install-kubectl/) for installation on your operating system.

## Configuration

When you create a Kubernetes cluster with RKE2/K3s, the kubeconfig file is stored at `/etc/rancher/rke2/rke2.yaml` or `/etc/rancher/k3s/k3s.yaml`, depending on your chosen distribution. These files are used to configure access to the Kubernetes cluster.
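For example, on an RKE2 server node you might point `kubectl` at that file by exporting `KUBECONFIG` (a sketch; substitute the K3s path if applicable, and note the file is typically root-readable only):

```shell
# Point kubectl at the RKE2-generated kubeconfig for this node.
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
# kubectl now reads this file for cluster access, e.g.:
# kubectl get nodes
```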

Test your connectivity with `kubectl` and see if you can get the list of nodes back.

```shell
kubectl get nodes
NAME              STATUS    ROLES                      AGE       VERSION
165.227.114.63    Ready     controlplane,etcd,worker   11m       v1.10.1
165.227.116.167   Ready     controlplane,etcd,worker   11m       v1.10.1
165.227.127.226   Ready     controlplane,etcd,worker   11m       v1.10.1
```

@@ -0,0 +1,65 @@

---
title: Rancher is No Longer Needed
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/rancher-is-no-longer-needed"/>
</head>

This page is intended to answer questions about what happens if you don't want Rancher anymore, if you don't want a cluster to be managed by Rancher anymore, or if the Rancher server is deleted.

## If the Rancher server is deleted, what happens to the workloads in my downstream clusters?

If Rancher is ever deleted or unrecoverable, all workloads in the downstream Kubernetes clusters managed by Rancher will continue to function as normal.

## If the Rancher server is deleted, how do I access my downstream clusters?

The capability to access a downstream cluster without Rancher depends on the type of cluster and the way that the cluster was created. To summarize:

- **Registered/Imported clusters:** The cluster will be unaffected, and you can access the cluster using the same methods that you did before the cluster was registered into Rancher.
- **Hosted Kubernetes clusters:** If you created the cluster in a cloud-hosted Kubernetes provider such as EKS, GKE, or AKS, you can continue to manage the cluster using your provider's cloud credentials.
- **Rancher provisioned clusters:** To access an [RKE2/K3s cluster,](../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) the cluster must have the [authorized cluster endpoint](../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) enabled, and you must have already downloaded the cluster's kubeconfig file from the Rancher UI. With this endpoint, you can access your cluster with kubectl directly instead of communicating through the Rancher server's [authentication proxy.](../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#1-the-authentication-proxy) For instructions on how to configure kubectl to use the authorized cluster endpoint, refer to the section about directly accessing clusters with [kubectl and the kubeconfig file.](../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) These clusters will use a snapshot of the authentication as it was configured when Rancher was removed.

## What if I don't want Rancher anymore?

:::note

The previously recommended [System Tools](../reference-guides/system-tools.md) utility has been deprecated since June 2022.

:::

If you [installed Rancher on a Kubernetes cluster,](../getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md) remove Rancher by using the [Rancher Cleanup](https://github.com/rancher/rancher-cleanup) tool.

Uninstalling Rancher in high-availability (HA) mode will also remove all `helm-operation-*` pods and the following apps:

- fleet
- fleet-agent
- rancher-operator
- rancher-webhook

Custom resources (CRDs) and custom namespaces will still need to be removed manually.

If you installed Rancher with Docker, you can uninstall Rancher by removing the single Docker container that it runs in.

Imported clusters will not be affected by Rancher being removed. For other types of clusters, refer to the section on [accessing downstream clusters when Rancher is removed.](#if-the-rancher-server-is-deleted-how-do-i-access-my-downstream-clusters)

## What if I don't want my registered cluster managed by Rancher?

If a registered cluster is deleted from the Rancher UI, the cluster is detached from Rancher, leaving it intact and accessible by the same methods that were used to access it before it was registered in Rancher.

To detach the cluster:

1. In the upper left corner, click **☰ > Cluster Management**.
2. Go to the registered cluster that should be detached from Rancher and click **⋮ > Delete**.
3. Click **Delete**.

**Result:** The registered cluster is detached from Rancher and functions normally outside of Rancher.

## What if I don't want my hosted Kubernetes cluster managed by Rancher?

At this time, there is no functionality to detach these clusters from Rancher. In this context, "detach" is defined as the ability to remove Rancher components from the cluster and manage access to the cluster independently of Rancher.

The capability to manage these clusters without Rancher is being tracked in [this issue.](https://github.com/rancher/rancher/issues/25234)

For information about how to access clusters if the Rancher server is deleted, refer to [this section.](#if-the-rancher-server-is-deleted-how-do-i-access-my-downstream-clusters)

versioned_docs/version-2.14/faq/security.md
@@ -0,0 +1,21 @@

---
title: Security FAQ
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/security"/>
</head>

## Is there a Hardening Guide?

The Hardening Guide is located in the main [Security](../reference-guides/rancher-security/rancher-security.md) section.

## Have hardened Rancher Kubernetes clusters been evaluated by the CIS Kubernetes Benchmark? Where can I find the results?

We have run the CIS Kubernetes Benchmark against a hardened Rancher Kubernetes cluster. The results of that assessment can be found in the main [Security](../reference-guides/rancher-security/rancher-security.md) section.

## How does Rancher verify communication with downstream clusters, and what are some associated security concerns?

Communication between the Rancher server and downstream clusters is performed through agents. Rancher uses either a registered certificate authority (CA) bundle or the local trust store to verify communication between Rancher agents and the Rancher server. Using a CA bundle for verification is more strict, as only certificates based on that bundle are trusted. If TLS verification for an explicit CA bundle fails, Rancher may fall back to using the local trust store for verifying future communication. Any CA within the local trust store can then be used to generate a valid certificate.

As described in [Rancher Security Update CVE-2024-22030](https://www.suse.com/c/rancher-security-update/), under a narrow set of circumstances, malicious actors can take over Rancher nodes by exploiting the behavior of Rancher CAs. For the attack to succeed, the malicious actor must generate a valid certificate from either a valid CA in the targeted Rancher server, or from a valid registered CA. The attacker also needs to either hijack or spoof the Rancher server-url as a preliminary step. Rancher is currently evaluating Rancher CA behavior to mitigate against this and any similar avenues of attack.

versioned_docs/version-2.14/faq/technical-items.md
@@ -0,0 +1,184 @@

---
title: Technical FAQ
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/faq/technical-items"/>
</head>

## How can I reset the administrator password?

Docker install:

```shell
$ docker exec -ti <container_id> reset-password
New password for default administrator (user-xxxxx):
<new_password>
```

Kubernetes install (Helm):

```shell
$ KUBECONFIG=./kube_config_cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher --no-headers | head -1 | awk '{ print $1 }') -c rancher -- reset-password
New password for default administrator (user-xxxxx):
<new_password>
```

## I deleted/deactivated the last admin, how can I fix it?

Docker install:

```shell
$ docker exec -ti <container_id> ensure-default-admin
New default administrator (user-xxxxx)
New password for default administrator (user-xxxxx):
<new_password>
```

Kubernetes install (Helm):

```shell
$ KUBECONFIG=./kube_config_cluster.yml
$ kubectl --kubeconfig $KUBECONFIG -n cattle-system exec $(kubectl --kubeconfig $KUBECONFIG -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- ensure-default-admin
New password for default administrator (user-xxxxx):
<new_password>
```

## How can I enable debug logging?

See [Troubleshooting: Logging](../troubleshooting/other-troubleshooting-tips/logging.md).

## My ClusterIP does not respond to ping

A ClusterIP is a virtual IP, which will not respond to ping. The best way to test whether the ClusterIP is configured correctly is to use `curl` to access the IP and port and see if it responds.

## Where can I manage Node Templates?

Node Templates can be accessed by opening your account menu (top right) and selecting `Node Templates`.

## Why is my Layer-4 Load Balancer in `Pending` state?

The Layer-4 Load Balancer is created as `type: LoadBalancer`. In Kubernetes, this requires a cloud provider or controller that can satisfy these requests; otherwise, they will remain in `Pending` state forever. More information can be found on [Cloud Providers](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-cloud-providers/set-up-cloud-providers.md) or [Create External Load Balancer](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/).

## Where is the state of Rancher stored?

- Docker install: in the embedded etcd of the `rancher/rancher` container, located at `/var/lib/rancher`.
- Kubernetes install: the default location is in the `/var/lib/rancher/rke2` or `/var/lib/rancher/k3s` directories of the respective RKE2/K3s cluster created to run Rancher.

## How are the supported Docker versions determined?

We follow the validated Docker versions for upstream Kubernetes releases. The validated versions can be found under [External Dependencies](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md#external-dependencies) in the Kubernetes release CHANGELOG.md.

## How can I access nodes created by Rancher?

SSH keys to access the nodes created by Rancher can be downloaded via the **Nodes** view. Choose the node which you want to access, click on the vertical ⋮ button at the end of the row, and choose **Download Keys** as shown in the picture below.



Unzip the downloaded zip file, and use the file `id_rsa` to connect to your host. Be sure to use the correct username (`rancher` or `docker` for RancherOS, `ubuntu` for Ubuntu, `ec2-user` for Amazon Linux).

```shell
$ ssh -i id_rsa user@ip_of_node
```

## How can I automate task X in Rancher?

The UI consists of static files and works based on responses from the API. That means every action or task that you can execute in the UI can be automated via the API. There are two ways to do this:

* Visit `https://your_rancher_ip/v3` and browse the API options.
* Capture the API calls when using the UI. (The most commonly used tool for this is [Chrome Developer Tools](https://developers.google.com/web/tools/chrome-devtools/#network), but you can use anything you like.)

## The IP address of a node changed, how can I recover?

A node is required to have a static IP configured (or a reserved IP via DHCP). If the IP of a node has changed, you will have to remove it from the cluster and add it again. After it is removed, Rancher will update the cluster to the correct state. If the cluster is no longer in `Provisioning` state, the node is removed from the cluster.

When the IP address of the node changes, Rancher loses its connection to the node, so it will be unable to clean the node properly. See [Cleaning cluster nodes](../how-to-guides/new-user-guides/manage-clusters/clean-cluster-nodes.md) to clean the node.

When the node has been removed from the cluster and cleaned, you can add it to the cluster again.

## How can I add more arguments/binds/environment variables to Kubernetes components in a Rancher Launched Kubernetes cluster?

You can add more arguments/binds/environment variables via the respective [RKE2 Config File](../reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md#cluster-configuration) or [K3s Config File](../reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md#cluster-configuration).

## How do I check if my certificate chain is valid?

Use the `openssl verify` command to validate your certificate chain:

:::tip

Configure `SSL_CERT_DIR` and `SSL_CERT_FILE` to a dummy location to make sure the OS-installed certificates are not used when verifying manually.

:::

```shell
SSL_CERT_DIR=/dummy SSL_CERT_FILE=/dummy openssl verify -CAfile ca.pem rancher.yourdomain.com.pem
rancher.yourdomain.com.pem: OK
```

If you receive the error `unable to get local issuer certificate`, the chain is incomplete. This usually means that an intermediate CA certificate issued your server certificate. If you already have this certificate, you can include it in the verification as shown below:

```shell
SSL_CERT_DIR=/dummy SSL_CERT_FILE=/dummy openssl verify -CAfile ca.pem -untrusted intermediate.pem rancher.yourdomain.com.pem
rancher.yourdomain.com.pem: OK
```

If you have successfully verified your certificate chain, you should include the needed intermediate CA certificates in the server certificate to complete the certificate chain for any connection made to Rancher (for example, by the Rancher agent). The order of the certificates in the server certificate file should be the server certificate itself first (the contents of `rancher.yourdomain.com.pem`), followed by the intermediate CA certificate(s) (the contents of `intermediate.pem`).
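The whole flow can be reproduced end-to-end with throwaway keys (a self-contained sketch using a hypothetical root, intermediate, and server certificate; nothing here touches your real certificates):

```shell
workdir=$(mktemp -d) && cd "$workdir"
# Root CA (self-signed)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
  -subj "/CN=Demo Root CA" -days 1
# Intermediate CA, signed by the root (CA:TRUE so it can issue certificates)
printf 'basicConstraints=CA:TRUE\n' > int.ext
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=Demo Intermediate CA"
openssl x509 -req -in int.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -extfile int.ext -out intermediate.pem -days 1
# Server certificate, signed by the intermediate
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=rancher.yourdomain.com"
openssl x509 -req -in srv.csr -CA intermediate.pem -CAkey int.key \
  -CAcreateserial -out rancher.yourdomain.com.pem -days 1
# Chain order: server certificate first, then the intermediate that issued it
cat rancher.yourdomain.com.pem intermediate.pem > fullchain.pem
# Verification succeeds only when the intermediate is supplied
openssl verify -CAfile ca.pem -untrusted intermediate.pem rancher.yourdomain.com.pem
```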
|
||||
|
||||
```
|
||||
-----BEGIN CERTIFICATE-----
|
||||
%YOUR_CERTIFICATE%
|
||||
-----END CERTIFICATE-----
|
||||
-----BEGIN CERTIFICATE-----
|
||||
%YOUR_INTERMEDIATE_CERTIFICATE%
|
||||
-----END CERTIFICATE-----
|
||||
```
|
||||
|
||||
If you still get errors during verification, you can retrieve the subject and the issuer of the server certificate using the following command:
|
||||
|
||||
```
|
||||
openssl x509 -noout -subject -issuer -in rancher.yourdomain.com.pem
|
||||
subject= /C=GB/ST=England/O=Alice Ltd/CN=rancher.yourdomain.com
|
||||
issuer= /C=GB/ST=England/O=Alice Ltd/CN=Alice Intermediate CA
|
||||
```
|
||||
|
||||
## How do I check `Common Name` and `Subject Alternative Names` in my server certificate?
|
||||
|
||||
Although technically an entry in `Subject Alternative Names` is required, having the hostname in both `Common Name` and as entry in `Subject Alternative Names` gives you maximum compatibility with older browser/applications.
|
||||
|
||||
Check `Common Name`:
|
||||
|
||||
```
|
||||
openssl x509 -noout -subject -in cert.pem
|
||||
subject= /CN=rancher.my.org
|
||||
```
|
||||
|
||||
Check `Subject Alternative Names`:
|
||||
|
||||
```
|
||||
openssl x509 -noout -in cert.pem -text | grep DNS
|
||||
DNS:rancher.my.org
|
||||
```
|
||||
|
||||
## Why does it take 5+ minutes for a pod to be rescheduled when a node has failed?

This is due to a combination of the following default Kubernetes settings:

* kubelet
  * `node-status-update-frequency`: Specifies how often kubelet posts node status to master (default 10s)
* kube-controller-manager
  * `node-monitor-period`: The period for syncing NodeStatus in NodeController (default 5s)
  * `node-monitor-grace-period`: Amount of time which we allow running Node to be unresponsive before marking it unhealthy (default 40s)
  * `pod-eviction-timeout`: The grace period for deleting pods on failed nodes (default 5m0s)

See [Kubernetes: kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) and [Kubernetes: kube-controller-manager](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/) for more information on these settings.

In Kubernetes v1.13, the `TaintBasedEvictions` feature is enabled by default. See [Kubernetes: Taint based Evictions](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#taint-based-evictions) for more information.

* kube-apiserver (Kubernetes v1.13 and up)
  * `default-not-ready-toleration-seconds`: Indicates the tolerationSeconds of the toleration for notReady:NoExecute that is added by default to every pod that does not already have such a toleration.
  * `default-unreachable-toleration-seconds`: Indicates the tolerationSeconds of the toleration for unreachable:NoExecute that is added by default to every pod that does not already have such a toleration.
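
Adding the defaults together explains the delay: the node is marked unhealthy after `node-monitor-grace-period`, and pods are evicted only after `pod-eviction-timeout` (or, with `TaintBasedEvictions`, after the default 300-second toleration). A quick sketch of the arithmetic:

```shell
# Worst-case time before pods leave a failed node, using the defaults above:
# ~40s for the node to be marked unhealthy, plus ~300s before eviction.
node_monitor_grace_period=40
eviction_delay=300   # pod-eviction-timeout, or the default 300s toleration
echo "$(( node_monitor_grace_period + eviction_delay )) seconds"
```

With the defaults this comes to roughly 340 seconds, which matches the 5+ minutes observed above.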

## Can I use keyboard shortcuts in the UI?

Yes, most parts of the UI can be reached using keyboard shortcuts. For an overview of the available shortcuts, press `?` anywhere in the UI.
@@ -0,0 +1,101 @@
---
title: Upgrading in an Air-Gapped Environment
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/air-gapped-upgrades"/>
</head>

:::note

These instructions assume you have already followed the instructions for a Kubernetes upgrade on [this page,](upgrades.md) including the prerequisites, up until step 3 (Upgrade Rancher).

:::

## Rancher Helm Upgrade Options

To upgrade with Helm, apply the same options that you used when installing Rancher. Refer to the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher-launched Kubernetes clusters or Rancher tools.

Based on the choice you made during installation, complete one of the procedures below.

Placeholder | Description
------------|-------------
`<VERSION>` | The version number of the output tarball.
`<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry.
`<CERTMANAGER_VERSION>` | The cert-manager version running on the Kubernetes cluster.
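
To avoid editing each command by hand, the placeholders can be set as shell variables once and reused in the upgrade commands (the values below are purely illustrative; substitute your own):

```shell
# Illustrative placeholder values; substitute your own.
VERSION="2.14.0"
RANCHER_HOSTNAME="rancher.yourdomain.com"
REGISTRY="registry.yourdomain.com:5000"
CERTMANAGER_VERSION="v1.15.3"

# The chart tarball path used by the upgrade commands then expands to:
echo "./rancher-${VERSION}.tgz"
```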
### Option A: Default Self-signed Certificate

```
helm upgrade rancher ./rancher-<VERSION>.tgz \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set certmanager.version=<CERTMANAGER_VERSION> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true
```

The `systemDefaultRegistry` option sets a default private registry to be used in Rancher, and `useBundledSystemChart=true` uses the packaged Rancher system charts.
#### Resolving UPGRADE FAILED Error

If you encounter the error message `Error: UPGRADE FAILED: "rancher" has no deployed releases`, Rancher might have been installed via the `helm template` command. To successfully upgrade Rancher, use the following command instead:

```
helm template rancher ./rancher-<VERSION>.tgz --output-dir . \
  --no-hooks \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set certmanager.version=<CERTMANAGER_VERSION> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true
```

The `--no-hooks` option prevents files for Helm hooks from being generated. The `systemDefaultRegistry` option sets a default private registry to be used in Rancher, and `useBundledSystemChart=true` uses the packaged Rancher system charts.

After you run the Helm command, apply the rendered template:

```
kubectl -n cattle-system apply -R -f ./rancher
```
### Option B: Certificates from Files using Kubernetes Secrets

```plain
helm upgrade rancher ./rancher-<VERSION>.tgz \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true
```

The `systemDefaultRegistry` option sets a default private registry to be used in Rancher, and `useBundledSystemChart=true` uses the packaged Rancher system charts.

If you are using a private CA signed certificate, add `--set privateCA=true` following `--set ingress.tls.source=secret`:

```plain
helm upgrade rancher ./rancher-<VERSION>.tgz \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set privateCA=true \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true
```
## Verify the Upgrade

Log into Rancher to confirm that the upgrade succeeded.

:::tip

Having network issues following the upgrade?

See [Restoring Cluster Networking](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.0-2.4/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/namespace-migration.md).

:::

## Known Upgrade Issues

A list of known issues for each Rancher version can be found in the release notes on [GitHub](https://github.com/rancher/rancher/releases) and on the [Rancher forums.](https://forums.rancher.com/c/announcements/12)
@@ -0,0 +1,390 @@
---
title: Install/Upgrade Rancher on a Kubernetes Cluster
description: Learn how to install Rancher in development and production environments. Read about single node and high availability installation
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster"/>
</head>

In this section, you'll learn how to deploy Rancher on a Kubernetes cluster using the Helm CLI.

## Prerequisites

- [Kubernetes Cluster](#kubernetes-cluster)
- [Ingress Controller](#ingress-controller)
- [CLI Tools](#cli-tools)
### Kubernetes Cluster

Set up the Rancher server's local Kubernetes cluster.

Rancher can be installed on any Kubernetes cluster. This cluster can use upstream Kubernetes, one of Rancher's Kubernetes distributions, or a managed Kubernetes cluster from a provider such as Amazon EKS.

For help setting up a Kubernetes cluster, we provide these tutorials:

- **K3s:** For the tutorial to install a K3s Kubernetes cluster, refer to [this page.](../../../how-to-guides/new-user-guides/kubernetes-cluster-setup/k3s-for-rancher.md) For help setting up the infrastructure for a high-availability K3s cluster, refer to [this page.](../../../how-to-guides/new-user-guides/infrastructure-setup/ha-k3s-kubernetes-cluster.md)
- **RKE2:** For the tutorial to install an RKE2 Kubernetes cluster, refer to [this page.](../../../how-to-guides/new-user-guides/kubernetes-cluster-setup/rke2-for-rancher.md) For help setting up the infrastructure for a high-availability RKE2 cluster, refer to [this page.](../../../how-to-guides/new-user-guides/infrastructure-setup/ha-rke2-kubernetes-cluster.md)
- **Amazon EKS:** For details on how to install Rancher on Amazon EKS, including how to install an Ingress controller so that the Rancher server can be accessed, refer to [this page.](rancher-on-amazon-eks.md)
- **AKS:** For details on how to install Rancher with Azure Kubernetes Service, including how to install an Ingress controller so that the Rancher server can be accessed, refer to [this page.](rancher-on-aks.md)
- **GKE:** For details on how to install Rancher with Google Kubernetes Engine, including how to install an Ingress controller so that the Rancher server can be accessed, refer to [this page.](rancher-on-gke.md) GKE has two modes of operation for creating a Kubernetes cluster: Autopilot and Standard. The cluster configuration for Autopilot mode restricts editing the kube-system namespace, but Rancher needs to create resources in the kube-system namespace during installation. As a result, you will not be able to install Rancher on a GKE cluster created in Autopilot mode.
### Ingress Controller

The Rancher UI and API are exposed through an Ingress. This means the Kubernetes cluster that you install Rancher in must contain an Ingress controller.

For RKE2 and K3s installations, you don't have to install the Ingress controller manually because one is installed by default.

For distributions that do not include an Ingress controller by default, such as hosted Kubernetes clusters (EKS, GKE, or AKS), you have to deploy an Ingress controller first. Note that the Rancher Helm chart does not set an `ingressClassName` on the Ingress by default. Because of this, you have to configure the Ingress controller to also watch Ingresses without an `ingressClassName`.

Examples are included in the **Amazon EKS**, **AKS**, and **GKE** tutorials above.

### CLI Tools

The following CLI tools are required for setting up the Kubernetes cluster. Make sure these tools are installed and available in your `$PATH`.

- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) - Kubernetes command-line tool.
- [helm](https://docs.helm.sh/using_helm/#installing-helm) - Package management for Kubernetes. Refer to the [Helm version requirements](../resources/helm-version-requirements.md) to choose a version of Helm to install Rancher. Refer to the [instructions provided by the Helm project](https://helm.sh/docs/intro/install/) for your specific platform.
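
As a quick sanity check (a minimal sketch; extend the tool list as needed), you can confirm that both binaries are on your `$PATH`:

```shell
# Report whether each required CLI is installed and on $PATH.
for tool in kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: MISSING"
  fi
done
```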
## Install the Rancher Helm Chart

:::important

In Rancher Community v2.13.1, if your registry configuration is one of the following, you may see Rancher generate the `cattle-cluster-agent` image with an incorrect `docker.io` path segment:

- Environments where a **cluster-scoped container registry** is configured for system images.
- Environments where a **global `system-default-registry`** is configured (e.g., air-gapped setups), even if no cluster-scoped registry is set.

**Workaround for Affected Setups:** As a workaround, override the `cattle-cluster-agent` image via the `CATTLE_AGENT_IMAGE` environment variable. This value must **not** contain any registry prefix (Rancher will handle that automatically). It should be set only to the repository and tag, for example: `rancher/rancher-agent:v2.13.1`

**Helm `install` example:**

```bash
helm install rancher rancher-latest/rancher \
  ...
  --set extraEnv[0].name=CATTLE_AGENT_IMAGE \
  --set extraEnv[0].value=rancher/rancher-agent:v2.13.1
```

**Helm `upgrade` example:**

```bash
helm upgrade rancher rancher-latest/rancher \
  ...
  --set extraEnv[0].name=CATTLE_AGENT_IMAGE \
  --set extraEnv[0].value=rancher/rancher-agent:v2.13.1
```

**Important Upgrade Note:**

The `CATTLE_AGENT_IMAGE` override is intended only as a temporary workaround for the affected configurations. Once a Rancher version is available that corrects this behavior, the `CATTLE_AGENT_IMAGE` override should be **removed** from the Helm values, so that Rancher can resume managing the agent image normally and automatically track future image and tag changes. See [#53187](https://github.com/rancher/rancher/issues/53187#issuecomment-3676484603) for further information.

:::
Rancher is installed using the [Helm](https://helm.sh/) package manager for Kubernetes. Helm charts provide templating syntax for Kubernetes YAML manifest documents. With Helm, we can create configurable deployments instead of just using static files.

For systems without direct internet access, see [Air Gap: Kubernetes install](../other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md).

To choose a Rancher version to install, refer to [Choosing a Rancher Version.](../resources/choose-a-rancher-version.md)

To choose a version of Helm to install Rancher with, refer to the [Helm version requirements](../resources/helm-version-requirements.md).

:::note

The installation instructions assume you are using Helm 3.

:::

To set up Rancher:

1. [Add the Helm chart repository](#1-add-the-helm-chart-repository)
2. [Create a namespace for Rancher](#2-create-a-namespace-for-rancher)
3. [Choose your SSL configuration](#3-choose-your-ssl-configuration)
4. [Install cert-manager](#4-install-cert-manager) (unless you are bringing your own certificates, or TLS will be terminated on a load balancer)
5. [Install Rancher with Helm and your chosen certificate option](#5-install-rancher-with-helm-and-your-chosen-certificate-option)
6. [Verify that the Rancher server is successfully deployed](#6-verify-that-the-rancher-server-is-successfully-deployed)
7. [Save your options](#7-save-your-options)
### 1. Add the Helm Chart Repository

Use the `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Rancher Version](../resources/choose-a-rancher-version.md).

- Latest: Recommended for trying out the newest features

  ```
  helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
  ```

- Stable: Recommended for production environments

  ```
  helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
  ```

- Alpha: Experimental preview of upcoming releases.

  ```
  helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha
  ```

  Note: Upgrades are not supported to, from, or between Alphas.

### 2. Create a Namespace for Rancher

We'll need to define a Kubernetes namespace where the resources created by the chart should be installed. This should always be `cattle-system`:

```
kubectl create namespace cattle-system
```
### 3. Choose your SSL Configuration

The Rancher management server is designed to be secure by default and requires SSL/TLS configuration.

:::note

If you want to terminate SSL/TLS externally, see [TLS termination on an External Load Balancer](../installation-references/helm-chart-options.md#external-tls-termination). As outlined on that page, this option has additional requirements for TLS verification.

:::

There are three recommended options for the source of the certificate used for TLS termination at the Rancher server:

- **Rancher-generated TLS certificate:** In this case, you will need to install `cert-manager` into the cluster. Rancher utilizes `cert-manager` to issue and maintain its certificates. Rancher will generate a CA certificate of its own, and sign a cert using that CA. `cert-manager` is then responsible for managing that certificate. No extra action is needed when `agent-tls-mode` is set to `strict`. More information can be found on this setting in [Agent TLS Enforcement](../installation-references/tls-settings.md#agent-tls-enforcement).
- **Let's Encrypt:** The Let's Encrypt option also uses `cert-manager`. However, in this case, cert-manager is combined with a special Issuer for Let's Encrypt that performs all actions (including request and validation) necessary for getting a Let's Encrypt issued cert. This configuration uses HTTP validation (`HTTP-01`), so the load balancer must have a public DNS record and be accessible from the internet. When setting `agent-tls-mode` to `strict`, you must also specify `--set privateCA=true` and upload the Let's Encrypt CA as described in [Adding TLS Secrets](../resources/add-tls-secrets.md). More information can be found on this setting in [Agent TLS Enforcement](../installation-references/tls-settings.md#agent-tls-enforcement).
- **Bring your own certificate:** This option allows you to bring your own public- or private-CA signed certificate. Rancher will use that certificate to secure websocket and HTTPS traffic. In this case, you must upload this certificate (and associated key) as PEM-encoded files named `tls.crt` and `tls.key`. If you are using a private CA, you must also upload that CA certificate, because it may not be trusted by your nodes. Rancher will take that CA certificate and generate a checksum from it, which the various Rancher components will use to validate their connection to Rancher. If `agent-tls-mode` is set to `strict`, the CA must be uploaded so that downstream clusters can successfully connect. More information can be found on this setting in [Agent TLS Enforcement](../installation-references/tls-settings.md#agent-tls-enforcement).

| Configuration                             | Helm Chart Option                | Requires cert-manager          |
| ----------------------------------------- | -------------------------------- | ------------------------------ |
| Rancher Generated Certificates (Default)  | `ingress.tls.source=rancher`     | [yes](#4-install-cert-manager) |
| Let’s Encrypt                             | `ingress.tls.source=letsEncrypt` | [yes](#4-install-cert-manager) |
| Certificates from Files                   | `ingress.tls.source=secret`      | no                             |
### 4. Install cert-manager

> You should skip this step if you are bringing your own certificate files (option `ingress.tls.source=secret`), or if you use [TLS termination on an external load balancer](../installation-references/helm-chart-options.md#external-tls-termination).

This step is only required to use certificates issued by Rancher's generated CA (`ingress.tls.source=rancher`) or to request Let's Encrypt issued certificates (`ingress.tls.source=letsEncrypt`).

<details id="cert-manager">
<summary>Click to Expand</summary>

:::note Important:

Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.11.0, see our [upgrade documentation](../resources/upgrade-cert-manager.md).

:::

These instructions are adapted from the [official cert-manager documentation](https://cert-manager.io/docs/installation/kubernetes/#installing-with-helm).

:::note

To see options on how to customize the cert-manager install (including for cases where your cluster uses PodSecurityPolicies), see the [cert-manager docs](https://artifacthub.io/packages/helm/cert-manager/cert-manager#configuration).

:::

```
# If you have installed the CRDs manually, instead of setting `installCRDs` or `crds.enabled` to `true` in your Helm install command, you should upgrade your CRD resources before upgrading the Helm chart:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/<VERSION>/cert-manager.crds.yaml

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true
```

Once you’ve installed cert-manager, you can verify it is deployed correctly by checking the cert-manager namespace for running pods:

```
kubectl get pods --namespace cert-manager

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-5c6866597-zw7kh               1/1     Running   0          2m
cert-manager-cainjector-577f6d9fd7-tr77l   1/1     Running   0          2m
cert-manager-webhook-787858fcdb-nlzsq      1/1     Running   0          2m
```

</details>
### 5. Install Rancher with Helm and Your Chosen Certificate Option

The exact command to install Rancher differs depending on the certificate configuration.

However, irrespective of the certificate configuration, the name of the Rancher installation in the `cattle-system` namespace should always be `rancher`.

:::tip Testing and Development:

This final command to install Rancher requires a domain name that forwards traffic to Rancher. If you are using the Helm CLI to set up a proof-of-concept, you can use a fake domain name when passing the `hostname` option. An example of a fake domain name would be `<IP_OF_LINUX_NODE>.sslip.io`, which would expose Rancher on the IP where it is running. Production installs require a real domain name.

:::
<Tabs>
<TabItem value="Rancher-generated Certificates">

By default, Rancher generates a CA and uses `cert-manager` to issue the certificate for access to the Rancher server interface.

Because `rancher` is the default option for `ingress.tls.source`, we are not specifying `ingress.tls.source` when running the `helm install` command.

- Set the `hostname` to the DNS name you pointed at your load balancer.
- Set the `bootstrapPassword` to something unique for the `admin` user.
- To install a specific Rancher version, use the `--version` flag, for example: `--version 2.7.0`

```
helm install rancher rancher-<CHART_REPO>/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set bootstrapPassword=admin
```

If you are installing an alpha version, Helm requires adding the `--devel` option to the install command:

```
helm install rancher rancher-alpha/rancher --devel
```

Wait for Rancher to be rolled out:

```
kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
```

</TabItem>
<TabItem value="Let's Encrypt">

This option uses `cert-manager` to automatically request and renew [Let's Encrypt](https://letsencrypt.org/) certificates. This is a free service that provides you with a valid certificate, as Let's Encrypt is a trusted CA.

:::note

You need to have port 80 open, as the HTTP-01 challenge can only be done on port 80.

:::

In the following command:

- Set `hostname` to the public DNS record.
- Set the `bootstrapPassword` to something unique for the `admin` user.
- Set `ingress.tls.source` to `letsEncrypt`.
- Set `letsEncrypt.email` to the email address used for communication about your certificate (for example, expiry notices).
- Set `letsEncrypt.ingress.class` to whatever your Ingress controller is, e.g., `traefik`, `nginx`, `haproxy`, etc.

:::warning

When `agent-tls-mode` is set to `strict` (the default value for new installs of Rancher starting from v2.9.0), you must supply the `privateCA=true` chart value (e.g., through `--set privateCA=true`) and upload the Let's Encrypt Certificate Authority as outlined in [Adding TLS Secrets](../resources/add-tls-secrets.md). Information on identifying the Let's Encrypt Root CA can be found in the Let's Encrypt [docs](https://letsencrypt.org/certificates/). If you don't upload the CA, Rancher may fail to connect to new or existing downstream clusters.

:::

```
helm install rancher rancher-<CHART_REPO>/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set bootstrapPassword=admin \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=me@example.org \
  --set letsEncrypt.ingress.class=nginx
```

If you are installing an alpha version, Helm requires adding the `--devel` option to the install command:

```
helm install rancher rancher-alpha/rancher --devel
```

Wait for Rancher to be rolled out:

```
kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
```

</TabItem>
<TabItem value="Certificates from Files">

In this option, Kubernetes secrets are created from your own certificates for Rancher to use.

When you run this command, the `hostname` option must match the `Common Name` or a `Subject Alternative Names` entry in the server certificate, or the Ingress controller will fail to configure correctly.

Although an entry in `Subject Alternative Names` is technically required, having a matching `Common Name` maximizes compatibility with older browsers and applications.

:::note

If you want to check if your certificates are correct, see [How do I check Common Name and Subject Alternative Names in my server certificate?](../../../faq/technical-items.md#how-do-i-check-common-name-and-subject-alternative-names-in-my-server-certificate)

:::

- Set the `hostname`.
- Set the `bootstrapPassword` to something unique for the `admin` user.
- Set `ingress.tls.source` to `secret`.

```
helm install rancher rancher-<CHART_REPO>/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set bootstrapPassword=admin \
  --set ingress.tls.source=secret
```

If you are installing an alpha version, Helm requires adding the `--devel` option to the install command:

```
helm install rancher rancher-alpha/rancher --devel
```

If you are using a private CA signed certificate, add `--set privateCA=true` to the command:

```
helm install rancher rancher-<CHART_REPO>/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set bootstrapPassword=admin \
  --set ingress.tls.source=secret \
  --set privateCA=true
```

Now that Rancher is deployed, see [Adding TLS Secrets](../resources/add-tls-secrets.md) to publish the certificate files so Rancher and the Ingress controller can use them.

</TabItem>
</Tabs>
The Rancher chart configuration has many options for customizing the installation to suit your specific environment. Here are some common advanced scenarios:

- [HTTP Proxy](../installation-references/helm-chart-options.md#http-proxy)
- [Private Container Image Registry](../installation-references/helm-chart-options.md#private-registry-and-air-gap-installs)
- [TLS Termination on an External Load Balancer](../installation-references/helm-chart-options.md#external-tls-termination)

See the [Chart Options](../installation-references/helm-chart-options.md) for the full list of options.
### 6. Verify that the Rancher Server is Successfully Deployed

After adding the secrets, check if Rancher was rolled out successfully:

```
kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
```

If you see the following error: `error: deployment "rancher" exceeded its progress deadline`, you can check the status of the deployment by running the following command:

```
kubectl -n cattle-system get deploy rancher
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rancher   3         3         3            3           3m
```

It should show the same count for `DESIRED` and `AVAILABLE`.

### 7. Save Your Options

Make sure you save the `--set` options you used. You will need to use the same options when you upgrade Rancher to new versions with Helm.

### Finishing Up

That's it. You should have a functional Rancher server.

In a web browser, go to the DNS name that forwards traffic to your load balancer. You should be greeted by the colorful login page.

Doesn't work? Take a look at the [Troubleshooting](troubleshooting.md) page.
@@ -0,0 +1,151 @@
---
title: Installing Rancher on Azure Kubernetes Service
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-aks"/>
</head>

This page covers how to install Rancher on Microsoft's Azure Kubernetes Service (AKS).

The guide uses command line tools to provision an AKS cluster with an Ingress. If you prefer to provision your cluster using the Azure portal, refer to the [official documentation](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal).

If you already have an AKS Kubernetes cluster, skip to the step about [installing an ingress.](#5-install-an-ingress) Then install the Rancher Helm chart following the instructions on [this page.](install-upgrade-on-a-kubernetes-cluster.md#install-the-rancher-helm-chart)

## Prerequisites

:::caution

Deploying to Microsoft Azure will incur charges.

:::

- [Microsoft Azure Account](https://azure.microsoft.com/en-us/free/): A Microsoft Azure account is required to create resources for deploying Rancher and Kubernetes.
- [Microsoft Azure Subscription](https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/create-subscription#create-a-subscription-in-the-azure-portal): Use this link to follow a tutorial to create a Microsoft Azure subscription if you don't have one yet.
- [Microsoft Azure Tenant](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-create-new-tenant): Use this link and follow the instructions to create a Microsoft Azure tenant.
- Your subscription has sufficient quota for at least 2 vCPUs. For details on Rancher server resource requirements, refer to [this section](../installation-requirements/installation-requirements.md).
- When installing Rancher with Helm in Azure, use the L7 load balancer to avoid networking issues. For more information, refer to the documentation on [Azure load balancer limitations](https://docs.microsoft.com/en-us/azure/load-balancer/components#limitations).
## 1. Prepare your Workstation
|
||||
|
||||
Install the following command line tools on your workstation:
|
||||
|
||||
- The Azure CLI, **az:** For help, refer to these [installation steps.](https://docs.microsoft.com/en-us/cli/azure/)
|
||||
- **kubectl:** For help, refer to these [installation steps.](https://kubernetes.io/docs/tasks/tools/#kubectl)
|
||||
- **helm:** For help, refer to these [installation steps.](https://helm.sh/docs/intro/install/)
|
||||
|
||||
## 2. Create a Resource Group
|
||||
|
||||
After installing the CLI, you will need to log in with your Azure account.
|
||||
|
||||
```
|
||||
az login
|
||||
```
|
||||
|
||||
Create a [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal) to hold all relevant resources for your cluster. Use a location that applies to your use case.
|
||||
|
||||
```
|
||||
az group create --name rancher-rg --location eastus
|
||||
```
|
||||
|
||||
## 3. Create the AKS Cluster
|
||||
|
||||
To create an AKS cluster, run the following command. Use a VM size that applies to your use case. Refer to [this article](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes) for available sizes and options. When choosing a Kubernetes version, be sure to first consult the [support matrix](https://rancher.com/support-matrix/) to find the highest version of Kubernetes that has been validated for your Rancher version.
|
||||
|
||||
:::note
|
||||
|
||||
If you're updating from an older version of Kubernetes, to Kubernetes v1.22 or above, you also need to [update](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/) ingress-nginx.
|
||||
|
||||
:::
|
||||
|
||||
```
|
||||
az aks create \
|
||||
--resource-group rancher-rg \
|
||||
--name rancher-server \
|
||||
--kubernetes-version <VERSION> \
|
||||
--node-count 3 \
|
||||
--node-vm-size Standard_D2_v3
|
||||
```
|
||||
|
||||
The cluster will take some time to be deployed.
|
||||
|
||||
## 4. Get Access Credentials
|
||||
|
||||
After the cluster is deployed, get the access credentials.
|
||||
|
||||
```
|
||||
az aks get-credentials --resource-group rancher-rg --name rancher-server
|
||||
```
|
||||
|
||||
This command merges your cluster's credentials into the existing kubeconfig and allows `kubectl` to interact with the cluster.
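To verify that `kubectl` is now pointed at the new cluster, you can check the active context and list the nodes (a quick sanity check; node names and versions will differ in your environment):

```
kubectl config current-context
kubectl get nodes
```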
## 5. Install an Ingress

The cluster needs an Ingress so that Rancher can be accessed from outside the cluster. Installing an Ingress requires allocating a public IP address. Ensure that you have sufficient quota; otherwise, the IP address assignment will fail. Limits for public IP addresses apply at a regional level per subscription.

To make sure that you choose the correct Ingress-NGINX Helm chart, first find an `Ingress-NGINX version` that's compatible with your Kubernetes version in the [Kubernetes/ingress-nginx support table](https://github.com/kubernetes/ingress-nginx#supported-versions-table).

Then, list the Helm charts available to you by running the following command:

```
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm search repo ingress-nginx -l
```

The `helm search` command's output contains an `APP VERSION` column. The versions under this column are equivalent to the `Ingress-NGINX version` you chose earlier. Using the app version, select a chart version that bundles an app compatible with your Kubernetes install. For example, if you have Kubernetes v1.24, you can select the v4.6.0 Helm chart, since Ingress-NGINX v1.7.0 comes bundled with that chart, and v1.7.0 is compatible with Kubernetes v1.24. When in doubt, select the most recent compatible version.

Now that you know which Helm chart `version` you need, run the following command. It installs an `nginx-ingress-controller` with a Kubernetes load balancer service:

```
helm upgrade --install \
  ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.service.type=LoadBalancer \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
  --set controller.service.externalTrafficPolicy=Local \
  --version 4.6.0 \
  --create-namespace
```

## 6. Get Load Balancer IP

To get the address of the load balancer, run:

```
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
```

The result should look similar to the following:

```
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.0.116.18   40.31.180.83   80:31229/TCP,443:31050/TCP   67s
```

Save the `EXTERNAL-IP`.
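If you want to capture the address non-interactively, a JSONPath query can extract it (on AKS the load balancer is published as an IP address):

```
kubectl get service ingress-nginx-controller \
  --namespace ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```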
## 7. Set up DNS

External traffic to the Rancher server will need to be directed at the load balancer you created.

Set up a DNS record that points at the `EXTERNAL-IP` you saved. This DNS name will be used as the Rancher server URL.

There are many valid ways to set up the DNS. For help, refer to the [Azure DNS documentation.](https://docs.microsoft.com/en-us/azure/dns/)
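As one sketch, assuming you host your zone in Azure DNS (the zone `my.org` and the resource group `rancher-rg` are illustrative placeholders), an A record can be added with the Azure CLI:

```
az network dns record-set a add-record \
  --resource-group rancher-rg \
  --zone-name my.org \
  --record-set-name rancher \
  --ipv4-address <EXTERNAL-IP>
```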
## 8. Install the Rancher Helm Chart

Next, install the Rancher Helm chart by following the instructions on [this page.](install-upgrade-on-a-kubernetes-cluster.md#install-the-rancher-helm-chart) The Helm instructions are the same for installing Rancher on any Kubernetes distribution.

Use the DNS name from the previous step as the Rancher server URL when you install Rancher. It can be passed in as a Helm option. For example, if the DNS name is `rancher.my.org`, you could run the Helm installation command with the option `--set hostname=rancher.my.org`.

When installing Rancher on top of this setup, you will also need to pass the value below into the Rancher Helm install command in order to set the name of the ingress controller to be used with Rancher's ingress resource:

```
--set ingress.ingressClassName=nginx
```

Refer [here for the Helm install command](install-upgrade-on-a-kubernetes-cluster.md#5-install-rancher-with-helm-and-your-chosen-certificate-option) for your chosen certificate option.
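As a minimal sketch of how the options fit together (assuming the `rancher-latest` Helm repository, the default Rancher-generated certificates, and `rancher.my.org` as the hostname; see the linked page for the full set of options):

```
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --create-namespace \
  --set hostname=rancher.my.org \
  --set ingress.ingressClassName=nginx
```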
@@ -0,0 +1,155 @@
---
title: Installing Rancher on Amazon EKS
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-amazon-eks"/>
</head>

This page covers installing Rancher on an Amazon EKS cluster. You can also [install Rancher through the AWS Marketplace](../../quick-start-guides/deploy-rancher-manager/aws-marketplace.md).

If you already have an EKS Kubernetes cluster, skip to the step about [installing an ingress.](#5-install-an-ingress) Then install the Rancher Helm chart following the instructions on [this page.](install-upgrade-on-a-kubernetes-cluster.md#install-the-rancher-helm-chart)

## Creating an EKS Cluster for the Rancher Server

In this section, you'll install an EKS cluster with an ingress by using command line tools. This guide may be useful if you want to use fewer resources while trying out Rancher on EKS.

:::note Prerequisites:

- You should already have an AWS account.
- It is recommended to use an IAM user instead of the root AWS account. You will need the IAM user's access key and secret key to configure the AWS command line interface.
- The IAM user needs the minimum IAM policies described in the official [eksctl documentation.](https://eksctl.io/usage/minimum-iam-policies/)

:::

### 1. Prepare your Workstation

Install the following command line tools on your workstation:

- **The AWS CLI v2:** For help, refer to these [installation steps.](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
- **eksctl:** For help, refer to these [installation steps.](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html)
- **kubectl:** For help, refer to these [installation steps.](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html)
- **helm:** For help, refer to these [installation steps.](https://helm.sh/docs/intro/install/)

### 2. Configure the AWS CLI

To configure the AWS CLI, run the following command:

```
aws configure
```

Then enter the following values:

| Value | Description |
|-------|-------------|
| AWS Access Key ID | The access key credential for the IAM user with EKS permissions. |
| AWS Secret Access Key | The secret key credential for the IAM user with EKS permissions. |
| Default region name | An [AWS region](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html#Concepts.RegionsAndAvailabilityZones.Regions) where the cluster nodes will be located. |
| Default output format | Enter `json`. |

### 3. Create the EKS Cluster

To create an EKS cluster, run the following command. Use the AWS region that applies to your use case. When choosing a Kubernetes version, be sure to first consult the [support matrix](https://rancher.com/support-matrix/) to find the highest version of Kubernetes that has been validated for your Rancher version.

**Note:** If you're updating from an older version of Kubernetes to v1.22 or later, you also need to [update](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/) ingress-nginx.

```
eksctl create cluster \
  --name rancher-server \
  --version <VERSION> \
  --region us-west-2 \
  --nodegroup-name ranchernodes \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --managed
```

The cluster will take some time to be deployed with CloudFormation.
### 4. Test the Cluster

To test the cluster, run:

```
eksctl get cluster
```

The result should look like the following:

```
eksctl get cluster
2021-03-18 15:09:35 [ℹ]  eksctl version 0.40.0
2021-03-18 15:09:35 [ℹ]  using region us-west-2
NAME            REGION     EKSCTL CREATED
rancher-server  us-west-2  True
```

### 5. Install an Ingress

The cluster needs an Ingress so that Rancher can be accessed from outside the cluster.

To make sure that you choose the correct Ingress-NGINX Helm chart, first find an `Ingress-NGINX version` that's compatible with your Kubernetes version in the [Kubernetes/ingress-nginx support table](https://github.com/kubernetes/ingress-nginx#supported-versions-table).

Then, list the Helm charts available to you by running the following command:

```
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm search repo ingress-nginx -l
```

The `helm search` command's output contains an `APP VERSION` column. The versions under this column are equivalent to the `Ingress-NGINX version` you chose earlier. Using the app version, select a chart version that bundles an app compatible with your Kubernetes install. For example, if you have Kubernetes v1.23, you can select the v4.6.0 Helm chart, since Ingress-NGINX v1.7.0 comes bundled with that chart, and v1.7.0 is compatible with Kubernetes v1.23. When in doubt, select the most recent compatible version.

Now that you know which Helm chart `version` you need, run the following command. It installs an `nginx-ingress-controller` with a Kubernetes load balancer service:

```
helm upgrade --install \
  ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.service.type=LoadBalancer \
  --version 4.6.0 \
  --create-namespace
```

### 6. Get Load Balancer IP

To get the address of the load balancer, run:

```
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
```

The result should look similar to the following:

```
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP                                                              PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.100.90.18   a904a952c73bf4f668a17c46ac7c56ab-962521486.us-west-2.elb.amazonaws.com   80:31229/TCP,443:31050/TCP   27m
```

Save the `EXTERNAL-IP`.
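On EKS the load balancer is published as a DNS hostname rather than an IP, so a JSONPath query can capture it directly:

```
kubectl get service ingress-nginx-controller \
  --namespace ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```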
### 7. Set up DNS

External traffic to the Rancher server will need to be directed at the load balancer you created.

Set up a DNS record that points at the `EXTERNAL-IP` (an ELB hostname) that you saved. This DNS name will be used as the Rancher server URL.

There are many valid ways to set up the DNS. For help, refer to the AWS documentation on [routing traffic to an ELB load balancer.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html)
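As one sketch using the AWS CLI (the hosted zone ID, domain, and ELB hostname are illustrative placeholders; an alias A record is the more common choice for ELBs, but a plain CNAME keeps the example short):

```
aws route53 change-resource-record-sets \
  --hosted-zone-id <ZONE-ID> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "rancher.my.org",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "<ELB-HOSTNAME>"}]
      }
    }]
  }'
```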
### 8. Install the Rancher Helm Chart

Next, install the Rancher Helm chart by following the instructions on [this page.](install-upgrade-on-a-kubernetes-cluster.md#install-the-rancher-helm-chart) The Helm instructions are the same for installing Rancher on any Kubernetes distribution.

Use the DNS name from the previous step as the Rancher server URL when you install Rancher. It can be passed in as a Helm option. For example, if the DNS name is `rancher.my.org`, you could run the Helm installation command with the option `--set hostname=rancher.my.org`.

When installing Rancher on top of this setup, you will also need to pass the value below into the Rancher Helm install command in order to set the name of the ingress controller to be used with Rancher's ingress resource:

```
--set ingress.ingressClassName=nginx
```

Refer [here for the Helm install command](install-upgrade-on-a-kubernetes-cluster.md#5-install-rancher-with-helm-and-your-chosen-certificate-option) for your chosen certificate option.
@@ -0,0 +1,205 @@
---
title: Installing Rancher on a Google Kubernetes Engine Cluster
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rancher-on-gke"/>
</head>

In this section, you'll learn how to install Rancher on a Google Kubernetes Engine (GKE) cluster.

If you already have a GKE Kubernetes cluster, skip to the step about [installing an ingress.](#7-install-an-ingress) Then install the Rancher Helm chart following the instructions on [this page.](install-upgrade-on-a-kubernetes-cluster.md#install-the-rancher-helm-chart)

## Prerequisites

- You will need a Google account.
- You will need a Google Cloud billing account. You can manage your Cloud Billing accounts using the Google Cloud Console. For more information about the Cloud Console, visit [General guide to the console.](https://support.google.com/cloud/answer/3465889?hl=en&ref_topic=3340599)
- You will need a cloud quota for at least one in-use IP address and at least 2 CPUs. For more details about hardware requirements for the Rancher server, refer to [this section.](../installation-requirements/installation-requirements.md)

## 1. Enable the Kubernetes Engine API

Take the following steps to enable the Kubernetes Engine API:

1. Visit the [Kubernetes Engine page](https://console.cloud.google.com/projectselector/kubernetes?_ga=2.169595943.767329331.1617810440-856599067.1617343886) in the Google Cloud Console.
1. Create or select a project.
1. Open the project and enable the Kubernetes Engine API for the project. Wait for the API and related services to be enabled. This can take several minutes.
1. Make sure that billing is enabled for your Cloud project. For information on how to enable billing for your project, refer to the [Google Cloud documentation.](https://cloud.google.com/billing/docs/how-to/modify-project#enable_billing_for_a_project)

## 2. Open the Cloud Shell

Cloud Shell is a shell environment for managing resources hosted on Google Cloud. Cloud Shell comes preinstalled with the `gcloud` and `kubectl` command-line tools. The `gcloud` tool provides the primary command-line interface for Google Cloud, and `kubectl` provides the primary command-line interface for running commands against Kubernetes clusters.

The following sections describe how to launch Cloud Shell from the Google Cloud Console or from your local workstation.

### Cloud Shell

To launch the shell from the [Google Cloud Console,](https://console.cloud.google.com) go to the upper-right corner of the console and click the terminal button. When hovering over the button, it is labeled **Activate Cloud Shell**.

### Local Shell

To install `gcloud` and `kubectl`, perform the following steps:

1. Install the Cloud SDK by following [these steps.](https://cloud.google.com/sdk/docs/install) The Cloud SDK includes the `gcloud` command-line tool. The steps vary based on your OS.
1. After installing the Cloud SDK, install the `kubectl` command-line tool by running the following command:

   ```
   gcloud components install kubectl
   ```

   In a later step, `kubectl` will be configured to use the new GKE cluster.
1. [Install Helm 3](https://helm.sh/docs/intro/install/) if it is not already installed.
1. Enable Helm experimental [support for OCI images](https://github.com/helm/community/blob/master/hips/hip-0006.md) with the `HELM_EXPERIMENTAL_OCI` variable. Add the following line to `~/.bashrc` (or `~/.bash_profile` in macOS, or wherever your shell stores environment variables):

   ```
   export HELM_EXPERIMENTAL_OCI=1
   ```

1. Run the following command to load your updated `.bashrc` file:

   ```
   source ~/.bashrc
   ```

   If you are running macOS, use this command:

   ```
   source ~/.bash_profile
   ```
## 3. Configure the gcloud CLI

Set up default `gcloud` settings using one of the following methods:

- Using `gcloud init`, if you want to be walked through setting defaults.
- Using `gcloud config`, to individually set your project ID, zone, and region.

<Tabs>
<TabItem value="Using gcloud init">

1. Run `gcloud init` and follow the directions:

   ```
   gcloud init
   ```

   If you are using SSH on a remote server, use the `--console-only` flag to prevent the command from launching a browser:

   ```
   gcloud init --console-only
   ```

2. Follow the instructions to authorize `gcloud` to use your Google Cloud account and select the new project that you created.

</TabItem>
<TabItem value="Using gcloud config">
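To set the defaults individually, run the following commands, substituting your own project ID, zone, and region:

```
gcloud config set project <PROJECT-ID>
gcloud config set compute/zone us-west1-b
gcloud config set compute/region us-west1
```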
</TabItem>
</Tabs>

## 4. Confirm that gcloud is configured correctly

Run:

```
gcloud config list
```

The output should resemble the following:

```
[compute]
region = us-west1  # Your chosen region
zone = us-west1-b  # Your chosen zone
[core]
account = <Your email>
disable_usage_reporting = True
project = <Your project ID>

Your active configuration is: [default]
```
## 5. Create a GKE Cluster

The following command creates a three-node cluster.

Replace `cluster-name` with the name of your new cluster.

When choosing a Kubernetes version, be sure to first consult the [support matrix](https://rancher.com/support-matrix/) to find the highest version of Kubernetes that has been validated for your Rancher version.

To successfully create a GKE cluster with Rancher, your cluster must be created in Standard mode. GKE has two modes of operation for creating a Kubernetes cluster: Autopilot and Standard. The cluster configuration for Autopilot mode has restrictions on editing the `kube-system` namespace. However, Rancher needs to create resources in the `kube-system` namespace during installation. As a result, you will not be able to install Rancher on a GKE cluster created in Autopilot mode. For more information about the difference between GKE Autopilot mode and Standard mode, visit [Compare GKE Autopilot and Standard.](https://cloud.google.com/kubernetes-engine/docs/resources/autopilot-standard-feature-comparison)

**Note:** If you're updating from an older version of Kubernetes to v1.22 or later, you also need to [update](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/) ingress-nginx.

```
gcloud container clusters create cluster-name --num-nodes=3 --cluster-version=<VERSION>
```

## 6. Get Authentication Credentials

After creating your cluster, you need to get authentication credentials to interact with the cluster:

```
gcloud container clusters get-credentials cluster-name
```

This command configures `kubectl` to use the cluster you created.

## 7. Install an Ingress

The cluster needs an Ingress so that Rancher can be accessed from outside the cluster.

The following command installs an `nginx-ingress-controller` with a LoadBalancer service:

```
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install \
  ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.service.type=LoadBalancer \
  --version 4.0.18 \
  --create-namespace
```

## 8. Get the Load Balancer IP

To get the address of the load balancer, run:

```
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
```

The result should look similar to the following:

```
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.3.244.156   35.233.206.34   80:31876/TCP,443:32497/TCP   81s
```

Save the `EXTERNAL-IP`.

## 9. Set up DNS

External traffic to the Rancher server will need to be directed at the load balancer you created.

Set up a DNS record that points at the external IP you saved. This DNS name will be used as the Rancher server URL.

There are many valid ways to set up the DNS. For help, refer to the Google Cloud documentation about [managing DNS records.](https://cloud.google.com/dns/docs/records)
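As one sketch using Cloud DNS (the managed zone `my-zone` and domain `my.org` are illustrative placeholders):

```
gcloud dns record-sets create rancher.my.org. \
  --zone=my-zone \
  --type=A \
  --ttl=300 \
  --rrdatas=<EXTERNAL-IP>
```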
## 10. Install the Rancher Helm chart

Next, install the Rancher Helm chart by following the instructions on [this page.](install-upgrade-on-a-kubernetes-cluster.md#install-the-rancher-helm-chart) The Helm instructions are the same for installing Rancher on any Kubernetes distribution.

Use the DNS name from the previous step as the Rancher server URL when you install Rancher. It can be passed in as a Helm option. For example, if the DNS name is `rancher.my.org`, you could run the Helm installation command with the option `--set hostname=rancher.my.org`.

When installing Rancher on top of this setup, you will also need to set the name of the ingress controller to be used with Rancher's ingress resource:

```
--set ingress.ingressClassName=nginx
```

Refer [here for the Helm install command](install-upgrade-on-a-kubernetes-cluster.md#5-install-rancher-with-helm-and-your-chosen-certificate-option) for your chosen certificate option.

In Rancher v2.7.5, if you intend to use the default GKE ingress on your cluster without enabling VPC-native cluster mode, you need to set the following flag:

```
--set service.type=NodePort
```

This is necessary because of compatibility issues between this setup and ClusterIP, the default type for `cattle-system/rancher`.
@@ -0,0 +1,39 @@
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    defaults:
      audit: restricted
      audit-version: latest
      enforce: restricted
      enforce-version: latest
      warn: restricted
      warn-version: latest
    exemptions:
      namespaces:
      - ingress-nginx
      - kube-system
      - cattle-system
      - cattle-epinio-system
      - cattle-fleet-system
      - cattle-fleet-local-system
      - longhorn-system
      - cattle-neuvector-system
      - cattle-monitoring-system
      - rancher-alerting-drivers
      - cis-operator-system
      - cattle-csp-adapter-system
      - cattle-externalip-system
      - cattle-gatekeeper-system
      - istio-system
      - cattle-istio-system
      - cattle-logging-system
      - cattle-windows-gmsa-system
      - cattle-sriov-system
      - cattle-ui-plugin-system
      - tigera-operator
      - cattle-provisioning-capi-system
    kind: PodSecurityConfiguration
  name: PodSecurity
  path: ""
@@ -0,0 +1,157 @@
---
title: Rollbacks
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/rollbacks"/>
</head>

This page outlines how to roll back Rancher to a previous version after an upgrade.

Follow the instructions from this page when:

- The running Rancher instance has been upgraded to a newer version after the backup was made.
- The upstream (local) cluster is the same as where the backup was made.

:::tip

* To migrate Rancher to a new cluster, follow these steps to [migrate Rancher](../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md).
* If you need to restore Rancher to its previous state at the same Rancher version, see the [restore documentation](../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-rancher.md).

:::

## Alternative Steps for Special Scenarios

Alternative steps need to be performed for rollbacks in the following scenarios:

- Rolling back from v2.6.4 and later to an earlier version of v2.6.x.
- Rolling back from v2.7.7 and later to an earlier version of v2.7.x.

In Rancher v2.6.4, the cluster-api module is upgraded from v0.4.4 to v1.0.2. The cluster-api v1.0.2, in turn, upgrades the apiVersions of its Custom Resource Definitions (CRDs) from `cluster.x-k8s.io/v1alpha4` to `cluster.x-k8s.io/v1beta1`. Custom Resources (CRs) that use the older apiVersion (v1alpha4) are incompatible with v1beta1, which causes rollbacks to fail when you attempt to move from Rancher v2.6.4 to any previous version of Rancher v2.6.x.

In Rancher v2.7.7, the app `rancher-provisioning-capi` is installed on the upstream (local) cluster automatically as a replacement for the embedded cluster-api controllers. Conflicts and unexpected errors will occur if the upstream cluster contains both the app and Rancher v2.7.6 or earlier. Therefore, alternative steps are needed if you attempt to move from Rancher v2.7.7 to any previous version of Rancher v2.7.x.

### Step 1: Clean Up the Upstream (Local) Cluster

To avoid rollback failure, follow these [instructions](https://github.com/rancher/rancher-cleanup/blob/main/README.md) to run the scripts **before** you attempt a restore operation or rollback:

* `cleanup.sh`: Cleans up the cluster.
* `verify.sh`: Checks for any Rancher-related resources in the cluster.

:::caution

There will be downtime while `cleanup.sh` runs, since the script deletes resources created by Rancher.

:::

**Result:** All Rancher-related resources should be cleaned up on the upstream (local) cluster.

See the [rancher/rancher-cleanup repo](https://github.com/rancher/rancher-cleanup) for more details and source code.

### Step 2: Restore the Backup and Bring Up Rancher

At this point, there should be no Rancher-related resources on the upstream cluster. Therefore, the next step will be the same as if you were migrating Rancher to a new cluster that contains no Rancher resources.

Follow these [instructions](../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md) to install the Rancher-Backup Helm chart and restore Rancher to its previous state. Keep in mind that:

1. Step 3 can be skipped, because the Cert-Manager app should still exist on the upstream (local) cluster if it was installed before.
2. At Step 4, install the Rancher version you intend to roll back to.

## Rolling Back to Rancher v2.5.0+

To roll back to Rancher v2.5.0+, use the **Rancher Backups** application and restore Rancher from backup.

After a rollback, Rancher must be started with the previous (lower) version.

A restore is performed by creating a Restore custom resource.

:::note Important:

* Follow the instructions from this page for restoring Rancher on the same cluster where it was backed up from. To migrate Rancher to a new cluster, follow the steps to [migrate Rancher.](../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md)

* While restoring Rancher on the same setup, the Rancher deployment is manually scaled down before the restore starts, then the operator will scale it back up once the restore completes. As a result, Rancher and its UI will be unavailable until the restore is complete. While the UI is unavailable, use the original cluster kubeconfig with the restore YAML file: `kubectl create -f restore.yaml`.

:::
|
||||
|
||||
### Step 1: Create the Restore Custom Resource

1. Click **☰ > Cluster Management**.
1. Go to the local cluster and click **Explore**.
1. In the left navigation bar, click **Rancher Backups > Restore**.

:::note

If the Rancher Backups app is not visible, you will need to install it from the Charts page in **Apps**. Refer [here](../../../how-to-guides/new-user-guides/helm-charts-in-rancher/helm-charts-in-rancher.md#access-charts) for more information.

:::

1. Click **Create**.
1. Create the Restore with the form or with YAML. For help creating the Restore resource using the online form, refer to the [configuration reference](../../../reference-guides/backup-restore-configuration/restore-configuration.md) and to the [examples.](../../../reference-guides/backup-restore-configuration/examples.md)
1. To use the YAML editor, you can click **Create > Create from YAML** and enter the Restore YAML. The following is an example Restore custom resource:

```yaml
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-migration
spec:
  backupFilename: backup-b0450532-cee1-4aa1-a881-f5f48a007b1c-2020-09-15T07-27-09Z.tar.gz
  encryptionConfigSecretName: encryptionconfig
  storageLocation:
    s3:
      credentialSecretName: s3-creds
      credentialSecretNamespace: default
      bucketName: rancher-backups
      folder: rancher
      region: us-west-2
      endpoint: s3.us-west-2.amazonaws.com
```

For help configuring the Restore, refer to the [configuration reference](../../../reference-guides/backup-restore-configuration/restore-configuration.md) and to the [examples.](../../../reference-guides/backup-restore-configuration/examples.md)

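After the Restore is created, its progress can also be followed directly from the custom resource; a hedged sketch using the resource name from the example above:

```
kubectl get restores.resources.cattle.io
kubectl describe restores.resources.cattle.io restore-migration
```
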

1. Click **Create**.

**Result:** The restore begins, and the resources from the backup file are restored in this order:

1. Custom Resource Definitions (CRDs)
2. Cluster-scoped resources
3. Namespaced resources

To check how the restore is progressing, check the logs of the operator:

```
kubectl get pods -n cattle-resources-system
kubectl logs -n cattle-resources-system -f <rancher-backup-operator-pod>
```

Replace `<rancher-backup-operator-pod>` with the pod name shown by the first command.

### Step 2: Roll Back to a Previous Rancher Version

Rancher can be rolled back using the Helm CLI. To roll back to the previous version:

```
helm rollback rancher -n cattle-system
```

If the previous revision is not the intended target, you can specify a revision to roll back to. To see the deployment history:

```
helm history rancher -n cattle-system
```

When the target revision is determined, perform the rollback. This example will roll back to revision `3`:

```
helm rollback rancher 3 -n cattle-system
```

## Rolling Back to Rancher v2.2-v2.4+

To roll back to Rancher before v2.5, follow the procedure detailed in [Restoring Backups — Kubernetes installs.](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.0-2.4/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-rancher-launched-kubernetes-clusters-from-backup.md) Restoring a snapshot of the Rancher server cluster will revert Rancher to the version and state at the time of the snapshot.

For information on how to roll back Rancher installed with Docker, refer to [this page.](../other-installation-methods/rancher-on-a-single-node-with-docker/roll-back-docker-installed-rancher.md)

:::note

Managed clusters are authoritative for their state. This means restoring the Rancher server will not revert workload deployments or changes made on managed clusters after the snapshot was taken.

:::

## Rolling Back to Rancher v2.0-v2.1

Rolling back to Rancher v2.0-v2.1 is no longer supported. The instructions for rolling back to these versions are preserved [here](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.0-2.4/how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-rancher-launched-kubernetes-clusters-from-backup/roll-back-to-v2.0-v2.1.md) and are intended to be used only in cases where upgrading to Rancher v2.2+ is not feasible.

---
title: Troubleshooting the Rancher Server Kubernetes Cluster
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/troubleshooting"/>
</head>

This section describes how to troubleshoot an installation of Rancher on a Kubernetes cluster.

### Relevant Namespaces

Most of the troubleshooting is done on objects in these three namespaces:

- `cattle-system` - `rancher` deployment and pods.
- `ingress-nginx` - Ingress controller pods and services.
- `cert-manager` - `cert-manager` pods.

### "default backend - 404"

A number of things can cause the ingress controller not to forward traffic to your Rancher instance. Most of the time it's due to a bad SSL configuration.

Things to check:

- [Is Rancher running?](#check-if-rancher-is-running)
- [Is the cert CN "Kubernetes Ingress Controller Fake Certificate"?](#cert-cn-is-kubernetes-ingress-controller-fake-certificate)

### Check if Rancher is Running

Use `kubectl` to check the `cattle-system` namespace and see if the Rancher pods are in a `Running` state.

```
kubectl -n cattle-system get pods

NAME                           READY     STATUS    RESTARTS   AGE
pod/rancher-784d94f59b-vgqzh   1/1       Running   0          10m
```

If the state is not `Running`, run a `describe` on the pod and check the Events.

```
kubectl -n cattle-system describe pod

...
Events:
  Type    Reason                 Age   From                Message
  ----    ------                 ----  ----                -------
  Normal  Scheduled              11m   default-scheduler   Successfully assigned rancher-784d94f59b-vgqzh to localhost
  Normal  SuccessfulMountVolume  11m   kubelet, localhost  MountVolume.SetUp succeeded for volume "rancher-token-dj4mt"
  Normal  Pulling                11m   kubelet, localhost  pulling image "rancher/rancher:v2.0.4"
  Normal  Pulled                 11m   kubelet, localhost  Successfully pulled image "rancher/rancher:v2.0.4"
  Normal  Created                11m   kubelet, localhost  Created container
  Normal  Started                11m   kubelet, localhost  Started container
```

### Check the Rancher Logs

Use `kubectl` to list the pods.

```
kubectl -n cattle-system get pods

NAME                           READY     STATUS    RESTARTS   AGE
pod/rancher-784d94f59b-vgqzh   1/1       Running   0          10m
```

Use `kubectl` and the pod name to list the logs from the pod.

```
kubectl -n cattle-system logs -f rancher-784d94f59b-vgqzh
```

### Cert CN is "Kubernetes Ingress Controller Fake Certificate"

Use your browser to check the certificate details. If the Common Name is "Kubernetes Ingress Controller Fake Certificate", something may have gone wrong with reading or issuing your SSL cert.
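
You can inspect the served certificate from the command line as well; a sketch, assuming Rancher is reachable at `rancher.my.org` on port 443:

```
openssl s_client -connect rancher.my.org:443 -servername rancher.my.org </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```

If the subject still shows the fake certificate, the ingress controller never received a valid certificate for that hostname.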

:::note

If you are using Let's Encrypt to issue certs, it can sometimes take a few minutes to issue the cert.

:::

### Checking for issues with cert-manager issued certs (Rancher Generated or LetsEncrypt)

`cert-manager` has three parts:

- The `cert-manager` pod in the `cert-manager` namespace.
- The `Issuer` object in the `cattle-system` namespace.
- The `Certificate` object in the `cattle-system` namespace.

Work backwards: do a `kubectl describe` on each object and check the events to track down what might be missing.

For example, there is a problem with the Issuer:

```
kubectl -n cattle-system describe certificate
...
Events:
  Type     Reason          Age                 From          Message
  ----     ------          ----                ----          -------
  Warning  IssuerNotReady  18s (x23 over 19m)  cert-manager  Issuer rancher not ready
```

```
kubectl -n cattle-system describe issuer
...
Events:
  Type     Reason         Age                 From          Message
  ----     ------         ----                ----          -------
  Warning  ErrInitIssuer  19m (x12 over 19m)  cert-manager  Error initializing issuer: secret "tls-rancher" not found
  Warning  ErrGetKeyPair  9m (x16 over 19m)   cert-manager  Error getting keypair for CA issuer: secret "tls-rancher" not found
```

### Checking for Issues with Your Own SSL Certs

Your certs are applied directly to the Ingress object in the `cattle-system` namespace.

Check the status of the Ingress object and see if it's ready.

```
kubectl -n cattle-system describe ingress
```

If it's ready and SSL is still not working, you may have a malformed cert or secret.
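
You can decode the cert and check its subject and validity with `openssl`. The commented command shows how to dump the cert from the Kubernetes secret; the secret name `tls-rancher-ingress` matches the default setup but may differ in yours. For illustration, this sketch generates a throwaway self-signed cert and inspects it:

```shell
# Cluster-dependent: dump the serving cert from the secret, e.g.:
#   kubectl -n cattle-system get secret tls-rancher-ingress \
#     -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
# Illustration only: create a throwaway self-signed cert to inspect
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=rancher.my.org" \
  -keyout tls.key -out tls.crt -days 1 2>/dev/null
# Check the subject (CN should match your Rancher hostname) and the validity window
openssl x509 -noout -subject -dates -in tls.crt
```

The subject's CN (and, for real certs, the SANs) must match the hostname you configured for Rancher, and the `notAfter` date must be in the future.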

Check the nginx-ingress-controller logs. Because the nginx-ingress-controller pod has multiple containers, you will need to specify the name of the container.

```
kubectl -n ingress-nginx logs -f nginx-ingress-controller-rfjrq nginx-ingress-controller
...
W0705 23:04:58.240571       7 backend_ssl.go:49] error obtaining PEM from secret cattle-system/tls-rancher-ingress: error retrieving secret cattle-system/tls-rancher-ingress: secret cattle-system/tls-rancher-ingress was not found
```

### No matches for kind "Issuer"

The SSL configuration option you have chosen requires cert-manager to be installed before installing Rancher, or else the following error is shown:

```
Error: validation failed: unable to recognize "": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
```

Install cert-manager and try installing Rancher again.

### Canal Pods show READY 2/3

The most common cause of this issue is that port 8472/UDP is not open between the nodes. Check your local firewall, network routing, or security groups.

Once the network issue is resolved, the `canal` pods should time out and restart to establish their connections.

### nginx-ingress-controller Pods show RESTARTS

The most common cause of this issue is that the `canal` pods have failed to establish the overlay network. See [Canal Pods show READY `2/3`](#canal-pods-show-ready-23) for troubleshooting.

### Failed to dial to /var/run/docker.sock: ssh: rejected: administratively prohibited (open failed)

Some causes of this error include:

* The user specified to connect with does not have permission to access the Docker socket. This can be checked by logging into the host and running the command `docker ps`:

  ```
  $ ssh user@server
  user@server$ docker ps
  CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
  ```

  See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) for how to set this up properly.

* When using RedHat/CentOS as the operating system, you cannot use the user `root` to connect to the nodes because of [Bugzilla #1527565](https://bugzilla.redhat.com/show_bug.cgi?id=1527565). You will need to add a separate user and configure it to access the Docker socket. See [Manage Docker as a non-root user](https://docs.docker.com/install/linux/linux-postinstall/#manage-docker-as-a-non-root-user) for how to set this up properly.

* The SSH server version is not version 6.7 or higher. This is needed for socket forwarding to work, which is used to connect to the Docker socket over SSH. This can be checked using `sshd -V` on the host you are connecting to, or using netcat:

  ```
  $ nc xxx.xxx.xxx.xxx 22
  SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.10
  ```
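
  The version comparison against 6.7 can be scripted from that banner; a small sketch (the sample banner is taken from the output above):

  ```shell
  # Parse the OpenSSH version out of an SSH banner and check it is >= 6.7
  banner="SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.10"
  ver=$(printf '%s\n' "$banner" | sed -n 's/.*OpenSSH_\([0-9]*\.[0-9]*\).*/\1/p')
  # Compare numerically: sort -V puts the smaller version first
  if [ "$(printf '%s\n' "6.7" "$ver" | sort -V | head -n1)" = "6.7" ]; then
    echo "OK: OpenSSH $ver supports socket forwarding"
  else
    echo "Too old: OpenSSH $ver (need 6.7 or higher)"
  fi
  # → Too old: OpenSSH 6.6 (need 6.7 or higher)
  ```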

### Failed to dial ssh using address [xxx.xxx.xxx.xxx:xx]: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

The key file specified as `ssh_key_path` is not correct for accessing the node. Double-check that you specified the correct `ssh_key_path` for the node and the correct user to connect with.

### Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

The node is not reachable on the configured `address` and `port`.

### Agent reports TLS errors

When using Rancher, you may encounter error messages from the `fleet-agent`, `system-agent`, or `cluster-agent`, such as the message below:

```
tls: failed to verify certificate: x509: failed to load system roots and no roots provided; readdirent /dev/null: not a directory
```

This occurs when Rancher was configured with `agent-tls-mode` set to `strict` but couldn't find CA certificates in the `cacerts` setting. To resolve the issue, set `agent-tls-mode` to `system-store`, or upload the CA for Rancher as described in [Adding TLS Secrets](../resources/add-tls-secrets.md).
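
To confirm which mode Rancher is currently using, you can read the `agent-tls-mode` setting on the local cluster; a sketch (an empty value means the default applies):

```
kubectl get setting agent-tls-mode -o jsonpath='{.value}{"\n"}'
```
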

### New Cluster Deployment is stuck in "Waiting for Agent to check in"

When Rancher has `agent-tls-mode` set to `strict`, new clusters may fail to provision and report a generic "Waiting for Agent to check in" error message. The root cause is similar to the TLS errors above: Rancher's agent can't determine which CA Rancher is using, or can't verify that Rancher's cert is actually signed by the specified certificate authority.

To resolve the issue, set `agent-tls-mode` to `system-store` or upload the CA for Rancher as described in [Adding TLS Secrets](../resources/add-tls-secrets.md).

---
title: Upgrades
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades"/>
</head>

The following instructions will guide you through upgrading a Rancher server that was installed on a Kubernetes cluster with Helm. These steps also apply to air-gapped installs with Helm.

For the instructions to upgrade Rancher installed with Docker, refer to [this page.](../other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher.md)

## Prerequisites

### Access to Kubeconfig

Helm should be run from the same location as your kubeconfig file, or the same location where you run your `kubectl` commands from.

If you installed Kubernetes with RKE2 or K3s, the kubeconfig is stored at `/etc/rancher/rke2/rke2.yaml` or `/etc/rancher/k3s/k3s.yaml`, depending on your chosen distribution.

The kubeconfig can also be manually targeted for the intended cluster with the `--kubeconfig` flag (see the [Helm docs](https://helm.sh/docs/helm/helm/)).
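
For example, on an RKE2 server node (a sketch assuming a default install path):

```
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
helm ls -n cattle-system
```
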

### Review Known Issues

Review the list of known issues for each Rancher version, which can be found in the release notes on [GitHub](https://github.com/rancher/rancher/releases) and on the [Rancher forums.](https://forums.rancher.com/c/announcements/12)

Note that upgrades _to_ or _from_ any chart in the [rancher-alpha repository](../resources/choose-a-rancher-version.md#helm-chart-repositories) aren't supported.

### Helm Version

:::important

In Rancher Community v2.13.1, if your registry configuration is one of the following, you may see Rancher generate the `cattle-cluster-agent` image with an incorrect `docker.io` path segment:

- Environments where a **cluster-scoped container registry** is configured for system images.
- Environments where a **global `system-default-registry`** is configured (e.g. air-gapped setups), even if no cluster-scoped registry is set.

**Workaround for Affected Setups:** Override the `cattle-cluster-agent` image via the `CATTLE_AGENT_IMAGE` environment variable. This value must **not** contain any registry prefix (Rancher handles that automatically). It should be set only to the repository and tag, for example: `rancher/rancher-agent:v2.13.1`

**Helm `install` example:**

```bash
helm install rancher rancher-latest/rancher \
  ...
  --set extraEnv[0].name=CATTLE_AGENT_IMAGE \
  --set extraEnv[0].value=rancher/rancher-agent:v2.13.1
```

**Helm `upgrade` example:**

```bash
helm upgrade rancher rancher-latest/rancher \
  ...
  --set extraEnv[0].name=CATTLE_AGENT_IMAGE \
  --set extraEnv[0].value=rancher/rancher-agent:v2.13.1
```
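
If you manage settings through a values file instead of `--set`, the equivalent fragment would be the following (a sketch; merge it into your existing values file):

```yaml
extraEnv:
  - name: CATTLE_AGENT_IMAGE
    value: rancher/rancher-agent:v2.13.1
```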

**Important Upgrade Note:**

The `CATTLE_AGENT_IMAGE` override is intended only as a temporary workaround for the affected configurations. Once a Rancher version is available that corrects this behavior, the `CATTLE_AGENT_IMAGE` override should be **removed** from Helm values, so that Rancher can resume managing the agent image normally and automatically track future image and tag changes. See [#53187](https://github.com/rancher/rancher/issues/53187#issuecomment-3676484603) for further information.

:::

The upgrade instructions assume you are using Helm 3.

<DeprecationHelm2 />

For migration of installs started with Helm 2, refer to the official [Helm 2 to 3 migration docs.](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) The [Helm 2 upgrade page](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.0-2.4/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/helm2.md) provides a copy of the older upgrade instructions that used Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible.

### For air-gapped installs: Populate private registry

For [air-gapped installs only,](../other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) collect and populate images for the new Rancher server version. Follow the guide to [populate your private registry](../other-installation-methods/air-gapped-helm-cli-install/publish-images.md) with the images for the Rancher version that you want to upgrade to.

### For upgrades with cert-manager older than 0.8.0

[Let's Encrypt will be blocking cert-manager instances older than 0.8.0 starting November 1st 2019.](https://community.letsencrypt.org/t/blocking-old-cert-manager-versions/98753) Upgrade cert-manager to the latest version by following [these instructions.](../resources/upgrade-cert-manager.md)

## Upgrade Outline

Follow these steps to upgrade the Rancher server:

### 1. Back up Your Kubernetes Cluster that is Running Rancher Server

Use the [backup application](../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher.md) to back up Rancher.

You'll use the backup as a restore point if something goes wrong during the upgrade.

### 2. Update the Helm chart repository

1. Update your local Helm repo cache.

   ```
   helm repo update
   ```

1. Get the repository name that you used to install Rancher.

   For information about the repos and their differences, see [Helm Chart Repositories](../resources/choose-a-rancher-version.md#helm-chart-repositories).

   - Latest: Recommended for trying out the newest features

     ```
     helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
     ```

   - Stable: Recommended for production environments

     ```
     helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
     ```

   - Alpha: Experimental preview of upcoming releases.

     ```
     helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha
     ```

     Note: Upgrades are not supported to, from, or between Alphas.

   ```
   helm repo list

   NAME                   URL
   stable                 https://charts.helm.sh/stable
   rancher-<CHART_REPO>   https://releases.rancher.com/server-charts/<CHART_REPO>
   ```

   :::note

   If you want to switch to a different Helm chart repository, please follow the [steps on how to switch repositories](../resources/choose-a-rancher-version.md#switching-to-a-different-helm-chart-repository). If you switch repositories, make sure to list the repositories again before continuing onto Step 3 to ensure you have the correct one added.

   :::

1. Fetch the latest chart to install Rancher from the Helm chart repository.

   This command pulls down the latest chart and saves it in the current directory as a `.tgz` file.

   ```plain
   helm fetch rancher-<CHART_REPO>/rancher
   ```

   You can fetch the chart for the specific version you are upgrading to by adding the `--version=` flag. For example:

   ```plain
   helm fetch rancher-<CHART_REPO>/rancher --version=2.6.8
   ```

### 3. Upgrade Rancher

This section describes how to upgrade normal (Internet-connected) or air-gapped installations of Rancher with Helm.

:::note Air Gap Instructions:

If you are installing Rancher in an air-gapped environment, skip the rest of this page and render the Helm template by following the instructions on [this page.](air-gapped-upgrades.md)

:::

Get the values, which were passed with `--set`, from the current Rancher Helm chart that is installed.

```
helm get values rancher -n cattle-system

hostname: rancher.my.org
```

:::note

There will be more values listed by this command. This is just an example of one of the values.

:::

:::tip

Your deployment name may vary; for example, if you're deploying Rancher through the AWS Marketplace, the deployment name is `rancher-stable`. Thus:

```
helm get values rancher-stable -n cattle-system

hostname: rancher.my.org
```

:::

If you are upgrading cert-manager to the latest version from v1.5 or below, follow the [cert-manager upgrade docs](../resources/upgrade-cert-manager.md#option-c-upgrade-cert-manager-from-versions-15-and-below) to learn how to upgrade cert-manager without needing to uninstall or reinstall Rancher. Otherwise, follow the [steps to upgrade Rancher](#steps-to-upgrade-rancher) below.

#### Steps to Upgrade Rancher

Upgrade Rancher to the latest version with all your settings.

Take all the values from the previous step and append them to the command using `--set key=value`:

```
helm upgrade rancher rancher-<CHART_REPO>/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org
```

:::note

The above is an example; there may be more values from the previous step that need to be appended.

:::

:::tip

If you deploy Rancher through the AWS Marketplace, the deployment name is `rancher-stable`. Thus:

```
helm upgrade rancher-stable rancher-<CHART_REPO>/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org
```

:::

Alternatively, it's possible to export the current values to a file and reference that file during upgrade. For example, to only change the Rancher version:

1. Export the current values to a file:

   ```
   helm get values rancher -n cattle-system -o yaml > values.yaml
   ```

1. Update only the Rancher version:

   ```
   helm upgrade rancher rancher-<CHART_REPO>/rancher \
     --namespace cattle-system \
     -f values.yaml \
     --version=2.6.8
   ```

### 4. Verify the Upgrade

Log into Rancher to confirm that the upgrade succeeded.
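
You can also confirm the rollout from the command line; a sketch (the deployment name is assumed to be `rancher`, as in the upgrade command above):

```
kubectl -n cattle-system rollout status deploy/rancher
kubectl -n cattle-system get deploy rancher -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```

The second command prints the running image tag, which should match the version you upgraded to.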

:::tip

Having network issues following the upgrade? See [Restoring Cluster Networking](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.0-2.4/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/namespace-migration.md).

:::

## Known Upgrade Issues

A list of known issues for each Rancher version can be found in the release notes on [GitHub](https://github.com/rancher/rancher/releases) and on the [Rancher forums.](https://forums.rancher.com/c/announcements/12)

---
title: Installing/Upgrading Rancher
description: Learn how to install Rancher in development and production environments. Read about single node and high availability installation
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade"/>
</head>

This section provides an overview of the architecture options for installing Rancher, describing the advantages of each option.

## Terminology

In this section:

- **The Rancher server** manages and provisions Kubernetes clusters. You can interact with downstream Kubernetes clusters through the Rancher server's user interface. The Rancher management server can be installed on any Kubernetes cluster, including hosted clusters, such as Amazon EKS clusters.
- **RKE (Rancher Kubernetes Engine)** is a certified Kubernetes distribution and CLI/library which creates and manages a Kubernetes cluster.
- **K3s (Lightweight Kubernetes)** is also a fully compliant Kubernetes distribution. It is newer than RKE, easier to use, and more lightweight, with a binary size of less than 100 MB.
- **RKE2** is a fully conformant Kubernetes distribution that focuses on security and compliance within the U.S. Federal Government sector.

## Overview of Installation Options

Rancher can be installed on these main architectures:

### High-availability Kubernetes Install with the Helm CLI

We recommend using Helm, a Kubernetes package manager, to install Rancher on multiple nodes on a dedicated Kubernetes cluster. For RKE clusters, three nodes are required to achieve a high-availability cluster. For K3s clusters, only two nodes are required.

### Rancher on EKS Install with the AWS Marketplace

Rancher can be installed on Amazon Elastic Kubernetes Service (EKS) [through the AWS Marketplace](../quick-start-guides/deploy-rancher-manager/aws-marketplace.md). The EKS cluster deployed is production-ready and follows AWS best practices.

### Single-node Kubernetes Install

Rancher can be installed on a single-node Kubernetes cluster. In this case, the Rancher server doesn't have high availability, which is important for running Rancher in production.

However, this option is useful if you want to save resources by using a single node in the short term, while preserving a high-availability migration path. In the future, you can add nodes to the cluster to get a high-availability Rancher server.

### Docker Install

For test and demonstration purposes, Rancher can be installed with Docker on a single node. A local Kubernetes cluster is installed in the single Docker container, and Rancher is installed on the local cluster.

The Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.](../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md)

### Other Options

There are also separate instructions for installing Rancher in an air gap environment or behind an HTTP proxy:

| Level of Internet Access           | Kubernetes Installation - Strongly Recommended | Docker Installation |
| ---------------------------------- | ---------------------------------------------- | ------------------- |
| With direct access to the Internet | [Docs](install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md) | [Docs](other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md) |
| Behind an HTTP proxy               | [Docs](other-installation-methods/rancher-behind-an-http-proxy/rancher-behind-an-http-proxy.md) | These [docs,](other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md) plus this [configuration](../../reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md) |
| In an air gap environment          | [Docs](other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) | [Docs](other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) |

We recommend installing Rancher on a Kubernetes cluster, because in a multi-node cluster, the Rancher management server becomes highly available. This high-availability configuration helps maintain consistent access to the downstream Kubernetes clusters that Rancher will manage.

For that reason, we recommend that for a production-grade architecture, you set up a high-availability Kubernetes cluster, then install Rancher on it. After Rancher is installed, you can use Rancher to deploy and manage Kubernetes clusters.

For testing or demonstration purposes, you can install Rancher in a single Docker container. In this Docker install, you can use Rancher to set up Kubernetes clusters out-of-the-box. The Docker install allows you to explore the Rancher server functionality, but it is intended for development and testing purposes only.

Our [instructions for installing Rancher on Kubernetes](install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md) describe how to first use K3s or RKE to create and manage a Kubernetes cluster, then install Rancher onto that cluster.

When the nodes in your Kubernetes cluster are running and fulfill the [node requirements,](installation-requirements/installation-requirements.md) you will use Helm to deploy Rancher onto Kubernetes. Helm uses Rancher's Helm chart to install a replica of Rancher on each node in the Kubernetes cluster. We recommend using a load balancer to direct traffic to each replica of Rancher in the cluster.

For a longer discussion of Rancher architecture, refer to the [architecture overview,](../../reference-guides/rancher-manager-architecture/rancher-manager-architecture.md) [recommendations for production-grade architecture,](../../reference-guides/rancher-manager-architecture/architecture-recommendations.md) or our [best practices guide.](../../reference-guides/best-practices/rancher-server/tips-for-running-rancher.md)

## Prerequisites

Before installing Rancher, make sure that your nodes fulfill all of the [installation requirements.](installation-requirements/installation-requirements.md)

## Architecture Tip

For the best performance and greater security, we recommend a separate, dedicated Kubernetes cluster for the Rancher management server. Running user workloads on this cluster is not advised. After deploying Rancher, you can [create or import clusters](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md) for running your workloads.

For more architecture recommendations, refer to [this page.](../../reference-guides/rancher-manager-architecture/architecture-recommendations.md)

### More Options for Installations on a Kubernetes Cluster

Refer to the [Helm chart options](installation-references/helm-chart-options.md) for details on installing Rancher on a Kubernetes cluster with other configurations, including:

- With [API auditing to record all transactions](installation-references/helm-chart-options.md#api-audit-log)
- With [TLS termination on a load balancer](installation-references/helm-chart-options.md#external-tls-termination)
- With a [custom Ingress](installation-references/helm-chart-options.md#customizing-your-ingress)

In the Rancher installation instructions, we recommend using K3s or RKE to set up a Kubernetes cluster before installing Rancher on the cluster. Both K3s and RKE have many configuration options for customizing the Kubernetes cluster to suit your specific environment. For the full list of their capabilities, refer to their documentation:

- [RKE configuration options](https://rancher.com/docs/rke/latest/en/config-options/)
- [K3s configuration options](https://rancher.com/docs/k3s/latest/en/installation/install-options/)

### More Options for Installations with Docker

Refer to the [docs about options for Docker installs](other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md) for details about other configurations, including:

- With [API auditing to record all transactions](../../reference-guides/single-node-rancher-in-docker/advanced-options.md#api-audit-log)
- With an [external load balancer](../../how-to-guides/advanced-user-guides/configure-layer-7-nginx-load-balancer.md)
- With a [persistent data store](../../reference-guides/single-node-rancher-in-docker/advanced-options.md#persistent-data)
@@ -0,0 +1,77 @@
---
title: Feature Flags
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-references/feature-flags"/>
</head>

With feature flags, you can try out optional or experimental features, and enable legacy features that are being phased out.

To learn more about feature values and how to enable them, see [Enabling Experimental Features](../../../how-to-guides/advanced-user-guides/enable-experimental-features/enable-experimental-features.md).

:::note

Some feature flags require a restart of the Rancher container. Features that require a restart are marked in the Rancher UI.

:::

The following is a list of feature flags available in Rancher. If you've upgraded from a previous Rancher version, you may see additional flags in the Rancher UI, such as `proxy` or `dashboard` (both [discontinued](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.5/reference-guides/installation-references/feature-flags.md)):
- `aggregated-roletemplates`: Uses the cluster role aggregation architecture for RoleTemplates, ProjectRoleTemplateBindings, and ClusterRoleTemplateBindings. See [RoleTemplate Aggregation](../../../how-to-guides/advanced-user-guides/enable-experimental-features/role-template-aggregation.md) for more information.
- `clean-stale-secrets`: Removes stale secrets from the `cattle-impersonation-system` namespace. This slowly cleans up old secrets that are no longer used by the impersonation system.
- `continuous-delivery`: Allows Fleet GitOps to be disabled separately from Fleet. See [Continuous Delivery](../../../how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery.md) for more information.
- `fleet`: The Rancher provisioning framework in v2.6 and later requires Fleet. The flag is automatically enabled when you upgrade, even if you disabled it in an earlier version of Rancher. See [Continuous Delivery with Fleet](../../../integrations-in-rancher/fleet/fleet.md) for more information.
- `harvester`: Manages access to the Virtualization Management page, where users can navigate directly to Harvester clusters and access the Harvester UI. See [Harvester Integration Overview](../../../integrations-in-rancher/harvester/overview.md) for more information.
- `imperative-api-extension`: Enables Rancher's [extension API server](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) to register new APIs with Kubernetes. This flag is enabled by default. See the [Extension API Server](../../../api/extension-apiserver.md) page for more information.
- `istio-virtual-service-ui`: Enables a [visual interface](../../../how-to-guides/advanced-user-guides/enable-experimental-features/istio-traffic-management-features.md) to create, read, update, and delete Istio virtual services and destination rules, which are Istio traffic management features.
- `legacy`: Enables a set of features from 2.5.x and earlier that are slowly being phased out in favor of newer implementations. These are a mix of deprecated features and features that will eventually be available in newer versions. This flag is disabled by default on new Rancher installations. If you're upgrading from a previous version of Rancher, this flag is enabled.
- `managed-system-upgrade-controller`: Enables the installation of the system-upgrade-controller app in downstream imported RKE2/K3s clusters, as well as in the local cluster if it is an RKE2/K3s cluster.

:::note Important:

The `managed-system-upgrade-controller` flag is intended for **internal use only** and does not have an associated Feature CR. Use with caution.

To control whether Rancher should manage the Kubernetes version of imported RKE2/K3s clusters, use the [imported-cluster-version-management](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#configuring-version-management-for-rke2-and-k3s-clusters) feature, available in Rancher v2.11.0 and later.

:::

:::danger

If the `managed-system-upgrade-controller` flag was **disabled** in Rancher v2.10.x, and any imported RKE2/K3s clusters were upgraded **outside of Rancher**, follow the steps below to prevent the unexpected installation of the system-upgrade-controller app and to ensure the [imported-cluster-version-management](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#configuring-version-management-for-rke2-and-k3s-clusters) feature works correctly:

1. Upgrade Rancher to v2.11.0 or newer, making sure to **retain** the `managed-system-upgrade-controller=false` feature flag in Helm values if it was set during the v2.10.x installation.
1. After Rancher is fully up and running, disable the `imported-cluster-version-management` setting. You can do this either through the Rancher UI by clicking **☰ > Global Settings > Settings > imported-cluster-version-management**, or by editing the corresponding `Setting.management.cattle.io/v3` custom resource via kubectl.
1. Perform a second Helm upgrade, this time omitting the `managed-system-upgrade-controller=false` feature flag.

Imported cluster version management is now disabled by default, and Rancher no longer automatically installs the system-upgrade-controller app on imported clusters.

You can enable this feature on a per-cluster basis. For more information, refer to the [documentation](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#configuring-version-management-for-rke2-and-k3s-clusters).

:::
- `multi-cluster-management`: Allows multi-cluster provisioning and management of Kubernetes clusters. This flag can only be set at install time. It can't be enabled or disabled later.
- `rke2`: Enables provisioning RKE2 clusters. This flag is enabled by default.
- `token-hashing`: Enables token hashing. Once enabled, existing tokens are hashed and all new tokens are hashed automatically with the SHA256 algorithm. Once a token is hashed, it can't be undone, and this flag can't be disabled after it's enabled. See [API Tokens](../../../api/api-tokens.md#token-hashing) for more information.
- `uiextension`: Enables UI extensions. This flag is enabled by default. Enabling or disabling the flag forces the Rancher pod to restart. The first time this flag is set to `Active`, it creates a CRD and enables the controllers and endpoints necessary for the feature to work. If set to `Disabled`, it disables the previously mentioned controllers and endpoints. Setting `uiextension` to `Disabled` has no effect on the CRD: it does not create a CRD if it does not yet exist, nor does it delete the CRD if it already exists.
- `unsupported-storage-drivers`: Enables types for storage providers and provisioners that aren't enabled by default. See [Allow Unsupported Storage Drivers](../../../how-to-guides/advanced-user-guides/enable-experimental-features/unsupported-storage-drivers.md) for more information.
- `ui-sql-cache`: Enables an SQLite-based cache for UI tables and server-side pagination. See [UI Server-Side Pagination](../../../how-to-guides/advanced-user-guides/ui-server-side-pagination.md) for more information.

The following table shows the availability and default values for some feature flags in Rancher. Features marked "GA" are generally available:

| Feature Flag Name | Default Value | Status | Available As Of | Additional Information |
| ----------------------------- | ------------- | ------------ | --------------- | ---------------------- |
| `aggregated-roletemplates` | `Disabled` | Experimental | v2.11.0 | This flag value is locked on install and can't be changed. |
| `clean-stale-secrets` | `Active` | GA | v2.10.2 | |
| `continuous-delivery` | `Active` | GA | v2.6.0 | |
| `external-rules` | v2.7.14: `Disabled`, v2.8.5: `Active` | Removed | v2.7.14, v2.8.5 | This flag affected [external `RoleTemplate` behavior](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/cluster-and-project-roles.md#external-roletemplate-behavior). It is removed in Rancher v2.9.0 and later, as the behavior is enabled by default. |
| `fleet` | `Active` | Can no longer be disabled | v2.6.0 | |
| `fleet` | `Active` | GA | v2.5.0 | |
| `harvester` | `Active` | Experimental | v2.6.1 | |
| `imperative-api-extension` | `Active` | GA | v2.11.0 | |
| `legacy` | `Disabled` for new installs, `Active` for upgrades | GA | v2.6.0 | |
| `managed-system-upgrade-controller` | `Active` | GA | v2.10.0 | |
| `rke2` | `true` | Experimental | v2.6.0 | |
| `token-hashing` | `Disabled` for new installs, `Active` for upgrades | GA | v2.6.0 | |
| `uiextension` | `Active` | GA | v2.9.0 | |
| `ui-sql-cache` | `Active` | GA | v2.9.0 | |
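Most of the flags above are backed by `Feature` custom resources on the Rancher local cluster, so you can inspect them from the command line. The following is a sketch; it assumes the `features.management.cattle.io` resource type with a `.spec.value` override and a `.status.default` field, which may differ in your Rancher version:

```shell
# List feature flags known to this Rancher install
kubectl get features.management.cattle.io

# Inspect one flag; when set, .spec.value overrides .status.default
kubectl get features.management.cattle.io fleet -o yaml
```

Flags marked as locked on install (such as `aggregated-roletemplates`) should not be edited this way.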
@@ -0,0 +1,316 @@
---
title: Rancher Helm Chart Options
keywords: [rancher helm chart, rancher helm options, rancher helm chart options, helm chart rancher, helm options rancher, helm chart options rancher]
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-references/helm-chart-options"/>
</head>

This page is a configuration reference for the Rancher Helm chart.

For help choosing a Helm chart version, refer to [this page.](../../../getting-started/installation-and-upgrade/resources/choose-a-rancher-version.md)

For information on enabling experimental features, refer to [this page.](../../../how-to-guides/advanced-user-guides/enable-experimental-features/enable-experimental-features.md)

## Common Options

| Option | Default Value | Description |
| ------------------------- | ------------- | ---------------------------------------------------------------------------------- |
| `bootstrapPassword` | " " | `string` - Set the [bootstrap password](#bootstrap-password) for the first admin user. After logging in, the admin should reset their password. A randomly generated bootstrap password is used if this value is not set. |
| `hostname` | " " | `string` - the Fully Qualified Domain Name for your Rancher server |
| `ingress.tls.source` | "rancher" | `string` - Where to get the cert for the ingress. - "rancher, letsEncrypt, secret" |
| `letsEncrypt.email` | " " | `string` - Your email address |
| `letsEncrypt.environment` | "production" | `string` - Valid options: "staging, production" |
| `privateCA` | false | `bool` - Set to true if your cert is signed by a private CA |

<br/>
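Taken together, a minimal install using only these common options might look like the following. This is a sketch: the release name, repository alias (`rancher-latest`), namespace, hostname, and password are illustrative placeholders, not required values.

```shell
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set bootstrapPassword=admin1234567890 \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=admin@example.com \
  --set letsEncrypt.environment=production
```

Any of the advanced options below can be appended as additional `--set` flags on the same command.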
## Advanced Options

| Option | Default Value | Description |
| ------------------------------ | ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| `additionalTrustedCAs` | false | `bool` - See [Additional Trusted CAs](#additional-trusted-cas) |
| `addLocal` | "true" | `string` - Have Rancher detect and import the "local" (upstream) Rancher server cluster. _Note: This option is no longer available in v2.5.0. Consider using the `restrictedAdmin` option to prevent users from modifying the local cluster._ |
| `agentTLSMode` | "" | `string` - either `system-store` or `strict`. See [Agent TLS Enforcement](./tls-settings.md#agent-tls-enforcement) |
| `antiAffinity` | "preferred" | `string` - AntiAffinity rule for Rancher pods - "preferred, required" |
| `auditLog.destination` | "sidecar" | `string` - Stream to sidecar container console or hostPath volume - "sidecar, hostPath" |
| `auditLog.hostPath` | "/var/log/rancher/audit" | `string` - log file destination on host (only applies when `auditLog.destination` is set to `hostPath`) |
| `auditLog.enabled` | false | `bool` - Enables or disables audit logging. |
| `auditLog.level` | 0 | `int` - Sets the [API Audit Log](../../../how-to-guides/advanced-user-guides/enable-api-audit-log.md) level [0-3]. |
| `auditLog.maxAge` | 1 | `int` - maximum number of days to retain old audit log files (only applies when `auditLog.destination` is set to `hostPath`) |
| `auditLog.maxBackup` | 1 | `int` - maximum number of audit log files to retain (only applies when `auditLog.destination` is set to `hostPath`) |
| `auditLog.maxSize` | 100 | `int` - maximum size in megabytes of the audit log file before it gets rotated (only applies when `auditLog.destination` is set to `hostPath`) |
| `auditLog.image.repository` | "registry.suse.com/bci/bci-micro" | `string` - Location for the image used to collect audit logs. |
| `auditLog.image.tag` | "15.4.14.3" | `string` - Tag for the image used to collect audit logs. |
| `auditLog.image.pullPolicy` | "IfNotPresent" | `string` - Override imagePullPolicy for auditLog images - "Always", "Never", "IfNotPresent". |
| `busyboxImage` | "" | `string` - Image location for the busybox image used to collect audit logs. _Note: This option is deprecated; use `auditLog.image.repository` to control the auditing sidecar image._ |
| `certmanager.version` | "" | `string` - set cert-manager compatibility |
| `debug` | false | `bool` - set debug flag on rancher server |
| `extraEnv` | [] | `list` - set additional environment variables for Rancher |
| `imagePullSecrets` | [] | `list` - list of names of Secret resources containing private registry credentials |
| `ingress.configurationSnippet` | "" | `string` - additional NGINX configuration. Can be used for proxy configuration. |
| `ingress.extraAnnotations` | {} | `map` - additional annotations to customize the ingress |
| `ingress.enabled` | true | `bool` - When set to false, Helm does not install a Rancher ingress. Set the option to false to deploy your own ingress. |
| `letsEncrypt.ingress.class` | "" | `string` - optional ingress class for the cert-manager acmesolver ingress that responds to the Let's Encrypt ACME challenges. Options: traefik, nginx. |
| `noProxy` | "127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local,cattle-system.svc" | `string` - comma-separated list of hostnames or IP addresses that bypass the proxy |
| `proxy` | "" | `string` - HTTP[S] proxy server for Rancher |
| `rancherImage` | "rancher/rancher" | `string` - rancher image source |
| `rancherImagePullPolicy` | "IfNotPresent" | `string` - Override imagePullPolicy for rancher server images - "Always", "Never", "IfNotPresent" |
| `rancherImageTag` | same as chart version | `string` - rancher/rancher image tag |
| `replicas` | 3 | `int` - Number of Rancher server replicas. Setting to -1 dynamically chooses 1, 2, or 3 based on the number of available nodes in the cluster. |
| `resources` | {} | `map` - rancher pod resource requests & limits |
| `systemDefaultRegistry` | "" | `string` - private registry to be used for all system container images, e.g., http://registry.example.com/ |
| `tls` | "ingress" | `string` - See [External TLS Termination](#external-tls-termination) for details. - "ingress, external" |
| `useBundledSystemChart` | `false` | `bool` - select to use the system-charts packaged with Rancher server. This option is used for air-gapped installations. |

In Rancher v2.12.0 and later, Rancher uses an audit logging controller that watches `AuditPolicy` CRs to configure additional redactions. For more information, see [API Audit Log](../../../how-to-guides/advanced-user-guides/enable-api-audit-log.md).
### Bootstrap Password

You can [set a specific bootstrap password](../resources/bootstrap-password.md) during Rancher installation. If you don't set a specific bootstrap password, Rancher randomly generates a password for the first admin account.

When you log in for the first time, use the bootstrap password you set to log in. If you did not set a bootstrap password, the Rancher UI shows commands that can be used to [retrieve the bootstrap password](../resources/bootstrap-password.md#retrieving-the-bootstrap-password). Run those commands and log in to the account. After you log in for the first time, you are asked to reset the admin password.
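For reference, retrieving the generated password from the command line generally looks like the following. This is a sketch: it assumes the secret is named `bootstrap-secret` in the `cattle-system` namespace with a `bootstrapPassword` key, which follows the convention of recent Rancher Helm charts and may differ in your install.

```shell
# Print the randomly generated bootstrap password
kubectl get secret --namespace cattle-system bootstrap-secret \
  -o go-template='{{.data.bootstrapPassword | base64decode}}{{"\n"}}'
```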
### API Audit Log

To enable the [API Audit Log](../../../how-to-guides/advanced-user-guides/enable-api-audit-log.md), set the following chart values:

```plain
--set auditLog.enabled=true --set auditLog.level=1
```

By default, enabling audit logging creates a sidecar container in the Rancher pod. This container (`rancher-audit-log`) streams the log to `stdout`, so you can collect it as you would any container log. When using the sidecar as the audit log destination, the `hostPath`, `maxAge`, `maxBackup`, and `maxSize` options do not apply; use your OS or Docker daemon's log rotation features to control disk space use. To collect these logs, enable [logging](../../../integrations-in-rancher/logging/logging.md) for the Rancher server cluster or the `System` Project.

Set `auditLog.destination` to `hostPath` to forward logs to a volume shared with the host system instead of streaming them to a sidecar container. When setting the destination to `hostPath`, you may want to adjust the other `auditLog` parameters for log rotation.
### Setting Extra Environment Variables

You can set extra environment variables for the Rancher server using `extraEnv`. This list is passed to the Rancher deployment in YAML format and is embedded under `env` for the Rancher container. `extraEnv` can use any of the keys referenced in the Kubernetes documentation on [defining environment variables for a container](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container).

Consider an example that uses the `name` and `value` keys:

```plain
--set 'extraEnv[0].name=CATTLE_TLS_MIN_VERSION'
--set 'extraEnv[0].value=1.0'
```

If you pass sensitive data, such as proxy authentication credentials, as the value of an environment variable, it is strongly recommended to use a secret reference instead. This prevents sensitive data from being exposed in Helm or the Rancher deployment.

Consider an example that uses the `name`, `valueFrom.secretKeyRef.name`, and `valueFrom.secretKeyRef.key` keys. See the example in [HTTP Proxy](#http-proxy).
### TLS Settings

When you install Rancher inside of a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. The possible TLS settings depend on the ingress controller in use.

See [TLS settings](tls-settings.md) for more information and options.

### Import `local` Cluster

By default, the Rancher server detects and imports the `local` cluster it's running on. Users with access to the `local` cluster essentially have "root" access to all the clusters managed by the Rancher server.

:::caution

If you turn `addLocal` off, most Rancher v2.5 features won't work, including the EKS provisioner.

:::

If this is a concern in your environment, you can set this option to "false" on your initial install.

This option is only effective on the initial Rancher install. See [Issue 16522](https://github.com/rancher/rancher/issues/16522) for more information.

```plain
--set addLocal="false"
```
### Customizing your Ingress

To customize or use a different ingress with the Rancher server, you can set your own ingress annotations.

Example of setting a custom certificate issuer:

```plain
--set ingress.extraAnnotations.'cert-manager\.io/cluster-issuer'=issuer-name
```

Example of setting a static proxy header with `ingress.configurationSnippet`. This value is parsed like a template, so variables can be used:

```plain
--set ingress.configurationSnippet='more_set_input_headers X-Forwarded-Host {{ .Values.hostname }};'
```
### HTTP Proxy

Rancher requires internet access for some functionality, such as Helm charts. Use `proxy` to set your proxy server, or use `extraEnv` to set the `HTTPS_PROXY` environment variable to point to your proxy server.

Add your IP exceptions to the `noProxy` chart value as a comma-separated list. Make sure you add the following values:

- Pod cluster IP range (default: `10.42.0.0/16`).
- Service cluster IP range (default: `10.43.0.0/16`).
- Internal cluster domains (default: `.svc,.cluster.local`).
- Any worker cluster `controlplane` nodes.

Rancher supports CIDR notation ranges in this list.

When not including sensitive data, the `proxy` or `extraEnv` chart options can be used. When using `extraEnv`, the `noProxy` Helm option is ignored; therefore, the `NO_PROXY` environment variable must also be set with `extraEnv`.

The following is an example of setting the proxy using the `proxy` chart option:

```plain
--set proxy="http://<proxy_url:proxy_port>/"
```

Example of setting the proxy using the `extraEnv` chart option:

```plain
--set extraEnv[1].name=HTTPS_PROXY
--set extraEnv[1].value="http://<proxy_url>:<proxy_port>/"
--set extraEnv[2].name=NO_PROXY
--set extraEnv[2].value="127.0.0.0/8\,10.0.0.0/8\,172.16.0.0/12\,192.168.0.0/16\,.svc\,.cluster.local"
```

When including sensitive data, such as proxy authentication credentials, use the `extraEnv` option with `valueFrom.secretKeyRef` to prevent sensitive data from being exposed in Helm or the Rancher deployment.

The following is an example of using `extraEnv` to configure the proxy. This example secret would contain the value `"http://<username>:<password>@<proxy_url>:<proxy_port>/"` in the secret's `"https-proxy-url"` key:

```plain
--set extraEnv[1].name=HTTPS_PROXY
--set extraEnv[1].valueFrom.secretKeyRef.name=secret-name
--set extraEnv[1].valueFrom.secretKeyRef.key=https-proxy-url
--set extraEnv[2].name=NO_PROXY
--set extraEnv[2].value="127.0.0.0/8\,10.0.0.0/8\,172.16.0.0/12\,192.168.0.0/16\,.svc\,.cluster.local"
```

To learn more about how to configure environment variables, refer to [Define Environment Variables for a Container](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container).
### Additional Trusted CAs

If you have private registries, catalogs, or a proxy that intercepts certificates, you may need to add more trusted CAs to Rancher.

```plain
--set additionalTrustedCAs=true
```

Once the Rancher deployment is created, copy your CA certs in PEM format into a file named `ca-additional.pem` and use `kubectl` to create the `tls-ca-additional` secret in the `cattle-system` namespace.

```plain
kubectl -n cattle-system create secret generic tls-ca-additional --from-file=ca-additional.pem=./ca-additional.pem
```
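If you later rotate the certificates in this secret, the Rancher pods generally need to be restarted to pick up the change. A minimal sketch, assuming the Rancher deployment is named `rancher` in the `cattle-system` namespace:

```shell
# Replace the secret with the updated CA bundle, then restart Rancher
kubectl -n cattle-system create secret generic tls-ca-additional \
  --from-file=ca-additional.pem=./ca-additional.pem \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl -n cattle-system rollout restart deploy/rancher
```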
### Private Registry and Air Gap Installs

For details on installing Rancher with a private registry, see the [air gap installation docs.](../other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md)
## External TLS Termination

We recommend configuring your load balancer as a Layer 4 balancer, forwarding plain 80/tcp and 443/tcp to the Rancher management cluster nodes. The ingress controller on the cluster will redirect HTTP traffic on port 80 to HTTPS on port 443.

You may terminate SSL/TLS on an L7 load balancer external to the Rancher cluster (ingress). Use the `--set tls=external` option and point your load balancer at port 80 on all of the Rancher cluster nodes. This exposes the Rancher interface on HTTP port 80. Be aware that traffic from clients that are allowed to connect directly to the Rancher cluster will not be encrypted. If you choose to do this, we recommend that you restrict direct access at the network level to just your load balancer.

:::note

If you are using a private CA signed certificate (or if `agent-tls-mode` is set to `strict`), add `--set privateCA=true` and see [Adding TLS Secrets - Using a Private CA Signed Certificate](../../../getting-started/installation-and-upgrade/resources/add-tls-secrets.md) to add the CA cert for Rancher.

:::

Your load balancer must support long-lived WebSocket connections and must insert proxy headers so Rancher can route links correctly.
### Configuring Ingress for External TLS when Using NGINX v0.22

In NGINX v0.22, the behavior of NGINX has [changed](https://github.com/kubernetes/ingress-nginx/blob/06efac9f0b6f8f84b553f58ccecf79dc42c75cc6/Changelog.md) regarding forwarding headers and external TLS termination. If you use an external TLS termination configuration with NGINX v0.22, you must enable the `use-forwarded-headers` option for the ingress.

For RKE2 installations, create a custom `rke2-ingress-nginx-config.yaml` file at `/var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml` containing this required setting. Without it, the external load balancer continuously responds with the redirect loops it receives from the ingress controller. The file can be created before or after Rancher is installed; the RKE2 server will notice the addition and apply it automatically.
```yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        use-forwarded-headers: "true"
```
### Required Headers

- `Host`
- `X-Forwarded-Proto`
- `X-Forwarded-Port`
- `X-Forwarded-For`

### Recommended Timeouts

- Read Timeout: `1800 seconds`
- Write Timeout: `1800 seconds`
- Connect Timeout: `30 seconds`

### Health Checks

Rancher responds `200` to health checks on the `/healthz` endpoint.
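A load balancer health probe can therefore be as simple as the following sketch, where `FQDN` stands in for your Rancher hostname (the `-k` flag skips certificate verification and is only appropriate for self-signed test setups):

```shell
# Probe the Rancher health endpoint and print the HTTP status code
curl -sk -o /dev/null -w '%{http_code}\n' "https://FQDN/healthz"
```

A healthy Rancher instance returns `200`.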
### Example NGINX config

This NGINX configuration is tested on NGINX 1.14.

:::caution

This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - HTTP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/).

:::

- Replace `IP_NODE_1`, `IP_NODE_2`, and `IP_NODE_3` with the IP addresses of the nodes in your cluster.
- Replace both occurrences of `FQDN` with the DNS name for Rancher.
- Replace `/certs/fullchain.pem` and `/certs/privkey.pem` with the locations of the server certificate and the server certificate key, respectively.
```
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
    upstream rancher {
        server IP_NODE_1:80;
        server IP_NODE_2:80;
        server IP_NODE_3:80;
    }

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        listen 443 ssl http2;
        server_name FQDN;
        ssl_certificate /certs/fullchain.pem;
        ssl_certificate_key /certs/privkey.pem;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rancher;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            # This allows the execute shell window to remain open for up to 15 minutes.
            # Without this parameter, the default is 1 minute and the window closes automatically.
            proxy_read_timeout 900s;
            proxy_buffering off;
        }
    }

    server {
        listen 80;
        server_name FQDN;
        return 301 https://$server_name$request_uri;
    }
}
```
@@ -0,0 +1,9 @@
---
title: Installation References
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-references"/>
</head>

Please see the following reference guides for other installation resources: [Rancher Helm chart options](helm-chart-options.md), [TLS settings](tls-settings.md), and [feature flags](feature-flags.md).
@@ -0,0 +1,104 @@
---
title: TLS Settings
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-references/tls-settings"/>
</head>

Changing the default TLS settings depends on the chosen installation method.

## Running Rancher in a highly available Kubernetes cluster

When you install Rancher inside of a Kubernetes cluster, TLS is offloaded at the cluster's ingress controller. The possible TLS settings depend on the ingress controller used:

* nginx-ingress-controller (default for RKE2): [Default TLS Version and Ciphers](https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers).
* traefik (default for K3s): [TLS Options](https://doc.traefik.io/traefik/https/tls/#tls-options).

## Running Rancher in a single Docker container

The default TLS configuration only accepts TLS 1.2 and secure TLS cipher suites. You can change this by setting the following environment variables:

| Parameter | Description | Default | Available options |
|-----|-----|-----|-----|
| `CATTLE_TLS_MIN_VERSION` | Minimum TLS version | `1.2` | `1.0`, `1.1`, `1.2`, `1.3` |
| `CATTLE_TLS_CIPHERS` | Allowed TLS cipher suites | `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`,<br/>`TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`,<br/>`TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305`,<br/>`TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`,<br/>`TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`,<br/>`TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305` | See [Golang tls constants](https://golang.org/pkg/crypto/tls/#pkg-constants) |
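
For example, a single-node Docker install could pin the minimum TLS version and cipher suites like this (a sketch only; the image tag and the cipher selection are illustrative, not a recommendation):

```bash
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  -e CATTLE_TLS_MIN_VERSION="1.2" \
  -e CATTLE_TLS_CIPHERS="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" \
  rancher/rancher:latest
```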
## Agent TLS Enforcement

The `agent-tls-mode` setting controls how Rancher's agents (`cluster-agent`, `fleet-agent`, and `system-agent`) validate Rancher's certificate.

When the value is set to `strict`, Rancher's agents only trust certificates generated by the Certificate Authority contained in the `cacerts` setting.
When the value is set to `system-store`, Rancher's agents trust any certificate generated by a public Certificate Authority contained in the operating system's trust store, including those signed by authorities such as Let's Encrypt. This can be a security risk, since any certificate generated by these external authorities, which are outside the user's control, is considered valid in this state.

While the `strict` option enables a higher level of security, it requires Rancher to have access to the CA which generated the certificate visible to the agents. In the case of certain certificate configurations (notably, external certificates), this is not automatic, and extra configuration is needed. See the [installation guide](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md#3-choose-your-ssl-configuration) for more information on which scenarios require extra configuration.

In Rancher v2.9.0 and later, this setting defaults to `strict` on new installs. For users installing or upgrading from a prior Rancher version, it is set to `system-store`.

### Preparing for the Setting Change

Each cluster contains a condition in the status field called `AgentTlsStrictCheck`. If `AgentTlsStrictCheck` is set to `"True"`, this indicates that the agents for the cluster are ready to operate in `strict` mode. You can manually inspect each cluster to see if it is ready using the Rancher UI or a kubectl command such as the following:

```bash
# The command below outputs $CLUSTER_NAME,$STATUS for all non-local clusters
kubectl get cluster.management.cattle.io -o jsonpath='{range .items[?(@.metadata.name!="local")]}{.metadata.name},{.status.conditions[?(@.type=="AgentTlsStrictCheck")].status}{"\n"}{end}'
```

### Changing the Setting

You can change the setting using the Rancher UI or the `agentTLSMode` [Helm chart option](./helm-chart-options.md).

:::note

If you specify the value through the Helm chart, you may only modify the value with Helm.

:::

:::warning

Depending on your certificate setup, additional action may be required, such as uploading the Certificate Authority which signed your certificates. Review the [installation guide](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md#3-choose-your-ssl-configuration) before changing the setting to see if any additional requirements apply to your setup.

:::

To change the setting's value through the UI, navigate to the **Global Settings** page, and find the `agent-tls-mode` setting near the bottom of the page. When you change the setting through the UI, Rancher first checks that all downstream clusters have the condition `AgentTlsStrictCheck` set to `"True"` before allowing the request. This prevents outages from a certificate mismatch.


#### Overriding the Setting Validation Checks

In some cases, you may want to override the check ensuring all agents can accept the new TLS configuration:

:::warning

Rancher checks the status of all downstream clusters to prevent outages. Overriding this check is not recommended, and should be done with great caution.

:::

1. As an admin, generate a kubeconfig for the local cluster. In the examples below, this was saved to the `local_kubeconfig.yaml` file.
2. Retrieve the current setting and save it to `setting.yaml`:
   ```bash
   kubectl get setting agent-tls-mode -o yaml --kubeconfig=local_kubeconfig.yaml > setting.yaml
   ```
3. Update the `setting.yaml` file, replacing `value` with `strict`. Adding the `cattle.io/force: "true"` annotation overrides the cluster condition check, and should only be done with great care:

   :::warning

   Including the `cattle.io/force` annotation with any value (including, for example, `"false"`) overrides the cluster condition check.

   :::

   ```yaml
   apiVersion: management.cattle.io/v3
   customized: false
   default: strict
   kind: Setting
   metadata:
     name: agent-tls-mode
     annotations:
       cattle.io/force: "true"
   source: ""
   value: strict
   ```
4. Apply the new version of the setting:
   ```bash
   kubectl apply -f setting.yaml --kubeconfig=local_kubeconfig.yaml
   ```
@@ -0,0 +1,198 @@
---
title: Installation Requirements
description: Learn the node requirements for each node running Rancher server when you’re configuring Rancher to run either in a Kubernetes setup
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-requirements"/>
</head>

This page describes the software, hardware, and networking requirements for the nodes where the Rancher server will be installed. The Rancher server can be installed on a single node or a high-availability Kubernetes cluster.

:::note Important:

If you install Rancher on a Kubernetes cluster, requirements are different from the [node requirements for downstream user clusters,](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md) which will run your apps and services.

:::

The Rancher UI works best in Firefox or Chromium-based browsers (Chrome, Edge, Opera, Brave, etc.).

See our page on [best practices](../../../reference-guides/best-practices/rancher-server/tips-for-running-rancher.md) for a list of recommendations for running a Rancher server in production.
## Kubernetes Compatibility with Rancher

Rancher needs to be installed on a supported Kubernetes version. Consult the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) to ensure that your intended version of Kubernetes is supported.

Regardless of version and distribution, the Kubernetes cluster must have the aggregation API layer properly configured to support the [extension API](../../../api/extension-apiserver.md) used by Rancher.

### Install Rancher on a Hardened Kubernetes Cluster

If you install Rancher on a hardened Kubernetes cluster, check the [Exempting Required Rancher Namespaces](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md#exempting-required-rancher-namespaces) section for detailed requirements.

### Install Rancher on an IPv6-only or Dual-stack Kubernetes Cluster

You can deploy Rancher on an IPv6-only or dual-stack Kubernetes cluster.

For details on Rancher’s IPv6-only and dual-stack support, see the [IPv4/IPv6 Dual-stack](../../../reference-guides/dual-stack.md) page.
## Operating Systems and Container Runtime Requirements

All supported operating systems are 64-bit x86. Rancher should work with any modern Linux distribution.

The [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) lists which OS versions were tested for each Rancher version.

The `ntp` (Network Time Protocol) package should be installed. This prevents errors with certificate validation that can occur when the time is not synchronized between the client and server.

Some distributions of Linux may have default firewall rules that block communication within the Kubernetes cluster. Since Kubernetes v1.19, firewalld must be turned off, because it conflicts with the Kubernetes networking plugins.

If you prefer not to turn firewalld off, you can check the suggestions in the [respective issue](https://github.com/rancher/rancher/issues/28840). Some users were successful [creating a separate firewalld zone with a policy of ACCEPT for the Pod CIDR](https://github.com/rancher/rancher/issues/28840#issuecomment-787404822).
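
As a sketch, on a systemd-based distribution you might verify time synchronization and disable firewalld as follows (commands may differ on your distribution):

```bash
# Check that the system clock is synchronized via NTP.
timedatectl status
# Stop firewalld now and prevent it from starting at boot.
systemctl disable --now firewalld
```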
If you plan to run Rancher on ARM64, see [Running on ARM64 (Experimental).](../../../how-to-guides/advanced-user-guides/enable-experimental-features/rancher-on-arm64.md)

### RKE2 Specific Requirements

RKE2 bundles its own container runtime, containerd.

For details on which OS versions were tested with RKE2, refer to the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions).

### K3s Specific Requirements

For the container runtime, K3s bundles its own containerd by default. Alternatively, you can configure K3s to use an already installed Docker runtime. For more information on using K3s with Docker, see the [K3s documentation.](https://docs.k3s.io/advanced#using-docker-as-the-container-runtime)

Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [Rancher support matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions). To specify the K3s version, use the `INSTALL_K3S_VERSION` environment variable when running the K3s installation script.
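
For example, a hedged sketch of pinning the K3s version during installation (the version shown is illustrative; choose one from the Rancher support matrix):

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.30.4+k3s1" sh -
```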
If you are installing Rancher on a K3s cluster with **Raspbian Buster**, follow [these steps](https://rancher.com/docs/k3s/latest/en/advanced/#enabling-legacy-iptables-on-raspbian-buster) to switch to legacy iptables.

If you are installing Rancher on a K3s cluster with Alpine Linux, follow [these steps](https://rancher.com/docs/k3s/latest/en/advanced/#additional-preparation-for-alpine-linux-setup) for additional setup.
## Hardware Requirements

The following sections describe the CPU, memory, and I/O requirements for nodes where Rancher is installed. Requirements vary based on the size of the infrastructure.

### Practical Considerations

Rancher's hardware footprint depends on a number of factors, including:

- Size of the managed infrastructure (e.g., node count, cluster count).
- Complexity of the desired access control rules (e.g., `RoleBinding` object count).
- Number of workloads (e.g., Kubernetes deployments, Fleet deployments).
- Usage patterns (e.g., subset of functionality actively used, frequency of use, number of concurrent users).

Since a high number of influencing factors may vary over time, the requirements listed here should be understood as reasonable starting points that work well for most use cases. Nevertheless, your use case may have different requirements. For inquiries about a specific scenario, please [contact Rancher](https://rancher.com/contact/) for further guidance.

In particular, requirements on this page are subject to typical use assumptions, which include:

- Under 60,000 total Kubernetes resources, per type.
- Up to 120 pods per node.
- Up to 200 CRDs in the upstream (local) cluster.
- Up to 100 CRDs in downstream clusters.
- Up to 50 Fleet deployments.

Higher numbers are possible but requirements might be higher. If you have more than 20,000 resources of the same type, loading the whole list through the Rancher UI might take several seconds.

:::note Evolution:

Rancher's codebase evolves, use cases change, and the body of accumulated Rancher experience grows every day.

Hardware requirement recommendations are subject to change over time, as guidelines improve in accuracy and become more concrete.

If you find that your Rancher deployment no longer complies with the listed recommendations, [contact Rancher](https://rancher.com/contact/) for a re-evaluation.

:::

### RKE2 Kubernetes

The following table lists minimum CPU and memory requirements for each node in the [upstream cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md).

Please note that a highly available setup with at least three nodes is required for production.

| Managed Infrastructure Size | Maximum Number of Clusters | Maximum Number of Nodes | vCPUs | RAM |
|-----------------------------|----------------------------|-------------------------|-------|-------|
| Small | 150 | 1500 | 4 | 16 GB |
| Medium | 300 | 3000 | 8 | 32 GB |
| Large (*) | 500 | 5000 | 16 | 64 GB |
| Larger (†) | (†) | (†) | (†) | (†) |

(*): Large deployments require that you [follow best practices](../../../reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md) for adequate performance.

(†): Larger deployment sizes are generally possible with ad-hoc hardware recommendations and tuning. You can [contact Rancher](https://rancher.com/contact/) for a custom evaluation.

Refer to the RKE2 documentation for more detailed information on [RKE2 general requirements](https://docs.rke2.io/install/requirements).
### K3s Kubernetes

The following table lists minimum CPU and memory requirements for each node in the [upstream cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md).

Please note that a highly available setup with at least three nodes is required for production.

| Managed Infrastructure Size | Maximum Number of Clusters | Maximum Number of Nodes | vCPUs | RAM | External Database Host (*) |
|-----------------------------|----------------------------|-------------------------|-------|-------|----------------------------|
| Small | 150 | 1500 | 4 | 16 GB | 2 vCPUs, 8 GB + 1000 IOPS |
| Medium | 300 | 3000 | 8 | 32 GB | 4 vCPUs, 16 GB + 2000 IOPS |
| Large (†) | 500 | 5000 | 16 | 64 GB | 8 vCPUs, 32 GB + 4000 IOPS |

(*): External Database Host refers to hosting the K3s cluster data store on a [dedicated external host](https://docs.k3s.io/datastore). This is optional. Exact requirements depend on the external data store.

(†): Large deployments require that you [follow best practices](../../../reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md) for adequate performance.

Refer to the K3s documentation for more detailed information on [general requirements](https://docs.k3s.io/installation/requirements).
### Hosted Kubernetes

The following table lists minimum CPU and memory requirements for each node in the [upstream cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md).

Please note that a highly available setup with at least three nodes is required for production.

These requirements apply to hosted Kubernetes clusters such as Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or Google Kubernetes Engine (GKE). They don't apply to Rancher SaaS solutions such as [Rancher Prime Hosted](https://www.rancher.com/products/rancher).

| Managed Infrastructure Size | Maximum Number of Clusters | Maximum Number of Nodes | vCPUs | RAM |
|-----------------------------|----------------------------|-------------------------|-------|-------|
| Small | 150 | 1500 | 4 | 16 GB |
| Medium | 300 | 3000 | 8 | 32 GB |
| Large (*) | 500 | 5000 | 16 | 64 GB |

(*): Large deployments require that you [follow best practices](../../../reference-guides/best-practices/rancher-server/tuning-and-best-practices-for-rancher-at-scale.md) for adequate performance.
## Ingress

Each node in the Kubernetes cluster that Rancher is installed on should run an ingress controller.

The ingress controller should be deployed as a DaemonSet to ensure your load balancer can successfully route traffic to all nodes.

For RKE2 and K3s installations, you don't have to install the ingress controller manually because it is installed by default.

For hosted Kubernetes clusters (EKS, GKE, AKS), you will need to set up the ingress yourself.

- **Amazon EKS:** For details on how to install Rancher on Amazon EKS, including how to install an ingress so that the Rancher server can be accessed, refer to [this page.](../install-upgrade-on-a-kubernetes-cluster/rancher-on-amazon-eks.md)
- **AKS:** For details on how to install Rancher with Azure Kubernetes Service, including how to install an ingress so that the Rancher server can be accessed, refer to [this page.](../install-upgrade-on-a-kubernetes-cluster/rancher-on-aks.md)
- **GKE:** For details on how to install Rancher with Google Kubernetes Engine, including how to install an ingress so that the Rancher server can be accessed, refer to [this page.](../install-upgrade-on-a-kubernetes-cluster/rancher-on-gke.md)
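
To confirm that the ingress controller runs on every node, you can list the DaemonSets in the cluster; a sketch (the namespace and DaemonSet name depend on your distribution and ingress controller):

```bash
# On RKE2/K3s, the bundled ingress controller typically lives in kube-system.
kubectl get daemonsets --all-namespaces
kubectl get pods --all-namespaces -o wide | grep -i ingress
```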
## Disks

Rancher performance depends on the performance of etcd in the cluster. To ensure optimal speed, we recommend always using SSD disks to back your Rancher management Kubernetes cluster. On cloud providers, you will also want to use the minimum size that allows the maximum IOPS. In larger clusters, consider using dedicated storage devices for etcd data and WAL directories.
## Networking Requirements

This section describes the networking requirements for the node(s) where the Rancher server is installed.

:::caution

If a server containing Rancher has the `X-Frame-Options=DENY` header, some pages in the new Rancher UI will not be able to render after upgrading from the legacy UI. This is because some legacy pages are embedded as iframes in the new UI.

:::

### Node IP Addresses

Each node used should have a static IP configured, regardless of whether you are installing Rancher on a single node or on an HA cluster. In the case of DHCP, each node should have a DHCP reservation to make sure the node gets the same IP allocated.

### Port Requirements

To operate properly, Rancher requires a number of ports to be open on Rancher nodes and on downstream Kubernetes cluster nodes. [Port Requirements](port-requirements.md) lists all the necessary ports for Rancher and downstream clusters for the different cluster types.

### Load Balancer Requirements

If you use a load balancer, it should be HTTP/2 compatible.

To receive help from SUSE Support, Rancher Prime customers who use load balancers (or any other middleboxes such as firewalls) must use one that is HTTP/2 compatible.

When HTTP/2 is not available, Rancher falls back to HTTP/1.1. However, since HTTP/2 offers improved web application performance, using HTTP/1.1 can create performance issues.
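
As a quick sanity check, you can ask curl which HTTP version the load balancer negotiates (a sketch; replace `rancher.example.com` with your Rancher hostname):

```bash
# Prints "2" when HTTP/2 was negotiated, "1.1" otherwise.
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://rancher.example.com/
```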
@@ -0,0 +1,301 @@
---
title: Port Requirements
description: Read about port requirements needed in order for Rancher to operate properly, both for Rancher nodes and downstream Kubernetes cluster nodes
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/installation-requirements/port-requirements"/>
</head>

import PortsIaasNodes from '@site/src/components/PortsIaasNodes'
import PortsCustomNodes from '@site/src/components/PortsCustomNodes'
import PortsImportedHosted from '@site/src/components/PortsImportedHosted'

To operate properly, Rancher requires a number of ports to be open on Rancher nodes and on downstream Kubernetes cluster nodes.
## Rancher Nodes

The following table lists the ports that need to be open to and from nodes that are running the Rancher server.

The port requirements differ based on the Rancher server architecture.

Rancher can be installed on any Kubernetes cluster. For Rancher installs on a K3s or RKE2 Kubernetes cluster, refer to the tabs below. For other Kubernetes distributions, refer to the distribution's documentation for the port requirements for cluster nodes.

:::note Notes:

- Rancher nodes may also require additional outbound access for any external authentication provider which is configured (LDAP, for example).
- Kubernetes recommends TCP 30000-32767 for node port services.
- For firewalls, traffic may need to be enabled within the cluster and pod CIDR.
- Rancher nodes may also need outbound access to an external S3 location which is used for storing cluster backups (MinIO, for example).

:::
### Ports for Rancher Server Nodes on K3s

<details>
<summary>Click to expand</summary>

The K3s server needs port 6443 to be accessible by the nodes.

The nodes need to be able to reach other nodes over UDP port 8472 when Flannel VXLAN is used. The nodes should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel. However, if you do not use Flannel and provide your own custom CNI, then port 8472 is not needed by K3s.

If you wish to utilize the metrics server, you will need to open port 10250 on each node.

:::note Important:

The VXLAN port on nodes should not be exposed to the world as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disables access to port 8472.

:::

The following tables break down the port requirements for inbound and outbound traffic:

<figcaption>Inbound Rules for Rancher Server Nodes</figcaption>

| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 80 | Load balancer/proxy that does external SSL termination | Rancher UI/API when external SSL termination is used |
| TCP | 443 | <ul><li>server nodes</li><li>agent nodes</li><li>hosted/registered Kubernetes</li><li>any source that needs to be able to use the Rancher UI or API</li></ul> | Rancher agent, Rancher UI/API, kubectl |
| TCP | 6443 | K3s server nodes | Kubernetes API |
| UDP | 8472 | K3s server and agent nodes | Required only for Flannel VXLAN |
| TCP | 10250 | K3s server and agent nodes | kubelet |

<figcaption>Outbound Rules for Rancher Nodes</figcaption>

| Protocol | Port | Destination | Description |
| -------- | ---- | -------------------------------------------------------- | --------------------------------------------- |
| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 2376 | Any node IP from a node created using Node Driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/imported Kubernetes API | Kubernetes API server |

</details>
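
From an agent node, you can spot-check reachability of the required server ports; a sketch assuming the `nc` (netcat) utility is available and `SERVER_IP` stands in for a server node address:

```bash
nc -zvw3 SERVER_IP 6443   # Kubernetes API
nc -zvw3 SERVER_IP 443    # Rancher UI/API
```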
### Ports for Rancher Server Nodes on RKE2

<details>
<summary>Click to expand</summary>

The RKE2 server needs ports 6443 and 9345 to be accessible by other nodes in the cluster.

All nodes need to be able to reach other nodes over UDP port 8472 when Flannel VXLAN is used.

If you wish to utilize the metrics server, you will need to open port 10250 on each node.

:::note Important:

The VXLAN port on nodes should not be exposed to the world as it opens up your cluster network to be accessed by anyone. Run your nodes behind a firewall/security group that disables access to port 8472.

:::

<figcaption>Inbound Rules for RKE2 Server Nodes</figcaption>

| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 9345 | RKE2 server and agent nodes | Node registration. Port should be open on all server nodes to all other nodes in the cluster. |
| TCP | 6443 | RKE2 agent nodes | Kubernetes API |
| UDP | 8472 | RKE2 server and agent nodes | Required only for Flannel VXLAN |
| TCP | 10250 | RKE2 server and agent nodes | kubelet |
| TCP | 2379 | RKE2 server nodes | etcd client port |
| TCP | 2380 | RKE2 server nodes | etcd peer port |
| TCP | 30000-32767 | RKE2 server and agent nodes | NodePort port range. Can use TCP or UDP. |
| TCP | 5473 | Calico-node pod connecting to typha pod | Required when deploying with Calico |
| TCP | 80 | Load balancer/proxy that does external SSL termination | Rancher UI/API when external SSL termination is used |
| TCP | 443 | <ul><li>hosted/registered Kubernetes</li><li>any source that needs to be able to use the Rancher UI or API</li></ul> | Rancher agent, Rancher UI/API, kubectl. Not needed if you have a load balancer doing TLS termination. |

Typically all outbound traffic is allowed.

</details>
### Ports for Rancher Server in Docker

<details>
<summary>Click to expand</summary>

The following tables break down the port requirements for Rancher nodes, for inbound and outbound traffic:

<figcaption>Inbound Rules for Rancher Node</figcaption>

| Protocol | Port | Source | Description |
|-----|-----|----------------|---|
| TCP | 80 | Load balancer/proxy that does external SSL termination | Rancher UI/API when external SSL termination is used |
| TCP | 443 | <ul><li>hosted/registered Kubernetes</li><li>any source that needs to be able to use the Rancher UI or API</li></ul> | Rancher agent, Rancher UI/API, kubectl |

<figcaption>Outbound Rules for Rancher Node</figcaption>

| Protocol | Port | Destination | Description |
|-----|-----|----------------|---|
| TCP | 22 | Any node IP from a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | git.rancher.io | Rancher catalog |
| TCP | 2376 | Any node IP from a node created using a node driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/imported Kubernetes API | Kubernetes API server |

</details>
## Downstream Kubernetes Cluster Nodes

Downstream Kubernetes clusters run your apps and services. This section describes what ports need to be opened on the nodes in downstream clusters so that Rancher can communicate with them.

The port requirements differ depending on how the downstream cluster was launched. Each of the tabs below lists the ports that need to be opened for different [cluster types](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md).

The following diagram depicts the ports that are opened for each [cluster type](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md).

<figcaption>Port Requirements for the Rancher Management Plane</figcaption>

![Basic Port Requirements](/img/port-communications.svg)

:::tip

If security isn't a large concern and you're okay with opening a few additional ports, you can use the table in [Commonly Used Ports](#commonly-used-ports) as your port reference instead of the comprehensive tables below.

:::

### Ports for Harvester Clusters

Refer to the [Harvester Integration Overview](../../../integrations-in-rancher/harvester/overview.md#port-requirements) for more information on Harvester port requirements.

### Ports for Rancher Launched Kubernetes Clusters using Node Pools

<details>
<summary>Click to expand</summary>

The following table depicts the port requirements for [Rancher Launched Kubernetes](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) with nodes created in an [Infrastructure Provider](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md).

:::note

The required ports are automatically opened by Rancher during creation of clusters in cloud providers like Amazon EC2 or DigitalOcean.

:::

<PortsIaasNodes/>

</details>

### Ports for Rancher Launched Kubernetes Clusters using Custom Nodes

<details>
<summary>Click to expand</summary>

The following table depicts the port requirements for [Rancher Launched Kubernetes](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) with [Custom Nodes](../../../reference-guides/cluster-configuration/rancher-server-configuration/use-existing-nodes/use-existing-nodes.md).

<PortsCustomNodes/>

</details>
### Ports for Hosted Kubernetes Clusters
|
||||
|
||||
<details>
|
||||
<summary>Click to expand</summary>
|
||||
|
||||
The following table depicts the port requirements for [hosted clusters](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/set-up-clusters-from-hosted-kubernetes-providers/set-up-clusters-from-hosted-kubernetes-providers.md).
|
||||
|
||||
<PortsImportedHosted/>

</details>

### Ports for Registered Clusters

:::note

Registered clusters were called imported clusters before Rancher v2.5.

:::

<details>
<summary>Click to expand</summary>

The following table depicts the port requirements for [registered clusters](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md).
<PortsImportedHosted/>

</details>

## Other Port Considerations

### Commonly Used Ports

These ports are typically opened on your Kubernetes nodes, regardless of the cluster type.

import CommonPortsTable from '../../../shared-files/_common-ports-table.md';

<CommonPortsTable />

----

### Local Node Traffic

Ports marked as `local traffic` (for example, `9099 TCP`) in the above requirements are used for Kubernetes healthchecks (`livenessProbe` and `readinessProbe`). These healthchecks are executed on the node itself. In most cloud environments, this local traffic is allowed by default.

However, this traffic may be blocked when:

- You have applied strict host firewall policies on the node.
- You are using nodes that have multiple interfaces (multihomed).

In these cases, you must explicitly allow this traffic in your host firewall, or, for machines hosted in a public or private cloud (for example, AWS or OpenStack), in your security group configuration. Keep in mind that when a security group is used as the source or destination of a rule, the explicitly opened ports only apply to the private interface of the nodes/instances.
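
As a sketch, on a firewalld-based host you could allow a node-local healthcheck port like this. The port value is an assumption taken from the `local traffic` example above; the commands are printed as a dry run so you can review them before applying:

```shell
# Hypothetical sketch for a firewalld-based host: allow the node-local
# healthcheck port. Printed as a dry run so you can review first; remove the
# leading "echo" to execute the commands (requires root and firewalld).
HEALTHCHECK_PORT="9099/tcp"   # example local-traffic port (assumption)
echo firewall-cmd --permanent --add-port="$HEALTHCHECK_PORT"
echo firewall-cmd --reload
```

Your CNI and cluster type determine which local ports actually apply; check the tables above.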

### Rancher AWS EC2 Security Group

When using the [AWS EC2 node driver](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster.md) to provision cluster nodes in Rancher, you can choose to let Rancher create a security group called `rancher-nodes`. The following rules are automatically added to this security group.

| Type | Protocol | Port Range | Source/Destination | Rule Type |
|-----------------|:--------:|:-----------:|------------------------|:---------:|
| SSH | TCP | 22 | 0.0.0.0/0 and ::/0 | Inbound |
| HTTP | TCP | 80 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom TCP Rule | TCP | 443 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom TCP Rule | TCP | 2376 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom TCP Rule | TCP | 6443 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom TCP Rule | TCP | 179 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 9345 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 2379-2380 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 10250-10252 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 10256 | sg-xxx (rancher-nodes) | Inbound |
| Custom UDP Rule | UDP | 4789 | sg-xxx (rancher-nodes) | Inbound |
| Custom UDP Rule | UDP | 8472 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 30000-32767 | 0.0.0.0/0 and ::/0 | Inbound |
| Custom UDP Rule | UDP | 30000-32767 | 0.0.0.0/0 and ::/0 | Inbound |
| All traffic | All | All | 0.0.0.0/0 and ::/0 | Outbound |

### Opening SUSE Linux Ports

SUSE Linux may have a firewall that blocks all ports by default. To open the ports needed for adding the host to a custom cluster, follow the steps for your version of SUSE Linux:

<Tabs>

<TabItem value="SLES 15 / openSUSE Leap 15">

1. SSH into the instance.
1. Start YaST in text mode:
   ```
   sudo yast2
   ```

1. Navigate to **Security and Users** > **Firewall** > **Zones:public** > **Ports**. To navigate within the interface, follow these [instructions](https://doc.opensuse.org/documentation/leap/reference/html/book-reference/cha-yast-text.html#sec-yast-cli-navigate).
1. To open the required ports, enter them into the **TCP Ports** and **UDP Ports** fields. In this example, ports 9796 and 10250 are also opened for monitoring. The resulting fields should look similar to the following:
   ```yaml
   TCP Ports
   22, 80, 443, 2376, 2379, 2380, 6443, 9099, 9796, 10250, 10254, 30000-32767
   UDP Ports
   8472, 30000-32767
   ```

1. When all required ports are entered, select **Accept**.


</TabItem>
<TabItem value="SLES 12 / openSUSE Leap 42">

1. SSH into the instance.
1. Edit `/etc/sysconfig/SuSEfirewall2` and open the required ports. In this example, ports 9796 and 10250 are also opened for monitoring:
   ```
   FW_SERVICES_EXT_TCP="22 80 443 2376 2379 2380 6443 9099 9796 10250 10254 30000:32767"
   FW_SERVICES_EXT_UDP="8472 30000:32767"
   FW_ROUTE=yes
   ```
1. Restart the firewall with the new ports:
   ```
   SuSEfirewall2
   ```


</TabItem>
</Tabs>

**Result:** The node has the open ports required to be added to a custom cluster.

---
title: Air-Gapped Helm CLI Install
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install"/>
</head>

This section describes installing the Rancher server with the Helm CLI in an air-gapped environment, that is, an environment where the Rancher server is installed offline, behind a firewall, or behind a proxy.

The installation steps differ depending on whether Rancher is installed on a K3s Kubernetes cluster or in a single Docker container.

For more information on each installation option, refer to [this page.](../../installation-and-upgrade.md)

Throughout the installation instructions, there will be _tabs_ for each installation option.

:::note Important:

If you install Rancher following the Docker installation guide, there is no upgrade path to transition your Docker installation to a Kubernetes installation.

:::

## Installation Outline

1. [Set up infrastructure and private registry](infrastructure-private-registry.md)
2. [Collect and publish images to your private registry](publish-images.md)
3. [Set up a Kubernetes cluster (Skip this step for Docker installations)](install-kubernetes.md)
4. [Install Rancher](install-rancher-ha.md)

## Upgrades

To upgrade Rancher with the Helm CLI in an air-gapped environment, follow [this procedure.](../../install-upgrade-on-a-kubernetes-cluster/upgrades.md)

### [Next: Prepare your Node(s)](infrastructure-private-registry.md)

---
title: Docker Install Commands
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/docker-install-commands"/>
</head>

The Docker installation is for Rancher users who want to test out Rancher.

Instead of running on a Kubernetes cluster, you install the Rancher server component on a single node using a `docker run` command. Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.

The backup application can be used to migrate the Rancher server from a Docker install to a Kubernetes install using [these steps.](../../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md)

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.

| Environment Variable Key | Environment Variable Value | Description |
| -------------------------------- | -------------------------------- | ---- |
| `CATTLE_SYSTEM_DEFAULT_REGISTRY` | `<REGISTRY.YOURDOMAIN.COM:PORT>` | Configure Rancher server to always pull from your private registry when provisioning clusters. |
| `CATTLE_SYSTEM_CATALOG` | `bundled` | Configure Rancher server to use the packaged copy of Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. |

:::note Do you want to...

- Configure a custom CA root certificate to access your services? See [Custom CA root certificate](../../resources/custom-ca-root-certificates.md).
- Record all transactions with the Rancher API? See [API Auditing](../../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#api-audit-log).

:::

Choose from the following options:

## Option A: Default Self-Signed Certificate


<details id="option-a">
<summary>Click to expand</summary>

If you are installing Rancher in a development or testing environment where identity verification isn't a concern, install Rancher using the self-signed certificate that it generates. This installation option omits the hassle of generating a certificate yourself.

Log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder.

| Placeholder | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](../../installation-references/helm-chart-options.md) that you want to install. |

Privileged access is [required.](./install-rancher-ha.md#privileged-access-for-rancher)

```
# CATTLE_SYSTEM_DEFAULT_REGISTRY sets a default private registry to be used in Rancher.
# CATTLE_SYSTEM_CATALOG=bundled uses the packaged Rancher system charts.
docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
    -e CATTLE_SYSTEM_CATALOG=bundled \
    --privileged \
    <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

</details>

## Option B: Bring Your Own Certificate: Self-Signed

<details id="option-b">
<summary>Click to expand</summary>

In development or testing environments where your team will access your Rancher server, create a self-signed certificate for use with your install so that your team can verify they're connecting to your instance of Rancher.

:::note Prerequisites:

From a computer with an internet connection, create a self-signed certificate using [OpenSSL](https://www.openssl.org/) or another method of your choice.

- The certificate files must be in PEM format.
- In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](../rancher-on-a-single-node-with-docker/certificate-troubleshooting.md)

:::
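
As a sketch, a throwaway self-signed certificate and key could be generated with OpenSSL like this. The hostname `rancher.example.com` is a placeholder; substitute the hostname your Rancher server will answer on:

```shell
# Generate a self-signed certificate (cert.pem) and private key (key.pem),
# valid for 365 days, in PEM format. The CN is a placeholder hostname.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=rancher.example.com"

# Confirm the subject of the generated certificate.
openssl x509 -in cert.pem -noout -subject
```

Because the certificate is self-signed, the same file serves as both the certificate and the CA certificate when you mount it in the installation command in this option.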

After creating your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Use the `-v` flag and provide the path to your certificates to mount them in your container.

| Placeholder | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `<CERT_DIRECTORY>` | The path to the directory containing your certificate files. |
| `<FULL_CHAIN.pem>` | The path to your full certificate chain. |
| `<PRIVATE_KEY.pem>` | The path to the private key for your certificate. |
| `<CA_CERTS.pem>` | The path to the certificate authority's certificate. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](../../installation-references/helm-chart-options.md) that you want to install. |

Privileged access is [required.](./install-rancher-ha.md#privileged-access-for-rancher)

```
# CATTLE_SYSTEM_DEFAULT_REGISTRY sets a default private registry to be used in Rancher.
# CATTLE_SYSTEM_CATALOG=bundled uses the packaged Rancher system charts.
docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
    -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
    -v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
    -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
    -e CATTLE_SYSTEM_CATALOG=bundled \
    --privileged \
    <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

</details>

## Option C: Bring Your Own Certificate: Signed by Recognized CA

<details id="option-c">
<summary>Click to expand</summary>

In development or testing environments where you're exposing an app publicly, use a certificate signed by a recognized CA so that your user base doesn't encounter security warnings.

:::note Prerequisite:

The certificate files must be in PEM format.

:::

After obtaining your certificate, log into your Linux host, and then run the installation command below. When entering the command, use the table below to replace each placeholder. Because your certificate is signed by a recognized CA, mounting an additional CA certificate file is unnecessary.

| Placeholder | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `<CERT_DIRECTORY>` | The path to the directory containing your certificate files. |
| `<FULL_CHAIN.pem>` | The path to your full certificate chain. |
| `<PRIVATE_KEY.pem>` | The path to the private key for your certificate. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port. |
| `<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](../../installation-references/helm-chart-options.md) that you want to install. |

:::note

Use the `--no-cacerts` argument to the container to disable the default CA certificate generated by Rancher.

:::

Privileged access is [required.](./install-rancher-ha.md#privileged-access-for-rancher)

```
# CATTLE_SYSTEM_DEFAULT_REGISTRY sets a default private registry to be used in Rancher.
# CATTLE_SYSTEM_CATALOG=bundled uses the packaged Rancher system charts.
# --no-cacerts is an argument to the Rancher container, so it follows the image name.
docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
    -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
    -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \
    -e CATTLE_SYSTEM_CATALOG=bundled \
    --privileged \
    <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG> \
    --no-cacerts
```

</details>

---
title: '1. Set up Infrastructure and Private Registry'
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/infrastructure-private-registry"/>
</head>

In this section, you will provision the underlying infrastructure for your Rancher management server in an air gapped environment. You will also set up the private container image registry that must be available to your Rancher node(s).

An air gapped environment is an environment where the Rancher server is installed offline or behind a firewall.

The infrastructure depends on whether you are installing Rancher on a K3s Kubernetes cluster, an RKE Kubernetes cluster, or a single Docker container. For more information on each installation option, refer to [this page.](../../installation-and-upgrade.md)

Rancher can be installed on any Kubernetes cluster. The RKE and K3s Kubernetes infrastructure tutorials below are still included for convenience.

<Tabs>
<TabItem value="K3s">

We recommend setting up the following infrastructure for a high-availability installation:

- **Two Linux nodes,** typically virtual machines, in the infrastructure provider of your choice.
- **An external database** to store the cluster data. PostgreSQL, MySQL, and etcd are supported.
- **A load balancer** to direct traffic to the two nodes.
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
- **A private image registry** to distribute container images to your machines.

## 1. Set up Linux Nodes

These hosts will be disconnected from the internet, but must be able to reach your private registry.

Make sure that your nodes fulfill the general installation requirements for [OS, container runtime, hardware, and networking.](../../installation-requirements/installation-requirements.md)

For an example of one way to set up Linux nodes, refer to this [tutorial](../../../../how-to-guides/new-user-guides/infrastructure-setup/nodes-in-amazon-ec2.md) for setting up nodes as instances in Amazon EC2.

## 2. Set up External Datastore

The ability to run Kubernetes using a datastore other than etcd sets K3s apart from other Kubernetes distributions. This feature provides flexibility to Kubernetes operators. The available options allow you to select a datastore that best fits your use case.

For a high-availability K3s installation, you will need to set up one of the following external databases:

* [PostgreSQL](https://www.postgresql.org/) (certified against versions 10.7 and 11.5)
* [MySQL](https://www.mysql.com/) (certified against version 5.7)
* [etcd](https://etcd.io/) (certified against version 3.3.15)

When you install Kubernetes, you will pass in details for K3s to connect to the database.

For an example of one way to set up the database, refer to this [tutorial](../../../../how-to-guides/new-user-guides/infrastructure-setup/mysql-database-in-amazon-rds.md) for setting up a MySQL database on Amazon's RDS service.

For the complete list of options that are available for configuring a K3s cluster datastore, refer to the [K3s documentation.](https://rancher.com/docs/k3s/latest/en/installation/datastore/)

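As a sketch of what passing those details looks like, K3s accepts a `--datastore-endpoint` connection string. The host, credentials, and database name below are placeholders; the command is echoed rather than executed because K3s is installed in a later step:

```shell
# Hypothetical sketch: the --datastore-endpoint value K3s expects for an
# external MySQL database. Host, credentials, and database name are placeholders.
DATASTORE_ENDPOINT="mysql://username:password@tcp(mysql.example.com:3306)/k3s"
# In the later installation step, this would be passed to the K3s server, e.g.:
echo k3s server --datastore-endpoint="$DATASTORE_ENDPOINT"
```

PostgreSQL and etcd endpoints use analogous URI formats; see the K3s datastore documentation linked above.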
## 3. Set up the Load Balancer

You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.

When Kubernetes gets set up in a later step, the K3s tool will deploy a Traefik Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.

When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the Traefik Ingress controller to listen for traffic destined for the Rancher hostname. The Traefik Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster.

For your implementation, consider if you want or need to use a Layer-4 or Layer-7 load balancer:

- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic on port TCP/80 to HTTPS, terminate SSL/TLS on port TCP/443, and forward the traffic to the Rancher pods in the deployment.
- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.](../../installation-references/helm-chart-options.md#external-tls-termination)

For an example showing how to set up an NGINX load balancer, refer to [this page.](../../../../how-to-guides/new-user-guides/infrastructure-setup/nginx-load-balancer.md)

For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.](../../../../how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md)

:::note Important:

Do not use this load balancer (i.e., the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.

:::

## 4. Set up the DNS Record

Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.

Depending on your environment, this may be an A record pointing to the load balancer IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on.

You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.

For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)

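Once the record exists, you can sanity-check it from any machine that uses your DNS. The hostname below is a placeholder; replace it with the record you created:

```shell
# Hypothetical check: confirm the DNS record resolves. On success, getent
# prints the resolved address; the fallback message covers a missing record.
RANCHER_HOSTNAME="rancher.example.com"   # placeholder hostname
getent hosts "$RANCHER_HOSTNAME" || echo "no record found yet - check your DNS provider"
```

In an air-gapped environment, make sure the nodes resolve this name through your internal DNS, not a public resolver.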
## 5. Set up a Private Image Registry

Rancher supports air gap installs using a private registry. You must have your own private registry or other means of distributing container images to your machines.

In a later step, when you set up your K3s Kubernetes cluster, you will create a [private registries configuration file](https://rancher.com/docs/k3s/latest/en/installation/private-registry/) with details from this registry.

If you need to create a private registry, refer to the documentation pages for your respective runtime:

* [Containerd](https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration)
* [Nerdctl commands and managed registry services](https://github.com/containerd/nerdctl/blob/main/docs/registry.md)
* [Docker](https://docs.docker.com/registry/deploying/)

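As a sketch of what that later step looks like, a K3s private registries configuration file (`/etc/rancher/k3s/registries.yaml`) might resemble the following. The registry hostname, port, and credentials are placeholders:

```yaml
# Hypothetical /etc/rancher/k3s/registries.yaml - placeholder values throughout.
mirrors:
  docker.io:
    endpoint:
      - "https://registry.yourdomain.com:5000"
configs:
  "registry.yourdomain.com:5000":
    auth:
      username: myuser      # registry username (placeholder)
      password: mypassword  # registry password (placeholder)
```

The `mirrors` section redirects image pulls to your registry, while `configs` supplies its credentials; see the K3s private registry documentation linked above for the full schema.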

</TabItem>
<TabItem value="RKE">

To install the Rancher management server on a high-availability RKE cluster, we recommend setting up the following infrastructure:

- **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon's EC2, Google Compute Engine, or vSphere.
- **A load balancer** to direct front-end traffic to the three nodes.
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.
- **A private image registry** to distribute container images to your machines.

These nodes must be in the same region/data center. You may place these servers in separate availability zones.

## Why Three Nodes?

In an RKE cluster, Rancher server data is stored on etcd. This etcd database runs on all three nodes.

The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes.

## 1. Set up Linux Nodes

These hosts will be disconnected from the internet, but must be able to reach your private registry.

Make sure that your nodes fulfill the general installation requirements for [OS, container runtime, hardware, and networking.](../../installation-requirements/installation-requirements.md)

For an example of one way to set up Linux nodes, refer to this [tutorial](../../../../how-to-guides/new-user-guides/infrastructure-setup/nodes-in-amazon-ec2.md) for setting up nodes as instances in Amazon EC2.

## 2. Set up the Load Balancer

You will also need to set up a load balancer to direct traffic to the Rancher replica on all three nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.

When Kubernetes gets set up in a later step, the RKE tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.

When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster.

For your implementation, consider if you want or need to use a Layer-4 or Layer-7 load balancer:

- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic on port TCP/80 to HTTPS, terminate SSL/TLS on port TCP/443, and forward the traffic to the Rancher pods in the deployment.
- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.](../../installation-references/helm-chart-options.md#external-tls-termination)

For an example showing how to set up an NGINX load balancer, refer to [this page.](../../../../how-to-guides/new-user-guides/infrastructure-setup/nginx-load-balancer.md)

For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.](../../../../how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md)

:::caution

Do not use this load balancer (i.e., the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.

:::

## 3. Set up the DNS Record

Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.

Depending on your environment, this may be an A record pointing to the load balancer IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on.

You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.

For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)

## 4. Set up a Private Image Registry
|
||||
|
||||
Rancher supports air gap installs using a secure private registry. You must have your own private registry or other means of distributing container images to your machines.
|
||||
|
||||
In a later step, when you set up your RKE Kubernetes cluster, you will create a [private registries configuration file](https://rancher.com/docs/rke/latest/en/config-options/private-registries/) with details from this registry.
|
||||
|
||||
If you need to create a private registry, refer to the documentation pages for your respective runtime:
|
||||
|
||||
* [Containerd](https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration).
|
||||
* [Nerdctl commands and managed registry services](https://github.com/containerd/nerdctl/blob/main/docs/registry.md).
|
||||
* [Docker](https://docs.docker.com/registry/deploying/).
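If you are standing up a registry from scratch with Docker, the general shape is sketched below. All paths are placeholders for illustration, and the privileged `docker run` step is shown as a comment so you can review it before running it on your bastion server:

```shell
# Sketch only: prepare a certificate directory for a self-hosted registry.
# The demo path is a placeholder; a real install would use e.g. /opt/registry.
mkdir -p /tmp/registry-demo/certs

# Copy your TLS certificate and key into place, for example:
#   cp domain.crt domain.key /tmp/registry-demo/certs/

# The registry itself is then typically started with (requires Docker):
#   docker run -d --restart=always --name registry \
#     -v /tmp/registry-demo/certs:/certs \
#     -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
#     -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
#     -p 443:443 registry:2
echo "prepared: /tmp/registry-demo/certs"
```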

</TabItem>
<TabItem value="Docker">

:::note Notes:

- The Docker installation is for Rancher users who want to test out Rancher. Since there is only one node and a single Docker container, if the node goes down, you will lose all of your Rancher server's data.

- The Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.](../../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md)

:::

## 1. Set up a Linux Node

This host will be disconnected from the Internet, but needs to be able to connect to your private registry.

Make sure that your node fulfills the general installation requirements for [OS, containers, hardware, and networking.](../../installation-requirements/installation-requirements.md)

For an example of one way to set up Linux nodes, refer to this [tutorial](../../../../how-to-guides/new-user-guides/infrastructure-setup/nodes-in-amazon-ec2.md) for setting up nodes as instances in Amazon EC2.

## 2. Set up a Private Docker Registry

Rancher supports air gap installs using a private registry on your bastion server. You must have your own private registry or other means of distributing container images to your machines.

If you need help with creating a private registry, refer to the [official Docker documentation.](https://docs.docker.com/registry/)

</TabItem>
</Tabs>

## [Next: Collect and Publish Images to your Private Registry](publish-images.md)
@@ -0,0 +1,301 @@
---
title: '3. Install Kubernetes (Skip for Docker Installs)'
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-kubernetes"/>
</head>

:::note

Skip this section if you are installing Rancher on a single node with Docker.

:::

This section describes how to install a Kubernetes cluster according to our [best practices for the Rancher server environment.](../../../../reference-guides/rancher-manager-architecture/architecture-recommendations.md#environment-for-kubernetes-installations) This cluster should be dedicated to running only the Rancher server.

Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes providers.

The steps to set up an air-gapped Kubernetes cluster on RKE2 or K3s are shown below.

<Tabs>
<TabItem value="K3s">

This guide assumes that you have created your nodes in your air-gapped environment and have a secure Docker private registry on your bastion server.

## Installation Outline

1. [Prepare Images Directory](#1-prepare-images-directory)
2. [Create Registry YAML](#2-create-registry-yaml)
3. [Install K3s](#3-install-k3s)
4. [Save and Start Using the kubeconfig File](#4-save-and-start-using-the-kubeconfig-file)

## 1. Prepare Images Directory

Obtain the images tar file for your architecture from the [releases](https://github.com/k3s-io/k3s/releases) page for the version of K3s you will be running.

Place the tar file in the `images` directory before starting K3s on each node, for example:

```sh
sudo mkdir -p /var/lib/rancher/k3s/agent/images/
sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/
```

## 2. Create Registry YAML

Create the `registries.yaml` file at `/etc/rancher/k3s/registries.yaml`. This file tells K3s the details it needs to connect to your private registry.

The `registries.yaml` file should look like this before plugging in the necessary information:

```yaml
---
mirrors:
  customreg:
    endpoint:
      - "https://ip-to-server:5000"
configs:
  customreg:
    auth:
      username: xxxxxx # this is the registry username
      password: xxxxxx # this is the registry password
    tls:
      cert_file: <path to the cert file used in the registry>
      key_file: <path to the key file used in the registry>
      ca_file: <path to the ca file used in the registry>
```

Note that at this time, only secure registries (SSL with a custom CA) are supported with K3s.

For more information on the private registry configuration file for K3s, refer to the [K3s documentation.](https://rancher.com/docs/k3s/latest/en/installation/private-registry/)

## 3. Install K3s

Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [Rancher Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/).

To specify the K3s (Kubernetes) version, use the `INSTALL_K3S_VERSION` environment variable (e.g., `INSTALL_K3S_VERSION="v1.24.10+k3s1"`) when running the K3s installation script.

Obtain the K3s binary from the [releases](https://github.com/k3s-io/k3s/releases) page, matching the same version used to get the airgap images tar.
Also obtain the K3s install script at https://get.k3s.io

Place the binary in `/usr/local/bin` on each node.
Place the install script anywhere on each node, and name it `install.sh`.

Install K3s on each server:

```
INSTALL_K3S_SKIP_DOWNLOAD=true INSTALL_K3S_VERSION=<VERSION> ./install.sh
```

Install K3s on each agent:

```
INSTALL_K3S_SKIP_DOWNLOAD=true INSTALL_K3S_VERSION=<VERSION> K3S_URL=https://<SERVER>:6443 K3S_TOKEN=<TOKEN> ./install.sh
```

Where `<SERVER>` is the IP or valid DNS name of the server, and `<TOKEN>` is the node token from the server, found at `/var/lib/rancher/k3s/server/node-token`.

:::note

K3s additionally provides a `--resolv-conf` flag for kubelets, which may help with configuring DNS in air-gapped networks.

:::

## 4. Save and Start Using the kubeconfig File

When you installed K3s on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/k3s/k3s.yaml`. This file contains credentials for full access to the cluster, and you should save it in a secure location.

To use this `kubeconfig` file:

1. Install [kubectl,](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) a Kubernetes command-line tool.
2. Copy the file at `/etc/rancher/k3s/k3s.yaml` and save it to the directory `~/.kube/config` on your local machine.
3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your load balancer, referring to port 6443. (The Kubernetes API server will be reached at port 6443, while the Rancher server will be reached at ports 80 and 443.) Here is an example `k3s.yaml`:

   ```yaml
   apiVersion: v1
   clusters:
   - cluster:
       certificate-authority-data: [CERTIFICATE-DATA]
       server: [LOAD-BALANCER-DNS]:6443 # Edit this line
     name: default
   contexts:
   - context:
       cluster: default
       user: default
     name: default
   current-context: default
   kind: Config
   preferences: {}
   users:
   - name: default
     user:
       password: [PASSWORD]
       username: admin
   ```
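The `server` edit in step 3 can also be scripted with `sed`. A minimal local sketch, where the load balancer name and the demo file path are placeholders for your environment:

```shell
# Sketch: rewrite the kubeconfig server line to point at the load balancer.
# LB_DNS and the demo file path are placeholders for illustration.
LB_DNS="rancher-lb.example.com"
mkdir -p /tmp/kubecfg-demo
cat > /tmp/kubecfg-demo/k3s.yaml <<'EOF'
    server: https://127.0.0.1:6443
EOF
# Replace the localhost address with the load balancer DNS on port 6443:
sed -i "s|https://127.0.0.1:6443|https://${LB_DNS}:6443|" /tmp/kubecfg-demo/k3s.yaml
cat /tmp/kubecfg-demo/k3s.yaml
```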

**Result:** You can now use `kubectl` to manage your K3s cluster. If you have more than one kubeconfig file, you can specify which one you want to use by passing in the path to the file when using `kubectl`:

```
kubectl --kubeconfig ~/.kube/config/k3s.yaml get pods --all-namespaces
```

For more information about the `kubeconfig` file, refer to the [K3s documentation](https://rancher.com/docs/k3s/latest/en/cluster-access/) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files.

## Note on Upgrading

Upgrading an air-gapped environment can be accomplished in the following manner:

1. Download the new air-gap images (tar file) from the [releases](https://github.com/k3s-io/k3s/releases) page for the version of K3s you will be upgrading to. Place the tar file in the `/var/lib/rancher/k3s/agent/images/` directory on each node. Delete the old tar file.
2. Copy and replace the old K3s binary in `/usr/local/bin` on each node. Copy over the install script at https://get.k3s.io (as it is possible it has changed since the last release). Run the script again with the same environment variables you used previously.
3. Restart the K3s service (if it is not restarted automatically by the installer).
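The upgrade steps above can be sketched as a script. `VERSION` and `ARCH` are placeholder assumptions; the privileged commands are left commented so you can review them before running on a node:

```shell
# Sketch of the K3s air-gap upgrade flow (placeholder version/architecture).
VERSION="v1.27.4+k3s1"
ARCH="amd64"
# 1. Swap the images tarball on every node:
#    sudo rm -f /var/lib/rancher/k3s/agent/images/k3s-airgap-images-${ARCH}.tar
#    sudo cp ./k3s-airgap-images-${ARCH}.tar /var/lib/rancher/k3s/agent/images/
# 2. Replace the binary and re-run the installer with the same variables:
#    sudo cp ./k3s /usr/local/bin/k3s
#    INSTALL_K3S_SKIP_DOWNLOAD=true INSTALL_K3S_VERSION="${VERSION}" ./install.sh
# 3. Restart the service if the installer did not:
#    sudo systemctl restart k3s
echo "K3s upgrade plan: ${VERSION} (${ARCH})"
```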

</TabItem>
<TabItem value="RKE2">

This guide assumes that you have created your nodes in your air-gapped environment and have a secure Docker private registry on your bastion server.

## Installation Outline

1. [Create RKE2 configuration](#1-create-rke2-configuration)
2. [Create Registry YAML](#2-create-registry-yaml)
3. [Install RKE2](#3-install-rke2)
4. [Save and Start Using the kubeconfig File](#4-save-and-start-using-the-kubeconfig-file)

## 1. Create RKE2 configuration

Create the `config.yaml` file at `/etc/rancher/rke2/config.yaml`. This file contains all the configuration options necessary to create a highly available RKE2 cluster.

On the first server, the minimum config is:

```yaml
token: my-shared-secret
tls-san:
  - loadbalancer-dns-domain.com
```

On each additional server, the config file should contain the same token and tell RKE2 to connect to the existing first server:

```yaml
server: https://ip-of-first-server:9345
token: my-shared-secret
tls-san:
  - loadbalancer-dns-domain.com
```

For more information, refer to the [RKE2 documentation](https://docs.rke2.io/install/ha).

:::note

RKE2 additionally provides a `resolv-conf` option for kubelets, which may help with configuring DNS in air-gapped networks.

:::

## 2. Create Registry YAML

Create the `registries.yaml` file at `/etc/rancher/rke2/registries.yaml`. This file tells RKE2 the details it needs to connect to your private registry.

The `registries.yaml` file should look like this before plugging in the necessary information:

```yaml
---
mirrors:
  customreg:
    endpoint:
      - "https://ip-to-server:5000"
configs:
  customreg:
    auth:
      username: xxxxxx # this is the registry username
      password: xxxxxx # this is the registry password
    tls:
      cert_file: <path to the cert file used in the registry>
      key_file: <path to the key file used in the registry>
      ca_file: <path to the ca file used in the registry>
```

For more information on the private registry configuration file for RKE2, refer to the [RKE2 documentation.](https://docs.rke2.io/install/private_registry)

## 3. Install RKE2

Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/)

Download the install script, `rke2`, `rke2-images`, and `sha256sum` archives from the release, and upload them into a directory on each server:

```
mkdir /tmp/rke2-artifacts && cd /tmp/rke2-artifacts/
wget https://github.com/rancher/rke2/releases/download/v1.21.5%2Brke2r2/rke2-images.linux-amd64.tar.zst
wget https://github.com/rancher/rke2/releases/download/v1.21.5%2Brke2r2/rke2.linux-amd64.tar.gz
wget https://github.com/rancher/rke2/releases/download/v1.21.5%2Brke2r2/sha256sum-amd64.txt
curl -sfL https://get.rke2.io --output install.sh
```

Next, run `install.sh` using that directory on each server, as in the example below:

```
INSTALL_RKE2_ARTIFACT_PATH=/tmp/rke2-artifacts sh install.sh
```

Then enable and start the service on all servers:

```
systemctl enable rke2-server.service
systemctl start rke2-server.service
```

For more information, refer to the [RKE2 documentation](https://docs.rke2.io/install/airgap).

## 4. Save and Start Using the kubeconfig File

When you installed RKE2 on each Rancher server node, a `kubeconfig` file was created on the node at `/etc/rancher/rke2/rke2.yaml`. This file contains credentials for full access to the cluster, and you should save it in a secure location.

To use this `kubeconfig` file:

1. Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl), a Kubernetes command-line tool.
2. Copy the file at `/etc/rancher/rke2/rke2.yaml` and save it to the directory `~/.kube/config` on your local machine.
3. In the kubeconfig file, the `server` directive is defined as localhost. Configure the server as the DNS of your load balancer, referring to port 6443. (The Kubernetes API server will be reached at port 6443, while the Rancher server will be reached at ports 80 and 443.) Here is an example `rke2.yaml`:

   ```yaml
   apiVersion: v1
   clusters:
   - cluster:
       certificate-authority-data: [CERTIFICATE-DATA]
       server: [LOAD-BALANCER-DNS]:6443 # Edit this line
     name: default
   contexts:
   - context:
       cluster: default
       user: default
     name: default
   current-context: default
   kind: Config
   preferences: {}
   users:
   - name: default
     user:
       password: [PASSWORD]
       username: admin
   ```

**Result:** You can now use `kubectl` to manage your RKE2 cluster. If you have more than one kubeconfig file, you can specify which one you want to use by passing in the path to the file when using `kubectl`:

```
kubectl --kubeconfig ~/.kube/config/rke2.yaml get pods --all-namespaces
```

For more information about the `kubeconfig` file, refer to the [RKE2 documentation](https://docs.rke2.io/cluster_access) or the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) about organizing cluster access using `kubeconfig` files.

## Note on Upgrading

Upgrading an air-gapped environment can be accomplished in the following manner:

1. Download the new air-gap artifacts and install script from the [releases](https://github.com/rancher/rke2/releases) page for the version of RKE2 you will be upgrading to.
2. Run the script again with the same environment variables you used previously.
3. Restart the RKE2 service.

</TabItem>
</Tabs>

## Issues or Errors?

See the [Troubleshooting](../../install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) page.

## [Next: Install Rancher](install-rancher-ha.md)
@@ -0,0 +1,244 @@
---
title: 4. Install Rancher
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha"/>
</head>

This section describes how to deploy Rancher in a high-availability Kubernetes installation for an air-gapped environment. An air-gapped environment is one where the Rancher server is installed offline, behind a firewall, or behind a proxy.

## Privileged Access for Rancher

When the Rancher server is deployed in a Docker container, a local Kubernetes cluster is installed within the container for Rancher to use. Because many features of Rancher run as deployments, and privileged mode is required to run containers within containers, you will need to install Rancher with the `--privileged` option.

## Docker Instructions

If you want to continue the air-gapped installation using Docker commands, skip the rest of this page and follow the instructions on [this page.](docker-install-commands.md)

## Kubernetes Instructions

Rancher recommends installing Rancher on a Kubernetes cluster. A highly available Kubernetes installation is composed of three nodes running the Rancher server components on a Kubernetes cluster. The persistence layer (etcd) is also replicated on these three nodes, providing redundancy and data duplication in case one of the nodes fails.

### 1. Add the Helm Chart Repository

From a system that has access to the internet, fetch the latest Helm chart and copy the resulting manifests to a system that has access to the Rancher server cluster.

1. If you haven't already, install `helm` locally on a workstation that has internet access. Note: Refer to the [Helm version requirements](../../resources/helm-version-requirements.md) to choose a version of Helm to install Rancher.

2. Use the `helm repo add` command to add the Helm chart repository that contains charts to install Rancher. For more information about the repository choices and which is best for your use case, see [Choosing a Rancher Version](../../resources/choose-a-rancher-version.md).
   - Latest: Recommended for trying out the newest features
     ```
     helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
     ```
   - Stable: Recommended for production environments
     ```
     helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
     ```
   - Alpha: Experimental preview of upcoming releases.
     ```
     helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha
     ```
   Note: Upgrades are not supported to, from, or between Alphas.

3. Fetch the latest Rancher chart. This will pull down the chart and save it in the current directory as a `.tgz` file.
   ```plain
   helm fetch rancher-<CHART_REPO>/rancher
   ```

   If you require a specific version of Rancher, you can fetch it with the Helm `--version` parameter, as in the following example:
   ```plain
   helm fetch rancher-stable/rancher --version=v2.4.8
   ```

### 2. Choose your SSL Configuration

Rancher Server is designed to be secure by default and requires SSL/TLS configuration.

When Rancher is installed on an air-gapped Kubernetes cluster, there are two recommended options for the source of the certificate.

:::note

If you want to terminate SSL/TLS externally, see [TLS termination on an External Load Balancer](../../installation-references/helm-chart-options.md#external-tls-termination).

:::

| Configuration | Chart option | Description | Requires cert-manager |
| ------------- | ------------ | ----------- | --------------------- |
| Rancher Generated Self-Signed Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self-signed).<br/> This is the **default** and does not need to be added when rendering the Helm template. | yes |
| Certificates from Files | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes secret(s).<br/> This option must be passed when rendering the Rancher Helm template. | no |

### Helm Chart Options for Air Gap Installations

When setting up the Rancher Helm template, there are several options in the Helm chart that are designed specifically for air gap installations.

| Chart Option | Chart Value | Description |
| ------------ | ----------- | ----------- |
| `certmanager.version` | `<version>` | Configures the proper Rancher TLS issuer depending on the running cert-manager version. |
| `systemDefaultRegistry` | `<REGISTRY.YOURDOMAIN.COM:PORT>` | Configures the Rancher server to always pull from your private registry when provisioning clusters. |
| `useBundledSystemChart` | `true` | Configures the Rancher server to use the packaged copy of the Helm system charts. The [system charts](https://github.com/rancher/system-charts) repository contains all the catalog items required for features such as monitoring, logging, alerting and global DNS. These [Helm charts](https://github.com/rancher/system-charts) are located in GitHub, but since you are in an air-gapped environment, using the charts that are bundled within Rancher is much easier than setting up a Git mirror. |

### 3. Fetch the Cert-Manager Chart

Based on the choice you made in [2. Choose your SSL Configuration](#2-choose-your-ssl-configuration), complete one of the procedures below.

#### Option A: Default Self-Signed Certificate

By default, Rancher generates a CA and uses cert-manager to issue the certificate for access to the Rancher server interface.

:::note

Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.11.0, please see our [upgrade cert-manager documentation](../../resources/upgrade-cert-manager.md).

:::

##### 1. Add the cert-manager Repo

From a system connected to the internet, add the cert-manager repo to Helm:

```plain
helm repo add jetstack https://charts.jetstack.io
helm repo update
```

##### 2. Fetch the cert-manager Chart

Fetch the latest cert-manager chart available from the [Helm chart repository](https://artifacthub.io/packages/helm/cert-manager/cert-manager).

```plain
helm fetch jetstack/cert-manager --version v1.11.0
```

##### 3. Retrieve the cert-manager CRDs

Download the required CRD file for cert-manager:
```plain
curl -L -o cert-manager-crd.yaml https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.crds.yaml
```

### 4. Install Rancher

Copy the fetched charts to a system that has access to the Rancher server cluster to complete installation.

#### 1. Install cert-manager

Install cert-manager with the same options you would use in a standard install, but set the `image.repository` options to pull the images from your private registry.

:::note

To see options on how to customize the cert-manager install (including for cases where your cluster uses PodSecurityPolicies), see the [cert-manager docs](https://artifacthub.io/packages/helm/cert-manager/cert-manager#configuration).

:::

<details id="install-cert-manager">
<summary>Click to expand</summary>

If you are using self-signed certificates, install cert-manager:

1. Create the namespace for cert-manager.

   ```plain
   kubectl create namespace cert-manager
   ```

2. Create the cert-manager CustomResourceDefinitions (CRDs).

   ```plain
   kubectl apply -f cert-manager-crd.yaml
   ```

3. Install cert-manager.

   ```plain
   helm install cert-manager ./cert-manager-v1.11.0.tgz \
     --namespace cert-manager \
     --set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller \
     --set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook \
     --set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector \
     --set startupapicheck.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-ctl
   ```

</details>

#### 2. Install Rancher

First, refer to [Adding TLS Secrets](../../resources/add-tls-secrets.md) to publish the certificate files so Rancher and the ingress controller can use them.

Then, create the namespace for Rancher using kubectl:

```plain
kubectl create namespace cattle-system
```

Next, install Rancher, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher-launched Kubernetes clusters or Rancher tools.

| Placeholder | Description |
| ----------- | ----------- |
| `<VERSION>` | The version number of the output tarball. |
| `<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry. |
| `<CERTMANAGER_VERSION>` | The cert-manager version running on the Kubernetes cluster. |

```plain
helm install rancher ./rancher-<VERSION>.tgz \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set certmanager.version=<CERTMANAGER_VERSION> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true
```

The `systemDefaultRegistry` option sets a default private registry to be used in Rancher, and `useBundledSystemChart=true` tells Rancher to use the packaged system charts.

**Optional**: To install a specific Rancher version, set the `rancherImageTag` value, for example: `--set rancherImageTag=v2.5.8`

#### Option B: Certificates From Files Using Kubernetes Secrets

##### 1. Create Secrets

Create Kubernetes secrets from your own certificates for Rancher to use. The common name for the cert will need to match the `hostname` option in the command below, or the ingress controller will fail to provision the site for Rancher.
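For reference, the resulting secret typically looks like the sketch below. The name `tls-rancher-ingress` in the `cattle-system` namespace is what the Rancher chart expects by default, and the certificate data shown is a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-rancher-ingress   # name expected by the Rancher chart
  namespace: cattle-system
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded key>
```

This is typically created from files with `kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=tls.crt --key=tls.key`; see [Adding TLS Secrets](../../resources/add-tls-secrets.md) for details.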

##### 2. Install Rancher

Install Rancher, declaring your chosen options. Use the reference table below to replace each placeholder. Rancher needs to be configured to use the private registry in order to provision any Rancher-launched Kubernetes clusters or Rancher tools.

| Placeholder | Description |
| -------------------------------- | ----------------------------------------------- |
| `<VERSION>` | The version number of the output tarball. |
| `<RANCHER.YOURDOMAIN.COM>` | The DNS name you pointed at your load balancer. |
| `<REGISTRY.YOURDOMAIN.COM:PORT>` | The DNS name for your private registry. |

```plain
helm install rancher ./rancher-<VERSION>.tgz \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true
```

The `systemDefaultRegistry` option sets a default private registry to be used in Rancher, and `useBundledSystemChart=true` tells Rancher to use the packaged system charts.

If you are using a private CA-signed certificate, add `--set privateCA=true` after `--set ingress.tls.source=secret`:

```plain
helm install rancher ./rancher-<VERSION>.tgz \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set privateCA=true \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true
```

The installation is complete.
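You can verify the rollout before moving on. A hedged sketch: the commands below require `kubectl` access to the cluster, so they are shown as comments for you to run from your workstation:

```shell
# Sketch: verify that the Rancher deployment rolled out successfully.
#   kubectl -n cattle-system rollout status deploy/rancher
#   kubectl -n cattle-system get pods
MSG="verify with: kubectl -n cattle-system rollout status deploy/rancher"
echo "$MSG"
```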

## Additional Resources

These resources could be helpful when installing Rancher:

- [Importing and installing extensions in an air-gapped environment](../../../../integrations-in-rancher/rancher-extensions.md#importing-and-installing-extensions-in-an-air-gapped-environment)
- [Rancher Helm chart options](../../installation-references/helm-chart-options.md)
- [Adding TLS secrets](../../resources/add-tls-secrets.md)
- [Troubleshooting Rancher Kubernetes Installations](../../install-upgrade-on-a-kubernetes-cluster/troubleshooting.md)
@@ -0,0 +1,309 @@
---
title: '2. Collect and Publish Images to your Private Registry'
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/publish-images"/>
</head>

This section describes how to set up your private registry so that when you install Rancher, Rancher will pull all the required images from this registry.

By default, all images used to [provision Kubernetes clusters](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md) or launch any tools in Rancher, e.g., monitoring, pipelines, and alerts, are pulled from Docker Hub. In an air-gapped installation of Rancher, you will need a private registry that is located somewhere accessible by your Rancher server. Then, you will load the registry with all the images.

Populating the private registry with images is the same process whether you are installing Rancher with Docker or on a Kubernetes cluster.

The steps in this section differ depending on whether you plan to use Rancher to provision downstream clusters with Windows nodes. By default, we provide the steps for populating your private registry assuming that Rancher will provision downstream Kubernetes clusters with only Linux nodes. If you plan on provisioning any [downstream Kubernetes clusters using Windows nodes](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/use-windows-clusters/use-windows-clusters.md), there are separate instructions to support the images needed.

:::note Prerequisites:

You must have a [private registry](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry) available to use.

If the registry has certs, follow [this K3s documentation](https://rancher.com/docs/k3s/latest/en/installation/private-registry/) about adding a private registry. The certs and registry configuration files need to be mounted into the Rancher container.

:::

<Tabs>
<TabItem value="Linux Only Clusters">

For Rancher servers that will only provision Linux clusters, these are the steps to populate your private registry.

1. [Find the required assets for your Rancher version](#1-find-the-required-assets-for-your-rancher-version)
2. [Collect the cert-manager image](#2-collect-the-cert-manager-image) (unless you are bringing your own certificates or terminating TLS on a load balancer)
3. [Save the images to your workstation](#3-save-the-images-to-your-workstation)
4. [Populate the private registry](#4-populate-the-private-registry)

### Prerequisites

These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space.

If you will use ARM64 hosts, the registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests.

### 1. Find the required assets for your Rancher version

1. Go to our [releases page,](https://github.com/rancher/rancher/releases) find the Rancher v2.x.x release that you want to install, and click **Assets**. Note: Don't use releases marked `rc` or `Pre-release`, as they are not stable for production environments.

2. From the release's **Assets** section, download the following files, which are required to install Rancher in an air gap environment:

| Release File | Description |
| ---------------- | -------------- |
| `rancher-images.txt` | This file contains a list of the images needed to install Rancher, provision clusters, and use Rancher tools. |
| `rancher-save-images.sh` | This script pulls all the images in `rancher-images.txt` from Docker Hub and saves them as `rancher-images.tar.gz`. |
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |
|
||||
|
||||
### 2. Collect the cert-manager image
|
||||
|
||||
:::note
|
||||
|
||||
Skip this step if you are using your own certificates, or if you are terminating TLS on an external load balancer.
|
||||
|
||||
:::
|
||||
|
||||
In a Kubernetes Install, if you elect to use the Rancher default self-signed TLS certificates, you must add the [`cert-manager`](https://artifacthub.io/packages/helm/cert-manager/cert-manager) image to `rancher-images.txt` as well.
|
||||
|
||||
1. Fetch the latest `cert-manager` Helm chart and parse the template for image details:
|
||||
|
||||
:::note
|
||||
|
||||
Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, please see our [upgrade documentation](../../resources/upgrade-cert-manager.md).
|
||||
|
||||
:::
|
||||
|
||||
```plain
|
||||
helm repo add jetstack https://charts.jetstack.io
|
||||
helm repo update
|
||||
helm fetch jetstack/cert-manager
|
||||
helm template ./cert-manager-<version>.tgz | awk '$1 ~ /image:/ {print $2}' | sed s/\"//g >> ./rancher-images.txt
|
||||
```
|
||||
|
||||
2. Sort and unique the images list to remove any overlap between the sources:
|
||||
|
||||
```plain
|
||||
sort -u rancher-images.txt -o rancher-images.txt
|
||||
```
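
The `awk`/`sed` filter in step 1 simply extracts the value of each `image:` field from the rendered templates. A quick illustration on a sample manifest line (the image name below is only an example):

```shell
printf 'image: "quay.io/jetstack/cert-manager-controller:v1.13.0"\n' \
  | awk '$1 ~ /image:/ {print $2}' | sed 's/"//g'
# quay.io/jetstack/cert-manager-controller:v1.13.0
```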

### 3. Save the images to your workstation

1. Make `rancher-save-images.sh` executable:

   ```
   chmod +x rancher-save-images.sh
   ```

1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images:

   ```plain
   ./rancher-save-images.sh --image-list ./rancher-images.txt
   ```

   **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, a tarball named `rancher-images.tar.gz` is written to your current directory. Check that the file is present.

### 4. Populate the private registry

Next, move the images in `rancher-images.tar.gz` to your private registry using the scripts to load the images.

The `rancher-images.txt` file is expected to be on the workstation, in the same directory where you run the `rancher-load-images.sh` script. The `rancher-images.tar.gz` file should also be in the same directory.

1. Log into your private registry if required:

   ```plain
   docker login <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

1. Make `rancher-load-images.sh` executable:

   ```
   chmod +x rancher-load-images.sh
   ```

1. Use `rancher-load-images.sh` to extract, tag, and push the images from `rancher-images.tar.gz` to your private registry:

   ```plain
   ./rancher-load-images.sh --image-list ./rancher-images.txt --registry <REGISTRY.YOURDOMAIN.COM:PORT>
   ```
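
Under the hood, the load script re-tags each image for your registry before pushing it. A minimal sketch of that re-tagging step (the registry name and image names here are illustrative only; the real script also runs `docker push` for each tag):

```shell
REGISTRY="registry.example.com:5000"   # illustrative private registry address
# For each image in the list, the script effectively runs:
#   docker tag <image> <registry>/<image> && docker push <registry>/<image>
printf 'rancher/rancher-agent:v2.14.0\nbusybox:1.36\n' |
while read -r image; do
  echo "docker tag ${image} ${REGISTRY}/${image}"
done
```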

</TabItem>
<TabItem value="Linux and Windows Clusters">

For Rancher servers that will provision Linux and Windows clusters, there are separate steps to populate your private registry for the Windows images and the Linux images. Since a Windows cluster is a mix of Linux and Windows nodes, the Linux images pushed to the private registry are manifests.

## Windows Steps

The Windows images need to be collected and pushed from a Windows Server workstation.

1. <a href="#windows-1">Find the required assets for your Rancher version</a>
2. <a href="#windows-2">Save the images to your Windows Server workstation</a>
3. <a href="#windows-3">Prepare the Docker daemon</a>
4. <a href="#windows-4">Populate the private registry</a>

### Prerequisites

These steps expect you to use a Windows Server 1809 workstation that has internet access, access to your private registry, and at least 50 GB of disk space.

The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters.

Your registry must support manifests. As of April 2020, Amazon Elastic Container Registry does not support manifests.

<a name="windows-1"></a>

### 1. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments.

2. From the release's **Assets** section, download the following files:

| Release File | Description |
|----------------------------|------------------|
| `rancher-windows-images.txt` | This file contains a list of the Windows images needed to provision Windows clusters. |
| `rancher-save-images.ps1` | This script pulls all the images in `rancher-windows-images.txt` from Docker Hub and saves them as `rancher-windows-images.tar.gz`. |
| `rancher-load-images.ps1` | This script loads the images from the `rancher-windows-images.tar.gz` file and pushes them to your private registry. |

<a name="windows-2"></a>

### 2. Save the images to your Windows Server workstation

1. Using `powershell`, go to the directory that has the files that were downloaded in the previous step.

1. Run `rancher-save-images.ps1` to create a tarball of all the required images:

   ```plain
   ./rancher-save-images.ps1
   ```

   **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, a tarball named `rancher-windows-images.tar.gz` is written to your current directory. Check that the file is present.

<a name="windows-3"></a>

### 3. Prepare the Docker daemon

Append your private registry address to the `allow-nondistributable-artifacts` field in the Docker daemon configuration (`C:\ProgramData\Docker\config\daemon.json`). This step is required because the base layers of the Windows images are maintained by the `mcr.microsoft.com` registry; those layers are missing from Docker Hub and need to be pulled into the private registry.

```json
{
  ...
  "allow-nondistributable-artifacts": [
    ...
    "<REGISTRY.YOURDOMAIN.COM:PORT>"
  ]
  ...
}
```

<a name="windows-4"></a>

### 4. Populate the private registry

Move the images in the `rancher-windows-images.tar.gz` to your private registry using the scripts to load the images.

The `rancher-windows-images.txt` file is expected to be on the workstation, in the same directory where you run the `rancher-load-images.ps1` script. The `rancher-windows-images.tar.gz` file should also be in the same directory.

1. Using `powershell`, log into your private registry if required:

   ```plain
   docker login <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

1. Using `powershell`, use `rancher-load-images.ps1` to extract, tag, and push the images from `rancher-windows-images.tar.gz` to your private registry:

   ```plain
   ./rancher-load-images.ps1 --registry <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

## Linux Steps

The Linux images need to be collected and pushed from a Linux host, but this _must be done after_ populating the private registry with the Windows images. These steps differ from the Linux-only steps because the Linux images that are pushed are actually manifests that support both Windows and Linux.

1. <a href="#linux-1">Find the required assets for your Rancher version</a>
2. <a href="#linux-2">Collect all the required images</a>
3. <a href="#linux-3">Save the images to your Linux workstation</a>
4. <a href="#linux-4">Populate the private registry</a>

### Prerequisites

You must populate the private registry with the Windows images before populating it with the Linux images. If you have already populated the registry with Linux images, you will need to follow these instructions again, as they publish manifests that support both Windows and Linux.

These steps expect you to use a Linux workstation that has internet access, access to your private registry, and at least 20 GB of disk space.

The workstation must have Docker 18.02+ in order to support manifests, which are required when provisioning Windows clusters.

<a name="linux-1"></a>

### 1. Find the required assets for your Rancher version

1. Browse to our [releases page](https://github.com/rancher/rancher/releases) and find the Rancher v2.x.x release that you want to install. Don't download releases marked `rc` or `Pre-release`, as they are not stable for production environments. Click **Assets**.

2. From the release's **Assets** section, download the following files:

| Release File | Description |
|----------------------------| -------------------------- |
| `rancher-images.txt` | This file contains a list of the images needed to install Rancher, provision clusters, and use Rancher tools. |
| `rancher-windows-images.txt` | This file contains a list of the images needed to provision Windows clusters. |
| `rancher-save-images.sh` | This script pulls all the images in `rancher-images.txt` from Docker Hub and saves them as `rancher-images.tar.gz`. |
| `rancher-load-images.sh` | This script loads images from the `rancher-images.tar.gz` file and pushes them to your private registry. |

<a name="linux-2"></a>

### 2. Collect all the required images

**For Kubernetes installs using the Rancher-generated self-signed certificate:** If you elect to use the Rancher default self-signed TLS certificates, you must also add the [`cert-manager`](https://artifacthub.io/packages/helm/cert-manager/cert-manager) image to `rancher-images.txt`. Skip this step if you are using your own certificates.

1. Fetch the latest `cert-manager` Helm chart and parse the template for image details:

   :::note

   Recent changes to cert-manager require an upgrade. If you are upgrading Rancher and using a version of cert-manager older than v0.12.0, see our [upgrade documentation](../../resources/upgrade-cert-manager.md).

   :::

   ```plain
   helm repo add jetstack https://charts.jetstack.io
   helm repo update
   helm fetch jetstack/cert-manager
   helm template ./cert-manager-<version>.tgz | awk '$1 ~ /image:/ {print $2}' | sed s/\"//g >> ./rancher-images.txt
   ```

2. Sort and deduplicate the image list to remove any overlap between the sources:

   ```plain
   sort -u rancher-images.txt -o rancher-images.txt
   ```

<a name="linux-3"></a>

### 3. Save the images to your workstation

1. Make `rancher-save-images.sh` executable:

   ```
   chmod +x rancher-save-images.sh
   ```

1. Run `rancher-save-images.sh` with the `rancher-images.txt` image list to create a tarball of all the required images:

   ```plain
   ./rancher-save-images.sh --image-list ./rancher-images.txt
   ```

   **Result:** Docker begins pulling the images used for an air gap install. Be patient. This process takes a few minutes. When the process completes, a tarball named `rancher-images.tar.gz` is written to your current directory. Check that the file is present.

<a name="linux-4"></a>

### 4. Populate the private registry

Move the images in the `rancher-images.tar.gz` to your private registry using the `rancher-load-images.sh` script to load the images.

The image list, `rancher-images.txt` or `rancher-windows-images.txt`, is expected to be on the workstation, in the same directory where you run the `rancher-load-images.sh` script. The `rancher-images.tar.gz` file should also be in the same directory.

1. Log into your private registry if required:

   ```plain
   docker login <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

1. Make `rancher-load-images.sh` executable:

   ```
   chmod +x rancher-load-images.sh
   ```

1. Use `rancher-load-images.sh` to extract, tag, and push the images from `rancher-images.tar.gz` to your private registry:

   ```plain
   ./rancher-load-images.sh --image-list ./rancher-images.txt \
       --windows-image-list ./rancher-windows-images.txt \
       --registry <REGISTRY.YOURDOMAIN.COM:PORT>
   ```

</TabItem>
</Tabs>

### [Next step for Kubernetes Installs - Launch a Kubernetes Cluster](install-kubernetes.md)

### [Next step for Docker Installs - Install Rancher](install-rancher-ha.md)
@@ -0,0 +1,23 @@
---
title: Other Installation Methods
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods"/>
</head>

### Air Gapped Installations

Follow [these steps](air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) to install the Rancher server in an air gapped environment.

An air gapped environment could be one where the Rancher server is installed offline, behind a firewall, or behind a proxy.

### Docker Installations

The [single-node Docker installation](rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md) is for Rancher users who want to test out Rancher. Instead of running on a Kubernetes cluster using Helm, you install the Rancher server component on a single node using a `docker run` command.

The Docker installation is for development and testing environments only.

Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.

The Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.](../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md)
@@ -0,0 +1,114 @@
---
title: '2. Install Kubernetes'
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-kubernetes"/>
</head>

Once the infrastructure is ready, you can continue with setting up a Kubernetes cluster in which to install Rancher.

The steps to set up RKE2 or K3s are shown below.

For convenience, export the IP address and port of your proxy into an environment variable and set up the `HTTP_PROXY` variables for your current shell on every node:

:::caution

The `NO_PROXY` environment variable is not standardized, and the accepted format of the value can differ between applications. When configuring the `NO_PROXY` variable for Rancher, the value must adhere to the format expected by Golang.

Specifically, the value should be a comma-delimited string which only contains IP addresses, CIDR notation, domain names, or special DNS labels (e.g. `*`). For a full description of the expected value format, refer to the [**upstream Golang documentation**](https://pkg.go.dev/golang.org/x/net/http/httpproxy#Config).

:::

```
export proxy_host="10.0.0.5:8888"
export HTTP_PROXY=http://${proxy_host}
export HTTPS_PROXY=http://${proxy_host}
export NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16
```
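
As a rough illustration of how a `NO_PROXY` entry such as `.svc` is applied: Golang-based clients bypass the proxy when the host matches a listed suffix. The loop below mimics that suffix check in shell (a simplification only; real matching follows the Golang rules linked above, and the hostname is just an example):

```shell
no_proxy=".svc,.cluster.local"
host="kubernetes.default.svc"
bypass="no"
IFS=','
for entry in $no_proxy; do
  # Bypass the proxy when the host ends in one of the listed suffixes
  case "$host" in *"$entry") bypass="yes" ;; esac
done
echo "bypass proxy for ${host}: ${bypass}"
# bypass proxy for kubernetes.default.svc: yes
```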

<Tabs>
<TabItem value="K3s">

First, configure the HTTP proxy settings on the K3s systemd service so that K3s's containerd can pull images through the proxy (the unquoted `EOF` lets the shell expand `${proxy_host}` from the export above):

```
cat <<EOF | sudo tee /etc/default/k3s > /dev/null
HTTP_PROXY=http://${proxy_host}
HTTPS_PROXY=http://${proxy_host}
NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
EOF
```

Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [Rancher Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/).

To specify the K3s (Kubernetes) version, set the `INSTALL_K3S_VERSION` environment variable (e.g., `INSTALL_K3S_VERSION="v1.24.10+k3s1"`) when running the K3s installation script.

On the first node, create a new cluster:

```
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=<VERSION> K3S_TOKEN=<TOKEN> sh -s - server --cluster-init
```

Then join the other nodes:

```
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=<VERSION> K3S_TOKEN=<TOKEN> sh -s - server --server https://<SERVER>:6443
```

Here, `<SERVER>` is the IP address or a valid DNS name of the first server, and `<TOKEN>` is the node token from the server, found at `/var/lib/rancher/k3s/server/node-token`.

For more information on installing K3s, see the [K3s installation docs](https://docs.k3s.io/installation).

To have a look at your cluster, run:

```
kubectl cluster-info
kubectl get pods --all-namespaces
```

</TabItem>
<TabItem value="RKE2">

On every node, run the RKE2 installation script. Ensure that the RKE2 version you are installing is [supported by Rancher](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/).

```
curl -sfL https://get.rke2.io | INSTALL_RKE2_CHANNEL=v1.xx sh -
```

Then configure the HTTP proxy settings on the RKE2 systemd service so that RKE2's containerd can pull images through the proxy (the unquoted `EOF` lets the shell expand `${proxy_host}`):

```
cat <<EOF | sudo tee /etc/default/rke2-server > /dev/null
HTTP_PROXY=http://${proxy_host}
HTTPS_PROXY=http://${proxy_host}
NO_PROXY=127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
EOF
```

Next, create the RKE2 configuration file on every node, following the [RKE2 High Availability documentation](https://docs.rke2.io/install/ha).

After that, enable and start the `rke2-server` service:

```
systemctl enable rke2-server.service
systemctl start rke2-server.service
```

For more information on installing RKE2, see the [RKE2 documentation](https://docs.rke2.io).

To have a look at your cluster, run:

```
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
alias kubectl=/var/lib/rancher/rke2/bin/kubectl
kubectl cluster-info
kubectl get pods --all-namespaces
```

</TabItem>
</Tabs>

### Issues or errors?

See the [Troubleshooting](../../install-upgrade-on-a-kubernetes-cluster/troubleshooting.md) page.

### [Next: Install Rancher](install-rancher.md)
@@ -0,0 +1,104 @@
---
title: 3. Install Rancher
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/install-rancher"/>
</head>

Now that you have a running RKE2/K3s cluster, you can install Rancher on it. For security reasons, all traffic to Rancher must be encrypted with TLS. In this tutorial, you will automatically issue a self-signed certificate through [cert-manager](https://cert-manager.io/). In a real-world use case, you will likely use Let's Encrypt or provide your own certificate.

### Install the Helm CLI

<DeprecationHelm2 />

Install the [Helm](https://helm.sh/docs/intro/install/) CLI on a host where you have a kubeconfig to access your Kubernetes cluster:

```
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod +x get_helm.sh
sudo ./get_helm.sh
```

### Install cert-manager

Add the cert-manager Helm repository:

```
helm repo add jetstack https://charts.jetstack.io
```

Create a namespace for cert-manager:

```
kubectl create namespace cert-manager
```

Install the cert-manager CustomResourceDefinitions:

```
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/<VERSION>/cert-manager.crds.yaml
```

Then install cert-manager with Helm. Note that cert-manager also needs your proxy configured, in case it needs to communicate with Let's Encrypt or other external certificate issuers:

:::note

To see options on how to customize the cert-manager install (including for cases where your cluster uses PodSecurityPolicies), see the [cert-manager docs](https://artifacthub.io/packages/helm/cert-manager/cert-manager#configuration).

:::

```
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --set http_proxy=http://${proxy_host} \
  --set https_proxy=http://${proxy_host} \
  --set no_proxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local
```
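
The doubled backslashes escape the commas so that Helm does not treat them as `--set` list separators. An equivalent, arguably more readable approach is to put the proxy settings in a small values file (a sketch; the file name is arbitrary, and the keys mirror the `--set` flags above) and pass it with `-f proxy-values.yaml`:

```yaml
# proxy-values.yaml (illustrative; assumes the same proxy address as earlier steps)
http_proxy: "http://10.0.0.5:8888"
https_proxy: "http://10.0.0.5:8888"
no_proxy: "127.0.0.0/8,10.0.0.0/8,cattle-system.svc,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"
```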

Now wait until cert-manager has finished starting up:

```
kubectl rollout status deployment -n cert-manager cert-manager
kubectl rollout status deployment -n cert-manager cert-manager-webhook
```

### Install Rancher

Next, you can install Rancher itself. First, add the Helm repository:

```
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
```

Create a namespace:

```
kubectl create namespace cattle-system
```

Then install Rancher with Helm. Rancher also needs a proxy configuration so that it can communicate with external application catalogs and retrieve Kubernetes version update metadata:

```
helm upgrade --install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set proxy=http://${proxy_host} \
  --set noProxy=127.0.0.0/8\\,10.0.0.0/8\\,cattle-system.svc\\,172.16.0.0/12\\,192.168.0.0/16\\,.svc\\,.cluster.local
```

Wait for the deployment to finish:

```
kubectl rollout status deployment -n cattle-system rancher
```

You can now navigate to `https://rancher.example.com` and start using Rancher.

### Additional Resources

These resources could be helpful when installing Rancher:

- [Rancher Helm chart options](../../installation-references/helm-chart-options.md)
- [Adding TLS secrets](../../resources/add-tls-secrets.md)
- [Troubleshooting Rancher Kubernetes Installations](../../install-upgrade-on-a-kubernetes-cluster/troubleshooting.md)
@@ -0,0 +1,17 @@
---
title: Installing Rancher behind an HTTP Proxy
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy"/>
</head>

In many enterprise environments, servers or VMs running on premises do not have direct internet access, but must connect to external services through an HTTP(S) proxy for security reasons. This tutorial shows, step by step, how to set up a highly available Rancher installation in such an environment.

Alternatively, it is also possible to set up Rancher completely air gapped, without any internet access. This process is described in detail in the [Rancher docs](../air-gapped-helm-cli-install/air-gapped-helm-cli-install.md).

## Installation Outline

1. [Set up infrastructure](set-up-infrastructure.md)
2. [Set up a Kubernetes cluster](install-kubernetes.md)
3. [Install Rancher](install-rancher.md)
@@ -0,0 +1,67 @@
---
title: '1. Set up Infrastructure'
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/rancher-behind-an-http-proxy/set-up-infrastructure"/>
</head>

In this section, you will provision the underlying infrastructure for your Rancher management server with internet access through an HTTP proxy.

To install the Rancher management server on a high-availability RKE2/K3s cluster, we recommend setting up the following infrastructure:

- **Three Linux nodes,** typically virtual machines, in an infrastructure provider such as Amazon's EC2, Google Compute Engine, or vSphere.
- **A load balancer** to direct front-end traffic to the three nodes.
- **A DNS record** to map a URL to the load balancer. This will become the Rancher server URL, and downstream Kubernetes clusters will need to reach it.

These nodes must be in the same region/data center. You may place these servers in separate availability zones.

### Why three nodes?

In an RKE2/K3s cluster, Rancher server data is stored in etcd. This etcd database runs on all three nodes.

The etcd database requires an odd number of nodes so that it can always elect a leader with a majority of the etcd cluster. If the etcd database cannot elect a leader, etcd can suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems), requiring the cluster to be restored from backup. If one of the three etcd nodes fails, the two remaining nodes can elect a leader because they have the majority of the total number of etcd nodes.
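
The quorum arithmetic behind the odd-number recommendation can be sketched quickly: a cluster of `n` members needs `floor(n/2) + 1` members for a majority, so a fourth member adds no extra failure tolerance over three:

```shell
# Quorum and failure tolerance for etcd clusters of various sizes
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "${n} members: quorum ${quorum}, tolerates $(( n - quorum )) failure(s)"
done
# 3 members: quorum 2, tolerates 1 failure(s)
# 4 members: quorum 3, tolerates 1 failure(s)
```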
|
||||
|
||||
### 1. Set up Linux Nodes
|
||||
|
||||
These hosts will connect to the internet through an HTTP proxy.
|
||||
|
||||
Make sure that your nodes fulfill the general installation requirements for [OS, container runtime, hardware, and networking.](../../installation-requirements/installation-requirements.md)
|
||||
|
||||
For an example of one way to set up Linux nodes, refer to this [tutorial](../../../../how-to-guides/new-user-guides/infrastructure-setup/nodes-in-amazon-ec2.md) for setting up nodes as instances in Amazon EC2.
|
||||
|
||||
### 2. Set up the Load Balancer
|
||||
|
||||
You will also need to set up a load balancer to direct traffic to the Rancher replica on both nodes. That will prevent an outage of any single node from taking down communications to the Rancher management server.
|
||||
|
||||
When Kubernetes gets set up in a later step, the RKE2/K3s tool will deploy an NGINX Ingress controller. This controller will listen on ports 80 and 443 of the worker nodes, answering traffic destined for specific hostnames.
|
||||
|
||||
When Rancher is installed (also in a later step), the Rancher system creates an Ingress resource. That Ingress tells the NGINX Ingress controller to listen for traffic destined for the Rancher hostname. The NGINX Ingress controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster.
|
||||
|
||||
For your implementation, consider if you want or need to use a Layer-4 or Layer-7 load balancer:
|
||||
|
||||
- **A layer-4 load balancer** is the simpler of the two choices, in which you are forwarding TCP traffic to your nodes. We recommend configuring your load balancer as a Layer 4 balancer, forwarding traffic to ports TCP/80 and TCP/443 to the Rancher management cluster nodes. The Ingress controller on the cluster will redirect HTTP traffic to HTTPS and terminate SSL/TLS on port TCP/443. The Ingress controller will forward traffic to port TCP/80 to the Ingress pod in the Rancher deployment.
|
||||
- **A layer-7 load balancer** is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of handling TLS termination at the load balancer, as opposed to Rancher doing TLS termination itself. This can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 load balancer is not able to concern itself with. If you decide to terminate the SSL/TLS traffic on a layer-7 load balancer, you will need to use the `--set tls=external` option when installing Rancher in a later step. For more information, refer to the [Rancher Helm chart options.](../../installation-references/helm-chart-options.md#external-tls-termination)
For an example showing how to set up an NGINX load balancer, refer to [this page.](../../../../how-to-guides/new-user-guides/infrastructure-setup/nginx-load-balancer.md)

For a how-to guide for setting up an Amazon ELB Network Load Balancer, refer to [this page.](../../../../how-to-guides/new-user-guides/infrastructure-setup/amazon-elb-load-balancer.md)

:::note Important:

Do not use this load balancer (i.e., the `local` cluster Ingress) to load balance applications other than Rancher following installation. Sharing this Ingress with other applications may result in websocket errors to Rancher following Ingress configuration reloads for other apps. We recommend dedicating the `local` cluster to Rancher and no other applications.

:::

### 3. Set up the DNS Record

Once you have set up your load balancer, you will need to create a DNS record to send traffic to this load balancer.

Depending on your environment, this may be an A record pointing to the load balancer IP, or it may be a CNAME pointing to the load balancer hostname. In either case, make sure this record is the hostname that you intend Rancher to respond on.
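For illustration, the resulting records might look like the following zone-file sketch; the hostname, TTL, IP address, and load balancer hostname are all placeholders:

```
; A record pointing at the load balancer IP
rancher.example.com.   300  IN  A      203.0.113.10

; CNAME pointing at the load balancer hostname (e.g., an AWS ELB)
rancher.example.com.   300  IN  CNAME  my-elb-1234567890.us-east-1.elb.amazonaws.com.
```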
You will need to specify this hostname in a later step when you install Rancher, and it is not possible to change it later. Make sure that your decision is a final one.

For a how-to guide for setting up a DNS record to route domain traffic to an Amazon ELB load balancer, refer to the [official AWS documentation.](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer)

### [Next: Set up a Kubernetes cluster](install-kubernetes.md)
---
title: Troubleshooting Certificates
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/certificate-troubleshooting"/>
</head>

<DockerSupportWarning />

## How Do I Know if My Certificates are in PEM Format?

You can recognize the PEM format by the following traits:

- The file begins with the following header:
```
-----BEGIN CERTIFICATE-----
```
- The header is followed by a long string of characters.
- The file ends with a footer:
```
-----END CERTIFICATE-----
```
PEM Certificate Example:

```
-----BEGIN CERTIFICATE-----
MIIGVDCCBDygAwIBAgIJAMiIrEm29kRLMA0GCSqGSIb3DQEBCwUAMHkxCzAJBgNV
... more lines
VWQqljhfacYPgp8KJUJENQ9h5hZ2nSCrI+W00Jcw4QcEdCI8HL5wmg==
-----END CERTIFICATE-----
```
PEM Certificate Key Example:

```
-----BEGIN RSA PRIVATE KEY-----
MIIGVDCCBDygAwIBAgIJAMiIrEm29kRLMA0GCSqGSIb3DQEBCwUAMHkxCzAJBgNV
... more lines
VWQqljhfacYPgp8KJUJENQ9h5hZ2nSCrI+W00Jcw4QcEdCI8HL5wmg==
-----END RSA PRIVATE KEY-----
```

If your key looks like the example below, see [Converting a Certificate Key From PKCS8 to PKCS1.](#converting-a-certificate-key-from-pkcs8-to-pkcs1)

```
-----BEGIN PRIVATE KEY-----
MIIGVDCCBDygAwIBAgIJAMiIrEm29kRLMA0GCSqGSIb3DQEBCwUAMHkxCzAJBgNV
... more lines
VWQqljhfacYPgp8KJUJENQ9h5hZ2nSCrI+W00Jcw4QcEdCI8HL5wmg==
-----END PRIVATE KEY-----
```
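A quick way to tell the two key formats apart is to inspect the first line of the file. The snippet below is a sketch: it writes a sample PKCS8 header to a placeholder `key.pem` purely for illustration:

```shell
# Create a sample key file with a PKCS8 header (illustration only)
printf -- '-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n' > key.pem

# The first line identifies the format:
#   "-----BEGIN RSA PRIVATE KEY-----" -> PKCS1, usable by Rancher as-is
#   "-----BEGIN PRIVATE KEY-----"     -> PKCS8, convert it first
head -n 1 key.pem
```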
## Converting a Certificate Key From PKCS8 to PKCS1

If you are using a PKCS8 certificate key file, Rancher will log the following line:

```
ListenConfigController cli-config [listener] failed with : failed to read private key: asn1: structure error: tags don't match (2 vs {class:0 tag:16 length:13 isCompound:true})
```

To resolve this, convert the key from PKCS8 to PKCS1 using the command below:

```
openssl rsa -in key.pem -out convertedkey.pem
```

You can now use `convertedkey.pem` as the certificate key file for Rancher.
## What is the Order of Certificates if I Want to Add My Intermediate(s)?

The order of adding certificates is as follows:

```
-----BEGIN CERTIFICATE-----
%YOUR_CERTIFICATE%
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
%YOUR_INTERMEDIATE_CERTIFICATE%
-----END CERTIFICATE-----
```
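Assuming the server and intermediate certificates arrive as separate PEM files, you can build a correctly ordered chain by concatenating them, server certificate first. The file names below are placeholders, and the `printf` lines only create illustrative stand-ins for real certificates:

```shell
# Placeholder inputs: your server certificate and the CA's intermediate
printf -- '-----BEGIN CERTIFICATE-----\n%%YOUR_CERTIFICATE%%\n-----END CERTIFICATE-----\n' > cert.pem
printf -- '-----BEGIN CERTIFICATE-----\n%%YOUR_INTERMEDIATE_CERTIFICATE%%\n-----END CERTIFICATE-----\n' > intermediate.pem

# Server certificate first, then the intermediate(s), in order
cat cert.pem intermediate.pem > fullchain.pem
```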
## How Do I Validate My Certificate Chain?

You can validate the certificate chain by using the `openssl` binary. If the output of the command (see the command example below) ends with `Verify return code: 0 (ok)`, your certificate chain is valid. The `ca.pem` file must be the same file that you added to the `rancher/rancher` container.

When using a certificate signed by a recognized Certificate Authority, you can omit the `-CAfile` parameter.

Command:

```
openssl s_client -CAfile ca.pem -connect rancher.yourdomain.com:443
...
Verify return code: 0 (ok)
```
---
title: Installing Rancher on a Single Node Using Docker
description: For development and testing environments only, use a Docker install. Install Docker on a single Linux host, and deploy Rancher with a single Docker container.
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker"/>
</head>

<DockerSupportWarning />

Rancher can be installed by running a single Docker container.

In this installation scenario, you'll install Docker on a single Linux host, and then deploy Rancher on your host using a single Docker container.

:::note Want to use an external load balancer?

See [Docker Install with an External Load Balancer](../../../../how-to-guides/advanced-user-guides/configure-layer-7-nginx-load-balancer.md) instead.

:::

A Docker installation of Rancher is recommended only for development and testing purposes. If you later need high availability, the Rancher backup operator can be used to migrate Rancher from the single Docker container install to an installation on a high-availability Kubernetes cluster. For details, refer to the documentation on [migrating Rancher to a new cluster.](../../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md)

## Privileged Access for Rancher

When the Rancher server is deployed in the Docker container, a local Kubernetes cluster is installed within the container for Rancher to use. Because many features of Rancher run as deployments, and privileged mode is required to run containers within containers, you will need to install Rancher with the `--privileged` option.
## Requirements for OS, Docker, Hardware, and Networking

Make sure that your node fulfills the general [installation requirements.](../../installation-requirements/installation-requirements.md)

## 1. Provision Linux Host

Provision a single Linux host according to our [Requirements](../../installation-requirements/installation-requirements.md) to launch your Rancher server.

## 2. Choose an SSL Option and Install Rancher

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.

:::tip Do you want to...

- Use a proxy? See [HTTP Proxy Configuration](../../../../reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md)
- Configure a custom CA root certificate to access your services? See [Custom CA root certificate](../../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#custom-ca-certificate)
- Complete an air gap installation? See [Air Gap: Docker Install](../air-gapped-helm-cli-install/air-gapped-helm-cli-install.md)
- Record all transactions with the Rancher API? See [API Auditing](../../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#api-audit-log)

:::

Choose from the following options:
- [Option A: Default Rancher-generated Self-signed Certificate](#option-a-default-rancher-generated-self-signed-certificate)
- [Option B: Bring Your Own Certificate, Self-signed](#option-b-bring-your-own-certificate-self-signed)
- [Option C: Bring Your Own Certificate, Signed by a Recognized CA](#option-c-bring-your-own-certificate-signed-by-a-recognized-ca)
- [Option D: Let's Encrypt Certificate](#option-d-lets-encrypt-certificate)
- [Option E: Localhost tunneling, no Certificate](#option-e-localhost-tunneling-no-certificate)

### Option A: Default Rancher-generated Self-signed Certificate

If you are installing Rancher in a development or testing environment where identity verification isn't a concern, install Rancher using the self-signed certificate that it generates. This installation option spares you the hassle of generating a certificate yourself.

Log into your host, and run the command below:

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest
```
### Option B: Bring Your Own Certificate, Self-signed

In development or testing environments where your team will access your Rancher server, create a self-signed certificate for use with your install so that your team can verify they're connecting to your instance of Rancher.

:::note Prerequisites:

Create a self-signed certificate using [OpenSSL](https://www.openssl.org/) or another method of your choice.

- The certificate files must be in PEM format.
- In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](certificate-troubleshooting.md)

:::
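If you still need to create the certificate, the OpenSSL command below is one way to do it. This is a sketch: the hostname `rancher.example.com`, key size, and validity period are placeholder choices, not requirements (the `-addext` flag requires OpenSSL 1.1.1 or later):

```shell
# Generate a 2048-bit RSA key and a self-signed certificate valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=rancher.example.com" \
  -addext "subjectAltName=DNS:rancher.example.com"
```

Note that recent OpenSSL releases write the key in PKCS8 format; if Rancher rejects it, see [Certificate Troubleshooting.](certificate-troubleshooting.md)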
After creating your certificate, run the Docker command below to install Rancher. Use the `-v` flag and provide the path to your certificates to mount them in your container.

| Placeholder         | Description           |
| ------------------- | --------------------- |
| `<CERT_DIRECTORY>`  | The path to the directory containing your certificate files. |
| `<FULL_CHAIN.pem>`  | The path to your full certificate chain. |
| `<PRIVATE_KEY.pem>` | The path to the private key for your certificate. |
| `<CA_CERTS.pem>`    | The path to the certificate authority's certificate. |

Log into your host, and run the command below:

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  -v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
  --privileged \
  rancher/rancher:latest
```
### Option C: Bring Your Own Certificate, Signed by a Recognized CA

In production environments where you're exposing an app publicly, use a certificate signed by a recognized CA so that your user base doesn't encounter security warnings.

The Docker install is not recommended for production. These instructions are provided for testing and development purposes only.

:::note Prerequisites:

- The certificate files must be in PEM format.
- In your certificate file, include all intermediate certificates provided by the recognized CA. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](certificate-troubleshooting.md)

:::

After obtaining your certificate, run the Docker command below.

- Use the `-v` flag and provide the path to your certificates to mount them in your container. Because your certificate is signed by a recognized CA, mounting an additional CA certificate file is unnecessary.
- Use `--no-cacerts` as an argument to the container to disable the default CA certificate generated by Rancher.

| Placeholder         | Description                   |
| ------------------- | ----------------------------- |
| `<CERT_DIRECTORY>`  | The path to the directory containing your certificate files. |
| `<FULL_CHAIN.pem>`  | The path to your full certificate chain. |
| `<PRIVATE_KEY.pem>` | The path to the private key for your certificate. |

Log into your host, and run the command below:

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  --privileged \
  rancher/rancher:latest \
  --no-cacerts
```
### Option D: Let's Encrypt Certificate

:::caution

Let's Encrypt enforces rate limits for requesting new certificates. Therefore, limit how often you create or destroy the container. For more information, see the [Let's Encrypt documentation on rate limits](https://letsencrypt.org/docs/rate-limits/).

:::

For production environments, you also have the option of using [Let's Encrypt](https://letsencrypt.org/) certificates. Let's Encrypt uses an http-01 challenge to verify that you have control over your domain. You can confirm that you control the domain by pointing the hostname that you want to use for Rancher access (for example, `rancher.mydomain.com`) to the IP of the machine it is running on. You can bind the hostname to the IP address by creating an A record in DNS.

The Docker install is not recommended for production. These instructions are provided for testing and development purposes only.

:::note Prerequisites:

- Let's Encrypt is an Internet service. Therefore, this option cannot be used in an internal/air gapped network.
- Create a record in your DNS that binds your Linux host IP address to the hostname that you want to use for Rancher access (`rancher.mydomain.com` for example).
- Open port `TCP/80` on your Linux host. The Let's Encrypt http-01 challenge can come from any source IP address, so port `TCP/80` must be open to all IP addresses.

:::

After you fulfill the prerequisites, you can install Rancher using a Let's Encrypt certificate by running the following command.

| Placeholder       | Description         |
| ----------------- | ------------------- |
| `<YOUR.DNS.NAME>` | Your domain address |

Log into your host, and run the command below:

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest \
  --acme-domain <YOUR.DNS.NAME>
```
### Option E: Localhost tunneling, no Certificate

If you are installing Rancher in a development or testing environment where you have a localhost tunneling solution running, such as [ngrok](https://ngrok.com/), avoid generating a certificate. This installation option doesn't require a certificate.

- Use `--no-cacerts` as an argument to the container to disable the default CA certificate generated by Rancher.

Log into your host, and run the command below:

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest \
  --no-cacerts
```

## Advanced Options

When installing Rancher on a single node with Docker, there are several advanced options that can be enabled:

- Custom CA Certificate
- API Audit Log
- TLS Settings
- Air Gap
- Persistent Data
- Running `rancher/rancher` and `rancher/rancher-agent` on the Same Node

Refer to [this page](../../../../reference-guides/single-node-rancher-in-docker/advanced-options.md) for details.

## Troubleshooting

Refer to [this page](certificate-troubleshooting.md) for frequently asked questions and troubleshooting tips.

## What's Next?

- **Recommended:** Review Single Node [Backup](../../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-docker-installed-rancher.md) and [Restore](../../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-docker-installed-rancher.md). Although you don't have any data you need to back up right now, we recommend creating backups after regular Rancher use.
- Create a Kubernetes cluster: [Provisioning Kubernetes Clusters](../../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md).
---
title: Rolling Back Rancher Installed with Docker
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/roll-back-docker-installed-rancher"/>
</head>

<DockerSupportWarning />

If a Rancher upgrade does not complete successfully, you'll have to roll back to the Rancher setup that you were using before the [Docker upgrade](upgrade-docker-installed-rancher.md). Rolling back restores:

- Your previous version of Rancher.
- Your data backup created before upgrade.

## Before You Start

During rollback to a prior version of Rancher, you'll enter a series of commands, filling placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<EXAMPLE>`). Here's an example of a command with a placeholder:

```
docker pull rancher/rancher:<PRIOR_RANCHER_VERSION>
```

In this command, `<PRIOR_RANCHER_VERSION>` is the version of Rancher you were running before your unsuccessful upgrade, for example, `v2.0.5`.

Cross-reference the reference table below to learn how to obtain this placeholder data. Write down or copy this information before starting the procedure below.
<sup>Terminal <code>docker ps</code> Command, Displaying Where to Find <code>&lt;PRIOR_RANCHER_VERSION&gt;</code> and <code>&lt;RANCHER_CONTAINER_NAME&gt;</code></sup>

| Placeholder                | Example           | Description                                             |
| -------------------------- | ----------------- | ------------------------------------------------------- |
| `<PRIOR_RANCHER_VERSION>`  | `v2.0.5`          | The rancher/rancher image you used before upgrade.      |
| `<RANCHER_CONTAINER_NAME>` | `festive_mestorf` | The name of your Rancher container.                     |
| `<RANCHER_VERSION>`        | `v2.0.5`          | The version of Rancher that the backup is for.          |
| `<DATE>`                   | `9-27-18`         | The date that the data container or backup was created. |

<br/>

You can obtain `<PRIOR_RANCHER_VERSION>` and `<RANCHER_CONTAINER_NAME>` by logging into your Rancher Server by remote connection and entering the command to view the containers that are running: `docker ps`. You can also view stopped containers with `docker ps -a`. Use these commands for help at any time during the rollback.
## Rolling Back Rancher

If you have issues upgrading Rancher, roll it back to its last known healthy state by pulling the last version you used and then restoring the backup you made before upgrade.

:::danger

Rolling back to a previous version of Rancher destroys any changes made to Rancher following the upgrade. Unrecoverable data loss may occur.

:::

1. Using a remote Terminal connection, log into the node running your Rancher Server.

1. Pull the version of Rancher that you were running before upgrade. Replace `<PRIOR_RANCHER_VERSION>` with that version.

   For example, if you were running Rancher v2.0.5 before upgrade, pull v2.0.5.

   ```
   docker pull rancher/rancher:<PRIOR_RANCHER_VERSION>
   ```

1. Stop the container currently running Rancher Server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your Rancher container.

   ```
   docker stop <RANCHER_CONTAINER_NAME>
   ```

   You can obtain the name for your Rancher container by entering `docker ps`.

1. Move the backup tarball that you created during completion of the [Docker upgrade](upgrade-docker-installed-rancher.md) onto your Rancher Server. Change to the directory that you moved it to, and enter `dir` to confirm that it's there.

   If you followed the naming convention we suggested in the [Docker upgrade](upgrade-docker-installed-rancher.md), it will have a name similar to `rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`.
1. Run the following command to replace the data in the `rancher-data` container with the data in the backup tarball, replacing the placeholders. Don't forget to close the quotes.

   ```
   docker run --volumes-from rancher-data \
     -v $PWD:/backup busybox sh -c "rm /var/lib/rancher/* -rf \
     && tar zxvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz"
   ```

1. Start a new Rancher Server container using the `<PRIOR_RANCHER_VERSION>` tag, pointing to the data container.

   ```
   docker run -d --volumes-from rancher-data \
     --restart=unless-stopped \
     -p 80:80 -p 443:443 \
     --privileged \
     rancher/rancher:<PRIOR_RANCHER_VERSION>
   ```

   Privileged access is [required.](rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher)

   :::danger

   **_Do not_** stop the rollback after initiating it, even if the rollback process seems longer than expected. Stopping the rollback may result in database issues during future upgrades.

   :::

1. Wait a few moments and then open Rancher in a web browser. Confirm that the rollback succeeded and that your data is restored.

**Result:** Rancher is rolled back to the version and data state it had before the upgrade.
---
title: Upgrading Rancher Installed with Docker
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/upgrade-docker-installed-rancher"/>
</head>

<DockerSupportWarning />

The following instructions will guide you through upgrading a Rancher server that was installed with Docker.

## Prerequisites

- **Review the [known upgrade issues](../../install-upgrade-on-a-kubernetes-cluster/upgrades.md#known-upgrade-issues)** section in the Rancher documentation for the most noteworthy issues to consider when upgrading Rancher. A more complete list of known issues for each Rancher version can be found in the release notes on [GitHub](https://github.com/rancher/rancher/releases) and on the [Rancher forums](https://forums.rancher.com/c/announcements/12). Note that upgrades to or from any chart in the [rancher-alpha repository](../../resources/choose-a-rancher-version.md#helm-chart-repositories) aren't supported.
- **For [air gap installs only,](../air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) collect and populate images for the new Rancher server version**. Follow the guide to [populate your private registry](../air-gapped-helm-cli-install/publish-images.md) with the images for the Rancher version that you want to upgrade to.

## Placeholder Review

During upgrade, you'll enter a series of commands, filling placeholders with data from your environment. These placeholders are denoted with angled brackets and all capital letters (`<EXAMPLE>`).

Here's an **example** of a command with a placeholder:

```
docker stop <RANCHER_CONTAINER_NAME>
```

In this command, `<RANCHER_CONTAINER_NAME>` is the name of your Rancher container.
## Get Data for Upgrade Commands

To obtain the data to replace the placeholders, run:

```
docker ps
```

Write down or copy this information before starting the upgrade.

<sup>Terminal <code>docker ps</code> Command, Displaying Where to Find <code>&lt;RANCHER_CONTAINER_TAG&gt;</code> and <code>&lt;RANCHER_CONTAINER_NAME&gt;</code></sup>

![Placeholder Reference](/img/placeholder-ref-2.png)

| Placeholder                | Example           | Description                                               |
| -------------------------- | ----------------- | --------------------------------------------------------- |
| `<RANCHER_CONTAINER_TAG>`  | `v2.1.3`          | The rancher/rancher image you pulled for initial install. |
| `<RANCHER_CONTAINER_NAME>` | `festive_mestorf` | The name of your Rancher container.                       |
| `<RANCHER_VERSION>`        | `v2.1.3`          | The version of Rancher that you're creating a backup for. |
| `<DATE>`                   | `2018-12-19`      | The date that the data container or backup was created.   |

<br/>

You can obtain `<RANCHER_CONTAINER_TAG>` and `<RANCHER_CONTAINER_NAME>` by logging into your Rancher server by remote connection and entering the command to view the containers that are running: `docker ps`. You can also view stopped containers with `docker ps -a`. Use these commands for help at any time while creating backups.
## Upgrade

:::danger

Rancher upgrades to version 2.12.0 and later will be blocked if any RKE1-related resources are detected, as the Rancher Kubernetes Engine (RKE/RKE1) is end of life as of **July 31, 2025**. For detailed cleanup and recovery steps, refer to the [RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12](#rke1-resource-validation-and-upgrade-requirements-in-rancher-v212).

:::

During upgrade, you create a copy of the data from your current Rancher container and a backup in case something goes wrong. Then you deploy the new version of Rancher in a new container using your existing data.
### 1. Create a copy of the data from your Rancher server container

1. Using a remote Terminal connection, log into the node running your Rancher server.

1. Stop the container currently running Rancher server. Replace `<RANCHER_CONTAINER_NAME>` with the name of your Rancher container.

   ```
   docker stop <RANCHER_CONTAINER_NAME>
   ```

1. <a id="backup"></a>Use the command below, replacing each placeholder, to create a data container from the Rancher container that you just stopped.

   ```
   docker create --volumes-from <RANCHER_CONTAINER_NAME> --name rancher-data rancher/rancher:<RANCHER_CONTAINER_TAG>
   ```
### 2. Create a backup tarball

1. <a id="tarball"></a>From the data container that you just created (`rancher-data`), create a backup tarball (`rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`).

   This tarball will serve as a rollback point if something goes wrong during upgrade. Use the following command, replacing each placeholder.

   ```
   docker run --volumes-from rancher-data -v "$PWD:/backup" --rm busybox tar zcvf /backup/rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz /var/lib/rancher
   ```

   **Step Result:** When you enter this command, a series of commands should run.

1. Enter the `ls` command to confirm that the backup tarball was created. It will have a name similar to `rancher-data-backup-<RANCHER_VERSION>-<DATE>.tar.gz`.

   ```
   [rancher@ip-10-0-0-50 ~]$ ls
   rancher-data-backup-v2.1.3-20181219.tar.gz
   ```

1. Move your backup tarball to a safe location external from your Rancher server.
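Before relying on the tarball as a rollback point, you may want to confirm that it is readable and actually contains the Rancher data. A sketch, using an example filename:

```shell
# List the archive contents without extracting; a valid backup
# contains paths under var/lib/rancher
tar -tzf rancher-data-backup-v2.1.3-2018-12-19.tar.gz | head
```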
### 3. Pull the New Docker Image

Pull the image of the Rancher version that you want to upgrade to.

Placeholder | Description
------------|-------------
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](../../installation-references/helm-chart-options.md) that you want to upgrade to.

```
docker pull rancher/rancher:<RANCHER_VERSION_TAG>
```
### 4. Start the New Rancher Server Container

Start a new Rancher server container using the data from the `rancher-data` container. Remember to pass in all the environment variables that you had used when you started the original container.

:::danger

**_Do not_** stop the upgrade after initiating it, even if the upgrade process seems longer than expected. Stopping the upgrade may result in database migration errors during future upgrades.

:::

If you used a proxy, see [HTTP Proxy Configuration.](../../../../reference-guides/single-node-rancher-in-docker/http-proxy-configuration.md)

If you configured a custom CA root certificate to access your services, see [Custom CA root certificate.](../../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#custom-ca-certificate)

If you are recording all transactions with the Rancher API, see [API Auditing.](../../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#api-audit-log)

To see the command to use when starting the new Rancher server container, choose from the following options:

- Docker Upgrade
- Docker Upgrade for Air Gap Installs

<Tabs>
<TabItem value="Docker Upgrade">

Select the option you used to install Rancher server:
#### Option A: Default Self-Signed Certificate

<details id="option-a">
<summary>Click to expand</summary>

If you selected the Rancher-generated self-signed certificate, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container.

Placeholder | Description
------------|-------------
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](../../installation-references/helm-chart-options.md) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:<RANCHER_VERSION_TAG>
```

Privileged access is [required.](rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher)

</details>
#### Option B: Bring Your Own Certificate: Self-Signed

<details id="option-b">
<summary>Click to expand</summary>

If you selected to bring your own self-signed certificate, add `--volumes-from rancher-data` to the command that you used to start your original Rancher server container. You also need access to the same certificate that you originally installed with.

:::note Reminder of the Cert Prerequisite:

The certificate files must be in PEM format. In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates.

:::

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<CA_CERTS.pem>` | The path to the certificate authority's certificate.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](../../installation-references/helm-chart-options.md) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  -v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
  --privileged \
  rancher/rancher:<RANCHER_VERSION_TAG>
```

Privileged access is [required.](rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher)

</details>
|
||||
|
||||
#### Option C: Bring Your Own Certificate: Signed by Recognized CA

<details id="option-c">
<summary>Click to expand</summary>

If you chose a certificate signed by a recognized CA, add `--volumes-from rancher-data` to the command you used to start your original Rancher server container. You also need access to the same certificates that you originally installed with. Remember to include `--no-cacerts` as an argument to the container to disable the default CA certificate generated by Rancher.

:::note Reminder of the Cert Prerequisite:

The certificate files must be in PEM format. In your certificate file, include all intermediate certificates provided by the recognized CA. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](certificate-troubleshooting.md)

:::

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](../../installation-references/helm-chart-options.md) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  --privileged \
  rancher/rancher:<RANCHER_VERSION_TAG> \
  --no-cacerts
```

Privileged access is [required.](rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher)

</details>
#### Option D: Let's Encrypt Certificate

<details id="option-d">
<summary>Click to expand</summary>

:::caution

Let's Encrypt imposes rate limits on requests for new certificates, so limit how often you create or destroy the container. For more information, see the [Let's Encrypt documentation on rate limits](https://letsencrypt.org/docs/rate-limits/).

:::

If you chose [Let's Encrypt](https://letsencrypt.org/) certificates, add `--volumes-from rancher-data` to the command you used to start your original Rancher server container, and provide the same domain that you used when you originally installed Rancher.

:::note Reminder of the Cert Prerequisites:

- Create a record in your DNS that binds your Linux host IP address to the hostname that you want to use for Rancher access (`rancher.mydomain.com` for example).
- Open port `TCP/80` on your Linux host. The Let's Encrypt http-01 challenge can come from any source IP address, so port `TCP/80` must be open to all IP addresses.

:::

Placeholder | Description
------------|-------------
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](../../installation-references/helm-chart-options.md) that you want to upgrade to.
`<YOUR.DNS.NAME>` | The domain address that you originally started Rancher with.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:<RANCHER_VERSION_TAG> \
  --acme-domain <YOUR.DNS.NAME>
```

Privileged access is [required.](rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher)

</details>

</TabItem>
<TabItem value="Docker Air Gap Upgrade">

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.

When starting the new Rancher server container, choose from the following options:

#### Option A: Default Self-Signed Certificate

<details id="option-a">
<summary>Click to expand</summary>

If you chose the Rancher-generated self-signed certificate, add `--volumes-from rancher-data` to the command you used to start your original Rancher server container.

Placeholder | Description
------------|-------------
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](../../installation-references/helm-chart-options.md) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Set a default private registry to be used in Rancher
  -e CATTLE_SYSTEM_CATALOG=bundled \ # Use the packaged Rancher system charts
  --privileged \
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

Privileged access is [required.](rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher)

</details>
#### Option B: Bring Your Own Certificate: Self-Signed

<details id="option-b">
<summary>Click to expand</summary>

If you chose to bring your own self-signed certificate, add `--volumes-from rancher-data` to the command you used to start your original Rancher server container. You also need access to the same certificate that you originally installed with.

:::note Reminder of the Cert Prerequisite:

The certificate files must be in PEM format. In your certificate file, include all intermediate certificates in the chain. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](certificate-troubleshooting.md)

:::

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<CA_CERTS.pem>` | The path to the certificate authority's certificate.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](../../installation-references/helm-chart-options.md) that you want to upgrade to.

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  -v /<CERT_DIRECTORY>/<CA_CERTS.pem>:/etc/rancher/ssl/cacerts.pem \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Set a default private registry to be used in Rancher
  -e CATTLE_SYSTEM_CATALOG=bundled \ # Use the packaged Rancher system charts
  --privileged \
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```

Privileged access is [required.](rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher)

</details>
#### Option C: Bring Your Own Certificate: Signed by Recognized CA

<details id="option-c">
<summary>Click to expand</summary>

If you chose a certificate signed by a recognized CA, add `--volumes-from rancher-data` to the command you used to start your original Rancher server container. You also need access to the same certificates that you originally installed with.

:::note Reminder of the Cert Prerequisite:

The certificate files must be in PEM format. In your certificate file, include all intermediate certificates provided by the recognized CA. Order your certificates with your certificate first, followed by the intermediates. For an example, see [Certificate Troubleshooting.](certificate-troubleshooting.md)

:::

Placeholder | Description
------------|-------------
`<CERT_DIRECTORY>` | The path to the directory containing your certificate files.
`<FULL_CHAIN.pem>` | The path to your full certificate chain.
`<PRIVATE_KEY.pem>` | The path to the private key for your certificate.
`<REGISTRY.YOURDOMAIN.COM:PORT>` | Your private registry URL and port.
`<RANCHER_VERSION_TAG>` | The release tag of the [Rancher version](../../installation-references/helm-chart-options.md) that you want to upgrade to.

:::note

Pass `--no-cacerts` as an argument to the container to disable the default CA certificate generated by Rancher.

:::

```
docker run -d --volumes-from rancher-data \
  --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  -e CATTLE_SYSTEM_DEFAULT_REGISTRY=<REGISTRY.YOURDOMAIN.COM:PORT> \ # Set a default private registry to be used in Rancher
  -e CATTLE_SYSTEM_CATALOG=bundled \ # Use the packaged Rancher system charts
  --privileged \
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG> \
  --no-cacerts
```

Privileged access is [required.](rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher)

</details>

</TabItem>
</Tabs>
**Result:** You have upgraded Rancher. Data from your upgraded server is now saved to the `rancher-data` container for use in future upgrades.

### 5. Verify the Upgrade

Log into Rancher. Confirm that the upgrade succeeded by checking the version displayed in the bottom-left corner of the browser window.

:::note Having network issues in your user clusters following upgrade?

See [Restoring Cluster Networking](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.0-2.4/getting-started/installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/upgrades/namespace-migration.md).

:::

### 6. Clean up Your Old Rancher Server Container

Remove the previous Rancher server container. If you only stop the previous Rancher server container (and don't remove it), the container may restart after the next server reboot.
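The cleanup uses standard Docker commands. As a sketch, assuming the old container is named `rancher-server` (substitute the actual container name or ID from `docker ps -a`):

```shell
# Stop the old Rancher server container, then remove it so that it
# cannot be restarted after the next server reboot.
docker stop rancher-server
docker rm rancher-server
```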
## RKE1 Resource Validation and Upgrade Requirements in Rancher v2.12

Rancher v2.12.0 and later has removed support for the Rancher Kubernetes Engine (RKE/RKE1). During upgrade, Rancher validates the cluster resources and blocks the upgrade if any RKE1-related resources are detected.

This validation affects the following resource types:

- Clusters with `rkeConfig` (`clusters.management.cattle.io`)
- NodeTemplates (`nodetemplates.management.cattle.io`)
- ClusterTemplates (`clustertemplates.management.cattle.io`)

This is particularly relevant for single-node Docker installations, where Rancher is not running during the upgrade. In such cases, controllers are not available to automatically clean up deprecated resources, and the upgrade process will fail early with an error listing the blocking resources.

### 1. Pre-Upgrade (Recommended)

Before upgrading, while Rancher is still running:

- Run the `pre-upgrade-hook` cleanup script to delete all RKE1 clusters and templates. You can find the script in the Rancher GitHub repository: [pre-upgrade-hook.sh](https://github.com/rancher/rancher/blob/v2.12.0/chart/scripts/pre-upgrade-hook.sh).
- This allows Rancher to clean up associated resources and finalizers.
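Before (or instead of) running the script, you can check whether any blocking RKE1 resources exist. This is a sketch that assumes `kubectl` is pointed at the cluster where Rancher runs, using the resource types listed above:

```shell
# Any NodeTemplates or ClusterTemplates block the upgrade.
kubectl get nodetemplates.management.cattle.io --all-namespaces
kubectl get clustertemplates.management.cattle.io --all-namespaces

# List management clusters; inspect any suspect cluster for an RKE config.
kubectl get clusters.management.cattle.io -o name
```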
### 2. Post-Upgrade Failure Due to Residual RKE1 Resources

If the upgrade to Rancher v2.12.0 or later is attempted without prior cleanup of RKE1 resources:

- The upgrade will fail and display an error listing the resource names that are preventing the upgrade.
- This occurs because Rancher includes validation to detect and block upgrades when unsupported RKE1 resources are still present.
- To proceed, [roll back](#rolling-back) to the previous Rancher version, delete the identified resources, and then retry after [manual cleanup](#manual-cleanup-after-rollback).

:::note Helm-based Rancher

Helm-based Rancher installations are not affected by this issue, as Rancher remains available during the upgrade and can perform resource cleanup as needed.

:::

### Manual Cleanup After Rollback

Perform the following steps after rolling back to a previous Rancher version:

- **Manually delete** the resources listed in the upgrade error message (e.g., RKE1 clusters, NodeTemplates, ClusterTemplates).
- If deletion is blocked due to **finalizers**, edit the resources and remove the `metadata.finalizers` field.
- If a **validating webhook** prevents deletion (e.g., for the `system-project`), refer to the [Bypassing the Webhook](../../../../reference-guides/rancher-webhook.md#bypassing-the-webhook) documentation.
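The delete and finalizer steps can be sketched with `kubectl`. The resource name and namespace below are hypothetical placeholders; use the names from your upgrade error message:

```shell
# Delete a leftover RKE1 node template reported by the upgrade check.
kubectl delete nodetemplates.management.cattle.io my-node-template -n my-namespace

# If deletion hangs on finalizers, clear metadata.finalizers directly.
kubectl patch nodetemplates.management.cattle.io my-node-template -n my-namespace \
  --type=merge -p '{"metadata":{"finalizers":[]}}'
```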
## Rolling Back

If your upgrade does not complete successfully, you can roll back the Rancher server and its data to the last healthy state. For more information, see [Docker Rollback](roll-back-docker-installed-rancher.md).

---
title: Adding TLS Secrets
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/add-tls-secrets"/>
</head>

Kubernetes creates all the objects and services for Rancher, but Rancher does not become available until you populate the `tls-rancher-ingress` secret in the `cattle-system` namespace with the certificate and key.

Combine the server certificate, followed by any intermediate certificate(s) needed, into a file named `tls.crt`. Copy your certificate key into a file named `tls.key`.
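As a sketch of that combination step, assuming hypothetical input files `server.crt`, `intermediate.crt`, and `server.key` from your CA:

```shell
# Server certificate first, then the intermediate(s), into tls.crt.
cat server.crt intermediate.crt > tls.crt

# The private key goes into tls.key unchanged.
cp server.key tls.key
```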
For example, [acme.sh](https://acme.sh) provides the server certificate and CA chain in a `fullchain.cer` file. Rename `fullchain.cer` to `tls.crt`, and rename the certificate key file to `tls.key`.

Use `kubectl` with the `tls` secret type to create the secret:

```
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```

:::note

If you want to replace the certificate, you can delete the `tls-rancher-ingress` secret using `kubectl -n cattle-system delete secret tls-rancher-ingress` and add a new one using the command shown above. If you are using a private CA signed certificate, replacing the certificate is only possible if the new certificate is signed by the same CA as the certificate currently in use.

:::

## Using a Private CA Signed Certificate

If you are using a private CA, Rancher requires a copy of the private CA's root certificate or certificate chain, which the Rancher Agent uses to validate the connection to the server.

Create a file named `cacerts.pem` that contains only the root CA certificate or certificate chain from your private CA, and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace.

```
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem
```

:::note

The configured `tls-ca` secret is retrieved when Rancher starts. On a running Rancher installation, the updated CA takes effect only after new Rancher pods are started.

The certificate chain must be properly formatted, or components may fail to download resources from the Rancher server.

:::
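An updated `tls-ca` secret only takes effect once new Rancher pods start. Assuming the standard Rancher deployment name in `cattle-system`, one way to trigger that is a rollout restart:

```shell
# Recreate the Rancher pods so they pick up the updated tls-ca secret.
kubectl -n cattle-system rollout restart deploy/rancher
```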
## Adding Additional CA Certificates

If you are using a node driver that makes API requests with a different CA than the one configured for Rancher, you can add additional root certificates and certificate chains.

Create a unique file ending in `.pem` for each certificate that is required, and use `kubectl` to create the `tls-additional` secret in the `cattle-system` namespace.

```console
kubectl -n cattle-system create secret generic tls-additional \
  --from-file=cacerts1.pem=cacerts1.pem --from-file=cacerts2.pem=cacerts2.pem
```

Rancher mounts these CA root certificates and certificate chains into the node driver pod during provisioning.

## Updating a Private CA Certificate

Follow the steps on [this page](update-rancher-certificate.md) to update the SSL certificate of the ingress in a Rancher [high availability Kubernetes installation](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md) or to switch from the default self-signed certificate to a custom certificate.
---
title: Setting up the Bootstrap Password
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/bootstrap-password"/>
</head>

When you install Rancher, you can set a bootstrap password for the first admin account.

If you choose not to set a bootstrap password, Rancher randomly generates one for the first admin account.

For details on how to set the bootstrap password, see below.

## Password Requirements

The bootstrap password can be any length.

When you reset the first admin account's password after first login, the new password must be at least 12 characters long.

You can [customize the minimum password length](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/manage-users-and-groups.md#minimum-password-length) for user accounts, within limitations.

The minimum password length can be any positive integer between 2 and 256. Decimal values and leading zeroes are not allowed.
## Specifying the Bootstrap Password

<Tabs>
<TabItem value="Helm">

During [Rancher installation](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md), set `bootstrapPassword` alongside any other flags for the Rancher Helm chart. For example:

```bash
helm install rancher rancher-<chart-repo>/rancher \
  --set bootstrapPassword=<password>
```

</TabItem>
<TabItem value="Docker">

Pass the following value to the [Docker install command](../other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md):

```bash
-e CATTLE_BOOTSTRAP_PASSWORD=<password>
```

</TabItem>
</Tabs>

## Retrieving the Bootstrap Password

For Docker installs, the bootstrap password is stored in the container logs. After Rancher is installed, the UI shows instructions for how to retrieve the password based on your installation method.

<Tabs>
<TabItem value="Helm">

```bash
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{ .data.bootstrapPassword|base64decode}}{{ "\n" }}'
```

</TabItem>
<TabItem value="Docker">

```bash
docker logs container-id 2>&1 | grep "Bootstrap Password:"
```

</TabItem>
</Tabs>
---
title: Choosing a Rancher Version
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/choose-a-rancher-version"/>
</head>

This section describes how to choose a Rancher version.

For a high-availability installation of Rancher, which is recommended for production, the Rancher server is installed using a **Helm chart** on a Kubernetes cluster. Refer to the [Helm version requirements](helm-version-requirements.md) to choose a version of Helm to install Rancher.

For Docker installations of Rancher, which are used for development and testing, you install Rancher as a **Docker image**.

<Tabs>
<TabItem value="Helm Charts">

When you install, upgrade, or roll back Rancher server [installed on a Kubernetes cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md), it is installed using a Helm chart. Therefore, as you prepare to install or upgrade a high-availability Rancher configuration, you must add a Helm chart repository that contains the charts for installing Rancher.

Refer to the [Helm version requirements](helm-version-requirements.md) to choose a version of Helm to install Rancher.

### Helm Chart Repositories

Rancher provides several different Helm chart repositories to choose from. We align our latest and stable Helm chart repositories with the Docker tags that are used for a Docker installation. Therefore, the `rancher-latest` repository contains charts for all the Rancher versions that have been tagged as `rancher/rancher:latest`. When a Rancher version is promoted to `rancher/rancher:stable`, it gets added to the `rancher-stable` repository.

| Type | Command to Add the Repo | Description of the Repo |
| -------------- | ------------ | ----------------- |
| rancher-latest | `helm repo add rancher-latest https://releases.rancher.com/server-charts/latest` | Adds a repository of Helm charts for the latest versions of Rancher. We recommend using this repo for testing out new Rancher builds. |
| rancher-stable | `helm repo add rancher-stable https://releases.rancher.com/server-charts/stable` | Adds a repository of Helm charts for older, stable versions of Rancher. We recommend using this repo for production environments. |
| rancher-alpha | `helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha` | Adds a repository of Helm charts for alpha versions of Rancher for previewing upcoming releases. These releases are discouraged in production environments. Upgrades _to_ or _from_ charts in the rancher-alpha repository to any other chart, regardless of repository, aren't supported. |

Instructions on when to select these repos are available below in [Switching to a Different Helm Chart Repository](#switching-to-a-different-helm-chart-repository).

:::note

All charts in the `rancher-stable` repository correspond with Rancher versions tagged as `stable`.

:::
### Helm Chart Versions

Rancher Helm chart versions match the Rancher version (i.e., the `appVersion`). Once you've added the repo, you can list the available versions with the following command:

`helm search repo --versions`

If you have several repos, you can specify the repo name, e.g., `helm search repo rancher-stable/rancher --versions`.<br/>
For more information, see https://helm.sh/docs/helm/helm_search_repo/

To fetch a specific version from your chosen repo, pass the `--version` parameter as in the following example:<br/>
`helm fetch rancher-stable/rancher --version=2.4.8`

### Switching to a Different Helm Chart Repository

After installing Rancher, if you want to change which Helm chart repository to install Rancher from, follow these steps.

:::note

Because the rancher-alpha repository contains only alpha charts, switching between the rancher-alpha repository and the rancher-stable or rancher-latest repository for upgrades is not supported.

:::

- Latest: Recommended for trying out the newest features

  ```
  helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
  ```

- Stable: Recommended for production environments

  ```
  helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
  ```

- Alpha: Experimental preview of upcoming releases. Upgrades are not supported to, from, or between alphas.

  ```
  helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha
  ```
1. List the current Helm chart repositories.

   ```plain
   helm repo list

   NAME                  URL
   stable                https://charts.helm.sh/stable
   rancher-<CHART_REPO>  https://releases.rancher.com/server-charts/<CHART_REPO>
   ```

2. Remove the existing Helm chart repository that contains your charts to install Rancher, which will be either `rancher-stable` or `rancher-latest`, depending on what you initially added.

   ```plain
   helm repo remove rancher-<CHART_REPO>
   ```

3. Add the Helm chart repository that you want to install Rancher from.

   ```plain
   helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
   ```

4. Continue to follow the steps to [upgrade Rancher](../install-upgrade-on-a-kubernetes-cluster/upgrades.md) from the new Helm chart repository.

</TabItem>
<TabItem value="Docker Images">
When performing [Docker installs](../other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md), upgrades, or rollbacks, you can use _tags_ to install a specific version of Rancher.

### Server Tags

Rancher server is distributed as a Docker image, which has tags attached to it. You can specify a tag when entering the command to deploy Rancher. Remember that if you use a tag without an explicit version (like `latest` or `stable`), you must explicitly pull a new version of that image tag. Otherwise, any image cached on the host will be used.

| Tag | Description |
| -------------------------- | ------ |
| `rancher/rancher:latest` | Our latest development release. These builds are validated through our CI automation framework. These releases are not recommended for production environments. |
| `rancher/rancher:stable` | Our newest stable release. This tag is recommended for production. |
| `rancher/rancher:<v2.X.X>` | You can install specific versions of Rancher by using the tag from a previous release. See what's available at Docker Hub. |

:::note

- The `master` tag, or any tag with `-rc` or another suffix, is meant for the Rancher testing team to validate. You should not use these tags, as these builds are not officially supported.
- Want to preview an alpha release? Install using one of the alpha tags listed on our [announcements page](https://forums.rancher.com/c/announcements) (e.g., `v2.2.0-alpha1`). Caveat: Alpha releases cannot be upgraded to or from any other release.

:::
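Because `latest` and `stable` are floating tags, they are only refreshed when you pull them explicitly. A sketch:

```shell
# Fetch the newest image behind the floating tag; otherwise any image
# already cached on the host is reused.
docker pull rancher/rancher:latest
```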
</TabItem>
</Tabs>

---
title: About Custom CA Root Certificates
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/custom-ca-root-certificates"/>
</head>

If you're using Rancher in an internal production environment where you aren't exposing apps publicly, use a certificate from a private certificate authority (CA).

Services that Rancher needs to access are sometimes configured with a certificate from a custom/internal CA root, also known as a self-signed certificate. If Rancher cannot validate the certificate presented by the service, the following error displays: `x509: certificate signed by unknown authority`.

To validate the certificate, the CA root certificates need to be added to Rancher. As Rancher is written in Go, the environment variable `SSL_CERT_DIR` can be used to point to the directory in the container where the CA root certificates are located. The CA root certificates directory can be mounted using the Docker volume option (`-v host-source-directory:container-destination-directory`) when starting the Rancher container.
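Putting those two pieces together, this sketch (the host and container paths are hypothetical) mounts a certificate directory and points `SSL_CERT_DIR` at it:

```shell
# Mount the host directory holding the CA root certificates, and tell
# Rancher (Go's TLS stack) where to find them inside the container.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /host/certs:/container/certs \
  -e SSL_CERT_DIR="/container/certs" \
  --privileged \
  rancher/rancher:latest
```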
Examples of services that Rancher can access:

- Catalogs
- Authentication providers
- Hosting/cloud APIs when using node drivers

## Installing with the Custom CA Certificate

For details on starting a Rancher container with your private CA certificates mounted, refer to the installation docs:

- [Docker install Custom CA certificate options](../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#custom-ca-certificate)
- [Kubernetes install options for Additional Trusted CAs](../installation-references/helm-chart-options.md#additional-trusted-cas)
@@ -0,0 +1,35 @@
---
title: Helm Version Requirements
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/helm-version-requirements"/>
</head>

This section contains the requirements for Helm, which is the tool used to install Rancher on a high-availability Kubernetes cluster.

> The installation instructions have been updated for Helm 3. For migration of installs started with Helm 2, refer to the official [Helm 2 to 3 Migration Docs.](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) [This section](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/helm2.md) provides a copy of the older high-availability Rancher installation instructions that used Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible.

<DeprecationHelm2 />

## Identifying the Proper Helm v3 Version

Select a Helm v3 version that is officially compatible with the Kubernetes version range supported by your target Rancher version, as listed in the [Rancher Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions).

To apply this rule, you may need to reference two external resources:

- **Helm Version Compatibility:** Refer to the [Helm Version Support Policy](https://helm.sh/docs/topics/version_skew/) to confirm which Kubernetes versions each Helm minor version supports.
- **Rancher's Kubernetes Support Range:** Use the [Rancher Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions) to identify the Kubernetes versions supported by your target Rancher minor version.

### Example

- **Scenario:** You are targeting Rancher v2.11.4, which supports Kubernetes versions 1.30 through 1.32.
- **Application:** The rule requires a Helm version that supports this range. You can verify this by checking the Helm version's compatibility with the highest version in the range, Kubernetes v1.32.
- **Result:** You find that both Helm v3.17 and Helm v3.18 support the Kubernetes v1.30-v1.32 range.
- Although both work, we recommend Helm v3.18 because it is the newest Helm minor version overlapping the supported Kubernetes range.
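Before comparing against the support matrix, you can check which Helm client version is installed locally (this assumes `helm` is already on your `PATH`):

```shell
# Print only the installed Helm client version string, e.g. v3.18.x
helm version --template '{{.Version}}'
```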
## Additional Notes

- Helm v3.2.x or higher is required to install or upgrade Rancher v2.5.
- Helm v2 support was removed in Rancher v2.9.x.
- When using tools that run Helm commands for you (such as Terraform), make sure they are configured to use the correct Helm version.
@@ -0,0 +1,17 @@
---
title: Setting up Local System Charts for Air Gapped Installations
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/local-system-charts"/>
</head>

The [Charts](https://github.com/rancher/charts) repository contains all the Helm catalog items required for features such as monitoring, logging, alerting, and Istio.

In an air gapped installation of Rancher, you will need to configure Rancher to use a local copy of the system charts. This section describes how to use local system charts using a CLI flag.

## Using Local System Charts

A local copy of `system-charts` has been packaged into the `rancher/rancher` container. To use these features in an air gap install, run the Rancher install command with an extra environment variable, `CATTLE_SYSTEM_CATALOG=bundled`, which tells Rancher to use the local copy of the charts instead of attempting to fetch them from GitHub.

Example commands for a Rancher installation with a bundled `system-charts` are included in the [air gap installation](../other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) instructions for Docker and Helm installs.
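For a Docker install, the command takes roughly the following shape. This is a sketch: the registry placeholder, version tag, and other flags are assumptions, and the air gap instructions linked above contain the authoritative set of options:

```shell
# CATTLE_SYSTEM_CATALOG=bundled makes Rancher use the packaged
# system-charts instead of fetching them from GitHub.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e CATTLE_SYSTEM_CATALOG=bundled \
  --privileged \
  <REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher:<RANCHER_VERSION_TAG>
```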
@@ -0,0 +1,29 @@
---
title: Resources
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources"/>
</head>

### Docker Installations

The [single-node Docker installation](../other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md) is for Rancher users who want to test out Rancher. Instead of running on a Kubernetes cluster using Helm, you install the Rancher server component on a single node using a `docker run` command.

Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes and you will lose all the data of your Rancher server.

### Air-Gapped Installations

Follow [these steps](../other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) to install the Rancher server in an air gapped environment.

An air gapped environment is one where the Rancher server is installed offline, behind a firewall, or behind a proxy.

### Advanced Options

When installing Rancher, there are several advanced options that can be enabled during installation. These options are presented within each install guide. Learn more about these options:

- [Custom CA Certificate](custom-ca-root-certificates.md)
- [API Audit Log](../../../how-to-guides/advanced-user-guides/enable-api-audit-log.md)
- [TLS Settings](../installation-references/tls-settings.md)
- [etcd configuration](../../../how-to-guides/advanced-user-guides/tune-etcd-for-large-installs.md)
- [Local System Charts for Air Gap Installations](local-system-charts.md)
@@ -0,0 +1,267 @@
---
title: Updating the Rancher Certificate
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/update-rancher-certificate"/>
</head>

## Updating a Private CA Certificate

Follow these steps to rotate an SSL certificate and private CA used by Rancher [installed on a Kubernetes cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md), or to migrate to an SSL certificate signed by a private CA.

A summary of the steps is as follows:

1. Create or update the `tls-rancher-ingress` Kubernetes secret object with the new certificate and private key.
1. Create or update the `tls-ca` Kubernetes secret object with the root CA certificate (only required when using a private CA).
1. Update the Rancher installation using the Helm CLI.
1. Reconfigure the Rancher agents to trust the new CA certificate.
1. Select Force Update of Fleet clusters to connect the fleet-agent to Rancher.

The details of these instructions are below.

### 1. Create/update the certificate secret object

First, concatenate the server certificate followed by any intermediate certificate(s) into a file named `tls.crt` and provide the corresponding certificate key in a file named `tls.key`.

Use the following command to create the `tls-rancher-ingress` secret object in the Rancher (local) management cluster:

```bash
kubectl -n cattle-system create secret tls tls-rancher-ingress \
--cert=tls.crt \
--key=tls.key
```

Alternatively, to update an existing `tls-rancher-ingress` secret:

```bash
kubectl -n cattle-system create secret tls tls-rancher-ingress \
--cert=tls.crt \
--key=tls.key \
--dry-run=client --save-config -o yaml | kubectl apply -f -
```

### 2. Create/update the CA certificate secret object

If the new certificate was signed by a private CA, you will need to copy the corresponding root CA certificate into a file named `cacerts.pem` and create or update the `tls-ca` secret in the `cattle-system` namespace. If the certificate was signed by an intermediate CA, then `cacerts.pem` must contain both the intermediate and root CA certificates (in this order).

To create the initial `tls-ca` secret:

```bash
kubectl -n cattle-system create secret generic tls-ca \
--from-file=cacerts.pem
```

To update an existing `tls-ca` secret:

```bash
kubectl -n cattle-system create secret generic tls-ca \
--from-file=cacerts.pem \
--dry-run=client --save-config -o yaml | kubectl apply -f -
```

### 3. Reconfigure the Rancher deployment

If the certificate source remains the same (for example, `secret`), follow the steps in 3a.

However, if the certificate source is changing (for example, `letsEncrypt` to `secret`), follow the steps in 3b.

#### 3a. Redeploy the Rancher pods

This step is required when the certificate source remains the same, but the CA certificate is being updated.

In this scenario, a redeploy of the Rancher pods is needed because the `tls-ca` secret is read by the Rancher pods when starting.

The command below can be used to redeploy the Rancher pods:

```bash
kubectl rollout restart deploy/rancher -n cattle-system
```

When the change is completed, navigate to `https://<RANCHER_SERVER_URL>/v3/settings/cacerts` to verify that the value matches the CA certificate written in the `tls-ca` secret earlier. The `cacerts` value may not update until all of the redeployed Rancher pods start.
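The same comparison can be done from the command line. This is a sketch that assumes `curl`, `jq`, and `base64` are available and that `<RANCHER_SERVER_URL>` is replaced with your Rancher URL:

```shell
# Empty diff output means the CA served by Rancher matches the tls-ca secret.
diff \
  <(curl -sk https://<RANCHER_SERVER_URL>/v3/settings/cacerts | jq -r .value) \
  <(kubectl -n cattle-system get secret tls-ca -o jsonpath='{.data.cacerts\.pem}' | base64 -d)
```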
#### 3b. Update the Helm values for Rancher

This step is required if the certificate source is changing: for example, if Rancher was previously configured to use the default self-signed certificate (`ingress.tls.source=rancher`) or Let's Encrypt (`ingress.tls.source=letsEncrypt`), and is now using a certificate signed by a private CA (`ingress.tls.source=secret`).

The below steps update the Helm values for the Rancher chart, so the Rancher pods and ingress are reconfigured to use the new private CA certificate created in Steps 1 and 2.

1. Adjust the values that were used during initial installation. Store the current values with:

   ```bash
   helm get values rancher -n cattle-system -o yaml > values.yaml
   ```

1. Retrieve the version string of the currently deployed Rancher chart to use below:

   ```bash
   helm ls -n cattle-system
   ```

1. Update the current Helm values in the `values.yaml` file to contain:

   ```yaml
   ingress:
     tls:
       source: secret
   privateCA: true
   ```

   :::note Important:
   As the certificate is signed by a private CA, it is important to ensure [`privateCA: true`](../installation-references/helm-chart-options.md#common-options) is set in the `values.yaml` file.
   :::

1. Upgrade the Helm application instance using the `values.yaml` file and the current chart version. The version must match to prevent an upgrade of Rancher.

   ```bash
   helm upgrade rancher rancher-stable/rancher \
   --namespace cattle-system \
   -f values.yaml \
   --version <DEPLOYED_RANCHER_VERSION>
   ```

When the change is completed, navigate to `https://<RANCHER_SERVER_URL>/v3/settings/cacerts` to verify that the value matches the CA certificate written in the `tls-ca` secret earlier. The `cacerts` value may not update until all Rancher pods start.

### 4. Reconfigure Rancher agents to trust the private CA

This section covers three methods to reconfigure Rancher agents to trust the private CA. This step is required if either of the following is true:

- Rancher was previously configured to use the Rancher self-signed certificate (`ingress.tls.source=rancher`) or a Let's Encrypt issued certificate (`ingress.tls.source=letsEncrypt`)
- The certificate was signed by a different private CA

#### Why is this step required?

When Rancher is configured with a certificate signed by a private CA, the CA certificate chain must be trusted by the Rancher agent containers. Agents compare the checksum of the downloaded certificate against the `CATTLE_CA_CHECKSUM` environment variable. This means that, when the private CA certificate used by Rancher has changed, the environment variable `CATTLE_CA_CHECKSUM` must be updated accordingly.

#### Which method should I choose?

Method 1 is the easiest, but requires all clusters to be connected to Rancher after the certificates have been rotated. This is usually the case if the process is performed right after updating or redeploying the Rancher deployment (Step 3).

If the clusters have lost connection to Rancher but [Authorized Cluster Endpoint](../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md) (ACE) is enabled on all clusters, then use Method 2.

Method 3 can be used as a fallback if Methods 1 and 2 are not possible.

#### Method 1: Force a redeploy of the Rancher agents

For each downstream cluster, run the following command using the kubeconfig file of the Rancher (local) management cluster:

```bash
kubectl annotate clusters.management.cattle.io <CLUSTER_ID> io.cattle.agent.force.deploy=true
```

:::note
Locate the cluster ID (c-xxxxx) for the downstream cluster. It can be seen in the browser URL bar when viewing the cluster in the Rancher UI, under Cluster Management.
:::

This command will cause the agent manifest to be reapplied with the checksum of the new certificate.
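If many downstream clusters need the annotation, the same command can be applied to every cluster object in a loop. This is a sketch only: it annotates all registered clusters, including the local one, so narrow the selection if that is not what you want:

```shell
# Force-redeploy the agents of every registered cluster.
for cluster in $(kubectl get clusters.management.cattle.io -o name); do
  kubectl annotate "$cluster" io.cattle.agent.force.deploy=true --overwrite
done
```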
#### Method 2: Manually update the checksum environment variable

Manually patch the agent Kubernetes objects by updating the `CATTLE_CA_CHECKSUM` environment variable to the value matching the checksum of the new CA certificate. Generate the new checksum value like so:

```bash
curl -k -s -fL <RANCHER_SERVER_URL>/v3/settings/cacerts | jq -r .value | sha256sum | awk '{print $1}'
```
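If you already have the CA certificate as a local file, the same checksum can be computed without querying Rancher:

```shell
# cacerts.pem is assumed to hold the CA certificate returned by
# <RANCHER_SERVER_URL>/v3/settings/cacerts.
sha256sum cacerts.pem | awk '{print $1}'
```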
Using a kubeconfig for each downstream cluster, update the environment variable for the two agent deployments. If the [ACE](../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md) is enabled for the cluster, [the kubectl context can be adjusted](../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) to connect directly to the downstream cluster.

```bash
kubectl edit -n cattle-system ds/cattle-node-agent
kubectl edit -n cattle-system deployment/cattle-cluster-agent
```

#### Method 3: Manually redeploy the Rancher agents

With this method, the Rancher agents are reapplied by running a set of commands on a control plane node of each downstream cluster.

Repeat the below steps for each downstream cluster:

1. Retrieve the agent registration kubectl command:
   1. Locate the cluster ID (c-xxxxx) for the downstream cluster. It can be seen in the URL when viewing the cluster in the Rancher UI under Cluster Management.
   1. Add the Rancher server URL and cluster ID to the following URL: `https://<RANCHER_SERVER_URL>/v3/clusterregistrationtokens?clusterId=<CLUSTER_ID>`
   1. Copy the command from the `insecureCommand` field. This command is used because a private CA is in use.

2. Run the kubectl command from the previous step using a kubeconfig for the downstream cluster with one of the following methods:
   1. If the [ACE](../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md) is enabled for the cluster, [the context can be adjusted](../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) to connect directly to the downstream cluster.
   1. Alternatively, SSH into the control plane node:
      - RKE: Use the [steps in the document here](https://github.com/rancherlabs/support-tools/tree/master/how-to-retrieve-kubeconfig-from-custom-cluster) to generate a kubeconfig
      - RKE2/K3s: Use the kubeconfig populated during installation

### 5. Force Update Fleet clusters to reconnect the fleet-agent to Rancher

Select 'Force Update' for the clusters within the [Continuous Delivery](../../../integrations-in-rancher/fleet/overview.md#accessing-fleet-in-the-rancher-ui) view of the Rancher UI to allow the fleet-agent in downstream clusters to successfully connect to Rancher.

#### Why is this step required?

Fleet agents in Rancher managed clusters store a kubeconfig that is used to connect to Rancher. The kubeconfig contains a `certificate-authority-data` field containing the CA for the certificate used by Rancher. When changing the CA, this block needs to be updated to allow the fleet-agent to trust the certificate used by Rancher.

## Updating from a Private CA Certificate to a Public CA Certificate

Follow these steps to perform the opposite of the procedure shown above: to change from a certificate issued by a private CA to one issued by a public CA, or to a self-signed certificate.

### 1. Create/update the certificate secret object

First, concatenate the server certificate followed by any intermediate certificate(s) into a file named `tls.crt` and provide the corresponding certificate key in a file named `tls.key`.

Use the following command to create the `tls-rancher-ingress` secret object in the Rancher (local) management cluster:

```bash
kubectl -n cattle-system create secret tls tls-rancher-ingress \
--cert=tls.crt \
--key=tls.key
```

Alternatively, to update an existing `tls-rancher-ingress` secret:

```bash
kubectl -n cattle-system create secret tls tls-rancher-ingress \
--cert=tls.crt \
--key=tls.key \
--dry-run=client --save-config -o yaml | kubectl apply -f -
```
### 2. Delete the CA certificate secret object

Delete the `tls-ca` secret in the `cattle-system` namespace, as it is no longer needed. You may optionally save a copy of the `tls-ca` secret first.

To save the existing `tls-ca` secret:

```bash
kubectl -n cattle-system get secret tls-ca -o yaml > tls-ca.yaml
```

To delete the existing `tls-ca` secret:

```bash
kubectl -n cattle-system delete secret tls-ca
```

### 3. Reconfigure the Rancher deployment

This step is required if the certificate source is changing. In this scenario, it is likely only changing because Rancher was previously configured to use the default self-signed certificate (`ingress.tls.source=rancher`).

The below steps update the Helm values for the Rancher chart, so the Rancher pods and ingress are reconfigured to use the new certificate created in Step 1.

1. Adjust the values that were used during initial installation. Store the current values with:

   ```bash
   helm get values rancher -n cattle-system -o yaml > values.yaml
   ```

1. Also get the version string of the currently deployed Rancher chart:

   ```bash
   helm ls -n cattle-system
   ```

1. Update the current Helm values in the `values.yaml` file:
   1. As a private CA is no longer being used, remove the `privateCA: true` field, or set it to `false`.
   1. Adjust the `ingress.tls.source` field as necessary. Refer to [the chart options](../installation-references/helm-chart-options.md#common-options) for more details. Here are some examples:
      1. If using a public CA, continue with a value of `secret`.
      1. If using Let's Encrypt, update the value to `letsEncrypt`.
1. Update the Helm values for the Rancher chart using the `values.yaml` file, and the current chart version to prevent an upgrade:

   ```bash
   helm upgrade rancher rancher-stable/rancher \
   --namespace cattle-system \
   -f values.yaml \
   --version <DEPLOYED_RANCHER_VERSION>
   ```

### 4. Reconfigure Rancher agents for the non-private/common certificate

As a private CA is no longer being used, the `CATTLE_CA_CHECKSUM` environment variable on the downstream cluster agents should be removed or set to "" (an empty string).
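One way to clear the variable is with `kubectl set env`, run against each downstream cluster's kubeconfig. This is a sketch; the object names are the ones used by the Rancher agents:

```shell
# Setting the variable to an empty string removes the checksum pinning.
kubectl -n cattle-system set env ds/cattle-node-agent CATTLE_CA_CHECKSUM=""
kubectl -n cattle-system set env deployment/cattle-cluster-agent CATTLE_CA_CHECKSUM=""
```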
### 5. Force Update Fleet clusters to reconnect the fleet-agent to Rancher

Select 'Force Update' for the clusters within the [Continuous Delivery](../../../integrations-in-rancher/fleet/overview.md#accessing-fleet-in-the-rancher-ui) view of the Rancher UI to allow the fleet-agent in downstream clusters to successfully connect to Rancher.

#### Why is this step required?

Fleet agents in Rancher managed clusters store a kubeconfig that is used to connect to Rancher. The kubeconfig contains a `certificate-authority-data` field containing the CA for the certificate used by Rancher. When changing the CA, this block needs to be updated to allow the fleet-agent to trust the certificate used by Rancher.
@@ -0,0 +1,288 @@
---
title: Upgrading Cert-Manager
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/upgrade-cert-manager"/>
</head>

Rancher is compatible with the API version cert-manager.io/v1 and was last tested with cert-manager version v1.13.1.

Rancher uses cert-manager to automatically generate and renew TLS certificates for HA deployments of Rancher. As of Fall 2019, three important changes to cert-manager are set to occur that you need to take action on if you have an HA deployment of Rancher:

1. [Let's Encrypt will be blocking cert-manager instances older than 0.8.0 starting November 1st 2019.](https://community.letsencrypt.org/t/blocking-old-cert-manager-versions/98753)
1. [Cert-manager is deprecating and replacing the `certificate.spec.acme.solvers` field](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/). This change has no exact deadline.
1. [Cert-manager is deprecating the `v1alpha1` API and replacing its API group](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/)

To address these changes, this guide will do two things:

1. Document the procedure for upgrading cert-manager
1. Explain the cert-manager API changes and link to cert-manager's official documentation for migrating your data

:::note Important:

If you are upgrading cert-manager to the latest version from a version older than 1.5, follow the steps in [Option C](#option-c-upgrade-cert-manager-from-versions-15-and-below) below. Note that you do not need to reinstall Rancher to perform this upgrade.

:::

## Upgrade Cert-Manager

The namespace used in these instructions depends on the namespace cert-manager is currently installed in. If it is in `kube-system`, use that in the instructions below. You can verify by running `kubectl get pods --all-namespaces` and checking which namespace the `cert-manager-*` pods are listed in. Do not change the namespace cert-manager is running in, as this can cause issues.

In order to upgrade cert-manager, follow these instructions:
### Option A: Upgrade cert-manager with Internet Access

<details id="normal">
<summary>Click to expand</summary>

1. [Back up existing resources](https://cert-manager.io/docs/tutorials/backup/) as a precaution

   ```plain
   kubectl get -o yaml --all-namespaces \
   issuer,clusterissuer,certificates,certificaterequests > cert-manager-backup.yaml
   ```

   :::note Important:

   If you are upgrading from a version older than 0.11.0, update the apiVersion on all your backed up resources from `certmanager.k8s.io/v1alpha1` to `cert-manager.io/v1alpha2`. If you use any cert-manager annotations on any of your other resources, you will need to update them to reflect the new API group. For details, refer to the documentation on [additional annotation changes.](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/#additional-annotation-changes)

   :::

1. [Uninstall the existing deployment](https://cert-manager.io/docs/installation/uninstall/kubernetes/#uninstalling-with-helm)

   ```plain
   helm uninstall cert-manager
   ```

   Delete the CustomResourceDefinition resources using the link to the version vX.Y.Z you installed:

   ```plain
   kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/vX.Y.Z/cert-manager.crds.yaml
   ```

1. Install the CustomResourceDefinition resources separately

   ```plain
   kubectl apply --validate=false -f https://github.com/cert-manager/cert-manager/releases/download/vX.Y.Z/cert-manager.crds.yaml
   ```

   :::note

   If you are running Kubernetes v1.15 or below, you will need to add the `--validate=false` flag to your `kubectl apply` command above. Otherwise, you will receive a validation error relating to the `x-kubernetes-preserve-unknown-fields` field in cert-manager's CustomResourceDefinition resources. This is a benign error and occurs due to the way kubectl performs resource validation.

   :::

1. Create the namespace for cert-manager if needed

   ```plain
   kubectl create namespace cert-manager
   ```

1. Add the Jetstack Helm repository

   ```plain
   helm repo add jetstack https://charts.jetstack.io
   ```

1. Update your local Helm chart repository cache

   ```plain
   helm repo update
   ```

1. Install the new version of cert-manager

   ```plain
   helm install \
   cert-manager jetstack/cert-manager \
   --namespace cert-manager
   ```

1. [Restore the backed up resources](https://cert-manager.io/docs/tutorials/backup/#restoring-resources)

   ```plain
   kubectl apply -f cert-manager-backup.yaml
   ```

</details>
### Option B: Upgrade cert-manager in an Air-Gapped Environment

<details id="airgap">
<summary>Click to expand</summary>

### Prerequisites

Before you can perform the upgrade, you must prepare your air gapped environment by adding the necessary container images to your private registry and downloading or rendering the required Kubernetes manifest files.

1. Follow the guide to [Prepare your Private Registry](../other-installation-methods/air-gapped-helm-cli-install/publish-images.md) with the images needed for the upgrade.

1. From a system connected to the internet, add the cert-manager repo to Helm:

   ```plain
   helm repo add jetstack https://charts.jetstack.io
   helm repo update
   ```

1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://artifacthub.io/packages/helm/cert-manager/cert-manager).

   ```plain
   helm fetch jetstack/cert-manager
   ```

1. Render the cert-manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.

   The Helm 3 command is as follows:

   ```plain
   helm template cert-manager ./cert-manager-v0.12.0.tgz --output-dir . \
   --namespace cert-manager \
   --set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller \
   --set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook \
   --set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector
   ```

   <DeprecationHelm2 />

   The Helm 2 command is as follows:

   ```plain
   helm template ./cert-manager-v0.12.0.tgz --output-dir . \
   --name cert-manager --namespace cert-manager \
   --set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller \
   --set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook \
   --set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector
   ```

1. Download the required CRD files for cert-manager (old and new):

   ```plain
   curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/cert-manager/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
   curl -L -o cert-manager/cert-manager-crd-old.yaml https://raw.githubusercontent.com/cert-manager/cert-manager/release-X.Y/deploy/manifests/00-crds.yaml
   ```
### Install cert-manager
|
||||
|
||||
1. Back up existing resources as a precaution
|
||||
|
||||
```plain
|
||||
kubectl get -o yaml --all-namespaces \
|
||||
issuer,clusterissuer,certificates,certificaterequests > cert-manager-backup.yaml
|
||||
```
|
||||
|
||||
:::note Important:
|
||||
|
||||
If you are upgrading from a version older than 0.11.0, Update the apiVersion on all your backed up resources from `certmanager.k8s.io/v1alpha1` to `cert-manager.io/v1alpha2`. If you use any cert-manager annotations on any of your other resources, you will need to update them to reflect the new API group. For details, refer to the documentation on [additional annotation changes.](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/#additional-annotation-changes)
|
||||
|
||||
:::
|
||||
|
||||
1. Delete the existing cert-manager installation
|
||||
|
||||
```plain
|
||||
kubectl -n cert-manager \
|
||||
delete deployment,sa,clusterrole,clusterrolebinding \
|
||||
-l 'app=cert-manager' -l 'chart=cert-manager-v0.5.2'
|
||||
```
|
||||
|
||||
Delete the CustomResourceDefinition using the link to the version vX.Y you installed
|
||||
|
||||
```plain
|
||||
kubectl delete -f cert-manager/cert-manager-crd-old.yaml
|
||||
```
|
||||
|
||||
1. Install the CustomResourceDefinition resources separately
|
||||
|
||||
```plain
|
||||
kubectl apply -f cert-manager/cert-manager-crd.yaml
|
||||
```
|
||||
|
||||
:::note Important:
|
||||
|
||||
If you are running Kubernetes v1.15 or below, you will need to add the `--validate=false` flag to your `kubectl apply` command above. Otherwise, you will receive a validation error relating to the `x-kubernetes-preserve-unknown-fields` field in cert-manager’s CustomResourceDefinition resources. This is a benign error and occurs due to the way kubectl performs resource validation.
|
||||
|
||||
:::
|
||||
|
||||
1. Create the namespace for cert-manager
|
||||
|
||||
```plain
|
||||
kubectl create namespace cert-manager
|
||||
```
|
||||
|
||||
1. Install cert-manager
|
||||
|
||||
```plain
|
||||
kubectl -n cert-manager apply -R -f ./cert-manager
|
||||
```
1. [Restore the backed-up resources](https://cert-manager.io/docs/tutorials/backup/#restoring-resources):

   ```plain
   kubectl apply -f cert-manager-backup.yaml
   ```

</details>

### Option C: Upgrade cert-manager from Versions 1.5 and Below

<details id="normal">
<summary>Click to expand</summary>

Previously, in order to upgrade cert-manager from an older version, an uninstall and reinstall of Rancher was recommended. Using the method below, you can upgrade cert-manager without those additional steps and better preserve your production environment:

1. Install `cmctl`, the cert-manager CLI tool, using [the installation guide](https://cert-manager.io/docs/usage/cmctl/#installation).

1. Ensure that any cert-manager custom resources that may have been stored in etcd at a deprecated API version are migrated to `v1`:

   ```plain
   cmctl upgrade migrate-api-version
   ```

   Refer to the [API version migration docs](https://cert-manager.io/docs/usage/cmctl/#migrate-api-version) for more information. Also see the [docs to upgrade from 1.5 to 1.6](https://cert-manager.io/docs/installation/upgrading/upgrading-1.5-1.6/) and the [docs to upgrade from 1.6 to 1.7](https://cert-manager.io/docs/installation/upgrading/upgrading-1.6-1.7/) if needed.

1. Upgrade cert-manager to v1.7.1 with a normal `helm upgrade`. You may go directly from version 1.5 to 1.7 if desired.

1. Follow the Helm tutorial to [update the API version of a release manifest](https://helm.sh/docs/topics/kubernetes_apis/#updating-api-versions-of-a-release-manifest). The chart release name is `release_name=rancher` and the release namespace is `release_namespace=cattle-system`.

1. In the decoded file, search for `cert-manager.io/v1beta1` and **replace it** with `cert-manager.io/v1`.

1. Upgrade Rancher normally with `helm upgrade`.

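The find-and-replace on the decoded release manifest can be sketched with `sed`. The file name `decoded-manifest.yaml` is illustrative, and a tiny sample manifest is created here so the substitution can be shown end-to-end:

```shell
# Create a tiny sample manifest standing in for the release manifest
# decoded in the Helm tutorial step above (file name is illustrative).
cat > decoded-manifest.yaml <<'EOF'
apiVersion: cert-manager.io/v1beta1
kind: Certificate
EOF

# Replace the deprecated API group version in place.
sed -i 's#cert-manager.io/v1beta1#cert-manager.io/v1#g' decoded-manifest.yaml

grep apiVersion decoded-manifest.yaml
```

On a real release manifest, the same `sed` expression is applied to the file you decoded before re-encoding it.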
</details>

### Verify the Deployment

Once you've installed cert-manager, you can verify it is deployed correctly by checking the `cert-manager` namespace for running pods:

```
kubectl get pods --namespace cert-manager

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-5c6866597-zw7kh               1/1     Running   0          2m
cert-manager-cainjector-577f6d9fd7-tr77l   1/1     Running   0          2m
cert-manager-webhook-787858fcdb-nlzsq      1/1     Running   0          2m
```

## Cert-Manager API change and data migration

---

Rancher now supports cert-manager versions 1.6.2 and 1.7.1. We recommend v1.7.x because v1.6.x will reach end-of-life on March 30, 2022. To read more, see the [cert-manager docs](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md#4-install-cert-manager). For instructions on upgrading cert-manager from version 1.5 to 1.6, see the upstream cert-manager documentation [here](https://cert-manager.io/docs/installation/upgrading/upgrading-1.5-1.6/). For instructions on upgrading cert-manager from version 1.6 to 1.7, see the upstream cert-manager documentation [here](https://cert-manager.io/docs/installation/upgrading/upgrading-1.6-1.7/).

---

Cert-manager has deprecated the use of the `certificate.spec.acme.solvers` field and will drop support for it completely in an upcoming release.

Per the cert-manager documentation, a new format for configuring ACME certificate resources was introduced in v0.8. Specifically, the challenge solver configuration field was moved. Both the old format and the new one are supported as of v0.9, but support for the old format will be dropped in an upcoming release of cert-manager. The cert-manager documentation strongly recommends that, after upgrading, you update your ACME Issuer and Certificate resources to the new format.

Details about the change and migration instructions can be found in the [cert-manager v0.7 to v0.8 upgrade instructions](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/).

The v0.11 release marks the removal of the v1alpha1 API that was used in previous versions of cert-manager, as well as the API group changing from `certmanager.k8s.io` to `cert-manager.io`.

Support for the old configuration format that was deprecated in the v0.8 release has also been removed. This means you must transition to the new solvers-style configuration format for your ACME issuers before upgrading to v0.11. For more information, see the [upgrading to v0.8 guide](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/).

Details about the change and migration instructions can be found in the [cert-manager v0.10 to v0.11 upgrade instructions](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/).

For more information, see the [cert-manager upgrade documentation](https://cert-manager.io/docs/installation/upgrade/).

@@ -0,0 +1,129 @@
---
title: Upgrading and Rolling Back Kubernetes
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/upgrade-and-roll-back-kubernetes"/>
</head>

Following an upgrade to the latest version of Rancher, downstream Kubernetes clusters can be upgraded to use the latest supported version of Kubernetes.

Rancher calls RKE (Rancher Kubernetes Engine) as a library when provisioning and editing RKE clusters. For more information on configuring the upgrade strategy for RKE clusters, refer to the [RKE documentation](https://rancher.com/docs/rke/latest/en/).

## Tested Kubernetes Versions

Before a new version of Rancher is released, it's tested with the latest minor versions of Kubernetes to ensure compatibility. For details on which versions of Kubernetes were tested on each Rancher version, refer to the [support maintenance terms.](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.6.0/)

## How Upgrades Work

RKE v1.1.0 changed the way that clusters are upgraded.

In this section of the [RKE documentation,](https://rancher.com/docs/rke/latest/en/upgrades/how-upgrades-work) you'll learn what happens when you edit or upgrade your RKE Kubernetes cluster.

## Recommended Best Practice for Upgrades

When upgrading the Kubernetes version of a cluster, we recommend that you:

1. Take a snapshot.
1. Initiate a Kubernetes upgrade.
1. If the upgrade fails, revert the cluster to the pre-upgrade Kubernetes version. This is achieved by selecting the **Restore etcd and Kubernetes version** option. This returns your cluster to the pre-upgrade Kubernetes version before restoring the etcd snapshot.

The restore operation works even on a cluster that is not in a healthy or active state.

## Upgrading the Kubernetes Version

:::note Prerequisites:

- The options below are available for [Rancher-launched Kubernetes clusters](../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md) and [Registered K3s Kubernetes clusters](../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/register-existing-clusters.md#additional-features-for-registered-rke2-and-k3s-clusters).
- The following options also apply to imported RKE2 clusters that you have registered. If you import a cluster from an external cloud platform but don't register it, you won't be able to upgrade the Kubernetes version from Rancher.
- Before upgrading Kubernetes, [back up your cluster.](../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/backup-restore-and-disaster-recovery.md)

:::

1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster you want to upgrade and click **⋮ > Edit Config**.
1. From the **Kubernetes Version** drop-down, choose the version of Kubernetes that you want to use for the cluster.
1. Click **Save**.

**Result:** Kubernetes begins upgrading for the cluster.

## Rolling Back

A cluster can be restored from a backup taken while the previous Kubernetes version was in use. For more information, refer to the following sections:

- [Backing up a cluster](../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-rancher-launched-kubernetes-clusters.md#how-snapshots-work)
- [Restoring a cluster from backup](../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-rancher-launched-kubernetes-clusters-from-backup.md#restoring-a-cluster-from-a-snapshot)

## Configuring the Upgrade Strategy

As of RKE v1.1.0, additional upgrade options became available to give you more granular control over the upgrade process. These options can be used to maintain availability of your applications during a cluster upgrade if certain [conditions and requirements](https://rancher.com/docs/rke/latest/en/upgrades/maintaining-availability) are met.

The upgrade strategy can be configured in the Rancher UI, or by editing the `cluster.yml`. More advanced options are available by editing the `cluster.yml`.

### Configuring the Maximum Unavailable Worker Nodes in the Rancher UI

From the Rancher UI, the maximum number of unavailable worker nodes can be configured. During a cluster upgrade, worker nodes are upgraded in batches of this size.

By default, the maximum number of unavailable worker nodes is defined as 10 percent of all worker nodes. This number can be configured as a percentage or as an integer. When defined as a percentage, the batch size is rounded down to the nearest node, with a minimum of one node.

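As a quick illustration of that rounding rule (plain shell arithmetic, not a Rancher command):

```shell
# With 25 worker nodes and the default 10% maximum unavailable,
# 25 * 10% = 2.5, which rounds down to an upgrade batch size of 2.
workers=25
max_unavailable_percent=10
batch=$(( workers * max_unavailable_percent / 100 ))
# A percentage that would round down to zero still yields one node.
[ "$batch" -lt 1 ] && batch=1
echo "$batch"
```

For a five-node worker pool the same formula rounds 0.5 down to 0, so the minimum of one node applies.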
To change the default number or percentage of worker nodes,

1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster you want to upgrade and click **⋮ > Edit Config**.
1. In the **Upgrade Strategy** tab, enter the **Worker Concurrency** as a fixed number or percentage. To get this number, take the total number of nodes in your cluster and subtract the number of nodes that must remain available.
1. Click **Save**.

**Result:** The cluster is updated to use the new upgrade strategy.

### Enabling Draining Nodes During Upgrades from the Rancher UI

By default, RKE [cordons](https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration) each node before upgrading it. [Draining](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) is disabled during upgrades by default. If draining is enabled in the cluster configuration, RKE will both cordon and drain the node before it is upgraded.

To enable draining each node during a cluster upgrade,

1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to enable node draining and click **⋮ > Edit Config**.
1. In the **Upgrade Strategy** tab, go to the **Drain nodes** field and click **Yes**. Node draining is configured separately for control plane and worker nodes.
1. Configure the options for how pods are deleted. For more information about each option, refer to [this section.](../../how-to-guides/new-user-guides/manage-clusters/nodes-and-node-pools.md#aggressive-and-safe-draining-options)
1. Optionally, configure a grace period. The grace period is the timeout given to each pod for cleaning up, so it has a chance to exit gracefully. Pods might need to finish outstanding requests, roll back transactions, or save state to external storage. If this value is negative, the default value specified in the pod is used.
1. Optionally, configure a timeout, which is the amount of time the drain waits before giving up.
1. Click **Save**.

**Result:** The cluster is updated to use the new upgrade strategy.

:::note

- There is a [known issue](https://github.com/rancher/rancher/issues/25478) in which the Rancher UI doesn't show the state of etcd and controlplane as drained, even though they are being drained.
- During an upgrade, nodes may be drained even when no user-visible YAML changes are present. This can occur if non-dynamic configuration files are updated or if a new `system-agent-installer` image is introduced. In such cases, Rancher generates a new upgrade plan, resulting in a new plan hash. When `Upgrade Strategy` is set to `Drain nodes`, this plan change can trigger node draining.

:::

### Maintaining Availability for Applications During Upgrades

In [this section of the RKE documentation,](https://rancher.com/docs/rke/latest/en/upgrades/maintaining-availability/) you'll learn the requirements to prevent downtime for your applications when upgrading the cluster.

### Configuring the Upgrade Strategy in the cluster.yml

More advanced upgrade strategy configuration options are available by editing the `cluster.yml`.

For details, refer to [Configuring the Upgrade Strategy](https://rancher.com/docs/rke/latest/en/upgrades/configuring-strategy) in the RKE documentation. The section also includes an example `cluster.yml` for configuring the upgrade strategy.

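As a rough sketch, an `upgrade_strategy` block in `cluster.yml` might look like the following. The field names follow the RKE documentation; the values shown here are illustrative, not recommendations:

```yaml
nodes:
  # ...node definitions...
upgrade_strategy:
  max_unavailable_worker: 10%        # batch size for worker node upgrades
  max_unavailable_controlplane: "1"  # control plane nodes upgraded one at a time
  drain: false                       # set to true to drain nodes before upgrading
  node_drain_input:
    force: false
    ignore_daemonsets: true
    delete_local_data: false
    grace_period: -1                 # use each pod's own grace period
    timeout: 60                      # seconds to wait before giving up the drain
```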
## Troubleshooting

If a node doesn't come up after an upgrade, the `rke up` command errors out.

No upgrade will proceed if the number of unavailable nodes exceeds the configured maximum.

If an upgrade stops, you may need to fix an unavailable node or remove it from the cluster before the upgrade can continue.

A failed node could be in many different states:

- Powered off
- Unavailable
- Drained by a user while the upgrade was in progress, so no kubelet is running on the node
- The upgrade itself failed

If the maximum number of unavailable nodes is reached during an upgrade, Rancher user clusters will be stuck in an updating state and will not move forward with upgrading any other control plane nodes. Rancher continues to evaluate the set of unavailable nodes in case one of them becomes available. If a node cannot be fixed, you must remove it in order to continue the upgrade.

@@ -0,0 +1,97 @@
---
title: Upgrading Kubernetes without Upgrading Rancher
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/upgrade-kubernetes-without-upgrading-rancher"/>
</head>

<EOLRKE1Warning />

The RKE metadata feature allows you to provision clusters with new versions of Kubernetes as soon as they are released, without upgrading Rancher. This feature is useful for taking advantage of patch versions of Kubernetes, for example, if you want to upgrade to Kubernetes v1.14.7 when your Rancher server originally supported v1.14.6.

:::note

The Kubernetes API can change between minor versions. Therefore, we don't support introducing minor Kubernetes versions, such as introducing v1.15 when Rancher currently supports v1.14. You would need to upgrade Rancher to add support for minor Kubernetes versions.

:::

Rancher's Kubernetes metadata contains information specific to the Kubernetes version that Rancher uses to provision [RKE clusters](../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md). Rancher syncs the data periodically and creates custom resource definitions (CRDs) for **system images,** **service options** and **addon templates**. Consequently, when a new Kubernetes version is compatible with the Rancher server version, the Kubernetes metadata makes the new version available to Rancher for provisioning clusters. The metadata gives you an overview of the information that the [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) (RKE) uses for deploying various Kubernetes versions.

The table below describes the CRDs that are affected by the periodic data sync.

:::note

Only administrators can edit metadata CRDs. It is recommended not to update existing objects unless explicitly advised.

:::

| Resource | Description | Rancher API URL |
|----------|-------------|-----------------|
| System Images | List of system images used to deploy Kubernetes through RKE. | `<RANCHER_SERVER_URL>/v3/rkek8ssystemimages` |
| Service Options | Default options passed to Kubernetes components like `kube-api`, `scheduler`, `kubelet`, `kube-proxy`, and `kube-controller-manager`. | `<RANCHER_SERVER_URL>/v3/rkek8sserviceoptions` |
| Addon Templates | YAML definitions used to deploy addon components like Canal, Calico, Flannel, Weave, Kube-dns, CoreDNS, `metrics-server`, and `nginx-ingress`. | `<RANCHER_SERVER_URL>/v3/rkeaddons` |

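For example, the synced service options could be inspected through the Rancher API with an API token; the token and server URL below are placeholders:

```plain
curl -s -H "Authorization: Bearer <API_TOKEN>" \
  "https://<RANCHER_SERVER_URL>/v3/rkek8sserviceoptions"
```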
Administrators might configure the RKE metadata settings to do the following:

- Refresh the Kubernetes metadata, if a new patch version of Kubernetes comes out and they want Rancher to provision clusters with the latest version of Kubernetes without having to upgrade Rancher
- Change the metadata URL that Rancher uses to sync the metadata, which is useful for air gap setups if you need to sync Rancher locally instead of with GitHub
- Prevent Rancher from auto-syncing the metadata, which is one way to prevent new and unsupported Kubernetes versions from being available in Rancher

## Refresh Kubernetes Metadata

The option to refresh the Kubernetes metadata is available by default for administrators, or for any user who has the **Manage Cluster Drivers** [global role.](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md)

To force Rancher to refresh the Kubernetes metadata, a manual refresh action is available:

1. In the upper left corner, click **☰ > Cluster Management**.
1. In the left navigation menu, click **Drivers**.
1. Click **Refresh Kubernetes Metadata**.

You can configure Rancher to refresh the metadata only when desired by setting `refresh-interval-minutes` to `0` (see below) and using this button to refresh the metadata manually when desired.

### Configuring the Metadata Synchronization

:::caution

Only administrators can change these settings.

:::

The RKE metadata config controls how often Rancher syncs metadata and where it downloads data from. You can configure the metadata from the settings in the Rancher UI, or through the Rancher API at the endpoint `v3/settings/rke-metadata-config`.

The way that the metadata is configured depends on the Rancher version.

To edit the metadata config in Rancher,

1. In the upper left corner, click **☰ > Global Settings**.
1. Go to the **rke-metadata-config** section. Click **⋮ > Edit Setting**.
1. You can optionally fill in the following parameters:

   - `refresh-interval-minutes`: The amount of time that Rancher waits before syncing the metadata. To disable the periodic refresh, set `refresh-interval-minutes` to `0`.
   - `url`: The HTTP path that Rancher fetches data from. The path must be a direct path to a JSON file. For example, the default URL for Rancher v2.4 is `https://releases.rancher.com/kontainer-driver-metadata/release-v2.4/data.json`.

1. Click **Save**.

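For illustration, the value of the `rke-metadata-config` setting is a small JSON object along these lines; the interval shown is an assumption, and the URL is the v2.4 default mentioned above:

```json
{
  "refresh-interval-minutes": "1440",
  "url": "https://releases.rancher.com/kontainer-driver-metadata/release-v2.4/data.json"
}
```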
If you don't have an air gap setup, you don't need to specify the URL where Rancher gets the metadata, because the default setting is to pull from [Rancher's metadata Git repository.](https://github.com/rancher/kontainer-driver-metadata/blob/dev-v2.5/data/data.json)

However, if you have an [air gap setup,](#air-gap-setups) you will need to mirror the Kubernetes metadata repository in a location available to Rancher. Then you need to change the URL to point to the new location of the JSON file.

## Air Gap Setups

Rancher relies on a periodic refresh of the `rke-metadata-config` to download new Kubernetes version metadata if it is supported with the current version of the Rancher server. For a table of compatible Kubernetes and Rancher versions, refer to the [service terms section.](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.2.8/)

If you have an air gap setup, you might not be able to get the automatic periodic refresh of the Kubernetes metadata from Rancher's Git repository. In that case, you should disable the periodic refresh to prevent your logs from showing errors. Optionally, you can configure your metadata settings so that Rancher can sync with a local copy of the RKE metadata.

To sync Rancher with a local mirror of the RKE metadata, an administrator would configure the `rke-metadata-config` settings to point to the mirror. For details, refer to [Configuring the Metadata Synchronization.](#configuring-the-metadata-synchronization)

After new Kubernetes versions are loaded into the Rancher setup, additional steps are required before they can be used to launch clusters. Rancher needs access to updated system images. While the metadata settings can only be changed by administrators, any user can download the Rancher system images and prepare a private container image registry for them.

To download the system images for the private registry:

1. Click **☰** in the top left corner.
1. At the bottom of the left navigation, click the Rancher version number.
1. Download the OS-specific image lists for Linux or Windows.
1. Download `rancher-images.txt`.
1. Prepare the private registry using the same steps as during the [air gap install](other-installation-methods/air-gapped-helm-cli-install/publish-images.md), but instead of using the `rancher-images.txt` from the releases page, use the one obtained from the previous steps.

**Result:** The air gap installation of Rancher can now sync the Kubernetes metadata. If you update your private registry when new versions of Kubernetes are released, you can provision clusters with the new version without having to upgrade Rancher.

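A minimal sketch of mirroring an image list into a private registry follows. `registry.example.com` is a placeholder, and the short image list is fabricated here so the loop can be shown end-to-end; in practice you would use the downloaded `rancher-images.txt`. The loop only prints the `docker` commands rather than running them:

```shell
# Placeholder private registry and a fabricated sample image list;
# substitute your registry and the real rancher-images.txt.
registry=registry.example.com
cat > rancher-images.txt <<'EOF'
rancher/rancher-agent:v2.4.0
rancher/hyperkube:v1.17.4-rancher1
EOF

# Print the pull/tag/push commands for each image in the list.
while read -r image; do
  echo "docker pull $image"
  echo "docker tag $image $registry/$image"
  echo "docker push $registry/$image"
done < rancher-images.txt
```

Piping the output to a shell, or removing the `echo`s, would perform the actual mirroring on a host with registry access.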
69
versioned_docs/version-2.14/getting-started/overview.md
Normal file
@@ -0,0 +1,69 @@

---
title: Overview
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/overview"/>
</head>

Rancher is a container management platform built for organizations that deploy containers in production. Rancher makes it easy to run Kubernetes everywhere, meet IT requirements, and empower DevOps teams.

## Run Kubernetes Everywhere

Kubernetes has become the container orchestration standard. Most cloud and virtualization vendors now offer it as standard infrastructure. Rancher users have the choice of creating Kubernetes clusters with Rancher Kubernetes distributions (RKE2/K3s) or cloud Kubernetes services, such as GKE, AKS, and EKS. Rancher users can also import and manage their existing Kubernetes clusters created using any Kubernetes distribution or installer.

## Meet IT Requirements

Rancher supports centralized authentication, access control, and monitoring for all Kubernetes clusters under its control. For example, you can:

- Use your Active Directory credentials to access Kubernetes clusters hosted by cloud vendors, such as GKE.
- Set up and enforce access control and security policies across all users, groups, projects, clusters, and clouds.
- View the health and capacity of your Kubernetes clusters from a single pane of glass.

## Empower DevOps Teams

Rancher provides an intuitive user interface for DevOps engineers to manage their application workloads. Users do not need in-depth knowledge of Kubernetes concepts to start using Rancher. The Rancher catalog contains a set of useful DevOps tools. Rancher is certified with a wide selection of cloud-native ecosystem products, including, for example, security tools, monitoring systems, container registries, and storage and networking drivers.

The following figure illustrates the role Rancher plays in IT and DevOps organizations. Each team deploys their applications on the public or private clouds they choose. IT administrators gain visibility and enforce policies across all users, clusters, and clouds.



## Features of the Rancher API Server

The Rancher API server is built on top of an embedded Kubernetes API server and an etcd database. It implements the following functionalities:

### Authorization and Role-Based Access Control

- **User management:** The Rancher API server [manages user identities](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/authentication-config.md) that correspond to external authentication providers like Active Directory or GitHub, in addition to local users.
- **Authorization:** The Rancher API server manages [access control](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/manage-role-based-access-control-rbac.md) and [security](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards.md) standards.

### Working with Kubernetes

- **Provisioning Kubernetes clusters:** The Rancher API server can [provision Kubernetes](../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md) on existing nodes, or perform [Kubernetes upgrades.](installation-and-upgrade/upgrade-and-roll-back-kubernetes.md)
- **Catalog management:** Rancher provides the ability to use a [catalog of Helm charts](../how-to-guides/new-user-guides/helm-charts-in-rancher/helm-charts-in-rancher.md) that make it easy to repeatedly deploy applications.
- **Managing projects:** A project is a group of multiple namespaces and access control policies within a cluster. A project is a Rancher concept, not a Kubernetes concept, which allows you to manage multiple namespaces as a group and perform Kubernetes operations in them. The Rancher UI provides features for [project administration](../how-to-guides/advanced-user-guides/manage-projects/manage-projects.md) and for [managing applications within projects.](../how-to-guides/new-user-guides/kubernetes-resources-setup/kubernetes-resources-setup.md)
- **Fleet Continuous Delivery:** Within Rancher, you can leverage [Fleet Continuous Delivery](../integrations-in-rancher/fleet/fleet.md) to deploy applications from git repositories, without any manual operation, to targeted downstream Kubernetes clusters.
- **Istio:** Our [integration with Istio](../integrations-in-rancher/istio/istio.md) is designed so that a Rancher operator, such as an administrator or cluster owner, can deliver Istio to developers. Then developers can use Istio to enforce security policies, troubleshoot problems, or manage traffic for green/blue deployments, canary deployments, or A/B testing.

### Working with Cloud Infrastructure

- **Tracking nodes:** The Rancher API server tracks the identities of all the [nodes](../how-to-guides/new-user-guides/manage-clusters/nodes-and-node-pools.md) in all clusters.
- **Setting up infrastructure:** When configured to use a cloud provider, Rancher can dynamically provision [new nodes](../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/use-new-nodes-in-an-infra-provider.md) and [persistent storage](../how-to-guides/new-user-guides/manage-clusters/create-kubernetes-persistent-storage/create-kubernetes-persistent-storage.md) in the cloud.

### Cluster Visibility

- **Logging:** Rancher can integrate with a variety of popular logging services and tools that exist outside of your Kubernetes clusters.
- **Monitoring:** Using Rancher, you can monitor the state and processes of your cluster nodes, Kubernetes components, and software deployments through integration with Prometheus, a leading open-source monitoring solution.
- **Alerting:** To keep your clusters and applications healthy and your organizational productivity moving forward, you need to stay informed of events occurring in your clusters and projects, both planned and unplanned.

## Editing Downstream Clusters with Rancher

The options and settings available for an existing cluster change based on the method that you used to provision it.

After a cluster is created with Rancher, a cluster administrator can manage cluster membership or manage node pools, among [other options.](../reference-guides/cluster-configuration/cluster-configuration.md)

The following table summarizes the options and settings available for each cluster type:

import ClusterCapabilitiesTable from '../shared-files/_cluster-capabilities-table.md';

<ClusterCapabilitiesTable />

@@ -0,0 +1,11 @@
---
title: Rancher Prime AWS Marketplace Quick Start
description: Deploy SUSE Rancher from the AWS Marketplace listing.
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager/aws-marketplace"/>
</head>

You can quickly deploy Rancher Prime on Amazon Elastic Kubernetes Service (EKS). To learn more, see the [instructions](https://suse-enceladus.github.io/marketplace-docs/rancher-prime/aws/?repository=rancher-payg-billing-adapter-llc-prd) under Usage Information in the [AWS Marketplace listing](https://aws.amazon.com/marketplace/pp/prodview-f2bvszurj2p2c).

@@ -0,0 +1,99 @@
---
title: Rancher AWS Quick Start Guide
description: Read this step by step Rancher AWS guide to quickly deploy a Rancher server with a single-node downstream Kubernetes cluster attached.
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager/aws"/>
</head>

The following steps will quickly deploy a Rancher server on AWS in a single-node K3s Kubernetes cluster, with a single-node downstream Kubernetes cluster attached.

:::caution

The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../installation-and-upgrade/installation-and-upgrade.md).

:::

## Prerequisites

:::caution

Deploying to Amazon AWS will incur charges.

:::

- [Amazon AWS Account](https://aws.amazon.com/account/): An Amazon AWS account is required to create resources for deploying Rancher and Kubernetes.
- [Amazon AWS Access Key](https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html): Use this link to follow a tutorial to create an Amazon AWS Access Key if you don't have one yet.
- [IAM Policy created](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html#access_policies_create-start): Defines the permissions an account with this policy attached has.
- Install [Terraform](https://developer.hashicorp.com/terraform/install): Used to provision the server and cluster in Amazon AWS.

### Example IAM Policy
|
||||
|
||||
The AWS module only creates an EC2 KeyPair, an EC2 SecurityGroup, and EC2 instances. A simple policy would be:
|
||||
|
||||
```json
|
||||
{
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": "ec2:*",
|
||||
"Resource": "*"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Getting Started
|
||||
|
||||
1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.
|
||||
|
||||
2. Go into the AWS folder containing the Terraform files by executing `cd quickstart/rancher/aws`.
|
||||
|
||||
3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
|
||||
|
||||
4. Edit `terraform.tfvars` and customize the following variables:
|
||||
|
||||
- `aws_access_key` - Amazon AWS Access Key
|
||||
- `aws_secret_key` - Amazon AWS Secret Key
|
||||
- `rancher_server_admin_password` - Admin password for created Rancher server. See [Setting up the Bootstrap Password](../../installation-and-upgrade/resources/bootstrap-password.md#password-requirements) for password requirements.
|
||||
|
||||
5. **Optional:** Modify optional variables within `terraform.tfvars`. See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [AWS Quickstart Readme](https://github.com/rancher/quickstart/tree/master/rancher/aws) for more information.
|
||||
Suggestions include:
|
||||
|
||||
- `aws_region` - Amazon AWS region, choose the closest instead of the default (`us-east-1`)
|
||||
- `prefix` - Prefix for all created resources
|
||||
- `instance_type` - EC2 instance size used, minimum is `t3a.medium` but `t3a.large` or `t3a.xlarge` could be used if within budget
|
||||
- `add_windows_node` - If true, an additional Windows worker node is added to the workload cluster
|
||||
|
||||
6. Run `terraform init`.
|
||||
|
||||
7. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following:
|
||||
|
||||
```
|
||||
Apply complete! Resources: 16 added, 0 changed, 0 destroyed.
|
||||
|
||||
Outputs:
|
||||
|
||||
rancher_node_ip = xx.xx.xx.xx
|
||||
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
|
||||
workload_node_ip = yy.yy.yy.yy
|
||||
```
|
||||
|
||||
8. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`).
|
||||
9. SSH into the Rancher server using the `id_rsa` key generated in `quickstart/rancher/aws`.
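Taken together, the variables from steps 4 and 5 can be sketched as a filled-in `terraform.tfvars`. All values below are placeholders to replace with your own; the variable names follow the quickstart's `terraform.tfvars.example`:

```
aws_access_key                = "<AWS_ACCESS_KEY>"
aws_secret_key                = "<AWS_SECRET_KEY>"
rancher_server_admin_password = "<ADMIN_PASSWORD>"

# Optional overrides
aws_region    = "eu-west-1"
prefix        = "quickstart"
instance_type = "t3a.large"
```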
|
||||
|
||||
#### Result
|
||||
|
||||
Two Kubernetes clusters are deployed into your AWS account, one running Rancher Server and the other ready for experimentation deployments. Please note that while this setup is a great way to explore Rancher functionality, a production setup should follow our high availability setup guidelines. SSH keys for the VMs are auto-generated and stored in the module directory.
|
||||
|
||||
### What's Next?
|
||||
|
||||
Use Rancher to create a deployment. For more information, see [Creating Deployments](../deploy-workloads/deploy-workloads.md).
|
||||
|
||||
## Destroying the Environment
|
||||
|
||||
1. From the `quickstart/rancher/aws` folder, execute `terraform destroy --auto-approve`.
|
||||
|
||||
2. Wait for confirmation that all resources have been destroyed.
|
||||
@@ -0,0 +1,85 @@
|
||||
---
|
||||
title: Rancher Azure Quick Start Guide
|
||||
description: Read this step by step Rancher Azure guide to quickly deploy a Rancher server with a single-node downstream Kubernetes cluster attached.
|
||||
---
|
||||
|
||||
<head>
|
||||
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager/azure"/>
|
||||
</head>
|
||||
|
||||
The following steps will quickly deploy a Rancher server on Azure in a single-node K3s Kubernetes cluster, with a single-node downstream Kubernetes cluster attached.
|
||||
|
||||
:::caution
|
||||
|
||||
The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../installation-and-upgrade/installation-and-upgrade.md).
|
||||
|
||||
:::
|
||||
|
||||
## Prerequisites
|
||||
|
||||
:::caution
|
||||
|
||||
Deploying to Microsoft Azure will incur charges.
|
||||
|
||||
:::
|
||||
|
||||
- [Microsoft Azure Account](https://azure.microsoft.com/en-us/free/): A Microsoft Azure Account is required to create resources for deploying Rancher and Kubernetes.
|
||||
- [Microsoft Azure Subscription](https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/create-subscription#create-a-subscription-in-the-azure-portal): Use this link to follow a tutorial to create a Microsoft Azure subscription if you don't have one yet.
|
||||
- [Microsoft Azure Tenant](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-create-new-tenant): Use this link and follow instructions to create a Microsoft Azure tenant.
|
||||
- [Microsoft Azure Client ID/Secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal): Use this link and follow instructions to create a Microsoft Azure client and secret.
|
||||
- [Terraform](https://developer.hashicorp.com/terraform/install): Used to provision the server and cluster in Microsoft Azure.
|
||||
|
||||
|
||||
## Getting Started
|
||||
|
||||
1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.
|
||||
|
||||
2. Go into the Azure folder containing the Terraform files by executing `cd quickstart/rancher/azure`.
|
||||
|
||||
3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
|
||||
|
||||
4. Edit `terraform.tfvars` and customize the following variables:
|
||||
- `azure_subscription_id` - Microsoft Azure Subscription ID
|
||||
- `azure_client_id` - Microsoft Azure Client ID
|
||||
- `azure_client_secret` - Microsoft Azure Client Secret
|
||||
- `azure_tenant_id` - Microsoft Azure Tenant ID
|
||||
- `rancher_server_admin_password` - Admin password for created Rancher server. See [Setting up the Bootstrap Password](../../installation-and-upgrade/resources/bootstrap-password.md#password-requirements) for password requirements.
|
||||
|
||||
5. **Optional:** Modify optional variables within `terraform.tfvars`.
|
||||
See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [Azure Quickstart Readme](https://github.com/rancher/quickstart/tree/master/rancher/azure) for more information. Suggestions include:
|
||||
- `azure_location` - Microsoft Azure region, choose the closest instead of the default (`East US`)
|
||||
- `prefix` - Prefix for all created resources
|
||||
- `instance_type` - Compute instance size used, minimum is `Standard_DS2_v2` but `Standard_DS2_v3` or `Standard_DS3_v2` could be used if within budget
|
||||
- `add_windows_node` - If true, an additional Windows worker node is added to the workload cluster
|
||||
- `windows_admin_password` - The admin password of the Windows worker node
|
||||
|
||||
6. Run `terraform init`.
|
||||
|
||||
7. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following:
|
||||
|
||||
```
|
||||
Apply complete! Resources: 16 added, 0 changed, 0 destroyed.
|
||||
|
||||
Outputs:
|
||||
|
||||
rancher_node_ip = xx.xx.xx.xx
|
||||
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
|
||||
workload_node_ip = yy.yy.yy.yy
|
||||
```
|
||||
|
||||
8. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`).
|
||||
9. SSH into the Rancher server using the `id_rsa` key generated in `quickstart/rancher/azure`.
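Taken together, the variables from steps 4 and 5 can be sketched as a filled-in `terraform.tfvars`. All values below are placeholders to replace with your own; the variable names follow the quickstart's `terraform.tfvars.example`:

```
azure_subscription_id         = "<AZURE_SUBSCRIPTION_ID>"
azure_client_id               = "<AZURE_CLIENT_ID>"
azure_client_secret           = "<AZURE_CLIENT_SECRET>"
azure_tenant_id               = "<AZURE_TENANT_ID>"
rancher_server_admin_password = "<ADMIN_PASSWORD>"

# Optional overrides
azure_location = "West Europe"
prefix         = "quickstart"
```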
|
||||
|
||||
#### Result
|
||||
|
||||
Two Kubernetes clusters are deployed into your Azure account, one running Rancher Server and the other ready for experimentation deployments. Please note that while this setup is a great way to explore Rancher functionality, a production setup should follow our high availability setup guidelines. SSH keys for the VMs are auto-generated and stored in the module directory.
|
||||
|
||||
### What's Next?
|
||||
|
||||
Use Rancher to create a deployment. For more information, see [Creating Deployments](../deploy-workloads/deploy-workloads.md).
|
||||
|
||||
## Destroying the Environment
|
||||
|
||||
1. From the `quickstart/rancher/azure` folder, execute `terraform destroy --auto-approve`.
|
||||
|
||||
2. Wait for confirmation that all resources have been destroyed.
|
||||
@@ -0,0 +1,24 @@
|
||||
---
|
||||
title: Deploying Rancher Server
|
||||
---
|
||||
|
||||
<head>
|
||||
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager"/>
|
||||
</head>
|
||||
|
||||
Use one of the following guides to deploy and provision Rancher and a Kubernetes cluster in the provider of your choice.
|
||||
|
||||
- [AWS](aws.md) (uses Terraform)
|
||||
- [AWS Marketplace](aws-marketplace.md) (uses Amazon EKS)
|
||||
- [Azure](azure.md) (uses Terraform)
|
||||
- [DigitalOcean](digitalocean.md) (uses Terraform)
|
||||
- [GCP](gcp.md) (uses Terraform)
|
||||
- [Hetzner Cloud](hetzner-cloud.md) (uses Terraform)
|
||||
- [Linode](linode.md) (uses Terraform)
|
||||
- [Vagrant](vagrant.md)
|
||||
- [Equinix Metal](equinix-metal.md)
|
||||
- [Outscale](outscale-qs.md) (uses Terraform)
|
||||
|
||||
If you prefer, the following guide will take you through the same process in individual steps. Use this if you want to run Rancher in a different provider, on-premises, or if you would just like to see how easy it is.
|
||||
|
||||
- [Manual Install](helm-cli.md)
|
||||
@@ -0,0 +1,78 @@
|
||||
---
|
||||
title: Rancher DigitalOcean Quick Start Guide
|
||||
description: Read this step by step Rancher DigitalOcean guide to quickly deploy a Rancher server with a single-node downstream Kubernetes cluster attached.
|
||||
---
|
||||
|
||||
<head>
|
||||
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager/digitalocean"/>
|
||||
</head>
|
||||
|
||||
The following steps will quickly deploy a Rancher server on DigitalOcean in a single-node K3s Kubernetes cluster, with a single-node downstream Kubernetes cluster attached.
|
||||
|
||||
:::caution
|
||||
|
||||
The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../installation-and-upgrade/installation-and-upgrade.md).
|
||||
|
||||
:::
|
||||
|
||||
## Prerequisites
|
||||
|
||||
:::caution
|
||||
|
||||
Deploying to DigitalOcean will incur charges.
|
||||
|
||||
:::
|
||||
|
||||
- [DigitalOcean Account](https://www.digitalocean.com): You will require an account on DigitalOcean as this is where the server and cluster will run.
|
||||
- [DigitalOcean Access Key](https://www.digitalocean.com/community/tutorials/how-to-create-a-digitalocean-space-and-api-key): Use this link to create a DigitalOcean Access Key if you don't have one.
|
||||
- [Terraform](https://developer.hashicorp.com/terraform/install): Used to provision the server and cluster to DigitalOcean.
|
||||
|
||||
|
||||
## Getting Started
|
||||
|
||||
1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.
|
||||
|
||||
2. Go into the DigitalOcean folder containing the Terraform files by executing `cd quickstart/rancher/do`.
|
||||
|
||||
3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
|
||||
|
||||
4. Edit `terraform.tfvars` and customize the following variables:
|
||||
- `do_token` - DigitalOcean access key
|
||||
- `rancher_server_admin_password` - Admin password for created Rancher server. See [Setting up the Bootstrap Password](../../installation-and-upgrade/resources/bootstrap-password.md#password-requirements) for password requirements.
|
||||
|
||||
5. **Optional:** Modify optional variables within `terraform.tfvars`.
|
||||
See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [DO Quickstart Readme](https://github.com/rancher/quickstart/tree/master/rancher/do) for more information. Suggestions include:
|
||||
- `do_region` - DigitalOcean region, choose the closest instead of the default (`nyc1`)
|
||||
- `prefix` - Prefix for all created resources
|
||||
- `droplet_size` - Droplet size used, minimum is `s-2vcpu-4gb` but `s-4vcpu-8gb` could be used if within budget
|
||||
|
||||
6. Run `terraform init`.
|
||||
|
||||
7. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following:
|
||||
|
||||
```
|
||||
Apply complete! Resources: 15 added, 0 changed, 0 destroyed.
|
||||
|
||||
Outputs:
|
||||
|
||||
rancher_node_ip = xx.xx.xx.xx
|
||||
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
|
||||
workload_node_ip = yy.yy.yy.yy
|
||||
```
|
||||
|
||||
8. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`).
|
||||
9. SSH into the Rancher server using the `id_rsa` key generated in `quickstart/rancher/do`.
|
||||
|
||||
#### Result
|
||||
|
||||
Two Kubernetes clusters are deployed into your DigitalOcean account, one running Rancher Server and the other ready for experimentation deployments. Please note that while this setup is a great way to explore Rancher functionality, a production setup should follow our high availability setup guidelines. SSH keys for the VMs are auto-generated and stored in the module directory.
|
||||
|
||||
### What's Next?
|
||||
|
||||
Use Rancher to create a deployment. For more information, see [Creating Deployments](../deploy-workloads/deploy-workloads.md).
|
||||
|
||||
## Destroying the Environment
|
||||
|
||||
1. From the `quickstart/rancher/do` folder, execute `terraform destroy --auto-approve`.
|
||||
|
||||
2. Wait for confirmation that all resources have been destroyed.
|
||||
@@ -0,0 +1,108 @@
|
||||
---
|
||||
title: Rancher Equinix Metal Quick Start
|
||||
---
|
||||
|
||||
<head>
|
||||
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager/equinix-metal"/>
|
||||
</head>
|
||||
|
||||
This tutorial walks you through the following:
|
||||
|
||||
- Provisioning an Equinix Metal Server
|
||||
- Installation of Rancher 2.x
|
||||
- Creation of your first cluster
|
||||
- Deployment of an application, Nginx
|
||||
|
||||
:::caution
|
||||
|
||||
The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. The Docker install is not recommended for production environments. For comprehensive setup instructions, see [Installation](../../installation-and-upgrade/installation-and-upgrade.md).
|
||||
|
||||
:::
|
||||
|
||||
## Quick Start Outline
|
||||
|
||||
This Quick Start Guide is divided into different tasks for easier consumption.
|
||||
|
||||
<br/>
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- An [Equinix Metal account](https://deploy.equinix.com/developers/docs/metal/identity-access-management/users/)
|
||||
- An [Equinix Metal project](https://deploy.equinix.com/developers/docs/metal/projects/creating-a-project/)
|
||||
|
||||
|
||||
### 1. Provision an Equinix Metal Host
|
||||
|
||||
Begin by deploying an Equinix Metal host. Equinix Metal servers can be provisioned through the Equinix Metal console, API, or CLI. You can find instructions for each deployment type in the [Equinix Metal deployment documentation](https://deploy.equinix.com/developers/docs/metal/deploy/on-demand/). You can find additional information on Equinix Metal server types in the [Equinix Metal documentation](https://deploy.equinix.com/developers/docs/metal/hardware/standard-servers/).
|
||||
|
||||
:::note Notes:
|
||||
|
||||
- When provisioning a new Equinix Metal server via the CLI or API, you will need to provide the following information: `project-id`, `plan`, `metro`, and `operating-system`.
|
||||
- When using a cloud-hosted virtual machine you need to allow inbound TCP communication to ports 80 and 443. Please see your cloud host's documentation for information regarding port configuration.
|
||||
- For a full list of port requirements, refer to [Docker Installation](../../../how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/node-requirements-for-rancher-managed-clusters.md).
|
||||
- Provision the host according to our [Requirements](../../installation-and-upgrade/installation-requirements/installation-requirements.md).
|
||||
|
||||
:::
|
||||
### 2. Install Rancher
|
||||
|
||||
To install Rancher on your Equinix Metal host, connect to it and run the installation command from a shell.
|
||||
|
||||
1. Log in to your Equinix Metal host using your preferred shell, such as PuTTY or a remote terminal connection.
|
||||
|
||||
2. From your shell, enter the following command:
|
||||
|
||||
```
|
||||
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher
|
||||
```
|
||||
|
||||
**Result:** Rancher is installed.
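Newer Rancher versions generate a bootstrap password at startup and print it to the container logs; you will need it at first login. A minimal sketch of isolating it is below — the container ID, sample log line, and password are illustrative only:

```shell
# Normally you would run:
#   sudo docker ps                                   # find the rancher/rancher container ID
#   sudo docker logs <container-id> 2>&1 | grep "Bootstrap Password:"
# The grep picks out the generated password line, demonstrated here on a sample:
echo 'INFO Bootstrap Password: example-password' | grep "Bootstrap Password:"
```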
|
||||
|
||||
### 3. Log In
|
||||
|
||||
Log in to Rancher to begin using the application. After you log in, you'll make some one-time configurations.
|
||||
|
||||
1. Open a web browser and enter the IP address of your host: `https://<SERVER_IP>`.
|
||||
|
||||
Replace `<SERVER_IP>` with your host IP address.
|
||||
|
||||
2. When prompted, create a password for the default `admin` account.
|
||||
|
||||
3. Set the **Rancher Server URL**. The URL can either be an IP address or a host name. However, each node added to your cluster must be able to connect to this URL.<br/><br/>If you use a hostname in the URL, this hostname must be resolvable by DNS on the nodes you want to add to your cluster.
|
||||
|
||||
<br/>
|
||||
|
||||
### 4. Create the Cluster
|
||||
|
||||
Welcome to Rancher! You are now able to create your first Kubernetes cluster.
|
||||
|
||||
In this task, you can use the versatile **Custom** option. This option lets you add _any_ Linux host (cloud-hosted VM, on-prem VM, or bare-metal) to be used in a cluster.
|
||||
|
||||
1. Click **☰ > Cluster Management**.
|
||||
1. From the **Clusters** page, click **Create**.
|
||||
1. Choose **Custom**.
|
||||
1. Enter a **Cluster Name**.
|
||||
1. Click **Next**.
|
||||
1. From **Node Role**, select _all_ the roles: **etcd**, **Control**, and **Worker**.
|
||||
- **Optional**: Rancher auto-detects the IP addresses used for Rancher communication and cluster communication. You can override these using `Public Address` and `Internal Address` in the **Node Address** section.
|
||||
1. Copy the registration command to your clipboard.
|
||||
1. Log in to your Linux host using your preferred shell, such as PuTTY or a remote terminal connection. Run the command copied to your clipboard.
|
||||
1. When you finish running the command on your Linux host, click **Done**.
|
||||
|
||||
**Result:**
|
||||
|
||||
Your cluster is created and assigned a state of **Provisioning**. Rancher is standing up your cluster.
|
||||
|
||||
You can access your cluster after its state is updated to **Active**.
|
||||
|
||||
**Active** clusters are assigned two Projects:
|
||||
|
||||
- `Default`, containing the `default` namespace
|
||||
- `System`, containing the `cattle-system`, `ingress-nginx`, `kube-public`, and `kube-system` namespaces
|
||||
|
||||
#### Finished
|
||||
|
||||
Congratulations! You have created your first cluster.
|
||||
|
||||
#### What's Next?
|
||||
|
||||
Use Rancher to create a deployment. For more information, see [Creating Deployments](../deploy-workloads/deploy-workloads.md).
|
||||
@@ -0,0 +1,81 @@
|
||||
---
|
||||
title: Rancher GCP Quick Start Guide
|
||||
description: Read this step by step Rancher GCP guide to quickly deploy a Rancher server with a single-node downstream Kubernetes cluster attached.
|
||||
---
|
||||
|
||||
<head>
|
||||
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager/gcp"/>
|
||||
</head>
|
||||
|
||||
The following steps will quickly deploy a Rancher server on GCP in a single-node K3s Kubernetes cluster, with a single-node downstream Kubernetes cluster attached.
|
||||
|
||||
:::caution
|
||||
|
||||
The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../installation-and-upgrade/installation-and-upgrade.md).
|
||||
|
||||
:::
|
||||
|
||||
## Prerequisites
|
||||
|
||||
:::caution
|
||||
|
||||
Deploying to Google GCP will incur charges.
|
||||
|
||||
:::
|
||||
|
||||
- [Google GCP Account](https://console.cloud.google.com/): A Google GCP Account is required to create resources for deploying Rancher and Kubernetes.
|
||||
- [Google GCP Project](https://cloud.google.com/appengine/docs/standard/nodejs/building-app/creating-project): Use this link to follow a tutorial to create a GCP Project if you don't have one yet.
|
||||
- [Google GCP Service Account](https://cloud.google.com/iam/docs/creating-managing-service-account-keys): Use this link and follow instructions to create a GCP service account and token file.
|
||||
- [Terraform](https://developer.hashicorp.com/terraform/install): Used to provision the server and cluster in Google GCP.
|
||||
|
||||
|
||||
## Getting Started
|
||||
|
||||
1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.
|
||||
|
||||
2. Go into the GCP folder containing the Terraform files by executing `cd quickstart/rancher/gcp`.
|
||||
|
||||
3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.
|
||||
|
||||
4. Edit `terraform.tfvars` and customize the following variables:
|
||||
- `gcp_account_json` - GCP service account file path and file name
|
||||
- `rancher_server_admin_password` - Admin password for created Rancher server. See [Setting up the Bootstrap Password](../../installation-and-upgrade/resources/bootstrap-password.md#password-requirements) for password requirements.
|
||||
|
||||
5. **Optional:** Modify optional variables within `terraform.tfvars`.
|
||||
See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [GCP Quickstart Readme](https://github.com/rancher/quickstart/tree/master/rancher/gcp) for more information.
|
||||
Suggestions include:
|
||||
- `gcp_region` - Google GCP region, choose the closest instead of the default (`us-east4`)
|
||||
- `gcp_zone` - Google GCP zone, choose the closest instead of the default (`us-east4-a`)
|
||||
- `prefix` - Prefix for all created resources
|
||||
- `machine_type` - Compute instance size used, minimum is `n1-standard-1` but `n1-standard-2` or `n1-standard-4` could be used if within budget
|
||||
|
||||
6. Run `terraform init`.
|
||||
|
||||
7. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following:
|
||||
|
||||
```
|
||||
Apply complete! Resources: 16 added, 0 changed, 0 destroyed.
|
||||
|
||||
Outputs:
|
||||
|
||||
rancher_node_ip = xx.xx.xx.xx
|
||||
rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
|
||||
workload_node_ip = yy.yy.yy.yy
|
||||
```
|
||||
|
||||
8. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (default username is `admin`, use the password set in `rancher_server_admin_password`).
|
||||
9. SSH into the Rancher server using the `id_rsa` key generated in `quickstart/rancher/gcp`.
|
||||
|
||||
#### Result
|
||||
|
||||
Two Kubernetes clusters are deployed into your GCP account, one running Rancher Server and the other ready for experimentation deployments. Please note that while this setup is a great way to explore Rancher functionality, a production setup should follow our high availability setup guidelines. SSH keys for the VMs are auto-generated and stored in the module directory.
|
||||
|
||||
### What's Next?
|
||||
|
||||
Use Rancher to create a deployment. For more information, see [Creating Deployments](../deploy-workloads/deploy-workloads.md).
|
||||
|
||||
## Destroying the Environment
|
||||
|
||||
1. From the `quickstart/rancher/gcp` folder, execute `terraform destroy --auto-approve`.
|
||||
|
||||
2. Wait for confirmation that all resources have been destroyed.
|
||||
@@ -0,0 +1,155 @@
|
||||
---
|
||||
title: Helm CLI Quick Start
|
||||
---
|
||||
|
||||
<head>
|
||||
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli"/>
|
||||
</head>
|
||||
|
||||
These instructions capture a quick way to set up a proof-of-concept Rancher installation.
|
||||
|
||||
These instructions assume you have a Linux virtual machine that you will communicate with from your local workstation. Rancher will be installed on the Linux machine. You will need to retrieve the IP address of that machine so that you can access Rancher from your local workstation. Rancher is designed to manage Kubernetes clusters remotely, so any Kubernetes cluster that Rancher manages in the future will also need to be able to reach this IP address.
|
||||
|
||||
We don't recommend installing Rancher locally because it creates a networking problem. Installing Rancher on localhost does not allow Rancher to communicate with downstream Kubernetes clusters, so on localhost you wouldn't be able to test Rancher's cluster provisioning or cluster management functionality.
|
||||
|
||||
Your Linux machine can be anywhere. It could be an Amazon EC2 instance, a Digital Ocean droplet, or an Azure virtual machine, to name a few examples. Other Rancher docs often use 'node' as a generic term for all of these. One possible way to deploy a Linux machine is by setting up an Amazon EC2 instance as shown in [this tutorial](../../../how-to-guides/new-user-guides/infrastructure-setup/nodes-in-amazon-ec2.md).
|
||||
|
||||
The full installation requirements are [here](../../installation-and-upgrade/installation-requirements/installation-requirements.md).
|
||||
|
||||
## Install K3s on Linux
|
||||
|
||||
Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the [Rancher Support Matrix](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/).
|
||||
|
||||
To specify the K3s (Kubernetes) version, set the `INSTALL_K3S_VERSION` environment variable (e.g., `INSTALL_K3S_VERSION="v1.24.10+k3s1"`) when running the K3s installation script.
|
||||
|
||||
Install a K3s cluster by running this command on the Linux machine:
|
||||
|
||||
```
|
||||
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=<VERSION> sh -s - server --cluster-init
|
||||
```
|
||||
|
||||
Using `--cluster-init` makes K3s use embedded etcd as its datastore, which allows the cluster to be converted to a high-availability setup later. Refer to [High Availability with Embedded DB](https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/).
|
||||
|
||||
Save the IP of the Linux machine.
|
||||
|
||||
## Save the kubeconfig to your workstation
|
||||
|
||||
The kubeconfig file is important for accessing the Kubernetes cluster. Copy the file at `/etc/rancher/k3s/k3s.yaml` from the Linux machine and save it to your local workstation as `~/.kube/config`. One way to do this is to use the `scp` tool by running this command on your local machine:
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="Mac and Linux">
|
||||
|
||||
```
|
||||
scp root@<IP_OF_LINUX_MACHINE>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
|
||||
```
|
||||
|
||||
In some cases, you may need to make sure that your shell has the environment variable `KUBECONFIG=~/.kube/config` defined; for instance, you can export it in your profile or rc files.
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="Windows">
|
||||
|
||||
By default, `scp` is not a recognized command in Windows, so you need to install a module first.
|
||||
|
||||
In Windows PowerShell:
|
||||
|
||||
```
|
||||
Find-Module Posh-SSH
|
||||
Install-Module Posh-SSH
|
||||
|
||||
## Get the remote kubeconfig file
|
||||
scp root@<IP_OF_LINUX_MACHINE>:/etc/rancher/k3s/k3s.yaml $env:USERPROFILE\.kube\config
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Edit the Rancher server URL in the kubeconfig
|
||||
|
||||
In the kubeconfig file, you will need to change the value of the `server` field to `https://<IP_OF_LINUX_NODE>:6443`. The Kubernetes API server is reached on port 6443, while the Rancher server is reached on ports 80 and 443. This edit is needed so that when you run Helm or kubectl commands from your local workstation, they can communicate with the Kubernetes cluster that Rancher will be installed on.
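The substitution itself can also be done non-interactively with `sed`. The sketch below runs against a scratch file so it is safe to try; K3s writes `https://127.0.0.1:6443` as the default server address, and `203.0.113.10` is a placeholder for your node's IP (on your workstation you would target `~/.kube/config` instead):

```shell
# Demonstrate the server-field substitution on a scratch copy of the kubeconfig
cfg=$(mktemp)
printf '    server: https://127.0.0.1:6443\n' > "$cfg"
sed -i 's|https://127.0.0.1:6443|https://203.0.113.10:6443|' "$cfg"
cat "$cfg"
```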
|
||||
|
||||
<Tabs>
|
||||
<TabItem value="Mac and Linux">
|
||||
|
||||
One way to open the kubeconfig file for editing is to use Vim:
|
||||
|
||||
```
|
||||
vi ~/.kube/config
|
||||
```
|
||||
|
||||
Press `i` to put Vim in insert mode. When you are done editing, press `Esc`, then type `:wq` and press `Enter` to save and quit.
|
||||
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="Windows">
|
||||
|
||||
In Windows PowerShell, you can use `notepad.exe` to edit the kubeconfig file:
|
||||
|
||||
```
|
||||
notepad.exe $env:USERPROFILE\.kube\config
|
||||
```
|
||||
|
||||
Once edited, either press `Ctrl+S` or go to `File > Save` to save your work.
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Install Rancher with Helm
|
||||
|
||||
Then from your local workstation, run the following commands. You will need to have [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) and [helm](https://helm.sh/docs/intro/install/) installed.
|
||||
|
||||
:::note
|
||||
|
||||
To see options on how to customize the cert-manager install (including for cases where your cluster uses PodSecurityPolicies), see the [cert-manager docs](https://artifacthub.io/packages/helm/cert-manager/cert-manager#configuration).
|
||||
|
||||
:::
|
||||
|
||||
```
|
||||
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
|
||||
|
||||
kubectl create namespace cattle-system
|
||||
|
||||
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/<VERSION>/cert-manager.crds.yaml
|
||||
|
||||
helm repo add jetstack https://charts.jetstack.io
|
||||
|
||||
helm repo update
|
||||
|
||||
helm install cert-manager jetstack/cert-manager \
|
||||
--namespace cert-manager \
|
||||
--create-namespace
|
||||
|
||||
# Windows PowerShell
|
||||
helm install cert-manager jetstack/cert-manager `
|
||||
--namespace cert-manager `
|
||||
--create-namespace
|
||||
```
|
||||
|
||||
The final command to install Rancher is below. The command requires a domain name that forwards traffic to the Linux machine. For the sake of simplicity in this tutorial, you can use a fake domain name to create your proof-of-concept. An example of a fake domain name would be `<IP_OF_LINUX_NODE>.sslip.io`.
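sslip.io provides wildcard DNS: any hostname that embeds an IP address resolves back to that address, which is why the fake domain works without registering anything. A quick sketch of constructing the name (`203.0.113.10` is a documentation placeholder for your node's IP):

```shell
# Build the throwaway hostname from the node IP
IP=203.0.113.10
echo "${IP}.sslip.io"
```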
|
||||
|
||||
To install a specific Rancher version, use the `--version` flag (e.g., `--version 2.6.6`). Otherwise, the latest Rancher is installed by default. Refer to [Choosing a Rancher Version](../../installation-and-upgrade/resources/choose-a-rancher-version.md).
|
||||
|
||||
|
||||
See [Setting up the Bootstrap Password](../../installation-and-upgrade/resources/bootstrap-password.md#password-requirements) for password requirements.
|
||||
|
||||
```
|
||||
helm install rancher rancher-latest/rancher \
|
||||
--namespace cattle-system \
|
||||
--set hostname=<IP_OF_LINUX_NODE>.sslip.io \
|
||||
--set replicas=1 \
|
||||
--set bootstrapPassword=<PASSWORD_FOR_RANCHER_ADMIN>
|
||||
|
||||
# Windows Powershell
|
||||
helm install rancher rancher-latest/rancher `
|
||||
--namespace cattle-system `
|
||||
--set hostname=<IP_OF_LINUX_NODE>.sslip.io `
|
||||
--set replicas=1 `
|
||||
--set bootstrapPassword=<PASSWORD_FOR_RANCHER_ADMIN>
|
||||
```
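If you prefer not to repeat `--set` flags, the same settings can be kept in a values file. This is only a sketch mirroring the flags above; the file name `values.yaml` is arbitrary, and `<IP_OF_LINUX_NODE>` and `<PASSWORD_FOR_RANCHER_ADMIN>` are placeholders you must fill in:

```yaml
# values.yaml -- mirrors the --set flags used above (placeholder values)
hostname: <IP_OF_LINUX_NODE>.sslip.io
replicas: 1
bootstrapPassword: <PASSWORD_FOR_RANCHER_ADMIN>
```

You could then run `helm install rancher rancher-latest/rancher --namespace cattle-system -f values.yaml` instead of passing each flag individually.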

Now if you navigate to `<IP_OF_LINUX_NODE>.sslip.io` in a web browser, you should see the Rancher UI.

To keep these instructions simple, we used a fake domain name and self-signed certificates for this installation. Therefore, you will probably need to add a security exception to your web browser to see the Rancher UI. Note that production installs require a high-availability setup with a load balancer, a real domain name, and real certificates.

These instructions also left out the full installation requirements and other installation options. If you have any issues with these steps, refer to the full [Helm CLI installation docs](../../installation-and-upgrade/install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md).

To launch new Kubernetes clusters with your new Rancher server, you may need to set up cloud credentials in Rancher. For more information, see [Launching Kubernetes clusters with Rancher](../../../how-to-guides/new-user-guides/launch-kubernetes-with-rancher/launch-kubernetes-with-rancher.md).
@@ -0,0 +1,80 @@
---
title: Rancher Hetzner Cloud Quick Start Guide
description: Read this step by step Rancher Hetzner Cloud guide to quickly deploy a Rancher server with a single-node downstream Kubernetes cluster attached.
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager/hetzner-cloud"/>
</head>

The following steps will quickly deploy a Rancher server on Hetzner Cloud in a single-node K3s Kubernetes cluster, with a single-node downstream Kubernetes cluster attached.

:::caution

The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../installation-and-upgrade/installation-and-upgrade.md).

:::

## Prerequisites

:::caution

Deploying to Hetzner Cloud will incur charges.

:::

- [Hetzner Cloud Account](https://www.hetzner.com): You will require an account on Hetzner, as this is where the server and cluster will run.
- [Hetzner API Access Key](https://docs.hetzner.cloud/#getting-started): Use these instructions to create a Hetzner Cloud API Key if you don't have one.
- [Terraform](https://developer.hashicorp.com/terraform/install): Used to provision the server and cluster on Hetzner.

## Getting Started

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

2. Go into the Hetzner folder containing the Terraform files by executing `cd quickstart/rancher/hcloud`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.

4. Edit `terraform.tfvars` and customize the following variables:
   - `hcloud_token` - Hetzner API access key
   - `rancher_server_admin_password` - Admin password for created Rancher server. See [Setting up the Bootstrap Password](../../installation-and-upgrade/resources/bootstrap-password.md#password-requirements) for password requirements.

5. **Optional:** Modify optional variables within `terraform.tfvars`. See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [Hetzner Quickstart Readme](https://github.com/rancher/quickstart/tree/master/rancher/hcloud) for more information. Suggestions include:
   - `prefix` - Prefix for all created resources
   - `instance_type` - Instance type, minimum required is `cx21`
   - `hcloud_location` - Hetzner Cloud location, choose the closest instead of the default (`fsn1`)

6. Run `terraform init`.

7. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following:

   ```
   Apply complete! Resources: 15 added, 0 changed, 0 destroyed.

   Outputs:

   rancher_node_ip = xx.xx.xx.xx
   rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
   workload_node_ip = yy.yy.yy.yy
   ```

8. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (the default username is `admin`; use the password set in `rancher_server_admin_password`).

9. SSH into the Rancher Server using the `id_rsa` key generated in `quickstart/rancher/hcloud`.

#### Result

Two Kubernetes clusters are deployed into your Hetzner account, one running Rancher Server and the other ready for experimentation deployments. Please note that while this setup is a great way to explore Rancher functionality, a production setup should follow our high availability setup guidelines. SSH keys for the VMs are auto-generated and stored in the module directory.

### What's Next?

Use Rancher to create a deployment. For more information, see [Creating Deployments](../deploy-workloads/deploy-workloads.md).

## Destroying the Environment

1. From the `quickstart/rancher/hcloud` folder, execute `terraform destroy --auto-approve`.

2. Wait for confirmation that all resources have been destroyed.
@@ -0,0 +1,82 @@
---
title: Rancher Linode Quick Start Guide
description: Read this step by step guide to quickly deploy a Rancher server with a single-node downstream Kubernetes cluster attached.
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager/linode"/>
</head>

The following steps will quickly deploy a Rancher server on Linode in a single-node K3s Kubernetes cluster, with a single-node downstream Kubernetes cluster attached.

:::caution

The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../installation-and-upgrade/installation-and-upgrade.md).

:::

## Prerequisites

:::caution

Deploying to Linode will incur charges.

:::

- [Linode Account](https://www.linode.com/): The Linode account under which the server and cluster will be provisioned.
- [Linode Personal Access Token](https://techdocs.akamai.com/cloud-computing/docs/manage-personal-access-tokens): A Linode Personal Access Token to authenticate with.
- [Terraform](https://developer.hashicorp.com/terraform/install): Used to provision the server and cluster on Linode.

## Getting Started

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

2. Go into the Linode folder containing the Terraform files by executing `cd quickstart/rancher/linode`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.

4. Edit `terraform.tfvars` and customize the following variables:
   - `linode_token` - The Linode Personal Access Token mentioned above.
   - `rancher_server_admin_password` - Admin password for created Rancher server. See [Setting up the Bootstrap Password](../../installation-and-upgrade/resources/bootstrap-password.md#password-requirements) for password requirements.

5. **Optional:** Modify optional variables within `terraform.tfvars`. See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [Linode Quickstart Readme](https://github.com/rancher/quickstart/tree/master/rancher/linode) for more information. Suggestions include:
   - `linode_region` - The target Linode region to provision the server and cluster in.
     - Default: `eu-central`
     - For a complete list of regions, see the [official Region Availability page](https://www.linode.com/global-infrastructure/availability/).
   - `prefix` - The prefix for all created infrastructure.
   - `linode_type` - The type/plan that all infrastructure Linodes should use.
     - Default: `g6-standard-2`
     - For a complete list of plans, see the [official Plan Types page](https://techdocs.akamai.com/cloud-computing/docs/compute-instance-plan-types).

6. Run `terraform init`.

7. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following:

   ```
   Apply complete! Resources: 15 added, 0 changed, 0 destroyed.

   Outputs:

   rancher_node_ip = xx.xx.xx.xx
   rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
   workload_node_ip = yy.yy.yy.yy
   ```

8. Paste the `rancher_server_url` from the output above into the browser and log in when prompted. The default username is `admin` and the password is defined in `rancher_server_admin_password`.

9. SSH into the Rancher Server using the `id_rsa` key generated in `quickstart/rancher/linode`.

#### Result

Two Kubernetes clusters are deployed on your Linode account, one running Rancher Server and the other ready for experimentation deployments. Please note that while this setup is a great way to explore Rancher functionality, a production setup should follow our high availability setup guidelines. SSH keys for the VMs are auto-generated and stored in the module directory.

### What's Next?

Use Rancher to create a deployment. For more information, see [Creating Deployments](../deploy-workloads/deploy-workloads.md).

## Destroying the Environment

1. From the `quickstart/rancher/linode` folder, execute `terraform destroy --auto-approve`.

2. Wait for confirmation that all resources have been destroyed.
@@ -0,0 +1,80 @@
---
title: Rancher Outscale Quick Start Guide
description: Read this step by step Rancher Outscale guide to quickly deploy a Rancher server with a single-node downstream Kubernetes cluster attached.
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager/outscale-qs"/>
</head>

The following steps will quickly deploy a Rancher server on Outscale in a single-node K3s Kubernetes cluster, with a single-node downstream Kubernetes cluster attached.

:::note

The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../installation-and-upgrade/installation-and-upgrade.md).

:::

## Prerequisites

:::caution

Deploying to Outscale will incur charges.

:::

- [Outscale Account](https://en.outscale.com/): You will require an account on Outscale, as this is where the server and cluster will run.
- [Outscale Access Key](https://docs.outscale.com/en/userguide/About-Access-Keys.html): Use these instructions to create an Outscale Access Key if you don't have one.
- [Terraform](https://developer.hashicorp.com/terraform/install): Used to provision the server and cluster in Outscale.

## Getting Started

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

2. Go into the Outscale folder containing the Terraform files by executing `cd quickstart/rancher/outscale`.

3. Rename the `terraform.tfvars.example` file to `terraform.tfvars`.

4. Edit `terraform.tfvars` and customize the following variables:
   - `access_key_id` - Outscale access key
   - `secret_key_id` - Outscale secret key
   - `rancher_server_admin_password` - Admin password for created Rancher server. See [Setting up the Bootstrap Password](../../installation-and-upgrade/resources/bootstrap-password.md#password-requirements) for password requirements.

5. **Optional:** Modify optional variables within `terraform.tfvars`. See the [Quickstart Readme](https://github.com/rancher/quickstart) and the [Outscale Quickstart Readme](https://github.com/rancher/quickstart/tree/master/rancher/outscale) for more information. Suggestions include:
   - `region` - Outscale region, choose the closest instead of the default (`eu-west-2`)
   - `prefix` - Prefix for all created resources
   - `instance_type` - Instance type, minimum required is `tinav3.c2r4p3`

6. Run `terraform init`.

7. To initiate the creation of the environment, run `terraform apply --auto-approve`. Then wait for output similar to the following:

   ```
   Apply complete! Resources: 21 added, 0 changed, 0 destroyed.

   Outputs:

   rancher_node_ip = xx.xx.xx.xx
   rancher_server_url = https://rancher.xx.xx.xx.xx.sslip.io
   workload_node_ip = yy.yy.yy.yy
   ```

8. Paste the `rancher_server_url` from the output above into the browser. Log in when prompted (the default username is `admin`; use the password set in `rancher_server_admin_password`).

9. SSH into the Rancher Server using the `id_rsa` key generated in `quickstart/rancher/outscale`.

#### Result

Two Kubernetes clusters are deployed into your Outscale account, one running Rancher Server and the other ready for experimentation deployments. Please note that while this setup is a great way to explore Rancher functionality, a production setup should follow our high availability setup guidelines. SSH keys for the VMs are auto-generated and stored in the module directory.

### What's Next?

Use Rancher to create a deployment. For more information, see [Creating Deployments](../deploy-workloads/deploy-workloads.md).

## Destroying the Environment

1. From the `quickstart/rancher/outscale` folder, execute `terraform destroy --auto-approve`.

2. Wait for confirmation that all resources have been destroyed.
@@ -0,0 +1,13 @@
---
title: Rancher Prime
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager/prime"/>
</head>

Starting with version v2.7, SUSE Rancher introduces Rancher Prime, an evolution of Rancher. Rancher Prime is the new commercially available enterprise offering of Rancher, built on the same open source code, and the Rancher project will continue to be 100% open source. Prime introduces additional value with greater security assurances, extended lifecycles, and access to focused architectures and Kubernetes advisories. Rancher Prime will also offer options to get production support for innovative Rancher projects. With Rancher Prime, installation assets are hosted on a trusted registry owned and managed by Rancher.

To get started with Rancher Prime, [go to this page](https://www.rancher.com/quick-start) and fill out the form.

At a minimum, users are expected to have a working knowledge of Kubernetes and peripheral functions such as permissions, roles, and RBAC.
@@ -0,0 +1,56 @@
---
title: Rancher Vagrant Quick Start
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-rancher-manager/vagrant"/>
</head>

The following steps quickly deploy a Rancher Server with a single node cluster attached.

:::caution

The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../../installation-and-upgrade/installation-and-upgrade.md).

:::

## Prerequisites

- [Vagrant](https://developer.hashicorp.com/vagrant): Vagrant is required, as it provisions the machines based on the Vagrantfile.
- [VirtualBox](https://www.virtualbox.org): The virtual machines that Vagrant creates are provisioned on VirtualBox.
- At least 4GB of free RAM.

:::note

Vagrant requires plugins to create VirtualBox VMs. Install them with the following commands:

- `vagrant plugin install vagrant-vboxmanage`
- `vagrant plugin install vagrant-vbguest`

:::

## Getting Started

1. Clone [Rancher Quickstart](https://github.com/rancher/quickstart) to a folder using `git clone https://github.com/rancher/quickstart`.

2. Go into the folder containing the Vagrantfile by executing `cd quickstart/rancher/vagrant`.

3. **Optional:** Edit `config.yaml` to:

   - Change the number of nodes and the memory allocations, if required. (`node.count`, `node.cpus`, `node.memory`)
   - Change the password of the `admin` user for logging into Rancher. (`admin_password`)

4. To initiate the creation of the environment, run `vagrant up --provider=virtualbox`.

5. Once provisioning finishes, go to `https://192.168.56.101` in the browser. The default user/password is `admin/adminPassword`.
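As an illustration of step 3, a trimmed `config.yaml` using the keys mentioned above might look like the following. The values are hypothetical, and the exact structure (for example, whether the `node.*` keys are nested) may differ, so check the file shipped in the quickstart repository:

```yaml
# Hypothetical excerpt of quickstart/rancher/vagrant/config.yaml
admin_password: adminPassword   # password for the Rancher `admin` user
node:
  count: 1      # number of cluster nodes to create
  cpus: 2       # vCPUs per node
  memory: 4096  # RAM per node, in MB
```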

**Result:** Rancher Server and your Kubernetes cluster are installed on VirtualBox.

### What's Next?

Use Rancher to create a deployment. For more information, see [Creating Deployments](../deploy-workloads/deploy-workloads.md).

## Destroying the Environment

1. From the `quickstart/rancher/vagrant` folder, execute `vagrant destroy -f`.

2. Wait for confirmation that all resources have been destroyed.
@@ -0,0 +1,12 @@
---
title: Deploying Workloads
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-workloads"/>
</head>

These guides walk you through the deployment of an application, including how to expose the application for use outside of the cluster.

- [Workload with Ingress](workload-ingress.md)
- [Workload with NodePort](nodeports.md)
@@ -0,0 +1,142 @@
---
title: Workload with NodePort Quick Start
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-workloads/nodeports"/>
</head>

### Prerequisite

You have a running cluster with at least 1 node.

### 1. Deploying a Workload

You're ready to create your first Kubernetes [workload](https://kubernetes.io/docs/concepts/workloads/). A workload is an object that includes pods along with other files and info needed to deploy your application.

For this workload, you'll be deploying the application Rancher Hello-World.

1. Click **☰ > Cluster Management**.
1. From the **Clusters** page, go to the cluster where the workload should be deployed and click **Explore**.
1. Click **Workload**.
1. Click **Create**.
1. Enter a **Name** for your workload.
1. From the **Container Image** field, enter `rancher/hello-world`. This field is case-sensitive.
1. Click **Add Port**.
1. From the **Service Type** drop-down, make sure that **NodePort** is selected.

   

1. From the **Publish the container port** field, enter port `80`.

   

1. Click **Create**.

**Result:**

* Your workload is deployed. This process might take a few minutes to complete.
* When your workload completes deployment, it's assigned a state of **Active**. You can view this status from the project's **Workloads** page.
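The UI steps above are roughly equivalent to creating a Deployment plus a NodePort Service. The following manifest is only an illustrative sketch (the `hello-world` names and `app` labels are placeholders, not what Rancher generates internally):

```yaml
# Rough kubectl-applyable equivalent of the UI steps above (illustrative names)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: rancher/hello-world
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - port: 80
      targetPort: 80   # a node port in the 30000-32767 range is auto-assigned
```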

<br/>

### 2. Viewing Your Application

From the **Workloads** page, click the link underneath your workload. If your deployment succeeded, your application opens.

### Attention: Cloud-Hosted Sandboxes

When using a cloud-hosted virtual machine, you may not have access to the port running the container. In this event, you can test the deployment from an SSH session on the local machine using `Execute Shell`. Use the port number after the `:` in the link under your workload if available, which is `31568` in this example.
```html
gettingstarted@rancher:~$ curl http://localhost:31568
<!DOCTYPE html>
<html>
<head>
  <title>Rancher</title>
  <link rel="icon" href="img/favicon.png">
  <style>
    body {
      background-color: white;
      text-align: center;
      padding: 50px;
      font-family: "Open Sans","Helvetica Neue",Helvetica,Arial,sans-serif;
    }
    button {
      background-color: #0075a8;
      border: none;
      color: white;
      padding: 15px 32px;
      text-align: center;
      text-decoration: none;
      display: inline-block;
      font-size: 16px;
    }

    #logo {
      margin-bottom: 40px;
    }
  </style>
</head>
<body>
  <img id="logo" src="img/rancher-logo.svg" alt="Rancher logo" width=400 />
  <h1>Hello world!</h1>
  <h3>My hostname is hello-world-66b4b9d88b-78bhx</h3>
  <div id='Services'>
    <h3>k8s services found 2</h3>
    <b>INGRESS_D1E1A394F61C108633C4BD37AEDDE757</b> tcp://10.43.203.31:80<br />
    <b>KUBERNETES</b> tcp://10.43.0.1:443<br />
  </div>
  <br />
  <div id='rancherLinks' class="row social">
    <a class="p-a-xs" href="https://rancher.com/docs"><img src="img/favicon.png" alt="Docs" height="25" width="25"></a>
    <a class="p-a-xs" href="https://slack.rancher.io/"><img src="img/icon-slack.svg" alt="slack" height="25" width="25"></a>
    <a class="p-a-xs" href="https://github.com/rancher/rancher"><img src="img/icon-github.svg" alt="github" height="25" width="25"></a>
    <a class="p-a-xs" href="https://twitter.com/Rancher_Labs"><img src="img/icon-twitter.svg" alt="twitter" height="25" width="25"></a>
    <a class="p-a-xs" href="https://www.facebook.com/rancherlabs/"><img src="img/icon-facebook.svg" alt="facebook" height="25" width="25"></a>
    <a class="p-a-xs" href="https://www.linkedin.com/groups/6977008/profile"><img src="img/icon-linkedin.svg" height="25" alt="linkedin" width="25"></a>
  </div>
  <br />
  <button class='button' onclick='myFunction()'>Show request details</button>
  <div id="reqInfo" style='display:none'>
    <h3>Request info</h3>
    <b>Host:</b> 172.22.101.111:31411 <br />
    <b>Pod:</b> hello-world-66b4b9d88b-78bhx </b><br />
    <b>Accept:</b> [*/*]<br />
    <b>User-Agent:</b> [curl/7.47.0]<br />
  </div>
  <br />
  <script>
    function myFunction() {
      var x = document.getElementById("reqInfo");
      if (x.style.display === "none") {
        x.style.display = "block";
      } else {
        x.style.display = "none";
      }
    }
  </script>
</body>
</html>
gettingstarted@rancher:~$
```
### Finished

Congratulations! You have successfully deployed a workload exposed via a NodePort.

#### What's Next?

When you're done using your sandbox, destroy the Rancher Server and your cluster. See one of the following:

- [Amazon AWS: Destroying the Environment](../deploy-rancher-manager/aws.md#destroying-the-environment)
- [DigitalOcean: Destroying the Environment](../deploy-rancher-manager/digitalocean.md#destroying-the-environment)
- [Vagrant: Destroying the Environment](../deploy-rancher-manager/vagrant.md#destroying-the-environment)
@@ -0,0 +1,77 @@
---
title: Workload with Ingress Quick Start
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides/deploy-workloads/workload-ingress"/>
</head>

### Prerequisite

You have a running cluster with at least 1 node.

### 1. Deploying a Workload

You're ready to create your first Kubernetes [workload](https://kubernetes.io/docs/concepts/workloads/). A workload is an object that includes pods along with other files and info needed to deploy your application.

For this workload, you'll be deploying the application Rancher Hello-World.

1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.
1. Click **Workload**.
1. Click **Create**.
1. Click **Deployment**.
1. Enter a **Name** for your workload.
1. From the **Container Image** field, enter `rancher/hello-world`. This field is case-sensitive.
1. Click **Add Port**, select `Cluster IP` as the `Service Type`, and enter `80` in the **Private Container Port** field. You may leave the `Name` blank or specify any name that you wish. Adding a port enables access to the application inside and outside of the cluster. For more information, see [Services](../../../how-to-guides/new-user-guides/kubernetes-resources-setup/workloads-and-pods/workloads-and-pods.md#services).
1. Click **Create**.

**Result:**

* Your workload is deployed. This process might take a few minutes to complete.
* When your workload completes deployment, it's assigned a state of **Active**. You can view this status from the project's **Workloads** page.

### 2. Expose The Application Via An Ingress

Now that the application is up and running, it needs to be exposed so that other services can connect.

1. Click **☰ > Cluster Management**.
1. Go to the cluster that you created and click **Explore**.

1. Click **Service Discovery > Ingresses**.

1. Click **Create**.

1. When choosing **Namespace**, ensure it is the same as the one used when you created your deployment. Otherwise, your deployment will not be available when you attempt to select **Target Service**, as in Step 8 below.

1. Enter a **Name**, such as **hello**.

1. Specify your **Path**, such as `/hello`.

1. In the **Target Service** field, drop down the list and choose the name that you set for your service.

1. In the **Port** field, drop down the list and select `80`.

1. Click **Create** at the bottom right.

**Result:** The application is assigned a `sslip.io` address and exposed. It may take a minute or two to populate.
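The ingress created through the form above corresponds roughly to a manifest like the following. This is only a sketch: `hello` and `/hello` match the example values from the steps, and `<YOUR_SERVICE_NAME>` stands in for the service chosen in the **Target Service** field:

```yaml
# Illustrative equivalent of the ingress form (placeholder service name)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  rules:
    - http:
        paths:
          - path: /hello
            pathType: Prefix
            backend:
              service:
                name: <YOUR_SERVICE_NAME>
                port:
                  number: 80
```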

### View Your Application

From the **Deployments** page, find the **Endpoints** column for your deployment and click on an endpoint. The endpoints available will depend on how you configured the port you added to your deployment. For endpoints where you do not see a randomly assigned port, append the path you specified when creating the ingress to the IP address. For example, if your endpoint looks like `xxx.xxx.xxx.xxx` or `https://xxx.xxx.xxx.xxx`, change it to `xxx.xxx.xxx.xxx/hello` or `https://xxx.xxx.xxx.xxx/hello`.

Your application will open in a separate window.

#### Finished

Congratulations! You have successfully deployed a workload exposed via an ingress.

#### What's Next?

When you're done using your sandbox, destroy the Rancher Server and your cluster. See one of the following:

- [Amazon AWS: Destroying the Environment](../deploy-rancher-manager/aws.md#destroying-the-environment)
- [DigitalOcean: Destroying the Environment](../deploy-rancher-manager/digitalocean.md#destroying-the-environment)
- [Linode: Destroying the Environment](../deploy-rancher-manager/linode.md#destroying-the-environment)
- [Vagrant: Destroying the Environment](../deploy-rancher-manager/vagrant.md#destroying-the-environment)
@@ -0,0 +1,21 @@
---
title: Rancher Deployment Quick Start Guides
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/quick-start-guides"/>
</head>

:::caution

The intent of these guides is to quickly launch a sandbox that you can use to evaluate Rancher. These guides are not intended for production environments. For comprehensive setup instructions, see [Installation](../installation-and-upgrade/installation-and-upgrade.md).

:::

Use this section of the docs to jump start your deployment and testing of Rancher 2.x. It contains instructions for a simple Rancher setup and some common use cases. We plan on adding more content to this section in the future.

We have Quick Start Guides for:

- [Deploying Rancher Server](deploy-rancher-manager/deploy-rancher-manager.md): Get started running Rancher using the method most convenient for you.

- [Deploying Workloads](deploy-workloads/deploy-workloads.md): Deploy a simple [workload](https://kubernetes.io/docs/concepts/workloads/) and expose it, letting you access it from outside the cluster.
17
versioned_docs/version-2.14/glossary.md
Normal file
@@ -0,0 +1,17 @@
---
title: Glossary
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/glossary"/>
</head>

This page covers Rancher-specific terminology and symbols which might be unfamiliar, or which differ between Rancher versions.

```mdx-code-block
import Glossary, {toc as GlossaryTOC} from "/shared-files/_glossary.md"

<Glossary />

export const toc = GlossaryTOC;
```
@@ -0,0 +1,11 @@
---
title: Advanced User Guides
---

<head>
<link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides"/>
</head>

Advanced user guides are "problem-oriented" docs in which users learn how to answer questions or solve problems. The major difference between these and the new user guides is that these guides are geared toward more experienced or advanced users who have more technical needs from their documentation. These users already have an understanding of Rancher and its functions. They know what they need to accomplish; they just need additional guidance to complete some more complex task that they have encountered while working.

It should be noted that neither new user guides nor advanced user guides provide detailed explanations or discussions (these kinds of docs belong elsewhere). How-to guides focus on the action of guiding users through repeatable, effective steps to learn new skills, master some task, or overcome some problem.
@@ -0,0 +1,16 @@
---
title: Compliance Scan Guides
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides"/>
</head>

- [Install rancher-compliance](install-rancher-compliance.md)
- [Uninstall rancher-compliance](uninstall-rancher-compliance.md)
- [Run a Scan](run-a-scan.md)
- [Run a Scan Periodically on a Schedule](run-a-scan-periodically-on-a-schedule.md)
- [View Reports](view-reports.md)
- [Enable Alerting for rancher-compliance](enable-alerting-for-rancher-compliance.md)
- [Configure Alerts for Periodic Scan on a Schedule](configure-alerts-for-periodic-scan-on-a-schedule.md)
- [Create a Custom Benchmark Version to Run](create-a-custom-compliance-version-to-run.md)
@@ -0,0 +1,44 @@
---
title: Configure Alerts for Periodic Scan on a Schedule
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/configure-alerts-for-periodic-scan-on-a-schedule"/>
</head>

You can run a ClusterScan on a schedule.

A scheduled scan can also specify whether you should receive alerts when the scan completes.

Alerts are supported only for a scan that runs on a schedule.

The compliance application supports two types of alerts:

- Alert on scan completion: This alert is sent out when the scan run finishes. It includes details such as the ClusterScan name and the ClusterScanProfile name.
- Alert on scan failure: This alert is sent out if there are test failures in the scan run or if the scan is in a `Fail` state.

:::note Prerequisite

Before enabling alerts for `rancher-compliance`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md)

While configuring the routes for `rancher-compliance` alerts, you can specify the matching using the key-value pair `job: rancher-compliance-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-compliance-scan-alerts)

:::

To configure alerts for a scan that runs on a schedule,

1. Enable alerts on the `rancher-compliance` application. For more information, see [this page](../../../how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance.md).
1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**.
1. Click **Compliance > Scan**.
1. Click **Create**.
1. Choose a cluster scan profile. The profile determines which CIS Benchmark version is used and which tests are performed. If you choose the Default profile, the Compliance Operator chooses a profile applicable to the type of Kubernetes cluster it is installed on.
1. Choose the option **Run scan on a schedule**.
1. Enter a valid [cron schedule expression](https://en.wikipedia.org/wiki/Cron#CRON_expression) in the field **Schedule**.
1. Check the boxes next to the alert types under **Alerting**.
1. Optional: Choose a **Retention Count**, which indicates the number of reports maintained for this recurring scan. By default this count is 3. When the retention limit is reached, older reports are purged.
1. Click **Create**.

**Result:** The scan runs and reschedules to run according to the cron schedule provided. Alerts are sent out when the scan finishes if Routes and Receivers are configured in the `rancher-monitoring` application.

A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears.
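Behind the scenes, the steps above create a `ClusterScan` custom resource. The following is only a rough sketch: the API group and field names are assumptions based on the upstream CIS/compliance operator and may differ in your Rancher version, so verify them before use.

```yaml
apiVersion: cis.cattle.io/v1        # assumed API group; verify with `kubectl api-resources`
kind: ClusterScan
metadata:
  name: nightly-scan
spec:
  scanProfileName: my-scan-profile  # replace with an existing ClusterScanProfile
  scheduledScanConfig:
    cronSchedule: "0 2 * * *"       # every day at 02:00
    retentionCount: 3
    scanAlertRule:
      alertOnComplete: true
      alertOnFailure: true
```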
@@ -0,0 +1,13 @@
---
title: Create a Custom Compliance Version for Running a Cluster Scan
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/create-a-custom-compliance-version-to-run"/>
</head>

Some Kubernetes cluster setups require custom configurations of the Compliance tests. For example, the path to the Kubernetes config files or certs might differ from the standard location where the upstream Compliance checks look for them.

You can create a custom compliance version for running a cluster scan using the `rancher-compliance` application.

For details, see [this page.](../../../integrations-in-rancher/compliance-scans/custom-benchmark.md)
@@ -0,0 +1,24 @@
---
title: Enable Alerting for Rancher Compliance
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/enable-alerting-for-rancher-compliance"/>
</head>

You can configure alerts to be sent for a scan that runs on a schedule.

:::note Prerequisite

Before enabling alerts for `rancher-compliance`, make sure to install the `rancher-monitoring` application and configure the Receivers and Routes. For more information, see [this section.](../../../reference-guides/monitoring-v2-configuration/receivers.md)

While configuring the routes for `rancher-compliance` alerts, you can specify the matching using the key-value pair `job: rancher-compliance-scan`. An example route configuration is [here.](../../../reference-guides/monitoring-v2-configuration/receivers.md#example-route-config-for-compliance-scan-alerts)

:::

While installing or upgrading the `rancher-compliance` Helm chart, set the following flag to `true` in the `values.yaml`:

```yaml
alerts:
  enabled: true
```
@@ -0,0 +1,15 @@
---
title: Install Rancher Compliance
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/install-rancher-compliance"/>
</head>

1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to install Compliance and click **Explore**.
1. In the left navigation bar, click **Apps > Charts**.
1. Click **Compliance**.
1. Click **Install**.

**Result:** The compliance scan application is deployed on the Kubernetes cluster.
@@ -0,0 +1,24 @@
---
title: Run a Scan Periodically on a Schedule
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan-periodically-on-a-schedule"/>
</head>

To run a ClusterScan on a schedule,

1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**.
1. Click **Compliance > Scan**.
1. Choose a cluster scan profile. The profile determines which CIS Benchmark version is used and which tests are performed. If you choose the Default profile, the Compliance Operator chooses a profile applicable to the type of Kubernetes cluster it is installed on.
1. Choose the option **Run scan on a schedule**.
1. Enter a valid <a href="https://en.wikipedia.org/wiki/Cron#CRON_expression" target="_blank">cron schedule expression</a> in the field **Schedule**.
1. Choose a **Retention** count, which indicates the number of reports maintained for this recurring scan. By default this count is 3. When the retention limit is reached, older reports are purged.
1. Click **Create**.

**Result:** The scan runs and reschedules to run according to the cron schedule provided. The **Next Scan** value indicates the next time this scan will run.

A report is generated with the scan results every time the scan runs. To see the latest results, click the name of the scan that appears.

You can also see previous reports by choosing the report from the **Reports** dropdown on the scan detail page.
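The **Schedule** field above uses standard five-field cron syntax (minute, hour, day of month, month, day of week). A few illustrative values:

```
0 2 * * *      # every day at 02:00
0 */6 * * *    # every six hours
30 1 * * 0     # every Sunday at 01:30
```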
@@ -0,0 +1,26 @@
---
title: Run a Scan
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/run-a-scan"/>
</head>

When a ClusterScan custom resource is created, it launches a new compliance scan on the cluster for the chosen ClusterScanProfile.

:::note

Currently, only one compliance scan can run on a cluster at a time. If you create multiple ClusterScan custom resources, the operator runs them one after the other; until one scan finishes, the remaining ClusterScan custom resources stay in the `Pending` state.

:::

To run a scan,

1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a compliance scan and click **Explore**.
1. Click **Compliance > Scan**.
1. Click **Create**.
1. Choose a cluster scan profile. The profile determines which CIS Benchmark version is used and which tests are performed. If you choose the Default profile, the Compliance Operator chooses a profile applicable to the type of Kubernetes cluster it is installed on.
1. Click **Create**.

**Result:** A report is generated with the scan results. To see the results, click the name of the scan that appears.
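Equivalently, a scan can be started by creating the custom resource directly. A minimal sketch follows; the API group and field names are assumptions based on the upstream CIS/compliance operator, so verify them in your cluster before applying:

```yaml
apiVersion: cis.cattle.io/v1          # assumed API group
kind: ClusterScan
metadata:
  name: on-demand-scan
spec:
  scanProfileName: my-scan-profile    # replace with an existing ClusterScanProfile
```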
@@ -0,0 +1,13 @@
---
title: Uninstall Rancher Compliance
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/uninstall-rancher-compliance"/>
</head>

1. From the **Cluster Dashboard**, go to the left navigation bar and click **Apps > Installed Apps**.
1. Go to the `compliance-operator-system` namespace and check the boxes next to `rancher-compliance-crd` and `rancher-compliance`.
1. Click **Delete** and confirm **Delete**.

**Result:** The `rancher-compliance` application is uninstalled.
@@ -0,0 +1,23 @@
---
title: View Reports
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/compliance-scan-guides/view-reports"/>
</head>

To view the generated Compliance scan reports,

1. In the upper left corner, click **☰ > Cluster Management**.
1. On the **Clusters** page, go to the cluster where you want to run a Compliance scan and click **Explore**.
1. Click **Compliance > Scan**.
1. The **Scans** page shows the generated reports. To see a detailed report, go to a scan report and click the name.

You can download the report from the Scans list or from the scan detail page.

To get the verbose version of the compliance scan results, run the following command on the cluster that was scanned. The scan must be complete before you run this command.

```console
export REPORT="scan-report-name"
kubectl get clusterscanreports.compliance.cattle.io $REPORT -o json | jq ".spec.reportJSON | fromjson" | jq -r ".actual_value_map_data" | base64 -d | gunzip | jq .
```
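The command above works because the verbose results are stored inside the report JSON as a gzip-compressed, base64-encoded string. The decoding stage can be sketched in isolation on a synthetic payload (the JSON below is illustrative, not the real report schema):

```shell
# Simulate a gzip-compressed, base64-encoded payload and decode it
# the same way the kubectl pipeline above does
payload=$(printf '%s' '{"total":10,"pass":9,"fail":1}' | gzip | base64 -w0)
printf '%s' "$payload" | base64 -d | gunzip   # → {"total":10,"pass":9,"fail":1}
```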
@@ -0,0 +1,265 @@
---
title: Docker Install with TLS Termination at Layer-7 NGINX Load Balancer
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/configure-layer-7-nginx-load-balancer"/>
</head>

For development and testing environments that have a special requirement to terminate TLS/SSL at a load balancer instead of your Rancher Server container, deploy Rancher and configure a load balancer to work in conjunction with it.

A layer-7 load balancer can be beneficial if you want to centralize TLS termination in your infrastructure. Layer-7 load balancing also lets your load balancer make decisions based on HTTP attributes, such as cookies, that a layer-4 load balancer cannot inspect.

This install procedure walks you through deploying Rancher in a single container, and then provides a sample configuration for a layer-7 NGINX load balancer.

## Requirements for OS, Docker, Hardware, and Networking

Make sure that your node fulfills the general [installation requirements.](../../getting-started/installation-and-upgrade/installation-requirements/installation-requirements.md)

## Installation Outline

1. [Provision Linux Host](#1-provision-linux-host)
2. [Choose an SSL Option and Install Rancher](#2-choose-an-ssl-option-and-install-rancher)
3. [Configure Load Balancer](#3-configure-load-balancer)

## 1. Provision Linux Host

Provision a single Linux host according to our [Requirements](../../getting-started/installation-and-upgrade/installation-requirements/installation-requirements.md) to launch your Rancher Server.

## 2. Choose an SSL Option and Install Rancher

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.

:::note Do you want to...

- Complete an Air Gap Installation?
- Record all transactions with the Rancher API?

See [Advanced Options](#advanced-options) below before continuing.

:::

Choose from the following options:
<details id="option-a">
<summary>Option A-Bring Your Own Certificate: Self-Signed</summary>

If you elect to use a self-signed certificate to encrypt communication, you must install the certificate on your load balancer (which you'll do later) and in your Rancher container. Run the Docker command to deploy Rancher, pointing it toward your certificate.

:::note Prerequisites:

Create a self-signed certificate.

- The certificate files must be in PEM format.

:::

**To Install Rancher Using a Self-Signed Cert:**

1. While running the Docker command to deploy Rancher, point Docker toward your CA certificate file.

   ```
   docker run -d --restart=unless-stopped \
     -p 80:80 -p 443:443 \
     -v /etc/your_certificate_directory/cacerts.pem:/etc/rancher/ssl/cacerts.pem \
     rancher/rancher:latest
   ```

</details>
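If you still need a self-signed certificate for Option A, a throwaway one in PEM format can be generated with `openssl` for testing; the hostname below is an assumption, so substitute your own:

```shell
# Generate a self-signed certificate and private key in PEM format (testing only)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=rancher.example.com" \
  -keyout privkey.pem -out cacerts.pem 2>/dev/null

# Inspect the subject to confirm the certificate was created
openssl x509 -in cacerts.pem -noout -subject
```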
<details id="option-b">
<summary>Option B-Bring Your Own Certificate: Signed by Recognized CA</summary>

If your cluster is public facing, it's best to use a certificate signed by a recognized CA.

:::note Prerequisites:

- The certificate files must be in PEM format.

:::

**To Install Rancher Using a Cert Signed by a Recognized CA:**

If you use a certificate signed by a recognized CA, installing your certificate in the Rancher container isn't necessary. However, you must make sure that no default CA certificate is generated and stored; you can do this by passing the `--no-cacerts` parameter to the container.

1. Enter the following command.

   ```
   docker run -d --restart=unless-stopped \
     -p 80:80 -p 443:443 \
     rancher/rancher:latest --no-cacerts
   ```

</details>
## 3. Configure Load Balancer

When using a load balancer in front of your Rancher container, there's no need for the container to redirect communication from port 80 to port 443. Passing the `X-Forwarded-Proto: https` header disables this redirect.

The load balancer or proxy has to be configured to support the following:

- **WebSocket** connections
- **SPDY** / **HTTP/2** protocols
- Passing / setting the following headers:

| Header | Value | Description |
|--------|-------|-------------|
| `Host` | Hostname used to reach Rancher. | To identify the server requested by the client. |
| `X-Forwarded-Proto` | `https` | To identify the protocol that a client used to connect to the load balancer or proxy.<br /><br/>**Note:** If this header is present, `rancher/rancher` does not redirect HTTP to HTTPS. |
| `X-Forwarded-Port` | Port used to reach Rancher. | To identify the port that a client used to connect to the load balancer or proxy. |
| `X-Forwarded-For` | IP of the client connection. | To identify the originating IP address of a client. |

### Example NGINX configuration

This NGINX configuration is tested on NGINX 1.14.

:::note

This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - HTTP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/).

:::

- Replace `rancher-server` with the IP address or hostname of the node running the Rancher container.
- Replace both occurrences of `FQDN` with the DNS name for Rancher.
- Replace `/certs/fullchain.pem` and `/certs/privkey.pem` with the locations of the server certificate and the server certificate key, respectively.
```
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
    upstream rancher {
        server rancher-server:80;
    }

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        listen 443 ssl http2;
        server_name FQDN;
        ssl_certificate /certs/fullchain.pem;
        ssl_certificate_key /certs/privkey.pem;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rancher;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            # Allows the execute shell window to remain open for up to 15 minutes.
            # Without this parameter, the default is 1 minute and the window closes automatically.
            proxy_read_timeout 900s;
            proxy_buffering off;
        }
    }

    server {
        listen 80;
        server_name FQDN;
        return 301 https://$server_name$request_uri;
    }
}
```
<br/>

## What's Next?

- **Recommended:** Review Single Node [Backup](../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/back-up-docker-installed-rancher.md) and [Restore](../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/restore-docker-installed-rancher.md). Although you don't have any data you need to back up right now, we recommend creating backups after regular Rancher use.
- Create a Kubernetes cluster: [Provisioning Kubernetes Clusters](../new-user-guides/kubernetes-clusters-in-rancher-setup/kubernetes-clusters-in-rancher-setup.md).

<br/>

## FAQ and Troubleshooting

For help troubleshooting certificates, see [this section.](../../getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/certificate-troubleshooting.md)

## Advanced Options

### API Auditing

If you want to record all transactions with the Rancher API, enable the [API Auditing](enable-api-audit-log.md) feature by adding the flags below to your install command.

```
-e AUDIT_LEVEL=1 \
-e AUDIT_LOG_ENABLED=true \
-e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
-e AUDIT_LOG_MAXAGE=20 \
-e AUDIT_LOG_MAXBACKUP=20 \
-e AUDIT_LOG_MAXSIZE=100 \
```
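As a sketch, combining these flags with the basic install command from Option B might look like the following; the host path in the bind mount is an assumption, added only so the audit log is readable from the host:

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e AUDIT_LEVEL=1 \
  -e AUDIT_LOG_ENABLED=true \
  -e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
  -e AUDIT_LOG_MAXAGE=20 \
  -e AUDIT_LOG_MAXBACKUP=20 \
  -e AUDIT_LOG_MAXSIZE=100 \
  -v /var/log/rancher/auditlog:/var/log/auditlog \
  rancher/rancher:latest --no-cacerts
```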
### Air Gap

If you are visiting this page to complete an [Air Gap Installation](../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md), you must prepend your private registry URL to the image name when running the installation command in the option that you choose. Add `<REGISTRY.DOMAIN.COM:PORT>` with your private registry URL in front of `rancher/rancher:latest`.

**Example:**

```
<REGISTRY.DOMAIN.COM:PORT>/rancher/rancher:latest
```
### Persistent Data

Rancher uses etcd as a datastore. When Rancher is installed with Docker, the embedded etcd is used. The persistent data is at the following path in the container: `/var/lib/rancher`.

You can bind mount a host volume to this location to preserve data on the host it is running on:

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  --privileged \
  rancher/rancher:latest
```

This operation requires [privileged access](../../getting-started/installation-and-upgrade/other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md#privileged-access-for-rancher).

This layer-7 NGINX configuration is tested on NGINX versions 1.13 (mainline) and 1.14 (stable).

:::note

This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - TCP and UDP Load Balancer](https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/).

:::

```
upstream rancher {
    server rancher-server:80;
}

map $http_upgrade $connection_upgrade {
    default Upgrade;
    ''      close;
}

server {
    listen 443 ssl http2;
    server_name rancher.yourdomain.com;
    ssl_certificate /etc/your_certificate_directory/fullchain.pem;
    ssl_certificate_key /etc/your_certificate_directory/privkey.pem;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://rancher;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        # Allows the execute shell window to remain open for up to 15 minutes.
        # Without this parameter, the default is 1 minute and the window closes automatically.
        proxy_read_timeout 900s;
        proxy_buffering off;
    }
}

server {
    listen 80;
    server_name rancher.yourdomain.com;
    return 301 https://$server_name$request_uri;
}
```

<br/>
@@ -0,0 +1,138 @@
---
title: Configure Rancher as an OIDC provider
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/configure-oidc-provider"/>
</head>

Rancher can function as a standard OpenID Connect (OIDC) provider, allowing external applications to use Rancher for authentication. This can be used to enable single sign-on (SSO) across Rancher Prime components. For example, see the [documentation](https://documentation.suse.com/cloudnative/suse-observability/latest/en/setup/security/authentication/oidc.html) for configuring the OIDC provider for SUSE Observability.

The OIDC provider can be enabled with the `oidc-provider` feature flag. When this flag is enabled, the following endpoints are available:

- `https://{rancher-url}/oidc/authorize`: This endpoint initiates the authentication flow. If a user is already logged into Rancher, it returns an authorization code. Otherwise, it redirects the user to the Rancher login page. Authorization codes and related request information are securely stored in session secrets. Codes are single-use and expire after 10 minutes.

- `https://{rancher-url}/oidc/token`: This endpoint exchanges an authorization code for an `id_token`, `access_token`, and `refresh_token`.

- `https://{rancher-url}/oidc/.well-known/openid-configuration`: This endpoint returns a JSON document containing the OIDC provider's configuration, including endpoint URLs, supported scopes, claims, and other relevant details.

- `https://{rancher-url}/oidc/userinfo`: This endpoint provides information about the authenticated user.

The OIDC provider supports the OIDC Authorization Code Flow with PKCE.
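In the Authorization Code Flow with PKCE, the client sends a `code_challenge` derived from a random `code_verifier` when calling the authorize endpoint, then proves possession of the verifier at the token endpoint. The S256 derivation can be sketched in shell; the verifier below is the RFC 7636 test vector, not a value to reuse:

```shell
# Derive the S256 code_challenge from a code_verifier (RFC 7636, Appendix B)
verifier="dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
challenge=$(printf '%s' "$verifier" | openssl dgst -sha256 -binary | base64 | tr '+/' '-_' | tr -d '=')
echo "$challenge"   # E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```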
## Configure OIDCClient

An `OIDCClient` represents an external application that will authenticate against Rancher.

### Programmatically

Create an `OIDCClient`:

```yaml
apiVersion: management.cattle.io/v3
kind: OIDCClient
metadata:
  name: oidc-client-test
spec:
  tokenExpirationSeconds: 600 # expiration of the id_token and access_token
  refreshTokenExpirationSeconds: 3600 # expiration of the refresh_token
  redirectURIs:
    - "https://myredirecturl.com" # replace with your redirect url
```

Rancher automatically generates a client ID and client secret for each `OIDCClient`. Once the resource is created, Rancher populates the status field with the client ID:

```yaml
apiVersion: management.cattle.io/v3
kind: OIDCClient
metadata:
  name: oidc-client-test
spec:
  tokenExpirationSeconds: 600 # expiration of the id_token and access_token
  refreshTokenExpirationSeconds: 3600 # expiration of the refresh_token
  redirectURIs:
    - "https://myredirecturl.com" # replace with your redirect url
status:
  clientID: client-xxx
  clientSecrets:
    client-secret-1:
      createdAt: "xxx"
      lastFiveCharacters: xxx
```

Rancher automatically generates a Kubernetes `Secret` in the `cattle-oidc-client-secrets` namespace for each `OIDCClient` resource. The Secret's name matches the `OIDCClient` client ID. Initially, the `Secret` contains a single client secret.

To retrieve the client secret:

```
kubectl get secret client-xxx -n cattle-oidc-client-secrets -o jsonpath="{.data.client-secret-1}" | base64 -d
```

Output:

```
secret-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

You can now use this client ID and client secret in your OIDC client application.
#### Managing Client Secrets

You can manage multiple client secrets per `OIDCClient`. Use annotations on the `OIDCClient` resource to perform secret operations:

- Creation: Adding the `cattle.io/oidc-client-secret-create: true` annotation triggers the creation of a new client secret.
- Removal: Adding the `cattle.io/oidc-client-secret-remove: client-secret-1` annotation removes the specified client secrets.
- Regeneration: Adding the `cattle.io/oidc-client-secret-regenerate: client-secret-1` annotation regenerates the specified client secrets.
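As a sketch, adding the create annotation to the `OIDCClient` from the earlier example would look like this (the resource name is the example's; annotation placement under `metadata` follows standard Kubernetes conventions):

```yaml
apiVersion: management.cattle.io/v3
kind: OIDCClient
metadata:
  name: oidc-client-test
  annotations:
    cattle.io/oidc-client-secret-create: "true"   # triggers creation of an additional client secret
spec:
  tokenExpirationSeconds: 600
  refreshTokenExpirationSeconds: 3600
  redirectURIs:
    - "https://myredirecturl.com"
```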
### Rancher UI

Create an OIDCClient:

1. In the top left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, click **OIDC Apps**.
1. Click **Add Application**. Fill out the **Create OIDC App** form.
1. Click **Add Application**.

#### Managing Client Secrets

On the OIDC App page:

- Creation: Click **Add new secret**.
- Removal: Click **⋮ > Delete**.
- Regeneration: Click **⋮ > Regenerate**.
## Signing key

A default key pair for signing the `id_token`, `access_token`, and `refresh_token` tokens is created by Rancher in a `Secret` called `oidc-signing-key` in the `cattle-system` namespace. Only one key is used for signing, but multiple public keys can be returned by the JWKS endpoint in order to avoid disruption when doing a key rotation.

### Rotation without disruption

To create a new key pair for signing, manually create the key pair and add it to the `oidc-signing-key` `Secret`.

Example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oidc-signing-key
type: Opaque
data:
  key2.pem: <base64-encoded-new-private-key>
  key1.pub: <base64-encoded-old-public-key>
  key2.pub: <base64-encoded-new-public-key>
```

Rancher signs tokens using `key2.pem`, while the JWKS endpoint serves both `key1.pub` and `key2.pub`. This ensures a smooth key rotation from `key1` to `key2` without disrupting existing token verification. Note that only one private key (`.pem`) can be stored in the secret at a time, and each key pair must share the same base name, differing only by their suffix: `.pem` for the private key and `.pub` for the public key.
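A new key pair can be generated with `openssl`, for example (RSA is an assumption here; match the algorithm of your existing key), then base64-encoded for the `Secret`'s `data` fields:

```shell
# Generate a new private key and derive the matching public key (PEM format)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key2.pem 2>/dev/null
openssl pkey -in key2.pem -pubout -out key2.pub 2>/dev/null

# Base64-encode the files for use in the Secret's data fields
base64 -w0 key2.pem > key2.pem.b64
base64 -w0 key2.pub > key2.pub.b64
```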
### Rotation with disruption

Removing the `oidc-signing-key` `Secret` causes Rancher to regenerate the signing key on the next restart.

:::warning

This invalidates all previously issued `id_token`, `access_token`, and `refresh_token` tokens, making them unusable.

:::
@@ -0,0 +1,204 @@
---
title: Enabling the API Audit Log in Downstream Clusters
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-api-audit-log-in-downstream-clusters"/>
</head>

Kubernetes auditing provides a security-relevant, chronological set of records about a cluster. The kube-apiserver performs auditing. Each request generates an event at each stage of its execution, which is then preprocessed according to a certain policy and written to a backend. The policy determines what's recorded and the backend persists the records.

You might want to configure the audit log as part of compliance with the Center for Internet Security (CIS) Kubernetes Benchmark controls.

For configuration details, refer to the [official Kubernetes documentation](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/).

<Tabs groupId="k8s-distro">
<TabItem value="RKE2" default>

### Method 1 (Recommended): Set `audit-policy-file` in `machineGlobalConfig` or `machineSelectorConfig`

You can set `audit-policy-file` in the configuration file using either `machineGlobalConfig` or `machineSelectorConfig`.

When using `machineGlobalConfig`, Rancher delivers the file to the path `/var/lib/rancher/rke2/etc/config-files/audit-policy-file` on **all nodes** (both control plane and worker nodes), and sets the proper options in the RKE2 server. This may cause unwanted worker node reconciliation when the audit policy is modified.

To avoid worker node reconciliation, use `machineSelectorConfig` with a label selector to target only control plane nodes. This ensures that the audit policy file is only delivered to control plane nodes.
Example using `machineGlobalConfig`:
|
||||
```yaml
|
||||
apiVersion: provisioning.cattle.io/v1
|
||||
kind: Cluster
|
||||
spec:
|
||||
rkeConfig:
|
||||
machineGlobalConfig:
|
||||
audit-policy-file: |
|
||||
apiVersion: audit.k8s.io/v1
|
||||
kind: Policy
|
||||
rules:
|
||||
- level: RequestResponse
|
||||
resources:
|
||||
- group: ""
|
||||
resources:
|
||||
- pods
|
||||
```
|
||||
|
||||
Example using `machineSelectorConfig` (recommended to avoid worker node reconciliation):
|
||||
```yaml
|
||||
apiVersion: provisioning.cattle.io/v1
|
||||
kind: Cluster
|
||||
spec:
|
||||
rkeConfig:
|
||||
machineSelectorConfig:
|
||||
- config:
|
||||
audit-policy-file: |
|
||||
apiVersion: audit.k8s.io/v1
|
||||
kind: Policy
|
||||
rules:
|
||||
- level: RequestResponse
|
||||
resources:
|
||||
- group: ""
|
||||
resources:
|
||||
- pods
|
||||
machineLabelSelector:
|
||||
matchLabels:
|
||||
rke.cattle.io/control-plane-role: 'true'
|
||||
```
|

### Method 2: Use the Directives `machineSelectorFiles` and `machineGlobalConfig`

:::note

This feature is available in Rancher v2.7.2 and later.

:::

You can use `machineSelectorFiles` to deliver the audit policy file to the control plane nodes, and `machineGlobalConfig` to set the options on the kube-apiserver.

As a prerequisite, you must create a [secret](../new-user-guides/kubernetes-resources-setup/secrets.md) or [configmap](../new-user-guides/kubernetes-resources-setup/configmaps.md) to be the source of the audit policy.

The secret or configmap must meet the following requirements:

1. It must be in the `fleet-default` namespace, where the Cluster object exists.
2. It must have the annotation `rke.cattle.io/object-authorized-for-clusters: <cluster-name1>,<cluster-name2>`, which permits the target clusters to use it.

:::tip

The Rancher Dashboard provides an easy-to-use form for creating the secret or configmap.

:::

Example:

```yaml
apiVersion: v1
data:
  audit-policy: >-
    IyBMb2cgYWxsIHJlcXVlc3RzIGF0IHRoZSBNZXRhZGF0YSBsZXZlbC4KYXBpVmVyc2lvbjogYXVkaXQuazhzLmlvL3YxCmtpbmQ6IFBvbGljeQpydWxlczoKLSBsZXZlbDogTWV0YWRhdGE=
kind: Secret
metadata:
  annotations:
    rke.cattle.io/object-authorized-for-clusters: cluster1
  name: <name1>
  namespace: fleet-default
```
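The `audit-policy` value in the example Secret is base64-encoded; you can decode it to inspect the policy it carries (a minimal Metadata-level policy):

```shell
# Decode the example Secret's `audit-policy` value to verify its content.
echo 'IyBMb2cgYWxsIHJlcXVlc3RzIGF0IHRoZSBNZXRhZGF0YSBsZXZlbC4KYXBpVmVyc2lvbjogYXVkaXQuazhzLmlvL3YxCmtpbmQ6IFBvbGljeQpydWxlczoKLSBsZXZlbDogTWV0YWRhdGE=' | base64 -d
```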

Enable and configure the audit log by editing the cluster in YAML, utilizing the `machineSelectorFiles` and `machineGlobalConfig` directives.

Example:

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
  rkeConfig:
    machineGlobalConfig:
      kube-apiserver-arg:
        - audit-policy-file=<customized-path>/dev-audit-policy.yaml
        - audit-log-path=<customized-path>/dev-audit.logs
    machineSelectorFiles:
      - fileSources:
          - configMap:
              name: ''
            secret:
              items:
                - key: audit-policy
                  path: <customized-path>/dev-audit-policy.yaml
              name: dev-audit-policy
        machineLabelSelector:
          matchLabels:
            rke.cattle.io/control-plane-role: 'true'
```

For more information about cluster configuration, refer to the [RKE2 cluster configuration reference](../../reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md) pages.

</TabItem>

<TabItem value="K3s">

:::note

This feature is available in Rancher v2.7.2 and later.

:::

You can use `machineSelectorFiles` to deliver the audit policy file to the control plane nodes, and `machineGlobalConfig` to set the options on the kube-apiserver.

As a prerequisite, you must create a [secret](../new-user-guides/kubernetes-resources-setup/secrets.md) or [configmap](../new-user-guides/kubernetes-resources-setup/configmaps.md) to be the source of the audit policy.

The secret or configmap must meet the following requirements:

1. It must be in the `fleet-default` namespace, where the Cluster object exists.
2. It must have the annotation `rke.cattle.io/object-authorized-for-clusters: <cluster-name1>,<cluster-name2>`, which permits the target clusters to use it.

:::tip

The Rancher Dashboard provides an easy-to-use form for creating the [secret](../new-user-guides/kubernetes-resources-setup/secrets.md) or [configmap](../new-user-guides/kubernetes-resources-setup/configmaps.md).

:::

Example:

```yaml
apiVersion: v1
data:
  audit-policy: >-
    IyBMb2cgYWxsIHJlcXVlc3RzIGF0IHRoZSBNZXRhZGF0YSBsZXZlbC4KYXBpVmVyc2lvbjogYXVkaXQuazhzLmlvL3YxCmtpbmQ6IFBvbGljeQpydWxlczoKLSBsZXZlbDogTWV0YWRhdGE=
kind: Secret
metadata:
  annotations:
    rke.cattle.io/object-authorized-for-clusters: cluster1
  name: <name1>
  namespace: fleet-default
```

Enable and configure the audit log by editing the cluster in YAML, utilizing the `machineSelectorFiles` and `machineGlobalConfig` directives.

Example:

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
spec:
  rkeConfig:
    machineGlobalConfig:
      kube-apiserver-arg:
        - audit-policy-file=<customized-path>/dev-audit-policy.yaml
        - audit-log-path=<customized-path>/dev-audit.logs
    machineSelectorFiles:
      - fileSources:
          - configMap:
              name: ''
            secret:
              items:
                - key: audit-policy
                  path: <customized-path>/dev-audit-policy.yaml
              name: dev-audit-policy
        machineLabelSelector:
          matchLabels:
            rke.cattle.io/control-plane-role: 'true'
```

For more information about cluster configuration, refer to the [K3s cluster configuration reference](../../reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md) pages.

</TabItem>
</Tabs>
@@ -0,0 +1,685 @@
---
title: Enabling the API Audit Log to Record System Events
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-api-audit-log"/>
</head>

You can enable the API audit log to record the sequence of system events initiated by individual users. You can see what happened, when it happened, who initiated it, and what cluster it affected. When you enable this feature, all requests to the Rancher API and all responses from it are written to a log.

You can enable API auditing during Rancher installation or upgrade.

## Enabling API Audit Log

The audit log is enabled and configured by passing environment variables to the Rancher server container. See the following to enable it on your installation:

- [Docker Install](../../reference-guides/single-node-rancher-in-docker/advanced-options.md#api-audit-log)
- [Kubernetes Install](../../getting-started/installation-and-upgrade/installation-references/helm-chart-options.md#api-audit-log)

## API Audit Log Options

The parameters below define what the audit log records and what data it includes:

| Parameter | Description |
| --- | --- |
| `AUDIT_LOG_ENABLED` | `false` - Disables the audit log (default setting).<br/>`true` - Enables the audit log. |
| `AUDIT_LEVEL` | `0` - Log request and response metadata (default setting).<br/>`1` - Log request and response headers.<br/>`2` - Log request body.<br/>`3` - Log response body. Each log level is cumulative: each level also logs the data of the levels below it. Each log transaction for a request/response pair uses the same `auditID` value.<br/><br/>See [Audit Log Levels](#audit-log-levels) for a table that displays what each setting logs. |
| `AUDIT_LOG_PATH` | Log path for the Rancher server API. The default path is `/var/log/auditlog/rancher-api-audit.log`. You can mount the log directory to the host.<br/><br/>Usage example: `AUDIT_LOG_PATH=/my/custom/path/` |
| `AUDIT_LOG_MAXAGE` | Defines the maximum number of days to retain old audit log files. The default is 10 days. |
| `AUDIT_LOG_MAXBACKUP` | Defines the maximum number of audit log files to retain. The default is 10. |
| `AUDIT_LOG_MAXSIZE` | Defines the maximum size in megabytes of the audit log file before it gets rotated. The default size is 100 MB. |
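The rotation settings above bound the audit log's disk usage: at most one active file plus `AUDIT_LOG_MAXBACKUP` rotated files, each up to `AUDIT_LOG_MAXSIZE` MB. A quick sketch of the worst-case estimate:

```python
def max_audit_log_disk_mb(max_size_mb: int = 100, max_backup: int = 10) -> int:
    """Worst-case audit log disk usage: the active log file plus up to
    `max_backup` rotated files, each up to `max_size_mb` MB."""
    return max_size_mb * (max_backup + 1)

print(max_audit_log_disk_mb())       # defaults: 100 MB per file, 10 backups
print(max_audit_log_disk_mb(50, 3))  # a smaller retention window
```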
<br/>

### Audit Log Levels

The following table displays which parts of API transactions are logged for each [`AUDIT_LEVEL`](#api-audit-log-options) setting.

| `AUDIT_LEVEL` Setting | Metadata | Request Headers | Response Headers | Request Body | Response Body |
| --- | --- | --- | --- | --- | --- |
| 0 | ✓ | | | | |
| 1 | ✓ | ✓ | ✓ | | |
| 2 | ✓ | ✓ | ✓ | ✓ | |
| 3 | ✓ | ✓ | ✓ | ✓ | ✓ |

## Audit Log Policies

Audit log policies allow end users to configure redactions using `AuditPolicy` cluster-scoped CRs, in addition to the [default redactions and filters](#default-redactions--filters).

All configured audit log policies are additive.

Redaction policies use a regular expression (regex) engine to redact headers, while a JSONPath engine is used to redact request and response bodies.

The JSONPath engine does not support script or filter expressions. To get started with JSONPath expressions, a good resource to consult is [Stefan Goessner's article on JSONPath](https://goessner.net/articles/JsonPath/).

The structure of an audit policy CR is as follows:

```yaml
apiVersion: auditlog.cattle.io/v1
kind: AuditPolicy
spec:
  enabled: true # true/false
  # list of API request filters
  filters:
    - action: allow # allow/deny
      # would allow logs sent to "/foo/some/endpoint" but not "/foo" or "/foobar"
      requestURI: "/foo/.*"
  # additionalRedactions allows configuration of additional redactions
  additionalRedactions:
    # redacts headers based on regex expressions
    - headers:
        - "Cache.*"
      # paths redacts information from request and response bodies based on JSONPath expressions
      paths:
        - "$.gitCommit"
  verbosity:
    level: 0 # matches the levels in the above audit log table
    # request allows fine-grained control over which request data
    # gets included. This overrides the behavior of the generic verbosity.level
    request:
      headers: true # true/false
      body: true # true/false
    # response allows fine-grained control over which response data
    # gets included. This overrides the behavior of the generic verbosity.level
    response:
      headers: true # true/false
      body: true # true/false
```

### Examples

The following example logs only requests whose path contains `login`:

```yaml
apiVersion: auditlog.cattle.io/v1
kind: AuditPolicy
metadata:
  name: filters
spec:
  filters:
    - action: deny
      requestUri: ".*"
    - action: allow
      requestUri: ".*login.*"
```
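To build intuition for how such a deny-all/allow-some filter list behaves, here is a hypothetical re-implementation for illustration only, assuming filters are evaluated in order with the last matching rule winning (the actual controller semantics may differ):

```python
import re

# Illustrative model of the example policy: deny everything,
# then allow any URI containing "login".
FILTERS = [
    ("deny", re.compile(r".*")),
    ("allow", re.compile(r".*login.*")),
]

def is_logged(request_uri: str) -> bool:
    decision = "allow"  # assumed default when no filter matches
    for action, pattern in FILTERS:
        if pattern.fullmatch(request_uri):
            decision = action
    return decision == "allow"

print(is_logged("/v3-public/localProviders/local?action=login"))  # True
print(is_logged("/v1/apps.deployments"))                          # False
```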

The following example shows how to redact fields named `gitCommit` from request and response bodies:

```yaml
apiVersion: auditlog.cattle.io/v1
kind: AuditPolicy
metadata:
  name: redactions
spec:
  additionalRedactions:
    - paths:
        - "$.gitCommit"
```
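To illustrate what a path redaction does to a body, here is a minimal, hypothetical sketch that applies a `$.<field>`-style path to a decoded JSON body (the real controller uses a full JSONPath engine, and the `[redacted]` mask here mirrors the masked values shown in the log samples below):

```python
import json

def redact_top_level(body: dict, path: str, mask: str = "[redacted]") -> dict:
    """Redact a top-level field referenced by a `$.<field>` JSONPath."""
    assert path.startswith("$.")
    field = path[2:]
    if field in body:
        body = {**body, field: mask}  # return a masked copy
    return body

doc = json.loads('{"gitCommit": "a1b2c3d", "version": "v2.14.0"}')
print(redact_top_level(doc, "$.gitCommit"))
# {'gitCommit': '[redacted]', 'version': 'v2.14.0'}
```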

### Default redactions & filters

The audit log controller comes with built-in default redactions for common sensitive information.

#### Redacted headers

Generic headers:

- `Cookie`
- `Set-Cookie`
- `X-Api-Set-Cookie-Header`
- `Authorization`
- `X-Api-Tunnel-Params`
- `X-Api-Tunnel-Token`
- `X-Api-Auth-Header`
- `X-Amz-Security-Token`

#### Redacted body fields

Generic body fields:

- `credentials`
- `applicationSecret`
- `oauthCredential`
- `serviceAccountCredential`
- `spKey`
- `spCert`
- `certificate`
- `privateKey`
- `secretsEncryptionConfig`
- `manifestUrl`
- `insecureWindowsNodeCommand`
- `insecureNodeCommand`
- `insecureCommand`
- `command`
- `nodeCommand`
- `windowsNodeCommand`
- `clientRandom`

Generic body regex redactor:

- `".*([pP]assword|[Kk]ube[Cc]onfig|[Tt]oken).*"`

#### Cluster Driver

By default, any API request with fields tied to cluster drivers will have all fields other than `public*` or `optional*` fields redacted by the audit log controller.

#### Redacted URIs

Any endpoint containing `secrets` or `configmaps` will have the relevant fields stripped from both the request and response bodies. Additionally, any endpoint containing `/v3/imports/*` will have its URI redacted.

## Viewing API Audit Logs

### Docker Install

Share the `AUDIT_LOG_PATH` directory (default: `/var/log/auditlog`) with the host system. The log can be parsed by standard CLI tools or forwarded to a log collection tool like Fluentd, Filebeat, or Logstash.

### Kubernetes Install

Enabling the API audit log with the Helm chart install creates a `rancher-audit-log` sidecar container in the Rancher pod. This container streams the log to standard output (stdout). You can view the log as you would any container log.

The `rancher-audit-log` container is part of the `rancher` pod in the `cattle-system` namespace.

#### CLI

```bash
kubectl -n cattle-system logs -f rancher-84d886bdbb-s4s69 rancher-audit-log
```

#### Shipping the Audit Log

You can enable Rancher's built-in log collection and shipping for the cluster to ship the audit and other service logs to a supported collection endpoint. See [Rancher Tools - Logging](../../integrations-in-rancher/logging/logging.md) for details.

## Audit Log Samples

After you enable auditing, Rancher logs each API request and response as JSON. Each of the following code samples shows how to identify an API transaction.

### Metadata Level

If you set `AUDIT_LEVEL` to `0`, Rancher logs the metadata for every API request, but neither the body nor the request and response headers. The metadata provides basic information about the API transaction, such as the transaction ID, the initiator of the transaction, and the time it occurred.

```json
{
  "auditID": "40bd4e40-875b-4020-933e-4c4f4c4db366",
  "requestURI": "/v3/schemas",
  "user": {
    "name": "user-6j5s6",
    "group": ["system:authenticated", "system:cattle:authenticated"],
    "extra": {
      "principalid": ["local://user-6j5s6"],
      "requesthost": ["localhost:8443"],
      "requesttokenid": ["token-zs42h"],
      "username": ["admin"]
    }
  },
  "method": "GET",
  "remoteAddr": "127.0.0.1:58652",
  "responseCode": 200,
  "requestTimestamp": "2025-06-30T11:13:25-04:00",
  "responseTimestamp": "2025-06-30T11:13:25-04:00"
}
```
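Entries like the one above are plain JSON, so they can be post-processed with a few lines of scripting. For example, summarizing who did what (the entry below is a trimmed copy of the sample above):

```python
import json

entry = json.loads("""{
  "auditID": "40bd4e40-875b-4020-933e-4c4f4c4db366",
  "requestURI": "/v3/schemas",
  "user": {"name": "user-6j5s6", "extra": {"username": ["admin"]}},
  "method": "GET",
  "responseCode": 200
}""")

# Summarize the transaction: who, what, and the outcome.
summary = (
    f"{entry['user']['extra']['username'][0]} "
    f"{entry['method']} {entry['requestURI']} -> {entry['responseCode']}"
)
print(summary)  # admin GET /v3/schemas -> 200
```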

### Metadata and Headers Level

If you set `AUDIT_LEVEL` to `1`, Rancher logs the metadata and the request and response headers for every API request.

```json
{
  "auditID": "f8c83dc6-a080-4e2e-ab43-552bddf01716",
  "requestURI": "/v1/apps.deployments?page=1&pagesize=100&sort=metadata.name&filter=metadata.namespace!=p-npsl5&filter=metadata.namespace!=p-nzp6c&filter=metadata.namespace!=cattle-fleet-clusters-system&filter=metadata.namespace!=cattle-fleet-system&filter=metadata.namespace!=cattle-global-data&filter=metadata.namespace!=cattle-impersonation-system&filter=metadata.namespace!=cattle-provisioning-capi-system&filter=metadata.namespace!=cattle-system&filter=metadata.namespace!=cattle-ui-plugin-system&filter=metadata.namespace!=cluster-fleet-local-local-1a3d67d0a899&filter=metadata.namespace!=fleet-default&filter=metadata.namespace!=fleet-local&filter=metadata.namespace!=kube-node-lease&filter=metadata.namespace!=kube-public&filter=metadata.namespace!=kube-system&exclude=metadata.managedFields",
  "user": {
    "name": "user-6j5s6",
    "group": ["system:authenticated", "system:cattle:authenticated"],
    "extra": {
      "principalid": ["local://user-6j5s6"],
      "requesthost": ["localhost:8443"],
      "requesttokenid": ["token-zs42h"],
      "username": ["admin"]
    }
  },
  "method": "GET",
  "remoteAddr": "127.0.0.1:58833",
  "responseCode": 200,
  "requestTimestamp": "2025-06-30T11:17:04-04:00",
  "responseTimestamp": "2025-06-30T11:17:04-04:00",
  "requestHeader": {
    "Accept": ["application/json"],
    "Accept-Encoding": ["gzip, deflate, br, zstd"],
    "Accept-Language": ["en-US,en;q=0.5"],
    "Connection": ["keep-alive"],
    "Cookie": ["[redacted]"],
    "Referer": ["https://localhost:8443/dashboard/c/local/explorer/apps.deployment"],
    "Sec-Fetch-Dest": ["empty"],
    "Sec-Fetch-Mode": ["cors"],
    "Sec-Fetch-Site": ["same-origin"],
    "User-Agent": ["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:140.0) Gecko/20100101 Firefox/140.0"],
    "X-Api-Csrf": ["fccc690cab7b0c169b3fc6527edadef3"]
  },
  "responseHeader": {
    "Cache-Control": ["no-cache, no-store, must-revalidate"],
    "Content-Encoding": ["gzip"],
    "Content-Type": ["application/json"],
    "Expires": ["Wed 24 Feb 1982 18:42:00 GMT"],
    "X-Api-Cattle-Auth": ["true"],
    "X-Api-Schemas": ["https://localhost:8443/v1/schemas"],
    "X-Content-Type-Options": ["nosniff"]
  }
}
```

### Metadata, Headers and Request Body Level

If you set `AUDIT_LEVEL` to `2`, Rancher logs the metadata, the request and response headers, and the request body for every API request.

The code sample below depicts an API request with its metadata, headers, and request body.

```json
{
  "auditID": "d1088a09-2a13-4450-970e-0d44bd2c49ee",
  "requestURI": "/v3/projects",
  "user": {
    "name": "user-6j5s6",
    "group": ["system:authenticated", "system:cattle:authenticated"],
    "extra": {
      "principalid": ["local://user-6j5s6"],
      "requesthost": ["localhost:8443"],
      "requesttokenid": ["token-zs42h"],
      "username": ["admin"]
    }
  },
  "method": "POST",
  "remoteAddr": "127.0.0.1:49966",
  "responseCode": 201,
  "requestTimestamp": "2025-06-30T12:32:13-04:00",
  "responseTimestamp": "2025-06-30T12:32:13-04:00",
  "requestHeader": {
    "Accept": ["application/json"],
    "Accept-Encoding": ["gzip, deflate, br, zstd"],
    "Accept-Language": ["en-US,en;q=0.5"],
    "Connection": ["keep-alive"],
    "Content-Length": ["214"],
    "Content-Type": ["application/json"],
    "Cookie": ["[redacted]"],
    "Impersonate-Extra-Principalid": ["local://user-6j5s6"],
    "Impersonate-Extra-Requesthost": ["localhost:8443"],
    "Impersonate-Extra-Requesttokenid": ["token-zs42h"],
    "Impersonate-Extra-Username": ["admin"],
    "Impersonate-Group": ["system:authenticated", "system:cattle:authenticated"],
    "Impersonate-User": ["user-6j5s6"],
    "Origin": ["https://localhost:8443"],
    "Priority": ["u=0"],
    "Referer": ["https://localhost:8443/dashboard/c/local/explorer/management.cattle.io.project/create"],
    "Sec-Fetch-Dest": ["empty"],
    "Sec-Fetch-Mode": ["cors"],
    "Sec-Fetch-Site": ["same-origin"],
    "User-Agent": ["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:140.0) Gecko/20100101 Firefox/140.0"],
    "X-Api-Csrf": ["fccc690cab7b0c169b3fc6527edadef3"]
  },
  "responseHeader": {
    "Cache-Control": ["no-cache, no-store, must-revalidate"],
    "Content-Encoding": ["gzip"],
    "Content-Type": ["application/json"],
    "Expires": ["Wed 24 Feb 1982 18:42:00 GMT"],
    "X-Api-Cattle-Auth": ["true"],
    "X-Api-Schemas": ["https://localhost:8443/v3/project/schemas"],
    "X-Content-Type-Options": ["nosniff"]
  },
  "requestBody": {
    "annotations": {},
    "clusterId": "local",
    "containerDefaultResourceLimit": {},
    "creatorId": "local://user-6j5s6",
    "labels": {},
    "name": "example-project",
    "namespaceDefaultResourceQuota": {},
    "resourceQuota": {},
    "type": "project"
  }
}
```

### Metadata, Headers, Request Body and Response Body Level

If you set `AUDIT_LEVEL` to `3`, Rancher logs the metadata, the request and response headers, the request body, and the response body.

The code sample below depicts an example of an API request with that information logged.

```json
{
  "auditID": "a9549a5b-4351-4bd5-adcd-12f7ec667a6b",
  "requestURI": "/v3/projects",
  "user": {
    "name": "user-6j5s6",
    "group": ["system:authenticated", "system:cattle:authenticated"],
    "extra": {
      "principalid": ["local://user-6j5s6"],
      "requesthost": ["localhost:8443"],
      "requesttokenid": ["token-zs42h"],
      "username": ["admin"]
    }
  },
  "method": "POST",
  "remoteAddr": "127.0.0.1:50454",
  "responseCode": 201,
  "requestTimestamp": "2025-06-30T12:42:24-04:00",
  "responseTimestamp": "2025-06-30T12:42:24-04:00",
  "requestHeader": {
    "Accept": ["application/json"],
    "Accept-Encoding": ["gzip, deflate, br, zstd"],
    "Accept-Language": ["en-US,en;q=0.5"],
    "Connection": ["keep-alive"],
    "Content-Length": ["214"],
    "Content-Type": ["application/json"],
    "Cookie": ["[redacted]"],
    "Impersonate-Extra-Principalid": ["local://user-6j5s6"],
    "Impersonate-Extra-Requesthost": ["localhost:8443"],
    "Impersonate-Extra-Requesttokenid": ["token-zs42h"],
    "Impersonate-Extra-Username": ["admin"],
    "Impersonate-Group": ["system:authenticated", "system:cattle:authenticated"],
    "Impersonate-User": ["user-6j5s6"],
    "Origin": ["https://localhost:8443"],
    "Priority": ["u=0"],
    "Referer": ["https://localhost:8443/dashboard/c/local/explorer/management.cattle.io.project/create"],
    "Sec-Fetch-Dest": ["empty"],
    "Sec-Fetch-Mode": ["cors"],
    "Sec-Fetch-Site": ["same-origin"],
    "User-Agent": ["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:140.0) Gecko/20100101 Firefox/140.0"],
    "X-Api-Csrf": ["fccc690cab7b0c169b3fc6527edadef3"]
  },
  "responseHeader": {
    "Cache-Control": ["no-cache, no-store, must-revalidate"],
    "Content-Encoding": ["gzip"],
    "Content-Type": ["application/json"],
    "Expires": ["Wed 24 Feb 1982 18:42:00 GMT"],
    "X-Api-Cattle-Auth": ["true"],
    "X-Api-Schemas": ["https://localhost:8443/v3/project/schemas"],
    "X-Content-Type-Options": ["nosniff"]
  },
  "requestBody": {
    "annotations": {},
    "clusterId": "local",
    "containerDefaultResourceLimit": {},
    "creatorId": "local://user-6j5s6",
    "labels": {},
    "name": "example-project",
    "namespaceDefaultResourceQuota": {},
    "resourceQuota": {},
    "type": "project"
  },
  "responseBody": {
    "actions": {
      "exportYaml": "https://localhost:8443/v3/projects/local:p-qt6tq?action=exportYaml"
    },
    "annotations": {
      "authz.management.cattle.io/creator-role-bindings": "{\"required\":[\"project-owner\"]}"
    },
    "backingNamespace": "local-p-qt6tq",
    "baseType": "project",
    "clusterId": "local",
    "containerDefaultResourceLimit": {
      "type": "/v3/schemas/containerResourceLimit"
    },
    "created": "2025-06-30T16:42:24Z",
    "createdTS": 1751301744000,
    "creatorId": "user-6j5s6",
    "id": "local:p-qt6tq",
    "labels": {
      "cattle.io/creator": "norman"
    },
    "links": {
      "basicAuths": "https://localhost:8443/v3/projects/local:p-qt6tq/basicauths",
      "certificates": "https://localhost:8443/v3/projects/local:p-qt6tq/certificates",
      "configMaps": "https://localhost:8443/v3/projects/local:p-qt6tq/configmaps",
      "cronJobs": "https://localhost:8443/v3/projects/local:p-qt6tq/cronjobs",
      "daemonSets": "https://localhost:8443/v3/projects/local:p-qt6tq/daemonsets",
      "deployments": "https://localhost:8443/v3/projects/local:p-qt6tq/deployments",
      "dnsRecords": "https://localhost:8443/v3/projects/local:p-qt6tq/dnsrecords",
      "dockerCredentials": "https://localhost:8443/v3/projects/local:p-qt6tq/dockercredentials",
      "horizontalPodAutoscalers": "https://localhost:8443/v3/projects/local:p-qt6tq/horizontalpodautoscalers",
      "ingresses": "https://localhost:8443/v3/projects/local:p-qt6tq/ingresses",
      "jobs": "https://localhost:8443/v3/projects/local:p-qt6tq/jobs",
      "namespacedBasicAuths": "https://localhost:8443/v3/projects/local:p-qt6tq/namespacedbasicauths",
      "namespacedCertificates": "https://localhost:8443/v3/projects/local:p-qt6tq/namespacedcertificates",
      "namespacedDockerCredentials": "https://localhost:8443/v3/projects/local:p-qt6tq/namespaceddockercredentials",
      "namespacedSecrets": "https://localhost:8443/v3/projects/local:p-qt6tq/namespacedsecrets",
      "namespacedServiceAccountTokens": "[redacted]",
      "namespacedSshAuths": "https://localhost:8443/v3/projects/local:p-qt6tq/namespacedsshauths",
      "persistentVolumeClaims": "https://localhost:8443/v3/projects/local:p-qt6tq/persistentvolumeclaims",
      "pods": "https://localhost:8443/v3/projects/local:p-qt6tq/pods",
      "projectNetworkPolicies": "https://localhost:8443/v3/projects/local:p-qt6tq/projectnetworkpolicies",
      "projectRoleTemplateBindings": "https://localhost:8443/v3/projects/local:p-qt6tq/projectroletemplatebindings",
      "remove": "https://localhost:8443/v3/projects/local:p-qt6tq",
      "replicaSets": "https://localhost:8443/v3/projects/local:p-qt6tq/replicasets",
      "replicationControllers": "https://localhost:8443/v3/projects/local:p-qt6tq/replicationcontrollers",
      "secrets": "https://localhost:8443/v3/projects/local:p-qt6tq/secrets",
      "self": "https://localhost:8443/v3/projects/local:p-qt6tq",
      "serviceAccountTokens": "[redacted]",
      "services": "https://localhost:8443/v3/projects/local:p-qt6tq/services",
      "sshAuths": "https://localhost:8443/v3/projects/local:p-qt6tq/sshauths",
      "statefulSets": "https://localhost:8443/v3/projects/local:p-qt6tq/statefulsets",
      "subscribe": "https://localhost:8443/v3/projects/local:p-qt6tq/subscribe",
      "update": "https://localhost:8443/v3/projects/local:p-qt6tq",
      "workloads": "https://localhost:8443/v3/projects/local:p-qt6tq/workloads"
    },
    "name": "example-project",
    "namespaceDefaultResourceQuota": {
      "limit": {
        "type": "/v3/schemas/resourceQuotaLimit"
      },
      "type": "/v3/schemas/namespaceResourceQuota"
    },
    "namespaceId": null,
    "resourceQuota": {
      "limit": {
        "type": "/v3/schemas/resourceQuotaLimit"
      },
      "type": "/v3/schemas/projectResourceQuota",
      "usedLimit": {
        "type": "/v3/schemas/resourceQuotaLimit"
      }
    },
    "state": "active",
    "transitioning": "no",
    "transitioningMessage": "",
    "type": "project",
    "uuid": "b582603b-7826-4302-8393-792df2611265"
  }
}
```
|
||||
@@ -0,0 +1,78 @@
---
title: Enabling Cluster Agent Scheduling Customization
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-cluster-agent-scheduling-customization"/>
</head>

The `cattle-cluster-agent` supports the automatic deployment of a Priority Class and a Pod Disruption Budget.

When this feature is enabled, all newly provisioned Node Driver, Custom, and Imported RKE2 and K3s clusters automatically deploy a Priority Class and a Pod Disruption Budget during the provisioning process. Existing clusters can be gradually updated with this new behavior using the [Rancher UI or by setting a specific annotation](#updating-existing-clusters) on cluster objects.

This feature is disabled by default.

## Enabling Cluster Agent Scheduling Customization

:::info
Enabling or disabling this feature only impacts new clusters. Existing downstream clusters are not automatically updated. See [_Updating Existing Clusters_](#updating-existing-clusters).
:::

1. In the upper left corner, click **☰ > Global Settings**.
1. Select **Feature Flags**.
1. Find the `cluster-agent-scheduling-customization` feature and click **⋮ > Activate**.

## Configuring the Global Settings

You can customize the default Priority Class (PC) and Pod Disruption Budget (PDB) by updating the `cluster-agent-default-priority-class` and `cluster-agent-default-pod-disruption-budget` global settings in the Rancher UI. Note that both the Priority Class and the Pod Disruption Budget have configuration restrictions:

+ The `Value` set for the default PC must be between negative one billion and one billion.
+ The `PreemptionPolicy` set for the PC must be either `PreemptLowerPriority` or `Never`.
+ You cannot configure the PDB `minAvailable` and `maxUnavailable` fields to both have a non-zero value.
+ The PDB `minAvailable` field must be either a non-negative integer or a non-negative whole-number percentage (e.g. `1` or `100%`).
+ The PDB `maxUnavailable` field must be either a non-negative integer or a non-negative whole-number percentage (e.g. `1` or `100%`).
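
The PDB restrictions above can be expressed compactly. The following is an illustrative sketch of that validation logic only (the function name is invented; this is not Rancher's actual validator):

```python
import re

def valid_pdb(min_available, max_unavailable):
    """Illustrative check of the PDB restrictions; not Rancher's validator."""
    def well_formed(v):
        # A value may be a non-negative integer, or a whole-number
        # percentage string such as "100%".
        if isinstance(v, int):
            return v >= 0
        return bool(re.fullmatch(r"\d+%", v))

    def non_zero(v):
        return v not in (0, "0%")

    if not (well_formed(min_available) and well_formed(max_unavailable)):
        return False
    # minAvailable and maxUnavailable must not both be non-zero.
    return not (non_zero(min_available) and non_zero(max_unavailable))
```

For example, `valid_pdb(1, 0)` passes, while `valid_pdb(1, "50%")` fails because both fields are non-zero.
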

## Updating Existing Clusters

:::info
When this feature is disabled, you cannot modify the cluster agent scheduling customization fields for existing clusters. However, you can always remove the configuration, regardless of the feature's status.
:::

After enabling this feature, you can configure scheduling customization for existing clusters in two ways:

+ **Using the Rancher UI**
  + Edit the desired cluster and navigate to the **Cluster Agent** tab within the **Cluster Configuration** section.
  + Enable the `Prevent Rancher cluster agent pod eviction` checkbox.
  + The necessary fields on the associated `clusters.provisioning.cattle.io` or `clusters.management.cattle.io` object are automatically configured using the values set in the global settings.
  + Save the cluster.
+ **Using an annotation**
  + The `provisioning.cattle.io/enable-scheduling-customization` annotation can be used to update clusters without the Rancher UI. This annotation is automatically removed from the cluster after the Priority Class and Pod Disruption Budget are configured.
  + Set the annotation's value to `true` or `false` to add or remove the scheduling customization automatically.
  + For Node Driver provisioned and Custom clusters, apply this annotation to the associated `clusters.provisioning.cattle.io` object.
  + For Imported clusters, apply the annotation to the associated `clusters.management.cattle.io` object.
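
As a sketch of the annotation approach (the cluster name and the `fleet-default` namespace below are placeholders for your environment), the annotation sits in the cluster object's metadata:

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: my-cluster          # placeholder
  namespace: fleet-default  # placeholder namespace
  annotations:
    provisioning.cattle.io/enable-scheduling-customization: "true"
```
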
## Applying Updated Global Settings

To prevent unexpected changes in scheduler behavior, Rancher does not update existing downstream clusters when the `cluster-agent-default-priority-class` and `cluster-agent-default-pod-disruption-budget` global settings are changed. There are two ways to update existing clusters to use the most recent global settings:

+ **Using the Rancher UI**
  + When configuring a cluster, an additional checkbox is shown in the **Cluster Agent** tab within the **Cluster Configuration** section. Checking the `Apply global settings for Priority Class and Pod Disruption Budget` checkbox automatically updates the Priority Class and Pod Disruption Budget to match the global settings once the cluster is saved.
+ **Adjusting the cluster YAML**
  + You may manually adjust the relevant fields in the cluster object using `kubectl` or the Rancher UI **Edit as YAML** feature. The scheduling customization can be found in the `spec.clusterAgentDeploymentCustomization.schedulingCustomization` section of the cluster object.
  + Alternatively, the `provisioning.cattle.io/enable-scheduling-customization` annotation can be used to remove and re-add the updated scheduling customization fields on a specific cluster.
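
For orientation, that section of the cluster YAML looks roughly like the following (a sketch: the nested values are illustrative and normally come from the global settings):

```yaml
spec:
  clusterAgentDeploymentCustomization:
    schedulingCustomization:
      priorityClass:
        value: 1000000000
        preemptionPolicy: PreemptLowerPriority
      podDisruptionBudget:
        minAvailable: "1"
```
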

## Downstream Objects

When this feature is enabled for a given cluster, Rancher automatically creates two downstream resources:

+ A Pod Disruption Budget named `cattle-cluster-agent-pod-disruption-budget`, created in the `cattle-system` namespace.
+ A Priority Class named `cattle-cluster-agent-priority-class`.

These objects are maintained by Rancher and must not be modified or deleted. The Rancher server automatically updates these objects to match the configuration set on the cluster object and removes them when they are no longer needed.

### RBAC Considerations

Before enabling this feature on a downstream cluster, cluster administrators should review their RBAC configuration to prevent broad access to the `cattle-cluster-agent-priority-class`. In cases where external users have access to a cluster, such as when offering clusters as a service, it is recommended to limit access to the `cattle-cluster-agent-priority-class` object to prevent changes or deletion.

Similar considerations do not apply to the `cattle-cluster-agent-pod-disruption-budget` object, because Pod Disruption Budgets are namespaced objects and Rancher creates it in the privileged `cattle-system` namespace.
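
One way to limit that access is a read-only ClusterRole scoped with `resourceNames` (a generic Kubernetes sketch, not an object Rancher creates):

```yaml
# Grants read-only access to the Rancher-managed Priority Class.
# Bind this, rather than broader priorityclasses permissions, to external users.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-cattle-cluster-agent-priority-class
rules:
  - apiGroups: ["scheduling.k8s.io"]
    resources: ["priorityclasses"]
    resourceNames: ["cattle-cluster-agent-priority-class"]
    verbs: ["get"]
```
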
@@ -0,0 +1,17 @@
---
title: Continuous Delivery
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/continuous-delivery"/>
</head>

[Continuous Delivery with Fleet](../../../integrations-in-rancher/fleet/fleet.md) comes preinstalled in Rancher and can't be fully disabled. However, the Fleet feature for GitOps continuous delivery may be disabled using the `continuous-delivery` feature flag.

To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.](enable-experimental-features.md)

Environment Variable Key | Default Value | Description
---|---|---
`continuous-delivery` | `true` | Setting this flag to `false` disables the GitOps continuous delivery feature of Fleet.

If Fleet was disabled in Rancher v2.5.x, it becomes enabled if Rancher is upgraded to v2.6.x. Only the continuous delivery part of Fleet can be disabled. When `continuous-delivery` is disabled, the `gitjob` deployment is no longer deployed into the Rancher server's local cluster, and **Continuous Delivery** is not shown in the Rancher UI.
@@ -0,0 +1,125 @@
---
title: Enabling Experimental Features
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features"/>
</head>

Rancher includes some features that are experimental and disabled by default. You might want to enable these features, for example, if you decide that the benefits of using an [unsupported storage type](unsupported-storage-drivers.md) outweigh the risk of using an untested feature. Feature flags allow you to try these features that are not enabled by default.

The features can be enabled in three ways:

- [Enable features when starting Rancher.](#enabling-features-when-starting-rancher) When installing Rancher with a CLI, you can use a feature flag to enable a feature by default.
- [Enable features from the Rancher UI](#enabling-features-with-the-rancher-ui) by going to the **Settings** page.
- [Enable features with the Rancher API](#enabling-features-with-the-rancher-api) after installing Rancher.

Each feature has two values:

- A default value, which can be configured with a flag or environment variable from the command line.
- A set value, which can be configured with the Rancher API or UI.

If no value has been set, Rancher uses the default value.

Because the API sets the actual value and the command line sets only the default value, enabling or disabling a feature with the API or UI overrides any value set with the command line.

For example, if you install Rancher, set a feature flag to true with the Rancher API, then upgrade Rancher with a command that sets the feature flag to false, the default value is now false, but the feature remains enabled because it was set with the Rancher API. If you then deleted the set value (true) with the Rancher API, setting it to NULL, the default value (false) would take effect. See the [feature flags page](../../../getting-started/installation-and-upgrade/installation-references/feature-flags.md) for more information.
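
The precedence rule above can be summarized in a few lines (a sketch of the rule, not Rancher's implementation):

```python
def effective_value(default, set_value):
    """Sketch of the precedence rule: the set value (API/UI) wins over the
    command-line default; a cleared (None) set value falls back to the default."""
    return default if set_value is None else set_value

# An upgrade changes the default to False, but the API-set value True remains:
effective_value(False, True)   # True: the feature stays enabled
# Deleting the set value via the API restores the default:
effective_value(False, None)   # False: the default takes effect
```
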

## Enabling Features when Starting Rancher

When you install Rancher, enable the feature you want with a feature flag. The command is different depending on whether you are installing Rancher on a single node or performing a Kubernetes installation of Rancher.

### Enabling Features for Kubernetes Installs

:::note

Values set from the Rancher API will override the value passed in through the command line.

:::

When installing Rancher with a Helm chart, use the `--set` option. In the example below, two features are enabled by passing the feature flag names in a comma-separated list (the comma inside the value is escaped with `\,` so that Helm does not split it):

```
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set 'extraEnv[0].name=CATTLE_FEATURES' \
  --set 'extraEnv[0].value=<FEATURE-FLAG-NAME-1>=true\,<FEATURE-FLAG-NAME-2>=true'
```

:::note

If you are installing an alpha version, Helm requires adding the `--devel` option to the command.

:::

### Enabling Features for Air Gap Installs

To perform an [air gap installation of Rancher](../../../getting-started/installation-and-upgrade/other-installation-methods/air-gapped-helm-cli-install/install-rancher-ha.md), add a Helm chart repository and download a Helm chart, then install Rancher with Helm.

When you install the Helm chart, pass the feature flag names in a comma-separated list, as in the following example:

```
# systemDefaultRegistry sets a default private registry to be used in Rancher.
# useBundledSystemChart uses the packaged Rancher system charts.
helm install rancher ./rancher-<VERSION>.tgz \
  --namespace cattle-system \
  --set hostname=<RANCHER.YOURDOMAIN.COM> \
  --set rancherImage=<REGISTRY.YOURDOMAIN.COM:PORT>/rancher/rancher \
  --set ingress.tls.source=secret \
  --set systemDefaultRegistry=<REGISTRY.YOURDOMAIN.COM:PORT> \
  --set useBundledSystemChart=true \
  --set 'extraEnv[0].name=CATTLE_FEATURES' \
  --set 'extraEnv[0].value=<FEATURE-FLAG-NAME-1>=true\,<FEATURE-FLAG-NAME-2>=true'
```

### Enabling Features for Docker Installs

When installing Rancher with Docker, use the `--features` option. In the example below, two features are enabled by passing the feature flag names in a comma-separated list:

```
docker run -d -p 80:80 -p 443:443 \
  --restart=unless-stopped \
  rancher/rancher:rancher-latest \
  --features=<FEATURE-FLAG-NAME-1>=true,<FEATURE-FLAG-NAME-2>=true
```

## Enabling Features with the Rancher UI

1. In the upper left corner, click **☰ > Global Settings**.
1. Click **Feature Flags**.
1. To enable a feature, go to the disabled feature you want to enable and click **⋮ > Activate**.

**Result:** The feature is enabled.

### Disabling Features with the Rancher UI

1. In the upper left corner, click **☰ > Global Settings**.
1. Click **Feature Flags**. You will see a list of experimental features.
1. To disable a feature, go to the enabled feature you want to disable and click **⋮ > Deactivate**.

**Result:** The feature is disabled.

## Enabling Features with the Rancher API

1. Go to `<RANCHER-SERVER-URL>/v3/features`.
1. In the `data` section, you will see an array containing all of the features that can be turned on with feature flags. The name of the feature is in the `id` field. Click the name of the feature you want to enable.
1. In the upper left corner of the screen, under **Operations,** click **Edit**.
1. In the **Value** drop-down menu, click **True**.
1. Click **Show Request**.
1. Click **Send Request**.
1. Click **Close**.

**Result:** The feature is enabled.

### Disabling Features with the Rancher API

1. Go to `<RANCHER-SERVER-URL>/v3/features`.
1. In the `data` section, you will see an array containing all of the features that can be turned on with feature flags. The name of the feature is in the `id` field. Click the name of the feature you want to disable.
1. In the upper left corner of the screen, under **Operations,** click **Edit**.
1. In the **Value** drop-down menu, click **False**.
1. Click **Show Request**.
1. Click **Send Request**.
1. Click **Close**.

**Result:** The feature is disabled.
@@ -0,0 +1,36 @@
---
title: UI for Istio Virtual Services and Destination Rules
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/istio-traffic-management-features"/>
</head>

This feature enables a UI that lets you create, read, update, and delete virtual services and destination rules, which are traffic management features of Istio.

> **Prerequisite:** Turning on this feature does not enable Istio. A cluster administrator needs to [enable Istio for the cluster](../istio-setup-guide/istio-setup-guide.md) in order to use the feature.

To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.](enable-experimental-features.md)

Environment Variable Key | Default Value | Status | Available as of
---|---|---|---
`istio-virtual-service-ui` | `false` | Experimental | v2.3.0
`istio-virtual-service-ui` | `true` | GA | v2.3.2

## About this Feature

A central advantage of Istio's traffic management features is that they allow dynamic request routing, which is useful for canary deployments, blue/green deployments, or A/B testing.

When enabled, this feature turns on a page that lets you configure some traffic management features of Istio using the Rancher UI. Without this feature, you need to use `kubectl` to manage traffic with Istio.

The feature enables two UI tabs: one tab for **Virtual Services** and another for **Destination Rules**.

- **Virtual services** intercept and direct traffic to your Kubernetes services, allowing you to direct percentages of traffic from a request to different services. You can use them to define a set of routing rules to apply when a host is addressed. For details, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/)
- **Destination rules** serve as the single source of truth about which service versions are available to receive traffic from virtual services. You can use these resources to define policies that apply to traffic intended for a service after routing has occurred. For details, refer to the [Istio documentation.](https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule)
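
As an illustration of what these two resources express together (the `reviews` service and its `v1`/`v2` subsets are placeholders for your own workloads), a virtual service can split traffic between subsets that a destination rule defines:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90   # 90% of requests go to v1
        - destination:
            host: reviews
            subset: v2
          weight: 10   # 10% canary traffic to v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```
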
To see these tabs:

1. Click **☰ > Cluster Management**.
1. Go to the cluster where Istio is installed and click **Explore**.
1. In the left navigation bar, click **Istio**.
1. You will see tabs for **Kiali** and **Jaeger**. From the left navigation bar, you can view and configure **Virtual Services** and **Destination Rules**.
@@ -0,0 +1,27 @@
---
title: "Running on ARM64 Mixed Architecture (Experimental)"
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/rancher-on-arm64"/>
</head>

:::caution

Running on an ARM64 mixed architecture platform is currently an experimental feature and is not yet officially supported in Rancher. Therefore, we do not recommend using ARM64 mixed architecture based nodes in a production environment.

:::

The following options are available when using an ARM64 platform:

- Creating a custom cluster and adding ARM64 based nodes
  - The Kubernetes cluster version must be 1.12 or higher.
- Importing clusters that contain ARM64 based nodes
  - The Kubernetes cluster version must be 1.12 or higher.

Depending on how your cluster is provisioned, refer to [RKE2 cluster configuration options](../../../reference-guides/cluster-configuration/rancher-server-configuration/rke2-cluster-configuration.md) or [K3s cluster configuration options](../../../reference-guides/cluster-configuration/rancher-server-configuration/k3s-cluster-configuration.md) for more information.

The following features are not tested:

- Monitoring, alerts, notifiers, pipelines, and logging
- Launching apps from the catalog
@@ -0,0 +1,21 @@
---
title: RoleTemplate Aggregation
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/role-template-aggregation"/>
</head>

:::caution
RoleTemplate aggregation is an experimental feature in v2.13 that changes the RBAC architecture used for RoleTemplates, ClusterRoleTemplateBindings, and ProjectRoleTemplateBindings. **It is not supported for production environments**. Breaking changes may occur between v2.13 and v2.14.
:::

RoleTemplate aggregation implements RoleTemplates, ClusterRoleTemplateBindings, and ProjectRoleTemplateBindings using the Kubernetes feature [Aggregated ClusterRoles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles). The new architecture results in a net reduction in RBAC objects (Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings) both in the Rancher cluster and in the downstream clusters.
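
For background, an aggregated ClusterRole combines the rules of other ClusterRoles selected by label, so a single binding can cover many rule sets (a generic Kubernetes example, not the exact objects Rancher generates):

```yaml
# The aggregating role: its rules are filled in by the controller manager
# from every ClusterRole carrying the matching label.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        rbac.example.com/aggregate-to-monitoring: "true"
rules: []
---
# A contributing role: labeled so its rules are aggregated into "monitoring".
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-endpoints
  labels:
    rbac.example.com/aggregate-to-monitoring: "true"
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
```
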

For more information on how the feature can improve scalability and performance, see the [Rancher blog post](https://www.suse.com/c/rancher_blog/fewer-bindings-more-power-ranchers-rbac-boost-for-enhanced-performance-and-scalability/).

| Environment Variable Key | Default Value | Description |
| --- | --- | --- |
| `aggregated-roletemplates` | `false` | [Beta] Makes RoleTemplates use aggregation for generated RBAC roles. |

The value of this feature flag is locked on installation, which shows up in the UI as a lock symbol beside the feature flag. The feature can only be set on the very first installation of Rancher; after that, attempts to modify the value are denied.
@@ -0,0 +1,43 @@
---
title: Allowing Unsupported Storage Drivers
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-experimental-features/unsupported-storage-drivers"/>
</head>

This feature allows you to use types for storage providers and provisioners that are not enabled by default.

To enable or disable this feature, refer to the instructions on [the main page about enabling experimental features.](enable-experimental-features.md)

Environment Variable Key | Default Value | Description
---|---|---
`unsupported-storage-drivers` | `false` | This feature enables types for storage providers and provisioners that are not enabled by default.

### Types for Persistent Volume Plugins that are Enabled by Default

Below is a list of storage types for persistent volume plugins that are enabled by default. When enabling this feature flag, any persistent volume plugins that are not on this list are considered experimental and unsupported:

Name | Plugin
--------|----------
Amazon EBS Disk | `aws-ebs`
AzureFile | `azure-file`
AzureDisk | `azure-disk`
Google Persistent Disk | `gce-pd`
Longhorn | `flex-volume-longhorn`
VMware vSphere Volume | `vsphere-volume`
Local | `local`
Network File System | `nfs`
hostPath | `host-path`

### Types for StorageClass that are Enabled by Default

Below is a list of storage types for a StorageClass that are enabled by default. When enabling this feature flag, any StorageClass provisioners that are not on this list are considered experimental and unsupported:

Name | Plugin
--------|--------
Amazon EBS Disk | `aws-ebs`
AzureFile | `azure-file`
AzureDisk | `azure-disk`
Google Persistent Disk | `gce-pd`
Longhorn | `flex-volume-longhorn`
VMware vSphere Volume | `vsphere-volume`
Local | `local`
@@ -0,0 +1,62 @@
---
title: Enabling User Retention
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/how-to-guides/advanced-user-guides/enable-user-retention"/>
</head>

In Rancher v2.8.5 and later, you can enable user retention to automatically disable or delete inactive user accounts after a configurable time period.

The user retention feature is off by default.

## Enabling User Retention with kubectl

To enable user retention, you must set `user-retention-cron`. You must also set at least one of `disable-inactive-user-after` or `delete-inactive-user-after`. You can use `kubectl edit setting <name-of-setting>` to open your editor of choice and set these values.

## Configuring Rancher to Delete Users, Disable Users, or Combine Operations

Rancher uses two global user retention settings to determine if and when users are disabled or deleted after a certain period of inactivity. Disabled accounts must be re-enabled before users can log in again. If an account is deleted without being disabled, users may be able to log in through external authentication, and the deleted account will be recreated.

The global settings, `disable-inactive-user-after` and `delete-inactive-user-after`, do not block one another from running.

For example, you can set both operations to run. If you give `disable-inactive-user-after` a shorter duration than `delete-inactive-user-after`, the user retention process disables inactive accounts before deleting them.

You can also edit some user retention settings on a specific user's `UserAttribute`. Setting these values overrides the global settings. See [User-specific User Retention Overrides](#user-specific-user-retention-overrides) for more details.

### Required User Retention Settings

The following are global settings:

- `user-retention-cron`: Describes how often the user retention process runs. The value is a cron expression (for example, `0 * * * *` for every hour).
- `disable-inactive-user-after`: The amount of time that a user account can be inactive before the process disables the account. Disabling an account forces the user to request that an administrator re-enable the account before they can log in to use it. Values are expressed in [time.Duration units](https://pkg.go.dev/time#ParseDuration) (for example, `720h` for 720 hours or 30 days). The value must be greater than `auth-user-session-ttl-minutes`, which is `16h` by default. If the value is not set, is set to the empty string, or equals 0, the process does not disable any inactive accounts.
- `delete-inactive-user-after`: The amount of time that a user account can be inactive before the process deletes the account. Values are expressed in [time.Duration units](https://pkg.go.dev/time#ParseDuration) (for example, `720h` for 720 hours or 30 days). The value must be greater than `auth-user-session-ttl-minutes`, which is `16h` by default. The value should also be greater than `336h` (14 days); otherwise it is rejected by the Rancher webhook. If you need the value to be lower than 14 days, you can [bypass the webhook](../../reference-guides/rancher-webhook.md#bypassing-the-webhook). If the value is not set, is set to the empty string, or equals 0, the process does not delete any inactive accounts.
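
To make the interaction of the two durations concrete, here is a small sketch (the function and names are illustrative, not Rancher's code) of how an account's state follows from its inactivity:

```python
from datetime import datetime, timedelta, timezone

def retention_state(last_login, now, disable_after=None, delete_after=None):
    """Sketch of the retention rule above, not Rancher's implementation.
    An unset (None) or zero duration skips that operation entirely."""
    inactive = now - last_login
    if delete_after and inactive > delete_after:
        return "deleted"
    if disable_after and inactive > disable_after:
        return "disabled"
    return "active"

last = datetime(2024, 1, 1, tzinfo=timezone.utc)
now = datetime(2024, 2, 1, tzinfo=timezone.utc)   # 31 days (744h) inactive
retention_state(last, now, disable_after=timedelta(hours=720))   # "disabled"
retention_state(last, now, delete_after=timedelta(hours=336))    # "deleted"
```

With both durations set and `disable-inactive-user-after` shorter, the account is disabled first and only deleted once the longer duration has also elapsed.
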

### Optional User Retention Settings

The following are global settings:

- `user-retention-dry-run`: If set to `true`, the user retention process runs without actually deleting or disabling any user accounts. This can help test user retention behavior before allowing the process to disable or delete user accounts in a production environment.
- `user-last-login-default`: If a user does not have `UserAttribute.LastLogin` set on their account, this setting is used instead. The value is expressed as an [RFC 3339 date-time](https://datatracker.ietf.org/doc/html/rfc3339#section-5.6) truncated to the last second, for example, `2023-03-01T00:00:00Z`. If the value is set to the empty string or equals 0, this setting is not used.

#### User-specific User Retention Overrides

The following are user-specific overrides to the global settings for special cases. These settings are applied by editing the `UserAttribute` associated with a given account:

```
kubectl edit userattribute <user-name>
```

- `disableAfter`: The user-specific override for `disable-inactive-user-after`. The value is expressed in [time.Duration units](https://pkg.go.dev/time#ParseDuration) and truncated to the second. If the value is set to `0s`, the account is not subject to disabling.
- `deleteAfter`: The user-specific override for `delete-inactive-user-after`. The value is expressed in [time.Duration units](https://pkg.go.dev/time#ParseDuration) and truncated to the second. If the value is set to `0s`, the account is not subject to deletion.
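
For example (a sketch: the user name is a placeholder, and exact field placement may vary by Rancher version):

```yaml
apiVersion: management.cattle.io/v3
kind: UserAttribute
metadata:
  name: u-abcde   # placeholder user name
disableAfter: 720h
deleteAfter: 0s   # exempt this account from deletion
```
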

## Viewing User Retention Settings in the Rancher UI

You can see which user retention settings are applied to which users:

1. In the upper left corner, click **☰ > Users & Authentication**.
1. In the left navigation menu, select **Users**.

The **Disable After** and **Delete After** columns for each user account indicate how long the account can be inactive before it is disabled or deleted from Rancher. There is also a **Last Login** column that roughly indicates when the account was last active.

The same information is available if you click a user's name in the **Users** table and select the **Detail** tab.