Mirror of https://github.com/rancher/rancher-docs.git, synced 2026-05-05.
Commit: "Adding preview for v2.12 Rancher documentation." Signed-off-by: Sunil Singh <sunil.singh@suse.com>
---
title: Adding TLS Secrets
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/add-tls-secrets"/>
</head>

Kubernetes creates all the objects and services for Rancher, but Rancher does not become available until you populate the `tls-rancher-ingress` secret in the `cattle-system` namespace with the certificate and key.

Combine the server certificate followed by any intermediate certificate(s) into a file named `tls.crt`, and copy your certificate key into a file named `tls.key`.

For example, [acme.sh](https://acme.sh) provides the server certificate and CA chain in the `fullchain.cer` file. Rename `fullchain.cer` to `tls.crt`, and rename the certificate key file to `tls.key`.

Use `kubectl` with the `tls` secret type to create the secret:

```bash
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```
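Before creating the secret, you may want to confirm that `tls.crt` and `tls.key` actually belong together. A minimal sketch: it generates a throwaway demo pair so it is runnable anywhere; with your real files, skip the `openssl req` line and start at the modulus comparison.

```shell
# Demo only: generate a throwaway RSA key and certificate so the check below
# is runnable; with real files, start at the modulus comparison.
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.crt \
  -days 1 -subj "/CN=demo.example.com"

# A certificate and an RSA key match when their moduli are identical.
crt_mod=$(openssl x509 -noout -modulus -in tls.crt | openssl md5)
key_mod=$(openssl rsa  -noout -modulus -in tls.key | openssl md5)
[ "$crt_mod" = "$key_mod" ] && echo "tls.crt and tls.key match"
```

If the two digests differ, the secret would be created successfully but the ingress would fail the TLS handshake.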

:::note

If you want to replace the certificate, you can delete the `tls-rancher-ingress` secret using `kubectl -n cattle-system delete secret tls-rancher-ingress` and add a new one using the command shown above. If you are using a certificate signed by a private CA, replacing the certificate is only possible if the new certificate is signed by the same CA as the certificate currently in use.

:::

## Using a Private CA Signed Certificate

If you are using a private CA, Rancher requires a copy of the private CA's root certificate or certificate chain, which the Rancher Agent uses to validate the connection to the server.

Create a file named `cacerts.pem` that contains only the root CA certificate or certificate chain from your private CA, and use `kubectl` to create the `tls-ca` secret in the `cattle-system` namespace:

```bash
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem
```

:::note

The configured `tls-ca` secret is read when Rancher starts. On a running Rancher installation, the updated CA takes effect only after new Rancher pods are started.

:::

## Updating a Private CA Certificate

Follow the steps on [this page](update-rancher-certificate.md) to update the SSL certificate of the ingress in a Rancher [high-availability Kubernetes installation](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md) or to switch from the default self-signed certificate to a custom certificate.
---
title: Setting up the Bootstrap Password
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/bootstrap-password"/>
</head>

When you install Rancher, you can set a bootstrap password for the first admin account. If you choose not to set one, Rancher randomly generates a bootstrap password for the first admin account.

For details on how to set the bootstrap password, see below.

## Password Requirements

The bootstrap password can be any length.

When you reset the first admin account's password after the first login, the new password must be at least 12 characters long.

You can [customize the minimum password length](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/manage-users-and-groups.md#minimum-password-length) for user accounts, within limitations: the minimum password length can be any integer between 2 and 256. Decimal values and leading zeroes are not allowed.
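The constraints above can be expressed as a small check. This is a hypothetical helper for illustration only, not part of any Rancher CLI:

```shell
# Hypothetical validator mirroring the rules above: an integer between 2 and
# 256, with no decimal values and no leading zeroes.
valid_min_length() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;  # rejects empty, decimals, and non-numeric input
    0*)          return 1 ;;  # rejects leading zeroes
  esac
  [ "$1" -ge 2 ] && [ "$1" -le 256 ]
}

valid_min_length 12  && echo "12 accepted"
valid_min_length 2.5 || echo "2.5 rejected"
valid_min_length 012 || echo "012 rejected"
valid_min_length 500 || echo "500 rejected"
```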

## Specifying the Bootstrap Password

<Tabs>
<TabItem value="Helm">

During [Rancher installation](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md), set `bootstrapPassword` alongside any other flags for the Rancher Helm chart. For example:

```bash
helm install rancher rancher-<chart-repo>/rancher \
  --set bootstrapPassword=<password>
```

</TabItem>
<TabItem value="Docker">

Pass the following value to the [Docker install command](../other-installation-methods/air-gapped-helm-cli-install/docker-install-commands.md):

```bash
-e CATTLE_BOOTSTRAP_PASSWORD=<password>
```

</TabItem>
</Tabs>

## Retrieving the Bootstrap Password

For Docker installs, the bootstrap password is printed in the container logs. For Helm installs, it is stored in the `bootstrap-secret` secret in the `cattle-system` namespace. After Rancher is installed, the UI shows instructions for retrieving the password based on your installation method.

<Tabs>
<TabItem value="Helm">

```bash
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{ .data.bootstrapPassword|base64decode }}{{ "\n" }}'
```

</TabItem>
<TabItem value="Docker">

```bash
docker logs <container-id> 2>&1 | grep "Bootstrap Password:"
```

</TabItem>
</Tabs>
---
title: Choosing a Rancher Version
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/choose-a-rancher-version"/>
</head>

This section describes how to choose a Rancher version.

For a high-availability installation of Rancher, which is recommended for production, the Rancher server is installed using a **Helm chart** on a Kubernetes cluster. Refer to the [Helm version requirements](helm-version-requirements.md) to choose a version of Helm to install Rancher.

For Docker installations of Rancher, which are used for development and testing, you install Rancher as a **Docker image**.

<Tabs>
<TabItem value="Helm Charts">

When Rancher Server is [installed on a Kubernetes cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md), it is installed, upgraded, and rolled back using a Helm chart. Therefore, as you prepare to install or upgrade a high-availability Rancher configuration, you must add a Helm chart repository that contains the charts for installing Rancher.

Refer to the [Helm version requirements](helm-version-requirements.md) to choose a version of Helm to install Rancher.

### Helm Chart Repositories

Rancher provides several Helm chart repositories to choose from. The latest and stable Helm chart repositories align with the Docker tags used for a Docker installation. Therefore, the `rancher-latest` repository contains charts for all Rancher versions that have been tagged as `rancher/rancher:latest`. When a Rancher version is promoted to `rancher/rancher:stable`, it is added to the `rancher-stable` repository.

| Type | Command to Add the Repo | Description of the Repo |
| -------------- | ------------ | ----------------- |
| rancher-latest | `helm repo add rancher-latest https://releases.rancher.com/server-charts/latest` | Adds a repository of Helm charts for the latest versions of Rancher. We recommend using this repo for testing out new Rancher builds. |
| rancher-stable | `helm repo add rancher-stable https://releases.rancher.com/server-charts/stable` | Adds a repository of Helm charts for older, stable versions of Rancher. We recommend using this repo for production environments. |
| rancher-alpha | `helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha` | Adds a repository of Helm charts for alpha versions of Rancher, for previewing upcoming releases. These releases are discouraged in production environments. Upgrades _to_ or _from_ charts in the rancher-alpha repository to any other chart, regardless of repository, aren't supported. |

Instructions on when to select these repos are available below in [Switching to a Different Helm Chart Repository](#switching-to-a-different-helm-chart-repository).

:::note

All charts in the `rancher-stable` repository correspond with a Rancher version tagged as `stable`.

:::

### Helm Chart Versions

Rancher Helm chart versions match the Rancher version (i.e., the `appVersion`). Once you've added the repo, you can search it to show the available versions with the following command:

`helm search repo --versions`

If you have several repos, you can specify the repo name, e.g., `helm search repo rancher-stable/rancher --versions`.<br/>
For more information, see https://helm.sh/docs/helm/helm_search_repo/.

To fetch a specific version from your chosen repo, define the `--version` parameter, as in the following example:<br/>
`helm fetch rancher-stable/rancher --version=2.4.8`

### Switching to a Different Helm Chart Repository

After installing Rancher, if you want to change which Helm chart repository to install Rancher from, follow these steps.

:::note

Because the rancher-alpha repository contains only alpha charts, switching between the rancher-alpha repository and the rancher-stable or rancher-latest repository for upgrades is not supported.

:::

- Latest: Recommended for trying out the newest features

  ```
  helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
  ```

- Stable: Recommended for production environments

  ```
  helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
  ```

- Alpha: Experimental preview of upcoming releases. Upgrades are not supported to, from, or between alphas.

  ```
  helm repo add rancher-alpha https://releases.rancher.com/server-charts/alpha
  ```

1. List the current Helm chart repositories.

   ```plain
   helm repo list

   NAME                 URL
   stable               https://charts.helm.sh/stable
   rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
   ```

2. Remove the existing Helm chart repository that contains your charts to install Rancher, which will be either `rancher-stable` or `rancher-latest`, depending on what you initially added.

   ```plain
   helm repo remove rancher-<CHART_REPO>
   ```

3. Add the Helm chart repository that you want to install Rancher from.

   ```plain
   helm repo add rancher-<CHART_REPO> https://releases.rancher.com/server-charts/<CHART_REPO>
   ```

4. Continue to follow the steps to [upgrade Rancher](../install-upgrade-on-a-kubernetes-cluster/upgrades.md) from the new Helm chart repository.

</TabItem>
<TabItem value="Docker Images">

When performing [Docker installs](../other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md), upgrades, or rollbacks, you can use _tags_ to install a specific version of Rancher.

### Server Tags

Rancher Server is distributed as a Docker image with tags attached. You can specify a tag when entering the command to deploy Rancher. Remember that if you use a tag without an explicit version (like `latest` or `stable`), you must explicitly pull a new version of that image tag. Otherwise, any image cached on the host will be used.

| Tag | Description |
| -------------------------- | ------ |
| `rancher/rancher:latest` | Our latest development release. These builds are validated through our CI automation framework. These releases are not recommended for production environments. |
| `rancher/rancher:stable` | Our newest stable release. This tag is recommended for production. |
| `rancher/rancher:<v2.X.X>` | You can install specific versions of Rancher by using the tag from a previous release. See what's available at Docker Hub. |

:::note

- The `master` tag, or any tag with an `-rc` or other suffix, is meant for the Rancher testing team to validate. You should not use these tags, as these builds are not officially supported.
- Want to install an alpha release for preview? Install using one of the alpha tags listed on our [announcements page](https://forums.rancher.com/c/announcements) (e.g., `v2.2.0-alpha1`). Caveat: alpha releases cannot be upgraded to or from any other release.

:::

</TabItem>
</Tabs>
---
title: About Custom CA Root Certificates
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/custom-ca-root-certificates"/>
</head>

If you're using Rancher in an internal production environment where you aren't exposing apps publicly, use a certificate from a private certificate authority (CA).

Services that Rancher needs to access are sometimes configured with a certificate from a custom or internal CA root, also known as a self-signed certificate. If Rancher cannot validate the certificate presented by the service, the following error displays: `x509: certificate signed by unknown authority`.

To validate the certificate, the CA root certificates need to be added to Rancher. Because Rancher is written in Go, the environment variable `SSL_CERT_DIR` can be used to point to the directory where the CA root certificates are located in the container. The CA root certificates directory can be mounted using the Docker volume option (`-v host-source-directory:container-destination-directory`) when starting the Rancher container.

Examples of services that Rancher can access:

- Catalogs
- Authentication providers
- Hosting/cloud APIs used by node drivers

## Installing with the Custom CA Certificate

For details on starting a Rancher container with your private CA certificates mounted, refer to the installation docs:

- [Docker install Custom CA certificate options](../../../reference-guides/single-node-rancher-in-docker/advanced-options.md#custom-ca-certificate)
- [Kubernetes install options for Additional Trusted CAs](../installation-references/helm-chart-options.md#additional-trusted-cas)
---
title: Helm Version Requirements
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/helm-version-requirements"/>
</head>

This section contains the requirements for Helm, which is the tool used to install Rancher on a high-availability Kubernetes cluster.

> The installation instructions have been updated for Helm 3. For migration of installs started with Helm 2, refer to the official [Helm 2 to 3 migration docs](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/). [This section](https://github.com/rancher/rancher-docs/tree/main/archived_docs/en/version-2.0-2.4/getting-started/installation-and-upgrade/advanced-options/advanced-use-cases/helm2/helm2.md) provides a copy of the older high-availability Rancher installation instructions that used Helm 2, and it is intended to be used if upgrading to Helm 3 is not feasible.

<DeprecationHelm2 />

- Helm v3.2.x or higher is required to install or upgrade Rancher v2.5.
- Helm v2.16.0 or higher is required for Kubernetes v1.16. For the default Kubernetes version, refer to the [release notes](https://github.com/rancher/rke/releases) for the version of RKE that you are using.
- Helm v2.15.0 should not be used, because of an issue with converting/comparing numbers.
- Helm v2.12.0 should not be used, because of an issue with `cert-manager`.
---
title: Setting up Local System Charts for Air Gapped Installations
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/local-system-charts"/>
</head>

The [charts](https://github.com/rancher/charts) repository contains all the Helm catalog items required for features such as monitoring, logging, alerting, and Istio.

In an air gapped installation of Rancher, you need to configure Rancher to use a local copy of the system charts. This section describes how to use local system charts using a CLI flag.

## Using Local System Charts

A local copy of `system-charts` is packaged into the `rancher/rancher` container. To use these features in an air gap install, run the Rancher install command with an extra environment variable, `CATTLE_SYSTEM_CATALOG=bundled`, which tells Rancher to use the local copy of the charts instead of attempting to fetch them from GitHub.

Example commands for a Rancher installation with bundled `system-charts` are included in the [air gap installation](../other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) instructions for Docker and Helm installs.
---
title: Resources
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources"/>
</head>

### Docker Installations

The [single-node Docker installation](../other-installation-methods/rancher-on-a-single-node-with-docker/rancher-on-a-single-node-with-docker.md) is for Rancher users who want to test out Rancher. Instead of running on a Kubernetes cluster using Helm, you install the Rancher server component on a single node using a `docker run` command.

Since there is only one node and a single Docker container, if the node goes down, there is no copy of the etcd data available on other nodes, and you will lose all the data of your Rancher server.

### Air-Gapped Installations

Follow [these steps](../other-installation-methods/air-gapped-helm-cli-install/air-gapped-helm-cli-install.md) to install the Rancher server in an air gapped environment.

An air gapped environment could be one where the Rancher server will be installed offline, behind a firewall, or behind a proxy.

### Advanced Options

When installing Rancher, several advanced options can be enabled during installation. These options are presented within each install guide. Learn more about them:

- [Custom CA Certificate](custom-ca-root-certificates.md)
- [API Audit Log](../../../how-to-guides/advanced-user-guides/enable-api-audit-log.md)
- [TLS Settings](../installation-references/tls-settings.md)
- [etcd configuration](../../../how-to-guides/advanced-user-guides/tune-etcd-for-large-installs.md)
- [Local System Charts for Air Gap Installations](local-system-charts.md)
---
title: Updating the Rancher Certificate
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/update-rancher-certificate"/>
</head>

## Updating a Private CA Certificate

Follow these steps to rotate the SSL certificate and private CA used by Rancher [installed on a Kubernetes cluster](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md), or to migrate to an SSL certificate signed by a private CA.

A summary of the steps:

1. Create or update the `tls-rancher-ingress` Kubernetes secret object with the new certificate and private key.
1. Create or update the `tls-ca` Kubernetes secret object with the root CA certificate (only required when using a private CA).
1. Update the Rancher installation using the Helm CLI.
1. Reconfigure the Rancher agents to trust the new CA certificate.
1. Select Force Update of Fleet clusters to connect fleet-agent to Rancher.

The details of these instructions are below.

### 1. Create/update the certificate secret object

First, concatenate the server certificate followed by any intermediate certificate(s) into a file named `tls.crt`, and provide the corresponding certificate key in a file named `tls.key`.

Use the following command to create the `tls-rancher-ingress` secret object in the Rancher (local) management cluster:

```bash
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```

Alternatively, to update an existing `tls-rancher-ingress` secret:

```bash
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key \
  --dry-run=client --save-config -o yaml | kubectl apply -f -
```

### 2. Create/update the CA certificate secret object

If the new certificate was signed by a private CA, you will need to copy the corresponding root CA certificate into a file named `cacerts.pem` and create or update the `tls-ca` secret in the `cattle-system` namespace. If the certificate was signed by an intermediate CA, then `cacerts.pem` must contain both the intermediate and root CA certificates, in that order.

To create the initial `tls-ca` secret:

```bash
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem
```

To update an existing `tls-ca` secret:

```bash
kubectl -n cattle-system create secret generic tls-ca \
  --from-file=cacerts.pem \
  --dry-run=client --save-config -o yaml | kubectl apply -f -
```
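The intermediate-then-root ordering can be sanity-checked by counting the certificates in the file. A runnable sketch using throwaway demo certificates (your real `cacerts.pem` comes from your CA, not from `openssl req`):

```shell
# Demo only: create two throwaway self-signed certs standing in for an
# intermediate and a root CA, then assemble cacerts.pem in the required order.
openssl req -x509 -newkey rsa:2048 -nodes -keyout int.key -out intermediate.crt \
  -days 1 -subj "/CN=demo-intermediate-ca"
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -days 1 -subj "/CN=demo-root-ca"

cat intermediate.crt root.crt > cacerts.pem   # intermediate first, then root

# Sanity check: one PEM block per certificate in the chain.
grep -c "BEGIN CERTIFICATE" cacerts.pem
```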

### 3. Reconfigure the Rancher deployment

If the certificate source remains the same (for example, `secret`), follow the steps in 3a.

If the certificate source is changing (for example, from `letsEncrypt` to `secret`), follow the steps in 3b.

#### 3a. Redeploy the Rancher pods

This step is required when the certificate source remains the same, but the CA certificate is being updated.

In this scenario, a redeploy of the Rancher pods is needed because the `tls-ca` secret is read by the Rancher pods when they start.

The command below can be used to redeploy the Rancher pods:

```bash
kubectl rollout restart deploy/rancher -n cattle-system
```

When the change is completed, navigate to `https://<RANCHER_SERVER_URL>/v3/settings/cacerts` to verify that the value matches the CA certificate written in the `tls-ca` secret earlier. The `cacerts` value may not update until all of the redeployed Rancher pods have started.

#### 3b. Update the Helm values for Rancher

This step is required if the certificate source is changing, for example, if Rancher was previously configured to use the default self-signed certificate (`ingress.tls.source=rancher`) or Let's Encrypt (`ingress.tls.source=letsEncrypt`) and is now using a certificate signed by a private CA (`ingress.tls.source=secret`).

The steps below update the Helm values for the Rancher chart, so the Rancher pods and ingress are reconfigured to use the new private CA certificate created in Steps 1 and 2.

1. Store the values that were used during the initial installation:

   ```bash
   helm get values rancher -n cattle-system -o yaml > values.yaml
   ```

1. Retrieve the version string of the currently deployed Rancher chart, to use below:

   ```bash
   helm ls -n cattle-system
   ```

1. Update the current Helm values in the `values.yaml` file to contain:

   ```yaml
   ingress:
     tls:
       source: secret
   privateCA: true
   ```

   :::note Important:

   As the certificate is signed by a private CA, it is important to ensure [`privateCA: true`](../installation-references/helm-chart-options.md#common-options) is set in the `values.yaml` file.

   :::

1. Upgrade the Helm application instance using the `values.yaml` file and the current chart version. The version must match to prevent an upgrade of Rancher.

   ```bash
   helm upgrade rancher rancher-stable/rancher \
     --namespace cattle-system \
     -f values.yaml \
     --version <DEPLOYED_RANCHER_VERSION>
   ```

When the change is completed, navigate to `https://<RANCHER_SERVER_URL>/v3/settings/cacerts` to verify that the value matches the CA certificate written in the `tls-ca` secret earlier. The `cacerts` value may not update until all Rancher pods have started.

### 4. Reconfigure Rancher agents to trust the private CA

This section covers three methods to reconfigure Rancher agents to trust the private CA. This step is required if either of the following is true:

- Rancher was previously configured to use the Rancher self-signed certificate (`ingress.tls.source=rancher`) or a Let's Encrypt issued certificate (`ingress.tls.source=letsEncrypt`)
- The certificate was signed by a different private CA

#### Why is this step required?

When Rancher is configured with a certificate signed by a private CA, the CA certificate chain is trusted by the Rancher agent containers. Agents compare the checksum of the downloaded certificate against the `CATTLE_CA_CHECKSUM` environment variable. This means that, when the private CA certificate used by Rancher has changed, the environment variable `CATTLE_CA_CHECKSUM` must be updated accordingly.

#### Which method should I choose?

Method 1 is the easiest, but it requires all clusters to be connected to Rancher after the certificates have been rotated. This is usually the case if the process is performed right after updating or redeploying the Rancher deployment (Step 3).

If the clusters have lost connection to Rancher but [Authorized Cluster Endpoint](../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md) (ACE) is enabled on all clusters, use Method 2.

Method 3 can be used as a fallback if Methods 1 and 2 are not possible.

#### Method 1: Force a redeploy of the Rancher agents

For each downstream cluster, run the following command using the kubeconfig file of the Rancher (local) management cluster:

```bash
kubectl annotate clusters.management.cattle.io <CLUSTER_ID> io.cattle.agent.force.deploy=true
```

:::note

Locate the cluster ID (c-xxxxx) for the downstream cluster in the browser URL bar when viewing the cluster in the Rancher UI, under Cluster Management.

:::

This command causes the agent manifest to be reapplied with the checksum of the new certificate.

#### Method 2: Manually update the checksum environment variable

Manually patch the agent Kubernetes objects by updating the `CATTLE_CA_CHECKSUM` environment variable to the value matching the checksum of the new CA certificate. Generate the new checksum value like so:

```bash
curl -k -s -fL <RANCHER_SERVER_URL>/v3/settings/cacerts | jq -r .value | sha256sum | awk '{print $1}'
```
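`CATTLE_CA_CHECKSUM` is simply the SHA-256 hex digest of the PEM text served at `/v3/settings/cacerts`. A self-contained sketch of the same computation, run against a locally generated demo certificate instead of a live Rancher server:

```shell
# Demo only: generate a throwaway CA certificate standing in for the value
# returned by /v3/settings/cacerts, then compute its sha256 digest.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo-cacerts.pem \
  -days 1 -subj "/CN=demo-ca"

checksum=$(sha256sum demo-cacerts.pem | awk '{print $1}')
echo "CATTLE_CA_CHECKSUM=$checksum"
```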

Using a kubeconfig for each downstream cluster, update the environment variable for the two agent deployments. If [ACE](../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md) is enabled for the cluster, [the kubectl context can be adjusted](../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) to connect directly to the downstream cluster.

```bash
kubectl edit -n cattle-system ds/cattle-node-agent
kubectl edit -n cattle-system deployment/cattle-cluster-agent
```

#### Method 3: Manually redeploy the Rancher agents

With this method, the Rancher agents are reapplied by running a set of commands on a control plane node of each downstream cluster.

Repeat the steps below for each downstream cluster:

1. Retrieve the agent registration kubectl command:
   1. Locate the cluster ID (c-xxxxx) for the downstream cluster. It can be seen in the URL when viewing the cluster in the Rancher UI, under Cluster Management.
   1. Add the Rancher server URL and cluster ID to the following URL: `https://<RANCHER_SERVER_URL>/v3/clusterregistrationtokens?clusterId=<CLUSTER_ID>`
   1. Copy the command from the `insecureCommand` field. This command is used because a private CA is in use.

2. Run the kubectl command from the previous step, using a kubeconfig for the downstream cluster obtained with one of the following methods:
   1. If [ACE](../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint.md) is enabled for the cluster, [the context can be adjusted](../../../how-to-guides/new-user-guides/manage-clusters/access-clusters/use-kubectl-and-kubeconfig.md#authenticating-directly-with-a-downstream-cluster) to connect directly to the downstream cluster.
   1. Alternatively, SSH into a control plane node:
      - RKE: Use the [steps in this document](https://github.com/rancherlabs/support-tools/tree/master/how-to-retrieve-kubeconfig-from-custom-cluster) to generate a kubeconfig.
      - RKE2/K3s: Use the kubeconfig populated during installation.

### 5. Force Update Fleet clusters to reconnect the fleet-agent to Rancher

Select **Force Update** for the clusters within the [Continuous Delivery](../../../integrations-in-rancher/fleet/overview.md#accessing-fleet-in-the-rancher-ui) view of the Rancher UI to allow the fleet-agent in downstream clusters to successfully connect to Rancher.

#### Why is this step required?

Fleet agents in Rancher-managed clusters store a kubeconfig that is used to connect to Rancher. The kubeconfig contains a `certificate-authority-data` field holding the CA for the certificate used by Rancher. When changing the CA, this field needs to be updated so the fleet-agent can trust the certificate used by Rancher.
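For illustration, the relevant part of such a kubeconfig looks roughly like this (all values are placeholders; the real file is managed by fleet, and only `certificate-authority-data` is affected by a CA change):

```yaml
# Illustrative fragment only; not a file you edit by hand.
apiVersion: v1
kind: Config
clusters:
- name: fleet
  cluster:
    server: https://<RANCHER_SERVER_URL>
    certificate-authority-data: <base64-encoded CA certificate>  # must match the new CA
```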
|
||||
|
## Updating from a Private CA Certificate to a Public CA Certificate

Follow these steps to perform the opposite of the procedure shown above: changing from a certificate issued by a private CA to a public CA or self-signed certificate.

### 1. Create/update the certificate secret object

First, concatenate the server certificate followed by any intermediate certificate(s) into a file named `tls.crt`, and provide the corresponding certificate key in a file named `tls.key`.

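As a sketch of that concatenation, with hypothetical input file names (your CA will have issued files under different names):

```shell
# Assemble tls.crt from the server certificate followed by its intermediate
# chain. The file names and PEM contents here are stand-ins for illustration.
printf -- '-----BEGIN CERTIFICATE-----\nserver\n-----END CERTIFICATE-----\n' > server.crt
printf -- '-----BEGIN CERTIFICATE-----\nintermediate\n-----END CERTIFICATE-----\n' > intermediates.crt

cat server.crt intermediates.crt > tls.crt   # server cert first, then the chain
grep -c 'BEGIN CERTIFICATE' tls.crt          # -> 2
```

The ordering matters: clients expect the leaf certificate first, followed by the chain up toward the root.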
Use the following command to create the `tls-rancher-ingress` secret object in the Rancher (local) management cluster:

```bash
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key
```

Alternatively, to update an existing `tls-rancher-ingress` secret:

```bash
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt \
  --key=tls.key \
  --dry-run=client --save-config -o yaml | kubectl apply -f -
```

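Before loading the files into the secret, it can be worth confirming that the certificate and key actually belong together. A sketch using a throwaway self-signed pair (with a real CA-issued pair, skip the generation step and point at your existing `tls.crt`/`tls.key`):

```shell
# Generate a throwaway self-signed pair purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=rancher.example.com" -keyout tls.key -out tls.crt 2>/dev/null

# The certificate and key match when their RSA moduli are identical.
cert_mod=$(openssl x509 -noout -modulus -in tls.crt)
key_mod=$(openssl rsa -noout -modulus -in tls.key)
[ "$cert_mod" = "$key_mod" ] && echo "certificate and key match"
```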
### 2. Delete the CA certificate secret object

Delete the `tls-ca` secret in the `cattle-system` namespace, as it is no longer needed. You may optionally save a copy of the secret first.

To save the existing `tls-ca` secret:

```bash
kubectl -n cattle-system get secret tls-ca -o yaml > tls-ca.yaml
```

To delete the existing `tls-ca` secret:

```bash
kubectl -n cattle-system delete secret tls-ca
```

### 3. Reconfigure the Rancher deployment

This step is required if the certificate source is changing. In this scenario, it is most likely changing because Rancher was previously configured to use the default self-signed certificate (`ingress.tls.source=rancher`).

The steps below update the Helm values for the Rancher chart, so that the Rancher pods and ingress are reconfigured to use the new certificate created in Step 1.

1. To adjust the values that were used during the initial installation, first store the current values:

   ```bash
   helm get values rancher -n cattle-system -o yaml > values.yaml
   ```

1. Also get the version string of the currently deployed Rancher chart:

   ```bash
   helm ls -n cattle-system
   ```

1. Update the current Helm values in the `values.yaml` file:

   1. As a private CA is no longer being used, remove the `privateCA: true` field, or set it to `false`.

   1. Adjust the `ingress.tls.source` field as necessary. [Refer to the chart options](../installation-references/helm-chart-options.md#common-options) for more details. Some examples:

      1. If using a public CA, continue with the value `secret`.

      1. If using Let's Encrypt, update the value to `letsEncrypt`.

1. Update the Helm values for the Rancher chart using the `values.yaml` file, pinning the current chart version to prevent an upgrade:

   ```bash
   helm upgrade rancher rancher-stable/rancher \
     --namespace cattle-system \
     -f values.yaml \
     --version <DEPLOYED_RANCHER_VERSION>
   ```

### 4. Reconfigure Rancher agents for the non-private/common certificate

As a private CA is no longer being used, the `CATTLE_CA_CHECKSUM` environment variable on the downstream cluster agents should be removed or set to `""` (an empty string).

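For context, `CATTLE_CA_CHECKSUM` is a SHA-256 checksum over the CA certificate PEM that Rancher serves, which agents use to pin a private CA; with a publicly trusted certificate there is nothing to pin. A sketch of how such a checksum is computed, with a dummy PEM standing in for the real CA file:

```shell
# CATTLE_CA_CHECKSUM is a SHA-256 checksum of the CA certificate PEM.
# A dummy PEM stands in here for the CA certificate Rancher serves.
printf -- '-----BEGIN CERTIFICATE-----\ndummy\n-----END CERTIFICATE-----\n' > cacerts.pem

checksum=$(sha256sum cacerts.pem | awk '{print $1}')
echo "$checksum"   # 64 hex characters
```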
### 5. Force Update Fleet clusters to reconnect the fleet-agent to Rancher

Select **Force Update** for the clusters within the [Continuous Delivery](../../../integrations-in-rancher/fleet/overview.md#accessing-fleet-in-the-rancher-ui) view of the Rancher UI to allow the fleet-agent in downstream clusters to successfully connect to Rancher.

#### Why is this step required?

Fleet agents in Rancher-managed clusters store a kubeconfig that is used to connect to Rancher. The kubeconfig contains a `certificate-authority-data` field with the CA for the certificate used by Rancher. When the CA changes, this field must be updated so that the fleet-agent trusts the new certificate.

---
title: Upgrading Cert-Manager
---

<head>
  <link rel="canonical" href="https://ranchermanager.docs.rancher.com/getting-started/installation-and-upgrade/resources/upgrade-cert-manager"/>
</head>

Rancher is compatible with the `cert-manager.io/v1` API version, and was last tested with cert-manager v1.13.1.

Rancher uses cert-manager to automatically generate and renew TLS certificates for HA deployments of Rancher. As of Fall 2019, three important changes to cert-manager are set to occur that you need to take action on if you have an HA deployment of Rancher:

1. [Let's Encrypt will be blocking cert-manager instances older than 0.8.0 starting November 1st 2019.](https://community.letsencrypt.org/t/blocking-old-cert-manager-versions/98753)
1. [Cert-manager is deprecating and replacing the certificate.spec.acme.solvers field](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/). This change has no exact deadline.
1. [Cert-manager is deprecating the `v1alpha1` API and replacing its API group.](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/)

To address these changes, this guide does two things:

1. Documents the procedure for upgrading cert-manager.
1. Explains the cert-manager API changes and links to cert-manager's official documentation for migrating your data.

:::note Important:

If you are upgrading cert-manager to the latest version from a version older than 1.5, follow the steps in [Option C](#option-c-upgrade-cert-manager-from-versions-15-and-below) below. Note that you do not need to reinstall Rancher to perform this upgrade.

:::

## Upgrade Cert-Manager

The namespace used in these instructions depends on the namespace cert-manager is currently installed in. If it is in `kube-system`, use that namespace in the instructions below. You can verify the namespace by running `kubectl get pods --all-namespaces` and checking which namespace the cert-manager-\* pods are listed in. Do not change the namespace cert-manager is running in, as this can cause issues.

To upgrade cert-manager, follow these instructions:

### Option A: Upgrade cert-manager with Internet Access

<details id="normal">
<summary>Click to expand</summary>

1. [Back up existing resources](https://cert-manager.io/docs/tutorials/backup/) as a precaution:

   ```plain
   kubectl get -o yaml --all-namespaces \
     issuer,clusterissuer,certificates,certificaterequests > cert-manager-backup.yaml
   ```

   :::note Important:

   If you are upgrading from a version older than 0.11.0, update the apiVersion on all your backed up resources from `certmanager.k8s.io/v1alpha1` to `cert-manager.io/v1alpha2`. If you use any cert-manager annotations on any of your other resources, you will need to update them to reflect the new API group. For details, refer to the documentation on [additional annotation changes.](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/#additional-annotation-changes)

   :::

1. [Uninstall the existing deployment](https://cert-manager.io/docs/installation/uninstall/kubernetes/#uninstalling-with-helm):

   ```plain
   helm uninstall cert-manager
   ```

   Delete the CustomResourceDefinition resources using the link to the version vX.Y.Z you installed:

   ```plain
   kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/vX.Y.Z/cert-manager.crds.yaml
   ```

1. Install the CustomResourceDefinition resources separately:

   ```plain
   kubectl apply --validate=false -f https://github.com/cert-manager/cert-manager/releases/download/vX.Y.Z/cert-manager.crds.yaml
   ```

   :::note

   The `--validate=false` flag is only needed if you are running Kubernetes v1.15 or below. Without it, you would receive a validation error relating to the `x-kubernetes-preserve-unknown-fields` field in cert-manager's CustomResourceDefinition resources. This is a benign error and occurs due to the way kubectl performs resource validation.

   :::

1. Create the namespace for cert-manager, if needed:

   ```plain
   kubectl create namespace cert-manager
   ```

1. Add the Jetstack Helm repository:

   ```plain
   helm repo add jetstack https://charts.jetstack.io
   ```

1. Update your local Helm chart repository cache:

   ```plain
   helm repo update
   ```

1. Install the new version of cert-manager:

   ```plain
   helm install \
     cert-manager jetstack/cert-manager \
     --namespace cert-manager
   ```

1. [Restore the backed up resources](https://cert-manager.io/docs/tutorials/backup/#restoring-resources):

   ```plain
   kubectl apply -f cert-manager-backup.yaml
   ```

</details>

### Option B: Upgrade cert-manager in an Air-Gapped Environment

<details id="airgap">
<summary>Click to expand</summary>

### Prerequisites

Before you can perform the upgrade, you must prepare your air-gapped environment by adding the necessary container images to your private registry and downloading or rendering the required Kubernetes manifest files.

1. Follow the guide to [Prepare your Private Registry](../other-installation-methods/air-gapped-helm-cli-install/publish-images.md) with the images needed for the upgrade.

1. From a system connected to the internet, add the cert-manager repo to Helm:

   ```plain
   helm repo add jetstack https://charts.jetstack.io
   helm repo update
   ```

1. Fetch the latest cert-manager chart available from the [Helm chart repository](https://artifacthub.io/packages/helm/cert-manager/cert-manager):

   ```plain
   helm fetch jetstack/cert-manager
   ```

1. Render the cert-manager template with the options you would like to use to install the chart. Remember to set the `image.repository` option to pull the image from your private registry. This will create a `cert-manager` directory with the Kubernetes manifest files.

   The Helm 3 command is as follows:

   ```plain
   helm template cert-manager ./cert-manager-v0.12.0.tgz --output-dir . \
     --namespace cert-manager \
     --set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller \
     --set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook \
     --set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector
   ```

   <DeprecationHelm2 />

   The Helm 2 command is as follows:

   ```plain
   helm template ./cert-manager-v0.12.0.tgz --output-dir . \
     --name cert-manager --namespace cert-manager \
     --set image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-controller \
     --set webhook.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-webhook \
     --set cainjector.image.repository=<REGISTRY.YOURDOMAIN.COM:PORT>/quay.io/jetstack/cert-manager-cainjector
   ```

1. Download the required CRD files for cert-manager (new and old):

   ```plain
   curl -L -o cert-manager/cert-manager-crd.yaml https://raw.githubusercontent.com/cert-manager/cert-manager/release-0.12/deploy/manifests/00-crds.yaml
   curl -L -o cert-manager/cert-manager-crd-old.yaml https://raw.githubusercontent.com/cert-manager/cert-manager/release-X.Y/deploy/manifests/00-crds.yaml
   ```

### Install cert-manager

1. Back up existing resources as a precaution:

   ```plain
   kubectl get -o yaml --all-namespaces \
     issuer,clusterissuer,certificates,certificaterequests > cert-manager-backup.yaml
   ```

   :::note Important:

   If you are upgrading from a version older than 0.11.0, update the apiVersion on all your backed up resources from `certmanager.k8s.io/v1alpha1` to `cert-manager.io/v1alpha2`. If you use any cert-manager annotations on any of your other resources, you will need to update them to reflect the new API group. For details, refer to the documentation on [additional annotation changes.](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/#additional-annotation-changes)

   :::

1. Delete the existing cert-manager installation:

   ```plain
   kubectl -n cert-manager \
     delete deployment,sa,clusterrole,clusterrolebinding \
     -l 'app=cert-manager' -l 'chart=cert-manager-v0.5.2'
   ```

   Delete the CustomResourceDefinition resources using the old CRD file for the version vX.Y you installed:

   ```plain
   kubectl delete -f cert-manager/cert-manager-crd-old.yaml
   ```

1. Install the CustomResourceDefinition resources separately:

   ```plain
   kubectl apply -f cert-manager/cert-manager-crd.yaml
   ```

   :::note Important:

   If you are running Kubernetes v1.15 or below, you will need to add the `--validate=false` flag to the `kubectl apply` command above. Otherwise, you will receive a validation error relating to the `x-kubernetes-preserve-unknown-fields` field in cert-manager's CustomResourceDefinition resources. This is a benign error and occurs due to the way kubectl performs resource validation.

   :::

1. Create the namespace for cert-manager:

   ```plain
   kubectl create namespace cert-manager
   ```

1. Install cert-manager:

   ```plain
   kubectl -n cert-manager apply -R -f ./cert-manager
   ```

1. [Restore the backed up resources](https://cert-manager.io/docs/tutorials/backup/#restoring-resources):

   ```plain
   kubectl apply -f cert-manager-backup.yaml
   ```

</details>

### Option C: Upgrade cert-manager from Versions 1.5 and Below

<details id="legacy">
<summary>Click to expand</summary>

Previously, in order to upgrade cert-manager from an older version, an uninstall and reinstall of Rancher was recommended. Using the method below, you may upgrade cert-manager without those additional steps, in order to better preserve your production environment:

1. Install `cmctl`, the cert-manager CLI tool, using [the installation guide](https://cert-manager.io/docs/usage/cmctl/#installation).

1. Ensure that any cert-manager custom resources that may have been stored in etcd at a deprecated API version get migrated to v1:

   ```plain
   cmctl upgrade migrate-api-version
   ```

   Refer to the [API version migration docs](https://cert-manager.io/docs/usage/cmctl/#migrate-api-version) for more information. See also the [docs to upgrade from 1.5 to 1.6](https://cert-manager.io/docs/installation/upgrading/upgrading-1.5-1.6/) and the [docs to upgrade from 1.6 to 1.7](https://cert-manager.io/docs/installation/upgrading/upgrading-1.6-1.7/) if needed.

1. Upgrade cert-manager to v1.7.1 with a normal `helm upgrade`. You may go directly from version 1.5 to 1.7 if desired.

1. Follow the Helm tutorial to [update the API version of a release manifest](https://helm.sh/docs/topics/kubernetes_apis/#updating-api-versions-of-a-release-manifest). The chart release name is `release_name=rancher` and the release namespace is `release_namespace=cattle-system`.

1. In the decoded file, search for `cert-manager.io/v1beta1` and **replace it** with `cert-manager.io/v1`.

1. Upgrade Rancher normally with `helm upgrade`.

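The search-and-replace on the decoded release manifest is a mechanical rewrite; a sketch of just that step with `sed`, run against a stand-in file (the real input is the manifest you decoded in the Helm tutorial step):

```shell
# Stand-in for the decoded Helm release manifest; the real file comes from
# the Helm "updating API versions of a release manifest" tutorial.
printf 'apiVersion: cert-manager.io/v1beta1\nkind: Certificate\n' > decoded-manifest.yaml

# Replace the deprecated API version in place.
sed -i 's|cert-manager.io/v1beta1|cert-manager.io/v1|g' decoded-manifest.yaml
grep apiVersion decoded-manifest.yaml   # -> apiVersion: cert-manager.io/v1
```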
</details>
### Verify the Deployment

Once you've installed cert-manager, you can verify it is deployed correctly by checking the `cert-manager` namespace for running pods:

```plain
kubectl get pods --namespace cert-manager

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-5c6866597-zw7kh               1/1     Running   0          2m
cert-manager-cainjector-577f6d9fd7-tr77l   1/1     Running   0          2m
cert-manager-webhook-787858fcdb-nlzsq      1/1     Running   0          2m
```

## Cert-Manager API change and data migration

---

Rancher now supports cert-manager versions 1.6.2 and 1.7.1. We recommend v1.7.x because v1.6.x will reach end-of-life on March 30, 2022. To read more, see the [cert-manager docs](../install-upgrade-on-a-kubernetes-cluster/install-upgrade-on-a-kubernetes-cluster.md#4-install-cert-manager). For instructions on upgrading cert-manager from version 1.5 to 1.6, see the upstream cert-manager documentation [here](https://cert-manager.io/docs/installation/upgrading/upgrading-1.5-1.6/). For instructions on upgrading cert-manager from version 1.6 to 1.7, see the upstream cert-manager documentation [here](https://cert-manager.io/docs/installation/upgrading/upgrading-1.6-1.7/).

---

Cert-manager has deprecated the use of the `certificate.spec.acme.solvers` field and will drop support for it completely in an upcoming release.

Per the cert-manager documentation, a new format for configuring ACME certificate resources was introduced in v0.8. Specifically, the challenge solver configuration field was moved. Both the old format and the new one are supported as of v0.9, but support for the old format will be dropped in an upcoming release of cert-manager. The cert-manager documentation strongly recommends that after upgrading you update your ACME Issuer and Certificate resources to the new format.

Details about the change and migration instructions can be found in the [cert-manager v0.7 to v0.8 upgrade instructions](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/).

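For illustration, a minimal ACME `Issuer` in the new solvers format looks roughly like the following. The name, email, and ingress class below are placeholders, not values from this guide:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-example        # placeholder name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com       # placeholder email
    privateKeySecretRef:
      name: letsencrypt-example-key
    solvers:                       # the new-style challenge solver block
    - http01:
        ingress:
          class: nginx             # placeholder ingress class
```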
The v0.11 release marks the removal of the v1alpha1 API that was used in previous versions of cert-manager, as well as a change of the API group from `certmanager.k8s.io` to `cert-manager.io`.

Support for the old configuration format deprecated in the v0.8 release has also been removed. This means you must transition to the new solver-style configuration format for your ACME issuers before upgrading to v0.11. For more information, see the [upgrading to v0.8 guide](https://cert-manager.io/docs/installation/upgrading/upgrading-0.7-0.8/).

Details about the change and migration instructions can be found in the [cert-manager v0.10 to v0.11 upgrade instructions](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/).

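Applied to a resource backup like the one taken in the upgrade options above, the group rename is again a mechanical rewrite; a sketch on a stand-in backup file:

```shell
# Stand-in for a backed-up resource still using the old API group.
printf 'apiVersion: certmanager.k8s.io/v1alpha1\nkind: Issuer\n' > cert-manager-backup.yaml

# Rewrite the old API group to the new one before restoring the backup.
sed -i 's|certmanager.k8s.io/v1alpha1|cert-manager.io/v1alpha2|g' cert-manager-backup.yaml
grep apiVersion cert-manager-backup.yaml   # -> apiVersion: cert-manager.io/v1alpha2
```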
For more information, see the [cert-manager upgrade documentation](https://cert-manager.io/docs/installation/upgrade/).