Fix links in /versioned_docs/version-2.7

Billy Tat
2023-10-20 17:02:31 -07:00
parent 50ad0b42bc
commit fbab12d5cc
10 changed files with 13 additions and 13 deletions

View File

@@ -277,5 +277,5 @@ The v0.11 release marks the removal of the v1alpha1 API used in earlier Cert-Manager versions
For details about the changes and migration instructions, see [Upgrading Cert-Manager from v0.10 to v0.11](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/).
-For more information, see [Cert-Manager Upgrade](https://cert-manager.io/docs/installation/upgrading/).
+For more information, see [Cert-Manager Upgrade](https://cert-manager.io/docs/installation/upgrade/).
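The v0.10 to v0.11 jump removed the old API group, so it helps to confirm which cert-manager version is actually running before following the migration notes. A minimal sketch, assuming kubectl is on your PATH and cert-manager is deployed in the usual `cert-manager` namespace (adjust the names if yours differ):

```python
# Sketch: report the running cert-manager controller image before upgrading.
import subprocess

result = subprocess.run(
    [
        "kubectl", "get", "deployment", "cert-manager",
        "-n", "cert-manager",
        "-o", "jsonpath={.spec.template.spec.containers[0].image}",
    ],
    capture_output=True, text=True, check=True,
)
image = result.stdout.strip()
print(f"cert-manager image: {image}")
if ":v0.10" in image:
    print("Running v0.10.x -- review the v0.10 to v0.11 migration notes before upgrading.")
```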

View File

@@ -23,7 +23,7 @@ Deploying to Microsoft Azure will incur charges.
- [Microsoft Azure Account](https://azure.microsoft.com/en-us/free/): A Microsoft Azure Account is required to create resources for deploying Rancher and Kubernetes.
- [Microsoft Azure Subscription](https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/create-subscription#create-a-subscription-in-the-azure-portal): Use this link to follow a tutorial to create a Microsoft Azure subscription if you don't have one yet.
- [Microsoft Azure Tenant](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-create-new-tenant): Use this link and follow the instructions to create a Microsoft Azure tenant.
-- Your subscription has sufficient quota for at least 2 vCPUs. For details on Rancher server resource requirements, refer to [this section](../../../pages-for-subheaders/installation-requirements.md#rke-and-hosted-kubernetes)
+- Your subscription has sufficient quota for at least 2 vCPUs. For details on Rancher server resource requirements, refer to [this section](../../../pages-for-subheaders/installation-requirements.md)
- When installing Rancher with Helm in Azure, use the L7 load balancer to avoid networking issues. For more information, refer to the documentation on [Azure load balancer limitations](https://docs.microsoft.com/en-us/azure/load-balancer/components#limitations).
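To confirm the 2-vCPU quota requirement from the list above before provisioning, you can query the regional quota. A minimal sketch, assuming the Azure CLI (`az`) is installed and logged in; the region name is only an example:

```python
# Sketch: check regional vCPU quota headroom with the Azure CLI.
import json
import subprocess

region = "eastus"  # example region -- use the one you plan to deploy into
out = subprocess.run(
    ["az", "vm", "list-usage", "--location", region, "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout
for item in json.loads(out):
    if item["name"]["value"] == "cores":  # total regional vCPUs
        available = int(item["limit"]) - int(item["currentValue"])
        print(f"{available} vCPUs available in {region}")
        if available < 2:
            print("Request a quota increase before deploying Rancher.")
```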
## 1. Prepare your Workstation

View File

@@ -14,7 +14,7 @@ If you already have a GKE Kubernetes cluster, skip to the step about [installing
- You will need a Google account.
- You will need a Google Cloud billing account. You can manage your Cloud Billing accounts using the Google Cloud Console. For more information about the Cloud Console, visit [General guide to the console.](https://support.google.com/cloud/answer/3465889?hl=en&ref_topic=3340599)
-- You will need a cloud quota for at least one in-use IP address and at least 2 CPUs. For more details about hardware requirements for the Rancher server, refer to [this section.](../../../pages-for-subheaders/installation-requirements.md#rke-and-hosted-kubernetes)
+- You will need a cloud quota for at least one in-use IP address and at least 2 CPUs. For more details about hardware requirements for the Rancher server, refer to [this section.](../../../pages-for-subheaders/installation-requirements.md)
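To verify the IP-address and CPU quotas mentioned above, you can read them from the region description. A minimal sketch, assuming the gcloud CLI is installed and authenticated; the region is an example:

```python
# Sketch: check CPU and in-use IP address quotas for a GCP region.
import json
import subprocess

region = "us-central1"  # example region
out = subprocess.run(
    ["gcloud", "compute", "regions", "describe", region, "--format=json"],
    capture_output=True, text=True, check=True,
).stdout
quotas = {q["metric"]: q for q in json.loads(out)["quotas"]}
for metric in ("CPUS", "IN_USE_ADDRESSES"):
    q = quotas[metric]
    print(f"{metric}: {q['usage']} used of {q['limit']}")
```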
## 1. Enable the Kubernetes Engine API

View File

@@ -50,7 +50,7 @@ See the [rancher/rancher-cleanup repo](https://github.com/rancher/rancher-cleanu
At this point, there should be no Rancher-related resources on the upstream cluster. Therefore, the next step will be the same as if you were migrating Rancher to a new cluster that contains no Rancher resources.
-Follow these [instructions](./migrate-rancher-to-new-cluster.md) to install the Rancher-Backup Helm chart and restore Rancher to its previous state.
+Follow these [instructions](../../../how-to-guides/new-user-guides/backup-restore-and-disaster-recovery/migrate-rancher-to-new-cluster.md) to install the Rancher-Backup Helm chart and restore Rancher to its previous state.
Please keep in mind that:
1. Step 3 can be skipped, because the Cert-Manager app should still exist on the upstream (local) cluster if it was installed before.
2. At Step 4, install the Rancher version you intend to roll back to.
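As a rough illustration of the restore flow those instructions describe, the sketch below installs the rancher-backup charts and applies a Restore resource. It assumes helm and kubectl are on your PATH and pointed at the target cluster; the backup file name is a placeholder:

```python
# Sketch: install the rancher-backup charts, then apply a Restore resource.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("helm", "repo", "add", "rancher-charts", "https://charts.rancher.io")
run("helm", "repo", "update")
run("helm", "install", "rancher-backup-crd", "rancher-charts/rancher-backup-crd",
    "-n", "cattle-resources-system", "--create-namespace")
run("helm", "install", "rancher-backup", "rancher-charts/rancher-backup",
    "-n", "cattle-resources-system")

# A Restore resource pointing at the backup archive is then applied with kubectl.
restore = """
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-migration
spec:
  backupFilename: <your-backup-file>.tar.gz  # placeholder
  prune: false
"""
subprocess.run(["kubectl", "apply", "-f", "-"], input=restore, text=True, check=True)
```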

View File

@@ -282,5 +282,5 @@ We have also removed support for the old configuration format that was deprecate
Details about the change and migration instructions can be found in the [cert-manager v0.10 to v0.11 upgrade instructions](https://cert-manager.io/docs/installation/upgrading/upgrading-0.10-0.11/).
-For more information, see the [cert-manager upgrade documentation](https://cert-manager.io/docs/installation/upgrading/).
+For more information, see the [cert-manager upgrade documentation](https://cert-manager.io/docs/installation/upgrade/).
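For routine cert-manager upgrades (outside the special v0.10 to v0.11 migration covered above), the usual flow is a Helm repo update followed by `helm upgrade`. A minimal sketch, assuming Helm is installed, the release is named `cert-manager` in the `cert-manager` namespace, and the target version is a placeholder:

```python
# Sketch: a routine Helm-based cert-manager upgrade (not the v0.10 to v0.11 migration).
import subprocess

target_version = "vX.Y.Z"  # placeholder -- pick a version the Rancher docs support
for cmd in (
    ["helm", "repo", "add", "jetstack", "https://charts.jetstack.io"],
    ["helm", "repo", "update"],
    ["helm", "upgrade", "cert-manager", "jetstack/cert-manager",
     "-n", "cert-manager", "--version", target_version, "--set", "installCRDs=true"],
):
    subprocess.run(cmd, check=True)
```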

View File

@@ -33,7 +33,7 @@ The port requirements for the Harvester cluster can be found [here](https://docs
In addition, other networking considerations are as follows:
- Be sure to enable VLAN trunk ports of the physical switch for VM VLAN networks.
-- Follow the networking setup guidance [here](https://docs.harvesterhci.io/v1.1/networking/clusternetwork).
+- Follow the networking setup guidance [here](https://docs.harvesterhci.io/v1.1/networking/index).
For other port requirements for other guest clusters, such as K3s and RKE1, please see [these docs](https://docs.harvesterhci.io/v1.1/install/requirements/#guest-clusters).
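When working through those port requirements, a quick reachability probe from your workstation can save time. A minimal sketch; the host and port are placeholders for a Harvester node address and whichever required port you are checking:

```python
# Sketch: basic TCP reachability check for a required port.
import socket

host, port = "harvester-node.example.com", 443  # placeholders
try:
    with socket.create_connection((host, port), timeout=5):
        print(f"{host}:{port} is reachable")
except OSError as exc:
    print(f"{host}:{port} is NOT reachable: {exc}")
```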

View File

@@ -25,7 +25,7 @@ Rancher needs to be installed on a supported Kubernetes version. Consult the [Ra
### Install Rancher on a Hardened Kubernetes cluster
-If you install Rancher on a hardened Kubernetes cluster, check the [Exempting Required Rancher Namespaces](../../../docs/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md#exempting-required-rancher-namespaces) section for detailed requirements.
+If you install Rancher on a hardened Kubernetes cluster, check the [Exempting Required Rancher Namespaces](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/psa-config-templates.md#exempting-required-rancher-namespaces) section for detailed requirements.
## Operating Systems and Container Runtime Requirements

View File

@@ -21,7 +21,7 @@ You can choose to not have any operator-level storage location configured. If yo
| Parameter | Description |
| -------------- | -------------- |
-| Credential Secret | Choose the credentials for S3 from your secrets in Rancher. [Example](examples.md#example-credential-secret-for-storing-backups-in-s3). |
+| Credential Secret | Choose the credentials for S3 from your secrets in Rancher. [Example](backup-configuration.md#example-credentialsecret). |
| Bucket Name | Enter the name of the [S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html) where the backups will be stored. Default: `rancherbackups`. |
| Region | The [AWS region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) where the S3 bucket is located. |
| Folder | The [folder in the S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html) where the backups will be stored. If this field is left empty, the default behavior is to store the backup files in the root folder of the S3 bucket. |
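Before relying on the S3 settings in the table above, it can be worth confirming that the credentials can actually write to the bucket. A minimal sketch, assuming boto3 is installed and credentials are available to it (for example via environment variables); the bucket, region, and folder are placeholders:

```python
# Sketch: verify the S3 bucket used for backups is writable with these credentials.
import boto3

bucket, region, folder = "rancherbackups", "us-east-1", "rancher"  # placeholders
s3 = boto3.client("s3", region_name=region)
s3.head_bucket(Bucket=bucket)  # raises if the bucket is missing or access is denied
s3.put_object(Bucket=bucket, Key=f"{folder}/connectivity-check.txt", Body=b"ok")
print(f"Write access to s3://{bucket}/{folder} confirmed")
```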

View File

@@ -28,8 +28,8 @@ Etcd is the backing database for Kubernetes and for Rancher. The database may ev
This is typical in Rancher, as many operations create new `RoleBinding` objects in the upstream cluster as a side effect.
You can reduce the number of `RoleBindings` in the upstream cluster in the following ways:
-* Limit the use of the [Restricted Admin](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions#restricted-admin) role. Apply other roles wherever possible.
-* If you use [external authentication](../../../pages-for-subheaders/authentication-config), use groups to assign roles.
+* Limit the use of the [Restricted Admin](../../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/manage-role-based-access-control-rbac/global-permissions.md#restricted-admin) role. Apply other roles wherever possible.
+* If you use [external authentication](../../../pages-for-subheaders/authentication-config.md), use groups to assign roles.
* Only add users to clusters and projects when necessary.
* Remove clusters and projects when they are no longer needed.
* Only use custom roles if necessary.
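To see how much the measures above would help in a given installation, you can count the existing `RoleBinding` objects. A minimal sketch, assuming the `kubernetes` Python client is installed and a kubeconfig for the upstream cluster is available:

```python
# Sketch: count RoleBindings in the upstream cluster, grouped by namespace.
from collections import Counter
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()
bindings = rbac.list_role_binding_for_all_namespaces().items
print(f"Total RoleBindings: {len(bindings)}")
for namespace, count in Counter(b.metadata.namespace for b in bindings).most_common(10):
    print(f"{namespace}: {count}")
```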
@@ -59,7 +59,7 @@ You should remove any remaining legacy apps that appear in the Cluster Manager U
### Using the Authorized Cluster Endpoint (ACE)
-An [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters#4-authorized-cluster-endpoint) (ACE) provides access to the Kubernetes API of Rancher-provisioned RKE, RKE2, and K3s clusters. When enabled, the ACE adds a context to kubeconfig files generated for the cluster. The context uses a direct endpoint to the cluster, thereby bypassing Rancher. This reduces load on Rancher for cases where unmediated API access is acceptable or preferable. See [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters#4-authorized-cluster-endpoint) for more information and configuration instructions.
+An [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) (ACE) provides access to the Kubernetes API of Rancher-provisioned RKE, RKE2, and K3s clusters. When enabled, the ACE adds a context to kubeconfig files generated for the cluster. The context uses a direct endpoint to the cluster, thereby bypassing Rancher. This reduces load on Rancher for cases where unmediated API access is acceptable or preferable. See [Authorized Cluster Endpoint](../../../reference-guides/rancher-manager-architecture/communicating-with-downstream-user-clusters.md#4-authorized-cluster-endpoint) for more information and configuration instructions.
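After enabling the ACE, the downloaded kubeconfig contains both the Rancher-proxied context and the direct one. A minimal sketch that lists them, assuming the `kubernetes` Python client is installed; the file name is a placeholder:

```python
# Sketch: list the contexts in a downloaded kubeconfig to spot the ACE direct context.
from kubernetes import config

contexts, active = config.list_kube_config_contexts(config_file="my-cluster.yaml")  # placeholder file
for ctx in contexts:
    print(ctx["name"], "->", ctx["context"]["cluster"])
print("active:", active["name"])
```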
### Reducing Event Handler Executions
@@ -93,7 +93,7 @@ You should keep the local Kubernetes cluster up to date. This will ensure that y
Etcd is the backend database for Kubernetes and for Rancher. It plays a very important role in Rancher performance.
-The two main bottlenecks to [etcd performance](https://etcd.io/docs/v3.4/op-guide/performance/) are disk and network speed. Etcd should run on dedicated nodes with a fast network setup and with SSDs that have high input/output operations per second (IOPS). For more information regarding etcd performance, see [Slow etcd performance (performance testing and optimization)](https://www.suse.com/support/kb/doc/?id=000020100) and [Tuning etcd for Large Installations](../../../how-to-guides/advanced-user-guides/tune-etcd-for-large-installs). Information on disks can also be found in the [Installation Requirements](../../../pages-for-subheaders/installation-requirements#disks).
+The two main bottlenecks to [etcd performance](https://etcd.io/docs/v3.4/op-guide/performance/) are disk and network speed. Etcd should run on dedicated nodes with a fast network setup and with SSDs that have high input/output operations per second (IOPS). For more information regarding etcd performance, see [Slow etcd performance (performance testing and optimization)](https://www.suse.com/support/kb/doc/?id=000020100) and [Tuning etcd for Large Installations](../../../how-to-guides/advanced-user-guides/tune-etcd-for-large-installs.md). Information on disks can also be found in the [Installation Requirements](../../../pages-for-subheaders/installation-requirements.md#disks).
It's best to run etcd on exactly three nodes, as adding more nodes will reduce operation speed. This may be counter-intuitive to common scaling approaches, but it's due to etcd's [replication mechanisms](https://etcd.io/docs/v3.5/faq/#what-is-maximum-cluster-size).
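Since fsync latency on the etcd data disk is usually the limiting factor, a quick probe of the disk you plan to use can flag problems early. A minimal sketch that times small fsync'd writes the way etcd's WAL does; the 10 ms figure reflects etcd's commonly cited guideline for p99 WAL fsync latency:

```python
# Sketch: rough p99 fsync latency probe for a prospective etcd data disk.
import os
import tempfile
import time

samples = []
with tempfile.NamedTemporaryFile(dir=".") as f:  # run this in the intended etcd data directory
    for _ in range(200):
        start = time.perf_counter()
        f.write(os.urandom(2048))
        f.flush()
        os.fsync(f.fileno())
        samples.append(time.perf_counter() - start)
samples.sort()
p99_ms = samples[int(len(samples) * 0.99)] * 1000
print(f"p99 fsync latency: {p99_ms:.2f} ms (etcd generally wants this well under 10 ms)")
```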

View File

@@ -80,7 +80,7 @@ This issue occurs because firewall rules restrict communication between the API
The webhook provides extra validations on [namespaces](https://github.com/rancher/webhook/blob/release/v0.4/docs.md#psa-label-validation). One of these validations ensures that users can only update PSA relevant labels if they have the proper permissions (`updatepsa` for `projects` in `management.cattle.io`). This can result in specific operators, such as Tigera or Trident, failing when they attempt to deploy namespaces with PSA labels. There are several ways to resolve this issue:
-- Configure the application to create a namespace with no PSA labels. If users wish to apply a PSA to these namespaces, they can add them to a project with the desired PSA after configuration. See the [docs on PSS and PSA resources](../../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards) for instructions on how.
+- Configure the application to create a namespace with no PSA labels. If users wish to apply a PSA to these namespaces, they can add them to a project with the desired PSA after configuration. See the [docs on PSS and PSA resources](../how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/pod-security-standards.md) for instructions on how.
- This is the preferred option, though not all applications can be configured in this fashion.
- Manually grant the operator permissions to manage PSAs for namespaces.
- This option will introduce security risks, since the operator will now be able to set the PSA for the namespaces it has access to. This could allow the operator to deploy a privileged pod, or effect cluster takeover through other means.