Fix HTML validation errors

Billy Tat
2022-07-15 14:40:00 -07:00
parent 0557a4af10
commit 995d8b67aa
34 changed files with 151 additions and 151 deletions
@@ -15,11 +15,11 @@ For further details on configuring OpenLDAP, refer to the [official documentatio
- [User schema configuration](#user-schema-configuration)
- [Group schema configuration](#group-schema-configuration)
## Background: OpenLDAP Authentication Flow
1. When a user attempts to log in with LDAP credentials, Rancher creates an initial bind to the LDAP server using a service account with permissions to search the directory and read user/group attributes.
2. Rancher then searches the directory for the user, using a search filter based on the provided username and the configured attribute mappings.
3. Once the user has been found, they are authenticated with another LDAP bind request using the user's DN and the provided password.
4. Once authentication succeeds, Rancher resolves group memberships both from the membership attribute in the user's object and by performing a group search based on the configured user mapping attribute.
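As an illustration of steps 1 and 2, the service-account bind and user search can be sketched with an `ldapsearch` query along these lines (the host, bind DN, search base, and attribute names are placeholders, not values from this commit):
```
# bind as the service account, then search for the user by login name
ldapsearch -H ldaps://ldap.example.com \
  -D "cn=rancher-svc,dc=example,dc=com" -W \
  -b "ou=users,dc=example,dc=com" \
  "(&(objectClass=inetOrgPerson)(uid=jdoe))"
```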
# OpenLDAP Server Configuration
@@ -73,7 +73,7 @@ The table below details the parameters for the user schema configuration.
The table below details the parameters for the group schema configuration.
<figcaption>Group Schema Configuration Parameters</figcaption>
| Parameter | Description |
|:--|:--|
@@ -161,7 +161,7 @@ Rancher lets you assign _custom project roles_ to a standard user instead of the
The following table lists each built-in custom project role available in Rancher and whether it is also granted by the `Owner`, `Member`, or `Read Only` role.
| Built-in Project Role | Owner | Member<a id="proj-roles"></a> | Read Only |
| ---------------------------------- | ------------- | ----------------------------- | ------------- |
| Manage Project Members | ✓ | | |
| Create Namespaces | ✓ | ✓ | |
@@ -51,8 +51,8 @@ EKS clusters must have at least one managed node group to be imported into Ranch
1. On the **Clusters** page, click **Import Existing**.
1. Choose the type of cluster.
1. Use **Member Roles** to configure user authorization for the cluster. Click **Add Member** to add users that can access the cluster. Use the **Role** drop-down to set permissions for each user.
1. If you are importing a generic Kubernetes cluster in Rancher, perform the following steps for setup:<br/>
a. Click **Agent Environment Variables** under **Cluster Options** to set environment variables for the [Rancher cluster agent]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/rancher-agents/). The environment variables can be set using key-value pairs. If the Rancher agent requires a proxy to communicate with the Rancher server, the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables can be set using agent environment variables.<br/>
b. Enable Project Network Isolation to ensure the cluster supports Kubernetes `NetworkPolicy` resources. Users can select the **Project Network Isolation** option under the **Advanced Options** dropdown to do so.
1. Click **Create**.
1. The prerequisite for `cluster-admin` privileges is shown (see **Prerequisites** above), including an example command to fulfill the prerequisite.
@@ -62,7 +62,7 @@ EKS clusters must have at least one managed node group to be imported into Ranch
**Result:**
- Your cluster is registered and assigned a state of **Pending**. Rancher is deploying resources to manage your cluster.
- You can access your cluster after its state is updated to **Active**.
- **Active** clusters are assigned two Projects: `Default` (containing the namespace `default`) and `System` (containing the namespaces `cattle-system`, `ingress-nginx`, `kube-public` and `kube-system`, if present).
@@ -247,10 +247,10 @@ Authorized Cluster Endpoint (ACE) support has been added for registered RKE2 and
sudo systemctl stop {rke2,k3s}-server
sudo systemctl start {rke2,k3s}-server
1. Finally, you **must** go back to the Rancher UI and edit the imported cluster there to complete the ACE enablement. Click on **⋮ > Edit Config**, then click the **Networking** tab under Cluster Configuration. Finally, click the **Enabled** button for **Authorized Endpoint**. Once the ACE is enabled, you then have the option of entering a fully qualified domain name (FQDN) and certificate information.
:::note
The <b>FQDN</b> field is optional, and if one is entered, it should point to the downstream cluster. Certificate information is only needed if there is a load balancer in front of the downstream cluster that is using an untrusted certificate. If you have a valid certificate, then nothing needs to be added to the <b>CA Certificates</b> field.
:::
@@ -274,7 +274,7 @@ This example annotation indicates that a pod security policy is enabled:
The following annotation indicates Ingress capabilities. Note that the values of non-primitive objects need to be JSON encoded, with quotations escaped.
```
"capabilities.cattle.io/ingressCapabilities": "[
"capabilities.cattle.io/ingressCapabilities": "[
{
"customDefaultBackend":true,
"ingressProvider":"asdf"
@@ -9,12 +9,12 @@ The following table lists the permissions required for the vSphere user account:
| Privilege Group | Operations |
|:----------------------|:-----------------------------------------------------------------------|
| Datastore | AllocateSpace <br/> Browse <br/> FileManagement (Low level file operations) <br/> UpdateVirtualMachineFiles <br/> UpdateVirtualMachineMetadata |
| Global | Set custom attribute |
| Network | Assign |
| Resource | AssignVMToPool |
| Virtual Machine | Config (All) <br/> GuestOperations (All) <br/> Interact (All) <br/> Inventory (All) <br/> Provisioning (All) |
| vSphere Tagging | Assign or Unassign vSphere Tag <br/> Assign or Unassign vSphere Tag on Object |
The following steps create a role with the required privileges and then assign it to a new user in the vSphere console:
@@ -37,7 +37,7 @@ The following steps create a role with the required privileges and then assign i
7. Create a new Global Permission. Add the user you created earlier and assign it the role you created earlier. Click **OK**.
{{< img "/img/rancher/globalpermissionuser.png" "image" >}}
{{< img "/img/rancher/globalpermissionrole.png" "image" >}}
**Result:** You now have credentials that Rancher can use to manipulate vSphere resources.
@@ -17,30 +17,30 @@ For users looking to use another container runtime, Rancher has the edge-focused
### FAQ
<br/>
Q. Do I have to upgrade Rancher to get Rancher's support of the upstream Dockershim?
A. The upstream support of Dockershim begins for RKE in Kubernetes 1.21. You will need to be on Rancher 2.6 or above to have support for RKE with Kubernetes 1.21. See our [support matrix](https://rancher.com/support-maintenance-terms/all-supported-versions/rancher-v2.6.0/) for details.
<br/>
Q. I am currently on RKE with Kubernetes 1.20. Do I need to upgrade to RKE with Kubernetes 1.21 sooner to avoid being out of support for Dockershim?
A. The version of Dockershim in RKE with Kubernetes 1.20 will continue to work and is not scheduled for removal upstream until Kubernetes 1.24. It will only emit a warning of its future deprecation, which Rancher has mitigated in RKE with Kubernetes 1.21. You can plan your upgrade to Kubernetes 1.21 as you would normally, but should consider enabling the external Dockershim by Kubernetes 1.22. The external Dockershim will need to be enabled before upgrading to Kubernetes 1.24, at which point the existing implementation will be removed.
For more information on the deprecation and its timeline, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed).
<br/>
Q: What are my other options if I don't want to depend on the Dockershim?
A: You can use a runtime like containerd with Kubernetes that does not require Dockershim support. RKE2 or K3s are two options for doing this.
<br/>
Q: If I am already using RKE1 and want to switch to RKE2, what are my migration options?
A: Rancher is exploring the possibility of an in-place upgrade path. Alternatively, you can always migrate workloads from one cluster to another using kubectl.
<br/>
@@ -7,25 +7,25 @@ This FAQ is a work in progress designed to answer the questions our users most
See the [Technical FAQ]({{<baseurl>}}/rancher/v2.6/en/faq/technical/) for frequently asked technical questions.
<br/>
**Does Rancher v2.x support Docker Swarm and Mesos as environment types?**
When creating an environment in Rancher v2.x, Swarm and Mesos will no longer be standard options you can select. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make but, in the end, it came down to adoption. For example, out of more than 15,000 clusters, only about 200 or so are running Swarm.
<br/>
**Is it possible to manage Azure Kubernetes Services with Rancher v2.x?**
Yes.
<br/>
**Does Rancher support Windows?**
As of Rancher 2.3.0, we support Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/windows-clusters/)
<br/>
**Does Rancher support Istio?**
@@ -33,37 +33,37 @@ As of Rancher 2.3.0, we support [Istio.]({{<baseurl>}}/rancher/v2.6/en/istio/)
Furthermore, Istio is implemented in our micro-PaaS "Rio", which works on Rancher 2.x along with any CNCF compliant Kubernetes cluster. You can read more about it [here](https://rio.io/).
<br/>
**Will Rancher v2.x support Hashicorp's Vault for storing secrets?**
Secrets management is on our roadmap but we haven't assigned it to a specific release yet.
<br/>
**Does Rancher v2.x support RKT containers as well?**
At this time, we only support Docker.
<br/>
**Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and registered Kubernetes?**
Out-of-the-box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico and Weave. Always refer to the [Rancher Support Matrix](https://rancher.com/support-maintenance-terms/) for details about what is officially supported.
<br/>
**Are you planning on supporting Traefik for existing setups?**
We don't currently plan on providing embedded Traefik support, but we're still exploring load-balancing approaches.
<br/>
**Can I import OpenShift Kubernetes clusters into v2.x?**
Our goal is to run any upstream Kubernetes clusters. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet.
<br/>
**Are you going to integrate Longhorn?**
@@ -46,7 +46,7 @@ CNI network providers using this network model include Calico and Cilium. Cilium
### RKE Kubernetes clusters
Out-of-the-box, Rancher provides the following CNI network providers for RKE Kubernetes clusters: Canal, Flannel, and Weave.
You can choose your CNI network provider when you create new Kubernetes clusters from Rancher.
@@ -95,7 +95,7 @@ For more information, see the following pages:
### RKE2 Kubernetes clusters
Out-of-the-box, Rancher provides the following CNI network providers for RKE2 Kubernetes clusters: [Canal](#canal) (see above section), Calico, and Cilium.
You can choose your CNI network provider when you create new Kubernetes clusters from Rancher.
@@ -113,7 +113,7 @@ Kubernetes workers should open TCP port `179` if using BGP or UDP port `4789` if
In Rancher v2.6.3, Calico probes fail on Windows nodes upon RKE2 installation. <b>Note that this issue is resolved in v2.6.4.</b>
- To work around this issue, first navigate to `https://<rancherserverurl>/v3/settings/windows-rke2-install-script`.
- There, change the current setting: `https://raw.githubusercontent.com/rancher/wins/v0.1.3/install.ps1` to this new setting: `https://raw.githubusercontent.com/rancher/rke2/master/windows/rke2-install.ps1`.
@@ -130,12 +130,12 @@ For more information, see the following pages:
![Cilium Logo]({{<baseurl>}}/img/rancher/cilium-logo.png)
Cilium enables networking and network policies (L3, L4, and L7) in Kubernetes. By default, Cilium uses eBPF technologies to route packets inside the node and VXLAN to send packets to other nodes. Unencapsulated techniques can also be configured.
Cilium recommends kernel versions greater than 5.2 to be able to leverage the full potential of eBPF. Kubernetes workers should open TCP port `8472` for VXLAN and TCP port `4240` for health checks. In addition, ICMP 8/0 must be enabled for health checks. For more information, check [Cilium System Requirements](https://docs.cilium.io/en/latest/operations/system_requirements/#firewall-requirements).
##### Ingress Routing Across Nodes in Cilium
<br/>
By default, Cilium does not allow pods to contact pods on other nodes. To work around this, enable the ingress controller to route requests across nodes with a `CiliumNetworkPolicy`.
After selecting the Cilium CNI and enabling Project Network Isolation for your new cluster, configure as follows:
@@ -147,7 +147,7 @@ metadata:
name: hn-nodes
namespace: default
spec:
endpointSelector: {}
ingress:
- fromEntities:
- remote-node
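Pieced together, the complete policy shown in this hunk would look roughly like the following; the `apiVersion`/`kind` header sits outside the quoted hunk and is assumed from upstream Cilium conventions:
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: hn-nodes
  namespace: default
spec:
  endpointSelector: {}   # select every endpoint in the namespace
  ingress:
  - fromEntities:
    - remote-node        # accept traffic arriving from other nodes
```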
@@ -8,7 +8,7 @@ weight: 8007
The Hardening Guide is now located in the main [Security]({{<baseurl>}}/rancher/v2.6/en/security/) section.
<br/>
**What are the results of Rancher's Kubernetes cluster when it is CIS benchmarked?**
@@ -72,7 +72,7 @@ Before you create your own custom catalog, you should have a basic understanding
<figcaption>Rancher Chart with <code>questions.yml</code> (top) vs. Helm Chart without (bottom)</figcaption>
![questions.yml]({{<baseurl>}}/img/rancher/rancher-app-2.6.png)
![values.yaml]({{<baseurl>}}/img/rancher/helm-app-2.6.png)
@@ -126,7 +126,7 @@ This reference contains variables that you can use in `questions.yml` nested und
| max_length | int | false | Max character length.|
| min | int | false | Min integer value. |
| max | int | false | Max integer value. |
| options | []string | false | Specify the options when the variable type is `enum`, for example: options:<br/> - "ClusterIP" <br/> - "NodePort" <br/> - "LoadBalancer"|
| valid_chars | string | false | Regular expression for input chars validation. |
| invalid_chars | string | false | Regular expression for invalid input chars validation.|
| subquestions | []subquestion | false| Add an array of subquestions.|
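For instance, an `enum` question using the `options` field from the row above might be declared like this in `questions.yml` (the variable name, label, and default are made up for illustration):
```yaml
questions:
- variable: service.type      # hypothetical chart value being prompted for
  type: enum
  label: Service Type
  default: "ClusterIP"
  options:
  - "ClusterIP"
  - "NodePort"
  - "LoadBalancer"
```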
@@ -38,7 +38,7 @@ Any major versions that are less than the ones mentioned in the table below are
| rancher-wins-upgrader | 0.0.100 | 100.0.0+up0.0.1 |
| neuvector | 100.0.0+up2.2.0 | 100.0.0+up2.2.0 |
<br/>
**Charts based on upstream:** For charts that are based on upstreams, the +up annotation should inform you of which upstream version the Rancher chart is tracking. Also check the upstream version's compatibility with Rancher during upgrades.
- As an example, `100.x.x+up16.6.0` for Monitoring tracks upstream kube-prometheus-stack `16.6.0` with some Rancher patches added to it.
@@ -72,7 +72,7 @@ These items represent helm repositories, and can be either traditional helm endp
To add a private CA for Helm Chart repositories:
- **HTTP-based chart repositories**: You must add a base64 encoded copy of the CA certificate in DER format to the spec.caBundle field of the chart repo, such as `openssl x509 -outform der -in ca.pem | base64 -w0`. Click **Edit YAML** for the chart repo and set, as in the following example:<br/>
```
[...]
spec:
@@ -84,7 +84,7 @@ To add a private CA for Helm Chart repositories:
```
- **Git-based chart repositories**: You must add a base64 encoded copy of the CA certificate in DER format to the spec.caBundle field of the chart repo, such as `openssl x509 -outform der -in ca.pem | base64 -w0`. Click **Edit YAML** for the chart repo and set, as in the following example:<br/>
```
[...]
spec:
  caBundle: <base64-encoded DER certificate from the command above>
[...]
```
@@ -60,8 +60,8 @@ If you want to terminate SSL/TLS externally, see [TLS termination on an External Lo
| Configuration | Chart option | Description | Requires cert-manager |
| ------------------------------------------ | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- |
| Rancher Generated Self-Signed Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed)<br/> This is the **default** and does not need to be added when rendering the Helm template. | yes |
| Certificates from Files | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s). <br/> This option must be passed when rendering the Rancher Helm template. | no |
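As a usage sketch, passing the second option while rendering the chart could look like the following; the repo alias, namespace, and hostname are placeholders, not values from this commit:
```
helm template rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set ingress.tls.source=secret
```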
# Helm Chart Options for Air Gap Installations
@@ -17,28 +17,28 @@ For users looking to use another container runtime, Rancher has the edge-focused
### FAQ
<br/>
Q. Do I have to upgrade Rancher to get Rancher's support of the upstream Dockershim?
A. The upstream support of Dockershim begins for RKE in Kubernetes 1.21. You will need to be on a version of Rancher that supports RKE with Kubernetes 1.21. See our support matrix for details.
<br/>
Q. I am currently on RKE with Kubernetes 1.20. Do I need to upgrade to RKE with Kubernetes 1.21 sooner to avoid being out of support for Dockershim?
A. The version of Dockershim in RKE with Kubernetes 1.20 will continue to work and it is not deprecated until a later release. For information on the timeline, see the [Kubernetes Dockershim Deprecation FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/#when-will-dockershim-be-removed). It will only emit a warning of its future deprecation, which Rancher has mitigated in RKE with Kubernetes 1.21. You can plan your upgrade to 1.21 as you would normally.
<br/>
Q: What are my other options if I don't want to depend on the Dockershim?
A: You can use a runtime like containerd with Kubernetes that does not require Dockershim support. RKE2 or K3s are two options for doing this.
<br/>
Q: If I am already using RKE1 and want to switch to RKE2, what are my migration options?
A: Today, you can stand up a new cluster and migrate workloads to a new RKE2 cluster that uses containerd. Rancher is exploring the possibility of an in-place upgrade path.
<br/>
@@ -21,7 +21,7 @@ The usage below defines rules about what the audit log should record and what da
| Parameter | Description |
| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| <a id="audit-level"></a>`AUDIT_LEVEL` | `0` - Disable audit log (default setting).<br/>`1` - Log event metadata.<br/>`2` - Log event metadata and request body.<br/>`3` - Log event metadata, request body, and response body. Each log transaction for a request/response pair uses the same `auditID` value.<br/><br/>See [Audit Level Logging](#audit-log-levels) for a table that displays what each setting logs. |
| `AUDIT_LOG_PATH` | Log path for Rancher Server API. Default path is `/var/log/auditlog/rancher-api-audit.log`. You can mount the log directory to host. <br/><br/>Usage Example: `AUDIT_LOG_PATH=/my/custom/path/`<br/> |
| `AUDIT_LOG_MAXAGE` | Defines the maximum number of days to retain old audit log files. Default is 10 days. |
| `AUDIT_LOG_MAXBACKUP` | Defines the maximum number of audit log files to retain. Default is 10. |
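As a usage sketch, these variables are passed to the Rancher server container at startup; for a Docker-based install that might look like the following (the image tag and chosen values are illustrative):
```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e AUDIT_LEVEL=2 \
  -e AUDIT_LOG_MAXAGE=20 \
  rancher/rancher:latest
```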
@@ -101,12 +101,12 @@ Select the target group named **rancher-tcp-443**, click the tab **Targets** and
Select the instances (Linux nodes) you want to add, and click **Add to registered**.
***
**Screenshot Add targets to target group TCP port 443**<br/>
{{< img "/img/rancher/ha/nlb/add-targets-targetgroup-443.png" "Add targets to target group 443">}}
***
**Screenshot Added targets to target group TCP port 443**<br/>
{{< img "/img/rancher/ha/nlb/added-targets-targetgroup-443.png" "Added targets to target group 443">}}
@@ -175,7 +175,7 @@ After AWS creates the NLB, click **Close**.
K3s and RKE Kubernetes clusters handle health checks differently because they use different Ingresses by default.
For RKE Kubernetes clusters, NGINX Ingress is used by default, whereas for K3s Kubernetes clusters, Traefik is the default Ingress.
- **Traefik:** The health check path is `/ping`. By default `/ping` is always matched (regardless of Host), and a response from [Traefik itself](https://docs.traefik.io/operations/ping/) is always served.
- **NGINX Ingress:** The default backend of the NGINX Ingress controller has a `/healthz` endpoint. By default `/healthz` is always matched (regardless of Host), and a response from [`ingress-nginx` itself](https://github.com/kubernetes/ingress-nginx/blob/0cbe783f43a9313c9c26136e888324b1ee91a72f/charts/ingress-nginx/values.yaml#L212) is always served.
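A quick way to exercise these endpoints from a machine that can reach the nodes is a plain `curl` (a sketch; substitute a real node address for the placeholder):
```
# Traefik (K3s default Ingress)
curl -i http://<node-ip>/ping
# NGINX Ingress (RKE default Ingress)
curl -i http://<node-ip>/healthz
```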
@@ -5,7 +5,7 @@ weight: 3
This section describes the permissions required to access Istio features.
The Rancher Istio chart installs three `ClusterRoles`.
## Cluster-Admin Access
@@ -30,15 +30,15 @@ Istio creates three `ClusterRoles` and adds Istio CRD access to the following de
ClusterRole created by chart | Default K8s ClusterRole | Rancher Role |
------------------------------:| ---------------------------:|---------:|
`istio-admin` | admin| Project Owner |
`istio-edit`| edit | Project Member |
`istio-view` | view | Read-only |
Rancher will continue to use cluster-owner, cluster-member, project-owner, project-member, etc. as role names, but will utilize default roles to determine access. For each default K8s `ClusterRole` there are different Istio CRD permissions and K8s actions (Create ( C ), Get ( G ), List ( L ), Watch ( W ), Update ( U ), Patch ( P ), Delete( D ), All ( * )) that can be performed.
|CRDs | Admin | Edit | View
|----------------------------| ------| -----| -----
| <ul><li>`config.istio.io`</li><ul><li>`adapters`</li><li>`attributemanifests`</li><li>`handlers`</li><li>`httpapispecbindings`</li><li>`httpapispecs`</li><li>`instances`</li><li>`quotaspecbindings`</li><li>`quotaspecs`</li><li>`rules`</li><li>`templates`</li></ul></ul>| GLW | GLW | GLW
|<ul><li>`networking.istio.io`</li><ul><li>`destinationrules`</li><li>`envoyfilters`</li><li>`gateways`</li><li>`serviceentries`</li><li>`sidecars`</li><li>`virtualservices`</li><li>`workloadentries`</li></ul></ul>| * | * | GLW
|<ul><li>`security.istio.io`</li><ul><li>`authorizationpolicies`</li><li>`peerauthentications`</li><li>`requestauthentications`</li></ul></ul>| * | * | GLW
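One way to spot-check these mappings from a given user's kubeconfig is `kubectl auth can-i` (a sketch; the resource and namespace are examples):
```
# a Project Member (istio-edit) should be allowed to create virtual services
kubectl auth can-i create virtualservices.networking.istio.io -n default
# a Read-only user (istio-view) should get "no" for writes
kubectl auth can-i delete virtualservices.networking.istio.io -n default
```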
@@ -184,7 +184,7 @@ spec:
1. Generate a load for the service to test that your pods autoscale as intended. You can use any load-testing tool (Hey, Gatling, etc.), but we're using [Hey](https://github.com/rakyll/hey).
1. Test that pod autoscaling works as intended.<br/><br/>
**To Test Autoscaling Using Resource Metrics:**
{{% accordion id="observe-upscale-2-pods-cpu" label="Upscale to 2 Pods: CPU Usage Up to Target" %}}
Use your load testing tool to scale up to two pods based on CPU Usage.
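For example, a short `hey` run against the service could generate the CPU load (a sketch; the URL and flag values are placeholders to adapt):
```
# 50 concurrent workers for 5 minutes against the service endpoint
hey -z 5m -c 50 http://<service-cluster-ip>:<port>/
```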
@@ -130,7 +130,7 @@ PLUGIN_MIRROR | Docker daemon registry mirror
PLUGIN_INSECURE | Docker daemon allows insecure registries
PLUGIN_BUILD_ARGS | Docker build args, a comma separated list
<br/>
```yaml
# This example shows an environment variable being used
@@ -312,7 +312,7 @@ You can enable notifications to any notifiers based on the build status of a pip
1. If you don't have any existing notifiers, Rancher will provide a warning that no notifiers are set up and provide a link to be able to go to the notifiers page. Follow the [instructions]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-admin/tools/notifiers) to add a notifier. If you already have notifiers, you can add them to the notification by clicking the **Add Recipient** button.
:::note
Notifiers are configured at a cluster level and require a different level of permissions.
:::
@@ -506,7 +506,7 @@ If you need to use security-sensitive information in your pipeline scripts (like
### Prerequisite
Create a secret in the same project as your pipeline, or explicitly in the namespace where pipeline build pods run.
<br/>
:::note
@@ -564,13 +564,13 @@ Variable Name | Description
# Global Pipeline Execution Settings
After configuring a version control provider, there are several options that can be configured globally on how pipelines are executed in Rancher.
### Changing Pipeline Settings
:::note Prerequisite:
Because the pipelines app was deprecated in favor of Fleet, you will need to turn on the feature flag for legacy
features before using pipelines. Note that pipelines in Kubernetes 1.21+ are no longer supported.
1. In the upper left corner, click **☰ > Global Settings**.
@@ -639,7 +639,7 @@ Rancher sets default compute resources for pipeline steps except for `Build and
:::
### Custom CA
If you want to use a version control provider with a certificate from a custom/internal CA root, the CA root certificates need to be added as part of the version control provider configuration in order for the pipeline build pods to succeed.
@@ -105,7 +105,7 @@ Rancher lets you assign _custom project roles_ to a standard user instead of the
The following table lists each built-in custom project role available in Rancher and whether it is also granted by the `Owner`, `Member`, or `Read Only` role.
| Built-in Project Role | Owner | Member<a id="proj-roles"></a> | Read Only |
| ---------------------------------- | ------------- | ----------------------------- | ------------- |
| Manage Project Members | ✓ | | |
| Create Namespaces | ✓ | ✓ | |
@@ -9,25 +9,25 @@ This FAQ is a work in progress designed to answer the questions our users most
See the [Technical FAQ]({{<baseurl>}}/rancher/v2.0-v2.4/en/faq/technical/) for frequently asked technical questions.
<br/>
**Does Rancher v2.x support Docker Swarm and Mesos as environment types?**
When creating an environment in Rancher v2.x, Swarm and Mesos will no longer be standard options you can select. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make but, in the end, it came down to adoption. For example, out of more than 15,000 clusters, only about 200 or so are running Swarm.
<br/>
**Is it possible to manage Azure Kubernetes Services with Rancher v2.x?**
Yes.
<br/>
**Does Rancher support Windows?**
As of Rancher 2.3.0, we support Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.]({{<baseurl>}}/rancher/v2.0-v2.4/en/cluster-provisioning/rke-clusters/windows-clusters/)
<br/>
**Does Rancher support Istio?**
@@ -35,37 +35,37 @@ As of Rancher 2.3.0, we support [Istio.]({{<baseurl>}}/rancher/v2.0-v2.4/en/clus
Furthermore, Istio is implemented in our micro-PaaS "Rio", which works on Rancher 2.x along with any CNCF compliant Kubernetes cluster. You can read more about it [here](https://rio.io/).
<br/>
**Will Rancher v2.x support Hashicorp's Vault for storing secrets?**
Secrets management is on our roadmap but we haven't assigned it to a specific release yet.
<br/>
**Does Rancher v2.x support RKT containers as well?**
At this time, we only support Docker.
<br/>
**Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and imported Kubernetes?**
Out-of-the-box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico and Weave (Weave is available as of v2.2.0). Always refer to the [Rancher Support Matrix](https://rancher.com/support-maintenance-terms/) for details about what is officially supported.
<br/>
**Are you planning on supporting Traefik for existing setups?**
We don't currently plan on providing embedded Traefik support, but we're still exploring load-balancing approaches.
<br/>
**Can I import OpenShift Kubernetes clusters into v2.x?**
Our goal is to run any upstream Kubernetes clusters. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet.
<br/>
**Are you going to integrate Longhorn?**
@@ -8,7 +8,7 @@ weight: 8007
The Hardening Guide is now located in the main [Security]({{<baseurl>}}/rancher/v2.0-v2.4/en/security/) section.
<br/>
**What are the results of Rancher's Kubernetes cluster when it is CIS benchmarked?**
@@ -59,8 +59,8 @@ When Rancher is installed on an air gapped Kubernetes cluster, there are two rec
| Configuration | Chart option | Description | Requires cert-manager |
| ------------------------------------------ | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- |
| Rancher Generated Self-Signed Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed)<br/> This is the **default** and does not need to be added when rendering the Helm template. | yes |
| Certificates from Files | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s). <br/> This option must be passed when rendering the Rancher Helm template. | no |
# 3. Render the Rancher Helm Template
@@ -24,7 +24,7 @@ The usage below defines rules about what the audit log should record and what da
| Parameter | Description |
| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| <a id="audit-level"></a>`AUDIT_LEVEL` | `0` - Disable audit log (default setting).<br/>`1` - Log event metadata.<br/>`2` - Log event metadata and request body.<br/>`3` - Log event metadata, request body, and response body. Each log transaction for a request/response pair uses the same `auditID` value.<br/><br/>See [Audit Level Logging](#audit-log-levels) for a table that displays what each setting logs. |
| `AUDIT_LOG_PATH` | Log path for Rancher Server API. Default path is `/var/log/auditlog/rancher-api-audit.log`. You can mount the log directory to host. <br/><br/>Usage Example: `AUDIT_LOG_PATH=/my/custom/path/`<br/> |
| `AUDIT_LOG_MAXAGE` | Defines the maximum number of days to retain old audit log files. Default is 10 days. |
| `AUDIT_LOG_MAXBACKUP` | Defines the maximum number of audit log files to retain. Default is 10. |
@@ -134,7 +134,7 @@ PLUGIN_MIRROR | Docker daemon registry mirror
PLUGIN_INSECURE | Docker daemon allows insecure registries
PLUGIN_BUILD_ARGS | Docker build args, a comma separated list
<br/>
```yaml
# This example shows an environment variable being used
@@ -526,7 +526,7 @@ If you need to use security-sensitive information in your pipeline scripts (like
### Prerequisite
Create a secret in the same project as your pipeline, or explicitly in the namespace where pipeline build pods run.
<br/>
>**Note:** Secret injection is disabled on [pull request events](#triggers-and-trigger-rules).
@@ -637,7 +637,7 @@ stages:
>**Note:** Rancher sets default compute resources for pipeline steps except for `Build and Publish Images` and `Run Script` steps. You can override the default value by specifying compute resources in the same way.
### Custom CA
_Available as of v2.2.0_
@@ -106,7 +106,7 @@ Rancher lets you assign _custom project roles_ to a standard user instead of the
The following table lists each built-in custom project role available in Rancher and whether it is also granted by the `Owner`, `Member`, or `Read Only` role.
| Built-in Project Role | Owner | Member<a id="proj-roles"></a> | Read Only |
| ---------------------------------- | ------------- | ----------------------------- | ------------- |
| Manage Project Members | ✓ | | |
| Create Namespaces | ✓ | ✓ | |
@@ -98,7 +98,7 @@ EKS clusters must have at least one managed node group to be imported into Ranch
**Result:**
- Your cluster is registered and assigned a state of **Pending.** Rancher is deploying resources to manage your cluster.
- You can access your cluster after its state is updated to **Active.**
- **Active** clusters are assigned two Projects: `Default` (containing the namespace `default`) and `System` (containing the namespaces `cattle-system`, `ingress-nginx`, `kube-public` and `kube-system`, if present).
@@ -12,10 +12,10 @@ The following table lists the permissions required for the vSphere user account:
| Privilege Group | Operations |
|:----------------------|:-----------------------------------------------------------------------|
| Datastore | AllocateSpace <br/> Browse <br/> FileManagement (Low level file operations) <br/> UpdateVirtualMachineFiles <br/> UpdateVirtualMachineMetadata |
| Network | Assign |
| Resource | AssignVMToPool |
| Virtual Machine | Config (All) <br/> GuestOperations (All) <br/> Interact (All) <br/> Inventory (All) <br/> Provisioning (All) |
The following steps create a role with the required privileges and then assign it to a new user in the vSphere console:
@@ -38,7 +38,7 @@ The following steps create a role with the required privileges and then assign i
7. Create a new Global Permission. Add the user you created earlier and assign it the role you created earlier. Click **OK**.
{{< img "/img/rancher/globalpermissionuser.png" "image" >}}
{{< img "/img/rancher/globalpermissionrole.png" "image" >}}
**Result:** You now have credentials that Rancher can use to manipulate vSphere resources.
@@ -30,16 +30,16 @@ Fleet comes preinstalled in Rancher v2.5. Users can leverage continuous delivery
Follow the steps below to access Continuous Delivery in the Rancher UI:
1. Click **Cluster Explorer** in the Rancher UI.
1. In the top left dropdown menu, click **Cluster Explorer > Continuous Delivery.**
1. Select your namespace at the top of the menu, noting the following:
- By default, `fleet-default` is selected, which includes all downstream clusters that are registered through Rancher.
- You may switch to `fleet-local`, which only contains the `local` cluster, or you may create your own workspace to which you may assign and move clusters.
- You can then manage clusters by clicking on **Clusters** on the left navigation bar.
1. Click on **Gitrepos** on the left navigation bar to deploy the gitrepo into your clusters in the current workspace.
1. Select your [git repository](https://fleet.rancher.io/gitrepo-add/) and [target clusters/cluster group](https://fleet.rancher.io/gitrepo-structure/). You can also create the cluster group in the UI by clicking on **Cluster Groups** from the left navigation bar.
@@ -71,23 +71,23 @@ The Helm chart in the git repository must include its dependencies in the charts
# Troubleshooting
---
* **Known Issue:** Fleet becomes inoperable after a restore using the [backup-restore-operator]({{<baseurl>}}/rancher/v2.5/en/backups/back-up-rancher/#1-install-the-rancher-backup-operator). We will update the community once a permanent solution is in place.
* **Temporary Workaround:** <br/>
1. Find the two service account tokens listed in the fleet-controller and the fleet-controller-bootstrap service accounts. These are under the fleet-system namespace of the local cluster. <br/>
2. Remove the non-existent token secret. Doing so allows for only one entry to be present for the service account token secret that actually exists. <br/>
3. Delete the fleet-controller Pod in the fleet-system namespace to reschedule. <br/>
4. After the service account token issue is resolved, you can force redeployment of the fleet-agents. In the Rancher UI, go to **☰ > Cluster Management**, click on the **Clusters** page, then click **Force Update**. <br/>
5. If the fleet-agent bundles remain in a `Modified` state after Step 4, update the field `spec.forceSyncGeneration` for the fleet-agent bundle to force re-creation.
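A hedged `kubectl` sketch of steps 1 through 3 (the label selector is an assumption; adjust it to match what `kubectl -n fleet-system get pods --show-labels` reports):
```
# inspect the two service accounts and their token secrets
kubectl -n fleet-system get serviceaccount fleet-controller fleet-controller-bootstrap -o yaml
# delete the fleet-controller pod so it gets rescheduled
kubectl -n fleet-system delete pod -l app=fleet-controller
```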
---
* **Known Issue:** clientSecretName and helmSecretName secrets for Fleet gitrepos are not included in the backup or restore created by the [backup-restore-operator]({{<baseurl>}}/rancher/v2.5/en/backups/back-up-rancher/#1-install-the-rancher-backup-operator). We will update the community once a permanent solution is in place.
* **Temporary Workaround:** <br/>
By default, user-defined secrets are not backed up in Fleet. It is necessary to recreate secrets when performing a disaster recovery restore or migration of Rancher into a fresh cluster. To modify the resourceSet to include extra resources you want to back up, refer to the docs [here](https://github.com/rancher/backup-restore-operator#user-flow).
---
# Documentation
The Fleet documentation is at [https://fleet.rancher.io/](https://fleet.rancher.io/).
@@ -10,25 +10,25 @@ This FAQ is a work in progress designed to answer the questions our users most
See the [Technical FAQ]({{<baseurl>}}/rancher/v2.5/en/faq/technical/) for frequently asked technical questions.
<br/>
**Does Rancher v2.x support Docker Swarm and Mesos as environment types?**
When creating an environment in Rancher v2.x, Swarm and Mesos will no longer be standard options you can select. However, both Swarm and Mesos will continue to be available as Catalog applications you can deploy. It was a tough decision to make but, in the end, it came down to adoption. For example, out of more than 15,000 clusters, only about 200 or so are running Swarm.
<br/>
**Is it possible to manage Azure Kubernetes Services with Rancher v2.x?**
Yes.
<br/>
**Does Rancher support Windows?**
As of Rancher 2.3.0, we support Windows Server 1809 containers. For details on how to set up a cluster with Windows worker nodes, refer to the section on [configuring custom clusters for Windows.]({{<baseurl>}}/rancher/v2.5/en/cluster-provisioning/rke-clusters/windows-clusters/)
<br/>
**Does Rancher support Istio?**
@@ -36,37 +36,37 @@ As of Rancher 2.3.0, we support [Istio.]({{<baseurl>}}/rancher/v2.5/en/istio/)
Furthermore, Istio is implemented in our micro-PaaS "Rio", which works on Rancher 2.x along with any CNCF compliant Kubernetes cluster. You can read more about it [here](https://rio.io/).
<br/>
**Will Rancher v2.x support Hashicorp's Vault for storing secrets?**
There is no built-in integration of Rancher and Hashicorp's Vault. Rancher manages Kubernetes and integrates with secrets via the Kubernetes API. Thus in any downstream (managed) cluster, you can use a secret vault of your choice provided it integrates with Kubernetes, including [Vault](https://www.vaultproject.io/docs/platform/k8s).
<br/>
**Does Rancher v2.x support RKT containers as well?**
At this time, we only support Docker.
<br/>
**Does Rancher v2.x support Calico, Contiv, Contrail, Flannel, Weave net, etc., for embedded and registered Kubernetes?**
Out-of-the-box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico and Weave. Always refer to the [Rancher Support Matrix](https://rancher.com/support-maintenance-terms/) for details about what is officially supported.
<br/>
**Are you planning on supporting Traefik for existing setups?**
We don't currently plan on providing embedded Traefik support, but we're still exploring load-balancing approaches.
<br/>
**Can I import OpenShift Kubernetes clusters into v2.x?**
Our goal is to run any upstream Kubernetes clusters. Therefore, Rancher v2.x should work with OpenShift, but we haven't tested it yet.
<br/>
**Are you going to integrate Longhorn?**
@@ -9,7 +9,7 @@ aliases:
The Hardening Guide is now located in the main [Security]({{<baseurl>}}/rancher/v2.5/en/security/) section.
<br/>
**What are the results of Rancher's Kubernetes cluster when it is CIS benchmarked?**
@@ -52,7 +52,7 @@ These items represent helm repositories, and can be either traditional helm endp
To add a private CA for Helm Chart repositories:
- **HTTP-based chart repositories**: You must add a base64 encoded copy of the CA certificate in DER format to the spec.caBundle field of the chart repo, such as `openssl x509 -outform der -in ca.pem | base64 -w0`. Click **Edit YAML** for the chart repo and set, as in the following example:<br/>
```
[...]
spec:
@@ -63,7 +63,7 @@ To add a private CA for Helm Chart repositories:
[...]
```
- **Git-based chart repositories**: It is not currently possible to add a private CA. For git-based chart repositories with a certificate signed by a private CA, you must disable TLS verification. Click **Edit YAML** for the chart repo, and add the key/value pair as follows:
```
[...]
spec:
@@ -73,7 +73,7 @@ To add a private CA for Helm Chart repositories:
> **Note:** Helm chart repositories with authentication
>
> As of Rancher v2.5.12, a new value `disableSameOriginCheck` has been added to the Repo.Spec. This allows users to bypass the same origin checks, sending the repository Authentication information as a Basic Auth Header with all API calls. This is not recommended but can be used as a temporary solution in cases of non-standard Helm chart repositories such as those that have redirects to a different origin URL.
>
> To use this feature for an existing Helm chart repository, click <b>⋮ > Edit YAML</b>. On the `spec` portion of the YAML file, add `disableSameOriginCheck` and set it to `true`.
>
@@ -82,7 +82,7 @@ To add a private CA for Helm Chart repositories:
spec:
disableSameOriginCheck: true
[...]
```
### Helm Compatibility
@@ -67,8 +67,8 @@ When Rancher is installed on an air gapped Kubernetes cluster, there are two rec
| Configuration | Chart option | Description | Requires cert-manager |
| ------------------------------------------ | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- |
| Rancher Generated Self-Signed Certificates | `ingress.tls.source=rancher` | Use certificates issued by Rancher's generated CA (self signed)<br/> This is the **default** and does not need to be added when rendering the Helm template. | yes |
| Certificates from Files | `ingress.tls.source=secret` | Use your own certificate files by creating Kubernetes Secret(s). <br/> This option must be passed when rendering the Rancher Helm template. | no |
# Helm Chart Options for Air Gap Installations
@@ -25,7 +25,7 @@ The usage below defines rules about what the audit log should record and what da
| Parameter | Description |
| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| <a id="audit-level"></a>`AUDIT_LEVEL` | `0` - Disable audit log (default setting).<br/>`1` - Log event metadata.<br/>`2` - Log event metadata and request body.<br/>`3` - Log event metadata, request body, and response body. Each log transaction for a request/response pair uses the same `auditID` value.<br/><br/>See [Audit Level Logging](#audit-log-levels) for a table that displays what each setting logs. |
| `AUDIT_LOG_PATH` | Log path for Rancher Server API. Default path is `/var/log/auditlog/rancher-api-audit.log`. You can mount the log directory to host. <br/><br/>Usage Example: `AUDIT_LOG_PATH=/my/custom/path/`<br/> |
| `AUDIT_LOG_MAXAGE` | Defines the maximum number of days to retain old audit log files. Default is 10 days. |
| `AUDIT_LOG_MAXBACKUP` | Defines the maximum number of audit log files to retain. Default is 10. |
@@ -9,7 +9,7 @@ aliases:
This section describes the permissions required to access Istio features.
The Rancher Istio chart installs three `ClusterRoles`.
## Cluster-Admin Access
@@ -34,15 +34,15 @@ Istio creates three `ClusterRoles` and adds Istio CRD access to the following de
ClusterRole created by chart | Default K8s ClusterRole | Rancher Role |
------------------------------:| ---------------------------:|---------:|
`istio-admin` | admin| Project Owner |
`istio-edit`| edit | Project Member |
`istio-view` | view | Read-only |
Rancher will continue to use cluster-owner, cluster-member, project-owner, project-member, etc. as role names, but will utilize default roles to determine access. For each default K8s `ClusterRole` there are different Istio CRD permissions and K8s actions (Create ( C ), Get ( G ), List ( L ), Watch ( W ), Update ( U ), Patch ( P ), Delete( D ), All ( * )) that can be performed.
|CRDs | Admin | Edit | View
|----------------------------| ------| -----| -----
| <ul><li>`config.istio.io`</li><ul><li>`adapters`</li><li>`attributemanifests`</li><li>`handlers`</li><li>`httpapispecbindings`</li><li>`httpapispecs`</li><li>`instances`</li><li>`quotaspecbindings`</li><li>`quotaspecs`</li><li>`rules`</li><li>`templates`</li></ul></ul>| GLW | GLW | GLW
|<ul><li>`networking.istio.io`</li><ul><li>`destinationrules`</li><li>`envoyfilters`</li><li>`gateways`</li><li>`serviceentries`</li><li>`sidecars`</li><li>`virtualservices`</li><li>`workloadentries`</li></ul></ul>| * | * | GLW
|<ul><li>`security.istio.io`</li><ul><li>`authorizationpolicies`</li><li>`peerauthentications`</li><li>`requestauthentications`</li></ul></ul>| * | * | GLW
@@ -133,7 +133,7 @@ PLUGIN_MIRROR | Docker daemon registry mirror
PLUGIN_INSECURE | Docker daemon allows insecure registries
PLUGIN_BUILD_ARGS | Docker build args, a comma separated list
<br/>
```yaml
# This example shows an environment variable being used
@@ -518,7 +518,7 @@ If you need to use security-sensitive information in your pipeline scripts (like
### Prerequisite
Create a secret in the same project as your pipeline, or explicitly in the namespace where pipeline build pods run.
<br/>
>**Note:** Secret injection is disabled on [pull request events](#triggers-and-trigger-rules).
@@ -627,7 +627,7 @@ stages:
>**Note:** Rancher sets default compute resources for pipeline steps except for `Build and Publish Images` and `Run Script` steps. You can override the default value by specifying compute resources in the same way.
### Custom CA
If you want to use a version control provider with a certificate from a custom/internal CA root, the CA root certificates need to be added as part of the version control provider configuration in order for the pipeline build pods to succeed.