---
title: Advanced
weight: 1000
---

The documents in this section contain resources for less common use cases.
---
title: Enabling the API Audit Log to Record System Events
weight: 4
---

You can enable the API audit log to record the sequence of system events initiated by individual users: what happened, when it happened, who initiated it, and which cluster it affected. When you enable this feature, all requests to the Rancher API and all responses from it are written to a log.

You can enable API auditing during Rancher installation or upgrade.

## Enabling API Audit Log

The audit log is enabled and configured by passing environment variables to the Rancher server container. See the following pages to enable it for your installation method:

- [Docker Install]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/advanced/#api-audit-log)
- [Kubernetes Install]({{<baseurl>}}/rancher/v2.6/en/installation/install-rancher-on-k8s/chart-options/#api-audit-log)
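For a Docker install, the variables are passed to the container with `-e` flags. A minimal sketch, assuming a hypothetical host directory `/var/log/rancher/auditlog` for the mounted log path:

```shell
# Enable metadata-level auditing and mount the audit log directory to a
# hypothetical host path so it survives container restarts
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e AUDIT_LEVEL=1 \
  -e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
  -v /var/log/rancher/auditlog:/var/log/auditlog \
  --privileged \
  rancher/rancher:latest
```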

## API Audit Log Options

The following environment variables define what the audit log records and what data it includes:

| Parameter | Description |
| --- | --- |
| <a id="audit-level"></a>`AUDIT_LEVEL` | `0` - Disable audit log (default setting).<br/>`1` - Log event metadata.<br/>`2` - Log event metadata and request body.<br/>`3` - Log event metadata, request body, and response body. Each log transaction for a request/response pair uses the same `auditID` value.<br/><br/>See [Audit Log Levels](#audit-log-levels) for a table that displays what each setting logs. |
| `AUDIT_LOG_PATH` | Log path for the Rancher server API. The default path is `/var/log/auditlog/rancher-api-audit.log`. You can mount the log directory to the host. <br/><br/>Usage example: `AUDIT_LOG_PATH=/my/custom/path/` |
| `AUDIT_LOG_MAXAGE` | Defines the maximum number of days to retain old audit log files. Default is 10 days. |
| `AUDIT_LOG_MAXBACKUP` | Defines the maximum number of audit log files to retain. Default is 10. |
| `AUDIT_LOG_MAXSIZE` | Defines the maximum size in megabytes of the audit log file before it is rotated. Default size is 100M. |

<br/>

### Audit Log Levels

The following table displays which parts of API transactions are logged for each [`AUDIT_LEVEL`](#audit-level) setting.

| `AUDIT_LEVEL` Setting | Request Metadata | Request Body | Response Metadata | Response Body |
| --------------------- | ---------------- | ------------ | ----------------- | ------------- |
| `0` | | | | |
| `1` | ✓ | | | |
| `2` | ✓ | ✓ | | |
| `3` | ✓ | ✓ | ✓ | ✓ |
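At level `3`, the request and response entries for one API call share an `auditID`, so the two halves of a transaction can be correlated with ordinary text tools. A sketch using `grep` on two hypothetical log lines:

```shell
# Two hypothetical audit entries sharing one auditID (request + response),
# plus an unrelated entry, written one JSON object per line
cat > /tmp/sample-audit.log <<'EOF'
{"auditID":"a886fd9f-5d6b-4ae3-9a10-5bff8f3d68af","stage":"RequestReceived","verb":"PUT"}
{"auditID":"a886fd9f-5d6b-4ae3-9a10-5bff8f3d68af","stage":"ResponseComplete","responseStatus":"200"}
{"auditID":"30022177-9e2e-43d1-b0d0-06ef9d3db183","stage":"RequestReceived","verb":"GET"}
EOF
# Pull out both halves of one transaction by its auditID
grep 'a886fd9f-5d6b-4ae3-9a10-5bff8f3d68af' /tmp/sample-audit.log
```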

## Viewing API Audit Logs

### Docker Install

Share the `AUDIT_LOG_PATH` directory (default: `/var/log/auditlog`) with the host system. The log can be parsed by standard CLI tools or forwarded to a log collection tool such as Fluentd, Filebeat, or Logstash.
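Because each entry is a single JSON object per line, even POSIX tools can extract fields from it. A sketch using `sed` on a hypothetical level-1 entry (`jq` is the nicer tool where available):

```shell
# A hypothetical level-1 audit entry, one JSON object per line
sample='{"auditID":"30022177-9e2e-43d1-b0d0-06ef9d3db183","requestURI":"/v3/schemas","verb":"GET"}'
# Extract the verb field by matching the quoted value after "verb":
verb=$(printf '%s\n' "$sample" | sed -n 's/.*"verb":"\([^"]*\)".*/\1/p')
echo "$verb"   # GET
```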

### Kubernetes Install

Enabling the API audit log with the Helm chart install creates a `rancher-audit-log` sidecar container in the Rancher pod. This container streams the log to standard output (stdout). You can view the log as you would any container log.

The `rancher-audit-log` container is part of the `rancher` pod in the `cattle-system` namespace.

#### CLI

```bash
kubectl -n cattle-system logs -f rancher-84d886bdbb-s4s69 rancher-audit-log
```
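Pod names are generated, so rather than hard-coding one as above, you can look it up first. A sketch, assuming the chart's default `app=rancher` label is in place:

```shell
# Look up the first Rancher pod's name (assumes the default app=rancher label),
# then follow its audit-log sidecar
POD=$(kubectl -n cattle-system get pods -l app=rancher -o jsonpath='{.items[0].metadata.name}')
kubectl -n cattle-system logs -f "$POD" rancher-audit-log
```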

#### Shipping the Audit Log

You can enable Rancher's built-in log collection and shipping for the cluster to ship the audit log, along with other services' logs, to a supported collection endpoint. See [Rancher Tools - Logging]({{<baseurl>}}/rancher/v2.6/en/logging) for details.

## Audit Log Samples

After you enable auditing, Rancher logs each API request and response as JSON. Each of the following code samples provides an example of how to identify each API transaction.

### Metadata Level

If you set `AUDIT_LEVEL` to `1`, Rancher logs the metadata header for every API request, but not the body. The header provides basic information about the API transaction, such as the transaction's ID, who initiated it, and when it occurred.

```json
{
  "auditID": "30022177-9e2e-43d1-b0d0-06ef9d3db183",
  "requestURI": "/v3/schemas",
  "sourceIPs": ["::1"],
  "user": {
    "name": "user-f4tt2",
    "group": ["system:authenticated"]
  },
  "verb": "GET",
  "stage": "RequestReceived",
  "stageTimestamp": "2018-07-20 10:22:43 +0800"
}
```

### Metadata and Request Body Level

If you set `AUDIT_LEVEL` to `2`, Rancher logs the metadata header and body for every API request.

The code sample below depicts an API request, with both its metadata header and body.

```json
{
  "auditID": "ef1d249e-bfac-4fd0-a61f-cbdcad53b9bb",
  "requestURI": "/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
  "sourceIPs": ["::1"],
  "user": {
    "name": "user-f4tt2",
    "group": ["system:authenticated"]
  },
  "verb": "PUT",
  "stage": "RequestReceived",
  "stageTimestamp": "2018-07-20 10:28:08 +0800",
  "requestBody": {
    "hostIPC": false,
    "hostNetwork": false,
    "hostPID": false,
    "paused": false,
    "annotations": {},
    "baseType": "workload",
    "containers": [
      {
        "allowPrivilegeEscalation": false,
        "image": "nginx",
        "imagePullPolicy": "Always",
        "initContainer": false,
        "name": "nginx",
        "ports": [
          {
            "containerPort": 80,
            "dnsName": "nginx-nodeport",
            "kind": "NodePort",
            "name": "80tcp01",
            "protocol": "TCP",
            "sourcePort": 0,
            "type": "/v3/project/schemas/containerPort"
          }
        ],
        "privileged": false,
        "readOnly": false,
        "resources": {
          "type": "/v3/project/schemas/resourceRequirements",
          "requests": {},
          "limits": {}
        },
        "restartCount": 0,
        "runAsNonRoot": false,
        "stdin": true,
        "stdinOnce": false,
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "tty": true,
        "type": "/v3/project/schemas/container",
        "environmentFrom": [],
        "capAdd": [],
        "capDrop": [],
        "livenessProbe": null,
        "volumeMounts": []
      }
    ],
    "created": "2018-07-18T07:34:16Z",
    "createdTS": 1531899256000,
    "creatorId": null,
    "deploymentConfig": {
      "maxSurge": 1,
      "maxUnavailable": 0,
      "minReadySeconds": 0,
      "progressDeadlineSeconds": 600,
      "revisionHistoryLimit": 10,
      "strategy": "RollingUpdate"
    },
    "deploymentStatus": {
      "availableReplicas": 1,
      "conditions": [
        {
          "lastTransitionTime": "2018-07-18T07:34:38Z",
          "lastTransitionTimeTS": 1531899278000,
          "lastUpdateTime": "2018-07-18T07:34:38Z",
          "lastUpdateTimeTS": 1531899278000,
          "message": "Deployment has minimum availability.",
          "reason": "MinimumReplicasAvailable",
          "status": "True",
          "type": "Available"
        },
        {
          "lastTransitionTime": "2018-07-18T07:34:16Z",
          "lastTransitionTimeTS": 1531899256000,
          "lastUpdateTime": "2018-07-18T07:34:38Z",
          "lastUpdateTimeTS": 1531899278000,
          "message": "ReplicaSet \"nginx-64d85666f9\" has successfully progressed.",
          "reason": "NewReplicaSetAvailable",
          "status": "True",
          "type": "Progressing"
        }
      ],
      "observedGeneration": 2,
      "readyReplicas": 1,
      "replicas": 1,
      "type": "/v3/project/schemas/deploymentStatus",
      "unavailableReplicas": 0,
      "updatedReplicas": 1
    },
    "dnsPolicy": "ClusterFirst",
    "id": "deployment:default:nginx",
    "labels": {
      "workload.user.cattle.io/workloadselector": "deployment-default-nginx"
    },
    "name": "nginx",
    "namespaceId": "default",
    "projectId": "c-bcz5t:p-fdr4s",
    "publicEndpoints": [
      {
        "addresses": ["10.64.3.58"],
        "allNodes": true,
        "ingressId": null,
        "nodeId": null,
        "podId": null,
        "port": 30917,
        "protocol": "TCP",
        "serviceId": "default:nginx-nodeport",
        "type": "publicEndpoint"
      }
    ],
    "restartPolicy": "Always",
    "scale": 1,
    "schedulerName": "default-scheduler",
    "selector": {
      "matchLabels": {
        "workload.user.cattle.io/workloadselector": "deployment-default-nginx"
      },
      "type": "/v3/project/schemas/labelSelector"
    },
    "state": "active",
    "terminationGracePeriodSeconds": 30,
    "transitioning": "no",
    "transitioningMessage": "",
    "type": "deployment",
    "uuid": "f998037d-8a5c-11e8-a4cf-0245a7ebb0fd",
    "workloadAnnotations": {
      "deployment.kubernetes.io/revision": "1",
      "field.cattle.io/creatorId": "user-f4tt2"
    },
    "workloadLabels": {
      "workload.user.cattle.io/workloadselector": "deployment-default-nginx"
    },
    "scheduling": {
      "node": {}
    },
    "description": "my description",
    "volumes": []
  }
}
```

### Metadata, Request Body, and Response Body Level

If you set `AUDIT_LEVEL` to `3`, Rancher logs:

- The metadata header and body for every API request.
- The metadata header and body for every API response.

#### Request

The code sample below depicts an API request, with both its metadata header and body.

```json
{
  "auditID": "a886fd9f-5d6b-4ae3-9a10-5bff8f3d68af",
  "requestURI": "/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
  "sourceIPs": ["::1"],
  "user": {
    "name": "user-f4tt2",
    "group": ["system:authenticated"]
  },
  "verb": "PUT",
  "stage": "RequestReceived",
  "stageTimestamp": "2018-07-20 10:33:06 +0800",
  "requestBody": {
    "hostIPC": false,
    "hostNetwork": false,
    "hostPID": false,
    "paused": false,
    "annotations": {},
    "baseType": "workload",
    "containers": [
      {
        "allowPrivilegeEscalation": false,
        "image": "nginx",
        "imagePullPolicy": "Always",
        "initContainer": false,
        "name": "nginx",
        "ports": [
          {
            "containerPort": 80,
            "dnsName": "nginx-nodeport",
            "kind": "NodePort",
            "name": "80tcp01",
            "protocol": "TCP",
            "sourcePort": 0,
            "type": "/v3/project/schemas/containerPort"
          }
        ],
        "privileged": false,
        "readOnly": false,
        "resources": {
          "type": "/v3/project/schemas/resourceRequirements",
          "requests": {},
          "limits": {}
        },
        "restartCount": 0,
        "runAsNonRoot": false,
        "stdin": true,
        "stdinOnce": false,
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "tty": true,
        "type": "/v3/project/schemas/container",
        "environmentFrom": [],
        "capAdd": [],
        "capDrop": [],
        "livenessProbe": null,
        "volumeMounts": []
      }
    ],
    "created": "2018-07-18T07:34:16Z",
    "createdTS": 1531899256000,
    "creatorId": null,
    "deploymentConfig": {
      "maxSurge": 1,
      "maxUnavailable": 0,
      "minReadySeconds": 0,
      "progressDeadlineSeconds": 600,
      "revisionHistoryLimit": 10,
      "strategy": "RollingUpdate"
    },
    "deploymentStatus": {
      "availableReplicas": 1,
      "conditions": [
        {
          "lastTransitionTime": "2018-07-18T07:34:38Z",
          "lastTransitionTimeTS": 1531899278000,
          "lastUpdateTime": "2018-07-18T07:34:38Z",
          "lastUpdateTimeTS": 1531899278000,
          "message": "Deployment has minimum availability.",
          "reason": "MinimumReplicasAvailable",
          "status": "True",
          "type": "Available"
        },
        {
          "lastTransitionTime": "2018-07-18T07:34:16Z",
          "lastTransitionTimeTS": 1531899256000,
          "lastUpdateTime": "2018-07-18T07:34:38Z",
          "lastUpdateTimeTS": 1531899278000,
          "message": "ReplicaSet \"nginx-64d85666f9\" has successfully progressed.",
          "reason": "NewReplicaSetAvailable",
          "status": "True",
          "type": "Progressing"
        }
      ],
      "observedGeneration": 2,
      "readyReplicas": 1,
      "replicas": 1,
      "type": "/v3/project/schemas/deploymentStatus",
      "unavailableReplicas": 0,
      "updatedReplicas": 1
    },
    "dnsPolicy": "ClusterFirst",
    "id": "deployment:default:nginx",
    "labels": {
      "workload.user.cattle.io/workloadselector": "deployment-default-nginx"
    },
    "name": "nginx",
    "namespaceId": "default",
    "projectId": "c-bcz5t:p-fdr4s",
    "publicEndpoints": [
      {
        "addresses": ["10.64.3.58"],
        "allNodes": true,
        "ingressId": null,
        "nodeId": null,
        "podId": null,
        "port": 30917,
        "protocol": "TCP",
        "serviceId": "default:nginx-nodeport",
        "type": "publicEndpoint"
      }
    ],
    "restartPolicy": "Always",
    "scale": 1,
    "schedulerName": "default-scheduler",
    "selector": {
      "matchLabels": {
        "workload.user.cattle.io/workloadselector": "deployment-default-nginx"
      },
      "type": "/v3/project/schemas/labelSelector"
    },
    "state": "active",
    "terminationGracePeriodSeconds": 30,
    "transitioning": "no",
    "transitioningMessage": "",
    "type": "deployment",
    "uuid": "f998037d-8a5c-11e8-a4cf-0245a7ebb0fd",
    "workloadAnnotations": {
      "deployment.kubernetes.io/revision": "1",
      "field.cattle.io/creatorId": "user-f4tt2"
    },
    "workloadLabels": {
      "workload.user.cattle.io/workloadselector": "deployment-default-nginx"
    },
    "scheduling": {
      "node": {}
    },
    "description": "my description",
    "volumes": []
  }
}
```

#### Response

The code sample below depicts an API response, with both its metadata header and body.

```json
{
  "auditID": "a886fd9f-5d6b-4ae3-9a10-5bff8f3d68af",
  "responseStatus": "200",
  "stage": "ResponseComplete",
  "stageTimestamp": "2018-07-20 10:33:06 +0800",
  "responseBody": {
    "actionLinks": {
      "pause": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx?action=pause",
      "resume": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx?action=resume",
      "rollback": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx?action=rollback"
    },
    "annotations": {},
    "baseType": "workload",
    "containers": [
      {
        "allowPrivilegeEscalation": false,
        "image": "nginx",
        "imagePullPolicy": "Always",
        "initContainer": false,
        "name": "nginx",
        "ports": [
          {
            "containerPort": 80,
            "dnsName": "nginx-nodeport",
            "kind": "NodePort",
            "name": "80tcp01",
            "protocol": "TCP",
            "sourcePort": 0,
            "type": "/v3/project/schemas/containerPort"
          }
        ],
        "privileged": false,
        "readOnly": false,
        "resources": {
          "type": "/v3/project/schemas/resourceRequirements"
        },
        "restartCount": 0,
        "runAsNonRoot": false,
        "stdin": true,
        "stdinOnce": false,
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "tty": true,
        "type": "/v3/project/schemas/container"
      }
    ],
    "created": "2018-07-18T07:34:16Z",
    "createdTS": 1531899256000,
    "creatorId": null,
    "deploymentConfig": {
      "maxSurge": 1,
      "maxUnavailable": 0,
      "minReadySeconds": 0,
      "progressDeadlineSeconds": 600,
      "revisionHistoryLimit": 10,
      "strategy": "RollingUpdate"
    },
    "deploymentStatus": {
      "availableReplicas": 1,
      "conditions": [
        {
          "lastTransitionTime": "2018-07-18T07:34:38Z",
          "lastTransitionTimeTS": 1531899278000,
          "lastUpdateTime": "2018-07-18T07:34:38Z",
          "lastUpdateTimeTS": 1531899278000,
          "message": "Deployment has minimum availability.",
          "reason": "MinimumReplicasAvailable",
          "status": "True",
          "type": "Available"
        },
        {
          "lastTransitionTime": "2018-07-18T07:34:16Z",
          "lastTransitionTimeTS": 1531899256000,
          "lastUpdateTime": "2018-07-18T07:34:38Z",
          "lastUpdateTimeTS": 1531899278000,
          "message": "ReplicaSet \"nginx-64d85666f9\" has successfully progressed.",
          "reason": "NewReplicaSetAvailable",
          "status": "True",
          "type": "Progressing"
        }
      ],
      "observedGeneration": 2,
      "readyReplicas": 1,
      "replicas": 1,
      "type": "/v3/project/schemas/deploymentStatus",
      "unavailableReplicas": 0,
      "updatedReplicas": 1
    },
    "dnsPolicy": "ClusterFirst",
    "hostIPC": false,
    "hostNetwork": false,
    "hostPID": false,
    "id": "deployment:default:nginx",
    "labels": {
      "workload.user.cattle.io/workloadselector": "deployment-default-nginx"
    },
    "links": {
      "remove": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
      "revisions": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx/revisions",
      "self": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
      "update": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx",
      "yaml": "https://localhost:8443/v3/project/c-bcz5t:p-fdr4s/workloads/deployment:default:nginx/yaml"
    },
    "name": "nginx",
    "namespaceId": "default",
    "paused": false,
    "projectId": "c-bcz5t:p-fdr4s",
    "publicEndpoints": [
      {
        "addresses": ["10.64.3.58"],
        "allNodes": true,
        "ingressId": null,
        "nodeId": null,
        "podId": null,
        "port": 30917,
        "protocol": "TCP",
        "serviceId": "default:nginx-nodeport"
      }
    ],
    "restartPolicy": "Always",
    "scale": 1,
    "schedulerName": "default-scheduler",
    "selector": {
      "matchLabels": {
        "workload.user.cattle.io/workloadselector": "deployment-default-nginx"
      },
      "type": "/v3/project/schemas/labelSelector"
    },
    "state": "active",
    "terminationGracePeriodSeconds": 30,
    "transitioning": "no",
    "transitioningMessage": "",
    "type": "deployment",
    "uuid": "f998037d-8a5c-11e8-a4cf-0245a7ebb0fd",
    "workloadAnnotations": {
      "deployment.kubernetes.io/revision": "1",
      "field.cattle.io/creatorId": "user-f4tt2"
    },
    "workloadLabels": {
      "workload.user.cattle.io/workloadselector": "deployment-default-nginx"
    }
  }
}
```

---
title: "Running on ARM64 (Experimental)"
weight: 3
---

> **Important:**
>
> Running on an ARM64 platform is currently an experimental feature and is not yet officially supported in Rancher. Therefore, we do not recommend using ARM64 based nodes in a production environment.

The following options are available when using an ARM64 platform:

- Running Rancher on ARM64 based node(s)
  - Only for Docker install. Note that the following installation command replaces the examples found in the [Docker Install]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker) link:

  ```
  # In the last line `rancher/rancher:vX.Y.Z`, be certain to replace "X.Y.Z" with a released version for which ARM64 builds exist.
  # For example, if your matching version is v2.5.8, you would fill in this line with `rancher/rancher:v2.5.8`.
  docker run -d --restart=unless-stopped \
    -p 80:80 -p 443:443 \
    --privileged \
    rancher/rancher:vX.Y.Z
  ```

  > **Note:** To check whether a specific released version is compatible with the ARM64 architecture, you can navigate to your
  > version's release notes in the following two ways:
  >
  > - Manually find your version using https://github.com/rancher/rancher/releases.
  > - Go directly to your version using the tag and the specific version number. If you plan to use v2.5.8, for example, you may
  >   navigate to https://github.com/rancher/rancher/releases/tag/v2.5.8.
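Before installing, it is worth confirming that the node itself really is ARM64. A quick check with `uname`:

```shell
# Print the node's CPU architecture; on ARM64 hosts this is typically
# "aarch64" (or "arm64" on some distributions)
arch="$(uname -m)"
echo "$arch"
```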

- Creating a custom cluster and adding ARM64 based node(s)
  - Kubernetes cluster version must be 1.12 or higher
  - CNI Network Provider must be [Flannel]({{<baseurl>}}/rancher/v2.6/en/faq/networking/cni-providers/#flannel)
- Importing clusters that contain ARM64 based nodes
  - Kubernetes cluster version must be 1.12 or higher

See [Cluster Options]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/rke-clusters/options/) for how to configure the cluster options.

The following features are not tested:

- Monitoring, alerts, notifiers, pipelines and logging
- Launching apps from the catalog
---
title: Tuning etcd for Large Installations
weight: 2
---

When running larger Rancher installations, with 15 or more clusters, it is recommended to increase the etcd keyspace from its default of 2GB. The maximum setting is 8GB, and the host should have enough RAM to keep the entire dataset in memory. When increasing this value, you should also increase the size of the host. The keyspace size can also be adjusted in smaller installations if you anticipate a high rate of change of pods during the garbage collection interval.

The etcd data set is automatically cleaned up by Kubernetes on a five-minute interval. In some situations, e.g. deployment thrashing, enough events can be written to etcd and deleted before garbage collection occurs that the keyspace fills up. If you see `mvcc: database space exceeded` errors in the etcd logs or Kubernetes API server logs, you should consider increasing the keyspace size. You can do this by setting the [quota-backend-bytes](https://etcd.io/docs/v3.4.0/op-guide/maintenance/#space-quota) setting on the etcd servers.

### Example: This snippet of the RKE cluster.yml file increases the keyspace size to 5GB

```yaml
# RKE cluster.yml
---
services:
  etcd:
    extra_args:
      quota-backend-bytes: 5368709120
```
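The `quota-backend-bytes` value is a raw byte count; the 5GB figure above is 5 × 1024³. Shell arithmetic is a quick way to compute such values:

```shell
# 5 GiB expressed in bytes, as used for quota-backend-bytes above
echo $((5 * 1024 * 1024 * 1024))   # 5368709120
```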

## Scaling etcd disk performance

You can follow the recommendations from [the etcd docs](https://etcd.io/docs/v3.4.0/tuning/#disk) on how to tune the disk priority on the host.

Additionally, to reduce IO contention on the disks used by etcd, you can use a dedicated device for the data and wal directories. Based on etcd best practices, mirrored RAID configurations are unnecessary because etcd replicates data between the nodes in the cluster. You can use striped RAID configurations to increase available IOPS.

To implement this solution in an RKE cluster, the `/var/lib/etcd/data` and `/var/lib/etcd/wal` directories will need to have disks mounted and formatted on the underlying host. In the `extra_args` directive of the `etcd` service, you must include the `wal_dir` directory. Without specifying the `wal_dir`, the etcd process will try to manipulate the underlying `wal` mount with insufficient permissions.

```yaml
# RKE cluster.yml
---
services:
  etcd:
    extra_args:
      data-dir: '/var/lib/rancher/etcd/data/'
      wal-dir: '/var/lib/rancher/etcd/wal/wal_dir'
    extra_binds:
      - '/var/lib/etcd/data:/var/lib/rancher/etcd/data'
      - '/var/lib/etcd/wal:/var/lib/rancher/etcd/wal'
```
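On each etcd host, the two directories must exist and be backed by the dedicated devices before the cluster is provisioned. A sketch of the host-side preparation, assuming two hypothetical block devices `/dev/sdb` and `/dev/sdc` and an ext4 filesystem; adjust to your hardware and filesystem of choice:

```shell
# Hypothetical devices; run as root on each etcd host before provisioning
mkdir -p /var/lib/etcd/data /var/lib/etcd/wal
mkfs.ext4 /dev/sdb && mount /dev/sdb /var/lib/etcd/data
mkfs.ext4 /dev/sdc && mount /dev/sdc /var/lib/etcd/wal
```

Add matching entries to `/etc/fstab` so the mounts survive a reboot.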

---
title: Opening Ports with firewalld
weight: 1
---

> We recommend disabling firewalld. For Kubernetes 1.19.x and higher, firewalld must be turned off.

Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm.

For example, one Oracle Linux image in AWS has REJECT rules that stop Helm from communicating with Tiller:

```
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:ssh
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
```

You can check the default firewall rules with this command:

```
sudo iptables --list
```

This section describes how to use `firewalld` to apply the [firewall port rules]({{<baseurl>}}/rancher/v2.6/en/installation/requirements/ports) for nodes in a high-availability Rancher server cluster.

# Prerequisite

Install v7.x or later of `firewalld`:

```
yum install firewalld
systemctl start firewalld
systemctl enable firewalld
```

# Applying Firewall Port Rules

In the Rancher high-availability installation instructions, the Rancher server is set up on three nodes that each have all three Kubernetes roles: etcd, controlplane, and worker. If your Rancher server nodes have all three roles, run the following commands on each node:

```
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=2379/tcp
firewall-cmd --permanent --add-port=2380/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=9099/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10254/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=30000-32767/udp
```
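The same rules can be expressed as a loop over a port list, which is easier to audit and reuse. A sketch for an all-roles node; on machines without firewalld it only prints what would run:

```shell
# The thirteen all-roles port rules from above, expressed as a loop
ports="22/tcp 80/tcp 443/tcp 2376/tcp 2379/tcp 2380/tcp 6443/tcp 8472/udp 9099/tcp 10250/tcp 10254/tcp 30000-32767/tcp 30000-32767/udp"
for p in $ports; do
  if command -v firewall-cmd >/dev/null 2>&1; then
    # On a real node, add the rule permanently
    firewall-cmd --permanent --add-port="$p"
  else
    # Elsewhere, just show the command that would run
    echo "firewall-cmd --permanent --add-port=$p"
  fi
done
```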

If your Rancher server nodes have separate roles, use the following commands based on the role of the node:

```
# For etcd nodes, run the following commands:
firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=2379/tcp
firewall-cmd --permanent --add-port=2380/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=9099/tcp
firewall-cmd --permanent --add-port=10250/tcp

# For control plane nodes, run the following commands:
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=9099/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10254/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=30000-32767/udp

# For worker nodes, run the following commands:
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=9099/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10254/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=30000-32767/udp
```

After the `firewall-cmd` commands have been run on a node, use the following command to enable the firewall rules:

```
firewall-cmd --reload
```

**Result:** The firewall is updated so that Helm can communicate with the Rancher server nodes.

---
title: Docker Install with TLS Termination at Layer-7 NGINX Load Balancer
weight: 252
---

For development and testing environments that have a special requirement to terminate TLS/SSL at a load balancer instead of your Rancher server container, deploy Rancher and configure a load balancer to work in conjunction with it.

A layer-7 load balancer can be beneficial if you want to centralize your TLS termination in your infrastructure. Layer-7 load balancing also allows your load balancer to make decisions based on HTTP attributes, such as cookies, that a layer-4 load balancer cannot inspect.

This install procedure walks you through deploying Rancher in a single container, and then provides a sample configuration for a layer-7 NGINX load balancer.

## Requirements for OS, Docker, Hardware, and Networking

Make sure that your node fulfills the general [installation requirements.]({{<baseurl>}}/rancher/v2.6/en/installation/requirements/)

## Installation Outline

<!-- TOC -->

- [1. Provision Linux Host](#1-provision-linux-host)
- [2. Choose an SSL Option and Install Rancher](#2-choose-an-ssl-option-and-install-rancher)
- [3. Configure Load Balancer](#3-configure-load-balancer)

<!-- /TOC -->

## 1. Provision Linux Host

Provision a single Linux host according to our [Requirements]({{<baseurl>}}/rancher/v2.6/en/installation/requirements) to launch your Rancher server.

## 2. Choose an SSL Option and Install Rancher

For security purposes, SSL (Secure Sockets Layer) is required when using Rancher. SSL secures all Rancher network communication, such as when you log in or interact with a cluster.
|
||||
|
||||
> **Do you want to..**.
|
||||
>
|
||||
> - Complete an Air Gap Installation?
|
||||
> - Record all transactions with the Rancher API?
|
||||
>
|
||||
> See [Advanced Options](#advanced-options) below before continuing.
|
||||
|
||||
Choose from the following options:
|
||||
|
||||
<details id="option-a">
<summary>Option A-Bring Your Own Certificate: Self-Signed</summary>

If you elect to use a self-signed certificate to encrypt communication, you must install the certificate on your load balancer (which you'll do later) and in your Rancher container. Run the Docker command to deploy Rancher, pointing it toward your certificate.

> **Prerequisite:** Create a self-signed certificate.
>
> - The certificate files must be in PEM format.

**To Install Rancher Using a Self-Signed Cert:**

1. While running the Docker command to deploy Rancher, point Docker toward your CA certificate file.

   ```
   docker run -d --restart=unless-stopped \
     -p 80:80 -p 443:443 \
     -v /etc/your_certificate_directory/cacerts.pem:/etc/rancher/ssl/cacerts.pem \
     --privileged \
     rancher/rancher:latest
   ```

</details>
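If you don't already have a self-signed certificate, one way to create it is with OpenSSL. This is a minimal sketch, not the official procedure: the key size, validity period, file names, and subject CN are example values you should adjust for your environment.

```shell
# Generate a self-signed certificate and private key in PEM format.
# Replace the CN with the hostname you will use to reach Rancher.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cacerts.pem \
  -subj "/CN=rancher.example.com"
```

In this self-signed case the certificate acts as its own CA, so the generated `cacerts.pem` is the file you mount into the container; the certificate and key also need to be installed on the load balancer.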
<details id="option-b">
<summary>Option B-Bring Your Own Certificate: Signed by Recognized CA</summary>

If your cluster is public facing, it's best to use a certificate signed by a recognized CA.

> **Prerequisites:**
>
> - The certificate files must be in PEM format.

**To Install Rancher Using a Cert Signed by a Recognized CA:**

If you use a certificate signed by a recognized CA, installing your certificate in the Rancher container isn't necessary. However, you must make sure that no default CA certificate is generated and stored. You can do this by passing the `--no-cacerts` parameter to the container.

1. Enter the following command.

   ```
   docker run -d --restart=unless-stopped \
     -p 80:80 -p 443:443 \
     --privileged \
     rancher/rancher:latest --no-cacerts
   ```

</details>
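Before configuring the load balancer, you may want to sanity-check the certificate you received from the CA. A hedged sketch, assuming your server certificate is saved at the hypothetical path `fullchain.pem`:

```shell
# Print the subject, issuer, and validity window of a PEM certificate.
# fullchain.pem is a placeholder path for your CA-signed certificate.
openssl x509 -in fullchain.pem -noout -subject -issuer -dates
```

If the issuer matches the subject, the certificate is self-signed rather than CA-signed, and Option A applies instead.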
## 3. Configure Load Balancer

When using a load balancer in front of your Rancher container, there's no need for the container to redirect port communication from port 80 to port 443. By passing the `X-Forwarded-Proto: https` header, this redirect is disabled.

The load balancer or proxy has to be configured to support the following:

- **WebSocket** connections
- **SPDY** / **HTTP/2** protocols
- Passing / setting the following headers:

| Header | Value | Description |
|--------|-------|-------------|
| `Host` | Hostname used to reach Rancher. | Identifies the server requested by the client. |
| `X-Forwarded-Proto` | `https` | Identifies the protocol that the client used to connect to the load balancer or proxy.<br /><br/>**Note:** If this header is present, `rancher/rancher` does not redirect HTTP to HTTPS. |
| `X-Forwarded-Port` | Port used to reach Rancher. | Identifies the port that the client used to connect to the load balancer or proxy. |
| `X-Forwarded-For` | IP of the client connection. | Identifies the originating IP address of the client. |
### Example NGINX configuration

This NGINX configuration is tested on NGINX 1.14.

> **Note:** This NGINX configuration is only an example and may not suit your environment. For complete documentation, see [NGINX Load Balancing - HTTP Load Balancing](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/).

- Replace `rancher-server` with the IP address or hostname of the node running the Rancher container.
- Replace both occurrences of `FQDN` with the DNS name for Rancher.
- Replace `/certs/fullchain.pem` and `/certs/privkey.pem` with the locations of the server certificate and the server certificate key, respectively.
```
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
    upstream rancher {
        server rancher-server:80;
    }

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        listen 443 ssl http2;
        server_name FQDN;
        ssl_certificate /certs/fullchain.pem;
        ssl_certificate_key /certs/privkey.pem;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rancher;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            # This allows the execute shell window to remain open for up to 15 minutes.
            # Without this parameter, the connection closes after the default of 1 minute.
            proxy_read_timeout 900s;
            proxy_buffering off;
        }
    }

    server {
        listen 80;
        server_name FQDN;
        return 301 https://$server_name$request_uri;
    }
}
```

<br/>
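After editing the configuration, you can check it for syntax errors before sending traffic through it. A small sketch, assuming the example above was saved as the active NGINX configuration:

```shell
# Validate the configuration, then apply it without dropping connections.
# nginx -t tests the config file; nginx -s reload re-reads it on success.
nginx -t && nginx -s reload
```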
## What's Next?

- **Recommended:** Review [Single Node Backup and Restore]({{<baseurl>}}/rancher/v2.6/en/backups/docker-installs/). Although you don't have any data you need to back up right now, we recommend creating backups after regular Rancher use.
- Create a Kubernetes cluster: [Provisioning Kubernetes Clusters]({{<baseurl>}}/rancher/v2.6/en/cluster-provisioning/).

<br/>

## FAQ and Troubleshooting

For help troubleshooting certificates, see [this section.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/troubleshooting)
## Advanced Options

### API Auditing

If you want to record all transactions with the Rancher API, enable the [API Auditing]({{<baseurl>}}/rancher/v2.6/en/installation/resources/advanced/api-audit-log) feature by adding the flags below to your install command.

```
-e AUDIT_LEVEL=1 \
-e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
-e AUDIT_LOG_MAXAGE=20 \
-e AUDIT_LOG_MAXBACKUP=20 \
-e AUDIT_LOG_MAXSIZE=100 \
```
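For example, a complete install command with auditing enabled might look like the following. This is a sketch rather than the canonical command: the host path `/var/log/rancher/auditlog` is an example location chosen here for persisting the audit log outside the container.

```shell
# Example: single-node install with API auditing enabled.
# The audit log is written inside the container at AUDIT_LOG_PATH,
# so a host volume is mounted at that directory to keep the log.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -e AUDIT_LEVEL=1 \
  -e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
  -e AUDIT_LOG_MAXAGE=20 \
  -e AUDIT_LOG_MAXBACKUP=20 \
  -e AUDIT_LOG_MAXSIZE=100 \
  -v /var/log/rancher/auditlog:/var/log/auditlog \
  --privileged \
  rancher/rancher:latest
```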
### Air Gap

If you are visiting this page to complete an [Air Gap Installation]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/air-gap), you must prepend your private registry URL to the image name when running the installation command in the option that you choose. Replace `<REGISTRY.DOMAIN.COM:PORT>` with your private registry URL in front of `rancher/rancher:latest`.

**Example:**

```
<REGISTRY.DOMAIN.COM:PORT>/rancher/rancher:latest
```
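For instance, assuming a hypothetical private registry at `registry.example.com:5000`, an install command in the style of Option B would look like this:

```shell
# registry.example.com:5000 is a placeholder for your private registry URL.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  registry.example.com:5000/rancher/rancher:latest --no-cacerts
```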
### Persistent Data

Rancher uses etcd as its datastore. When Rancher is installed with Docker, the embedded etcd instance is used. The persistent data is stored at the following path in the container: `/var/lib/rancher`.

You can bind mount a host volume to this location to preserve data on the host it is running on:

```
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  --privileged \
  rancher/rancher:latest
```

As of Rancher v2.5, privileged access is [required.]({{<baseurl>}}/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/#privileged-access-for-rancher-v2-5)