From 3b827f6faed717efea39a7f9d8405374769d35c7 Mon Sep 17 00:00:00 2001 From: Catherine Luse Date: Wed, 26 May 2021 10:34:23 -0700 Subject: [PATCH] Change formatting in logging docs --- content/rancher/v2.5/en/logging/_index.md | 6 +- .../v2.5/en/logging/architecture/_index.md | 18 +++--- .../custom-resource-config/flows/_index.md | 56 ++++++++----------- .../custom-resource-config/outputs/_index.md | 54 +++++++++--------- .../v2.5/en/logging/migrating/_index.md | 44 +++++++-------- .../rancher/v2.5/en/logging/rbac/_index.md | 8 +-- 6 files changed, 88 insertions(+), 98 deletions(-) diff --git a/content/rancher/v2.5/en/logging/_index.md b/content/rancher/v2.5/en/logging/_index.md index 230d7f1ae12..043f8d5d1d4 100644 --- a/content/rancher/v2.5/en/logging/_index.md +++ b/content/rancher/v2.5/en/logging/_index.md @@ -62,15 +62,15 @@ Rancher logging has two roles, `logging-admin` and `logging-view`. For more info # Configuring Logging Custom Resources -To manage Flows, ClusterFlows, Outputs, and ClusterOutputs, go to the **Cluster Explorer** in the Rancher UI. In the upper left corner, click **Cluster Explorer > Logging**. +To manage `Flows`, `ClusterFlows`, `Outputs`, and `ClusterOutputs`, go to the **Cluster Explorer** in the Rancher UI. In the upper left corner, click **Cluster Explorer > Logging**.
### Flows and ClusterFlows -For help with configuring Flows and ClusterFlows, see [this page.](./custom-resource-config/flows) +For help with configuring `Flows` and `ClusterFlows`, see [this page.](./custom-resource-config/flows) ### Outputs and ClusterOutputs -For help with configuring Outputs and ClusterOutputs, see [this page.](./custom-resource-config/outputs) +For help with configuring `Outputs` and `ClusterOutputs`, see [this page.](./custom-resource-config/outputs) # Configuring the Logging Helm Chart diff --git a/content/rancher/v2.5/en/logging/architecture/_index.md b/content/rancher/v2.5/en/logging/architecture/_index.md index c6278e3aa43..7c397a4a82b 100644 --- a/content/rancher/v2.5/en/logging/architecture/_index.md +++ b/content/rancher/v2.5/en/logging/architecture/_index.md @@ -12,26 +12,26 @@ For more details about how the Banzai Cloud Logging operator works, see the [off The following changes were introduced to logging in Rancher v2.5: - The [Banzai Cloud Logging operator](https://banzaicloud.com/docs/one-eye/logging-operator/) now powers Rancher's logging solution in place of the former, in-house solution. -- [Fluent Bit](https://fluentbit.io/) is now used to aggregate the logs, and [Fluentd](https://www.fluentd.org/) is used for filtering the messages and routing them to the outputs. Previously, only Fluentd was used. +- [Fluent Bit](https://fluentbit.io/) is now used to aggregate the logs, and [Fluentd](https://www.fluentd.org/) is used for filtering the messages and routing them to the `Outputs`. Previously, only Fluentd was used. - Logging can be configured with a Kubernetes manifest, because logging now uses a Kubernetes operator with Custom Resource Definitions. - We now support filtering logs. -- We now support writing logs to multiple outputs. +- We now support writing logs to multiple `Outputs`. - We now always collect Control Plane and etcd logs. 
### How the Banzai Cloud Logging Operator Works The Logging operator automates the deployment and configuration of a Kubernetes logging pipeline. It deploys and configures a Fluent Bit DaemonSet on every node to collect container and application logs from the node file system. -Fluent Bit queries the Kubernetes API and enriches the logs with metadata about the pods, and transfers both the logs and the metadata to Fluentd. Fluentd receives, filters, and transfers logs to multiple outputs. +Fluent Bit queries the Kubernetes API and enriches the logs with metadata about the pods, and transfers both the logs and the metadata to Fluentd. Fluentd receives, filters, and transfers logs to multiple `Outputs`. -The following custom resources are used to define how logs are filtered and sent to their outputs: +The following custom resources are used to define how logs are filtered and sent to their `Outputs`: -- A Flow is a namespaced custom resource that uses filters and selectors to route log messages to the appropriate outputs. -- A ClusterFlow is used to route cluster-level log messages. -- An Output is a namespaced resource that defines where the log messages are sent. -- A ClusterOutput defines an output that is available from all Flows and ClusterFlows. +- A `Flow` is a namespaced custom resource that uses filters and selectors to route log messages to the appropriate `Outputs`. +- A `ClusterFlow` is used to route cluster-level log messages. +- An `Output` is a namespaced resource that defines where the log messages are sent. +- A `ClusterOutput` defines an `Output` that is available from all `Flows` and `ClusterFlows`. -Each Flow must reference an Output, and each ClusterFlow must reference a ClusterOutput. +Each `Flow` must reference an `Output`, and each `ClusterFlow` must reference a `ClusterOutput`. 
The following figure from the [Banzai documentation](https://banzaicloud.com/docs/one-eye/logging-operator/#architecture) shows the new logging architecture: diff --git a/content/rancher/v2.5/en/logging/custom-resource-config/flows/_index.md b/content/rancher/v2.5/en/logging/custom-resource-config/flows/_index.md index c78fa73db93..a2d9489b218 100644 --- a/content/rancher/v2.5/en/logging/custom-resource-config/flows/_index.md +++ b/content/rancher/v2.5/en/logging/custom-resource-config/flows/_index.md @@ -3,7 +3,7 @@ title: Flows and ClusterFlows weight: 1 --- -For the full details on configuring Flows and ClusterFlows, see the [Banzai Cloud Logging operator documentation.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/output/) +For the full details on configuring `Flows` and `ClusterFlows`, see the [Banzai Cloud Logging operator documentation.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/output/) - [Configuration](#configuration) - [YAML Example](#yaml-example) @@ -21,18 +21,18 @@ For the full details on configuring Flows and ClusterFlou # Changes in v2.5.8 -The Flows and ClusterFlows can now be configured by filling out forms in the Rancher UI. +The `Flows` and `ClusterFlows` can now be configured by filling out forms in the Rancher UI. # Flows -A Flow defines which logs to collect and filter and which output to send the logs to. +A `Flow` defines which logs to collect and filter and which `Output` to send the logs to. -The Flow is a namespaced resource, which means logs will only be collected from the namespace that the flow is deployed in. +The `Flow` is a namespaced resource, which means logs will only be collected from the namespace that the `Flow` is deployed in.
-For more details about the Flow custom resource, see [FlowSpec.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/flow_types/) +For more details about the `Flow` custom resource, see [FlowSpec.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/flow_types/) @@ -41,11 +41,9 @@ For more details about the Flow custom resource, see [FlowSpec.](https://banzaic Match statements are used to select which containers to pull logs from. -You can specify match statements to select or exclude logs according to Kubernetes labels, container and host names. +You can specify match statements to select or exclude logs according to Kubernetes labels, container and host names. Match statements are evaluated in the order they are defined and processed only until the first matching select or exclude rule applies. -Match statements are evaluated in the order they are defined and processed only until the first matching select or exclude rule applies. - -Matches can be configured by filling out the Flow or ClusterFlow forms in the Rancher UI. +Matches can be configured by filling out the `Flow` or `ClusterFlow` forms in the Rancher UI. For detailed examples on using the match statement, see the [official documentation on log routing.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/log-routing/) @@ -53,7 +51,7 @@ For detailed examples on using the match statement, see the [official documentat ### Filters -You can define one or more filters within a Flow. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records. The filters in the flow are applied in the order in the definition. +You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records. 
The filters in the `Flow` are applied in the order they are defined. For a list of filters supported by the Banzai Cloud Logging operator, see [this page.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/filters/) @@ -63,22 +61,20 @@ Filters need to be configured in YAML. ### Outputs -This Output will receive logs from the Flow. +This `Output` will receive logs from the `Flow`. Because the `Flow` is a namespaced resource, the `Output` must reside in the same namespace as the `Flow`. -Because the Flow is a namespaced resource, the Output must reside in same namespace as the Flow. - -Outputs can be referenced when filling out the Flow or ClusterFlow forms in the Rancher UI. +`Outputs` can be referenced when filling out the `Flow` or `ClusterFlow` forms in the Rancher UI. # ClusterFlows -Matches, filters and outputs are configured for ClusterFlows in the same way that they are configured for Flows. The key difference is that the ClusterFlow is scoped at the cluster level and can configure log collection across all namespaces. +Matches, filters and `Outputs` are configured for `ClusterFlows` in the same way that they are configured for `Flows`. The key difference is that the `ClusterFlow` is scoped at the cluster level and can configure log collection across all namespaces. -After ClusterFlow selects logs from all namespaces in the cluster, logs from the cluster will be collected and logged to the selected ClusterOutput. +After the `ClusterFlow` selects logs from all namespaces in the cluster, logs from the cluster will be collected and logged to the selected `ClusterOutput`. {{% /tab %}} -{{% tab "Rancher before v2.5.8" %}} +{{% tab "Rancher v2.5.0-v2.5.7" %}} - [Flows](#flows-2-5-0) - [Matches](#matches-2-5-0) @@ -91,13 +87,11 @@ After ClusterFlow selects logs from all namespaces in the cluster, logs from the # Flows -A Flow defines which logs to collect and filter and which output to send the logs to.
+A `Flow` defines which logs to collect and filter and which `Output` to send the logs to. The `Flow` is a namespaced resource, which means logs will only be collected from the namespace that the `Flow` is deployed in. -The Flow is a namespaced resource, which means logs will only be collected from the namespace that the flow is deployed in. +`Flows` need to be defined in YAML. -Flows need to be defined in YAML. - -For more details about the Flow custom resource, see [FlowSpec.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/flow_types/) +For more details about the `Flow` custom resource, see [FlowSpec.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/flow_types/) @@ -106,9 +100,7 @@ For more details about the Flow custom resource, see [FlowSpec.](https://banzaic Match statements are used to select which containers to pull logs from. -You can specify match statements to select or exclude logs according to Kubernetes labels, container and host names. - -Match statements are evaluated in the order they are defined and processed only until the first matching select or exclude rule applies. +You can specify match statements to select or exclude logs according to Kubernetes labels, container and host names. Match statements are evaluated in the order they are defined and processed only until the first matching select or exclude rule applies. For detailed examples on using the match statement, see the [official documentation on log routing.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/log-routing/) @@ -116,7 +108,7 @@ For detailed examples on using the match statement, see the [official documentat ### Filters -You can define one or more filters within a Flow. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records. The filters in the flow are applied in the order in the definition. 
+You can define one or more filters within a `Flow`. Filters can perform various actions on the logs, for example, add additional data, transform the logs, or parse values from the records. The filters in the `Flow` are applied in the order they are defined. For a list of filters supported by the Banzai Cloud Logging operator, see [this page.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/filters/) @@ -124,19 +116,19 @@ For a list of filters ### Outputs -This Output will receive logs from the Flow. +This `Output` will receive logs from the `Flow`. -Because the Flow is a namespaced resource, the Output must reside in same namespace as the Flow. +Because the `Flow` is a namespaced resource, the `Output` must reside in the same namespace as the `Flow`. # ClusterFlows -Matches, filters and outputs are also configured for ClusterFlows. The only difference is that the ClusterFlow is scoped at the cluster level and can configure log collection across all namespaces. +Matches, filters and `Outputs` are also configured for `ClusterFlows`. The only difference is that the `ClusterFlow` is scoped at the cluster level and can configure log collection across all namespaces. -ClusterFlow selects logs from all namespaces in the cluster. Logs from the cluster will be collected and logged to the selected ClusterOutput. +A `ClusterFlow` selects logs from all namespaces in the cluster. Logs from the cluster will be collected and logged to the selected `ClusterOutput`. -ClusterFlows need to be defined in YAML. +`ClusterFlows` need to be defined in YAML. {{% /tab %}} {{% /tabs %}} @@ -144,7 +136,7 @@ ClusterFlows need to be defined in YAML.
# YAML Example -The following example Flow transforms the log messages from the default namespace and sends them to an S3 output: +The following example `Flow` transforms the log messages from the default namespace and sends them to an S3 `Output`: ```yaml apiVersion: logging.banzaicloud.io/v1beta1 diff --git a/content/rancher/v2.5/en/logging/custom-resource-config/outputs/_index.md b/content/rancher/v2.5/en/logging/custom-resource-config/outputs/_index.md index 58f5cc7f024..c64e9ba7040 100644 --- a/content/rancher/v2.5/en/logging/custom-resource-config/outputs/_index.md +++ b/content/rancher/v2.5/en/logging/custom-resource-config/outputs/_index.md @@ -3,7 +3,7 @@ title: Outputs and ClusterOutputs weight: 2 --- -For the full details on configuring Outputs and ClusterOutputs, see the [Banzai Cloud Logging operator documentation.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/output/) +For the full details on configuring `Outputs` and `ClusterOutputs`, see the [Banzai Cloud Logging operator documentation.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/output/) - [Configuration](#configuration) - [YAML Examples](#yaml-examples) @@ -17,28 +17,26 @@ For the full details on configuring Outputs and ClusterOutputs, see the [Banzai {{% tabs %}} {{% tab "v2.5.8+" %}} - - - [Outputs](#outputs-2-5-8) - [ClusterOutputs](#clusteroutputs-2-5-8) # Changes in v2.5.8 -The Outputs and ClusterOutputs can now be configured by filling out forms in the Rancher UI. +The `Outputs` and `ClusterOutputs` can now be configured by filling out forms in the Rancher UI. # Outputs -The Output resource defines an output where your Flows can send the log messages. Outputs are the final stage for a logging flow. +The `Output` resource defines where your `Flows` can send the log messages. `Outputs` are the final stage for a logging `Flow`. -The output is a namespaced resource, which means only a Flow within the same namespace can access it. 
+The `Output` is a namespaced resource, which means only a `Flow` within the same namespace can access it. You can use secrets in these definitions, but they must also be in the same namespace. -For the details of Output custom resource, see [OutputSpec.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/output_types/) +For the details of the `Output` custom resource, see [OutputSpec.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/output_types/) -The Rancher UI provides forms for configuring the following Output types: +The Rancher UI provides forms for configuring the following `Output` types: - Amazon ElasticSearch - Azure Storage @@ -58,7 +56,7 @@ The Rancher UI provides forms for configuring the following Output types: - SumoLogic - Syslog -The Rancher UI provides forms for configuring the Output type, target, and access credentials if applicable. +The Rancher UI provides forms for configuring the `Output` type, target, and access credentials if applicable. For example configuration for each logging plugin supported by the logging operator, see the [logging operator documentation.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/) @@ -66,9 +64,9 @@ For example configuration for each logging plugin supported by the logging opera # ClusterOutputs -ClusterOutput defines an Output without namespace restrictions. It is only effective when deployed in the same namespace as the logging operator. +A `ClusterOutput` defines an `Output` without namespace restrictions. It is only effective when deployed in the same namespace as the logging operator.
-For the details of the ClusterOutput custom resource, see [ClusterOutput.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/clusteroutput_types/) +For the details of the `ClusterOutput` custom resource, see [ClusterOutput.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/clusteroutput_types/) {{% /tab %}} {{% tab "v2.5.0-v2.5.7" %}} @@ -81,13 +79,13 @@ For the details of the ClusterOutput custom resource, see [ClusterOutput.](https # Outputs -The Output resource defines an output where your Flows can send the log messages. Outputs are the final stage for a logging flow. +The `Output` resource defines where your `Flows` can send the log messages. `Outputs` are the final stage for a logging `Flow`. -The output is a namespaced resource, which means only a Flow within the same namespace can access it. +The `Output` is a namespaced resource, which means only a `Flow` within the same namespace can access it. You can use secrets in these definitions, but they must also be in the same namespace. -Outputs are configured in YAML. For the details of Output custom resource, see [OutputSpec.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/output_types/) +`Outputs` are configured in YAML. For the details of `Output` custom resource, see [OutputSpec.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/output_types/) For examples of configuration for each logging plugin supported by the logging operator, see the [logging operator documentation.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/) @@ -95,11 +93,11 @@ For examples of configuration for each logging plugin supported by the logging o # ClusterOutputs -ClusterOutput defines an Output without namespace restrictions. It is only effective when deployed in the same namespace as the logging operator. 
+A `ClusterOutput` defines an `Output` without namespace restrictions. It is only effective when deployed in the same namespace as the logging operator. -The Rancher UI provides forms for configuring the ClusterOutput type, target, and access credentials if applicable. +The Rancher UI provides forms for configuring the `ClusterOutput` type, target, and access credentials if applicable. -ClusterOutputs are configured in YAML. For the details of ClusterOutput custom resource, see [ClusterOutput.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/clusteroutput_types/) +`ClusterOutputs` are configured in YAML. For the details of the `ClusterOutput` custom resource, see [ClusterOutput.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/crds/v1beta1/clusteroutput_types/) For example configuration for each logging plugin supported by the logging operator, see the [logging operator documentation.](https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/outputs/) @@ -118,7 +116,7 @@ Once logging is installed, you can use these examples to help craft your own log ### Cluster Output to ElasticSearch -Let's say you wanted to send all logs in your cluster to an `elasticsearch` cluster. First, we create a cluster output. +Let's say you wanted to send all logs in your cluster to an `elasticsearch` cluster. First, we create a `ClusterOutput`. ```yaml apiVersion: logging.banzaicloud.io/v1beta1 @@ -133,9 +131,9 @@ spec: scheme: http ``` -We have created this cluster output, without elasticsearch configuration, in the same namespace as our operator: `cattle-logging-system.`. Any time we create a cluster flow or cluster output, we have to put it in the `cattle-logging-system` namespace. +We have created this `ClusterOutput`, without elasticsearch configuration, in the same namespace as our operator: `cattle-logging-system`.
Any time we create a `ClusterFlow` or `ClusterOutput`, we have to put it in the `cattle-logging-system` namespace. -Now that we have configured where we want the logs to go, let's configure all logs to go to that output. +Now that we have configured where we want the logs to go, let's configure all logs to go to that `ClusterOutput`. ```yaml apiVersion: logging.banzaicloud.io/v1beta1 @@ -153,7 +151,7 @@ We should now see our configured index with logs in it. ### Output to Splunk -What if we have an application team who only wants logs from a specific namespaces sent to a `splunk` server? For this case, we can use namespaced outputs and flows. +What if we have an application team who only wants logs from a specific namespace sent to a `splunk` server? For this case, we can use namespaced `Outputs` and `Flows`. Before we start, let's set up that team's application: `coolapp`. @@ -185,7 +183,7 @@ spec: image: paynejacob/loggenerator:latest ``` -With `coolapp` running, we will follow a similar path as when we created a cluster output. However, unlike cluster outputs, we create our output in our application's namespace. +With `coolapp` running, we will follow a similar path as when we created a `ClusterOutput`. However, unlike `ClusterOutputs`, we create our `Output` in our application's namespace. ```yaml apiVersion: logging.banzaicloud.io/v1beta1 @@ -200,7 +198,7 @@ spec: protocol: http ``` -Once again, let's feed our output some logs. +Once again, let's feed our `Output` some logs: ```yaml apiVersion: logging.banzaicloud.io/v1beta1 @@ -216,7 +214,7 @@ spec: ### Output to Syslog -Let's say you wanted to send all logs in your cluster to an `syslog` server. First, we create a cluster output. +Let's say you wanted to send all logs in your cluster to a `syslog` server.
First, we create a `ClusterOutput`: ```yaml apiVersion: logging.banzaicloud.io/v1beta1 @@ -240,7 +238,7 @@ apiVersion: logging.banzaicloud.io/v1beta1 transport: tcp ``` -Now that we have configured where we want the logs to go, let's configure all logs to go to that output. +Now that we have configured where we want the logs to go, let's configure all logs to go to that `Output`. ```yaml apiVersion: logging.banzaicloud.io/v1beta1 @@ -255,9 +253,9 @@ apiVersion: logging.banzaicloud.io/v1beta1 ### Unsupported Outputs -For the final example, we create an output to write logs to a destination that is not supported out of the box: +For the final example, we create an `Output` to write logs to a destination that is not supported out of the box: -> **Note on syslog** As of Rancher v2.5.4, `syslog` is a supported output. However, this example still provides an overview on using unsupported plugins. +> **Note on syslog** As of Rancher v2.5.4, `syslog` is a supported `Output`. However, this example still provides an overview on using unsupported plugins. ```yaml apiVersion: v1 @@ -345,4 +343,4 @@ spec: ignore_network_errors_at_startup: false ``` -Let's break down what is happening here. First, we create a deployment of a container that has the additional `syslog` plugin and accepts logs forwarded from another `fluentd`. Next we create an output configured as a forwarder to our deployment. The deployment `fluentd` will then forward all logs to the configured `syslog` destination. \ No newline at end of file +Let's break down what is happening here. First, we create a deployment of a container that has the additional `syslog` plugin and accepts logs forwarded from another `fluentd`. Next we create an `Output` configured as a forwarder to our deployment. The deployment `fluentd` will then forward all logs to the configured `syslog` destination. 
\ No newline at end of file diff --git a/content/rancher/v2.5/en/logging/migrating/_index.md b/content/rancher/v2.5/en/logging/migrating/_index.md index 5b0a6329cab..0f05903b436 100644 --- a/content/rancher/v2.5/en/logging/migrating/_index.md +++ b/content/rancher/v2.5/en/logging/migrating/_index.md @@ -37,51 +37,51 @@ There are four key concepts to understand for v2.5+ logging: 1. Outputs - _Outputs_ are a configuration resource that determine a destination for collected logs. This is where settings for aggregators such as ElasticSearch, Kafka, etc. are stored. _Outputs_ are namespaced resources. + `Outputs` are configuration resources that determine a destination for collected logs. This is where settings for aggregators such as ElasticSearch, Kafka, etc. are stored. `Outputs` are namespaced resources. 2. Flows - _Flows_ are a configuration resource that determine collection, filtering, and destination rules for logs. It is within a flow that one will configure what logs to collect, how to mutate or filter them, and which outputs to send the logs to. _Flows_ are namespaced resources, and can connect either to an _Output_ in the same namespace, or a _ClusterOutput_. + `Flows` are configuration resources that determine collection, filtering, and destination rules for logs. It is within a `Flow` that one will configure what logs to collect, how to mutate or filter them, and which `Outputs` to send the logs to. `Flows` are namespaced resources, and can connect either to an `Output` in the same namespace, or a `ClusterOutput`. 3. ClusterOutputs - _ClusterOutputs_ serve the same functionality as _Outputs_, except they are a cluster-scoped resource. _ClusterOutputs_ are necessary when collecting logs cluster-wide, or if you wish to provide an output to all namespaces in your cluster. + `ClusterOutputs` serve the same function as `Outputs`, except they are a cluster-scoped resource.
`ClusterOutputs` are necessary when collecting logs cluster-wide, or if you wish to provide an `Output` to all namespaces in your cluster. 4. ClusterFlows - _ClusterFlows_ serve the same function as _Flows_, but at the cluster level. They are used to configure log collection for an entire cluster, instead of on a per-namespace level. _ClusterFlows_ are also where mutations and filters are defined, same as _Flows_ (in functionality). + `ClusterFlows` serve the same function as `Flows`, but at the cluster level. They are used to configure log collection for an entire cluster, instead of on a per-namespace level. `ClusterFlows` are also where mutations and filters are defined, same as `Flows` (in functionality). # Cluster Logging -To configure cluster-wide logging for v2.5+ logging, one needs to setup a _ClusterFlow_. This object defines the source of logs, any transformations or filters to be applied, and finally the output(s) for the logs. +To configure cluster-wide logging for v2.5+ logging, one needs to set up a `ClusterFlow`. This object defines the source of logs, any transformations or filters to be applied, and finally the `Output` (or `Outputs`) for the logs. -> Important: _ClusterFlows_ must be defined within the `cattle-logging-system` namespace. _ClusterFlows_ will not work if defined in any other namespace. +> Important: `ClusterFlows` must be defined within the `cattle-logging-system` namespace. `ClusterFlows` will not work if defined in any other namespace. -In legacy logging, in order to collect logs from across the entire cluster, one only needed to enable cluster-level logging and define the desired output. This basic approach remains in v2.5+ logging. To replicate legacy cluster-level logging, follow these steps: +In legacy logging, in order to collect logs from across the entire cluster, one only needed to enable cluster-level logging and define the desired `Output`. This basic approach remains in v2.5+ logging. 
To replicate legacy cluster-level logging, follow these steps: -1. Define a _ClusterOutput_ according to the instructions found under [Output Configuration](#output-configuration) -2. Create a _ClusterFlow_, ensuring that it is set to be created in the `cattle-logging-system` namespace - 1. Remove all _Include_ and _Exclude_ rules from the flow definition. This ensures that all logs are gathered. +1. Define a `ClusterOutput` according to the instructions found under [Output Configuration](#output-configuration) +2. Create a `ClusterFlow`, ensuring that it is set to be created in the `cattle-logging-system` namespace + 1. Remove all _Include_ and _Exclude_ rules from the `Flow` definition. This ensures that all logs are gathered. 2. You do not need to configure any filters if you do not wish - default behavior does not require their creation - 3. Define your cluster output(s) + 3. Define your cluster `Output` or `Outputs` -This will result in logs from all sources in the cluster (all pods, and all system components) being collected and sent to the output(s) you defined in the _ClusterFlow_. +This will result in logs from all sources in the cluster (all pods, and all system components) being collected and sent to the `Output` or `Outputs` you defined in the `ClusterFlow`. # Project Logging -Logging in v2.5+ is not project-aware. This means that in order to collect logs from pods running in project namespaces, you will need to define _Flows_ for those namespaces. +Logging in v2.5+ is not project-aware. This means that in order to collect logs from pods running in project namespaces, you will need to define `Flows` for those namespaces. To collect logs from a specific namespace, follow these steps: -1. Define an _Output_ or _ClusterOutput_ according to the instructions found under [Output Configuration](#output-configuration) -2. Create a _Flow_, ensuring that it is set to be created in the namespace in which you want to gather logs. +1. 
Define an `Output` or `ClusterOutput` according to the instructions found under [Output Configuration](#output-configuration) +2. Create a `Flow`, ensuring that it is set to be created in the namespace in which you want to gather logs. 1. If you wish to define _Include_ or _Exclude_ rules, you may do so. Otherwise, removal of all rules will result in all pods in the target namespace having their logs collected. 2. You do not need to configure any filters if you do not wish - default behavior does not require their creation - 3. Define your output(s) - these can be either _ClusterOutput_ or _Output_ objects. + 3. Define your outputs - these can be either `ClusterOutput` or `Output` objects. -This will result in logs from all sources in the namespace (pods) being collected and sent to the output(s) you defined in your _Flow_. +This will result in logs from all sources in the namespace (pods) being collected and sent to the `Output` (or `Outputs`) you defined in your `Flow`. -> To collect logs from a project, repeat the above steps for every namespace within the project. Alternatively, you can label your project workloads with a common label (e.g. `project=my-project`) and use a _ClusterFlow_ to collect logs from all pods matching this label. +> To collect logs from a project, repeat the above steps for every namespace within the project. Alternatively, you can label your project workloads with a common label (e.g. `project=my-project`) and use a `ClusterFlow` to collect logs from all pods matching this label. # Output Configuration In legacy logging, there are five logging destinations to choose from: Elasticsearch, Splunk, Kafka, Fluentd, and Syslog. With the exception of Syslog, all of these destinations are available in logging v2.5+. 
@@ -100,7 +100,7 @@ In legacy logging, there are five logging destinations to choose from: Elasticse
 | SSL Configuration -> Enabled SSL Verification | SSL -> Certificate Authority File | Certificate must now be stored in a secret |
 
-In legacy logging, indices were automatically created according to the format in the "Index Patterns" section. In v2.5 logging, default behavior has been changed to logging to a single index. You can still configure index pattern functionality on the output object by editing as YAML and inputting the following values:
+In legacy logging, indices were automatically created according to the format in the "Index Patterns" section. In v2.5 logging, the default behavior has changed to logging to a single index. You can still configure index patterns on the `Output` object by editing it as YAML and entering the following values:
 
 ```
 ...
@@ -147,7 +147,7 @@ _(2) Users can configure either `ca_file` (a path to a PEM-encoded CA certificat
 
 ### Fluentd
 
-As of v2.5.2, it is only possible to add a single Fluentd server using the "Edit as Form" option. To add multiple servers, edit the output as YAML and input multiple servers.
+As of v2.5.2, it is only possible to add a single Fluentd server using the "Edit as Form" option. To add multiple servers, edit the `Output` as YAML and list multiple servers.
 
 | Legacy Logging                           | v2.5+ Logging                                       | Notes                                                                  |
 |------------------------------------------|-----------------------------------------------------|----------------------------------------------------------------------|
 
@@ -169,11 +169,11 @@ _(1) These values are to be specified as paths to files. Those files must be mou
 
 ### Syslog
 
-As of v2.5.2, syslog is not currently supported as an output using v2.5+ logging.
+As of v2.5.2, syslog is not supported as an `Output` type in v2.5+ logging.
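The multiple-Fluentd-server case above can only be expressed in YAML. A minimal sketch is shown below; the hostnames and resource names are hypothetical, and it assumes the logging operator's `forward` output type with its `servers` list (verify the field names against your operator version):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: fluentd-servers          # hypothetical name
  namespace: my-namespace        # hypothetical namespace
spec:
  forward:
    servers:                     # one entry per Fluentd server
      - host: fluentd-a.example.com
        port: 24224
      - host: fluentd-b.example.com
        port: 24224
```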
 # Custom Log Fields
 
-In order to add custom log fields, you will need to add the following YAML to your flow configuration:
+To add custom log fields, add the following YAML to your `Flow` configuration:
 
 ```
 ...
diff --git a/content/rancher/v2.5/en/logging/rbac/_index.md b/content/rancher/v2.5/en/logging/rbac/_index.md
index 6db6fa6135c..063d09d6bf0 100644
--- a/content/rancher/v2.5/en/logging/rbac/_index.md
+++ b/content/rancher/v2.5/en/logging/rbac/_index.md
@@ -6,16 +6,16 @@ weight: 3
 
 Rancher logging has two roles, `logging-admin` and `logging-view`.
 
-- `logging-admin` gives users full access to namespaced flows and outputs
-- `logging-view` allows users to *view* namespaced flows and outputs, and cluster flows and outputs
+- `logging-admin` gives users full access to namespaced `Flows` and `Outputs`
+- `logging-view` allows users to *view* namespaced `Flows` and `Outputs`, and `ClusterFlows` and `ClusterOutputs`
 
-> **Why choose one role over the other?** Edit access to cluster flow and cluster output resources is powerful. Any user with it has edit access for all logs in the cluster.
+> **Why choose one role over the other?** Edit access to `ClusterFlow` and `ClusterOutput` resources is powerful. Any user with it can edit the collection of all logs in the cluster.
 
 In Rancher, the cluster administrator role is the only role with full access to all `rancher-logging` resources. Cluster members are not able to edit or read any logging resources.
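For example, granting a user the view role in a single namespace could be done with a standard Kubernetes `RoleBinding`. This sketch assumes `logging-view` is exposed as a `ClusterRole` (verify against the installed chart); the user, namespace, and binding names are hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: logging-view-jane        # hypothetical name
  namespace: my-namespace        # hypothetical namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: logging-view
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: jane                   # hypothetical user
```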
 Project owners and members have the following privileges:
 
 Project Owners | Project Members
 --- | ---
-able to create namespaced flows and outputs in their projects' namespaces | only able to view the flows and outputs in projects' namespaces
+able to create namespaced `Flows` and `Outputs` in their projects' namespaces | only able to view the `Flows` and `Outputs` in their projects' namespaces
 can collect logs from anything in their projects' namespaces | cannot collect any logs in their projects' namespaces
 
 Both project owners and project members require at least *one* namespace in their project to use logging. If they do not, then they may not see the logging button in the top nav dropdown.
\ No newline at end of file