Compare commits


27 Commits

Author SHA1 Message Date
ismail simsek
7ae2eed876 Prometheus: Make sure "Min Step" has precedence (#115941)
* set minStep value as final step value when set explicitly.

* enhance it with tests

* improve function readability

* a bit more improvement for readability
2026-01-15 00:50:10 +01:00
Isabel Matwawana
cddc4776ef Docs: Fix annotations step (#116302) 2026-01-14 22:09:03 +00:00
Renato Costa
ec941b42ef unified-storage: lift name requirement in storage backend ListHistory (#116291) 2026-01-14 16:36:33 -05:00
Will Assis
873d35b494 unified-storage: sqlkv enable more tests (#116150)
* unified-storage: sqlkv enable more tests
2026-01-14 16:28:14 -05:00
Yuri Tseretyan
d191425f3d Replace clients with app-sdk-generated versions (#116227)
* add GetClientRegistry to user in integration tests to be able to create resource clients
* replace client in alerting notification api tests to generated by app-sdk
2026-01-14 16:13:32 -05:00
chencs
0a66aacfb3 Use a valid regex for Custom all value (#116216) 2026-01-14 16:01:24 -05:00
Jesse David Peterson
9f2f93b401 Docs: Time range pan (#115045)
* docs(time-range): document panel level time range pan feature

* docs(time-range-pan): include panel time range in panel viz docs pages

* docs(fix): correct "x" which was probably a placeholder

* Docs: Add pan panel time range (#115102)

* Added content to panel-zoom shared file from panel-pan.md and deleted panel-pan.md

* Renamed file

* Updated overview page and shared file reference in candlestick

* Updated shared file refs in other pages

* Fixed a couple typos

* docs(time-range-pan): reference only one video of time range panning

* docs(dashboard): amend dashboard common time range controls section

---------

Co-authored-by: Isabel Matwawana <76437239+imatwawana@users.noreply.github.com>
2026-01-14 16:17:38 -04:00
Mariell Hoversholm
9e399e0b19 Data Source: Proxy fallback routes must match all inputs (#116274) 2026-01-14 21:12:18 +01:00
Renato Costa
2f520454ae unified-storage: save previous_resource_version as microsecond timestamp in compat mode (#116179)
* unified-storage: save `previous_resource_version` as microsecond timestamp in compat mode

* debug test

* fix test

* fix typo

* Fix misleading var name.

`ms` is typically the abbreviation for millisecond, not microsecond.
2026-01-14 14:21:46 -05:00
Ezequiel Victorero
72f7bd3900 Snapshots: Support public snapshot instance in latest version (#116086) 2026-01-14 15:28:16 -03:00
Will Assis
ba416eab4e unified-storage: dont use polling notifier with sqlite in sqlkv (#116283)
* unified-storage: dont use polling notifier with sqlite in sqlkv
2026-01-14 18:22:39 +00:00
Alan Martin
189d50d815 UI: Use react-table column header types in InteractiveTable with story and tests (#116091)
* feat(InteractiveTable): allow custom header rendering

* docs(InteractiveTable): add story for custom header rendering

* test(InteractiveTable): add tests for custom header rendering

* docs(InteractiveTable): add custom header rendering documentation

* fix: test failure from non-a11y code
2026-01-14 17:59:03 +00:00
Mariell Hoversholm
450eaba447 test: skip integration test in short mode (#116280) 2026-01-14 18:33:55 +01:00
Kristina Demeshchik
87f5d5e741 Dashboard: Hide export options in collapsible row (#116155)
* Introduce export options

* Reset keys

* Introduce a new key

* Generate new keys

* Rename the label

* re-generate key

* Fix the spacing

* Remove debuggers

* Add subtitle

* refactor component

* update labels

* failed tests

* Update tooltip

* Linting issue
2026-01-14 12:12:33 -05:00
Andrew Hackmann
5e68b07cac Elasticsearch: Make code editor look more like prometheus (#115461)
* Make code editor look more like Prometheus

* add warning when switching builders

* address adam's feedback

* yarn
2026-01-14 09:50:35 -07:00
Adela Almasan
99acd3766d Suggestions: Update empty state (#116172) 2026-01-14 10:37:42 -06:00
Konrad Lalik
0faab257b1 Alerting: Add E2E test configuration and fix saved searches tests (#116203)
* Alerting: Fix and stabilize saved searches E2E tests

Stabilizes the saved searches E2E tests by ensuring correct feature
toggles are enabled and improving the data cleanup logic.

Previously, `clearSavedSearches` relied on clearing localStorage, which
was insufficient as saved searches are persisted server-side via the
UserStorage API. The cleanup now correctly invokes the UserStorage API.

Changes:
- Add `alerting` project to Playwright configuration with authentication
- Enable `alertingListViewV2`, `alertingFilterV2`, and `alertingSavedSearches` toggles in tests
- Update `clearSavedSearches` helper to use UserStorage API for reliable cleanup
- Improve test selectors to use more robust `getByRole` and `getByText` queries
- Update `SavedSearchItem` to merge `aria-label` into `tooltip` for consistency

* Fix saved searches unit tests
2026-01-14 16:41:53 +01:00
Isabel Matwawana
19deffee40 Docs: Dynamic dashboards edit public preview (#116050) 2026-01-14 10:31:11 -05:00
Alyssa Joyner
6385b1f471 [Azure Monitor]: Preserve logs builder query when switching to KQL mode (#116161) 2026-01-14 08:17:21 -07:00
Isabel Matwawana
505fa869ee Docs: Dashboard schema v2 public preview updates (#115293) 2026-01-14 10:03:19 -05:00
Tania
399b3def4f Chore: Fix pluginsAutoUpdate flag evaluation (#116065)
* Experimental: Test flag evaluation

* Attempt to inject requester into the context

* fixup! Attempt to inject requester into the context
2026-01-14 15:57:12 +01:00
Paul Marbach
d6ac674f3e Gauge: Fix issue with gdev dashboard (#116235) 2026-01-14 09:55:34 -05:00
Paul Marbach
0e6651c729 Gauge: Re-introduce minVizHeight and minVizWidth (#116034) 2026-01-14 09:55:12 -05:00
Haris Rozajac
ea2a0936df Dashboard Conversion: Preserve repeat property when converting tabs to rows (#116180)
* preserve repeat property

* fix test

* preserve repeat when converting panels in tabs or rows with autogrid layout

* fix v1 serialization of autogrid
2026-01-14 07:29:51 -07:00
Ryan McKinley
d95c51b20e Chore: Deprecate experimental restore dashboard API (#116256) 2026-01-14 14:09:37 +00:00
Rodrigo Vasconcelos de Barros
d0df6b8de4 Alerting: Provisioning Status Differentiation for ALL resources (#115773)
* Show different badge for converted prometheus provisioned resource

* Update contact points and templates to populate provenance

* Update notification policies

* Handle non k8s contact point in ContactPointHeader

* Fix provenance check in enhanceContactPointsWithMetadata

* Update translations

* Fix unused import

* Refactor provenance enum

* Derive provisioned status from provenance in Route type

* Remove unused imports

* Treat PROVENANCE_NONE as no provenance in isRouteProvisioned

* Rename KnownProvenance.None to .Empty to avoid confusion

* Change copy text for resources with converted_prometheus provenance

* Derive provisioned status from provenance in GrafanaManagedContactPoint

* Fix linter errors

* Extract helper method to check if contact point is provisioned

* Replace string literal with constant

* Refactor KnownProvenance enum values

Refactored the KnownProvenance enum to better reflect the known provenances defined by the backend.
Also refactored the methods that assert whether a resource is provisioned to better reflect which provenance values indicate no provisioning.
A resource is considered not provisioned when the provenance is equal to '', 'none' or undefined.

* Use provenance to infer provenance status for Templates

Refactored useNotificationTemplateMetadata to use only the provenance value, and extracted the method that asserts whether a resource is provisioned into k8s/utils to make it more resource agnostic.

* Replace empty string with 'none' for KnownProvenance enum

The empty string value for provenance gets mapped to the string literal 'none' before being passed down in the API response, so we can use only 'none'.

* Replace PROVENANCE_NONE with KnownProvenance.None

Replaced the constant PROVENANCE_NONE with the KnownProvenance.None enum value since the values were duplicated.

* Fix JSDoc

* Change copy text for ProvisioningBadge

* Add missing tooltip in notification policy badge

* Add missing tooltip in TemplatesTable badge

* fix conflicts

---------

Co-authored-by: Sonia Aguilar <33540275+soniaAguilarPeiron@users.noreply.github.com>
Co-authored-by: Sonia Aguilar <soniaaguilarpeiron@gmail.com>
2026-01-14 08:58:03 -05:00
Alex Khomenko
9f44f868aa Stars: Fix infinite loading with no starred items (#116248) 2026-01-14 15:48:48 +02:00
130 changed files with 5060 additions and 4061 deletions

View File

@@ -586,6 +586,7 @@
},
"id": -1,
"panels": [],
"repeat": "custom_var_tab",
"title": "Repeated Tab by \"$custom_var_tab\"",
"type": "row"
},
@@ -610,8 +611,11 @@
"y": 22
},
"id": 6,
"maxPerRow": 3,
"options": {},
"pluginVersion": "12.4.0-19736337744",
"repeat": "custom_var_panel",
"repeatDirection": "h",
"targets": [
{
"refId": "A"

View File

@@ -586,6 +586,7 @@
},
"id": -1,
"panels": [],
"repeat": "custom_var_tab",
"title": "Repeated Tab by \"$custom_var_tab\"",
"type": "row"
},
@@ -610,8 +611,11 @@
"y": 22
},
"id": 6,
"maxPerRow": 3,
"options": {},
"pluginVersion": "12.4.0-19736337744",
"repeat": "custom_var_panel",
"repeatDirection": "h",
"targets": [
{
"refId": "A"

View File

@@ -439,6 +439,11 @@ func processTabItem(elements map[string]dashv2alpha1.DashboardElement, tab *dash
rowPanel["title"] = *tab.Spec.Title
}
if tab.Spec.Repeat != nil && tab.Spec.Repeat.Value != "" {
// We only use value here as V1 doesn't support mode
rowPanel["repeat"] = tab.Spec.Repeat.Value
}
rowPanel["gridPos"] = map[string]interface{}{
"x": 0,
"y": currentY,
@@ -819,6 +824,21 @@ func convertAutoGridLayoutToPanelsWithOffset(elements map[string]dashv2alpha1.Da
},
}
// Convert AutoGridRepeatOptions to RepeatOptions if present
// AutoGridRepeatOptions only has mode and value; infer direction and maxPerRow from AutoGrid settings:
// - direction: always "h" (AutoGrid flows horizontally, left-to-right then wraps)
// - maxPerRow: from AutoGrid's maxColumnCount
if item.Spec.Repeat != nil {
directionH := dashv2alpha1.DashboardRepeatOptionsDirectionH
maxPerRow := int64(maxColumnCount)
gridItem.Spec.Repeat = &dashv2alpha1.DashboardRepeatOptions{
Mode: item.Spec.Repeat.Mode,
Value: item.Spec.Repeat.Value,
Direction: &directionH,
MaxPerRow: &maxPerRow,
}
}
panel, err := convertPanelFromElement(&element, &gridItem)
if err != nil {
return nil, fmt.Errorf("failed to convert panel %s: %w", item.Spec.Element.Name, err)
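The inference of `direction` and `maxPerRow` in the hunk above can be sketched in isolation. The types below are hypothetical stand-ins for the generated `dashv2alpha1` structs, reduced to the fields involved, so only the conversion logic itself is shown:

```go
package main

import "fmt"

// AutoGridRepeat stands in for AutoGridRepeatOptions: it only carries mode and value.
type AutoGridRepeat struct {
	Mode  string
	Value string
}

// RepeatOptions stands in for DashboardRepeatOptions with optional direction and maxPerRow.
type RepeatOptions struct {
	Mode      string
	Value     string
	Direction *string
	MaxPerRow *int64
}

// inferRepeat mirrors the conversion in the diff: AutoGrid flows horizontally
// (left-to-right, then wraps), so direction is fixed to "h", and maxPerRow is
// taken from the auto grid's maxColumnCount.
func inferRepeat(r *AutoGridRepeat, maxColumnCount int) *RepeatOptions {
	if r == nil {
		return nil
	}
	dir := "h"
	max := int64(maxColumnCount)
	return &RepeatOptions{
		Mode:      r.Mode,
		Value:     r.Value,
		Direction: &dir,
		MaxPerRow: &max,
	}
}

func main() {
	out := inferRepeat(&AutoGridRepeat{Mode: "variable", Value: "dc"}, 3)
	fmt.Println(out.Mode, out.Value, *out.Direction, *out.MaxPerRow)
}
```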

View File

@@ -2117,7 +2117,7 @@
}
],
"title": "Numeric, no series",
"type": "gauge"
"type": "radialbar"
},
{
"datasource": {
@@ -2183,7 +2183,7 @@
}
],
"title": "Non-numeric",
"type": "gauge"
"type": "radialbar"
}
],
"preload": false,
@@ -2201,4 +2201,4 @@
"title": "Panel tests - Gauge (new)",
"uid": "panel-tests-gauge-new",
"weekStart": ""
}
}

View File

@@ -2067,7 +2067,7 @@
}
],
"title": "Numeric, no series",
"type": "gauge"
"type": "radialbar"
},
{
"datasource": {
@@ -2131,7 +2131,7 @@
}
],
"title": "Non-numeric",
"type": "gauge"
"type": "radialbar"
}
],
"preload": false,

View File

@@ -25,10 +25,6 @@ cards:
height: 24
href: ./foundation-sdk/
description: The Grafana Foundation SDK is a set of tools, types, and libraries that let you define Grafana dashboards and resources using familiar programming languages like Go, TypeScript, Python, Java, and PHP. Use it in conjunction with `grafanactl` to push your programmatically generated resources.
- title: JSON schema v2
height: 24
href: ./schema-v2/
description: Grafana dashboards are represented as JSON objects that store metadata, panels, variables, and settings. Observability as Code works with all versions of the JSON model, and it's fully compatible with version 2.
- title: Git Sync (private preview)
height: 24
href: ./provision-resources/intro-git-sync/
@@ -68,7 +64,7 @@ Historically, managing Grafana as code involved various community and Grafana La
- This approach requires handling HTTP requests and responses but provides complete control over resource management.
- `grafanactl`, Git Sync, and the Foundation SDK are all built on top of these APIs.
- To understand Dashboard Schemas accepted by the APIs, refer to the [JSON models documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/).
- To understand Dashboard Schemas accepted by the APIs, refer to the [JSON models documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/build-dashboards/view-dashboard-json-model/index.md).
## Explore

View File

@@ -1,243 +0,0 @@
---
description: A reference for the JSON dashboard schemas used with Observability as Code, including the experimental V2 schema.
keywords:
- configuration
- as code
- dashboards
- git integration
- git sync
- github
labels:
products:
- cloud
- enterprise
- oss
title: JSON schema v2
weight: 500
canonical: https://grafana.com/docs/grafana/latest/as-code/observability-as-code/schema-v2/
aliases:
- ../../observability-as-code/schema-v2/ # /docs/grafana/next/observability-as-code/schema-v2/
---
# Dashboard JSON schema v2
{{< admonition type="caution" >}}
Dashboard JSON schema v2 is an [experimental](https://grafana.com/docs/release-life-cycle/) feature. Engineering and on-call support is not available. Documentation is either limited or not provided outside of code comments. No SLA is provided. To get early access to this feature, request it through [this form](https://docs.google.com/forms/d/e/1FAIpQLSd73nQzuhzcHJOrLFK4ef_uMxHAQiPQh1-rsQUT2MRqbeMLpg/viewform?usp=dialog).
**Do not enable this feature in production environments as it may result in the irreversible loss of data.**
{{< /admonition >}}
Grafana dashboards are represented as JSON objects that store metadata, panels, variables, and settings.
Observability as Code works with all versions of the JSON model, and it's fully compatible with version 2.
## Before you begin
Schema v2 is automatically enabled with the Dynamic Dashboards feature toggle.
To get early access to this feature, request it through [this form](https://docs.google.com/forms/d/e/1FAIpQLSd73nQzuhzcHJOrLFK4ef_uMxHAQiPQh1-rsQUT2MRqbeMLpg/viewform?usp=dialog).
It also requires the new dashboards API feature toggle, `kubernetesDashboards`, to be enabled.
For more information on how dashboards behave depending on your feature flag configuration, refer to [Notes and limitations](#notes-and-limitations).
## Accessing the JSON Model
To view the JSON representation of a dashboard:
1. Toggle on the edit mode switch in the top-right corner of the dashboard.
1. Click the gear icon in the top navigation bar to go to **Settings**.
1. Select the **JSON Model** tab.
1. Copy or edit the JSON structure as needed.
## JSON fields
```json
{
"annotations": [],
"cursorSync": "Off",
"editable": true,
"elements": {},
"layout": {
"kind": "GridLayout", // Can also be AutoGridLayout, RowsLayout, or TabsLayout
"spec": {
"items": []
}
},
"links": [],
"liveNow": false,
"preload": false,
"tags": [], // Tags associated with the dashboard.
"timeSettings": {
"autoRefresh": "",
"autoRefreshIntervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"fiscalYearStartMonth": 0,
"from": "now-6h",
"hideTimepicker": false,
"timezone": "browser",
"to": "now"
},
"title": "",
"variables": []
}
```
The dashboard JSON sample shown uses the default `GridLayoutKind`.
The JSON for the other three layout options in a new dashboard, `AutoGridLayout`, `RowsLayout`, and `TabsLayout`, is as follows:
**`AutoGridLayout`**
```json
"layout": {
"kind": "AutoGridLayout",
"spec": {
"columnWidthMode": "standard",
"items": [],
"fillScreen": false,
"maxColumnCount": 3,
"rowHeightMode": "standard"
}
},
```
**`RowsLayout`**
```json
"layout": {
"kind": "RowsLayout",
"spec": {
"rows": []
}
}
```
**`TabsLayout`**
```json
"layout": {
"kind": "TabsLayout",
"spec": {
"tabs": []
}
}
```
### `DashboardSpec`
The following table explains the usage of the dashboard JSON fields.
The table includes default and other fields:
<!-- prettier-ignore-start -->
| Name | Usage |
| ------------ | ------------------------------------------------------------------------- |
| annotations | Contains the list of annotations that are associated with the dashboard. |
| cursorSync | Dashboard cursor sync behavior.<ul><li>`Off` - No shared crosshair or tooltip (default)</li><li>`Crosshair` - Shared crosshair</li><li>`Tooltip` - Shared crosshair and shared tooltip</li></ul> |
| editable | bool. Whether or not a dashboard is editable. |
| elements | Contains the list of elements included in the dashboard. Supported dashboard elements are: PanelKind and LibraryPanelKind. |
| layout | The dashboard layout. Supported layouts are:<ul><li>GridLayoutKind</li><li>AutoGridLayoutKind</li><li>RowsLayoutKind</li><li>TabsLayoutKind</li></ul> |
| links | Links with references to other dashboards or external websites. |
| liveNow | bool. When set to `true`, the dashboard redraws panels at an interval matching the pixel width. This keeps data "moving left" regardless of the query refresh rate. This setting helps avoid dashboards presenting stale live data. |
| preload | bool. When set to `true`, the dashboard loads all panels when the dashboard is loaded. |
| tags | Contains the list of tags associated with dashboard. |
| timeSettings | All time settings for the dashboard. |
| title | Title of the dashboard. |
| variables | Contains the list of configured template variables. |
<!-- prettier-ignore-end -->
### `annotations`
The configuration for the list of annotations that are associated with the dashboard.
For the JSON and field usage notes, refer to the [annotations schema documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/annotations-schema/).
### `elements`
Dashboards can contain the following elements:
- [PanelKind](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/panel-schema/)
- [LibraryPanelKind](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/librarypanel-schema/)
### `layout`
Dashboards can have four layout options:
- [GridLayoutKind](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/layout-schema/#gridlayoutkind)
- [AutoGridLayoutKind](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/layout-schema/#autogridlayoutkind)
- [RowsLayoutKind](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/layout-schema/#rowslayoutkind)
- [TabsLayoutKind](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/layout-schema/#tabslayoutkind)
For the JSON and field usage notes about each of these, refer to the [layout schema documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/layout-schema/).
### `links`
The configuration for links with references to other dashboards or external websites.
For the JSON and field usage notes, refer to the [links schema documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/links-schema/).
### `tags`
Tags associated with the dashboard. Each tag can be up to 50 characters long.
`[...string]`
### `timesettings`
The `TimeSettingsSpec` defines the default time configuration for the time picker and the refresh picker for the specific dashboard.
For the JSON and field usage notes about the `TimeSettingsSpec`, refer to the [timesettings schema documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/timesettings-schema/).
### `variables`
The `variables` schema defines which variables are used in the dashboard.
There are eight variable types:
- QueryVariableKind
- TextVariableKind
- ConstantVariableKind
- DatasourceVariableKind
- IntervalVariableKind
- CustomVariableKind
- GroupByVariableKind
- AdhocVariableKind
For the JSON and field usage notes about the `variables` spec, refer to the [variables schema documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/variables-schema/).
## Notes and limitations
### Existing dashboards
With schema v2 enabled, you can still open and view your pre-existing dashboards.
Upon saving, they'll be updated to the new schema, where you can take advantage of the new features and functionality.
### Dashboard behavior with disabled feature flags
If you disable the Dynamic dashboards or `kubernetesDashboards` feature flags, you should be aware of how dashboards will behave.
#### Disable Dynamic dashboards
If the Dynamic dashboards feature toggle is disabled, dashboards behave differently depending on how they were built:
- Dashboards built on the new schema through the UI - View only
- Dashboards built on Schema v1 - View and edit
- Dashboards built on the new schema by way of Terraform or the CLI - View and edit
- Provisioned dashboards built on the new schema - View and edit, but the edit experience will be the old experience
#### Disable Dynamic dashboards and `kubernetesDashboards`
You'll be unable to view or edit dashboards created or updated in the new schema.
### Import and export
From the UI, dashboards created on schema v2 can be exported and imported like other dashboards.
When you export a dashboard to use in another instance, data source references are not persisted, but data source types are.
You'll have the option to select the data source of your choice in the import UI.

View File

@@ -1,86 +0,0 @@
---
description: A reference for the JSON annotations schema used with Observability as Code.
keywords:
- configuration
- as code
- as-code
- dashboards
- git integration
- git sync
- github
- annotations
labels:
products:
- cloud
- enterprise
- oss
menuTitle: annotations schema
title: annotations
weight: 100
canonical: https://grafana.com/docs/grafana/latest/as-code/observability-as-code/schema-v2/annotations-schema/
aliases:
- ../../../observability-as-code/schema-v2/annotations-schema/ # /docs/grafana/next/observability-as-code/schema-v2/annotations-schema/
---
# `annotations`
The configuration for the list of annotations that are associated with the dashboard.
```json
"annotations": [
{
"kind": "AnnotationQuery",
"spec": {
"builtIn": false,
"datasource": {
"type": "",
"uid": ""
},
"enable": false,
"hide": false,
"iconColor": "",
"name": ""
}
}
],
```
`AnnotationsQueryKind` consists of:
- kind: "AnnotationQuery"
- spec: [AnnotationQuerySpec](#annotationqueryspec)
## `AnnotationQuerySpec`
| Name | Type/Definition |
| ---------- | ----------------------------------------------------------------- |
| datasource | [`DataSourceRef`](#datasourceref) |
| query | [`DataQueryKind`](#dataquerykind) |
| enable | bool |
| hide | bool |
| iconColor | string |
| name | string |
| builtIn | bool. Default is `false`. |
| filter | [`AnnotationPanelFilter`](#annotationpanelfilter) |
| options | `[string]`: A catch-all field for datasource-specific properties. |
### `DataSourceRef`
| Name | Usage |
| ----- | ---------------------------------- |
| type? | string. The plugin type-id. |
| uid? | The specific data source instance. |
### `DataQueryKind`
| Name | Type |
| ---- | ------ |
| kind | string |
| spec | string |
### `AnnotationPanelFilter`
| Name | Type/Definition |
| -------- | ------------------------------------------------------------------------------ |
| exclude? | bool. Should the specified panels be included or excluded. Default is `false`. |
| ids | `[...uint8]`. Panel IDs that should be included or excluded. |
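As an illustrative sketch (the panel IDs are assumed values, not from the schema docs), an annotation filter that excludes two panels would look like:

```json
"filter": {
  "exclude": true,
  "ids": [2, 5]
}
```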

View File

@@ -1,339 +0,0 @@
---
description: A reference for the JSON layout schema used with Observability as Code.
keywords:
- configuration
- as code
- as-code
- dashboards
- git integration
- git sync
- github
- layout
labels:
products:
- cloud
- enterprise
- oss
menuTitle: layout schema
title: layout
weight: 400
canonical: https://grafana.com/docs/grafana/latest/as-code/observability-as-code/schema-v2/layout-schema/
aliases:
- ../../../observability-as-code/schema-v2/layout-schema/ # /docs/grafana/next/observability-as-code/schema-v2/layout-schema/
---
# `layout`
There are four layout options offering two types of panel control:
**Panel layout options**
These options control the size and position of panels:
- [GridLayoutKind](#gridlayoutkind) - Corresponds to the **Custom** option in the UI. You define panel size and panel positions using x- and y- settings.
- [AutoGridLayoutKind](#autogridlayoutkind) - Corresponds to the **Auto grid** option in the UI. Panel size and position are automatically set based on column and row parameters.
**Panel grouping options**
These options control the grouping of panels:
- [RowsLayoutKind](#rowslayoutkind) - Groups panels into rows.
- [TabsLayoutKind](#tabslayoutkind) - Groups panels into tabs.
## `GridLayoutKind`
The grid layout allows you to manually size and position grid items by setting the height, width, x, and y of each item.
This layout corresponds to the **Custom** option in the UI.
Following is the JSON for a default grid layout, a grid layout item, and a grid layout row:
```json
"kind": "GridLayout",
"spec": {
"items": [
{
"kind": "GridLayoutItem",
"spec": {
"element": {...},
"height": 0,
"width": 0,
"x": 0,
"y": 0
}
},
{
"kind": "GridLayoutRow",
"spec": {
"collapsed": false,
"elements": [],
"title": "",
"y": 0
}
},
]
}
```
`GridLayoutKind` consists of:
- kind: "GridLayout"
- spec: GridLayoutSpec
- items: GridLayoutItemKind or GridLayoutRowKind
- GridLayoutItemKind
- kind: "GridLayoutItem"
- spec: [GridLayoutItemSpec](#gridlayoutitemspec)
- GridLayoutRowKind
- kind: "GridLayoutRow"
- spec: [GridLayoutRowSpec](#gridlayoutrowspec)
### `GridLayoutItemSpec`
The following table explains the usage of the grid layout item JSON fields:
| Name | Usage |
| ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| x | integer. Position of the item x-axis. |
| y | integer. Position of the item y-axis. |
| width | Width of the item in pixels. |
| height | Height of the item in pixels. |
| element | `ElementReference`. Reference to a [`PanelKind`](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/panel-schema/) from `dashboard.spec.elements` expressed as JSON Schema reference. |
| repeat? | [RepeatOptions](#repeatoptions). Configured repeat options, if any. |
#### `RepeatOptions`
The following table explains the usage of the repeat option JSON fields:
| Name | Usage |
| ---------- | ---------------------------------------------------- |
| mode | `RepeatMode` - "variable" |
| value | string |
| direction? | Options are `h` for horizontal and `v` for vertical. |
| maxPerRow? | integer |
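As an illustrative sketch (the variable name and maxPerRow are assumed values), a grid layout item repeated horizontally by a `datacenter` variable could carry:

```json
"repeat": {
  "mode": "variable",
  "value": "datacenter",
  "direction": "h",
  "maxPerRow": 4
}
```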
### `GridLayoutRowSpec`
The following table explains the usage of the grid layout row JSON fields:
<!-- prettier-ignore-start -->
| Name | Usage |
| ---- | ----- |
| y | integer. Position of the row y-axis |
| collapsed | bool. Whether or not the row is collapsed |
| title | Row title |
| elements | [`[...GridLayoutItemKind]`](#gridlayoutitemspec). Grid items in the row have y values relative to the row's y value, so a panel positioned at `y: 0` in a row with `y: 10` is rendered at `y: 11` in the dashboard (the row header has a height of 1). |
| repeat? | [RowRepeatOptions](#rowrepeatoptions). Configured row repeat options, if any. |
<!-- prettier-ignore-end -->
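To illustrate the relative positioning described above (title and sizes are assumed values), a row at `y: 10` whose first item is at `y: 0` renders that item at dashboard `y: 11`:

```json
{
  "kind": "GridLayoutRow",
  "spec": {
    "y": 10,
    "collapsed": false,
    "title": "Overview",
    "elements": [
      {
        "kind": "GridLayoutItem",
        "spec": {
          "element": {...},
          "x": 0,
          "y": 0, // rendered at dashboard y 11 (row y 10 + header height 1)
          "width": 12,
          "height": 6
        }
      }
    ]
  }
}
```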
#### `RowRepeatOptions`
| Name | Usage |
| ----- | ------------------------- |
| mode | `RepeatMode` - "variable" |
| value | string |
## `AutoGridLayoutKind`
With an auto grid, Grafana sizes and positions your panels for the best fit based on the column and row constraints that you set.
This layout corresponds to the **Auto grid** option in the UI.
Following is the JSON for a default auto grid layout and a grid layout item:
```json
"kind": "AutoGridLayout",
"spec": {
"columnWidthMode": "standard",
"fillScreen": false,
"items": [
{
"kind": "AutoGridLayoutItem",
"spec": {
"element": {...}
}
}
],
"maxColumnCount": 3,
"rowHeightMode": "standard"
}
```
`AutoGridLayoutKind` consists of:
- kind: "AutoGridLayout"
- spec: [AutoGridLayoutSpec](#autogridlayoutspec)
### `AutoGridLayoutSpec`
The following table explains the usage of the auto grid layout JSON fields:
<!-- prettier-ignore-start -->
| Name | Usage |
| ---- | ----- |
| maxColumnCount? | number. Default is `3`. |
| columnWidthMode | Options are: `narrow`, `standard`, `wide`, and `custom`. Default is `standard`. |
| columnWidth? | number |
| rowHeightMode | Options are: `short`, `standard`, `tall`, and `custom`. Default is `standard`. |
| rowHeight? | number |
| fillScreen? | bool. Default is `false`. |
| items | `AutoGridLayoutItemKind`. Consists of:<ul><li>kind: "AutoGridLayoutItem"</li><li>spec: [AutoGridLayoutItemSpec](#autogridlayoutitemspec)</li></ul> |
<!-- prettier-ignore-end -->
#### `AutoGridLayoutItemSpec`
The following table explains the usage of the auto grid layout item JSON fields:
<!-- prettier-ignore-start -->
| Name | Usage |
| ---- | ----- |
| element | `ElementReference`. Reference to a [`PanelKind`](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/schema-v2/panel-schema/) from `dashboard.spec.elements` expressed as JSON Schema reference. |
| repeat? | [AutoGridRepeatOptions](#autogridrepeatoptions). Configured repeat options, if any. |
| conditionalRendering? | `ConditionalRenderingGroupKind`. Rules for hiding or showing panels, if any. Consists of:<ul><li>kind: "ConditionalRenderingGroup"</li><li>spec: [ConditionalRenderingGroupSpec](#conditionalrenderinggroupspec)</li></ul> |
<!-- prettier-ignore-end -->
##### `AutoGridRepeatOptions`
The following table explains the usage of the auto grid repeat option JSON fields:
| Name | Usage |
| ----- | ------------------------- |
| mode | `RepeatMode` - "variable" |
| value | string |
##### `ConditionalRenderingGroupSpec`
<!-- prettier-ignore-start -->
| Name | Usage |
| ---- | ----- |
| visibility | Options are `show` and `hide` |
| condition | Options are `and` and `or` |
| items | Options are:<ul><li>ConditionalRenderingVariableKind<ul><li>kind: "ConditionalRenderingVariable"</li><li>spec: [ConditionalRenderingVariableSpec](#conditionalrenderingvariablespec)</li></ul></li><li>ConditionalRenderingDataKind<ul><li>kind: "ConditionalRenderingData"</li><li>spec: [ConditionalRenderingDataSpec](#conditionalrenderingdataspec)</li></ul></li><li>ConditionalRenderingTimeRangeSizeKind<ul><li>kind: "ConditionalRenderingTimeRangeSize"</li><li>spec: [ConditionalRenderingTimeRangeSizeSpec](#conditionalrenderingtimerangesizespec)</li></ul></li></ul> |
<!-- prettier-ignore-end -->
###### `ConditionalRenderingVariableSpec`
| Name | Usage |
| -------- | ------------------------------------ |
| variable | string |
| operator | Options are `equals` and `notEquals` |
| value | string |
###### `ConditionalRenderingDataSpec`
| Name | Type |
| ----- | ---- |
| value | bool |
###### `ConditionalRenderingTimeRangeSizeSpec`
| Name | Type |
| ----- | ------ |
| value | string |
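Putting these pieces together, a minimal sketch (the variable name and values are assumed) of a conditional rendering group that shows an item only when `env` equals `prod`:

```json
"conditionalRendering": {
  "kind": "ConditionalRenderingGroup",
  "spec": {
    "visibility": "show",
    "condition": "and",
    "items": [
      {
        "kind": "ConditionalRenderingVariable",
        "spec": {
          "variable": "env",
          "operator": "equals",
          "value": "prod"
        }
      }
    ]
  }
}
```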
## `RowsLayoutKind`
The `RowsLayoutKind` is one of two options that you can use to group panels.
You can nest any other kind of layout inside a layout row.
Rows can also be nested in auto grids or tabs.
Following is the JSON for a default rows layout row:
```json
"kind": "RowsLayout",
"spec": {
"rows": [
{
"kind": "RowsLayoutRow",
"spec": {
"layout": {
"kind": "GridLayout", // Can also be AutoGridLayout or TabsLayout
"spec": {...}
},
"title": ""
}
}
]
}
```
`RowsLayoutKind` consists of:
- kind: RowsLayout
- spec: RowsLayoutSpec
- rows: RowsLayoutRowKind
- kind: RowsLayoutRow
- spec: [RowsLayoutRowSpec](#rowslayoutrowspec)
### `RowsLayoutRowSpec`
The following table explains the usage of the rows layout row JSON fields:
<!-- prettier-ignore-start -->
| Name | Usage |
| ---- | ----- |
| title? | Title of the row. |
| collapse | bool. Whether or not the row is collapsed. |
| hideHeader? | bool. Whether the row header is hidden or shown. |
| fullScreen? | bool. Whether or not the row takes up the full screen. |
| conditionalRendering? | `ConditionalRenderingGroupKind`. Rules for hiding or showing rows, if any. Consists of:<ul><li>kind: "ConditionalRenderingGroup"</li><li>spec: [ConditionalRenderingGroupSpec](#conditionalrenderinggroupspec)</li></ul> |
| repeat? | [RowRepeatOptions](#rowrepeatoptions). Configured repeat options, if any. |
| layout | Supported layouts are:<ul><li>[GridLayoutKind](#gridlayoutkind)</li><li>[RowsLayoutKind](#rowslayoutkind)</li><li>[AutoGridLayoutKind](#autogridlayoutkind)</li><li>[TabsLayoutKind](#tabslayoutkind)</li></ul> |
<!-- prettier-ignore-end -->
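For example, a collapsed row with a title and a nested grid layout could be sketched as follows (the title and flag values are illustrative):

```json
{
  "kind": "RowsLayoutRow",
  "spec": {
    "title": "Backend services",
    "collapse": true,
    "hideHeader": false,
    "layout": {
      "kind": "GridLayout",
      "spec": {...}
    }
  }
}
```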
## `TabsLayoutKind`
The `TabsLayoutKind` is one of two options that you can use to group panels.
You can nest any other kind of layout inside a tab.
Tabs can also be nested in auto grids or rows.
Following is the JSON for a default tabs layout tab and a tab:
```json
"kind": "TabsLayout",
"spec": {
"tabs": [
{
"kind": "TabsLayoutTab",
"spec": {
"layout": {
"kind": "GridLayout", // Can also be AutoGridLayout or RowsLayout
"spec": {...}
},
"title": "New tab"
}
}
]
}
```
`TabsLayoutKind` consists of:
- kind: TabsLayout
- spec: TabsLayoutSpec
- tabs: TabsLayoutTabKind
- kind: TabsLayoutTab
- spec: [TabsLayoutTabSpec](#tabslayouttabspec)
### `TabsLayoutTabSpec`
The following table explains the usage of the tabs layout tab JSON fields:
<!-- prettier-ignore-start -->
| Name | Usage |
| ---- | ----- |
| title? | The title of the tab. |
| layout | Supported layouts are:<ul><li>[GridLayoutKind](#gridlayoutkind)</li><li>[RowsLayoutKind](#rowslayoutkind)</li><li>[AutoGridLayoutKind](#autogridlayoutkind)</li><li>[TabsLayoutKind](#tabslayoutkind)</li></ul> |
| conditionalRendering? | `ConditionalRenderingGroupKind`. Rules for hiding or showing panels, if any. Consists of:<ul><li>kind: "ConditionalRenderingGroup"</li><li>spec: [ConditionalRenderingGroupSpec](#conditionalrenderinggroupspec)</li></ul> |
<!-- prettier-ignore-end -->

---
description: A reference for the JSON library panel schema used with Observability as Code.
keywords:
- configuration
- as code
- as-code
- dashboards
- git integration
- git sync
- github
- library panel
labels:
products:
- cloud
- enterprise
- oss
menuTitle: LibraryPanelKind schema
title: LibraryPanelKind
weight: 300
canonical: https://grafana.com/docs/grafana/latest/as-code/observability-as-code/schema-v2/librarypanel-schema/
aliases:
- ../../../observability-as-code/schema-v2/librarypanel-schema/ # /docs/grafana/next/observability-as-code/schema-v2/librarypanel-schema/
---
# `LibraryPanelKind`
A library panel is a reusable panel that you can use in any dashboard.
When you make a change to a library panel, that change propagates to all instances where the panel is used.
Library panels streamline reuse of panels across multiple dashboards.
Following is the default library panel element JSON:
```json
"kind": "LibraryPanel",
"spec": {
  "id": 0,
  "libraryPanel": {
    "name": "",
    "uid": ""
  },
  "title": ""
}
```
The `LibraryPanelKind` consists of:
- kind: "LibraryPanel"
- spec: [LibraryPanelKindSpec](#librarypanelkindspec)
- libraryPanel: [LibraryPanelRef](#librarypanelref)
## `LibraryPanelKindSpec`
The following table explains the usage of the library panel element JSON fields:
| Name | Usage |
| ------------ | ------------------------------------------------ |
| id | Panel ID for the library panel in the dashboard. |
| libraryPanel | [`LibraryPanelRef`](#librarypanelref) |
| title | Title for the library panel in the dashboard. |
### `LibraryPanelRef`
The following table explains the usage of the library panel reference JSON fields:
| Name | Usage |
| ---- | ------------------ |
| name | Library panel name |
| uid | Library panel uid |
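For example, a dashboard that reuses an existing library panel might include an element like the following sketch (the `name` and `uid` values are placeholders):

```json
{
  "kind": "LibraryPanel",
  "spec": {
    "id": 2,
    "libraryPanel": {
      "name": "HTTP error rate",
      "uid": "example-uid"
    },
    "title": "HTTP error rate"
  }
}
```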

---
description: A reference for the JSON links schema used with Observability as Code.
keywords:
- configuration
- as code
- as-code
- dashboards
- git integration
- git sync
- github
- links
labels:
products:
- cloud
- enterprise
- oss
menuTitle: links schema
title: links
weight: 500
canonical: https://grafana.com/docs/grafana/latest/as-code/observability-as-code/schema-v2/links-schema/
aliases:
- ../../../observability-as-code/schema-v2/links-schema/ # /docs/grafana/next/observability-as-code/schema-v2/links-schema/
---
# `links`
The `links` schema is the configuration for links with references to other dashboards or external websites.
Following are the default JSON fields:
```json
"links": [
{
"asDropdown": false,
"icon": "",
"includeVars": false,
"keepTime": false,
"tags": [],
"targetBlank": false,
"title": "",
"tooltip": "",
"type": "link",
},
],
```
## `DashboardLink`
The following table explains the usage of the dashboard link JSON fields.
The table includes default and other fields:
<!-- prettier-ignore-start -->
| Name | Usage |
| ----------- | --------------------------------------- |
| title | string. Title to display with the link. |
| type | `DashboardLinkType`. Link type. Accepted values are:<ul><li>dashboards - To refer to another dashboard</li><li>link - To refer to an external resource</li></ul> |
| icon | string. Icon name to be displayed with the link. |
| tooltip | string. Tooltip to display when the user hovers their mouse over it. |
| url? | string. Link URL. Only required/valid if the type is link. |
| tags | string. List of tags to limit the linked dashboards. If empty, all dashboards will be displayed. Only valid if the type is dashboards. |
| asDropdown | bool. If true, all dashboards links will be displayed in a dropdown. If false, all dashboards links will be displayed side by side. Only valid if the type is dashboards. Default is `false`. |
| targetBlank | bool. If true, the link will be opened in a new tab. Default is `false`. |
| includeVars | bool. If true, includes current template variables values in the link as query params. Default is `false`. |
| keepTime | bool. If true, includes current time range in the link as query params. Default is `false`. |
| placement? | string. Use placement to display the link somewhere else on the dashboard other than above the visualizations. Use the `inControlsMenu` parameter to render the link in the dashboard controls dropdown menu. |
<!-- prettier-ignore-end -->
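As an illustration, an external link that opens in a new tab and carries the current time range and variables along might look like this sketch (the URL, title, and icon name are placeholders):

```json
{
  "title": "Runbook",
  "type": "link",
  "url": "https://example.com/runbook",
  "icon": "external link",
  "tooltip": "Open the runbook",
  "tags": [],
  "asDropdown": false,
  "targetBlank": true,
  "includeVars": true,
  "keepTime": true
}
```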

---
description: A reference for the JSON panel schema used with Observability as Code.
keywords:
- configuration
- as code
- as-code
- dashboards
- git integration
- git sync
- github
- panels
labels:
products:
- cloud
- enterprise
- oss
menuTitle: PanelKind schema
title: PanelKind
weight: 200
canonical: https://grafana.com/docs/grafana/latest/as-code/observability-as-code/schema-v2/panel-schema/
aliases:
- ../../../observability-as-code/schema-v2/panel-schema/ # /docs/grafana/next/observability-as-code/schema-v2/panel-schema/
---
# `PanelKind`
The panel element contains all the information about the panel including the visualization type, panel and visualization configuration, queries, and transformations.
There's a panel element for each panel contained in the dashboard.
Following is the default panel element JSON:
```json
"kind": "Panel",
"spec": {
  "data": {
    "kind": "QueryGroup",
    "spec": {...}
  },
  "description": "",
  "id": 0,
  "links": [],
  "title": "",
  "vizConfig": {
    "kind": "",
    "spec": {...}
  }
}
```
The `PanelKind` consists of:
- kind: "Panel"
- spec: [PanelSpec](#panelspec)
## `PanelSpec`
The following table explains the usage of the panel element JSON fields:
<!-- prettier-ignore-start -->
| Name | Usage |
| ------------ | --------------------------------------------------------------------- |
| data | `QueryGroupKind`, which includes queries and transformations. Consists of:<ul><li>kind: "QueryGroup"</li><li>spec: [QueryGroupSpec](#querygroupspec)</li></ul> |
| description | The panel description. |
| id | The panel ID. |
| links | Links with references to other dashboards or external websites. |
| title | The panel title. |
| vizConfig | `VizConfigKind`. Includes visualization type, field configuration options, and all other visualization options. Consists of:<ul><li>kind: string. Plugin ID.</li><li>spec: [VizConfigSpec](#vizconfigspec)</li></ul> |
| transparent? | bool. Controls whether or not the panel background is transparent. |
<!-- prettier-ignore-end -->
### `QueryGroupSpec`
<!-- prettier-ignore-start -->
| Name | Usage |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| queries | `PanelQueryKind`. Consists of:<ul><li>kind: PanelQuery</li><li>spec: [PanelQuerySpec](#panelqueryspec)</li></ul> |
| transformations | `TransformationKind`. Consists of:<ul><li>kind: string. The transformation ID.</li><li>spec: [DataTransformerConfig](#datatransformerconfig)</li></ul> |
| queryOptions | [`QueryOptionsSpec`](#queryoptionsspec) |
<!-- prettier-ignore-end -->
#### `PanelQuerySpec`
| Name | Usage |
| ----------- | --------------------------------- |
| query | [`DataQueryKind`](#dataquerykind) |
| datasource? | [`DataSourceRef`](#datasourceref) |
##### `DataQueryKind`
| Name | Type |
| ---- | ------ |
| kind | string |
| spec | string |
##### `DataSourceRef`
| Name | Usage |
| ----- | ---------------------------------- |
| type? | string. The plugin type-id. |
| uid? | The specific data source instance. |
#### `DataTransformerConfig`
Transformations allow you to manipulate data returned by a query before the system applies a visualization.
Using transformations you can: rename fields, join time series data, perform mathematical operations across queries, or use the output of one transformation as the input to another transformation.
<!-- prettier-ignore-start -->
| Name | Usage |
| --------- | ------------------------------------------- |
| id | string. Unique identifier of transformer. |
| disabled? | bool. Disabled transformations are skipped. |
| filter? | [`MatcherConfig`](#matcherconfig). Optional frame matcher. When missing it will be applied to all results. |
| topic? | `DataTopic`. Where to pull `DataFrames` from as input to transformation. Options are: `series`, `annotations`, and `alertStates`. |
| options | Options to be passed to the transformer. Valid options depend on the transformer id. |
<!-- prettier-ignore-end -->
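As an illustration, a transformation entry that renames a field using Grafana's organize-fields transformer could be sketched as follows (the options shown are illustrative, not a complete reference for that transformer):

```json
{
  "kind": "organize",
  "spec": {
    "id": "organize",
    "disabled": false,
    "options": {
      "renameByName": { "Value": "Requests" }
    }
  }
}
```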
##### `MatcherConfig`
Matcher is a predicate configuration.
Based on the configuration, a set of fields or values is filtered so that an override or transformation can be applied.
It comes with an id (used to resolve the implementation from the registry) and a configuration that's specific to a particular matcher type.
| Name | Usage |
| -------- | -------------------------------------------------------------------------------------- |
| id | string. The matcher id. This is used to find the matcher implementation from registry. |
| options? | The matcher options. This is specific to the matcher implementation. |
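For instance, a filter that applies a transformation only to frames returned by a specific query could use the `byRefId` matcher (shown as a hedged sketch; the matcher id and options value are illustrative):

```json
{
  "id": "byRefId",
  "options": "A"
}
```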
#### `QueryOptionsSpec`
| Name | Type |
| ----------------- | ------- |
| timeFrom? | string |
| maxDataPoints? | integer |
| timeShift? | string |
| queryCachingTTL? | integer |
| interval? | string |
| cacheTimeout? | string |
| hideTimeOverride? | bool |
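These fields correspond to the query options section of the panel editor. For example, a panel that overrides the relative time and caps the number of data points might use a sketch like this (all values illustrative):

```json
{
  "timeFrom": "1h",
  "timeShift": "30m",
  "maxDataPoints": 500,
  "interval": "1m",
  "hideTimeOverride": false
}
```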
### `VizConfigSpec`
| Name | Type/Definition |
| ------------- | --------------------------------------- |
| pluginVersion | string |
| options | string |
| fieldConfig | [FieldConfigSource](#fieldconfigsource) |
#### `FieldConfigSource`
The data model used in Grafana, namely the _data frame_, is a columnar-oriented table structure that unifies both time series and table query results.
Each column within this structure is called a field.
A field can represent a single time series or table column.
Field options allow you to change how the data is displayed in your visualizations.
<!-- prettier-ignore-start -->
| Name | Type/Definition |
| ---------- | ------------------------------------- |
| defaults | [`FieldConfig`](#fieldconfig). Defaults are the options applied to all fields. |
| overrides | The options applied to specific fields overriding the defaults. |
| matcher | [`MatcherConfig`](#matcherconfig). Optional frame matcher. When missing it will be applied to all results. |
| properties | `DynamicConfigValue`. Consists of:<ul><li>`id` - string</li><li>value?</li></ul> |
<!-- prettier-ignore-end -->
##### `FieldConfig`
<!-- prettier-ignore-start -->
| Name | Type/Definition |
| ------------------ | --------------------------------------- |
| displayName? | string. The display value for this field. This supports template variables where empty is auto. |
| displayNameFromDS? | string. This can be used by data sources that return an explicit naming structure for values and labels. When this property is configured, this value is used rather than the default naming strategy. |
| description? | string. Human readable field metadata. |
| path? | string. An explicit path to the field in the data source. When the frame meta includes a path, this will default to `${frame.meta.path}/${field.name}`. When defined, this value can be used as an identifier within the data source scope, and may be used to update the results. |
| writeable? | bool. True if the data source can write a value to the path. Auth/authz are supported separately. |
| filterable? | bool. True if the data source field supports ad-hoc filters. |
| unit? | string. Unit a field should use. The unit you select is applied to all fields except time. You can use the unit's ID available in Grafana or a custom unit. [Available units in Grafana](https://github.com/grafana/grafana/blob/main/packages/grafana-data/src/valueFormats/categories.ts). As custom units, you can use the following formats:<ul><li>`suffix:<suffix>` for a custom unit that should go after the value.</li><li>`prefix:<prefix>` for a custom unit that should go before the value.</li><li>`time:<format>` for custom date time formats, for example `time:YYYY-MM-DD`.</li><li>`si:<base scale><unit characters>` for custom SI units. For example: `si: mF`. You can specify both a unit and the source data scale, so if your source data is represented as milli (thousands of) something, prefix the unit with that SI scale character.</li><li>`count:<unit>` for a custom count unit.</li><li>`currency:<unit>` for a custom currency unit.</li></ul> |
| decimals? | number. Specify the number of decimals Grafana includes in the rendered value. If you leave this field blank, Grafana automatically truncates the number of decimals based on the value. For example 1.1234 will display as 1.12 and 100.456 will display as 100. To display all decimals, set the unit to `string`. |
| min? | number. The minimum value used in percentage threshold calculations. Leave empty for auto calculation based on all series and fields. |
| max? | number. The maximum value used in percentage threshold calculations. Leave empty for auto calculation based on all series and fields. |
| mappings? | `[...ValueMapping]`. Convert input values into a display string. Options are: [`ValueMap`](#valuemap), [`RangeMap`](#rangemap), [`RegexMap`](#regexmap), [`SpecialValueMap`](#specialvaluemap). |
| thresholds? | `ThresholdsConfig`. Map numeric values to states. Consists of:<ul><li>`mode` - `ThresholdsMode`. Options are: `absolute` and `percentage`.</li><li>`steps` - `[...Threshold]`</li></ul> |
| color? | [`FieldColor`](#fieldcolor). Panel color configuration. |
| links? | `[...]`. The behavior when clicking a result. |
| noValue? | string. Alternative to an empty string. |
| custom? | `{...}`. Specified by the `FieldConfig` field in panel plugin schemas. |
<!-- prettier-ignore-end -->
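As an illustration of the `thresholds` field, a configuration that colors values red once they pass 80 could be sketched as follows (the step values and colors are illustrative):

```json
"thresholds": {
  "mode": "absolute",
  "steps": [
    { "color": "green", "value": 0 },
    { "color": "red", "value": 80 }
  ]
}
```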
###### `ValueMap`
Maps text values to a color or different display text and color.
For example, you can configure a value mapping so that all instances of the value 10 appear as Perfection! rather than the number.
<!-- prettier-ignore-start -->
| Name | Usage |
| ------- | -------- |
| type | `MappingType` & "value". `MappingType` options are: `value`, `range`, `regex`, and `special`. |
| options | string. [`ValueMappingResult`](#valuemappingresult). Map with `<value_to_match>`: `ValueMappingResult`. For example: `{ "10": { text: "Perfection!", color: "green" } }`. |
<!-- prettier-ignore-end -->
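The Perfection! example from the table, expressed as a complete mapping entry:

```json
{
  "type": "value",
  "options": {
    "10": { "text": "Perfection!", "color": "green" }
  }
}
```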
###### `RangeMap`
Maps numerical ranges to a display text and color.
For example, if a value is within a certain range, you can configure a range value mapping to display Low or High rather than the number.
<!-- prettier-ignore-start -->
| Name | Usage |
| ------- | ---------------------------------------------------------------------------------------------------- |
| type | `MappingType` & "range". `MappingType` options are: `value`, `range`, `regex`, and `special`. |
| options | Range to match against and the result to apply when the value is within the range. Spec:<ul><li>`from` - `float64` or `null`. Min value of the range. It can be null, which means `-Infinity`.</li><li>`to` - `float64` or `null`. Max value of the range. It can be null, which means `+Infinity`.</li><li>`result` - [`ValueMappingResult`](#valuemappingresult)</li></ul> |
<!-- prettier-ignore-end -->
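Following the Low/High example above, a range mapping entry could be sketched as (bounds and result values are illustrative):

```json
{
  "type": "range",
  "options": {
    "from": 0,
    "to": 50,
    "result": { "text": "Low", "color": "blue" }
  }
}
```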
###### `RegexMap`
Maps regular expressions to replacement text and a color.
For example, if a value is `www.example.com`, you can configure a regex value mapping so that Grafana displays www and truncates the domain.
<!-- prettier-ignore-start -->
| Name | Usage |
| ------- | --------------------------------------------------------------------------------------------- |
| type | `MappingType` & "regex". `MappingType` options are: `value`, `range`, `regex`, and `special`. |
| options | Regular expression to match against and the result to apply when the value matches the regex. Spec:<ul><li>`pattern` - string. Regular expression to match against.</li><li>`result` - [`ValueMappingResult`](#valuemappingresult)</li></ul> |
<!-- prettier-ignore-end -->
###### `SpecialValueMap`
Maps special values like Null, NaN (not a number), and boolean values like true and false to a display text and color.
See `SpecialValueMatch` in the following table to see the list of special values.
For example, you can configure a special value mapping so that null values appear as N/A.
<!-- prettier-ignore-start -->
| Name | Usage |
| ------- | ----------------------------------------------------------------------------------------------- |
| type | `MappingType` & "special". `MappingType` options are: `value`, `range`, `regex`, and `special`. |
| options | Spec:<ul><li>`match` - `SpecialValueMatch`. Special value to match against. Types are:<ul><li>true</li><li>false</li><li>null</li><li>nan</li><li>empty</li></ul></li><li>`result` - [`ValueMappingResult`](#valuemappingresult)</li></ul> |
<!-- prettier-ignore-end -->
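Following the N/A example above, a special value mapping entry could be sketched as (the result text and color are illustrative):

```json
{
  "type": "special",
  "options": {
    "match": "null",
    "result": { "text": "N/A", "color": "grey" }
  }
}
```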
###### `ValueMappingResult`
Result used as replacement with text and color when the value matches.
| Name | Usage |
| ----- | ----------------------------------------------------------------------------- |
| text | string. Text to display when the value matches. |
| color | string. Color to use when the value matches. |
| icon | string. Icon to display when the value matches. Only specific visualizations. |
| index | int32. Position in the mapping array. Only used internally. |
###### `FieldColor`
Map a field to a color.
<!-- prettier-ignore-start -->
| Name | Usage |
| ----------- | -------------------------------------------------------------------- |
| mode | [`FieldColorModeId`](#fieldcolormodeid). The main color scheme mode. |
| fixedColor? | string. The fixed color value for fixed or shades color modes. |
| seriesBy? | `FieldColorSeriesByMode`. Defines how to assign a series color from "by value" color schemes. For example, for aggregated data points like a time series, the color can be assigned by the min, max, or last value. Options are: `min`, `max`, and `last`. |
<!-- prettier-ignore-end -->
###### `FieldColorModeId`
Color mode for a field.
You can specify a single color, or select a continuous (gradient) color scheme based on a value.
Continuous color interpolates a color using the percentage of a value relative to min and max.
Accepted values are:
<!-- prettier-ignore-start -->
| Name | Description |
| --- | ---- |
| thresholds | From thresholds. Informs Grafana to take the color from the matching threshold. |
| palette-classic | Classic palette. Grafana assigns a color by looking up a color in a palette by series index. Useful for graphs, pie charts, and other categorical data visualizations. |
| palette-classic-by-name | Classic palette (by name). Grafana assigns a color by looking up a color in a palette by series name. Useful for graphs, pie charts, and other categorical data visualizations. |
| continuous-GrYlRd | Continuous Green-Yellow-Red palette mode |
| continuous-RdYlGr | Continuous Red-Yellow-Green palette mode |
| continuous-BlYlRd | Continuous Blue-Yellow-Red palette mode |
| continuous-YlRd | Continuous Yellow-Red palette mode |
| continuous-BlPu | Continuous Blue-Purple palette mode |
| continuous-YlBl | Continuous Yellow-Blue palette mode |
| continuous-blues | Continuous Blue palette mode |
| continuous-reds | Continuous Red palette mode |
| continuous-greens | Continuous Green palette mode |
| continuous-purples | Continuous Purple palette mode |
| shades | Shades of a single color. Specify a single color, useful in an override rule. |
| fixed | Fixed color mode. Specify a single color, useful in an override rule. |
<!-- prettier-ignore-end -->

---
description: A reference for the JSON timesettings schema used with Observability as Code.
keywords:
- configuration
- as code
- as-code
- dashboards
- git integration
- git sync
- github
- time settings
labels:
products:
- cloud
- enterprise
- oss
menuTitle: timesettings schema
title: timesettings
weight: 600
canonical: https://grafana.com/docs/grafana/latest/as-code/observability-as-code/schema-v2/timesettings-schema/
aliases:
- ../../../observability-as-code/schema-v2/timesettings-schema/ # /docs/grafana/next/observability-as-code/schema-v2/timesettings-schema/
---
# `timeSettings`
The `TimeSettingsSpec` defines the default time configuration for the time picker and the refresh picker for the specific dashboard.
Following is the JSON for default time settings:
```json
"timeSettings": {
"autoRefresh": "",
"autoRefreshIntervals": [
"5s",
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"fiscalYearStartMonth": 0,
"from": "now-6h",
"hideTimepicker": false,
"timezone": "browser",
"to": "now"
},
```
`timeSettings` consists of:
- [TimeSettingsSpec](#timesettingsspec)
## `TimeSettingsSpec`
The following table explains the usage of the time settings JSON fields:
<!-- prettier-ignore-start -->
| Name | Usage |
| ---- | ----- |
| timezone? | string. Timezone of dashboard. Accepted values are IANA TZDB zone ID, `browser`, or `utc`. Default is `browser`. |
| from | string. Start time range for dashboard. Accepted values are relative time strings like `now-6h` or absolute time strings like `2020-07-10T08:00:00.000Z`. Default is `now-6h`. |
| to | string. End time range for dashboard. Accepted values are relative time strings like `now-6h` or absolute time strings like `2020-07-10T08:00:00.000Z`. Default is `now`. |
| autoRefresh | string. Refresh rate of dashboard. Represented by interval string. For example: `5s`, `1m`, `1h`, `1d`. No default. In schema v1: `refresh`. |
| autoRefreshIntervals | string. Interval options available in the refresh picker drop-down menu. The default array is `["5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h", "2h", "1d"]`. |
| quickRanges? | Selectable options available in the time picker drop-down menu. Has no effect on provisioned dashboards. Defined in the [`TimeRangeOption`](#timerangeoption) spec. In schema v1: `timepicker.quick_ranges`, not exposed in the UI. |
| hideTimepicker | bool. Whether or not the time picker is visible. Default is `false`. In schema v1: `timepicker.hidden`. |
| weekStart? | Day when the week starts. Expressed by the name of the day in lowercase. For example: `monday`. Options are `saturday`, `monday`, and `sunday`. |
| fiscalYearStartMonth | The month that the fiscal year starts on. `0` = January, `11` = December |
| nowDelay? | string. Override the "now" time by entering a time delay. Use this option to accommodate known delays in data aggregation to avoid null values. In schema v1: `timepicker.nowDelay`. |
<!-- prettier-ignore-end -->
### `TimeRangeOption`
The following table explains the usage of the time range option JSON fields:
| Name | Usage |
| ------- | ---------------------------------- |
| display | string. Default is `Last 6 hours`. |
| from | string. Default is `now-6h`. |
| to | string. Default is `now`. |
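Combined with the `quickRanges` field above, a custom entry in the time picker drop-down could be sketched as follows (using the documented default values):

```json
"quickRanges": [
  {
    "display": "Last 6 hours",
    "from": "now-6h",
    "to": "now"
  }
]
```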

---
description: A reference for the JSON variables schema used with Observability as Code.
keywords:
- configuration
- as code
- as-code
- dashboards
- git integration
- git sync
- github
- variables
labels:
products:
- cloud
- enterprise
- oss
menuTitle: variables schema
title: variables
weight: 700
canonical: https://grafana.com/docs/grafana/latest/as-code/observability-as-code/schema-v2/variables-schema/
aliases:
- ../../../observability-as-code/schema-v2/variables-schema/ # /docs/grafana/next/observability-as-code/schema-v2/variables-schema/
---
# `variables`
The available variable types are described in the following sections:
- [QueryVariableKind](#queryvariablekind)
- [TextVariableKind](#textvariablekind)
- [ConstantVariableKind](#constantvariablekind)
- [DatasourceVariableKind](#datasourcevariablekind)
- [IntervalVariableKind](#intervalvariablekind)
- [CustomVariableKind](#customvariablekind)
- [SwitchVariableKind](#switchvariablekind)
- [GroupByVariableKind](#groupbyvariablekind)
- [AdhocVariableKind](#adhocvariablekind)
## `QueryVariableKind`
Following is the JSON for a default query variable:
```json
"variables": [
{
"kind": "QueryVariable",
"spec": {
"current": {
"text": "",
"value": ""
},
"hide": "dontHide",
"includeAll": false,
"multi": false,
"name": "",
"options": [],
"query": defaultDataQueryKind(),
"refresh": "never",
"regex": "",
"skipUrlSync": false,
"sort": "disabled"
}
}
]
```
`QueryVariableKind` consists of:
- kind: "QueryVariable"
- spec: [QueryVariableSpec](#queryvariablespec)
### `QueryVariableSpec`
The following table explains the usage of the query variable JSON fields:
<!-- prettier-ignore-start -->
| Name | Usage |
| ------------ | ---------------------------------------------- |
| name | string. Name of the variable. |
| current | "Text" and a "value" or [`VariableOption`](#variableoption) |
| label? | string |
| hide | `VariableHide`. Options are: `dontHide`, `hideLabel`, and `hideVariable`. |
| refresh | `VariableRefresh`. Options are `never`, `onDashboardLoad`, and `onTimeChanged`. |
| skipUrlSync | bool. Default is `false`. |
| description? | string |
| datasource? | [`DataSourceRef`](#datasourceref) |
| query | `DataQueryKind`. Consists of:<ul><li>kind: string</li><li>spec: string</li></ul> |
| regex | string |
| sort | `VariableSort`. Options are:<ul><li>disabled</li><li>alphabeticalAsc</li><li>alphabeticalDesc</li><li>numericalAsc</li><li>numericalDesc</li><li>alphabeticalCaseInsensitiveAsc</li><li>alphabeticalCaseInsensitiveDesc</li><li>naturalAsc</li><li>naturalDesc</li></ul> |
| definition? | string |
| options | [`VariableOption`](#variableoption) |
| multi | bool. Default is `false`. |
| includeAll | bool. Default is `false`. |
| allValue? | string |
| placeholder? | string |
<!-- prettier-ignore-end -->
#### `VariableOption`
| Name | Usage |
| -------- | -------------------------------------------- |
| selected | bool. Whether or not the option is selected. |
| text | string. Text to be displayed for the option. |
| value | string. Value of the option. |
#### `DataSourceRef`
| Name | Usage |
| ----- | ---------------------------------- |
| type? | string. The plugin type-id. |
| uid? | The specific data source instance. |
## `TextVariableKind`
Following is the JSON for a default text variable:
```json
"variables": [
{
"kind": "TextVariable",
"spec": {
"current": {
"text": "",
"value": ""
},
"hide": "dontHide",
"name": "",
"query": "",
"skipUrlSync": false
}
}
]
```
`TextVariableKind` consists of:
- kind: "TextVariable"
- spec: [TextVariableSpec](#textvariablespec)
### `TextVariableSpec`
The following table explains the usage of the text variable JSON fields:
| Name | Usage |
| ------------ | -------------------------------------------------------------------------------------------------------------------------------- |
| name | string. Name of the variable. |
| current | "Text" and a "value" or `VariableOption`. Refer to the [`VariableOption` definition](#variableoption) under `QueryVariableKind`. |
| query | string |
| label? | string |
| hide | `VariableHide`. Options are: `dontHide`, `hideLabel`, and `hideVariable`. |
| skipUrlSync | bool. Default is `false`. |
| description? | string |
## `ConstantVariableKind`
Following is the JSON for a default constant variable:
```json
"variables": [
{
"kind": "ConstantVariable",
"spec": {
"current": {
"text": "",
"value": ""
},
"hide": "hideVariable",
"name": "",
"query": "",
"skipUrlSync": true
}
}
]
```
`ConstantVariableKind` consists of:
- kind: "ConstantVariable"
- spec: [ConstantVariableSpec](#constantvariablespec)
### `ConstantVariableSpec`
The following table explains the usage of the constant variable JSON fields:
| Name | Usage |
| ------------ | -------------------------------------------------------------------------------------------------------------------------------- |
| name | string. Name of the variable. |
| query | string |
| current | "Text" and a "value" or `VariableOption`. Refer to the [`VariableOption` definition](#variableoption) under `QueryVariableKind`. |
| label? | string |
| hide | `VariableHide`. Options are: `dontHide`, `hideLabel`, and `hideVariable`. |
| skipUrlSync | bool. Default is `false`. |
| description? | string |
## `DatasourceVariableKind`
Following is the JSON for a default data source variable:
```json
"variables": [
{
"kind": "DatasourceVariable",
"spec": {
"current": {
"text": "",
"value": ""
},
"hide": "dontHide",
"includeAll": false,
"multi": false,
"name": "",
"options": [],
"pluginId": "",
"refresh": "never",
"regex": "",
"skipUrlSync": false
}
}
]
```
`DatasourceVariableKind` consists of:
- kind: "DatasourceVariable"
- spec: [DatasourceVariableSpec](#datasourcevariablespec)
### `DatasourceVariableSpec`
The following table explains the usage of the data source variable JSON fields:
| Name | Usage |
| ------------ | -------------------------------------------------------------------------------------------------------------------------------- |
| name | string. Name of the variable. |
| pluginId | string |
| refresh | `VariableRefresh`. Options are `never`, `onDashboardLoad`, and `onTimeChanged`. |
| regex | string |
| current | `Text` and a `value` or `VariableOption`. Refer to the [`VariableOption` definition](#variableoption) under `QueryVariableKind`. |
| options | `VariableOption`. Refer to the [`VariableOption` definition](#variableoption) under `QueryVariableKind`. |
| multi | bool. Default is `false`. |
| includeAll | bool. Default is `false`. |
| allValue? | string |
| label? | string |
| hide | `VariableHide`. Options are: `dontHide`, `hideLabel`, and `hideVariable`. |
| skipUrlSync | bool. Default is `false`. |
| description? | string |
## `IntervalVariableKind`
Following is the JSON for a default interval variable:
```json
"variables": [
{
"kind": "IntervalVariable",
"spec": {
"auto": false,
"auto_count": 0,
"auto_min": "",
"current": {
"text": "",
"value": ""
},
"hide": "dontHide",
"name": "",
"options": [],
"query": "",
"refresh": "never",
"skipUrlSync": false
}
}
]
```
`IntervalVariableKind` consists of:
- kind: "IntervalVariable"
- spec: [IntervalVariableSpec](#intervalvariablespec)
### `IntervalVariableSpec`
The following table explains the usage of the interval variable JSON fields:
| Name | Usage |
| ------------ | -------------------------------------------------------------------------------------------------------------------------------- |
| name | string. Name of the variable. |
| query | string |
| current | `Text` and a `value` or `VariableOption`. Refer to the [`VariableOption` definition](#variableoption) under `QueryVariableKind`. |
| options | `VariableOption`. Refer to the [`VariableOption` definition](#variableoption) under `QueryVariableKind`. |
| auto | bool. Default is `false`. |
| auto_count | integer. Default is `0`. |
| refresh | `VariableRefresh`. Options are `never`, `onDashboardLoad`, and `onTimeChanged`. |
| label? | string |
| hide | `VariableHide`. Options are: `dontHide`, `hideLabel`, and `hideVariable`. |
| skipUrlSync | bool. Default is `false`. |
| description? | string |
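The following sketch shows an interval variable with illustrative values, using the auto option to let Grafana pick a step based on the time range:

```json
"variables": [
  {
    "kind": "IntervalVariable",
    "spec": {
      "name": "interval",
      "query": "1m,10m,30m,1h",
      "auto": true,
      "auto_count": 30,
      "auto_min": "10s",
      "current": {
        "text": "10m",
        "value": "10m"
      },
      "options": [],
      "refresh": "onTimeChanged",
      "hide": "dontHide",
      "skipUrlSync": false
    }
  }
]
```

Here `query` lists the selectable intervals, while `auto_count` and `auto_min` constrain the automatically calculated interval.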
## `CustomVariableKind`
Following is the JSON for a default custom variable:
```json
"variables": [
{
"kind": "CustomVariable",
"spec": {
        "current": {
          "text": "",
          "value": ""
        },
"hide": "dontHide",
"includeAll": false,
"multi": false,
"name": "",
"options": [],
"query": "",
"skipUrlSync": false
}
}
]
```
`CustomVariableKind` consists of:
- kind: "CustomVariable"
- spec: [CustomVariableSpec](#customvariablespec)
### `CustomVariableSpec`
The following table explains the usage of the custom variable JSON fields:
| Name | Usage |
| ------------ | -------------------------------------------------------------------------------------------------------------------------------- |
| name | string. Name of the variable. |
| query | string |
| current | `Text` and a `value` or `VariableOption`. Refer to the [`VariableOption` definition](#variableoption) under `QueryVariableKind`. |
| options | `VariableOption`. Refer to the [`VariableOption` definition](#variableoption) under `QueryVariableKind`. |
| multi | bool. Default is `false`. |
| includeAll | bool. Default is `false`. |
| allValue? | string |
| label? | string |
| hide | `VariableHide`. Options are: `dontHide`, `hideLabel`, and `hideVariable`. |
| skipUrlSync | bool. Default is `false`. |
| description? | string |
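The following sketch shows a custom variable with illustrative values. Note that some data sources interpret the interpolated value as a regular expression, so a pattern such as `.*` is typically used for `allValue` rather than a bare `*`:

```json
"variables": [
  {
    "kind": "CustomVariable",
    "spec": {
      "name": "environment",
      "query": "dev,staging,prod",
      "current": {
        "text": "prod",
        "value": "prod"
      },
      "options": [],
      "multi": true,
      "includeAll": true,
      "allValue": ".*",
      "hide": "dontHide",
      "skipUrlSync": false
    }
  }
]
```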
## `SwitchVariableKind`
Following is the JSON for a default switch variable:
```json
"variables": [
{
"kind": "SwitchVariable",
"spec": {
"current": "false",
"enabledValue": "true",
"disabledValue": "false",
"hide": "dontHide",
"name": "",
"skipUrlSync": false
}
}
]
```
`SwitchVariableKind` consists of:
- kind: "SwitchVariable"
- spec: [SwitchVariableSpec](#switchvariablespec)
### `SwitchVariableSpec`
The following table explains the usage of the switch variable JSON fields:
<!-- prettier-ignore-start -->
| Name | Usage |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| name | string. Name of the variable. |
| current | string. Current value of the switch variable (either `enabledValue` or `disabledValue`). |
| enabledValue | string. Value when the switch is in the enabled state. |
| disabledValue | string. Value when the switch is in the disabled state. |
| label? | string |
| hide | `VariableHide`. Options are: `dontHide`, `hideLabel`, and `hideVariable`. |
| skipUrlSync | bool. Default is `false`. |
| description? | string |
<!-- prettier-ignore-end -->
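The following sketch shows a switch variable with illustrative values, toggled to its enabled state:

```json
"variables": [
  {
    "kind": "SwitchVariable",
    "spec": {
      "name": "debugMode",
      "current": "true",
      "enabledValue": "true",
      "disabledValue": "false",
      "hide": "dontHide",
      "skipUrlSync": false
    }
  }
]
```

Because `current` must equal either `enabledValue` or `disabledValue`, setting it to `"true"` here renders the switch as enabled.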
## `GroupByVariableKind`
Following is the JSON for a default group by variable:
```json
"variables": [
{
"kind": "GroupByVariable",
"spec": {
"current": {
"text": [
""
],
"value": [
""
]
},
"datasource": {},
"hide": "dontHide",
"multi": false,
"name": "",
"options": [],
"skipUrlSync": false
}
}
]
```
`GroupByVariableKind` consists of:
- kind: "GroupByVariable"
- spec: [GroupByVariableSpec](#groupbyvariablespec)
### `GroupByVariableSpec`
The following table explains the usage of the group by variable JSON fields:
| Name | Usage |
| ------------ | -------------------------------------------------------------------------------------------------------------------------------- |
| name | string. Name of the variable. |
| datasource? | `DataSourceRef`. Refer to the [`DataSourceRef` definition](#datasourceref) under `QueryVariableKind`. |
| current | `Text` and a `value` or `VariableOption`. Refer to the [`VariableOption` definition](#variableoption) under `QueryVariableKind`. |
| options | `VariableOption`. Refer to the [`VariableOption` definition](#variableoption) under `QueryVariableKind`. |
| multi | bool. Default is `false`. |
| label? | string |
| hide | `VariableHide`. Options are: `dontHide`, `hideLabel`, and `hideVariable`. |
| skipUrlSync | bool. Default is `false`. |
| description? | string |
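The following sketch shows a group by variable with illustrative values (the data source `type` and `uid` are examples only):

```json
"variables": [
  {
    "kind": "GroupByVariable",
    "spec": {
      "name": "groupBy",
      "datasource": {
        "type": "prometheus",
        "uid": "abc123"
      },
      "current": {
        "text": ["cluster"],
        "value": ["cluster"]
      },
      "options": [],
      "multi": true,
      "hide": "dontHide",
      "skipUrlSync": false
    }
  }
]
```

Unlike most variable kinds, `current.text` and `current.value` are arrays here, since a group by variable can hold multiple dimensions at once.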
## `AdhocVariableKind`
Following is the JSON for a default ad hoc variable:
```json
"variables": [
{
"kind": "AdhocVariable",
"spec": {
"baseFilters": [],
"defaultKeys": [],
"filters": [],
"hide": "dontHide",
"name": "",
"skipUrlSync": false
}
}
]
```
`AdhocVariableKind` consists of:
- kind: "AdhocVariable"
- spec: [AdhocVariableSpec](#adhocvariablespec)
### `AdhocVariableSpec`
The following table explains the usage of the ad hoc variable JSON fields:
| Name | Usage |
| ------------ | -------------------------------------------------------------------------------------------------------------------------------------------- |
| name | string. Name of the variable. |
| datasource? | `DataSourceRef`. Consists of:<ul><li>type? - string. The plugin type-id.</li><li>uid? - string. The specific data source instance.</li></ul> |
| baseFilters | [AdHocFiltersWithLabels](#adhocfilterswithlabels) |
| filters | [AdHocFiltersWithLabels](#adhocfilterswithlabels) |
| defaultKeys | [MetricFindValue](#metricfindvalue) |
| label? | string |
| hide | `VariableHide`. Options are: `dontHide`, `hideLabel`, and `hideVariable`. |
| skipUrlSync | bool. Default is `false`. |
| description? | string |
#### `AdHocFiltersWithLabels`
The following table explains the usage of the ad hoc filter with labels JSON fields:
| Name | Type |
| ------------ | ------------- |
| key | string |
| operator | string |
| value | string |
| values? | `[...string]` |
| keyLabel | string |
| valueLabels? | `[...string]` |
| forceEdit? | bool |
#### `MetricFindValue`
The following table explains the usage of the metric find value JSON fields:
| Name | Type |
| ----------- | ---------------- |
| text | string |
| value? | string or number |
| group? | string |
| expandable? | bool |
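Putting the two structures above together, the following sketch shows an ad hoc variable with one active filter and two default keys. All values are illustrative, including the data source `uid`:

```json
"variables": [
  {
    "kind": "AdhocVariable",
    "spec": {
      "name": "filters",
      "datasource": {
        "type": "prometheus",
        "uid": "abc123"
      },
      "baseFilters": [],
      "filters": [
        {
          "key": "job",
          "operator": "=",
          "value": "api-server",
          "keyLabel": "job",
          "valueLabels": ["api-server"]
        }
      ],
      "defaultKeys": [
        { "text": "job", "value": "job" },
        { "text": "instance", "value": "instance" }
      ],
      "hide": "dontHide",
      "skipUrlSync": false
    }
  }
]
```

Each entry in `filters` follows the `AdHocFiltersWithLabels` shape, while each entry in `defaultKeys` follows the `MetricFindValue` shape.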

View File

@@ -171,146 +171,3 @@ Status Codes:
- **200** - Ok
- **401** - Unauthorized
- **404** - Dashboard version not found
```json
{
"id": 70,
"slug": "my-dashboard",
"status": "success",
"uid": "QA7wKklGz",
"url": "/d/QA7wKklGz/my-dashboard",
"version": 3
}
```
JSON response body schema:
- **slug** - the URL friendly slug of the dashboard's title
- **status** - whether the restoration was successful or not
- **version** - the new dashboard version, following the restoration
Status codes:
- **200** - OK
- **400** - Bad request (specified version has the same content as the current dashboard)
- **401** - Unauthorized
- **404** - Not found (dashboard not found or dashboard version not found)
- **500** - Internal server error (indicates issue retrieving dashboard tags from database)
**Example error response**
```http
HTTP/1.1 404 Not Found
Content-Type: application/json; charset=UTF-8
Content-Length: 46
{
"message": "Dashboard version not found"
}
```
JSON response body schema:
- **message** - Message explaining the reason for the request failure.
## Compare dashboard versions
`POST /api/dashboards/calculate-diff`
Compares two dashboard versions by calculating the JSON diff between them.
**Example request**:
```http
POST /api/dashboards/calculate-diff HTTP/1.1
Accept: text/html
Content-Type: application/json
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk
{
"base": {
"dashboardId": 1,
"version": 1
},
"new": {
"dashboardId": 1,
"version": 2
},
"diffType": "json"
}
```
JSON body schema:
- **base** - an object representing the base dashboard version
- **new** - an object representing the new dashboard version
- **diffType** - the type of diff to return. Can be "json" or "basic".
**Example response (JSON diff)**:
```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
<p id="l1" class="diff-line diff-json-same">
<!-- Diff omitted -->
</p>
```
The response is a textual representation of the diff, with the dashboard values being in JSON, similar to the diffs seen on sites like GitHub or GitLab.
Status Codes:
- **200** - Ok
- **400** - Bad request (invalid JSON sent)
- **401** - Unauthorized
- **404** - Not found
**Example response (basic diff)**:
```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
<div class="diff-group">
<!-- Diff omitted -->
</div>
```
The response here is a summary of the changes, derived from the diff between the two JSON objects.
Status Codes:
- **200** - OK
- **400** - Bad request (invalid JSON sent)
- **401** - Unauthorized
- **404** - Not found

View File

@@ -4,7 +4,8 @@ comments: |
This file is used in the following visualizations: candlestick, heatmap, state timeline, status history, time series.
---
You can zoom the panel time range in and out, which in turn, changes the dashboard time range.
You can pan the panel time range left and right, and zoom it in and out.
This, in turn, changes the dashboard time range.
**Zoom in** - Click and drag on the panel to zoom in on a particular time range.
@@ -16,4 +17,9 @@ For example, if the original time range is from 9:00 to 9:59, the time range cha
- Next range: 8:30 - 10:29
- Next range: 7:30 - 11:29
For screen recordings showing these interactions, refer to the [Panel overview documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/panel-overview/#zoom-panel-time-range).
**Pan** - Click and drag the x-axis area of the panel to pan the time range.
The time range shifts by the distance you drag.
For example, if the original time range is from 9:00 to 9:59 and you drag 30 minutes to the right, the time range changes to 9:30 to 10:29.
For screen recordings showing these interactions, refer to the [Panel overview documentation](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/panel-overview/#pan-and-zoom-panel-time-range).

View File

@@ -304,7 +304,8 @@ When things go bad, it often helps if you understand the context in which the fa
In the next part of the tutorial, we simulate some common use cases that someone would add annotations for.
1. To manually add an annotation, click anywhere in your graph, then click **Add annotation**.
1. To manually add an annotation, click anywhere on a graph line to open the data tooltip, then click **Add annotation**.
You can also press `Ctrl` or `Command` and click anywhere in the graph to open the **Add annotation** dialog box.
Note: you might need to save the dashboard first.
1. In **Description**, enter **Migrated user database**.
1. Click **Save**.

View File

@@ -78,9 +78,9 @@ For every dashboard and data source, you can access usage information.
### Dashboard insights
To see dashboard usage information, click the dashboard insights icon in the header.
To see dashboard usage information, click the dashboard insights icon in the sidebar.
![Dashboard insights icon](/media/docs/grafana/dashboards/screenshot-dashboard-insights-icon-11.2.png)
{{< figure src="/media/docs/grafana/dashboards/screenshot-dashboard-insights-v12.4.png" max-width="500px" alt="Dashboard insights icon" >}}
Dashboard insights show the following information:

View File

@@ -2,238 +2,423 @@
aliases:
- ../../../dashboards/build-dashboards/add-organize-panels/ # /docs/grafana/next/dashboards/build-dashboards/add-organize-panels/
- ../../../dashboards/build-dashboards/create-dashboard/ # /docs/grafana/next/dashboards/build-dashboards/create-dashboard/
- ../../../dashboards/build-dashboards/create-dynamic-dashboard/ # /docs/grafana/latest/dashboards/build-dashboards/create-dynamic-dashboard/
- ./create-dynamic-dashboard/ # /docs/grafana/latest/visualizations/dashboards/build-dashboards/create-dynamic-dashboard/
keywords:
- panel
- dashboard
- create
- dynamic dashboard
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Create a dashboard
title: Create a dashboard
title: Create dashboards
description: Create and edit a dashboard
weight: 1
refs:
built-in-special-data-sources:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/#special-data-sources
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/connect-externally-hosted/data-sources/#special-data-sources
visualization-specific-options:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/visualizations/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/visualizations/
configure-standard-options:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/configure-standard-options/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/configure-standard-options/
configure-value-mappings:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/configure-value-mappings/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/configure-value-mappings/
generative-ai-features:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/manage-dashboards/#set-up-generative-ai-features-for-dashboards
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/manage-dashboards/#set-up-generative-ai-features-for-dashboards
configure-thresholds:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/configure-thresholds/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/configure-thresholds/
data-sources:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/connect-externally-hosted/data-sources/
add-a-data-source:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/#add-a-data-source
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/#add-a-data-source
about-users-and-permissions:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/roles-and-permissions/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/roles-and-permissions/
visualizations-options:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/visualizations/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/visualizations/
configure-repeating-panels:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/configure-panel-options/#configure-repeating-panels
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/configure-panel-options/#configure-repeating-panels
override-field-values:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/configure-overrides/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/configure-overrides/
saved-queries:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/query-transform-data/#saved-queries
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data/#saved-queries
save-query:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/query-transform-data/#save-a-query
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data/#save-a-query
image_maps:
- key: editpane-sidebar
src: /media/docs/grafana/dashboards/screenshot-edit-sidebar-v12.4.png
alt: An annotated image of the edit pane and sidebar
points:
- x_coord: 96
y_coord: 17
content: |
**Dashboard options**
Click the icon to open the edit pane. Edit mode only.
- x_coord: 96
y_coord: 25
content: |
**Feedback**
Submit feedback on the new editing experience. Edit mode only.
- x_coord: 96
y_coord: 33
content: |
**Export**
Click to display [export](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/share-dashboards-panels/#export-dashboards) options.
- x_coord: 96
y_coord: 41
content: |
**Content outline**
Navigate a dashboard using the [Content outline](#navigate-using-the-content-outline).
- x_coord: 96
y_coord: 49
content: |
**Dashboard insights**
View [dashboard analytics](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/assess-dashboard-usage/) including information about users, activity, and query counts.
---
## Create a dashboard
# Create dashboards
Dashboards and panels allow you to show your data in visual form. Each panel needs at least one query to display a visualization.
{{< admonition type="note">}}
Dynamic dashboards is currently in public preview. Grafana Labs offers limited support, and breaking changes might occur prior to the feature being made generally available.
For information on the generally available dashboard creation experience, refer to the [documentation for the latest self-managed version of Grafana](https://grafana.com/docs/grafana/latest/visualizations/dashboards/build-dashboards/create-dashboard/).
{{< /admonition >}}
Dashboards and panels allow you to show your data in visual form.
Each panel needs at least one query to display a visualization.
**Before you begin:**
- Ensure that you have the proper permissions. For more information about permissions, refer to [About users and permissions](ref:about-users-and-permissions).
- Identify the dashboard to which you want to add the panel.
- Ensure that you have the proper permissions. For more information about permissions, refer to [About users and permissions](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/administration/roles-and-permissions/).
- Understand the query language of the target data source.
- Ensure that data source for which you are writing a query has been added. For more information about adding a data source, refer to [Add a data source](ref:add-a-data-source) if you need instructions.
## Create a dashboard
To create a dashboard, follow these steps:
{{< shared id="create-dashboard" >}}
1. Click **Dashboards** in the main menu.
1. Click **New** and select **New Dashboard**.
1. On the empty dashboard, click **+ Add visualization**.
![Empty dashboard state](/media/docs/grafana/dashboards/empty-dashboard-10.2.png)
{{< /shared >}}
1. Click **+ Add visualization**.
1. In the dialog box that opens, do one of the following:
- Select one of your existing data sources.
- Select one of the Grafana [built-in special data sources](ref:built-in-special-data-sources).
- Select one of the Grafana [built-in special data sources](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/#special-data-sources).
- Click **Configure a new data source** to set up a new one (Admins only).
{{< figure class="float-right" src="/media/docs/grafana/dashboards/screenshot-data-source-selector-10.0.png" max-width="800px" alt="Select data source modal" >}}
The **Edit panel** view opens with your data source selected.
You can change the panel data source later using the drop-down in the **Queries** tab of the panel editor if needed.
You can change the panel data source later using the drop-down in the **Query** tab of the panel editor if needed.
For more information about data sources, refer to [Data sources](ref:data-sources) for specific guidelines.
For more information about data sources, refer to [Data sources](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/) for specific guidelines.
1. To create a query, do one of the following:
- Write or construct a query in the query language of your data source.
- Open the **Saved queries** drop-down menu and click **Replace query** to reuse a [saved query](ref:saved-queries).
- Open the **Saved queries** drop-down menu and click **Replace query** to reuse a [saved query](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/query-transform-data/#saved-queries).
1. (Optional) To [save the query](ref:save-query) for reuse, open the **Saved queries** drop-down menu and click the **Save query** option.
1. Click **Refresh** to query the data source.
1. (Optional) To add subsequent queries, click **+ Add query** or **+ Add from saved queries**, and refresh the data source as many times as needed.
1. (Optional) To [save the query](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/query-transform-data/#save-a-query) for reuse, open the **Saved queries** drop-down menu and click the **Save query** option.
{{< admonition type="note" >}}
[Saved queries](ref:saved-queries) is currently in [public preview](https://grafana.com/docs/release-life-cycle/) in Grafana Enterprise and Grafana Cloud only.
[Saved queries](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/query-transform-data/#saved-queries) is currently in [public preview](https://grafana.com/docs/release-life-cycle/) in Grafana Enterprise and Grafana Cloud only.
{{< /admonition >}}
1. Click **Refresh** to query the data source.
1. In the visualization list, select a visualization type.
![Visualization selector](/media/docs/grafana/dashboards/screenshot-select-visualization-11-2.png)
{{< figure src="/media/docs/grafana/dashboards/screenshot-select-visualization-v12.png" max-width="350px" alt="Visualization selector" >}}
Grafana displays a preview of your query results with the visualization applied.
For more information about individual visualizations, refer to [Visualizations options](ref:visualizations-options).
For more information about configuring individual visualizations, refer to [Visualizations options](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/visualizations/).
1. Under **Panel options**, enter a title and description for your panel or have Grafana create them using [generative AI features](ref:generative-ai-features).
1. Under **Panel options**, enter a title and description for the panel or have Grafana create them using [generative AI features](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/manage-dashboards/#set-up-generative-ai-features-for-dashboards).
1. Refer to the following documentation for ways you can adjust panel settings.
While not required, most visualizations need some adjustment before they properly display the information that you need.
- [Configure value mappings](ref:configure-value-mappings)
- [Visualization-specific options](ref:visualization-specific-options)
- [Override field values](ref:override-field-values)
- [Configure thresholds](ref:configure-thresholds)
- [Configure standard options](ref:configure-standard-options)
- [Configure value mappings](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/configure-value-mappings/)
- [Visualization-specific options](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/visualizations/)
- [Override field values](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/configure-overrides/)
- [Configure thresholds](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/configure-thresholds/)
- [Configure standard options](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/panels-visualizations/configure-standard-options/)
1. When you've finished editing your panel, click **Save dashboard**.
Alternatively, click **Back to dashboard** if you want to see your changes applied to the dashboard first. Then click **Save dashboard** when you're ready.
1. Enter a title and description for your dashboard or have Grafana create them using [generative AI features](ref:generative-ai-features).
1. When you've finished editing the panel, click **Save**.
1. Enter a title and description for the dashboard if you haven't already or have Grafana create them using [generative AI features](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/manage-dashboards/#set-up-generative-ai-features-for-dashboards).
1. Select a folder, if applicable.
1. Click **Save**
1. Click **Back to dashboard**.
1. (Optional) Continue building the dashboard by clicking one or more of the following options:
- **+ Add panel**: Set panel options in the edit pane or click **Configure** to complete panel setup.
- **+ Add variable**: Follow the steps to [add a variable to the dashboard](#add-variables).
- **Group panels**: Choose from **Group into row** or **Group into tab**. For more information on groupings, refer to [Panel groupings](#panel-groupings).
- **Dashboard options** icon: Open the edit pane to access [panel layout options](#panel-layouts).
1. When you've finished making changes, click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. To add more panels to the dashboard, click **Back to dashboard**.
Then click **Add** in the dashboard header and select **Visualization** in the drop-down.
1. Click **Exit edit**.
![Add drop-down](/media/docs/grafana/dashboards/screenshot-add-dropdown-11.2.png)
## Dashboard edit
When you add additional panels to the dashboard, you're taken straight to the **Edit panel** view.
Now that you've created a basic dashboard, you can augment it with more options.
You can make several updates without leaving the dashboard by using the edit pane, which is explained in the next section.
1. When you've saved all the changes you want to make to the dashboard, click **Exit edit**.
### The edit pane and sidebar
Now, when you want to make more changes to the saved dashboard, click **Edit** in the top-right corner.
The _edit pane_ allows you to make changes without leaving the dashboard, by displaying options associated with the part of the dashboard that's in focus.
The _sidebar_ is next to the edit pane, and it includes options that are useful to have available all the time.
### Begin dashboard creation from data source configuration
The following image shows the parts of the edit pane and the sidebar.
Hover your cursor over the numbers to display descriptions of the sidebar options (descriptions also follow the image):
You can start the process of creating a dashboard directly from a data source rather than from the **Dashboards** page.
{{< image-map key="editpane-sidebar" >}}
To begin building a dashboard directly from a data source, follow these steps:
{{< admonition type="note" >}}
The sidebar is displayed in both edit and view mode, but the **Dashboard options** and **Feedback** icons aren't available in view mode.
{{< /admonition >}}
1. Navigate to **Connections > Data sources**.
1. On the row of the data source for which you want to build a dashboard, click **Build a dashboard**.
You can dock, undock, and resize the edit pane.
When the edit pane is closed, you can resize the sidebar so the icon names are visible.
The empty dashboard page opens.
{{< video-embed src="/media/docs/grafana/dashboards/screenrecord-edit-side-v12.4.mp4" >}}
The available configuration options in the edit pane differ depending on the selected dashboard element:
- Dashboards: High-level options are in the edit pane and further configuration options are in the **Settings** page.
- Groupings (rows and tabs): All configuration options are available in the edit pane.
- Panels: High-level options are in the edit pane and further configuration options are in the **Edit panel** view.
### Navigate using the content outline
The **Content outline** provides a tree-like structure that shows you all the parts of the dashboard and their relationships to each other, including panels, rows, tabs, and variables.
The outline also lets you quickly navigate the dashboard and is available in both view and edit modes (note that variables are only included in edit mode).
{{< figure src="/media/docs/grafana/dashboards/screenshot-content-outline-v12.4.png" max-width="750px" alt="Dashboard with outline open" >}}
To navigate the dashboard using the outline, follow these steps:
1. Navigate to the dashboard you want to view or update.
1. In the right sidebar, click the **Content outline** icon to open it.
1. Expand the outline to find the part of the dashboard you want to view or update.
1. Click the tree item to navigate that part of the dashboard.
### Edit a dashboard
To edit a dashboard, follow these steps:
1. Navigate to the dashboard you want to update.
1. Click **Edit**.
1. Click the part of the dashboard you want to update to open the edit pane, or click the **Dashboard options** icon to open it.
If the dashboard is large, open the **Content outline** and use it to navigate to the part of the dashboard you want to update.
1. Update the dashboard as needed.
1. When you've finished making changes, click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Click **Back to dashboard**, if needed.
1. Click **Exit edit**.
## Panel layouts
Panel layouts control the size and arrangement of panels in the dashboard.
There are two panel layout options:
- **Custom**: You can position and size panels individually. This is the default selection for a new dashboard. **Show/hide rules** are not supported.
- **Auto grid**: Panels resize and fit automatically to create a uniform grid. You can't make manual changes to this layout. **Show/hide rules** are supported.
You can use both layouts in row or tab groupings.
### Auto grid layout
In the auto grid layout, panels are automatically sized and positioned as you add them.
There are default parameters to constrain the layout, and you can update these to have more control over the display:
- **Min column width**: Choose from **Standard**, **Narrow**, **Wide**, or **Custom**, for which you can enter the minimum width in pixels.
- **Max columns**: Set a number up to 10.
- **Row height**: Choose from **Standard**, **Short**, **Tall**, and **Custom**, for which you can enter the row height in pixels.
- **Fill screen**: Toggle the switch on to have the panel fill the entire height of the screen. If the panel is in a row, the **Fill screen** toggle for the row must also be enabled (refer to [grouping configuration options](#grouping-configuration-options)).
### Update panel layout
To update the panel layout, follow these steps:
1. Navigate to the dashboard you want to update.
1. Click **Edit**.
1. Click the dashboard or the grouping that contains the panel layout you want to update.
1. Click the **Dashboard options** icon to open the edit pane, if needed.
1. Under **Layout**, select **Custom** or **Auto grid**.
1. Click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Click **Exit edit**.
## Panel groupings
To help create meaningful sections in your dashboard, you can group panels into rows or tabs.
Rows and tabs let you break up big dashboards or make one dashboard out of several smaller ones.
You can think of the dashboard as a series of nested containers: the dashboard is the largest container and it contains panels, rows, or tabs.
Rows and tabs are the next largest containers, and they contain panels.
You can also nest:
- Rows in a row
- Rows in a tab
- Tabs in a row
You can nest up to two levels deep, which means a dashboard can have a maximum of four configuration levels:
- Dashboard
- Grouping 1 - Row or tab
- Grouping 2 - Row or tab
- Panels
You can only have one type of grouping at each level.
Inside those groupings, however, you have the freedom to add different elements.
Also, custom and auto grid panel layouts are supported for rows and tabs, so each grouping can have a different panel layout.
<!-- {{< figure src="/media/docs/grafana/dashboards/screenshot-groupings-v12.4.png" alt="Dashboard with nested groupings" max-width="750px" >}} -->
The following sections describe:
- [Grouping configuration options](#grouping-configuration-options)
- [Grouping layouts](#grouping-layouts)
- [How to group panels](#group-panels)
- [How to ungroup panels](#ungroup-panels)
### Grouping configuration options
The following table describes the options you can set for a row or tab:
<!-- prettier-ignore-start -->
| Option | Description |
| ----------------| --------------------------------------------------------------------------- |
| Title | Title of the row or tab. |
| Fill screen | Toggle the switch on to make the row fill the screen. Rows only. |
| Hide row header | Toggle the switch on to hide row headers in view mode. In edit mode, the row header is visible, but crossed out with the hidden icon next to it. Rows only. |
| Layout | Select the layout. If the grouping contains another grouping, choose from **Rows** or **Tabs**. If the grouping contains panels, choose from **Custom** or **Auto grid**. For more information, refer to [Panel layouts](#panel-layouts) or [Grouping layouts](#grouping-layouts). |
| Repeat options > [Repeat by variable](#configure-repeat-options) | Configure the dashboard to dynamically add panels, rows, or tabs based on the value of a variable. |
| Show / hide rules > [Panel/Row/Tab visibility](#configure-showhide-rules) | Control whether or not panels, rows, or tabs are displayed based on variable values, a time range, or query results (panels only). |
<!-- prettier-ignore-end -->
### Grouping layouts
When you have panels grouped into rows or tabs, the **Layout** options available depend on which dashboard element is selected and the nesting level of that element.
You can nest up to two levels deep, which means a dashboard can have a maximum of four configuration levels, with the following layout options:
- **Dashboard**: Layout options allow you to choose between rows or tabs.
- **Grouping 1 (outer)**: Layout options allow you to choose between rows or tabs.
- **Grouping 2 (inner)**: Layout options allow you to choose between custom and auto grid (refer to [Panel layouts](#panel-layouts)).
- **Panels**: No layout options
You can switch between rows and tabs or update the panel layout by clicking the parent container and changing the layout selection.
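To make the four configuration levels concrete, the nesting described above could be sketched in dashboard JSON along these lines. This is an illustrative outline only; the structure and field names are assumptions, not the published schema:

```json
{
  "layout": {
    "kind": "RowsLayout",
    "rows": [
      {
        "title": "Grouping 1 (outer row)",
        "layout": {
          "kind": "TabsLayout",
          "tabs": [
            {
              "title": "Grouping 2 (inner tab)",
              "layout": { "kind": "AutoGridLayout" },
              "panels": [{ "title": "Panel" }]
            }
          ]
        }
      }
    ]
  }
}
```

Each container carries its own `layout`, which is why switching a parent container's layout selection converts between rows and tabs, or between custom and auto grid, at that level.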
### Group panels
To group panels, follow these steps:
1. Navigate to the dashboard you want to update.
1. Click **Edit**.
1. Under a panel, click **Group panels**.
While grouping is typically used for multiple panels, you can start a grouping with just one panel.
1. Select **Group into row** or **Group into tab**.
All the panels are moved into the grouping, and a dotted blue line surrounds the row or tab.
The edit pane opens, displaying the relevant options.
1. Set the [grouping configuration options](#grouping-configuration-options) in the edit pane.
1. (Optional) Add one or both of the following:
- A [nested grouping](#add-nested-groupings)
- Other [groupings at the same level](#add-more-groupings-at-the-same-level).
1. Click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Click **Exit edit**.
#### Add nested groupings
To add a second-level (or nested) grouping, follow these steps:
1. In the existing grouping, under the panels, click **Group panels**.
{{< figure src="/media/docs/grafana/dashboards/screenshot-nest-group-v12.4.png" alt="Adding a nested grouping" max-width="500px" >}}
1. Click **Group into row** or **Group into tab** (**Group into tab** is only available if the parent grouping is a row).
The new grouping is added inside the first grouping, and the panels are moved into the nested grouping.
The edit pane opens displaying the relevant options.
1. Set the configuration options for the nested grouping.
1. Click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Click **Exit edit**.
#### Add more groupings at the same level
To add more first-level groupings, follow these steps:
1. On the dashboard, outside the existing first-level grouping, click **New row** or **New tab** (only one option will be available).
{{< figure src="/media/docs/grafana/dashboards/screenshot-add-group-v12.4.png" alt="Adding a grouping at the same level" max-width="500px" >}}
1. Set the configuration options for the new grouping.
1. Click **+ Add panel** to begin adding panels.
1. Click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Click **Exit edit**.
### Ungroup panels
You can ungroup some or all of the dashboard groupings without losing your panels.
Ungrouping behavior depends on whether you're working with first-level or nested groupings:
| Grouping | Action and outcome |
| ---------- | -------------------------------------------------------------------------------------------------- |
| Rows | **Ungroup rows** ungroups all first-level rows in the dashboard and all of their nested groupings. |
| Tabs | **Ungroup tabs** ungroups all first-level tabs in the dashboard and all of their nested groupings. |
| Row > row | **Ungroup rows** ungroups the nested row. |
| Row > tabs | **Ungroup tabs** ungroups all the nested tabs in that row. Tabs in other rows are not affected. |
| Tab > rows | **Ungroup rows** ungroups all the nested rows in that tab. Rows in other tabs are not affected. |
{{< figure src="/media/docs/grafana/dashboards/screenshot-ungrouping-v12.4.png" alt="Dashboard with ungrouping behavior annotated" max-width="750px" >}}
{{< admonition type="caution" >}}
If you delete a grouping, rather than ungrouping it, its panels are deleted as well.
{{< /admonition >}}
To remove groupings, follow these steps:
1. Navigate to the dashboard you want to update.
1. Click **Edit**.
1. (Optional) Click the **Content outline** icon to quickly navigate to the grouping you want to remove.
1. Do one of the following:
- Click **Ungroup rows** or **Ungroup tabs** at the bottom of the dashboard to ungroup all rows or tabs, including any nested groupings.
- Click in a grouping and click **Ungroup rows** or **Ungroup tabs** to ungroup only the tabs or rows nested in that grouping.
1. If you've ungrouped panels that were previously in different panel layouts, you'll be prompted to select a common layout type for all the panels; click **Convert to Auto grid** or **Convert to Custom**.
1. Click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Click **Exit edit**.
## Configure repeat options
You can configure Grafana to dynamically add panels, rows, or tabs to a dashboard based on the value of a variable.
Variables dynamically change your queries across all panels, rows, or tabs in a dashboard.
This only applies to queries that include a multi-value variable.
To configure repeats, follow these steps:
1. Navigate to the dashboard you want to update.
1. Click **Edit**.
1. Click the panel, row, or tab you want to update to open the edit pane, or click the **Dashboard options** icon to open it.
If the dashboard is large, open the **Content outline** and use it to navigate to the part of the dashboard you want to update.
1. Expand the **Repeat options** section.
1. Select the **Repeat by variable**.
1. For panels in a custom layout, set the following options:
1. Under **Repeat direction**, choose one of the following:
- **Horizontal** - Arrange panels side-by-side. Grafana adjusts the width of a repeated panel. You can't mix other panels on a row with a repeated panel.
- **Vertical** - Arrange panels in a column. The width of repeated panels is the same as the original, repeated panel.
1. If you selected **Horizontal**, select a value in the **Max per row** drop-down list to control the maximum number of panels that can be in a row.
1. (Optional) To provide context to dashboard users, add the variable name to the panel, row, or tab title.
1. When you've finished setting the repeat option, click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Click **Exit edit**.
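In the classic dashboard JSON model, the panel repeat settings above map to the `repeat`, `repeatDirection`, and `maxPerRow` panel fields (the newer dynamic-dashboard schema may store them differently). A minimal sketch, assuming a multi-value variable named `host`:

```json
{
  "panels": [
    {
      "title": "CPU usage - $host",
      "type": "timeseries",
      "repeat": "host",
      "repeatDirection": "h",
      "maxPerRow": 4,
      "gridPos": { "h": 8, "w": 6, "x": 0, "y": 0 }
    }
  ]
}
```

`repeatDirection` is `"h"` for **Horizontal** or `"v"` for **Vertical**, and `maxPerRow` only applies to horizontal repeats. Including `$host` in the title shows which variable value each repeated panel represents.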
### Repeating rows and tabs and the Dashboard special data source
If a row includes panels using the special [Dashboard data source](ref:built-in-special-data-sources)&mdash;the data source that uses a result set from another panel in the same dashboard&mdash;then corresponding panels in repeated rows will reference the panel in the original row, not the ones in the repeated rows.
The same behavior applies to tabs.
For example, in a dashboard:
- Repeating row, `Row 2`, includes `Panel 2A` and `Panel 2B`
- `Panel 2B` references `Panel 1A`, not `Panel 2A`
## Show/hide rules
You can configure panels, rows, and tabs to be shown or hidden based on rules.
For example, you can set a panel to be hidden if there's no data returned by a query or a tab to only be shown if a specific variable value is present.
There are three types of show/hide rules to choose from:
- [Query result](#query-result-rule)
- [Template variable](#template-variable-rule)
- [Time range less than](#time-range-less-than-rule)
For steps on how to create show/hide rules, refer to [Configure show/hide rules](#configure-showhide-rules).
{{< admonition type="note" >}}
You can only configure show/hide rules for panels in the **Auto grid** layout. Set the panel layout at the dashboard, row, or tab level.
{{< /admonition >}}
### Query result rule
Show or hide a panel based on whether or not the query returns any results.
The rule provides **Has data** and **No data** options, so you can choose to show or hide the panel based on the presence or absence of data.
For example, if you have a dashboard with several panels and only want panels that return data to appear, set the rule as follows:
- Panel visibility > Show
- Query result > Has data
Alternatively, you might want to troubleshoot a dashboard with several panels to see which ones contain broken queries that aren't returning any results.
In this case, you'd set the rule as follows:
- Panel visibility > Show
- Query result > No data
### Template variable rule
Show or hide a panel, row, or tab dynamically based on the variable value.
You can select any variable that's configured for the dashboard and choose from the following operators for maximum flexibility:
- Equals
- Not equals
- Matches (regular expression values)
- Not matches (regular expression values)
You can [add more variables](#add-variables) if you need to without leaving the dashboard.
### Time range less than rule
Show or hide a panel, row, or tab if the dashboard time range is shorter than the selected time range.
This ensures that as you change the time range of the dashboard, you only see data relevant to that time period.
For example, a dashboard tracking adoption of a feature over time has the following setup:
- Dashboard time range is **Last 7 days**
- One panel tracks weekly stats
- One panel tracks daily stats
For the panel tracking weekly stats, a rule is set up to hide it if the dashboard time range is less than 7 days.
For the panel tracking daily stats, a rule is set up to hide it if the dashboard time range is less than 24 hours.
This configuration ensures that these time-based panels are only displayed when enough time has passed to make them relevant.
For this rule type, you can select time ranges from **5 minutes** to **5 years**.
### Configure show/hide rules
To configure show/hide rules, follow these steps:
1. Navigate to the dashboard you want to update.
1. Click **Edit**.
1. Click the panel, row, or tab you want to update to open the edit pane, or click the **Dashboard options** icon to open it.
If the dashboard is large, open the **Content outline** and use it to navigate to the part of the dashboard you want to update.
1. Expand the **Show / hide rules** section.
1. Select **Show** or **Hide** to set whether the panel, row, or tab is shown or hidden based on the outcome of the rules.
1. Click **+ Add rule**.
1. Select a rule type:
- **Query result**: Show or hide a panel based on query results. Choose from **Has data** and **No data**.
- **Template variable**: Show or hide the panel, row, or tab dynamically based on the variable value. Select a variable and operator and enter a value.
- **Time range less than**: Show or hide the panel, row, or tab if the dashboard time range is shorter than the selected time range. Select a time range from **5 minutes** to **5 years**.
1. If you've configured more than one rule, under **Match rules**, select one of the following:
- **Match all**: The panel, row, or tab is shown or hidden only if _all_ the rules are matched.
- **Match any**: The panel, row, or tab is shown or hidden if _any_ of the rules are matched.
This option is only displayed if you add multiple rules.
1. When you've finished setting rules, click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Click **Exit edit**.
Hidden panels, rows, or tabs aren't visible when the dashboard is in view mode.
In edit mode, hidden dashboard elements are displayed with an icon or overlay indicating this.
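Conceptually, the rules above combine into a single visibility condition attached to the panel, row, or tab. The JSON sketch below is purely illustrative pseudoschema; the rule kinds and field names are assumptions, not the published dashboard model:

```json
{
  "showHideRules": {
    "visibility": "show",
    "match": "all",
    "rules": [
      { "kind": "queryResult", "value": "hasData" },
      { "kind": "templateVariable", "variable": "env", "operator": "equals", "value": "prod" },
      { "kind": "timeRangeLessThan", "value": "7d" }
    ]
  }
}
```

With `"match": "all"`, the element is shown only when every rule matches; with `"match": "any"`, one matching rule is enough.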
## Move a panel
To move a panel, follow these steps:
1. Navigate to the dashboard you want to update.
1. Click **Edit**.
1. Navigate to the panel you want to move.
If the dashboard is large, open the **Content outline** and use it to navigate to the panel.
1. Click the panel title and drag the panel to another row or tab, or to a new position on the dashboard.
If the dashboard has groupings, you can only move the panel to another grouping.
1. Click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Click **Exit edit**.
## Resize a panel
When your dashboard or grouping has a **Custom** layout, you can manually resize a panel.
To resize a panel, follow these steps:
1. Navigate to the dashboard you want to update.
1. Click **Edit**.
1. Navigate to the panel you want to resize.
If the dashboard is large, open the **Content outline** and use it to navigate to the panel.
1. Click and drag the lower-right corner of the panel to change the size of the panel.
1. Click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Click **Exit edit**.
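In the classic dashboard JSON model, a panel's size and position in the **Custom** layout are stored in its `gridPos` field, on a 24-column grid (the dynamic-dashboard schema may differ):

```json
{
  "gridPos": { "x": 0, "y": 0, "w": 12, "h": 9 }
}
```

`w` is the panel width in grid columns (1-24), `h` is the height in grid rows (each about 30 pixels), and `x`/`y` give the panel's top-left position. Dragging the lower-right corner in the UI updates `w` and `h`.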
## Add variables
To add variables without leaving the dashboard, follow these steps:
1. Navigate to the dashboard you want to update.
1. Click **Edit**.
1. Click **+ Add variable** at the top of the dashboard.
1. Choose a variable type from the list.
1. Set the options for the variable.
1. Click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Click **Exit edit**.
### Add variables using the content outline
You can also add variables without leaving the dashboard using the content outline.
To access the variables creation flow this way, follow these steps:
1. Navigate to the dashboard you want to update.
1. Click **Edit**.
1. Click the **Content outline** icon.
1. Click **Variables** in the outline.
1. Click **+ Add variable**.
1. Complete the rest of the steps to [add a variable without leaving the dashboard](#add-variables).
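In the classic dashboard JSON model, variables added this way appear under `templating.list`. A minimal sketch of a multi-value query variable, assuming a Prometheus data source (the query, variable name, and data source UID placeholder are examples):

```json
{
  "templating": {
    "list": [
      {
        "name": "host",
        "type": "query",
        "datasource": { "type": "prometheus", "uid": "<datasource-uid>" },
        "query": "label_values(up, instance)",
        "multi": true,
        "includeAll": true
      }
    ]
  }
}
```

Setting `multi` and `includeAll` gives the variable the **Multi-value** and **Include all values** behavior required for repeating panels, rows, and tabs.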
## Copy or duplicate dashboard elements
You can copy and paste or duplicate panels, rows, and tabs.
To copy or duplicate dashboard elements, follow these steps:
1. Navigate to the dashboard you want to update.
1. Click **Edit**.
1. Click the panel, row, or tab you want to update to open the edit pane, or click the **Dashboard options** icon to open it.
If the dashboard is large, open the **Content outline** and use it to navigate to the part of the dashboard you want to update.
1. In the top corner of the edit pane, click the **Copy or Duplicate** icon and do one of the following:
- Click **Copy**.
- Click **Duplicate**. The duplicated element is added next to the original one. Proceed to step 6.
1. If you selected **Copy**, navigate to the part of the dashboard where you want to add the copied element, and click **Paste panel**, **Paste row**, or **Paste tab**.
1. Update the copied or duplicated element if needed.
1. Click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Click **Exit edit**.
## Copy a dashboard
To make a copy of a dashboard, follow these steps:
1. Navigate to the dashboard you want to update.
1. Click **Edit**.
1. Click the **Save** drop-down list and select **Save as copy**.
1. (Optional) Specify the name, folder, description, and whether or not to copy the original dashboard tags for the copied dashboard.
By default, the copied dashboard has the same name as the original dashboard with the word "Copy" appended and is in the same folder.
1. Click **Save**.

---
labels:
products:
- cloud
- oss
stage:
- experimental
_build:
list: false
noindex: true
title: Create a dynamic dashboard
description: Create and edit a dynamic dashboard
weight: 900
refs:
built-in-special-data-sources:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/#special-data-sources
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/connect-externally-hosted/data-sources/#special-data-sources
visualization-specific-options:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/visualizations/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/visualizations/
configure-standard-options:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/configure-standard-options/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/configure-standard-options/
configure-value-mappings:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/configure-value-mappings/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/configure-value-mappings/
generative-ai-features:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/manage-dashboards/#set-up-generative-ai-features-for-dashboards
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/manage-dashboards/#set-up-generative-ai-features-for-dashboards
configure-thresholds:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/configure-thresholds/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/configure-thresholds/
data-sources:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/connect-externally-hosted/data-sources/
add-a-data-source:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/#add-a-data-source
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/#add-a-data-source
about-users-and-permissions:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/roles-and-permissions/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/roles-and-permissions/
visualizations-options:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/visualizations/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/visualizations/
configure-repeating-panels:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/configure-panel-options/#configure-repeating-panels
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/configure-panel-options/#configure-repeating-panels
override-field-values:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/configure-overrides/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/configure-overrides/
aliases:
- ../../../dashboards/build-dashboards/create-dynamic-dashboard/ # /docs/grafana/next/dashboards/build-dashboards/create-dynamic-dashboard/
---
# Create and edit dynamic dashboards
{{< admonition type="caution" >}}
Dynamic dashboards is an [experimental](https://grafana.com/docs/release-life-cycle/) feature. Engineering and on-call support is not available. Documentation is either limited or not provided outside of code comments. No SLA is provided. To get early access to this feature, request it through [this form](https://docs.google.com/forms/d/e/1FAIpQLSd73nQzuhzcHJOrLFK4ef_uMxHAQiPQh1-rsQUT2MRqbeMLpg/viewform?usp=dialog).
**Do not enable this feature in production environments as it may result in the irreversible loss of data.**
{{< /admonition >}}
Dashboards and panels allow you to show your data in visual form. Each panel needs at least one query to display a visualization.
## Before you begin
- Ensure that you have the proper permissions. For more information about permissions, refer to [About users and permissions](ref:about-users-and-permissions).
- Identify the dashboard to which you want to add the panel.
- Understand the query language of the target data source.
- Ensure that the data source for which you are writing a query has been added. For instructions, refer to [Add a data source](ref:add-a-data-source).
## Create a dashboard
To create a dashboard, follow these steps:
1. Click **Dashboards** in the main menu.
1. Click **New** and select **New Dashboard**.
1. In the edit pane, enter the dashboard title and description.
{{< figure src="/media/docs/grafana/dashboards/screenshot-new-dashboard-v12.png" max-width="750px" alt="New dashboard" >}}
1. Under **Panel layout**, choose one of the following options:
- **Custom** - Position and size panels manually. The default selection.
- **Auto grid** - Panels are automatically resized to create a uniform grid based on the column and row settings.
1. Click **+ Add visualization**.
1. In the dialog box that opens, do one of the following:
- Select one of your existing data sources.
- Select one of the Grafana [built-in special data sources](ref:built-in-special-data-sources).
- Click **Configure a new data source** to set up a new one (Admins only).
{{< figure class="float-right" src="/media/docs/grafana/dashboards/screenshot-data-source-selector-10.0.png" max-width="800px" alt="Select data source modal" >}}
The **Edit panel** view opens with your data source selected.
You can change the panel data source later using the drop-down in the **Query** tab of the panel editor if needed.
For more information about data sources, refer to [Data sources](ref:data-sources) for specific guidelines.
1. Write or construct a query in the query language of your data source.
1. Click **Refresh** to query the data source.
1. In the visualization list, select a visualization type.
{{< figure src="/media/docs/grafana/dashboards/screenshot-select-visualization-v12.png" max-width="350px" alt="Visualization selector" >}}
Grafana displays a preview of your query results with the visualization applied.
For more information about configuring individual visualizations, refer to [Visualizations options](ref:visualizations-options).
1. Under **Panel options**, enter a title and description for your panel or have Grafana create them using [generative AI features](ref:generative-ai-features).
1. Refer to the following documentation for ways you can adjust panel settings.
While not required, most visualizations need some adjustment before they properly display the information that you need.
- [Configure value mappings](ref:configure-value-mappings)
- [Visualization-specific options](ref:visualization-specific-options)
- [Override field values](ref:override-field-values)
- [Configure thresholds](ref:configure-thresholds)
- [Configure standard options](ref:configure-standard-options)
1. When you've finished editing your panel, click **Save**.
Alternatively, click **Back to dashboard** if you want to see your changes applied to the dashboard first. Then click **Save** when you're ready.
1. Enter a title and description for your dashboard if you haven't already or have Grafana create them using [generative AI features](ref:generative-ai-features).
1. Select a folder, if applicable.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. To add more panels to the dashboard, click **Back to dashboard** and at the bottom-left corner of the dashboard, click **+ Add panel**.
{{< figure src="/media/docs/grafana/dashboards/screenshot-add-panel-v12.png" max-width="500px" alt="Add panel button" >}}
1. (Optional) In the edit pane, enter a title and description for the panel and set the panel transparency and repeat options, if applicable.
1. Click **Configure** in either the edit pane or on the panel to begin the configuration process.
1. When you've saved all the changes you want to make to the dashboard, click **Back to dashboard**.
1. Toggle off the edit mode switch.
## Group panels
To help create meaningful sections in your dashboard, you can group panels into rows or tabs.
Rows and tabs let you break up big dashboards or make one dashboard out of several smaller ones.
You can nest tabs and rows within each other or themselves.
Also, tabs are included in the dashboard URL.
The following sections describe the configuration options for adding tabs and rows.
While grouping is meant for multiple panels, you can start a grouping with just one panel.
1. Click **Dashboards** in the main menu.
1. Navigate to the dashboard you want to update.
1. Toggle on the edit mode switch.
1. At the bottom-left corner of the dashboard, click **Group panels**.
1. Select **Group into row** or **Group into tab**.
A dotted line surrounds the panels and the **Row** or **Tab** edit pane is displayed on the right side of the dashboard.
1. Set the [grouping configuration options](#grouping-configuration-options).
1. When you're finished, click **Save** at the top-right corner of the dashboard.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
### Grouping configuration options
The following table describes the options you can set for a row.
<!-- prettier-ignore-start -->
| Option | Description |
| ------ | ----------- |
| Title | Title of the row or tab. |
| Fill screen | Toggle the switch on to make the row fill the screen. Only applies to rows. |
| Hide row header | Toggle the switch on to hide the header. In edit mode, the row header is visible, but crossed out with the hidden icon next to it. Only applies to rows. |
| Group layout | Select the grouping option, between **Rows** and **Tabs**. Only available when there's a nested grouping and applies to the nested grouping. |
| Panel layout | Select whether panels are sized and positioned manually, **Custom**, or automatically, **Auto grid**. Only available when a grouping contains panels. |
| Repeat options > [Repeat by variable](#configure-repeat-options) | Configure the dashboard to dynamically add rows or tabs based on the value of a variable. |
| Show / hide rules > [Row/Tab visibility](#configure-showhide-rules) | Control whether or not rows or tabs are displayed based on variables or a time range. |
<!-- prettier-ignore-end -->
## Configure repeat options
<!-- previous heading "Configure repeating rows" -->
You can configure Grafana to dynamically add panels, rows, or tabs to a dashboard based on the value of a variable.
Variables dynamically change your queries across all rows in a dashboard.
This only applies to queries that include a multi-value variable.
<!-- To see an example of repeating rows, refer to [Dashboard with repeating rows](https://play.grafana.org/d/000000153/repeat-rows).
The example shows that you can also repeat rows if you have variables set with `Multi-value` or `Include all values` selected.
Might be good to update this Play example -->
To configure repeats, follow these steps:
1. Click **Dashboards** in the main menu.
1. Navigate to the dashboard you want to update.
1. Toggle on the edit mode switch.
The **Dashboard** edit pane opens on the right side of the dashboard.
1. Click in the panel, row, or tab you want to work with to bring it into focus and display the associated options in the edit pane.
1. Expand the **Repeat options** section.
1. In the **Repeat by variable** drop-down list, select a variable.
1. For panels only, set the following options:
- Under **Repeat direction**, choose one of the following:
   - **Horizontal** - Arrange panels side-by-side. Grafana adjusts the width of a repeated panel. You can't mix other panels on a row with a repeated panel.
- **Vertical** - Arrange panels in a column. The width of repeated panels is the same as the original, repeated panel.
- If you selected **Horizontal**, select a value in the **Max per row** drop-down list to control the maximum number of panels that can be in a row.
1. (Optional) To provide context to dashboard users, add the variable name to the panel, row, or tab title.
1. When you've finished setting the repeat option, click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Toggle off the edit mode switch.
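In the Classic dashboard JSON model, the repeat settings described above are stored as per-panel fields. The following is a minimal sketch; the `repeat`, `repeatDirection`, and `maxPerRow` field names come from the Classic schema, while the `datacenter` variable and the panel title are illustrative:

```json
{
  "type": "timeseries",
  "title": "CPU usage - $datacenter",
  "repeat": "datacenter",
  "repeatDirection": "h",
  "maxPerRow": 4
}
```

`repeatDirection` is `"h"` for **Horizontal** and `"v"` for **Vertical**; `maxPerRow` only applies to horizontal repeats.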
### Repeating rows and tabs and the Dashboard special data source
<!-- is this next section still true? -->
If a row includes panels using the special [Dashboard data source](ref:built-in-special-data-sources)&mdash;the data source that uses a result set from another panel in the same dashboard&mdash;then corresponding panels in repeated rows will reference the panel in the original row, not the ones in the repeated rows.
The same behavior applies to tabs.
For example, in a dashboard:
- `Row 1` includes `Panel 1A` and `Panel 1B`
- `Panel 1B` uses the results from `Panel 1A` by way of the `-- Dashboard --` data source
- Repeating row, `Row 2`, includes `Panel 2A` and `Panel 2B`
- `Panel 2B` references `Panel 1A`, not `Panel 2A`
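In the Classic JSON model, a panel that consumes another panel's results declares the special data source and points at the source panel's numeric `id`. A rough sketch of `Panel 1B` from the example above (field names follow the Classic schema; the `panelId` value assumes `Panel 1A` has `id` 1):

```json
{
  "title": "Panel 1B",
  "datasource": { "type": "datasource", "uid": "-- Dashboard --" },
  "targets": [{ "panelId": 1, "refId": "A" }]
}
```

Because the reference is by panel `id`, repeated copies such as `Panel 2B` keep pointing at the original `Panel 1A`.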
## Configure show/hide rules
You can configure panels, rows, and tabs to be shown or hidden based on rules.
For example, you might want to set a panel to be hidden if there's no data returned by a query or a tab to only be shown based on a variable being present.
{{< admonition type="note" >}}
You can only configure show/hide rules for panels when the dashboard is using the **Auto grid** panel layout.
{{< /admonition >}}
To configure show/hide rules, follow these steps:
1. Click **Dashboards** in the main menu.
1. Navigate to the dashboard you want to update.
1. Toggle on the edit mode switch.
The **Dashboard** edit pane opens on the right side of the dashboard.
1. Click in the panel, row, or tab you want to work with to bring it into focus and display the associated options in the edit pane.
1. Expand the **Show / hide rules** section.
1. Select **Show** or **Hide** to set whether the panel, row, or tab is shown or hidden based on the outcome of the rules.
1. Click **+ Add rule**.
1. Select a rule type:
- **Query result** - Show or hide a panel based on query results. Choose from **Has data** and **No data**. For panels only.
- **Template variable** - Show or hide the panel, row, or tab dynamically based on the variable value. Select a variable and operator and enter a value.
- **Time range less than** - Show or hide the panel, row, or tab if the dashboard time range is shorter than the selected time frame. Select or enter a time range.
1. Configure the rule.
1. Under **Match rules**, select one of the following:
- **Match all** - The panel, row, or tab is shown or hidden only if _all_ the rules are matched.
- **Match any** - The panel, row, or tab is shown or hidden if _any_ of the rules are matched.
This option is only displayed if you add multiple rules.
1. When you've finished setting rules, click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Toggle off the edit mode switch.
{{< admonition type="caution" >}}
Dynamic dashboards is an [experimental](https://grafana.com/docs/release-life-cycle/) feature. Engineering and on-call support is not available. Documentation is either limited or not provided outside of code comments. No SLA is provided. To get early access to this feature, request it through [this form](https://docs.google.com/forms/d/e/1FAIpQLSd73nQzuhzcHJOrLFK4ef_uMxHAQiPQh1-rsQUT2MRqbeMLpg/viewform?usp=dialog).
**Do not enable this feature in production environments as it may result in the irreversible loss of data.**
{{< /admonition >}}
## Edit dashboards
When the dashboard is in edit mode, the edit pane displays options associated with the part of the dashboard that's in focus.
For example, if you click in the area of a panel, row, or tab, that area comes into focus and the edit pane shows the options for that area:
{{< figure src="/media/docs/grafana/dashboards/screenshot-edit-pane-focus-v12.png" max-width="750px" alt="Dashboard with a panel in focus" >}}
- For rows and tabs, all of the available options are in the edit pane.
- For panels, high-level options are in the edit pane and further configuration options are in the **Edit panel** view.
- For dashboards, high-level options are in the edit pane and further configuration options are in the **Settings** page.
To edit dashboards, follow these steps:
1. Click **Dashboards** in the main menu.
1. Navigate to the dashboard you want to update.
1. Toggle on the edit mode switch.
The **Dashboard** edit pane opens on the right side of the dashboard.
1. Click in the area you want to work with to bring it into focus and display the associated options in the edit pane.
1. Do one of the following:
- For rows or tabs, make the required changes using the edit pane.
   - For panels, update the panel title, description, repeat options, or show/hide rules in the edit pane. For more changes, click **Configure** and continue in **Edit panel** view.
   - For dashboards, update the dashboard title, description, grouping, or panel layout. For more changes, click the settings (gear) icon in the top-right corner.
1. When you've finished making changes, click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Toggle off the edit mode switch.
### Undo and redo
When a dashboard is in edit mode, you can undo and redo changes you've made using the buttons on the toolbar:
{{< figure src="/media/docs/grafana/dashboards/screenshot-undo-redo-icons-v12.0.png" max-width="500px" alt="Undo and redo buttons" >}}
When you've made a change and hover the cursor over the buttons, the tooltip displays the change you're about to undo or redo.
Also, you can continue undoing or redoing as many changes as you need:
{{< video-embed src="/media/docs/grafana/dashboards/screen-record-undo-redo-v12.0.mp4" >}}
The undo and redo buttons are only available at the dashboard level and only apply to changes made there, such as changes to the dashboard layout and grouping, and high-level dashboard or panel updates.
They aren't visible and don't apply when you're configuring a panel or making changes in the dashboard settings.
{{< admonition type="note" >}}
Not all dashboard edit actions can be undone or redone yet.
{{< /admonition >}}
## Move or resize a panel
<!-- previous headings Move a panel & Resize a panel -->
When your dashboard has a **Custom** layout, you can resize or move a panel to any location on the dashboard.
To move or resize a panel, follow these steps:
1. Click **Dashboards** in the main menu.
1. Navigate to the dashboard you want to update.
1. Toggle on the edit mode switch.
1. Do one of the following:
- Click the panel title and drag the panel to the new location.
- Click and drag the lower-right corner of the panel to change the size of the panel.
1. Click **Save**.
1. (Optional) Enter a description of the changes you've made.
1. Click **Save**.
1. Toggle off the edit mode switch.
## Navigate using the dashboard outline
The dashboard **Outline** provides a tree-like structure that shows you all of the parts of your dashboard and their relationships to each other, including panels, rows, tabs, and variables.
The outline also lets you quickly navigate the dashboard so that you don't have to spend time finding a particular element to work with it.
By default, the outline is collapsed except for the part that's currently in focus.
{{< figure src="/media/docs/grafana/dashboards/screenshot-dashboard-outline-v12.png" max-width="750px" alt="Dashboard with outline open showing panel in focus" >}}
To navigate the dashboard using the outline, follow these steps:
1. Click **Dashboards** in the main menu.
1. Navigate to the dashboard you want to update.
1. Toggle on the edit mode switch.
The **Dashboard** edit pane opens on the right side of the dashboard.
1. In the edit pane, expand the **Outline** section.
1. Expand the outline to find the dashboard part to which you want to navigate.
1. Click the tree item to navigate to that part of the dashboard.
## Copy a dashboard
To make a copy of a dashboard, follow these steps:
1. Click **Dashboards** in the main menu.
1. Navigate to the dashboard you want to update.
1. Toggle on the edit mode switch.
1. Click the **Save** drop-down and select **Save as copy**.
1. (Optional) Specify the name, folder, description, and whether or not to copy the original dashboard tags for the copied dashboard.
By default, the copied dashboard has the same name as the original dashboard with the word "Copy" appended and is in the same folder.
1. Click **Save**.
{{< admonition type="caution" >}}
Dynamic dashboards is an [experimental](https://grafana.com/docs/release-life-cycle/) feature. Engineering and on-call support is not available. Documentation is either limited or not provided outside of code comments. No SLA is provided. To get early access to this feature, request it through [this form](https://docs.google.com/forms/d/e/1FAIpQLSd73nQzuhzcHJOrLFK4ef_uMxHAQiPQh1-rsQUT2MRqbeMLpg/viewform?usp=dialog).
**Do not enable this feature in production environments as it may result in the irreversible loss of data.**
{{< /admonition >}}

View File

@@ -3,20 +3,25 @@ keywords:
- grafana
- dashboard
- template
- suggestions
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Create template dashboards
title: Create dashboards from templates
description: Learn how to create dashboards from templates
menuTitle: Create template and suggested dashboards
title: Create dashboards from templates and suggestions
description: Learn how to create dashboards from templates and suggestions
weight: 3
---
{{< docs/public-preview product="Dashboard templates" >}}
# Create dashboards from templates and suggestions
# Create dashboards from templates
Grafana provides alternative ways to start building a dashboard.
## Create dashboards from templates
{{< docs/public-preview product="Dashboard templates" >}}
Grafana provides a variety of pre-built dashboard templates that you can use to quickly set up visualizations for your data. These dashboards use sample data, which you can replace with your own data, making it easier to get started with monitoring and analysis.
@@ -48,3 +53,23 @@ To create a dashboard from a template, follow these steps:
{{< figure src="/media/docs/grafana/dashboards/screenshot-remove-banner-v12.3.png" max-width="750px" alt="Removing the sample data banner panel" >}}
1. Click **Save dashboard**.
## Create dashboards from suggestions
{{< docs/public-preview product="Suggested dashboards" >}}
You can start the process of creating a dashboard directly from a data source rather than from the **Dashboards** page, which gives you access to suggestions based on the data source.
To begin building a dashboard directly from a data source, follow these steps:
1. Navigate to **Connections > Data sources**.
1. On the row of the data source for which you want to build a dashboard, click **Build a dashboard**.
The empty dashboard page opens.
1. Select one of the suggested dashboards by clicking its **Use dashboard** button. This can be helpful when you're not sure how to most effectively visualize your data.
The suggested dashboards are specific to your data source type (for example, Prometheus, Loki, or Elasticsearch). If there are more than three dashboard suggestions, you can click **View all** to see the rest of them.
![Empty dashboard with add visualization and suggested dashboard options](/media/docs/grafana/dashboards/screenshot-suggested-dashboards-v12.3.png)
1. Complete the rest of the dashboard configuration. For more detailed steps, refer to [Create a dashboard](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/build-dashboards/create-dashboard/), beginning at step five.

View File

@@ -85,7 +85,8 @@ Once you've added a dashboard link, it appears in the upper right corner of your
Add links to other dashboards at the top of your current dashboard.
1. In the dashboard you want to link, click **Edit**.
1. Click **Settings**.
1. In the sidebar, click the **Dashboard options** icon.
1. In the edit pane, click **Settings**.
1. Go to the **Links** tab and then click **Add dashboard link**.
The default link type is **Dashboards**.
@@ -109,7 +110,8 @@ Add links to other dashboards at the top of your current dashboard.
Add a link to a URL at the top of your current dashboard. You can link to any available URL, including dashboards, panels, or external sites. You can even control the time range to ensure the user is zoomed in on the right data in Grafana.
1. In the dashboard you want to link, click **Edit**.
1. Click **Settings**.
1. In the sidebar, click the **Dashboard options** icon.
1. In the edit pane, click **Settings**.
1. Go to the **Links** tab and then click **Add dashboard link**.
1. In the **Type** drop-down, select **Link**.
1. In the **URL** field, enter the URL to which you want to link.
@@ -132,7 +134,8 @@ Add a link to a URL at the top of your current dashboard. You can link to any av
To edit, duplicate, or delete dashboard link, follow these steps:
1. In the dashboard you want to link, click **Edit**.
1. Click **Settings**.
1. In the sidebar, click the **Dashboard options** icon.
1. In the edit pane, click **Settings**.
1. Go to the **Links** tab.
1. Do one of the following:
- **Edit** - Click the name of the link and update the link settings.

View File

@@ -14,7 +14,7 @@ labels:
- cloud
- enterprise
- oss
menutitle: Manage version history
menuTitle: Manage version history
title: Manage dashboard version history
description: View and compare previous versions of your dashboard
weight: 400
@@ -32,8 +32,9 @@ The dashboard version history feature lets you compare and restore to previously
To compare two dashboard versions, follow these steps:
1. Click **Edit** in the top-right corner of the dashboard.
1. Click **Settings**.
1. Click **Edit**.
1. In the sidebar, click the **Dashboard options** icon.
1. In the edit pane, click **Settings**.
1. Go to the **Versions** tab.
1. Select the two dashboard versions that you want to compare.
1. Click **Compare versions** to view the diff between the two versions.
@@ -49,8 +50,9 @@ When you're comparing versions, if one of the versions you've selected is the la
To restore to a previously saved dashboard version, follow these steps:
1. Click **Edit** in the top-right corner of the dashboard.
1. Click **Settings**.
1. Click **Edit**.
1. Click the **Dashboard options** icon.
1. In the edit pane, click **Settings**.
1. Go to the **Versions** tab.
1. Click the **Restore** button next to the version.

View File

@@ -50,8 +50,9 @@ The dashboard settings page allows you to:
To access the dashboard setting page:
1. Click **Edit** in the top-right corner of the dashboard.
1. Click **Settings**.
1. Click **Edit**.
1. In the sidebar, click the **Dashboard options** icon.
1. In the edit pane, click **Settings**.
## Modify dashboard time settings

View File

@@ -3,45 +3,75 @@ aliases:
- ../../../reference/dashboard/ # /docs/grafana/next/reference/dashboard/
- ../../../dashboards/json-model/ # /docs/grafana/next/dashboards/json-model/
- ../../../dashboards/build-dashboards/view-dashboard-json-model/ # /docs/grafana/next/dashboards/build-dashboards/view-dashboard-json-model/
- ../../../as-code/observability-as-code/schema-v2/ # /docs/grafana/latest/as-code/observability-as-code/schema-v2/
- ../../../as-code/observability-as-code/schema-v2/annotations-schema/ # /docs/grafana/latest/as-code/observability-as-code/schema-v2/annotations-schema/
- ../../../as-code/observability-as-code/schema-v2/panel-schema/ # /docs/grafana/latest/as-code/observability-as-code/schema-v2/panel-schema/
- ../../../as-code/observability-as-code/schema-v2/librarypanel-schema/ # /docs/grafana/latest/as-code/observability-as-code/schema-v2/librarypanel-schema/
- ../../../as-code/observability-as-code/schema-v2/layout-schema/ # /docs/grafana/latest/as-code/observability-as-code/schema-v2/layout-schema/
- ../../../as-code/observability-as-code/schema-v2/links-schema/ # /docs/grafana/latest/as-code/observability-as-code/schema-v2/links-schema/
- ../../../as-code/observability-as-code/schema-v2/timesettings-schema/ # /docs/grafana/latest/as-code/observability-as-code/schema-v2/timesettings-schema/
- ../../../as-code/observability-as-code/schema-v2/variables-schema/ # /docs/grafana/latest/as-code/observability-as-code/schema-v2/variables-schema/
- ../../../observability-as-code/schema-v2/ # /docs/grafana/latest/observability-as-code/schema-v2/
- ../../../../next/observability-as-code/schema-v2/annotations-schema/ # /docs/grafana/next/observability-as-code/schema-v2/annotations-schema/
- ../../../../next/observability-as-code/schema-v2/panel-schema/ # /docs/grafana/next/observability-as-code/schema-v2/panel-schema/
- ../../../../next/observability-as-code/schema-v2/librarypanel-schema/ # /docs/grafana/next/observability-as-code/schema-v2/librarypanel-schema/
- ../../../../next/observability-as-code/schema-v2/layout-schema/ # /docs/grafana/next/observability-as-code/schema-v2/layout-schema/
- ../../../../next/observability-as-code/schema-v2/links-schema/ # /docs/grafana/next/observability-as-code/schema-v2/links-schema/
- ../../../../next/observability-as-code/schema-v2/timesettings-schema/ # /docs/grafana/next/observability-as-code/schema-v2/timesettings-schema/
- ../../../../next/observability-as-code/schema-v2/variables-schema/ # /docs/grafana/next/observability-as-code/schema-v2/variables-schema/
keywords:
- grafana
- dashboard
- documentation
- json
- model
- schema v2
- v1 resource
- v2 resource
- classic
labels:
products:
- cloud
- enterprise
- oss
title: JSON model
description: View your Grafana dashboard JSON object
description: View and update your Grafana dashboard JSON object
weight: 700
refs:
annotations:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/annotate-visualizations/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/build-dashboards/annotate-visualizations/
---
# Dashboard JSON model
A dashboard in Grafana is represented by a JSON object, which stores metadata of its dashboard. Dashboard metadata includes dashboard properties, metadata from panels, template variables, panel queries, etc.
Grafana dashboards are represented as JSON objects that store metadata, panels, variables, and settings.
To view the JSON of a dashboard:
## Different dashboard schema models
1. Click **Edit** in the top-right corner of the dashboard.
1. Click **Settings**.
There are currently three dashboard JSON schema models:
- [Classic](#classic-model) - A non-Kubernetes resource used before the adoption of the Kubernetes API by Grafana in v12.2.0. It's been widely used for exporting, importing, and sharing dashboards in the Grafana dashboards collection at [grafana.com/dashboards](https://grafana.com/grafana/dashboards/).
- [V1 Resource](#v1-resource-model) - The Classic dashboard schema formatted as a Kubernetes-style resource. Its `spec` property contains the Classic model of the schema. This is the default format for API communication after Grafana v12.2.0, which enabled the Kubernetes Platform API as the default backend for Grafana dashboards. Dashboards created using the Classic model can be exported using either the Classic or the V1 Resource format.
- [V2 Resource](#v2-resource-model) - The latest format, supporting new features such as advanced layouts and conditional rendering. It models all dashboard elements as Kubernetes kinds, following Kubernetes conventions for declaring dashboard components. This format is future-proof and represents the evolving standard for dashboards.
{{< admonition type="note" >}}
[Observability as Code](https://grafana.com/docs/grafana/latest/as-code/observability-as-code/) works with all versions of the JSON model, and it's fully compatible with version 2.
{{< /admonition >}}
## Access and update the JSON model {#view-json}
To access the JSON representation of a dashboard:
1. Click **Edit**.
1. In the sidebar, click the **Dashboard options** icon.
1. In the edit pane, click **Settings**.
1. Go to the **JSON Model** tab.
1. When you've finished viewing the JSON, click **Back to dashboard** and **Exit edit**.
## JSON fields
## Classic model
When a user creates a new dashboard, a new dashboard JSON object is initialized with the following fields:
When you create a new dashboard in self-managed Grafana, a new dashboard JSON object is initialized with the following fields:
{{< admonition type="note" >}}
In the following JSON, id is shown as null which is the default value assigned to it until a dashboard is saved. Once a dashboard is saved, an integer value is assigned to the `id` field.
In the following JSON, `id` is shown as `null`, which is the default value assigned to it until a dashboard is saved.
After a dashboard is saved, an integer value is assigned to the `id` field.
{{< /admonition >}}
```json
@@ -76,26 +106,30 @@ In the following JSON, id is shown as null which is the default value assigned t
Each field in the dashboard JSON is explained below with its usage:
| Name | Usage |
| ----------------- | ----------------------------------------------------------------------------------------------------------------- |
| **id** | unique numeric identifier for the dashboard. (generated by the db) |
| **uid** | unique dashboard identifier that can be generated by anyone. string (8-40) |
| **title** | current title of dashboard |
| **tags** | tags associated with dashboard, an array of strings |
| **style** | theme of dashboard, i.e. `dark` or `light` |
| **timezone** | timezone of dashboard, i.e. `utc` or `browser` |
| **editable** | whether a dashboard is editable or not |
| **graphTooltip** | 0 for no shared crosshair or tooltip (default), 1 for shared crosshair, 2 for shared crosshair AND shared tooltip |
| **time** | time range for dashboard, i.e. last 6 hours, last 7 days, etc |
| **timepicker** | timepicker metadata, see [timepicker section](#timepicker) for details |
| **templating** | templating metadata, see [templating section](#templating) for details |
| **annotations** | annotations metadata, see [annotations](ref:annotations) for how to add them |
| **refresh** | auto-refresh interval |
| **schemaVersion** | version of the JSON schema (integer), incremented each time a Grafana update brings changes to said schema |
| **version** | version of the dashboard (integer), incremented each time the dashboard is updated |
| **panels** | panels array, see below for detail. |
<!--prettier-ignore-start -->
## Panels
| Name | Usage |
| ----------------- | ------------------------------------------------------------------------------------------ |
| **id** | unique numeric identifier for the dashboard. (generated by the db) |
| **uid** | unique dashboard identifier that can be generated by anyone. string (8-40) |
| **title** | current title of dashboard |
| **tags** | tags associated with dashboard, an array of strings |
| **style** | theme of dashboard, i.e. `dark` or `light` |
| **timezone** | timezone of dashboard, i.e. `utc` or `browser` |
| **editable** | whether a dashboard is editable or not |
| **graphTooltip** | 0 for no shared crosshair or tooltip (default), 1 for shared crosshair, 2 for shared crosshair AND shared tooltip |
| **time** | time range for dashboard, i.e. last 6 hours, last 7 days, etc |
| **timepicker** | timepicker metadata, see [timepicker section](#timepicker) for details |
| **templating** | templating metadata, see [templating section](#templating) for details |
| **annotations** | annotations metadata, see [annotations](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/annotate-visualizations/) for how to add them |
| **refresh** | auto-refresh interval|
| **schemaVersion** | version of the JSON schema (integer), incremented each time a Grafana update brings changes to said schema |
| **version** | version of the dashboard (integer), incremented each time the dashboard is updated |
| **panels** | panels array, see below for detail. |
<!--prettier-ignore-end -->
### Panels
Panels are the building blocks of a dashboard. Each panel consists of data source queries, a visualization type, aliases, and so on. The panel JSON consists of an array of JSON objects, each representing a different panel. Most of the fields are common to all panels, but some fields depend on the panel type. The following is an example of the panel JSON of a text panel.
@@ -168,18 +202,22 @@ The grid has a negative gravity that moves panels up if there is empty space abo
Usage of the fields is explained below:
| Name | Usage |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
| **collapse** | whether timepicker is collapsed or not |
| **enable** | whether timepicker is enabled or not |
| **notice** | |
| **now** | |
| **hidden** | whether timepicker is hidden or not |
| **nowDelay** | override the now time by entering a time delay. Use this option to accommodate known delays in data aggregation to avoid null values. |
| **quick_ranges** | custom quick ranges |
| **refresh_intervals** | interval options available in the refresh picker dropdown |
| **status** | |
| **type** | |
<!--prettier-ignore-start -->
| Name | Usage |
| --------------------- | --------------------------------------------------------- |
| **collapse** | whether timepicker is collapsed or not |
| **enable** | whether timepicker is enabled or not |
| **notice** | |
| **now** | |
| **hidden** | whether timepicker is hidden or not |
| **nowDelay** | override the now time by entering a time delay. Use this option to accommodate known delays in data aggregation to avoid null values. |
| **quick_ranges** | custom quick ranges |
| **refresh_intervals** | interval options available in the refresh picker dropdown |
| **status** | |
| **type** | |
<!--prettier-ignore-end -->
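Putting the fields above together, a `timepicker` block in the dashboard JSON might look like the following sketch (the values are illustrative):

```json
"timepicker": {
  "hidden": false,
  "nowDelay": "1m",
  "refresh_intervals": ["5s", "10s", "30s", "1m", "5m", "15m"]
}
```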
### templating
@@ -270,3 +308,82 @@ Usage of the above mentioned fields in the templating section is explained below
| **refresh** | configures when to refresh a variable |
| **regex** | extracts part of a series name or metric node segment |
| **type** | type of variable, i.e. `custom`, `query` or `interval` |
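As a sketch, a single query variable in the `templating` section might look like the following; the field names come from the table above, while the variable name, data source `uid`, and query are illustrative:

```json
"templating": {
  "list": [
    {
      "name": "host",
      "type": "query",
      "datasource": { "type": "prometheus", "uid": "my-prometheus-uid" },
      "query": "label_values(up, instance)",
      "regex": "",
      "refresh": 1
    }
  ]
}
```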
## V1 Resource model
The V1 Resource schema model formats the [Classic JSON model](#classic-model) schema as a Kubernetes-style resource.
The `spec` property of the schema contains the Classic-style model of the schema.
Dashboards created using the Classic model can be exported using either this model or the Classic one.
The following code snippet shows the fields included in the V1 Resource model.
```json
{
"apiVersion": "dashboard.grafana.app/v1beta1",
"kind": "Dashboard",
"metadata": {
"name": "isnt5ss",
"namespace": "stacks-521104",
"uid": "92674c0e-0360-4bb4-99ab-fb150581376d",
"resourceVersion": "1764705030717045",
"generation": 1,
"creationTimestamp": "2025-12-02T19:50:30Z",
"labels": {
"grafana.app/deprecatedInternalID": "1329"
},
"annotations": {
"grafana.app/createdBy": "user:u000000002",
"grafana.app/folder": "",
"grafana.app/saved-from-ui": "Grafana Cloud (instant)"
}
},
"spec": {
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 1329,
"links": [],
"panels": [],
"preload": false,
"schemaVersion": 42,
"tags": [],
"templating": {
"list": []
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "Africa/Abidjan",
"title": "Graphite suggestions",
"uid": "isnt5ss",
"version": 1,
"weekStart": ""
},
"status": {}
}
```
## V2 Resource model
{{< docs/public-preview product="Dashboard JSON schema v2" >}}
For the detailed V2 Resource model schema, refer to the [Swagger documentation](https://play.grafana.org/swagger?api=dashboard.grafana.app-v2beta1).

View File

@@ -213,7 +213,7 @@ To export a dashboard in its current state as a PDF, follow these steps:
1. Click **Dashboards** in the main menu.
1. Open the dashboard you want to export.
1. Click the **Export** drop-down in the top-right corner and select **Export as PDF**.
1. Click the **Export** drop-down in the sidebar and select **Export as PDF**.
1. In the **Export dashboard PDF** drawer that opens, select either **Landscape** or **Portrait** for the PDF orientation.
1. Select either **Grid** or **Simple** for the PDF layout.
1. Set the **Zoom** level; zoom in to enlarge text, or zoom out to see more data (like table columns) per panel.
@@ -229,7 +229,7 @@ Export a Grafana JSON file that contains everything you need, including layout,
1. Click **Dashboards** in the main menu.
1. Open the dashboard you want to export.
1. Click the **Export** drop-down list in the top-right corner and select **Export as code**.
1. Click the **Export** drop-down list in the sidebar and select **Export as code**.
The **Export dashboard** drawer opens.
@@ -255,7 +255,7 @@ To export a dashboard in its current state as a PNG image file, follow these ste
1. Click **Dashboards** in the main menu.
1. Open the dashboard you want to export.
1. Click the **Export** drop-down list in the top-right corner and select **Export as image**.
1. Click the **Export** drop-down list in the sidebar and select **Export as image**.
The **Export as image** drawer opens.


@@ -21,67 +21,144 @@ menuTitle: Use dashboards
title: Use dashboards
description: Learn about the features of a Grafana dashboard
weight: 100
refs:
dashboard-analytics:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/assess-dashboard-usage/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/assess-dashboard-usage/
generative-ai-features:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/manage-dashboards/#set-up-generative-ai-features-for-dashboards
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/manage-dashboards/#set-up-generative-ai-features-for-dashboards
dashboard-settings:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/modify-dashboard-settings/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/build-dashboards/modify-dashboard-settings/
repeating-rows:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/create-dashboard/#configure-repeating-rows
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/build-dashboards/create-dashboard/#configure-repeating-rows
variables:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/variables/
dashboard-folders:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/manage-dashboards/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/manage-dashboards/
sharing:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/share-dashboards-panels/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/share-dashboards-panels/
dashboard-links:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/manage-dashboard-links/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/build-dashboards/manage-dashboard-links/
panel-overview:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/panel-overview/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/panel-overview/
export-dashboards:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/share-dashboards-panels/#export-dashboards
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/share-dashboards-panels/#export-dashboards
add-ad-hoc-filters:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/add-template-variables/#add-ad-hoc-filters
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/variables/add-template-variables/#add-ad-hoc-filters
shared-dashboards:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/share-dashboards-panels/shared-dashboards/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/share-dashboards-panels/shared-dashboards/
image_maps:
- key: annotated-dashboard
src: /media/docs/grafana/dashboards/screenshot-ann-dashboards-v12.4.png
alt: An annotated image of a Grafana dashboard
points:
- x_coord: 8
y_coord: 5
content: |
**Dashboard folder**
Click the dashboard folder name to access the folder and perform other [folder management tasks](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/manage-dashboards/).
- x_coord: 17
y_coord: 5
content: |
**Dashboard title**
Create your own dashboard titles or have Grafana create them for you using [generative AI features](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/manage-dashboards/#set-up-generative-ai-features-for-dashboards).
- x_coord: 23
y_coord: 5
content: |
**Mark as favorite**
Mark the dashboard as one of your favorites to include it in your list of **Starred** dashboards in the main menu.
- x_coord: 27
y_coord: 5
content: |
**Public label**
When you [share a dashboard externally](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/share-dashboards-panels/shared-dashboards/), it's marked with the **Public** label.
- x_coord: 84
y_coord: 5
content: |
**Grafana Assistant**
[Grafana Assistant](https://grafana.com/docs/grafana-cloud/machine-learning/assistant/introduction/) combines large language models with Grafana-integrated tools.
- x_coord: 89
y_coord: 5
content: |
**Invite new users**
Invite new users to join your Grafana organization.
- x_coord: 32
y_coord: 23
content: |
**Variables**
Use [variables](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/variables/), including ad hoc filters, to create more interactive and dynamic dashboards.
- x_coord: 45
y_coord: 23
content: |
**Dashboard links**
Link to other dashboards, panels, and external websites. Learn more about [dashboard links](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/build-dashboards/manage-dashboard-links/).
- x_coord: 59
y_coord: 29
content: |
**Current dashboard time range and time picker**
Select [relative time range](#relative-time-range) options or set custom [absolute time ranges](#absolute-time-range).
You can also change the **Timezone** and **Fiscal year** settings by clicking the **Change time settings** button.
- x_coord: 67
y_coord: 29
content: |
**Time range zoom out**
Click to zoom out the time range. Learn more about [common time range controls](#common-time-range-controls).
- x_coord: 73
y_coord: 29
content: |
**Refresh dashboard**
Trigger queries and refresh dashboard data.
- x_coord: 78
y_coord: 29
content: |
**Auto refresh control**
Select a dashboard auto refresh time interval.
- x_coord: 85
y_coord: 29
content: |
**Share dashboard**
Access [dashboard sharing](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/share-dashboards-panels/) options.
- x_coord: 98
y_coord: 22.5
content: |
**Edit**
Enter edit mode, so you can make changes and access dashboard settings.
- x_coord: 98
y_coord: 31
content: |
**Export**
Access [dashboard exporting](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/share-dashboards-panels/#export-dashboards) options.
- x_coord: 98
y_coord: 39
content: |
**Content outline**
The outline provides a tree-like structure that lets you quickly navigate the dashboard.
- x_coord: 98
y_coord: 47
content: |
**Dashboard insights**
View [dashboard analytics](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/assess-dashboard-usage/) including information about users, activity, and query counts.
- x_coord: 11.5
y_coord: 30
content: |
**Row title**
A row is one way you can [group panels](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/build-dashboards/create-dashboard/#panel-groupings) in a dashboard.
- x_coord: 20
y_coord: 36
content: |
**Tab title**
A tab is one way you can [group panels](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/build-dashboards/create-dashboard/#panel-groupings) in a dashboard.
- x_coord: 21
y_coord: 45
content: |
**Panel title**
Create your own panel titles or have Grafana create them for you using [generative AI features](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/manage-dashboards/#set-up-generative-ai-features-for-dashboards).
- x_coord: 27
y_coord: 63
content: |
**Dashboard panel**
The [panel](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/panels-visualizations/panel-overview/) is the primary building block of a dashboard.
- x_coord: 19.5
y_coord: 91
content: |
**Panel legend**
Change series colors as well as y-axis and series visibility directly from the legend.
---
# Use dashboards
@@ -95,32 +172,9 @@ This topic provides an overview of dashboard features and shortcuts, and describ
The dashboard user interface provides a number of features that you can use to customize the presentation of your data.
The following image and descriptions highlight all dashboard features.
Hover your cursor over a number to display information about the dashboard element.
![An annotated image of a dashboard](/media/docs/grafana/dashboards/screenshot-dashboard-annotated-v11.3-2.png)
1. **Dashboard folder** - When you click the dashboard folder name, you can search for other dashboards contained in the folder and perform other [folder management tasks](ref:dashboard-folders).
1. **Dashboard title** - You can create your own dashboard titles or have Grafana create them for you using [generative AI features](ref:generative-ai-features).
1. **Kiosk mode** - Click to display the dashboard on a large screen such as a TV or a kiosk. Kiosk mode hides the main menu, navbar, and dashboard controls. Learn more about kiosk mode in our [How to Create Kiosks to Display Dashboards on a TV blog post](https://grafana.com/blog/2019/05/02/grafana-tutorial-how-to-create-kiosks-to-display-dashboards-on-a-tv/). Press `Esc` to leave kiosk mode.
1. **Mark as favorite** - Mark the dashboard as one of your favorites so it's included in your list of **Starred** dashboards in the main menu.
1. **Public label** - When you [share a dashboard externally](ref:shared-dashboards), it's marked with the **Public** label.
1. **Dashboard insights** - Click to view analytics about your dashboard including information about users, activity, and query counts. Learn more about [dashboard analytics](ref:dashboard-analytics).
1. **Edit** - Click to leave view-only mode and enter edit mode, where you can make changes directly to the dashboard and access dashboard settings, as well as several panel editing functions.
1. **Export** - Access [dashboard exporting](ref:export-dashboards) options.
1. **Share dashboard** - Access several [dashboard sharing](ref:sharing) options.
1. **Variables** - Use [variables](ref:variables), including ad hoc filters, to create more interactive and dynamic dashboards.
1. **Dashboard links** - Link to other dashboards, panels, and external websites. Learn more about [dashboard links](ref:dashboard-links).
1. **Current dashboard time range and time picker** - Click to select [relative time range](#relative-time-range) options and set custom [absolute time ranges](#absolute-time-range).
- You can change the **Timezone** and **Fiscal year** settings from the time range controls by clicking the **Change time settings** button.
- Time settings are saved on a per-dashboard basis.
1. **Time range zoom out** - Click to zoom out the time range. Learn more about how to use [common time range controls](#common-time-range-controls).
1. **Refresh dashboard** - Click to immediately trigger queries and refresh dashboard data.
1. **Auto refresh control** - Click to select a dashboard auto refresh time interval.
1. **Dashboard row** - A dashboard row is a logical divider within a dashboard that groups panels together.
- Rows can be collapsed or expanded allowing you to hide parts of the dashboard.
- Panels inside a collapsed row do not issue queries.
- Use [repeating rows](ref:repeating-rows) to dynamically create rows based on a template variable.
1. **Dashboard panel** - The [panel](ref:panel-overview) is the primary building block of a dashboard.
1. **Panel legend** - Change series colors as well as y-axis and series visibility directly from the legend.
{{< image-map key="annotated-dashboard" >}}
## Keyboard shortcuts
@@ -134,7 +188,7 @@ Grafana has a number of keyboard shortcuts available. Press `?` on your keyboard
- `Ctrl+K`: Opens the command palette.
- `Esc`: Exits panel when in full screen view or edit mode. Also returns you to the dashboard from dashboard settings.
**Focused panel**
### Focused panel
By hovering over a panel with the mouse, you can use shortcuts that target that panel.
@@ -263,13 +317,16 @@ Click the **Copy time range to clipboard** icon to copy the current time range t
You can also copy and paste a time range using the keyboard shortcuts `t+c` and `t+v` respectively.
#### Zoom out (Cmd+Z or Ctrl+Z)
#### Zoom out
Click the **Zoom out** icon to view a larger time range in the dashboard or panel visualization.
- Click the **Zoom out** icon to view a larger time range in the dashboard or panel visualizations
- Double click on the panel graph area (time series family visualizations only)
- Type the `t-` keyboard shortcut
#### Zoom in (only applicable to graph visualizations)
#### Zoom in
Click and drag to select the time range in the visualization that you want to view.
- Click and drag horizontally in the panel graph area to select a time range (time series family visualizations only)
- Type the `t+` keyboard shortcut
#### Refresh dashboard
@@ -285,7 +342,7 @@ Selecting the **Auto** interval schedules a refresh based on the query time rang
## Filter dashboard data
Once you've [added an ad hoc filter](ref:add-ad-hoc-filters) in the dashboard settings, you can create label/value filter pairs on the dashboard.
Once you've [added an ad hoc filter](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/visualizations/dashboards/variables/add-template-variables/#add-ad-hoc-filters) in the dashboard settings, you can create label/value filter pairs on the dashboard.
These filters are applied to all metric queries that use the specified data source and to all panels on the dashboard.
To filter dashboard data, follow these steps:


@@ -146,7 +146,7 @@ To create a variable, follow these steps:
- Variable drop-down lists are displayed in the order in which they're listed in **Variables** in the dashboard settings, so put the variables that you change most often at the top so that they're shown first (far left on the dashboard).
- By default, variables don't have a default value. This means that the topmost value in the drop-down list is always preselected. If you want to pre-populate a variable with an empty value, you can use the following workaround in the variable settings:
1. Select the **Include All Option** checkbox.
2. In the **Custom all value** field, enter a value like `+`.
2. In the **Custom all value** field, enter a value like `.+`.
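The value entered in **Custom all value** must be a valid regular expression, which is why `.+` works where a bare `+` does not: a `+` quantifier with nothing to repeat is rejected by the regex engine. A quick sketch, using a hypothetical `isValidRegex` helper (not part of Grafana), illustrates the difference:

```typescript
// A bare '+' is an invalid pattern: the quantifier has nothing to repeat,
// so the RegExp constructor throws a SyntaxError. '.+' matches one or more
// of any character, making it a valid "match everything" value.
function isValidRegex(pattern: string): boolean {
  try {
    new RegExp(pattern);
    return true;
  } catch {
    return false;
  }
}

console.log(isValidRegex('+'));  // false: "Nothing to repeat"
console.log(isValidRegex('.+')); // true
```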
## Add a query variable


@@ -35,9 +35,9 @@ refs:
destination: /docs/grafana-cloud/visualizations/dashboards/build-dashboards/manage-dashboard-links/#panel-links
configure-repeating-rows:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/create-dashboard/#configure-repeating-rows
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/create-dashboard/#configure-repeat-options
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/build-dashboards/create-dashboard/#configure-repeating-rows
destination: /docs/grafana-cloud/visualizations/dashboards/build-dashboards/create-dashboard/#configure-repeat-options
set-up-generative-ai-features-for-dashboards:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/manage-dashboards/#set-up-generative-ai-features-for-dashboards


@@ -175,9 +175,10 @@ By hovering over a panel with the mouse you can use some shortcuts that will tar
- `pl`: Hide or show legend
- `pr`: Remove Panel
## Zoom panel time range
## Pan and zoom panel time range
You can zoom the panel time range in and out, which in turn, changes the dashboard time range.
You can pan the panel time range left and right, and zoom it in and out.
This, in turn, changes the dashboard time range.
This feature is supported for the following visualizations:
@@ -191,7 +192,7 @@ This feature is supported for the following visualizations:
Click and drag on the panel to zoom in on a particular time range.
The following screen recordings show this interaction in the time series and x visualizations:
The following screen recordings show this interaction in the time series and candlestick visualizations:
Time series
@@ -211,7 +212,7 @@ For example, if the original time range is from 9:00 to 9:59, the time range cha
- Next range: 8:30 - 10:29
- Next range: 7:30 - 11:29
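The zoom-out progression above doubles the visible span at each step while keeping its center fixed. A minimal sketch of that arithmetic, using an assumed helper rather than Grafana's actual implementation:

```typescript
interface TimeRange {
  from: number; // epoch milliseconds
  to: number;
}

// Zooming out doubles the span around the center of the current range:
// a 9:00-10:00 window becomes 8:30-10:30, then 7:30-11:30, and so on.
function zoomOut(range: TimeRange): TimeRange {
  const center = (range.from + range.to) / 2;
  const half = range.to - range.from; // new half-span = old full span
  return { from: center - half, to: center + half };
}
```

Starting from a one-hour window, two zoom-out steps yield a four-hour window centered on the same instant.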
The following screen recordings demonstrate the preceding example in the time series and x visualizations:
The following screen recordings demonstrate the preceding example in the time series and heatmap visualizations:
Time series
@@ -221,6 +222,19 @@ Heatmap
{{< video-embed src="/media/docs/grafana/panels-visualizations/recording-heatmap-panel-time-zoom-out-mouse.mp4" >}}
### Pan
Click and drag the x-axis area of the panel to pan the time range.
The time range shifts by the distance you drag.
For example, if the original time range is from 9:00 to 9:59 and you drag 30 minutes to the right, the time range changes to 9:30 to 10:29.
The following screen recordings show this interaction in the time series visualization:
Time series
{{< video-embed src="/media/docs/grafana/panels-visualizations/recording-ts-time-pan-mouse.mp4" >}}
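The pan arithmetic above amounts to shifting both endpoints of the time range by the dragged offset, leaving the span unchanged. A sketch with an assumed helper (not Grafana's actual implementation):

```typescript
// Panning shifts the whole window by the dragged distance; the span stays
// the same. Dragging 30 minutes to the right moves 9:00-10:00 to 9:30-10:30.
function panTimeRange(from: number, to: number, offsetMs: number): [number, number] {
  return [from + offsetMs, to + offsetMs];
}
```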
## Add a panel
To add a panel in a new dashboard click **+ Add visualization** in the middle of the dashboard:


@@ -92,9 +92,9 @@ The data is converted as follows:
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-candles-volume-v11.6.png" max-width="750px" alt="A candlestick visualization showing the price movements of specific asset." >}}
## Zoom panel time range
## Pan and zoom panel time range
{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
## Configuration options


@@ -79,9 +79,9 @@ The data is converted as follows:
{{< figure src="/static/img/docs/heatmap-panel/heatmap.png" max-width="1025px" alt="A heatmap visualization showing the random walk distribution over time" >}}
## Zoom panel time range
## Pan and zoom panel time range
{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
## Configuration options


@@ -93,9 +93,9 @@ You can also create a state timeline visualization using time series data. To do
![State timeline with time series](/media/docs/grafana/panels-visualizations/screenshot-state-timeline-time-series-v11.4.png)
## Zoom panel time range
## Pan and zoom panel time range
{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
## Configuration options


@@ -85,9 +85,9 @@ The data is converted as follows:
{{< figure src="/static/img/docs/status-history-panel/status_history.png" max-width="1025px" alt="A status history panel with two time columns showing the status of two servers" >}}
## Zoom panel time range
## Pan and zoom panel time range
{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
## Configuration options


@@ -167,9 +167,9 @@ The following example shows three series: Min, Max, and Value. The Min and Max s
{{< docs/shared lookup="visualizations/multiple-y-axes.md" source="grafana" version="<GRAFANA_VERSION>" leveloffset="+2" >}}
## Zoom panel time range
## Pan and zoom panel time range
{{< docs/shared lookup="visualizations/panel-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
{{< docs/shared lookup="visualizations/panel-pan-zoom.md" source="grafana" version="<GRAFANA_VERSION>" >}}
## Configuration options


@@ -2,6 +2,15 @@ import { Page } from '@playwright/test';
import { test, expect } from '@grafana/plugin-e2e';
// Enable required feature toggles for Saved Searches (part of RuleList.v2)
test.use({
featureToggles: {
alertingListViewV2: true,
alertingFilterV2: true,
alertingSavedSearches: true,
},
});
/**
* UI selectors for Saved Searches e2e tests.
* Each selector is a function that takes the page and returns a locator.
@@ -26,26 +35,50 @@ const ui = {
// Indicators
emptyState: (page: Page) => page.getByText(/no saved searches/i),
defaultIcon: (page: Page) => page.locator('[title="Default search"]'),
defaultIcon: (page: Page) => page.getByRole('img', { name: /default search/i }),
duplicateError: (page: Page) => page.getByText(/already exists/i),
};
/**
* Helper to clear saved searches storage.
* UserStorage uses localStorage as fallback, so we clear both potential keys.
* Helper to clear saved searches from UserStorage.
* UserStorage persists data server-side via k8s API, so we need to delete via API.
*/
async function clearSavedSearches(page: Page) {
await page.evaluate(() => {
// Clear localStorage keys that might contain saved searches
// UserStorage stores under 'grafana.userstorage.alerting' pattern
const keysToRemove = Object.keys(localStorage).filter(
(key) => key.includes('alerting') && (key.includes('savedSearches') || key.includes('userstorage'))
);
keysToRemove.forEach((key) => localStorage.removeItem(key));
// Get namespace and user info from Grafana config
const storageInfo = await page.evaluate(() => {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const bootData = (window as any).grafanaBootData;
const user = bootData?.user;
const userUID = user?.uid === '' || !user?.uid ? String(user?.id ?? 'anonymous') : user.uid;
const resourceName = `alerting:${userUID}`;
const namespace = bootData?.settings?.namespace || 'default';
// Also clear session storage visited flag
const sessionKeysToRemove = Object.keys(sessionStorage).filter((key) => key.includes('alerting'));
sessionKeysToRemove.forEach((key) => sessionStorage.removeItem(key));
return { namespace, resourceName };
});
// Delete the UserStorage resource
try {
await page.request.delete(
`/apis/userstorage.grafana.app/v0alpha1/namespaces/${storageInfo.namespace}/user-storage/${storageInfo.resourceName}`
);
} catch (error) {
// Ignore 404 errors (resource doesn't exist)
if (!(error && typeof error === 'object' && 'status' in error && error.status === 404)) {
console.warn('Failed to clear saved searches:', error);
}
}
// Also clear localStorage as fallback storage
await page.evaluate(({ resourceName }) => {
// The UserStorage key pattern is always `{resourceName}:{key}`
// For saved searches, the key is 'savedSearches'
const key = `${resourceName}:savedSearches`;
window.localStorage.removeItem(key);
}, storageInfo);
// Clear session storage visited flag
await page.evaluate(() => {
window.sessionStorage.removeItem('grafana.alerting.ruleList.visited');
});
}
@@ -150,7 +183,7 @@ test.describe(
await ui.saveButton(page).click();
await ui.saveNameInput(page).fill('Apply Test');
await ui.saveNameInput(page).fill('Firing Rules');
await ui.saveConfirmButton(page).click();
// Clear the search
@@ -159,7 +192,7 @@ test.describe(
// Apply the saved search
await ui.savedSearchesButton(page).click();
await page.getByRole('button', { name: /apply search.*apply test/i }).click();
await page.getByRole('button', { name: /apply.*search.*firing rules/i }).click();
// Verify the search input is updated
await expect(ui.searchInput(page)).toHaveValue('state:firing');
@@ -182,7 +215,7 @@ test.describe(
await ui.renameMenuItem(page).click();
// Enter new name
const renameInput = page.getByDisplayValue('Original Name');
const renameInput = page.getByRole('textbox', { name: /enter a name/i });
await renameInput.clear();
await renameInput.fill('Renamed Search');
await page.keyboard.press('Enter');
@@ -260,12 +293,12 @@ test.describe(
await expect(ui.saveNameInput(page)).toBeVisible();
// Press Escape to cancel
// Press Escape to cancel - this closes the entire dropdown
await page.keyboard.press('Escape');
// Verify we're back to list mode
await expect(ui.saveNameInput(page)).not.toBeVisible();
await expect(ui.saveButton(page)).toBeVisible();
// Verify the entire dialog is closed
await expect(ui.dropdown(page)).not.toBeVisible();
await expect(ui.saveButton(page)).not.toBeVisible();
});
}
);


@@ -727,17 +727,6 @@ const injectedRtkApi = api
}),
invalidatesTags: ['dashboards', 'permissions'],
}),
restoreDashboardVersionByUid: build.mutation<
RestoreDashboardVersionByUidApiResponse,
RestoreDashboardVersionByUidApiArg
>({
query: (queryArg) => ({
url: `/dashboards/uid/${queryArg.uid}/restore`,
method: 'POST',
body: queryArg.restoreDashboardVersionCommand,
}),
invalidatesTags: ['dashboards', 'versions'],
}),
getDashboardVersionsByUid: build.query<GetDashboardVersionsByUidApiResponse, GetDashboardVersionsByUidApiArg>({
query: (queryArg) => ({
url: `/dashboards/uid/${queryArg.uid}/versions`,
@@ -2628,26 +2617,6 @@ export type UpdateDashboardPermissionsByUidApiArg = {
uid: string;
updateDashboardAclCommand: UpdateDashboardAclCommand;
};
export type RestoreDashboardVersionByUidApiResponse = /** status 200 (empty) */ {
/** FolderUID The unique identifier (uid) of the folder the dashboard belongs to. */
folderUid?: string;
/** ID The unique identifier (id) of the created/updated dashboard. */
id: number;
/** Status status of the response. */
status: string;
/** Slug The slug of the dashboard. */
title: string;
/** UID The unique identifier (uid) of the created/updated dashboard. */
uid: string;
/** URL The relative URL for accessing the created/updated dashboard. */
url: string;
/** Version The version of the dashboard. */
version: number;
};
export type RestoreDashboardVersionByUidApiArg = {
uid: string;
restoreDashboardVersionCommand: RestoreDashboardVersionCommand;
};
export type GetDashboardVersionsByUidApiResponse = /** status 200 (empty) */ DashboardVersionResponseMeta;
export type GetDashboardVersionsByUidApiArg = {
uid: string;
@@ -4568,9 +4537,6 @@ export type DashboardAclUpdateItem = {
export type UpdateDashboardAclCommand = {
items?: DashboardAclUpdateItem[];
};
export type RestoreDashboardVersionCommand = {
version?: number;
};
export type DashboardVersionMeta = {
created?: string;
createdBy?: string;
@@ -6633,7 +6599,6 @@ export const {
useGetDashboardPermissionsListByUidQuery,
useLazyGetDashboardPermissionsListByUidQuery,
useUpdateDashboardPermissionsByUidMutation,
useRestoreDashboardVersionByUidMutation,
useGetDashboardVersionsByUidQuery,
useLazyGetDashboardVersionsByUidQuery,
useGetDashboardVersionByUidQuery,


@@ -29,11 +29,14 @@ export interface Options extends common.SingleStatBaseOptions {
barWidthFactor: number;
effects: GaugePanelEffects;
endpointMarker?: ('point' | 'glow' | 'none');
minVizHeight: number;
minVizWidth: number;
segmentCount: number;
segmentSpacing: number;
shape: ('circle' | 'gauge');
showThresholdLabels: boolean;
showThresholdMarkers: boolean;
sizing: common.BarGaugeSizing;
sparkline?: boolean;
textMode?: ('auto' | 'value_and_name' | 'value' | 'name' | 'none');
}
@@ -43,11 +46,14 @@ export const defaultOptions: Partial<Options> = {
barWidthFactor: 0.5,
effects: {},
endpointMarker: 'point',
minVizHeight: 75,
minVizWidth: 75,
segmentCount: 1,
segmentSpacing: 0.3,
shape: 'gauge',
showThresholdLabels: false,
showThresholdMarkers: true,
sizing: common.BarGaugeSizing.Auto,
sparkline: true,
textMode: 'auto',
};


@@ -117,6 +117,44 @@ export const MyComponent = () => {
};
```
### Custom Header Rendering
Column headers can be customized using strings, React elements, or renderer functions. The `header` property accepts any value that matches React Table's `Renderer` type.
**Important:** When using custom header content, prefer inline elements (like `<span>`) over block elements (like `<div>`) to avoid layout issues. Block-level elements can cause extra spacing and alignment problems in table headers because they disrupt the table's inline flow. Use `display: inline-flex` or `display: inline-block` when you need flexbox or block-like behavior.
```tsx
const columns: Array<Column<TableData>> = [
// React element header
{
id: 'checkbox',
header: (
<>
<label htmlFor="select-all" className="sr-only">
Select all rows
</label>
<Checkbox id="select-all" />
</>
),
cell: () => <Checkbox aria-label="Select row" />,
},
// Function renderer header
{
id: 'firstName',
header: () => (
<span style={{ display: 'inline-flex', alignItems: 'center', gap: '8px' }}>
<Icon name="user" size="sm" />
<span>First Name</span>
</span>
),
},
// String header
{ id: 'lastName', header: 'Last name' },
];
```
### Custom Cell Rendering
Individual cells can be rendered using custom content by defining a `cell` property on the column definition.


@@ -3,8 +3,11 @@ import { useCallback, useMemo, useState } from 'react';
import { CellProps } from 'react-table';
import { LinkButton } from '../Button/Button';
import { Checkbox } from '../Forms/Checkbox';
import { Field } from '../Forms/Field';
import { Icon } from '../Icon/Icon';
import { Input } from '../Input/Input';
import { Text } from '../Text/Text';
import { FetchDataArgs, InteractiveTable, InteractiveTableHeaderTooltip } from './InteractiveTable';
import mdx from './InteractiveTable.mdx';
@@ -297,4 +300,40 @@ export const WithControlledSort: StoryFn<typeof InteractiveTable> = (args) => {
return <InteractiveTable {...args} data={data} pageSize={15} fetchData={fetchData} />;
};
export const WithCustomHeader: TableStoryObj = {
args: {
columns: [
// React element header
{
id: 'checkbox',
header: (
<>
<label htmlFor="select-all" className="sr-only">
Select all rows
</label>
<Checkbox id="select-all" />
</>
),
cell: () => <Checkbox aria-label="Select row" />,
},
// Function renderer header
{
id: 'firstName',
header: () => (
<span style={{ display: 'inline-flex', alignItems: 'center', gap: '8px' }}>
<Icon name="user" size="sm" />
<Text element="span">First Name</Text>
</span>
),
sortType: 'string',
},
// String header
{ id: 'lastName', header: 'Last name', sortType: 'string' },
{ id: 'car', header: 'Car', sortType: 'string' },
{ id: 'age', header: 'Age', sortType: 'number' },
],
data: pageableData.slice(0, 10),
getRowId: (r) => r.id,
},
};
export default meta;


@@ -2,6 +2,9 @@ import { render, screen, within } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import * as React from 'react';
import { Checkbox } from '../Forms/Checkbox';
import { Icon } from '../Icon/Icon';
import { InteractiveTable } from './InteractiveTable';
import { Column } from './types';
@@ -247,4 +250,104 @@ describe('InteractiveTable', () => {
expect(fetchData).toHaveBeenCalledWith({ sortBy: [{ id: 'id', desc: false }] });
});
});
describe('custom header rendering', () => {
it('should render string headers', () => {
const columns: Array<Column<TableData>> = [{ id: 'id', header: 'ID' }];
const data: TableData[] = [{ id: '1', value: '1', country: 'Sweden' }];
render(<InteractiveTable columns={columns} data={data} getRowId={getRowId} />);
expect(screen.getByRole('columnheader', { name: 'ID' })).toBeInTheDocument();
});
it('should render React element headers', () => {
const columns: Array<Column<TableData>> = [
{
id: 'checkbox',
header: (
<>
<label htmlFor="select-all" className="sr-only">
Select all rows
</label>
<Checkbox id="select-all" data-testid="header-checkbox" />
</>
),
cell: () => <Checkbox data-testid="cell-checkbox" aria-label="Select row" />,
},
];
const data: TableData[] = [{ id: '1', value: '1', country: 'Sweden' }];
render(<InteractiveTable columns={columns} data={data} getRowId={getRowId} />);
expect(screen.getByTestId('header-checkbox')).toBeInTheDocument();
expect(screen.getByTestId('cell-checkbox')).toBeInTheDocument();
expect(screen.getByLabelText('Select all rows')).toBeInTheDocument();
expect(screen.getByLabelText('Select row')).toBeInTheDocument();
expect(screen.getByText('Select all rows')).toBeInTheDocument();
});
it('should render function renderer headers', () => {
const columns: Array<Column<TableData>> = [
{
id: 'firstName',
header: () => (
<span style={{ display: 'inline-flex', alignItems: 'center', gap: '8px' }}>
<Icon name="user" size="sm" data-testid="header-icon" />
<span>First Name</span>
</span>
),
sortType: 'string',
},
];
const data: TableData[] = [{ id: '1', value: '1', country: 'Sweden' }];
render(<InteractiveTable columns={columns} data={data} getRowId={getRowId} />);
expect(screen.getByTestId('header-icon')).toBeInTheDocument();
expect(screen.getByRole('columnheader', { name: /first name/i })).toBeInTheDocument();
});
it('should render all header types together', () => {
const columns: Array<Column<TableData>> = [
{
id: 'checkbox',
header: (
<>
<label htmlFor="select-all" className="sr-only">
Select all rows
</label>
<Checkbox id="select-all" data-testid="header-checkbox" />
</>
),
cell: () => <Checkbox aria-label="Select row" />,
},
{
id: 'id',
header: () => (
<span style={{ display: 'inline-flex', alignItems: 'center', gap: '8px' }}>
<Icon name="user" size="sm" data-testid="header-icon" />
<span>ID</span>
</span>
),
sortType: 'string',
},
{ id: 'country', header: 'Country', sortType: 'string' },
{ id: 'value', header: 'Value' },
];
const data: TableData[] = [
{ id: '1', value: 'Value 1', country: 'Sweden' },
{ id: '2', value: 'Value 2', country: 'Norway' },
];
render(<InteractiveTable columns={columns} data={data} getRowId={getRowId} />);
expect(screen.getByTestId('header-checkbox')).toBeInTheDocument();
expect(screen.getByTestId('header-icon')).toBeInTheDocument();
expect(screen.getByRole('columnheader', { name: 'Country' })).toBeInTheDocument();
expect(screen.getByRole('columnheader', { name: 'Value' })).toBeInTheDocument();
// Verify data is rendered
expect(screen.getByText('Sweden')).toBeInTheDocument();
expect(screen.getByText('Norway')).toBeInTheDocument();
expect(screen.getByText('Value 1')).toBeInTheDocument();
expect(screen.getByText('Value 2')).toBeInTheDocument();
});
});
});


@@ -1,5 +1,5 @@
import { ReactNode } from 'react';
import { CellProps, DefaultSortTypes, IdType, SortByFn } from 'react-table';
import { CellProps, DefaultSortTypes, HeaderProps, IdType, Renderer, SortByFn } from 'react-table';
export interface Column<TableData extends object> {
/**
@@ -11,9 +11,9 @@ export interface Column<TableData extends object> {
*/
cell?: (props: CellProps<TableData>) => ReactNode;
/**
* Header name. if `undefined` the header will be empty. Useful for action columns.
* Header name. Can be a string, renderer function, or undefined. If `undefined` the header will be empty. Useful for action columns.
*/
header?: string;
header?: Renderer<HeaderProps<TableData>>;
/**
* Column sort type. If `undefined` the column will not be sortable.
* */


@@ -795,6 +795,10 @@ func (hs *HTTPServer) GetDashboardVersion(c *contextmodel.ReqContext) response.R
// swagger:route POST /dashboards/uid/{uid}/restore dashboards versions restoreDashboardVersionByUID
//
// Restore a dashboard to a given dashboard version using UID.
// This API will be removed when /apis/dashboards.grafana.app/v1 is released.
// You can restore a dashboard by reading it from history, then creating it again.
//
// Deprecated: true
//
// Responses:
// 200: postDashboardResponse


@@ -76,21 +76,27 @@ func (hs *HTTPServer) CreateDashboardSnapshot(c *contextmodel.ReqContext) {
return
}
// Do not check permissions when the instance snapshot public mode is enabled
if !hs.Cfg.SnapshotPublicMode {
evaluator := ac.EvalAll(ac.EvalPermission(dashboards.ActionSnapshotsCreate), ac.EvalPermission(dashboards.ActionDashboardsRead, dashboards.ScopeDashboardsProvider.GetResourceScopeUID(cmd.Dashboard.GetNestedString("uid"))))
if canSave, err := hs.AccessControl.Evaluate(c.Req.Context(), c.SignedInUser, evaluator); err != nil || !canSave {
c.JsonApiErr(http.StatusForbidden, "forbidden", err)
return
}
}
dashboardsnapshots.CreateDashboardSnapshot(c, snapshot.SnapshotSharingOptions{
cfg := snapshot.SnapshotSharingOptions{
SnapshotsEnabled: hs.Cfg.SnapshotEnabled,
ExternalEnabled: hs.Cfg.ExternalEnabled,
ExternalSnapshotName: hs.Cfg.ExternalSnapshotName,
ExternalSnapshotURL: hs.Cfg.ExternalSnapshotUrl,
}, cmd, hs.dashboardsnapshotsService)
}
if hs.Cfg.SnapshotPublicMode {
// Public mode: no user or dashboard validation needed
dashboardsnapshots.CreateDashboardSnapshotPublic(c, cfg, cmd, hs.dashboardsnapshotsService)
return
}
// Regular mode: check permissions
evaluator := ac.EvalAll(ac.EvalPermission(dashboards.ActionSnapshotsCreate), ac.EvalPermission(dashboards.ActionDashboardsRead, dashboards.ScopeDashboardsProvider.GetResourceScopeUID(cmd.Dashboard.GetNestedString("uid"))))
if canSave, err := hs.AccessControl.Evaluate(c.Req.Context(), c.SignedInUser, evaluator); err != nil || !canSave {
c.JsonApiErr(http.StatusForbidden, "forbidden", err)
return
}
dashboardsnapshots.CreateDashboardSnapshot(c, cfg, cmd, hs.dashboardsnapshotsService)
}
// GET /api/snapshots/:key
@@ -213,13 +219,6 @@ func (hs *HTTPServer) DeleteDashboardSnapshot(c *contextmodel.ReqContext) respon
return response.Error(http.StatusUnauthorized, "OrgID mismatch", nil)
}
if queryResult.External {
err := dashboardsnapshots.DeleteExternalDashboardSnapshot(queryResult.ExternalDeleteURL)
if err != nil {
return response.Error(http.StatusInternalServerError, "Failed to delete external dashboard", err)
}
}
// Dashboard can be empty (creation error or external snapshot). This means that the mustInt here returns a 0,
// which before RBAC would result in a dashboard which has no ACL. A dashboard without an ACL would fallback
// to the users org role, which for editors and admins would essentially always be allowed here. With RBAC,
@@ -239,6 +238,13 @@ func (hs *HTTPServer) DeleteDashboardSnapshot(c *contextmodel.ReqContext) respon
}
}
if queryResult.External {
err := dashboardsnapshots.DeleteExternalDashboardSnapshot(queryResult.ExternalDeleteURL)
if err != nil {
return response.Error(http.StatusInternalServerError, "Failed to delete external dashboard", err)
}
}
cmd := &dashboardsnapshots.DeleteDashboardSnapshotCommand{DeleteKey: queryResult.DeleteKey}
if err := hs.dashboardsnapshotsService.DeleteDashboardSnapshot(c.Req.Context(), cmd); err != nil {


@@ -32,6 +32,8 @@ import (
var (
logger = glog.New("data-proxy-log")
client = newHTTPClient()
errPluginProxyRouteAccessDenied = errors.New("plugin proxy route access denied")
)
type DataSourceProxy struct {
@@ -308,12 +310,21 @@ func (proxy *DataSourceProxy) validateRequest() error {
if err != nil {
return err
}
// issues/116273: When the input route is empty (or cleans to "."), we must not leave it as ".".
// `CleanRelativePath` never returns "./" prefixes, so the common prefix we need to compare
// against is the empty string.
if r1 == "." && proxy.proxyPath != "." {
r1 = ""
}
if r2 == "." && route.Path != "." {
r2 = ""
}
if !strings.HasPrefix(r1, r2) {
continue
}
if !proxy.hasAccessToRoute(route) {
return errors.New("plugin proxy route access denied")
return errPluginProxyRouteAccessDenied
}
proxy.matchedRoute = route
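The "."-normalization above can be sketched in isolation. This is a hedged stand-in: the real code uses Grafana's `util.CleanRelativePath`, while the standard library's `path.Clean` is used here to reproduce the relevant behavior (`path.Clean("")` returns `"."`).

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// matchRoute sketches the fixed prefix check: clean both paths, then
// normalize "." back to "" unless the raw value really was ".", so an
// empty fallback route matches every input and its role check applies.
func matchRoute(proxyPath, routePath string) bool {
	r1 := path.Clean(proxyPath)
	r2 := path.Clean(routePath)
	if r1 == "." && proxyPath != "." {
		r1 = ""
	}
	if r2 == "." && routePath != "." {
		r2 = ""
	}
	// Every string has the empty string as a prefix, so a "" route
	// matches all inputs.
	return strings.HasPrefix(r1, r2)
}

func main() {
	fmt.Println(matchRoute("api/v2/leak-ur-secrets", "")) // true: fallback route now matches
	fmt.Println(matchRoute("api", "."))                   // false: literal "." route stays literal
}
```

This mirrors the regression cases in the test below: an empty configuration path now matches (and therefore access-checks) any input, while a literal `"."` route only matches inputs that also clean to `"."`.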


@@ -673,6 +673,94 @@ func TestIntegrationDataSourceProxy_routeRule(t *testing.T) {
runDatasourceAuthTest(t, secretsService, secretsStore, cfg, test)
}
})
t.Run("Regression of 116273: Fallback routes should apply fallback route roles", func(t *testing.T) {
for _, tc := range []struct {
InputPath string
ConfigurationPath string
ExpectError bool
}{
{
InputPath: "api/v2/leak-ur-secrets",
ConfigurationPath: "",
ExpectError: true,
},
{
InputPath: "",
ConfigurationPath: "",
ExpectError: true,
},
{
InputPath: ".",
ConfigurationPath: ".",
ExpectError: true,
},
{
InputPath: "",
ConfigurationPath: ".",
ExpectError: false,
},
{
InputPath: "api",
ConfigurationPath: ".",
ExpectError: false,
},
} {
orEmptyStr := func(s string) string {
if s == "" {
return "<empty>"
}
return s
}
t.Run(
fmt.Sprintf("with inputPath=%s, configurationPath=%s, expectError=%v",
orEmptyStr(tc.InputPath), orEmptyStr(tc.ConfigurationPath), tc.ExpectError),
func(t *testing.T) {
ds := &datasources.DataSource{
UID: "dsUID",
JsonData: simplejson.New(),
}
routes := []*plugins.Route{
{
Path: tc.ConfigurationPath,
ReqRole: org.RoleAdmin,
Method: "GET",
},
{
Path: tc.ConfigurationPath,
ReqRole: org.RoleAdmin,
Method: "POST",
},
{
Path: tc.ConfigurationPath,
ReqRole: org.RoleAdmin,
Method: "PUT",
},
{
Path: tc.ConfigurationPath,
ReqRole: org.RoleAdmin,
Method: "DELETE",
},
}
req, err := http.NewRequestWithContext(t.Context(), "GET", "http://localhost/"+tc.InputPath, nil)
require.NoError(t, err, "failed to create HTTP request")
ctx := &contextmodel.ReqContext{
Context: &web.Context{Req: req},
SignedInUser: &user.SignedInUser{OrgRole: org.RoleViewer},
}
proxy, err := setupDSProxyTest(t, ctx, ds, routes, tc.InputPath)
require.NoError(t, err, "failed to setup proxy test")
err = proxy.validateRequest()
if tc.ExpectError {
require.ErrorIs(t, err, errPluginProxyRouteAccessDenied, "expected the request to be denied with an access denied error")
} else {
require.NoError(t, err, "request was unexpectedly denied access")
}
},
)
}
})
}
// test DataSourceProxy request handling.


@@ -0,0 +1,602 @@
package models
import (
"context"
"testing"
"time"
"github.com/grafana/grafana-plugin-sdk-go/backend"
"github.com/stretchr/testify/require"
"go.opentelemetry.io/otel"
"github.com/grafana/grafana/pkg/promlib/intervalv2"
)
var (
testNow = time.Now()
testIntervalCalculator = intervalv2.NewCalculator()
testTracer = otel.Tracer("test/interval")
)
func TestCalculatePrometheusInterval(t *testing.T) {
_, span := testTracer.Start(context.Background(), "test")
defer span.End()
tests := []struct {
name string
queryInterval string
dsScrapeInterval string
intervalMs int64
intervalFactor int64
query backend.DataQuery
want time.Duration
wantErr bool
}{
{
name: "min step 2m with 300000 intervalMs",
queryInterval: "2m",
dsScrapeInterval: "",
intervalMs: 300000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 5 * time.Minute,
MaxDataPoints: 761,
},
want: 2 * time.Minute,
wantErr: false,
},
{
name: "min step 2m with 900000 intervalMs",
queryInterval: "2m",
dsScrapeInterval: "",
intervalMs: 900000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 15 * time.Minute,
MaxDataPoints: 175,
},
want: 2 * time.Minute,
wantErr: false,
},
{
name: "with step parameter",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(12 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 30 * time.Second,
wantErr: false,
},
{
name: "without step parameter",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 15 * time.Second,
wantErr: false,
},
{
name: "with high intervalFactor",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 10,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 20 * time.Minute,
wantErr: false,
},
{
name: "with low intervalFactor",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 2 * time.Minute,
wantErr: false,
},
{
name: "with specified scrape-interval in data source",
queryInterval: "",
dsScrapeInterval: "240s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 4 * time.Minute,
wantErr: false,
},
{
name: "with zero intervalFactor defaults to 1",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 0,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 15 * time.Second,
wantErr: false,
},
{
name: "with $__interval variable",
queryInterval: "$__interval",
dsScrapeInterval: "15s",
intervalMs: 60000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 120 * time.Second,
wantErr: false,
},
{
name: "with ${__interval} variable",
queryInterval: "${__interval}",
dsScrapeInterval: "15s",
intervalMs: 60000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 120 * time.Second,
wantErr: false,
},
{
name: "with ${__interval} variable and explicit interval",
queryInterval: "1m",
dsScrapeInterval: "15s",
intervalMs: 60000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 1 * time.Minute,
wantErr: false,
},
{
name: "with $__rate_interval variable",
queryInterval: "$__rate_interval",
dsScrapeInterval: "30s",
intervalMs: 100000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 100 * time.Second,
MaxDataPoints: 12384,
},
want: 130 * time.Second,
wantErr: false,
},
{
name: "with ${__rate_interval} variable",
queryInterval: "${__rate_interval}",
dsScrapeInterval: "30s",
intervalMs: 100000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 100 * time.Second,
MaxDataPoints: 12384,
},
want: 130 * time.Second,
wantErr: false,
},
{
name: "intervalMs 100s, minStep override 150s and scrape interval 30s",
queryInterval: "150s",
dsScrapeInterval: "30s",
intervalMs: 100000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 100 * time.Second,
MaxDataPoints: 12384,
},
want: 150 * time.Second,
wantErr: false,
},
{
name: "intervalMs 120s, minStep override 150s and ds scrape interval 30s",
queryInterval: "150s",
dsScrapeInterval: "30s",
intervalMs: 120000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 120 * time.Second,
MaxDataPoints: 12384,
},
want: 150 * time.Second,
wantErr: false,
},
{
name: "intervalMs 120s, minStep auto (interval not overridden) and ds scrape interval 30s",
queryInterval: "120s",
dsScrapeInterval: "30s",
intervalMs: 120000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 120 * time.Second,
MaxDataPoints: 12384,
},
want: 120 * time.Second,
wantErr: false,
},
{
name: "interval and minStep are automatically calculated and ds scrape interval 30s and time range 1 hour",
queryInterval: "30s",
dsScrapeInterval: "30s",
intervalMs: 30000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 30 * time.Second,
MaxDataPoints: 12384,
},
want: 30 * time.Second,
wantErr: false,
},
{
name: "minStep is $__rate_interval and ds scrape interval 30s and time range 1 hour",
queryInterval: "$__rate_interval",
dsScrapeInterval: "30s",
intervalMs: 30000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 30 * time.Second,
MaxDataPoints: 12384,
},
want: 2 * time.Minute,
wantErr: false,
},
{
name: "minStep is $__rate_interval and ds scrape interval 30s and time range 2 days",
queryInterval: "$__rate_interval",
dsScrapeInterval: "30s",
intervalMs: 120000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 120 * time.Second,
MaxDataPoints: 12384,
},
want: 150 * time.Second,
wantErr: false,
},
{
name: "minStep is $__interval and ds scrape interval 15s and time range 2 days",
queryInterval: "$__interval",
dsScrapeInterval: "15s",
intervalMs: 120000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(2 * 24 * time.Hour),
},
Interval: 120 * time.Second,
MaxDataPoints: 12384,
},
want: 120 * time.Second,
wantErr: false,
},
{
name: "with empty dsScrapeInterval defaults to 15s",
queryInterval: "",
dsScrapeInterval: "",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 15 * time.Second,
wantErr: false,
},
{
name: "with very short time range",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Minute),
},
Interval: 1 * time.Minute,
},
want: 15 * time.Second,
wantErr: false,
},
{
name: "with very long time range",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(30 * 24 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 30 * time.Minute,
wantErr: false,
},
{
name: "with manual interval override",
queryInterval: "5m",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 5 * time.Minute,
wantErr: false,
},
{
name: "minStep is auto and ds scrape interval 30s and time range 1 hour",
queryInterval: "",
dsScrapeInterval: "30s",
intervalMs: 30000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 30 * time.Second,
MaxDataPoints: 1613,
},
want: 30 * time.Second,
wantErr: false,
},
{
name: "minStep is auto and ds scrape interval 15s and time range 5 minutes",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 15000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(5 * time.Minute),
},
Interval: 15 * time.Second,
MaxDataPoints: 1055,
},
want: 15 * time.Second,
wantErr: false,
},
// Additional test cases for better coverage
{
name: "with $__interval_ms variable",
queryInterval: "$__interval_ms",
dsScrapeInterval: "15s",
intervalMs: 60000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 120 * time.Second,
wantErr: false,
},
{
name: "with ${__interval_ms} variable",
queryInterval: "${__interval_ms}",
dsScrapeInterval: "15s",
intervalMs: 60000,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: 120 * time.Second,
wantErr: false,
},
{
name: "with MaxDataPoints zero",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 1 * time.Minute,
MaxDataPoints: 0,
},
want: 15 * time.Second,
wantErr: false,
},
{
name: "with negative intervalFactor",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: -5,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: -10 * time.Minute,
wantErr: false,
},
{
name: "with invalid interval string that fails parsing",
queryInterval: "invalid-interval",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(48 * time.Hour),
},
Interval: 1 * time.Minute,
},
want: time.Duration(0),
wantErr: true,
},
{
name: "with very small MaxDataPoints",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 1 * time.Minute,
MaxDataPoints: 10,
},
want: 5 * time.Minute,
wantErr: false,
},
{
name: "when safeInterval is larger than calculatedInterval",
queryInterval: "",
dsScrapeInterval: "15s",
intervalMs: 0,
intervalFactor: 1,
query: backend.DataQuery{
TimeRange: backend.TimeRange{
From: testNow,
To: testNow.Add(1 * time.Hour),
},
Interval: 1 * time.Minute,
MaxDataPoints: 10000,
},
want: 15 * time.Second,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := calculatePrometheusInterval(
tt.queryInterval,
tt.dsScrapeInterval,
tt.intervalMs,
tt.intervalFactor,
tt.query,
testIntervalCalculator,
)
if tt.wantErr {
require.Error(t, err)
return
}
require.NoError(t, err)
require.Equal(t, tt.want, got)
})
}
}


@@ -92,7 +92,6 @@ const (
)
// Internal interval and range variables with {} syntax
// Repetitive code, we should have functionality to unify these
const (
varIntervalAlt = "${__interval}"
varIntervalMsAlt = "${__interval_ms}"
@@ -112,8 +111,16 @@ const (
UnknownQueryType TimeSeriesQueryType = "unknown"
)
// safeResolution is the maximum number of data points to prevent excessive resolution.
// This ensures queries don't exceed reasonable data point limits, improving performance
// and preventing potential memory issues. The value of 11000 provides a good balance
// between resolution and performance for most use cases.
var safeResolution = 11000
// rateIntervalMultiplier is the minimum multiplier for rate interval calculation.
// Rate intervals should be at least 4x the scrape interval to ensure accurate rate calculations.
const rateIntervalMultiplier = 4
// QueryModel includes both the common and specific values
// NOTE: this struct may have issues when decoding JSON that requires the special handling
// registered in https://github.com/grafana/grafana-plugin-sdk-go/blob/v0.228.0/experimental/apis/data/v0alpha1/query.go#L298
@@ -154,7 +161,7 @@ type Query struct {
// may be either a string or DataSourceRef
type internalQueryModel struct {
PrometheusQueryProperties `json:",inline"`
//sdkapi.CommonQueryProperties `json:",inline"`
// sdkapi.CommonQueryProperties `json:",inline"`
IntervalMS float64 `json:"intervalMs,omitempty"`
// The following properties may be part of the request payload, however they are not saved in panel JSON
@@ -272,44 +279,121 @@ func (query *Query) TimeRange() TimeRange {
}
}
// isRateIntervalVariable checks if the interval string is a rate interval variable
// ($__rate_interval, ${__rate_interval}, $__rate_interval_ms, or ${__rate_interval_ms})
func isRateIntervalVariable(interval string) bool {
return interval == varRateInterval ||
interval == varRateIntervalAlt ||
interval == varRateIntervalMs ||
interval == varRateIntervalMsAlt
}
// replaceVariable replaces both $__variable and ${__variable} formats in the expression
func replaceVariable(expr, dollarFormat, altFormat, replacement string) string {
expr = strings.ReplaceAll(expr, dollarFormat, replacement)
expr = strings.ReplaceAll(expr, altFormat, replacement)
return expr
}
// isManualIntervalOverride checks if the interval is a manually specified non-variable value
// that should override the calculated interval
func isManualIntervalOverride(interval string) bool {
return interval != "" &&
interval != varInterval &&
interval != varIntervalAlt &&
interval != varIntervalMs &&
interval != varIntervalMsAlt
}
// maxDuration returns the maximum of two durations
func maxDuration(a, b time.Duration) time.Duration {
if a > b {
return a
}
return b
}
// normalizeIntervalFactor ensures intervalFactor is at least 1
func normalizeIntervalFactor(factor int64) int64 {
if factor == 0 {
return 1
}
return factor
}
// calculatePrometheusInterval calculates the optimal step interval for a Prometheus query.
//
// The function determines the query step interval by considering multiple factors:
// - The minimum step specified in the query (queryInterval)
// - The data source scrape interval (dsScrapeInterval)
// - The requested interval in milliseconds (intervalMs)
// - The time range and maximum data points from the query
// - The interval factor multiplier
//
// Special handling:
// - Variable intervals ($__interval, $__rate_interval, etc.) are replaced with calculated values
// - Rate interval variables ($__rate_interval, ${__rate_interval}) use calculateRateInterval for proper rate() function support
// - Manual interval overrides (non-variable strings) take precedence over calculated values
// - The final interval ensures safe resolution limits are not exceeded
//
// Parameters:
// - queryInterval: The minimum step interval string (may contain variables like $__interval or $__rate_interval)
// - dsScrapeInterval: The data source scrape interval (e.g., "15s", "30s")
// - intervalMs: The requested interval in milliseconds
// - intervalFactor: Multiplier for the calculated interval (defaults to 1 if 0)
// - query: The backend data query containing time range and max data points
// - intervalCalculator: Calculator for determining optimal intervals
//
// Returns:
// - The calculated step interval as a time.Duration
// - An error if the interval cannot be calculated (e.g., invalid interval string)
func calculatePrometheusInterval(
queryInterval, dsScrapeInterval string,
intervalMs, intervalFactor int64,
query backend.DataQuery,
intervalCalculator intervalv2.Calculator,
) (time.Duration, error) {
// we need to compare the original query model after it is overwritten below to variables so that we can
// calculate the rateInterval if it is equal to $__rate_interval or ${__rate_interval}
// Preserve the original interval for later comparison, as it may be modified below
originalQueryInterval := queryInterval
// If we are using variable for interval/step, we will replace it with calculated interval
// If we are using a variable for minStep, replace it with empty string
// so that the interval calculation proceeds with the default logic
if isVariableInterval(queryInterval) {
queryInterval = ""
}
// Get the minimum interval from various sources (dsScrapeInterval, queryInterval, intervalMs)
minInterval, err := gtime.GetIntervalFrom(dsScrapeInterval, queryInterval, intervalMs, 15*time.Second)
if err != nil {
return time.Duration(0), err
}
// Calculate the optimal interval based on time range and max data points
calculatedInterval := intervalCalculator.Calculate(query.TimeRange, minInterval, query.MaxDataPoints)
// Calculate the safe interval to prevent too many data points
safeInterval := intervalCalculator.CalculateSafeInterval(query.TimeRange, int64(safeResolution))
adjustedInterval := safeInterval.Value
if calculatedInterval.Value > safeInterval.Value {
adjustedInterval = calculatedInterval.Value
}
// Use the larger of calculated or safe interval to ensure we don't exceed resolution limits
adjustedInterval := maxDuration(calculatedInterval.Value, safeInterval.Value)
// here is where we compare for $__rate_interval or ${__rate_interval}
if originalQueryInterval == varRateInterval || originalQueryInterval == varRateIntervalAlt {
// Handle rate interval variables: these require special calculation
if isRateIntervalVariable(originalQueryInterval) {
// Rate interval is final and is not affected by resolution
return calculateRateInterval(adjustedInterval, dsScrapeInterval), nil
} else {
queryIntervalFactor := intervalFactor
if queryIntervalFactor == 0 {
queryIntervalFactor = 1
}
return time.Duration(int64(adjustedInterval) * queryIntervalFactor), nil
}
// Handle manual interval override: if user specified a non-variable interval,
// it takes precedence over calculated values
if isManualIntervalOverride(originalQueryInterval) {
if parsedInterval, err := gtime.ParseIntervalStringToTimeDuration(originalQueryInterval); err == nil {
return parsedInterval, nil
}
// If parsing fails, fall through to calculated interval with factor
}
// Apply interval factor to the adjusted interval
normalizedFactor := normalizeIntervalFactor(intervalFactor)
return time.Duration(int64(adjustedInterval) * normalizedFactor), nil
}
// calculateRateInterval calculates the $__rate_interval value
@@ -331,7 +415,8 @@ func calculateRateInterval(
return time.Duration(0)
}
rateInterval := time.Duration(int64(math.Max(float64(queryInterval+scrapeIntervalDuration), float64(4)*float64(scrapeIntervalDuration))))
minRateInterval := rateIntervalMultiplier * scrapeIntervalDuration
rateInterval := maxDuration(queryInterval+scrapeIntervalDuration, minRateInterval)
return rateInterval
}
@@ -366,34 +451,33 @@ func InterpolateVariables(
rateInterval = calculateRateInterval(queryInterval, requestedMinStep)
}
expr = strings.ReplaceAll(expr, varIntervalMs, strconv.FormatInt(int64(calculatedStep/time.Millisecond), 10))
expr = strings.ReplaceAll(expr, varInterval, gtime.FormatInterval(calculatedStep))
expr = strings.ReplaceAll(expr, varRangeMs, strconv.FormatInt(rangeMs, 10))
expr = strings.ReplaceAll(expr, varRangeS, strconv.FormatInt(rangeSRounded, 10))
expr = strings.ReplaceAll(expr, varRange, strconv.FormatInt(rangeSRounded, 10)+"s")
expr = strings.ReplaceAll(expr, varRateIntervalMs, strconv.FormatInt(int64(rateInterval/time.Millisecond), 10))
expr = strings.ReplaceAll(expr, varRateInterval, rateInterval.String())
// Replace interval variables (both $__var and ${__var} formats)
expr = replaceVariable(expr, varIntervalMs, varIntervalMsAlt, strconv.FormatInt(int64(calculatedStep/time.Millisecond), 10))
expr = replaceVariable(expr, varInterval, varIntervalAlt, gtime.FormatInterval(calculatedStep))
// Replace range variables (both $__var and ${__var} formats)
expr = replaceVariable(expr, varRangeMs, varRangeMsAlt, strconv.FormatInt(rangeMs, 10))
expr = replaceVariable(expr, varRangeS, varRangeSAlt, strconv.FormatInt(rangeSRounded, 10))
expr = replaceVariable(expr, varRange, varRangeAlt, strconv.FormatInt(rangeSRounded, 10)+"s")
// Replace rate interval variables (both $__var and ${__var} formats)
expr = replaceVariable(expr, varRateIntervalMs, varRateIntervalMsAlt, strconv.FormatInt(int64(rateInterval/time.Millisecond), 10))
expr = replaceVariable(expr, varRateInterval, varRateIntervalAlt, rateInterval.String())
// Repetitive code, we should have functionality to unify these
expr = strings.ReplaceAll(expr, varIntervalMsAlt, strconv.FormatInt(int64(calculatedStep/time.Millisecond), 10))
expr = strings.ReplaceAll(expr, varIntervalAlt, gtime.FormatInterval(calculatedStep))
expr = strings.ReplaceAll(expr, varRangeMsAlt, strconv.FormatInt(rangeMs, 10))
expr = strings.ReplaceAll(expr, varRangeSAlt, strconv.FormatInt(rangeSRounded, 10))
expr = strings.ReplaceAll(expr, varRangeAlt, strconv.FormatInt(rangeSRounded, 10)+"s")
expr = strings.ReplaceAll(expr, varRateIntervalMsAlt, strconv.FormatInt(int64(rateInterval/time.Millisecond), 10))
expr = strings.ReplaceAll(expr, varRateIntervalAlt, rateInterval.String())
return expr
}
// isVariableInterval checks if the interval string is a variable interval
// (any of $__interval, ${__interval}, $__interval_ms, ${__interval_ms}, $__rate_interval, ${__rate_interval}, etc.)
func isVariableInterval(interval string) bool {
return interval == varInterval ||
interval == varIntervalAlt ||
interval == varIntervalMs ||
interval == varIntervalMsAlt ||
interval == varRateInterval ||
interval == varRateIntervalAlt ||
interval == varRateIntervalMs ||
interval == varRateIntervalMsAlt
}
// AlignTimeRange aligns query range to step and handles the time offset.
@@ -410,7 +494,7 @@ func AlignTimeRange(t time.Time, step time.Duration, offset int64) time.Time {
//go:embed query.types.json
var f embed.FS
// QueryTypeDefinitionListJSON returns the query type definitions
func QueryTypeDefinitionListJSON() (json.RawMessage, error) {
return f.ReadFile("query.types.json")
}

View File

@@ -2,7 +2,6 @@ package models_test
import (
"context"
"fmt"
"reflect"
"testing"
"time"
@@ -14,6 +13,7 @@ import (
"go.opentelemetry.io/otel"
"github.com/grafana/grafana-plugin-sdk-go/backend/log"
"github.com/grafana/grafana/pkg/promlib/intervalv2"
"github.com/grafana/grafana/pkg/promlib/models"
)
@@ -50,95 +50,6 @@ func TestParse(t *testing.T) {
require.Equal(t, false, res.ExemplarQuery)
})
t.Run("parsing query model with step", func(t *testing.T) {
timeRange := backend.TimeRange{
From: now,
To: now.Add(12 * time.Hour),
}
q := queryContext(`{
"expr": "go_goroutines",
"format": "time_series",
"refId": "A"
}`, timeRange, time.Duration(1)*time.Minute)
res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, time.Second*30, res.Step)
})
t.Run("parsing query model without step parameter", func(t *testing.T) {
timeRange := backend.TimeRange{
From: now,
To: now.Add(1 * time.Hour),
}
q := queryContext(`{
"expr": "go_goroutines",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}`, timeRange, time.Duration(1)*time.Minute)
res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, time.Second*15, res.Step)
})
t.Run("parsing query model with high intervalFactor", func(t *testing.T) {
timeRange := backend.TimeRange{
From: now,
To: now.Add(48 * time.Hour),
}
q := queryContext(`{
"expr": "go_goroutines",
"format": "time_series",
"intervalFactor": 10,
"refId": "A"
}`, timeRange, time.Duration(1)*time.Minute)
res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, time.Minute*20, res.Step)
})
t.Run("parsing query model with low intervalFactor", func(t *testing.T) {
timeRange := backend.TimeRange{
From: now,
To: now.Add(48 * time.Hour),
}
q := queryContext(`{
"expr": "go_goroutines",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}`, timeRange, time.Duration(1)*time.Minute)
res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, time.Minute*2, res.Step)
})
t.Run("parsing query model specified scrape-interval in the data source", func(t *testing.T) {
timeRange := backend.TimeRange{
From: now,
To: now.Add(48 * time.Hour),
}
q := queryContext(`{
"expr": "go_goroutines",
"format": "time_series",
"intervalFactor": 1,
"refId": "A"
}`, timeRange, time.Duration(1)*time.Minute)
res, err := models.Parse(context.Background(), log.New(), span, q, "240s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, time.Minute*4, res.Step)
})
t.Run("parsing query model with $__interval variable", func(t *testing.T) {
timeRange := backend.TimeRange{
From: now,
@@ -176,7 +87,7 @@ func TestParse(t *testing.T) {
res, err := models.Parse(context.Background(), log.New(), span, q, "15s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, "rate(ALERTS{job=\"test\" [1m]})", res.Expr)
})
t.Run("parsing query model with $__interval_ms variable", func(t *testing.T) {
@@ -533,232 +444,6 @@ func TestParse(t *testing.T) {
})
}
func TestRateInterval(t *testing.T) {
_, span := tracer.Start(context.Background(), "operation")
defer span.End()
type args struct {
expr string
interval string
intervalMs int64
dsScrapeInterval string
timeRange *backend.TimeRange
}
tests := []struct {
name string
args args
want *models.Query
}{
{
name: "intervalMs 100s, minStep override 150s and scrape interval 30s",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "150s",
intervalMs: 100000,
dsScrapeInterval: "30s",
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[10m0s])",
Step: time.Second * 150,
},
},
{
name: "intervalMs 120s, minStep override 150s and ds scrape interval 30s",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "150s",
intervalMs: 120000,
dsScrapeInterval: "30s",
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[10m0s])",
Step: time.Second * 150,
},
},
{
name: "intervalMs 120s, minStep auto (interval not overridden) and ds scrape interval 30s",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "120s",
intervalMs: 120000,
dsScrapeInterval: "30s",
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[8m0s])",
Step: time.Second * 120,
},
},
{
name: "interval and minStep are automatically calculated and ds scrape interval 30s and time range 1 hour",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "30s",
intervalMs: 30000,
dsScrapeInterval: "30s",
timeRange: &backend.TimeRange{
From: now,
To: now.Add(1 * time.Hour),
},
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[2m0s])",
Step: time.Second * 30,
},
},
{
name: "minStep is $__rate_interval and ds scrape interval 30s and time range 1 hour",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "$__rate_interval",
intervalMs: 30000,
dsScrapeInterval: "30s",
timeRange: &backend.TimeRange{
From: now,
To: now.Add(1 * time.Hour),
},
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[2m0s])",
Step: time.Minute * 2,
},
},
{
name: "minStep is $__rate_interval and ds scrape interval 30s and time range 2 days",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "$__rate_interval",
intervalMs: 120000,
dsScrapeInterval: "30s",
timeRange: &backend.TimeRange{
From: now,
To: now.Add(2 * 24 * time.Hour),
},
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[2m30s])",
Step: time.Second * 150,
},
},
{
name: "minStep is $__rate_interval and ds scrape interval 15s and time range 2 days",
args: args{
expr: "rate(rpc_durations_seconds_count[$__rate_interval])",
interval: "$__interval",
intervalMs: 120000,
dsScrapeInterval: "15s",
timeRange: &backend.TimeRange{
From: now,
To: now.Add(2 * 24 * time.Hour),
},
},
want: &models.Query{
Expr: "rate(rpc_durations_seconds_count[8m0s])",
Step: time.Second * 120,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
q := mockQuery(tt.args.expr, tt.args.interval, tt.args.intervalMs, tt.args.timeRange)
q.MaxDataPoints = 12384
res, err := models.Parse(context.Background(), log.New(), span, q, tt.args.dsScrapeInterval, intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, tt.want.Expr, res.Expr)
require.Equal(t, tt.want.Step, res.Step)
})
}
t.Run("minStep is auto and ds scrape interval 30s and time range 1 hour", func(t *testing.T) {
query := backend.DataQuery{
RefID: "G",
QueryType: "",
MaxDataPoints: 1613,
Interval: 30 * time.Second,
TimeRange: backend.TimeRange{
From: now,
To: now.Add(1 * time.Hour),
},
JSON: []byte(`{
"datasource":{"type":"prometheus","uid":"zxS5e5W4k"},
"datasourceId":38,
"editorMode":"code",
"exemplar":false,
"expr":"sum(rate(process_cpu_seconds_total[$__rate_interval]))",
"instant":false,
"interval":"",
"intervalMs":30000,
"key":"Q-f96b6729-c47a-4ea8-8f71-a79774cf9bd5-0",
"legendFormat":"__auto",
"maxDataPoints":1613,
"range":true,
"refId":"G",
"requestId":"1G",
"utcOffsetSec":3600
}`),
}
res, err := models.Parse(context.Background(), log.New(), span, query, "30s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, "sum(rate(process_cpu_seconds_total[2m0s]))", res.Expr)
require.Equal(t, 30*time.Second, res.Step)
})
t.Run("minStep is auto and ds scrape interval 15s and time range 5 minutes", func(t *testing.T) {
query := backend.DataQuery{
RefID: "A",
QueryType: "",
MaxDataPoints: 1055,
Interval: 15 * time.Second,
TimeRange: backend.TimeRange{
From: now,
To: now.Add(5 * time.Minute),
},
JSON: []byte(`{
"datasource": {
"type": "prometheus",
"uid": "2z9d6ElGk"
},
"editorMode": "code",
"expr": "sum(rate(cache_requests_total[$__rate_interval]))",
"legendFormat": "__auto",
"range": true,
"refId": "A",
"exemplar": false,
"requestId": "1A",
"utcOffsetSec": 0,
"interval": "",
"datasourceId": 508,
"intervalMs": 15000,
"maxDataPoints": 1055
}`),
}
res, err := models.Parse(context.Background(), log.New(), span, query, "15s", intervalCalculator, false)
require.NoError(t, err)
require.Equal(t, "sum(rate(cache_requests_total[1m0s]))", res.Expr)
require.Equal(t, 15*time.Second, res.Step)
})
}
func mockQuery(expr string, interval string, intervalMs int64, timeRange *backend.TimeRange) backend.DataQuery {
if timeRange == nil {
timeRange = &backend.TimeRange{
From: now,
To: now.Add(1 * time.Hour),
}
}
return backend.DataQuery{
Interval: time.Duration(intervalMs) * time.Millisecond,
JSON: []byte(fmt.Sprintf(`{
"expr": "%s",
"format": "time_series",
"interval": "%s",
"intervalMs": %v,
"intervalFactor": 1,
"refId": "A"
}`, expr, interval, intervalMs)),
TimeRange: *timeRange,
RefID: "A",
}
}
func queryContext(json string, timeRange backend.TimeRange, queryInterval time.Duration) backend.DataQuery {
return backend.DataQuery{
Interval: queryInterval,
@@ -768,11 +453,6 @@ func queryContext(json string, timeRange backend.TimeRange, queryInterval time.D
}
}
// AlignTimeRange aligns query range to step and handles the time offset.
// It rounds start and end down to a multiple of step.
// Prometheus caching is dependent on the range being aligned with the step.
// Rounding to the step can significantly change the start and end of the range for larger steps, i.e. a week.
// In rounding the range to a 1w step the range will always start on a Thursday.
func TestAlignTimeRange(t *testing.T) {
type args struct {
t time.Time

View File

@@ -381,6 +381,102 @@ func TestPrometheus_parseTimeSeriesResponse(t *testing.T) {
})
}
func TestPrometheus_executedQueryString(t *testing.T) {
t.Run("executedQueryString should match expected format with intervalMs 300_000", func(t *testing.T) {
values := []p.SamplePair{
{Value: 1, Timestamp: 1000},
{Value: 2, Timestamp: 2000},
}
result := queryResult{
Type: p.ValMatrix,
Result: p.Matrix{
&p.SampleStream{
Metric: p.Metric{"app": "Application"},
Values: values,
},
},
}
queryJSON := `{
"expr": "test_metric",
"format": "time_series",
"intervalFactor": 1,
"interval": "2m",
"intervalMs": 300000,
"maxDataPoints": 761,
"refId": "A",
"range": true
}`
now := time.Now()
query := backend.DataQuery{
RefID: "A",
MaxDataPoints: 761,
Interval: 300000 * time.Millisecond,
TimeRange: backend.TimeRange{
From: now,
To: now.Add(48 * time.Hour),
},
JSON: []byte(queryJSON),
}
tctx, err := setup()
require.NoError(t, err)
res, err := execute(tctx, query, result, nil)
require.NoError(t, err)
require.Len(t, res, 1)
require.NotNil(t, res[0].Meta)
require.Equal(t, "Expr: test_metric\nStep: 2m0s", res[0].Meta.ExecutedQueryString)
})
t.Run("executedQueryString should match expected format with intervalMs 900_000", func(t *testing.T) {
values := []p.SamplePair{
{Value: 1, Timestamp: 1000},
{Value: 2, Timestamp: 2000},
}
result := queryResult{
Type: p.ValMatrix,
Result: p.Matrix{
&p.SampleStream{
Metric: p.Metric{"app": "Application"},
Values: values,
},
},
}
queryJSON := `{
"expr": "test_metric",
"format": "time_series",
"intervalFactor": 1,
"interval": "2m",
"intervalMs": 900000,
"maxDataPoints": 175,
"refId": "A",
"range": true
}`
now := time.Now()
query := backend.DataQuery{
RefID: "A",
MaxDataPoints: 175,
Interval: 900000 * time.Millisecond,
TimeRange: backend.TimeRange{
From: now,
To: now.Add(48 * time.Hour),
},
JSON: []byte(queryJSON),
}
tctx, err := setup()
require.NoError(t, err)
res, err := execute(tctx, query, result, nil)
require.NoError(t, err)
require.Len(t, res, 1)
require.NotNil(t, res[0].Meta)
require.Equal(t, "Expr: test_metric\nStep: 2m0s", res[0].Meta.ExecutedQueryString)
})
}
type queryResult struct {
Type p.ValueType `json:"resultType"`
Result any `json:"result"`

View File

@@ -36,6 +36,9 @@ var client = &http.Client{
Transport: &http.Transport{Proxy: http.ProxyFromEnvironment},
}
// CreateDashboardSnapshot creates a snapshot when running Grafana in regular mode.
// It validates the user and dashboard exist before creating the snapshot.
// This mode supports both local and external snapshots.
func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSharingOptions, cmd CreateDashboardSnapshotCommand, svc Service) {
if !cfg.SnapshotsEnabled {
c.JsonApiErr(http.StatusForbidden, "Dashboard Snapshots are disabled", nil)
@@ -43,6 +46,7 @@ func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSh
}
uid := cmd.Dashboard.GetNestedString("uid")
user, err := identity.GetRequester(c.Req.Context())
if err != nil {
c.JsonApiErr(http.StatusBadRequest, "missing user in context", nil)
@@ -59,21 +63,18 @@ func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSh
return
}
if cmd.Name == "" {
cmd.Name = "Unnamed snapshot"
}
cmd.ExternalURL = ""
cmd.OrgID = user.GetOrgID()
cmd.UserID, _ = identity.UserIdentifier(user.GetID())
var snapshotURL string
if cmd.External {
// Handle external snapshot creation
if !cfg.ExternalEnabled {
c.JsonApiErr(http.StatusForbidden, "External dashboard creation is disabled", nil)
return
@@ -85,40 +86,83 @@ func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSh
return
}
cmd.Key = resp.Key
cmd.DeleteKey = resp.DeleteKey
cmd.ExternalURL = resp.Url
cmd.ExternalDeleteURL = resp.DeleteUrl
cmd.Dashboard = &common.Unstructured{}
snapshotURL = resp.Url
metrics.MApiDashboardSnapshotExternal.Inc()
} else {
// Handle local snapshot creation
originalDashboardURL, err := createOriginalDashboardURL(&cmd)
if err != nil {
c.JsonApiErr(http.StatusInternalServerError, "Invalid app URL", err)
return
}
snapshotURL, err = prepareLocalSnapshot(&cmd, originalDashboardURL)
if err != nil {
c.JsonApiErr(http.StatusInternalServerError, "Could not generate random string", err)
return
}
metrics.MApiDashboardSnapshotCreate.Inc()
}
saveAndRespond(c, svc, cmd, snapshotURL)
}
// CreateDashboardSnapshotPublic creates a snapshot when running Grafana in public mode.
// In public mode, there is no user or dashboard information to validate.
// Only local snapshots are supported (external snapshots are not available).
func CreateDashboardSnapshotPublic(c *contextmodel.ReqContext, cfg snapshot.SnapshotSharingOptions, cmd CreateDashboardSnapshotCommand, svc Service) {
if !cfg.SnapshotsEnabled {
c.JsonApiErr(http.StatusForbidden, "Dashboard Snapshots are disabled", nil)
return
}
if cmd.Name == "" {
cmd.Name = "Unnamed snapshot"
}
snapshotURL, err := prepareLocalSnapshot(&cmd, "")
if err != nil {
c.JsonApiErr(http.StatusInternalServerError, "Could not generate random string", err)
return
}
metrics.MApiDashboardSnapshotCreate.Inc()
saveAndRespond(c, svc, cmd, snapshotURL)
}
// prepareLocalSnapshot prepares the command for a local snapshot and returns the snapshot URL.
func prepareLocalSnapshot(cmd *CreateDashboardSnapshotCommand, originalDashboardURL string) (string, error) {
cmd.Dashboard.SetNestedField(originalDashboardURL, "snapshot", "originalUrl")
if cmd.Key == "" {
key, err := util.GetRandomString(32)
if err != nil {
return "", err
}
cmd.Key = key
}
if cmd.DeleteKey == "" {
deleteKey, err := util.GetRandomString(32)
if err != nil {
return "", err
}
cmd.DeleteKey = deleteKey
}
return setting.ToAbsUrl("dashboard/snapshot/" + cmd.Key), nil
}
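The key-handling pattern in `prepareLocalSnapshot` — generate `Key` and `DeleteKey` only when the caller did not supply them — can be shown in isolation (`snapshotCmd`, `randomKey`, and `ensureKeys` below are illustrative stand-ins, not Grafana types):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

type snapshotCmd struct {
	Key, DeleteKey string
}

// randomKey stands in for Grafana's util.GetRandomString.
func randomKey(n int) (string, error) {
	b := make([]byte, n/2)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

// ensureKeys mirrors the prepareLocalSnapshot pattern: fill in missing keys,
// leave caller-supplied ones untouched, so explicit keys (and therefore
// snapshot URLs) survive the call.
func ensureKeys(cmd *snapshotCmd) error {
	if cmd.Key == "" {
		k, err := randomKey(32)
		if err != nil {
			return err
		}
		cmd.Key = k
	}
	if cmd.DeleteKey == "" {
		k, err := randomKey(32)
		if err != nil {
			return err
		}
		cmd.DeleteKey = k
	}
	return nil
}

func main() {
	cmd := snapshotCmd{Key: "caller-key"}
	_ = ensureKeys(&cmd)
	fmt.Println(cmd.Key, len(cmd.DeleteKey)) // caller-key 32
}
```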
// saveAndRespond saves the snapshot and sends the response.
func saveAndRespond(c *contextmodel.ReqContext, svc Service, cmd CreateDashboardSnapshotCommand, snapshotURL string) {
result, err := svc.CreateDashboardSnapshot(c.Req.Context(), &cmd)
if err != nil {
c.JsonApiErr(http.StatusInternalServerError, "Failed to create snapshot", err)
@@ -128,7 +172,7 @@ func CreateDashboardSnapshot(c *contextmodel.ReqContext, cfg snapshot.SnapshotSh
c.JSON(http.StatusOK, snapshot.DashboardCreateResponse{
Key: result.Key,
DeleteKey: result.DeleteKey,
URL: snapshotUrl,
URL: snapshotURL,
DeleteURL: setting.ToAbsUrl("api/snapshots-delete/" + result.DeleteKey),
})
}

View File

@@ -20,40 +20,30 @@ import (
"github.com/grafana/grafana/pkg/web"
)
func createTestDashboard(t *testing.T) *common.Unstructured {
t.Helper()
dashboard := &common.Unstructured{}
dashboardData := map[string]any{
"uid": "test-dashboard-uid",
"id": 123,
}
dashboardBytes, _ := json.Marshal(dashboardData)
_ = json.Unmarshal(dashboardBytes, dashboard)
return dashboard
}
func createTestUser() *user.SignedInUser {
return &user.SignedInUser{
UserID: 1,
OrgID: 1,
Login: "testuser",
Name: "Test User",
Email: "test@example.com",
}
}
func createReqContext(t *testing.T, req *http.Request, testUser *user.SignedInUser) (*contextmodel.ReqContext, *httptest.ResponseRecorder) {
t.Helper()
recorder := httptest.NewRecorder()
ctx := &contextmodel.ReqContext{
Context: &web.Context{
@@ -63,13 +53,319 @@ func TestCreateDashboardSnapshot_DashboardNotFound(t *testing.T) {
SignedInUser: testUser,
Logger: log.NewNopLogger(),
}
return ctx, recorder
}
// TestCreateDashboardSnapshot tests snapshot creation in regular mode (non-public instance).
// These tests cover scenarios when Grafana is running as a regular server with user authentication.
func TestCreateDashboardSnapshot(t *testing.T) {
t.Run("should return error when dashboard not found", func(t *testing.T) {
mockService := &MockService{}
cfg := snapshot.SnapshotSharingOptions{
SnapshotsEnabled: true,
ExternalEnabled: false,
}
testUser := createTestUser()
dashboard := createTestDashboard(t)
cmd := CreateDashboardSnapshotCommand{
DashboardCreateCommand: snapshot.DashboardCreateCommand{
Dashboard: dashboard,
Name: "Test Snapshot",
},
}
mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
Return(dashboards.ErrDashboardNotFound)
req, _ := http.NewRequest("POST", "/api/snapshots", nil)
req = req.WithContext(identity.WithRequester(req.Context(), testUser))
ctx, recorder := createReqContext(t, req, testUser)
CreateDashboardSnapshot(ctx, cfg, cmd, mockService)
mockService.AssertExpectations(t)
assert.Equal(t, http.StatusBadRequest, recorder.Code)
var response map[string]any
err := json.Unmarshal(recorder.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "Dashboard not found", response["message"])
})
t.Run("should create external snapshot when external is enabled", func(t *testing.T) {
externalServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
assert.Equal(t, "/api/snapshots", r.URL.Path)
assert.Equal(t, "POST", r.Method)
response := map[string]any{
"key": "external-key",
"deleteKey": "external-delete-key",
"url": "https://external.example.com/dashboard/snapshot/external-key",
"deleteUrl": "https://external.example.com/api/snapshots-delete/external-delete-key",
}
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(response)
}))
defer externalServer.Close()
mockService := NewMockService(t)
cfg := snapshot.SnapshotSharingOptions{
SnapshotsEnabled: true,
ExternalEnabled: true,
ExternalSnapshotURL: externalServer.URL,
}
testUser := createTestUser()
dashboard := createTestDashboard(t)
cmd := CreateDashboardSnapshotCommand{
DashboardCreateCommand: snapshot.DashboardCreateCommand{
Dashboard: dashboard,
Name: "Test External Snapshot",
External: true,
},
}
mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
Return(nil)
mockService.On("CreateDashboardSnapshot", mock.Anything, mock.Anything).
Return(&DashboardSnapshot{
Key: "external-key",
DeleteKey: "external-delete-key",
}, nil)
req, _ := http.NewRequest("POST", "/api/snapshots", nil)
req = req.WithContext(identity.WithRequester(req.Context(), testUser))
ctx, recorder := createReqContext(t, req, testUser)
CreateDashboardSnapshot(ctx, cfg, cmd, mockService)
mockService.AssertExpectations(t)
assert.Equal(t, http.StatusOK, recorder.Code)
var response map[string]any
err := json.Unmarshal(recorder.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "external-key", response["key"])
assert.Equal(t, "external-delete-key", response["deleteKey"])
assert.Equal(t, "https://external.example.com/dashboard/snapshot/external-key", response["url"])
})
t.Run("should return forbidden when external is disabled", func(t *testing.T) {
mockService := NewMockService(t)
cfg := snapshot.SnapshotSharingOptions{
SnapshotsEnabled: true,
ExternalEnabled: false,
}
testUser := createTestUser()
dashboard := createTestDashboard(t)
cmd := CreateDashboardSnapshotCommand{
DashboardCreateCommand: snapshot.DashboardCreateCommand{
Dashboard: dashboard,
Name: "Test External Snapshot",
External: true,
},
}
mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
Return(nil)
req, _ := http.NewRequest("POST", "/api/snapshots", nil)
req = req.WithContext(identity.WithRequester(req.Context(), testUser))
ctx, recorder := createReqContext(t, req, testUser)
CreateDashboardSnapshot(ctx, cfg, cmd, mockService)
mockService.AssertExpectations(t)
assert.Equal(t, http.StatusForbidden, recorder.Code)
var response map[string]any
err := json.Unmarshal(recorder.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "External dashboard creation is disabled", response["message"])
})
t.Run("should create local snapshot", func(t *testing.T) {
mockService := NewMockService(t)
cfg := snapshot.SnapshotSharingOptions{
SnapshotsEnabled: true,
}
testUser := createTestUser()
dashboard := createTestDashboard(t)
cmd := CreateDashboardSnapshotCommand{
DashboardCreateCommand: snapshot.DashboardCreateCommand{
Dashboard: dashboard,
Name: "Test Local Snapshot",
},
Key: "local-key",
DeleteKey: "local-delete-key",
}
mockService.On("ValidateDashboardExists", mock.Anything, int64(1), "test-dashboard-uid").
Return(nil)
mockService.On("CreateDashboardSnapshot", mock.Anything, mock.Anything).
Return(&DashboardSnapshot{
Key: "local-key",
DeleteKey: "local-delete-key",
}, nil)
req, _ := http.NewRequest("POST", "/api/snapshots", nil)
req = req.WithContext(identity.WithRequester(req.Context(), testUser))
ctx, recorder := createReqContext(t, req, testUser)
CreateDashboardSnapshot(ctx, cfg, cmd, mockService)
mockService.AssertExpectations(t)
assert.Equal(t, http.StatusOK, recorder.Code)
var response map[string]any
err := json.Unmarshal(recorder.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "local-key", response["key"])
assert.Equal(t, "local-delete-key", response["deleteKey"])
assert.Contains(t, response["url"], "dashboard/snapshot/local-key")
assert.Contains(t, response["deleteUrl"], "api/snapshots-delete/local-delete-key")
})
}
// TestCreateDashboardSnapshotPublic tests snapshot creation in public mode.
// These tests cover scenarios when Grafana is running as a public snapshot server
// where no user authentication or dashboard validation is required.
func TestCreateDashboardSnapshotPublic(t *testing.T) {
t.Run("should create local snapshot without user context", func(t *testing.T) {
mockService := NewMockService(t)
cfg := snapshot.SnapshotSharingOptions{
SnapshotsEnabled: true,
}
dashboard := createTestDashboard(t)
cmd := CreateDashboardSnapshotCommand{
DashboardCreateCommand: snapshot.DashboardCreateCommand{
Dashboard: dashboard,
Name: "Test Snapshot",
},
Key: "test-key",
DeleteKey: "test-delete-key",
}
mockService.On("CreateDashboardSnapshot", mock.Anything, mock.Anything).
Return(&DashboardSnapshot{
Key: "test-key",
DeleteKey: "test-delete-key",
}, nil)
req, _ := http.NewRequest("POST", "/api/snapshots", nil)
recorder := httptest.NewRecorder()
ctx := &contextmodel.ReqContext{
Context: &web.Context{
Req: req,
Resp: web.NewResponseWriter("POST", recorder),
},
Logger: log.NewNopLogger(),
}
CreateDashboardSnapshotPublic(ctx, cfg, cmd, mockService)
mockService.AssertExpectations(t)
assert.Equal(t, http.StatusOK, recorder.Code)
var response map[string]any
err := json.Unmarshal(recorder.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "test-key", response["key"])
assert.Equal(t, "test-delete-key", response["deleteKey"])
assert.Contains(t, response["url"], "dashboard/snapshot/test-key")
assert.Contains(t, response["deleteUrl"], "api/snapshots-delete/test-delete-key")
})
t.Run("should return forbidden when snapshots are disabled", func(t *testing.T) {
mockService := NewMockService(t)
cfg := snapshot.SnapshotSharingOptions{
SnapshotsEnabled: false,
}
dashboard := createTestDashboard(t)
cmd := CreateDashboardSnapshotCommand{
DashboardCreateCommand: snapshot.DashboardCreateCommand{
Dashboard: dashboard,
Name: "Test Snapshot",
},
}
req, _ := http.NewRequest("POST", "/api/snapshots", nil)
recorder := httptest.NewRecorder()
ctx := &contextmodel.ReqContext{
Context: &web.Context{
Req: req,
Resp: web.NewResponseWriter("POST", recorder),
},
Logger: log.NewNopLogger(),
}
CreateDashboardSnapshotPublic(ctx, cfg, cmd, mockService)
assert.Equal(t, http.StatusForbidden, recorder.Code)
var response map[string]any
err := json.Unmarshal(recorder.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "Dashboard Snapshots are disabled", response["message"])
})
}
// TestDeleteExternalDashboardSnapshot tests deletion of external snapshots.
// This function is called in public mode and doesn't require user context.
func TestDeleteExternalDashboardSnapshot(t *testing.T) {
t.Run("should return nil on successful deletion", func(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
assert.Equal(t, "GET", r.Method)
w.WriteHeader(http.StatusOK)
}))
defer server.Close()
err := DeleteExternalDashboardSnapshot(server.URL)
assert.NoError(t, err)
})
t.Run("should gracefully handle already deleted snapshot", func(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusInternalServerError)
response := map[string]any{
"message": "Failed to get dashboard snapshot",
}
_ = json.NewEncoder(w).Encode(response)
}))
defer server.Close()
err := DeleteExternalDashboardSnapshot(server.URL)
assert.NoError(t, err)
})
t.Run("should return error on unexpected status code", func(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotFound)
}))
defer server.Close()
err := DeleteExternalDashboardSnapshot(server.URL)
assert.Error(t, err)
assert.Contains(t, err.Error(), "unexpected response when deleting external snapshot")
assert.Contains(t, err.Error(), "404")
})
t.Run("should return error on 500 with different message", func(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusInternalServerError)
response := map[string]any{
"message": "Some other error",
}
_ = json.NewEncoder(w).Encode(response)
}))
defer server.Close()
err := DeleteExternalDashboardSnapshot(server.URL)
assert.Error(t, err)
assert.Contains(t, err.Error(), "500")
})
}
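The status handling these tests pin down can be sketched without HTTP plumbing (`classifyDeleteResponse` below is illustrative; the implementation of `DeleteExternalDashboardSnapshot` is not part of this diff): 200 succeeds, a 500 whose body reports the snapshot as already gone is treated as success, and anything else is an error.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// classifyDeleteResponse sketches the behavior asserted by the tests above:
// treat "already deleted" 500s as success so deletes stay idempotent.
func classifyDeleteResponse(status int, body string) error {
	if status == 200 {
		return nil
	}
	if status == 500 {
		var resp struct {
			Message string `json:"message"`
		}
		if json.Unmarshal([]byte(body), &resp) == nil &&
			strings.Contains(resp.Message, "Failed to get dashboard snapshot") {
			return nil // snapshot already gone upstream: not an error
		}
	}
	return fmt.Errorf("unexpected response when deleting external snapshot, status code %d", status)
}

func main() {
	fmt.Println(classifyDeleteResponse(500, `{"message":"Failed to get dashboard snapshot"}`))
	fmt.Println(classifyDeleteResponse(404, ""))
}
```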

View File

@@ -13,6 +13,8 @@ import (
"time"
"github.com/grafana/grafana-plugin-sdk-go/backend/httpclient"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/services/apiserver/endpoints/request"
"github.com/open-feature/go-sdk/openfeature"
"go.opentelemetry.io/otel/codes"
@@ -47,6 +49,7 @@ type PluginsService struct {
updateStrategy string
features featuremgmt.FeatureToggles
cfg *setting.Cfg
}
func ProvidePluginsService(cfg *setting.Cfg,
@@ -89,6 +92,7 @@ func ProvidePluginsService(cfg *setting.Cfg,
features: features,
updateChecker: updateChecker,
updateStrategy: cfg.PluginUpdateStrategy,
cfg: cfg,
}, nil
}
@@ -136,7 +140,7 @@ func (s *PluginsService) HasUpdate(ctx context.Context, pluginID string) (string
// checkAndUpdate checks for updates and applies them if auto-update is enabled.
func (s *PluginsService) checkAndUpdate(ctx context.Context) {
s.instrumentedCheckForUpdates(ctx)
if openfeature.NewDefaultClient().Boolean(ctx, featuremgmt.FlagPluginsAutoUpdate, false, openfeature.TransactionContext(ctx)) {
if s.checkFlagPluginsAutoUpdate(ctx) {
s.updateAll(ctx)
}
}
@@ -218,6 +222,17 @@ func (s *PluginsService) checkForUpdates(ctx context.Context) error {
return nil
}
func (s *PluginsService) checkFlagPluginsAutoUpdate(ctx context.Context) bool {
ns := request.GetNamespaceMapper(s.cfg)(1)
ctx = identity.WithServiceIdentityForSingleNamespaceContext(ctx, ns)
flag, err := openfeature.NewDefaultClient().BooleanValueDetails(ctx, featuremgmt.FlagPluginsAutoUpdate, false, openfeature.TransactionContext(ctx))
if err != nil {
s.log.Error("flag evaluation error", "flag", featuremgmt.FlagPluginsAutoUpdate, "error", err)
}
return flag.Value
}
func (s *PluginsService) canUpdate(ctx context.Context, plugin pluginstore.Plugin, gcomVersion string) bool {
if !s.updateChecker.IsUpdatable(ctx, plugin) {
return false
@@ -227,7 +242,7 @@ func (s *PluginsService) canUpdate(ctx context.Context, plugin pluginstore.Plugi
return false
}
if openfeature.NewDefaultClient().Boolean(ctx, featuremgmt.FlagPluginsAutoUpdate, false, openfeature.TransactionContext(ctx)) {
if s.checkFlagPluginsAutoUpdate(ctx) {
return s.updateChecker.CanUpdate(plugin.ID, plugin.Info.Version, gcomVersion, s.updateStrategy == setting.PluginUpdateStrategyMinor)
}

View File

@@ -14,6 +14,7 @@ import (
"github.com/grafana/grafana/pkg/apimachinery/validation"
"github.com/grafana/grafana/pkg/storage/unified/sql/db"
"github.com/grafana/grafana/pkg/storage/unified/sql/dbutil"
"github.com/grafana/grafana/pkg/storage/unified/sql/rvmanager"
"github.com/grafana/grafana/pkg/storage/unified/sql/sqltemplate"
gocache "github.com/patrickmn/go-cache"
)
@@ -868,10 +869,18 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
if key.Action == DataActionDeleted {
generation = 0
}
// In compatibility mode, the previous RV, when available, is saved as a microsecond
// timestamp, as is done in the SQL backend.
previousRV := event.PreviousRV
if event.PreviousRV > 0 && isSnowflake(event.PreviousRV) {
previousRV = rvmanager.RVFromSnowflake(event.PreviousRV)
}
_, err := dbutil.Exec(ctx, tx, sqlKVUpdateLegacyResourceHistory, sqlKVLegacyUpdateHistoryRequest{
SQLTemplate: sqltemplate.New(kv.dialect),
GUID: key.GUID,
PreviousRV: event.PreviousRV,
PreviousRV: previousRV,
Generation: generation,
})
@@ -900,7 +909,7 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
Name: key.Name,
Action: action,
Folder: key.Folder,
PreviousRV: event.PreviousRV,
PreviousRV: previousRV,
})
if err != nil {
@@ -916,7 +925,7 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
Name: key.Name,
Action: action,
Folder: key.Folder,
PreviousRV: event.PreviousRV,
PreviousRV: previousRV,
})
if err != nil {
@@ -938,3 +947,15 @@ func (d *dataStore) applyBackwardsCompatibleChanges(ctx context.Context, tx db.T
return nil
}
// isSnowflake returns whether the argument passed is a snowflake ID (new) or a microsecond timestamp (old).
// We try to interpret the number as a microsecond timestamp first. If it represents a time in the past,
// it is considered a microsecond timestamp. Snowflake IDs are much larger integers and would lead
// to dates in the future if interpreted as a microsecond timestamp.
func isSnowflake(rv int64) bool {
ts := time.UnixMicro(rv)
oneHourFromNow := time.Now().Add(time.Hour)
isMicroSecRV := ts.Before(oneHourFromNow)
return !isMicroSecRV
}


@@ -19,13 +19,18 @@ const (
defaultBufferSize = 10000
)
type notifier struct {
type notifier interface {
Watch(context.Context, watchOptions) <-chan Event
}
type pollingNotifier struct {
eventStore *eventStore
log logging.Logger
}
type notifierOptions struct {
log logging.Logger
log logging.Logger
useChannelNotifier bool
}
type watchOptions struct {
@@ -44,15 +49,26 @@ func defaultWatchOptions() watchOptions {
}
}
func newNotifier(eventStore *eventStore, opts notifierOptions) *notifier {
func newNotifier(eventStore *eventStore, opts notifierOptions) notifier {
if opts.log == nil {
opts.log = &logging.NoOpLogger{}
}
return &notifier{eventStore: eventStore, log: opts.log}
if opts.useChannelNotifier {
return &channelNotifier{}
}
return &pollingNotifier{eventStore: eventStore, log: opts.log}
}
type channelNotifier struct{}
func (cn *channelNotifier) Watch(ctx context.Context, opts watchOptions) <-chan Event {
return nil
}
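The refactor turns `notifier` into an interface so `newNotifier` can pick an implementation from its options. A stripped-down sketch of that constructor-selection pattern (names mirror the diff; `context.Context` is dropped and the `Watch` bodies are stubs for brevity):

```go
package main

import "fmt"

type Event struct{ Name string }
type watchOptions struct{}

// notifier is the interface both implementations satisfy.
type notifier interface {
	Watch(opts watchOptions) <-chan Event
}

type pollingNotifier struct{}

func (*pollingNotifier) Watch(watchOptions) <-chan Event { return nil }

type channelNotifier struct{}

func (*channelNotifier) Watch(watchOptions) <-chan Event { return nil }

type notifierOptions struct{ useChannelNotifier bool }

// newNotifier selects the concrete implementation from options, as in the diff.
func newNotifier(opts notifierOptions) notifier {
	if opts.useChannelNotifier {
		return &channelNotifier{}
	}
	return &pollingNotifier{}
}

func main() {
	fmt.Printf("%T\n", newNotifier(notifierOptions{useChannelNotifier: true})) // *main.channelNotifier
	fmt.Printf("%T\n", newNotifier(notifierOptions{}))                         // *main.pollingNotifier
}
```

Callers that need polling-specific helpers (as the tests do) must type-assert, e.g. `notifier.(*pollingNotifier)`.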
// lastEventResourceVersion returns the last resource version from the event store
func (n *notifier) lastEventResourceVersion(ctx context.Context) (int64, error) {
func (n *pollingNotifier) lastEventResourceVersion(ctx context.Context) (int64, error) {
e, err := n.eventStore.LastEventKey(ctx)
if err != nil {
return 0, err
@@ -60,11 +76,11 @@ func (n *notifier) lastEventResourceVersion(ctx context.Context) (int64, error)
return e.ResourceVersion, nil
}
func (n *notifier) cacheKey(evt Event) string {
func (n *pollingNotifier) cacheKey(evt Event) string {
return fmt.Sprintf("%s~%s~%s~%s~%d", evt.Namespace, evt.Group, evt.Resource, evt.Name, evt.ResourceVersion)
}
func (n *notifier) Watch(ctx context.Context, opts watchOptions) <-chan Event {
func (n *pollingNotifier) Watch(ctx context.Context, opts watchOptions) <-chan Event {
if opts.MinBackoff <= 0 {
opts.MinBackoff = defaultMinBackoff
}


@@ -13,7 +13,7 @@ import (
"github.com/stretchr/testify/require"
)
func setupTestNotifier(t *testing.T) (*notifier, *eventStore) {
func setupTestNotifier(t *testing.T) (*pollingNotifier, *eventStore) {
db := setupTestBadgerDB(t)
t.Cleanup(func() {
err := db.Close()
@@ -22,10 +22,10 @@ func setupTestNotifier(t *testing.T) (*notifier, *eventStore) {
kv := NewBadgerKV(db)
eventStore := newEventStore(kv)
notifier := newNotifier(eventStore, notifierOptions{log: &logging.NoOpLogger{}})
return notifier, eventStore
return notifier.(*pollingNotifier), eventStore
}
func setupTestNotifierSqlKv(t *testing.T) (*notifier, *eventStore) {
func setupTestNotifierSqlKv(t *testing.T) (*pollingNotifier, *eventStore) {
dbstore := db.InitTestDB(t)
eDB, err := dbimpl.ProvideResourceDB(dbstore, setting.NewCfg(), nil)
require.NoError(t, err)
@@ -33,7 +33,7 @@ func setupTestNotifierSqlKv(t *testing.T) (*notifier, *eventStore) {
require.NoError(t, err)
eventStore := newEventStore(kv)
notifier := newNotifier(eventStore, notifierOptions{log: &logging.NoOpLogger{}})
return notifier, eventStore
return notifier.(*pollingNotifier), eventStore
}
func TestNewNotifier(t *testing.T) {
@@ -49,7 +49,7 @@ func TestDefaultWatchOptions(t *testing.T) {
assert.Equal(t, defaultBufferSize, opts.BufferSize)
}
func runNotifierTestWith(t *testing.T, storeName string, newStoreFn func(*testing.T) (*notifier, *eventStore), testFn func(*testing.T, context.Context, *notifier, *eventStore)) {
func runNotifierTestWith(t *testing.T, storeName string, newStoreFn func(*testing.T) (*pollingNotifier, *eventStore), testFn func(*testing.T, context.Context, *pollingNotifier, *eventStore)) {
t.Run(storeName, func(t *testing.T) {
ctx := context.Background()
notifier, eventStore := newStoreFn(t)
@@ -62,7 +62,7 @@ func TestNotifier_lastEventResourceVersion(t *testing.T) {
runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierLastEventResourceVersion)
}
func testNotifierLastEventResourceVersion(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
func testNotifierLastEventResourceVersion(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
// Test with no events
rv, err := notifier.lastEventResourceVersion(ctx)
assert.Error(t, err)
@@ -113,7 +113,7 @@ func TestNotifier_cachekey(t *testing.T) {
runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierCachekey)
}
func testNotifierCachekey(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
func testNotifierCachekey(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
tests := []struct {
name string
event Event
@@ -167,7 +167,7 @@ func TestNotifier_Watch_NoEvents(t *testing.T) {
runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchNoEvents)
}
func testNotifierWatchNoEvents(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
func testNotifierWatchNoEvents(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
ctx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
defer cancel()
@@ -208,7 +208,7 @@ func TestNotifier_Watch_WithExistingEvents(t *testing.T) {
runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchWithExistingEvents)
}
func testNotifierWatchWithExistingEvents(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
func testNotifierWatchWithExistingEvents(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
defer cancel()
@@ -282,7 +282,7 @@ func TestNotifier_Watch_EventDeduplication(t *testing.T) {
runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchEventDeduplication)
}
func testNotifierWatchEventDeduplication(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
func testNotifierWatchEventDeduplication(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
defer cancel()
@@ -348,7 +348,7 @@ func TestNotifier_Watch_ContextCancellation(t *testing.T) {
runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchContextCancellation)
}
func testNotifierWatchContextCancellation(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
func testNotifierWatchContextCancellation(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
ctx, cancel := context.WithCancel(ctx)
// Add an initial event so that lastEventResourceVersion doesn't return ErrNotFound
@@ -394,7 +394,7 @@ func TestNotifier_Watch_MultipleEvents(t *testing.T) {
runNotifierTestWith(t, "sqlkv", setupTestNotifierSqlKv, testNotifierWatchMultipleEvents)
}
func testNotifierWatchMultipleEvents(t *testing.T, ctx context.Context, notifier *notifier, eventStore *eventStore) {
func testNotifierWatchMultipleEvents(t *testing.T, ctx context.Context, notifier *pollingNotifier, eventStore *eventStore) {
ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
defer cancel()
rv := time.Now().UnixNano()
@@ -456,33 +456,27 @@ func testNotifierWatchMultipleEvents(t *testing.T, ctx context.Context, notifier
},
}
errCh := make(chan error)
go func() {
for _, event := range testEvents {
err := eventStore.Save(ctx, event)
require.NoError(t, err)
errCh <- eventStore.Save(ctx, event)
}
}()
// Receive events
receivedEvents := make([]Event, 0, len(testEvents))
for i := 0; i < len(testEvents); i++ {
receivedEvents := make([]string, 0, len(testEvents))
for len(receivedEvents) != len(testEvents) {
select {
case event := <-events:
receivedEvents = append(receivedEvents, event)
receivedEvents = append(receivedEvents, event.Name)
case err := <-errCh:
require.NoError(t, err)
case <-time.After(1 * time.Second):
t.Fatalf("Timed out waiting for event %d", i+1)
t.Fatalf("Timed out waiting for event %d", len(receivedEvents)+1)
}
}
// Verify all events were received
assert.Len(t, receivedEvents, len(testEvents))
// Verify the events match and are ordered by resource version
receivedNames := make([]string, len(receivedEvents))
for i, event := range receivedEvents {
receivedNames[i] = event.Name
}
expectedNames := []string{"test-resource-1", "test-resource-2", "test-resource-3"}
assert.ElementsMatch(t, expectedNames, receivedNames)
assert.ElementsMatch(t, expectedNames, receivedEvents)
}
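The rewritten test stops calling `require.NoError` inside the producer goroutine (test assertions are unsafe off the test goroutine) and instead ships `Save` errors over `errCh`, handling them in the same `select` that receives events. The shape of that pattern outside the testing framework, as a sketch:

```go
package main

import (
	"fmt"
	"time"
)

// collect mirrors the test's receive loop: a producer goroutine reports
// errors over errCh instead of asserting in the goroutine, and the consumer
// multiplexes events, errors, and a timeout in a single select.
func collect() []string {
	events := make(chan string)
	errCh := make(chan error, 3) // buffered so the producer never blocks on it

	go func() {
		for _, name := range []string{"a", "b", "c"} {
			events <- name
			errCh <- nil // a real Save would report its error here
		}
	}()

	received := make([]string, 0, 3)
	for len(received) != 3 {
		select {
		case e := <-events:
			received = append(received, e)
		case err := <-errCh:
			if err != nil {
				panic(err)
			}
		case <-time.After(time.Second):
			panic(fmt.Sprintf("timed out waiting for event %d", len(received)+1))
		}
	}
	return received
}

func main() {
	fmt.Println(collect()) // [a b c]
}
```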


@@ -473,8 +473,6 @@ func (k *sqlKV) Delete(ctx context.Context, section string, key string) error {
return ErrNotFound
}
// TODO reflect change to resource table
return nil
}


@@ -61,7 +61,7 @@ type kvStorageBackend struct {
bulkLock *BulkLock
dataStore *dataStore
eventStore *eventStore
notifier *notifier
notifier notifier
builder DocumentBuilder
log logging.Logger
withPruner bool
@@ -91,6 +91,7 @@ type KVBackendOptions struct {
Tracer trace.Tracer // TODO add tracing
Reg prometheus.Registerer // TODO add metrics
UseChannelNotifier bool
// Setting RvManager overrides the snowflake-generated RV in order to keep backwards compatibility
// with unified/sql
RvManager *rvmanager.ResourceVersionManager
@@ -121,7 +122,7 @@ func NewKVStorageBackend(opts KVBackendOptions) (KVBackend, error) {
bulkLock: NewBulkLock(),
dataStore: newDataStore(kv),
eventStore: eventStore,
notifier: newNotifier(eventStore, notifierOptions{}),
notifier: newNotifier(eventStore, notifierOptions{useChannelNotifier: opts.UseChannelNotifier}),
snowflake: s,
builder: StandardDocumentBuilder(), // For now we use the standard document builder.
log: &logging.NoOpLogger{}, // Make this configurable
@@ -346,7 +347,7 @@ func (k *kvStorageBackend) WriteEvent(ctx context.Context, event WriteEvent) (in
return 0, fmt.Errorf("failed to write data: %w", err)
}
rv = rvmanager.SnowflakeFromRv(rv)
rv = rvmanager.SnowflakeFromRV(rv)
dataKey.ResourceVersion = rv
} else {
err := k.dataStore.Save(ctx, dataKey, bytes.NewReader(event.Value))
@@ -688,9 +689,6 @@ func validateListHistoryRequest(req *resourcepb.ListRequest) error {
if key.Namespace == "" {
return fmt.Errorf("namespace is required")
}
if key.Name == "" {
return fmt.Errorf("name is required")
}
return nil
}


@@ -307,7 +307,7 @@ func (m *ResourceVersionManager) execBatch(ctx context.Context, group, resource
// Allocate the RVs
for i, guid := range guids {
guidToRV[guid] = rv
guidToSnowflakeRV[guid] = SnowflakeFromRv(rv)
guidToSnowflakeRV[guid] = SnowflakeFromRV(rv)
rvs[i] = rv
rv++
}
@@ -364,12 +364,20 @@ func (m *ResourceVersionManager) execBatch(ctx context.Context, group, resource
}
}
// takes a unix microsecond rv and transforms into a snowflake format. The timestamp is converted from microsecond to
// takes a unix microsecond RV and transforms into a snowflake format. The timestamp is converted from microsecond to
// millisecond (via integer division) and the remainder is saved in the step bits. The machine ID is always 0
func SnowflakeFromRv(rv int64) int64 {
func SnowflakeFromRV(rv int64) int64 {
return (((rv / 1000) - snowflake.Epoch) << (snowflake.NodeBits + snowflake.StepBits)) + (rv % 1000)
}
// It is generally not possible to convert from a snowflakeID to a microsecond RV due to the loss in precision
// (snowflake ID stores timestamp in milliseconds). However, this implementation stores the microsecond fraction
// in the step bits (see SnowflakeFromRV), allowing us to compute the microsecond timestamp.
func RVFromSnowflake(snowflakeID int64) int64 {
microSecFraction := snowflakeID & ((1 << snowflake.StepBits) - 1)
return ((snowflakeID>>(snowflake.NodeBits+snowflake.StepBits))+snowflake.Epoch)*1000 + microSecFraction
}
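The roundtrip works because `rv % 1000` (the microsecond remainder, always below 1000) fits inside the 12 step bits (capacity 4096), and the node bits stay zero. A self-contained sketch of the two conversions, assuming the default bwmarrin/snowflake layout (epoch 1288834974657 ms, 10 node bits, 12 step bits):

```go
package main

import "fmt"

const (
	epoch    = 1288834974657 // ms; bwmarrin/snowflake default epoch (assumption)
	nodeBits = 10
	stepBits = 12
)

// snowflakeFromRV packs a microsecond RV: the millisecond part goes in the
// timestamp field and the sub-millisecond remainder (< 1000) in the step bits.
func snowflakeFromRV(rv int64) int64 {
	return (((rv / 1000) - epoch) << (nodeBits + stepBits)) + (rv % 1000)
}

// rvFromSnowflake recovers the microsecond RV by reading the step bits back.
func rvFromSnowflake(id int64) int64 {
	microFraction := id & ((1 << stepBits) - 1)
	return ((id>>(nodeBits+stepBits))+epoch)*1000 + microFraction
}

func main() {
	rv := int64(1768246438806211) // 2026-01-12 19:33:58.806211 UTC, in microseconds
	id := snowflakeFromRV(rv)
	fmt.Println(rvFromSnowflake(id) == rv) // true
}
```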
// IsRvEqual compares two RVs. The first RV must be in snowflake format. If the direct comparison
// fails, rv2 is converted to snowflake format and compared again
func IsRvEqual(rv1, rv2 int64) bool {
@@ -377,7 +385,7 @@ func IsRvEqual(rv1, rv2 int64) bool {
return true
}
return rv1 == SnowflakeFromRv(rv2)
return rv1 == SnowflakeFromRV(rv2)
}
// Lock locks the resource version for the given key


@@ -63,3 +63,13 @@ func TestResourceVersionManager(t *testing.T) {
require.Equal(t, rv, int64(200))
})
}
func TestSnowflakeFromRVRoundtrips(t *testing.T) {
// 2026-01-12 19:33:58.806211 +0000 UTC
offset := int64(1768246438806211) // in microseconds
for n := range int64(100) {
ts := offset + n
require.Equal(t, ts, RVFromSnowflake(SnowflakeFromRV(ts)))
}
}


@@ -99,6 +99,9 @@ func NewResourceServer(opts ServerOptions) (resource.ResourceServer, error) {
return nil, err
}
isHA := isHighAvailabilityEnabled(opts.Cfg.SectionWithEnvOverrides("database"),
opts.Cfg.SectionWithEnvOverrides("resource_api"))
if opts.Cfg.EnableSQLKVBackend {
sqlkv, err := resource.NewSQLKV(eDB)
if err != nil {
@@ -106,9 +109,10 @@ func NewResourceServer(opts ServerOptions) (resource.ResourceServer, error) {
}
kvBackendOpts := resource.KVBackendOptions{
KvStore: sqlkv,
Tracer: opts.Tracer,
Reg: opts.Reg,
KvStore: sqlkv,
Tracer: opts.Tracer,
Reg: opts.Reg,
UseChannelNotifier: !isHA,
}
ctx := context.Background()
@@ -140,9 +144,6 @@ func NewResourceServer(opts ServerOptions) (resource.ResourceServer, error) {
serverOptions.Backend = kvBackend
serverOptions.Diagnostics = kvBackend
} else {
isHA := isHighAvailabilityEnabled(opts.Cfg.SectionWithEnvOverrides("database"),
opts.Cfg.SectionWithEnvOverrides("resource_api"))
backend, err := NewBackend(BackendOptions{
DBProvider: eDB,
Reg: opts.Reg,


@@ -23,6 +23,7 @@ import (
"github.com/grafana/authlib/types"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
sqldb "github.com/grafana/grafana/pkg/storage/unified/sql/db"
@@ -99,6 +100,10 @@ func RunStorageBackendTest(t *testing.T, newBackend NewBackendFunc, opts *TestOp
}
t.Run(tc.name, func(t *testing.T) {
if db.IsTestDbSQLite() {
t.Skip("Skipping tests on sqlite until channel notifier is implemented")
}
tc.fn(t, newBackend(context.Background()), opts.NSPrefix)
})
}
@@ -1166,7 +1171,7 @@ func runTestIntegrationBackendCreateNewResource(t *testing.T, backend resource.S
}))
server := newServer(t, backend)
ns := nsPrefix + "-create-resource"
ns := nsPrefix + "-create-rsrce" // create-resource
ctx = request.WithNamespace(ctx, ns)
request := &resourcepb.CreateRequest{
@@ -1607,7 +1612,7 @@ func (s *sliceBulkRequestIterator) RollbackRequested() bool {
func runTestIntegrationBackendOptimisticLocking(t *testing.T, backend resource.StorageBackend, nsPrefix string) {
ctx := testutil.NewTestContext(t, time.Now().Add(30*time.Second))
ns := nsPrefix + "-optimistic-locking"
ns := nsPrefix + "-optimis-lock" // optimistic-locking, shortened to stay within the 40-character namespace limit
t.Run("concurrent updates with same RV - only one succeeds", func(t *testing.T) {
// Create initial resource with rv0 (no previous RV)


@@ -36,6 +36,10 @@ func NewTestSqlKvBackend(t *testing.T, ctx context.Context, withRvManager bool)
KvStore: kv,
}
if db.DriverName() == "sqlite3" {
kvOpts.UseChannelNotifier = true
}
if withRvManager {
dialect := sqltemplate.DialectForDriver(db.DriverName())
rvManager, err := rvmanager.NewResourceVersionManager(rvmanager.ResourceManagerOptions{
@@ -200,7 +204,7 @@ func verifyKeyPath(t *testing.T, db sqldb.DB, ctx context.Context, key *resource
var keyPathRV int64
if isSqlBackend {
// Convert microsecond RV to snowflake for key_path construction
keyPathRV = rvmanager.SnowflakeFromRv(resourceVersion)
keyPathRV = rvmanager.SnowflakeFromRV(resourceVersion)
} else {
// KV backend already provides snowflake RV
keyPathRV = resourceVersion
@@ -434,9 +438,6 @@ func verifyResourceHistoryTable(t *testing.T, db sqldb.DB, namespace string, res
rows, err := db.QueryContext(ctx, query, namespace)
require.NoError(t, err)
defer func() {
_ = rows.Close()
}()
var records []ResourceHistoryRecord
for rows.Next() {
@@ -460,33 +461,34 @@ func verifyResourceHistoryTable(t *testing.T, db sqldb.DB, namespace string, res
for resourceIdx, res := range resources {
// Check create record (action=1, generation=1)
createRecord := records[recordIndex]
verifyResourceHistoryRecord(t, createRecord, res, resourceIdx, 1, 0, 1, resourceVersions[resourceIdx][0])
verifyResourceHistoryRecord(t, createRecord, namespace, res, resourceIdx, 1, 0, 1, resourceVersions[resourceIdx][0])
recordIndex++
}
for resourceIdx, res := range resources {
// Check update record (action=2, generation=2)
updateRecord := records[recordIndex]
verifyResourceHistoryRecord(t, updateRecord, res, resourceIdx, 2, resourceVersions[resourceIdx][0], 2, resourceVersions[resourceIdx][1])
verifyResourceHistoryRecord(t, updateRecord, namespace, res, resourceIdx, 2, resourceVersions[resourceIdx][0], 2, resourceVersions[resourceIdx][1])
recordIndex++
}
for resourceIdx, res := range resources[:2] {
// Check delete record (action=3, generation=0) - only first 2 resources were deleted
deleteRecord := records[recordIndex]
verifyResourceHistoryRecord(t, deleteRecord, res, resourceIdx, 3, resourceVersions[resourceIdx][1], 0, resourceVersions[resourceIdx][2])
verifyResourceHistoryRecord(t, deleteRecord, namespace, res, resourceIdx, 3, resourceVersions[resourceIdx][1], 0, resourceVersions[resourceIdx][2])
recordIndex++
}
}
// verifyResourceHistoryRecord validates a single resource_history record
func verifyResourceHistoryRecord(t *testing.T, record ResourceHistoryRecord, expectedRes struct{ name, folder string }, resourceIdx, expectedAction int, expectedPrevRV int64, expectedGeneration int, expectedRV int64) {
func verifyResourceHistoryRecord(t *testing.T, record ResourceHistoryRecord, namespace string, expectedRes struct{ name, folder string }, resourceIdx, expectedAction int, expectedPrevRV int64, expectedGeneration int, expectedRV int64) {
// Validate GUID (should be non-empty)
require.NotEmpty(t, record.GUID, "GUID should not be empty")
// Validate group/resource/namespace/name
require.Equal(t, "playlist.grafana.app", record.Group)
require.Equal(t, "playlists", record.Resource)
require.Equal(t, namespace, record.Namespace)
require.Equal(t, expectedRes.name, record.Name)
// Validate value contains expected JSON - server modifies/formats the JSON differently for different operations
@@ -513,8 +515,12 @@ func verifyResourceHistoryRecord(t *testing.T, record ResourceHistoryRecord, exp
// For KV backend operations, expectedPrevRV is now in snowflake format (returned by KV backend)
// but resource_history table stores microsecond RV, so we need to use IsRvEqual for comparison
if strings.Contains(record.Namespace, "-kv") {
require.True(t, rvmanager.IsRvEqual(expectedPrevRV, record.PreviousResourceVersion),
"Previous resource version should match (KV backend snowflake format)")
if expectedPrevRV == 0 {
require.Zero(t, record.PreviousResourceVersion)
} else {
require.Equal(t, expectedPrevRV, rvmanager.SnowflakeFromRV(record.PreviousResourceVersion),
"Previous resource version should match (KV backend snowflake format)")
}
} else {
require.Equal(t, expectedPrevRV, record.PreviousResourceVersion)
}
@@ -546,9 +552,6 @@ func verifyResourceTable(t *testing.T, db sqldb.DB, namespace string, resources
rows, err := db.QueryContext(ctx, query, namespace)
require.NoError(t, err)
defer func() {
_ = rows.Close()
}()
var records []ResourceRecord
for rows.Next() {
@@ -612,9 +615,6 @@ func verifyResourceVersionTable(t *testing.T, db sqldb.DB, namespace string, res
// Check that we have exactly one entry for playlist.grafana.app/playlists
rows, err := db.QueryContext(ctx, query, "playlist.grafana.app", "playlists")
require.NoError(t, err)
defer func() {
_ = rows.Close()
}()
var records []ResourceVersionRecord
for rows.Next() {
@@ -649,7 +649,7 @@ func verifyResourceVersionTable(t *testing.T, db sqldb.DB, namespace string, res
isKvBackend := strings.Contains(namespace, "-kv")
recordResourceVersion := record.ResourceVersion
if isKvBackend {
recordResourceVersion = rvmanager.SnowflakeFromRv(record.ResourceVersion)
recordResourceVersion = rvmanager.SnowflakeFromRV(record.ResourceVersion)
}
require.Less(t, recordResourceVersion, int64(9223372036854775807), "resource_version should be reasonable")
@@ -841,24 +841,20 @@ func runMixedConcurrentOperations(t *testing.T, sqlServer, kvServer resource.Res
}
// SQL backend operations
wg.Add(1)
go func() {
defer wg.Done()
wg.Go(func() {
<-startBarrier // Wait for signal to start
if err := runBackendOperationsWithCounts(ctx, sqlServer, namespace+"-sql", "sql", opCounts); err != nil {
errors <- fmt.Errorf("SQL backend operations failed: %w", err)
}
}()
})
// KV backend operations
wg.Add(1)
go func() {
defer wg.Done()
wg.Go(func() {
<-startBarrier // Wait for signal to start
if err := runBackendOperationsWithCounts(ctx, kvServer, namespace+"-kv", "kv", opCounts); err != nil {
errors <- fmt.Errorf("KV backend operations failed: %w", err)
}
}()
})
// Start both goroutines simultaneously
close(startBarrier)


@@ -8,6 +8,7 @@ import (
"github.com/stretchr/testify/require"
"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/util/testutil"
)
func TestBadgerKVStorageBackend(t *testing.T) {
@@ -36,19 +37,13 @@ func TestBadgerKVStorageBackend(t *testing.T) {
})
}
func TestSQLKVStorageBackend(t *testing.T) {
func TestIntegrationSQLKVStorageBackend(t *testing.T) {
testutil.SkipIntegrationTestInShortMode(t)
skipTests := map[string]bool{
TestWatchWriteEvents: true,
TestList: true,
TestBlobSupport: true,
TestGetResourceStats: true,
TestListHistory: true,
TestListHistoryErrorReporting: true,
TestListModifiedSince: true,
TestListTrash: true,
TestCreateNewResource: true,
TestGetResourceLastImportTime: true,
TestOptimisticLocking: true,
}
t.Run("Without RvManager", func(t *testing.T) {
@@ -56,7 +51,7 @@ func TestSQLKVStorageBackend(t *testing.T) {
backend, _ := NewTestSqlKvBackend(t, ctx, false)
return backend
}, &TestOptions{
NSPrefix: "sqlkvstorage-test",
NSPrefix: "sqlkvstoragetest",
SkipTests: skipTests,
})
})
@@ -66,7 +61,7 @@ func TestSQLKVStorageBackend(t *testing.T) {
backend, _ := NewTestSqlKvBackend(t, ctx, true)
return backend
}, &TestOptions{
NSPrefix: "sqlkvstorage-withrvmanager-test",
NSPrefix: "sqlkvstoragetest-rvmanager",
SkipTests: skipTests,
})
})


@@ -10,10 +10,10 @@ import (
"github.com/grafana/alerting/notify"
"github.com/grafana/alerting/receivers/schema"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/grafana/grafana/apps/alerting/notifications/pkg/apis/alertingnotifications/v0alpha1"
"github.com/grafana/grafana/pkg/services/featuremgmt"
@@ -21,7 +21,6 @@ import (
"github.com/grafana/grafana/pkg/services/ngalert/models"
"github.com/grafana/grafana/pkg/tests/api/alerting"
"github.com/grafana/grafana/pkg/tests/apis"
test_common "github.com/grafana/grafana/pkg/tests/apis/alerting/notifications/common"
"github.com/grafana/grafana/pkg/tests/testinfra"
)
@@ -34,7 +33,8 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
},
})
receiverClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
receiverClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
cliCfg := helper.Org1.Admin.NewRestConfig()
alertingApi := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
@@ -58,9 +58,9 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
response := alertingApi.ConvertPrometheusPostAlertmanagerConfig(t, amConfig, headers)
require.Equal(t, "success", response.Status)
receiversRaw, err := receiverClient.Client.List(ctx, v1.ListOptions{})
receiversRaw, err := receiverClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
raw, err := receiversRaw.MarshalJSON()
raw, err := json.Marshal(receiversRaw)
require.NoError(t, err)
expectedBytes, err := os.ReadFile(path.Join("test-data", "imported-expected-snapshot.json"))
@@ -74,7 +74,7 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
require.NoError(t, err)
}
receivers, err := receiverClient.List(ctx, v1.ListOptions{})
receivers, err := receiverClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
t.Run("secure fields should be properly masked", func(t *testing.T) {
for _, receiver := range receivers.Items {
@@ -114,14 +114,14 @@ func TestIntegrationReadImported_Snapshot(t *testing.T) {
toUpdate := receivers.Items[1]
toUpdate.Spec.Title = "another title"
_, err = receiverClient.Update(ctx, &toUpdate, v1.UpdateOptions{})
_, err = receiverClient.Update(ctx, &toUpdate, resource.UpdateOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})
t.Run("should not be able to delete", func(t *testing.T) {
toDelete := receivers.Items[1]
err = receiverClient.Delete(ctx, toDelete.Name, v1.DeleteOptions{})
err = receiverClient.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: toDelete.Name}, resource.DeleteOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})
}


@@ -15,12 +15,12 @@ import (
"github.com/grafana/alerting/notify/notifytest"
"github.com/grafana/alerting/receivers/line"
"github.com/grafana/alerting/receivers/schema"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/types"
"github.com/grafana/alerting/notify"
@@ -65,7 +65,8 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
client := test_common.NewReceiverClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
newResource := &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -77,42 +78,42 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
}
t.Run("create should fail if object name is specified", func(t *testing.T) {
resource := newResource.Copy().(*v0alpha1.Receiver)
resource.Name = "new-receiver"
_, err := client.Create(ctx, resource, v1.CreateOptions{})
receiver := newResource.Copy().(*v0alpha1.Receiver)
receiver.Name = "new-receiver"
_, err := client.Create(ctx, receiver, resource.CreateOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})
var resourceID string
var resourceID resource.Identifier
t.Run("create should succeed and provide resource name", func(t *testing.T) {
actual, err := client.Create(ctx, newResource, v1.CreateOptions{})
actual, err := client.Create(ctx, newResource, resource.CreateOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
resourceID = actual.Name
resourceID = actual.GetStaticMetadata().Identifier()
})
t.Run("resource should be available by the identifier", func(t *testing.T) {
actual, err := client.Get(ctx, resourceID, v1.GetOptions{})
actual, err := client.Get(ctx, resourceID)
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.Equal(t, newResource.Spec, actual.Spec)
})
t.Run("update should rename receiver if name in the specification changes", func(t *testing.T) {
existing, err := client.Get(ctx, resourceID, v1.GetOptions{})
existing, err := client.Get(ctx, resourceID)
require.NoError(t, err)
updated := existing.Copy().(*v0alpha1.Receiver)
updated.Spec.Title = "another-newReceiver"
actual, err := client.Update(ctx, updated, v1.UpdateOptions{})
actual, err := client.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.Equal(t, updated.Spec, actual.Spec)
require.NotEqualf(t, updated.Name, actual.Name, "Update should change the resource name but it didn't")
require.NotEqualf(t, updated.ResourceVersion, actual.ResourceVersion, "Update should change the resource version but it didn't")
resource, err := client.Get(ctx, actual.Name, v1.GetOptions{})
resource, err := client.Get(ctx, actual.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, actual.Spec, resource.Spec)
require.Equal(t, actual.Name, resource.Name)
@@ -140,7 +141,8 @@ func TestIntegrationResourcePermissions(t *testing.T) {
admin := org1.Admin
viewer := org1.Viewer
editor := org1.Editor
adminClient := test_common.NewReceiverClient(t, admin)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)
writeACMetadata := []string{"canWrite", "canDelete"}
allACMetadata := []string{"canWrite", "canDelete", "canReadSecrets", "canAdmin", "canModifyProtected"}
@@ -292,8 +294,10 @@ func TestIntegrationResourcePermissions(t *testing.T) {
},
} {
t.Run(tc.name, func(t *testing.T) {
createClient := test_common.NewReceiverClient(t, tc.creatingUser)
client := test_common.NewReceiverClient(t, tc.testUser)
createClient, err := v0alpha1.NewReceiverClientFromGenerator(tc.creatingUser.GetClientRegistry())
require.NoError(t, err)
client, err := v0alpha1.NewReceiverClientFromGenerator(tc.testUser.GetClientRegistry())
require.NoError(t, err)
var created = &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
@@ -308,12 +312,12 @@ func TestIntegrationResourcePermissions(t *testing.T) {
require.NoError(t, err)
// Create receiver with creatingUser
created, err = createClient.Create(ctx, created, v1.CreateOptions{})
created, err = createClient.Create(ctx, created, resource.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, created)
defer func() {
_ = adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
_ = adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
}()
// Assign resource permissions
@@ -338,7 +342,7 @@ func TestIntegrationResourcePermissions(t *testing.T) {
// Obtain expected responses using admin client as source of truth.
expectedGetWithMetadata, expectedListWithMetadata := func() (*v0alpha1.Receiver, *v0alpha1.Receiver) {
expectedGet, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
expectedGet, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.NotNil(t, expectedGet)
@@ -352,7 +356,7 @@ func TestIntegrationResourcePermissions(t *testing.T) {
expectedGetWithMetadata.SetAccessControl(ac)
}
expectedList, err := adminClient.List(ctx, v1.ListOptions{})
expectedList, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
expectedListWithMetadata := extractReceiverFromList(expectedList, created.Name)
require.NotNil(t, expectedListWithMetadata)
@@ -368,26 +372,26 @@ func TestIntegrationResourcePermissions(t *testing.T) {
}()
t.Run("should be able to list receivers", func(t *testing.T) {
list, err := client.List(ctx, v1.ListOptions{})
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
listedReceiver := extractReceiverFromList(list, created.Name)
assert.Equalf(t, expectedListWithMetadata, listedReceiver, "Expected %v but got %v", expectedListWithMetadata, listedReceiver)
})
t.Run("should be able to read receiver by resource identifier", func(t *testing.T) {
got, err := client.Get(ctx, expectedGetWithMetadata.Name, v1.GetOptions{})
got, err := client.Get(ctx, expectedGetWithMetadata.GetStaticMetadata().Identifier())
require.NoError(t, err)
assert.Equalf(t, expectedGetWithMetadata, got, "Expected %v but got %v", expectedGetWithMetadata, got)
})
} else {
t.Run("list receivers should be empty", func(t *testing.T) {
list, err := client.List(ctx, v1.ListOptions{})
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Emptyf(t, list.Items, "Expected no receivers but got %v", list.Items)
})
t.Run("should be forbidden to read receiver by name", func(t *testing.T) {
_, err := client.Get(ctx, created.Name, v1.GetOptions{})
_, err := client.Get(ctx, created.GetStaticMetadata().Identifier())
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -559,10 +563,12 @@ func TestIntegrationAccessControl(t *testing.T) {
},
}
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
for _, tc := range testCases {
t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
client := test_common.NewReceiverClient(t, tc.user)
client, err := v0alpha1.NewReceiverClientFromGenerator(tc.user.GetClientRegistry())
require.NoError(t, err)
var expected = &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
@@ -580,29 +586,29 @@ func TestIntegrationAccessControl(t *testing.T) {
newReceiver.Spec.Title = fmt.Sprintf("receiver-2-%s", tc.user.Identity.GetLogin())
if tc.canCreate {
t.Run("should be able to create receiver", func(t *testing.T) {
actual, err := client.Create(ctx, newReceiver, v1.CreateOptions{})
actual, err := client.Create(ctx, newReceiver, resource.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.Equal(t, newReceiver.Spec, actual.Spec)
t.Run("should fail if already exists", func(t *testing.T) {
_, err := client.Create(ctx, newReceiver, v1.CreateOptions{})
_, err := client.Create(ctx, newReceiver, resource.CreateOptions{})
require.Truef(t, errors.IsConflict(err), "expected Conflict but got %s", err)
})
// Cleanup.
require.NoError(t, adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{}))
require.NoError(t, adminClient.Delete(ctx, actual.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
})
} else {
t.Run("should be forbidden to create", func(t *testing.T) {
_, err := client.Create(ctx, newReceiver, v1.CreateOptions{})
_, err := client.Create(ctx, newReceiver, resource.CreateOptions{})
require.Truef(t, errors.IsForbidden(err), "Payload %s", string(d))
})
}
// create resource to proceed with other tests. We don't use the one created above because the user will always
// have admin permissions on it.
expected, err = adminClient.Create(ctx, expected, v1.CreateOptions{})
expected, err = adminClient.Create(ctx, expected, resource.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, expected)
@@ -627,34 +633,34 @@ func TestIntegrationAccessControl(t *testing.T) {
expectedWithMetadata.SetAccessControl("canAdmin")
}
t.Run("should be able to list receivers", func(t *testing.T) {
list, err := client.List(ctx, v1.ListOptions{})
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 2) // default + created
})
t.Run("should be able to read receiver by resource identifier", func(t *testing.T) {
got, err := client.Get(ctx, expected.Name, v1.GetOptions{})
got, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, expectedWithMetadata, got)
t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("list receivers should be empty", func(t *testing.T) {
list, err := client.List(ctx, v1.ListOptions{})
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Emptyf(t, list.Items, "Expected no receivers but got %v", list.Items)
})
t.Run("should be forbidden to read receiver by name", func(t *testing.T) {
_, err := client.Get(ctx, expected.Name, v1.GetOptions{})
_, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
@@ -668,7 +674,7 @@ func TestIntegrationAccessControl(t *testing.T) {
if tc.canUpdate {
t.Run("should be able to update receiver", func(t *testing.T) {
updated, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
updated, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
expected = updated
@@ -676,7 +682,7 @@ func TestIntegrationAccessControl(t *testing.T) {
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.Receiver)
up.Name = "notFound"
_, err := client.Update(ctx, up, v1.UpdateOptions{})
_, err := client.Update(ctx, up, resource.UpdateOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
@@ -686,7 +692,7 @@ func TestIntegrationAccessControl(t *testing.T) {
createIntegration(t, "webhook"),
}
expected, err = adminClient.Update(ctx, updatedExpected, v1.UpdateOptions{})
expected, err = adminClient.Update(ctx, updatedExpected, resource.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, expected)
@@ -695,60 +701,62 @@ func TestIntegrationAccessControl(t *testing.T) {
if tc.canUpdateProtected {
t.Run("should be able to update protected fields of the receiver", func(t *testing.T) {
updated, err := client.Update(ctx, updatedProtected, v1.UpdateOptions{})
updated, err := client.Update(ctx, updatedProtected, resource.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, updated)
expected = updated
})
} else {
t.Run("should be forbidden to edit protected fields of the receiver", func(t *testing.T) {
_, err := client.Update(ctx, updatedProtected, v1.UpdateOptions{})
_, err := client.Update(ctx, updatedProtected, resource.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
} else {
t.Run("should be forbidden to update receiver", func(t *testing.T) {
_, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
_, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.Receiver)
up.Name = "notFound"
_, err := client.Update(ctx, up, v1.UpdateOptions{})
_, err := client.Update(ctx, up, resource.UpdateOptions{
ResourceVersion: up.ResourceVersion,
})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.Falsef(t, tc.canUpdateProtected, "Invalid combination of assertions. CanUpdateProtected should be false")
}
deleteOptions := v1.DeleteOptions{Preconditions: &v1.Preconditions{ResourceVersion: util.Pointer(expected.ResourceVersion)}}
deleteOptions := resource.DeleteOptions{Preconditions: resource.DeleteOptionsPreconditions{ResourceVersion: expected.ResourceVersion}}
if tc.canDelete {
t.Run("should be able to delete receiver", func(t *testing.T) {
err := client.Delete(ctx, expected.Name, deleteOptions)
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), deleteOptions)
require.NoError(t, err)
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to delete receiver", func(t *testing.T) {
err := client.Delete(ctx, expected.Name, deleteOptions)
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), deleteOptions)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
}
if tc.canRead {
t.Run("should get empty list if no receivers", func(t *testing.T) {
list, err := client.List(ctx, v1.ListOptions{})
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 1)
})
@@ -766,7 +774,8 @@ func TestIntegrationInUseMetadata(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
// Prepare environment and create a notification policy and rule that use the receiver
alertmanagerRaw, err := testData.ReadFile(path.Join("test-data", "notification-settings.json"))
require.NoError(t, err)
@@ -813,7 +822,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {
requestReceivers := func(t *testing.T, title string) (v0alpha1.Receiver, v0alpha1.Receiver) {
t.Helper()
receivers, err := adminClient.List(ctx, v1.ListOptions{})
receivers, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, receivers.Items, 2)
idx := slices.IndexFunc(receivers.Items, func(interval v0alpha1.Receiver) bool {
@@ -821,7 +830,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {
})
receiverListed := receivers.Items[idx]
receiverGet, err := adminClient.Get(ctx, receiverListed.Name, v1.GetOptions{})
receiverGet, err := adminClient.Get(ctx, receiverListed.GetStaticMetadata().Identifier())
require.NoError(t, err)
return receiverListed, *receiverGet
@@ -846,8 +855,9 @@ func TestIntegrationInUseMetadata(t *testing.T) {
amConfig.AlertmanagerConfig.Route.Routes = amConfig.AlertmanagerConfig.Route.Routes[:1]
v1Route, err := routingtree.ConvertToK8sResource(helper.Org1.AdminServiceAccount.OrgId, *amConfig.AlertmanagerConfig.Route, "", func(int64) string { return "default" })
require.NoError(t, err)
routeAdminClient := test_common.NewRoutingTreeClient(t, helper.Org1.Admin)
_, err = routeAdminClient.Update(ctx, v1Route, v1.UpdateOptions{})
routeAdminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
_, err = routeAdminClient.Update(ctx, v1Route, resource.UpdateOptions{})
require.NoError(t, err)
receiverListed, receiverGet = requestReceivers(t, "user-defined")
@@ -868,7 +878,7 @@ func TestIntegrationInUseMetadata(t *testing.T) {
amConfig.AlertmanagerConfig.Route.Routes = nil
v1route, err := routingtree.ConvertToK8sResource(1, *amConfig.AlertmanagerConfig.Route, "", func(int64) string { return "default" })
require.NoError(t, err)
_, err = routeAdminClient.Update(ctx, v1route, v1.UpdateOptions{})
_, err = routeAdminClient.Update(ctx, v1route, resource.UpdateOptions{})
require.NoError(t, err)
// Remove the remaining rules.
@@ -892,7 +902,8 @@ func TestIntegrationProvisioning(t *testing.T) {
org := helper.Org1
admin := org.Admin
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
db, err := store.ProvideDBStore(env.Cfg, env.FeatureToggles, env.SQLStore, &foldertest.FakeService{}, &dashboards.FakeDashboardService{}, ac, bus.ProvideBus(tracing.InitializeTracerForTest()))
@@ -908,7 +919,7 @@ func TestIntegrationProvisioning(t *testing.T) {
createIntegration(t, "email"),
},
},
}, v1.CreateOptions{})
}, resource.CreateOptions{})
require.NoError(t, err)
require.Equal(t, "none", created.GetProvenanceStatus())
@@ -917,23 +928,23 @@ func TestIntegrationProvisioning(t *testing.T) {
UID: *created.Spec.Integrations[0].Uid,
}, admin.Identity.GetOrgID(), "API"))
got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, "API", got.GetProvenanceStatus())
})
t.Run("should not let update if provisioned", func(t *testing.T) {
got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
updated := got.Copy().(*v0alpha1.Receiver)
updated.Spec.Integrations = append(updated.Spec.Integrations, createIntegration(t, "email"))
_, err = adminClient.Update(ctx, updated, v1.UpdateOptions{})
_, err = adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should not let delete if provisioned", func(t *testing.T) {
err := adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
err := adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -944,7 +955,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
oldClient := test_common.NewReceiverClient(t, helper.Org1.Admin) // TODO replace with regular client once Delete works
receiver := v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -955,21 +969,22 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
},
}
created, err := adminClient.Create(ctx, &receiver, v1.CreateOptions{})
created, err := adminClient.Create(ctx, &receiver, resource.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, created)
require.NotEmpty(t, created.ResourceVersion)
t.Run("should forbid if version does not match", func(t *testing.T) {
t.Run("should conflict if version does not match", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.Receiver)
updated.ResourceVersion = "test"
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
ResourceVersion: "test",
})
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err)
})
t.Run("should update if version matches", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.Receiver)
updated.Spec.Integrations = append(updated.Spec.Integrations, createIntegration(t, "email"))
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
for i, integration := range actualUpdated.Spec.Integrations {
updated.Spec.Integrations[i].Uid = integration.Uid
@@ -981,25 +996,25 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
updated := created.Copy().(*v0alpha1.Receiver)
updated.ResourceVersion = ""
updated.Spec.Integrations = append(updated.Spec.Integrations, createIntegration(t, "webhook"))
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
_, err := oldClient.Update(ctx, updated, v1.UpdateOptions{})
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err) // TODO Change that? K8s returns 400 instead.
})
t.Run("should fail to delete if version does not match", func(t *testing.T) {
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
err = oldClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer("something"),
},
})
require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
require.Truef(t, errors.IsConflict(err), "should get conflict error but got %s", err)
})
t.Run("should succeed if version matches", func(t *testing.T) {
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
err = oldClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -1007,10 +1022,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
require.NoError(t, err)
})
t.Run("should succeed if version is empty", func(t *testing.T) {
actual, err := adminClient.Create(ctx, &receiver, v1.CreateOptions{})
actual, err := adminClient.Create(ctx, &receiver, resource.CreateOptions{})
require.NoError(t, err)
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
err = oldClient.Delete(ctx, actual.Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -1025,7 +1040,8 @@ func TestIntegrationPatch(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
receiver := v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -1040,40 +1056,40 @@ func TestIntegrationPatch(t *testing.T) {
},
}
current, err := adminClient.Create(ctx, &receiver, v1.CreateOptions{})
current, err := adminClient.Create(ctx, &receiver, resource.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, current)
t.Run("should patch with json patch", func(t *testing.T) {
current, err := adminClient.Get(ctx, current.Name, v1.GetOptions{})
current, err := adminClient.Get(ctx, current.GetStaticMetadata().Identifier())
require.NoError(t, err)
index := slices.IndexFunc(current.Spec.Integrations, func(t v0alpha1.ReceiverIntegration) bool {
return t.Type == "webhook"
})
patch := []map[string]any{
patch := []resource.PatchOperation{
{
"op": "remove",
"path": fmt.Sprintf("/spec/integrations/%d/settings/username", index),
Operation: "remove",
Path: fmt.Sprintf("/spec/integrations/%d/settings/username", index),
},
{
"op": "remove",
"path": fmt.Sprintf("/spec/integrations/%d/secureFields/password", index),
Operation: "remove",
Path: fmt.Sprintf("/spec/integrations/%d/secureFields/password", index),
},
{
"op": "replace",
"path": fmt.Sprintf("/spec/integrations/%d/settings/authorization_scheme", index),
"value": "bearer",
Operation: "replace",
Path: fmt.Sprintf("/spec/integrations/%d/settings/authorization_scheme", index),
Value: "bearer",
},
{
"op": "add",
"path": fmt.Sprintf("/spec/integrations/%d/settings/authorization_credentials", index),
"value": "authz-token",
Operation: "add",
Path: fmt.Sprintf("/spec/integrations/%d/settings/authorization_credentials", index),
Value: "authz-token",
},
{
"op": "remove",
"path": fmt.Sprintf("/spec/integrations/%d/secureFields/authorization_credentials", index),
Operation: "remove",
Path: fmt.Sprintf("/spec/integrations/%d/secureFields/authorization_credentials", index),
},
}
@@ -1084,10 +1100,7 @@ func TestIntegrationPatch(t *testing.T) {
delete(expected.SecureFields, "password")
expected.SecureFields["authorization_credentials"] = true
patchData, err := json.Marshal(patch)
require.NoError(t, err)
result, err := adminClient.Patch(ctx, current.Name, types.JSONPatchType, patchData, v1.PatchOptions{})
result, err := adminClient.Patch(ctx, current.GetStaticMetadata().Identifier(), resource.PatchRequest{Operations: patch}, resource.PatchOptions{})
require.NoError(t, err)
require.EqualValues(t, expected, result.Spec.Integrations[index])
@@ -1127,7 +1140,8 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
// Prepare environment and create a notification policy and rule that use the receiver
alertmanagerRaw, err := testData.ReadFile(path.Join("test-data", "notification-settings.json"))
require.NoError(t, err)
@@ -1146,7 +1160,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
_, status, data := legacyCli.PostRulesGroupWithStatus(t, folderUID, &ruleGroup, false)
require.Equalf(t, http.StatusAccepted, status, "Failed to post Rule: %s", data)
receivers, err := adminClient.List(ctx, v1.ListOptions{})
receivers, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, receivers.Items, 2)
idx := slices.IndexFunc(receivers.Items, func(interval v0alpha1.Receiver) bool {
@@ -1164,7 +1178,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
expectedTitle := renamed.Spec.Title + "-new"
renamed.Spec.Title = expectedTitle
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
require.NoError(t, err)
updatedRuleGroup, status := legacyCli.GetRulesGroup(t, folderUID, ruleGroup.Name)
@@ -1178,7 +1192,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
assert.Equalf(t, expectedTitle, route.Receiver, "receiver in routes should have been renamed but it was not")
}
actual, err = adminClient.Get(ctx, actual.Name, v1.GetOptions{})
actual, err = adminClient.Get(ctx, actual.GetStaticMetadata().Identifier())
require.NoError(t, err)
receiver = *actual
@@ -1194,20 +1208,20 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
t.Cleanup(func() {
require.NoError(t, db.DeleteProvenance(ctx, &currentRoute, orgID))
})
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
t.Run("provisioned rules", func(t *testing.T) {
ruleUid := currentRuleGroup.Rules[0].GrafanaManagedAlert.UID
resource := &ngmodels.AlertRule{UID: ruleUid}
require.NoError(t, db.SetProvenance(ctx, resource, orgID, "API"))
rule := &ngmodels.AlertRule{UID: ruleUid}
require.NoError(t, db.SetProvenance(ctx, rule, orgID, "API"))
t.Cleanup(func() {
require.NoError(t, db.DeleteProvenance(ctx, resource, orgID))
require.NoError(t, db.DeleteProvenance(ctx, rule, orgID))
})
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -1216,7 +1230,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
t.Run("Delete", func(t *testing.T) {
t.Run("should fail to delete if receiver is used in rule and routes", func(t *testing.T) {
err := adminClient.Delete(ctx, receiver.Name, v1.DeleteOptions{})
err := adminClient.Delete(ctx, receiver.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -1225,7 +1239,7 @@ func TestIntegrationReferentialIntegrity(t *testing.T) {
route.Routes[0].Receiver = ""
legacyCli.UpdateRoute(t, route, true)
err = adminClient.Delete(ctx, receiver.Name, v1.DeleteOptions{})
err = adminClient.Delete(ctx, receiver.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
})
@@ -1237,10 +1251,11 @@ func TestIntegrationCRUD(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
var defaultReceiver *v0alpha1.Receiver
t.Run("should list the default receiver", func(t *testing.T) {
items, err := adminClient.List(ctx, v1.ListOptions{})
items, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
assert.Len(t, items.Items, 1)
defaultReceiver = &items.Items[0]
@@ -1249,7 +1264,7 @@ func TestIntegrationCRUD(t *testing.T) {
assert.NotEmpty(t, defaultReceiver.Name)
assert.NotEmpty(t, defaultReceiver.ResourceVersion)
defaultReceiver, err = adminClient.Get(ctx, defaultReceiver.Name, v1.GetOptions{})
defaultReceiver, err = adminClient.Get(ctx, defaultReceiver.GetStaticMetadata().Identifier())
require.NoError(t, err)
assert.NotEmpty(t, defaultReceiver.UID)
assert.NotEmpty(t, defaultReceiver.Name)
@@ -1262,7 +1277,7 @@ func TestIntegrationCRUD(t *testing.T) {
newDefault := defaultReceiver.Copy().(*v0alpha1.Receiver)
newDefault.Spec.Integrations = append(newDefault.Spec.Integrations, createIntegration(t, line.Type))
updatedReceiver, err := adminClient.Update(ctx, newDefault, v1.UpdateOptions{})
updatedReceiver, err := adminClient.Update(ctx, newDefault, resource.UpdateOptions{})
require.NoError(t, err)
expected := newDefault.Copy().(*v0alpha1.Receiver)
@@ -1290,12 +1305,12 @@ func TestIntegrationCRUD(t *testing.T) {
Integrations: []v0alpha1.ReceiverIntegration{},
},
}
_, err := adminClient.Create(ctx, newReceiver, v1.CreateOptions{})
_, err := adminClient.Create(ctx, newReceiver, resource.CreateOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
t.Run("should not let delete default receiver", func(t *testing.T) {
err := adminClient.Delete(ctx, defaultReceiver.Name, v1.DeleteOptions{})
err := adminClient.Delete(ctx, defaultReceiver.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -1317,7 +1332,7 @@ func TestIntegrationCRUD(t *testing.T) {
Title: "all-receivers",
Integrations: integrations,
},
}, v1.CreateOptions{})
}, resource.CreateOptions{})
require.NoError(t, err)
require.Len(t, receiver.Spec.Integrations, len(integrations))
@@ -1342,7 +1357,7 @@ func TestIntegrationCRUD(t *testing.T) {
})
t.Run("should be able to read what was created", func(t *testing.T) {
get, err := adminClient.Get(ctx, receiver.Name, v1.GetOptions{})
get, err := adminClient.Get(ctx, receiver.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, receiver, get)
t.Run("should return secrets in secureFields but not settings", func(t *testing.T) {
@@ -1394,7 +1409,7 @@ func TestIntegrationCRUD(t *testing.T) {
Title: fmt.Sprintf("invalid-%s", key),
Integrations: []v0alpha1.ReceiverIntegration{integration},
},
}, v1.CreateOptions{})
}, resource.CreateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", receiver)
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest, got: %s", err)
})
@@ -1408,7 +1423,8 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
recv1 := &v0alpha1.Receiver{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -1420,7 +1436,7 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
},
},
}
recv1, err := adminClient.Create(ctx, recv1, v1.CreateOptions{})
recv1, err = adminClient.Create(ctx, recv1, resource.CreateOptions{})
require.NoError(t, err)
recv2 := &v0alpha1.Receiver{
@@ -1434,7 +1450,7 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
},
},
}
recv2, err = adminClient.Create(ctx, recv2, v1.CreateOptions{})
recv2, err = adminClient.Create(ctx, recv2, resource.CreateOptions{})
require.NoError(t, err)
env := helper.GetEnv()
@@ -1444,18 +1460,20 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
require.NoError(t, db.SetProvenance(ctx, &definitions.EmbeddedContactPoint{
UID: *recv2.Spec.Integrations[0].Uid,
}, helper.Org1.Admin.Identity.GetOrgID(), "API"))
recv2, err = adminClient.Get(ctx, recv2.Name, v1.GetOptions{})
recv2, err = adminClient.Get(ctx, recv2.GetStaticMetadata().Identifier())
require.NoError(t, err)
receivers, err := adminClient.List(ctx, v1.ListOptions{})
receivers, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, receivers.Items, 3) // Includes default.
t.Run("should filter by receiver name", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "spec.title=" + recv1.Spec.Title,
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{
"spec.title=" + recv1.Spec.Title,
},
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -1463,8 +1481,10 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
})
t.Run("should filter by metadata name", func(t *testing.T) {
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "metadata.name=" + recv2.Name,
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{
"metadata.name=" + recv2.Name,
},
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -1473,8 +1493,10 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
t.Run("should filter by multiple filters", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.name=%s,spec.title=%s", recv2.Name, recv2.Spec.Title),
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{
fmt.Sprintf("metadata.name=%s,spec.title=%s", recv2.Name, recv2.Spec.Title),
},
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -1482,8 +1504,10 @@ func TestIntegrationReceiverListSelector(t *testing.T) {
})
t.Run("should be empty when filter does not match", func(t *testing.T) {
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.name=%s", "unknown"),
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{
fmt.Sprintf("metadata.name=%s", "unknown"),
},
})
require.NoError(t, err)
require.Empty(t, list.Items)
@@ -1497,7 +1521,8 @@ func persistInitialConfig(t *testing.T, amConfig definitions.PostableUserConfig)
helper := getTestHelper(t)
receiverClient := test_common.NewReceiverClient(t, helper.Org1.Admin)
receiverClient, err := v0alpha1.NewReceiverClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
for _, receiver := range amConfig.AlertmanagerConfig.Receivers {
if receiver.Name == "grafana-default-email" {
continue
@@ -1523,7 +1548,7 @@ func persistInitialConfig(t *testing.T, amConfig definitions.PostableUserConfig)
})
}
created, err := receiverClient.Create(ctx, &toCreate, v1.CreateOptions{})
created, err := receiverClient.Create(ctx, &toCreate, resource.CreateOptions{})
require.NoError(t, err)
for i, integration := range created.Spec.Integrations {
@@ -1533,10 +1558,11 @@ func persistInitialConfig(t *testing.T, amConfig definitions.PostableUserConfig)
nsMapper := func(_ int64) string { return "default" }
routeClient := test_common.NewRoutingTreeClient(t, helper.Org1.Admin)
routeClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
v1route, err := routingtree.ConvertToK8sResource(helper.Org1.AdminServiceAccount.OrgId, *amConfig.AlertmanagerConfig.Route, "", nsMapper)
require.NoError(t, err)
_, err = routeClient.Update(ctx, v1route, v1.UpdateOptions{})
_, err = routeClient.Update(ctx, v1route, resource.UpdateOptions{})
require.NoError(t, err)
}
@@ -1,10 +1,14 @@
{
"kind": "ReceiverList",
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"metadata": {},
"items": [
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "Z3JhZmFuYS1kZWZhdWx0LWVtYWls",
"namespace": "default",
"uid": "zyXFk301pvwNz4HRPrTMKPMFO2934cPB7H1ZXmyM1TUX",
"resourceVersion": "a82b34036bdabbc4",
"annotations": {
"grafana.com/access/canAdmin": "true",
"grafana.com/access/canDelete": "true",
@@ -15,53 +19,29 @@
"grafana.com/inUse/routes": "1",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "none"
},
"name": "Z3JhZmFuYS1kZWZhdWx0LWVtYWls",
"namespace": "default",
"resourceVersion": "a82b34036bdabbc4",
"uid": "zyXFk301pvwNz4HRPrTMKPMFO2934cPB7H1ZXmyM1TUX"
}
},
"spec": {
"title": "grafana-default-email",
"integrations": [
{
"uid": "",
"type": "email",
"version": "v1",
"disableResolveMessage": false,
"settings": {
"addresses": "\u003cexample@email.com\u003e"
},
"type": "email",
"uid": "",
"version": "v1"
}
}
],
"title": "grafana-default-email"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
"grafana.com/canUse": "false",
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "Z3JhZmFuYS1kZWZhdWx0LWVtYWlsdGVzdC1jcmVhdGUtZ2V0LWNvbmZpZw",
"namespace": "default",
"uid": "JzW6DIlcxj4sRN8A2ULcwTXAmm0Vs0Z68aEBqXSvxK0X",
"resourceVersion": "b2823b50ffa1eff6",
"uid": "JzW6DIlcxj4sRN8A2ULcwTXAmm0Vs0Z68aEBqXSvxK0X"
},
"spec": {
"integrations": [],
"title": "grafana-default-emailtest-create-get-config"
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -69,19 +49,36 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "ZGlzY29yZA",
"namespace": "default",
"resourceVersion": "06e437697f62ac59",
"uid": "8cH8Ql2S6VhPEVUhwlQEKYWyPbRJS7YKj2lEXdrehH8X"
}
},
"spec": {
"title": "grafana-default-emailtest-create-get-config",
"integrations": []
}
},
{
"metadata": {
"name": "ZGlzY29yZA",
"namespace": "default",
"uid": "8cH8Ql2S6VhPEVUhwlQEKYWyPbRJS7YKj2lEXdrehH8X",
"resourceVersion": "06e437697f62ac59",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
"grafana.com/canUse": "false",
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
}
},
"spec": {
"title": "discord",
"integrations": [
{
"uid": "",
"type": "discord",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"webhook_url": true
},
"settings": {
"http_config": {
"enable_http2": true,
@@ -95,18 +92,19 @@
"send_resolved": true,
"title": "{{ template \"discord.default.title\" . }}"
},
"type": "discord",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"webhook_url": true
}
}
],
"title": "discord"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "ZW1haWw",
"namespace": "default",
"uid": "bhlvlN758xmnwVrHVPX0c5XvFHepenUbOXP0fuE6eUMX",
"resourceVersion": "9b3ffed277cee189",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -114,19 +112,16 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "ZW1haWw",
"namespace": "default",
"resourceVersion": "9b3ffed277cee189",
"uid": "bhlvlN758xmnwVrHVPX0c5XvFHepenUbOXP0fuE6eUMX"
}
},
"spec": {
"title": "email",
"integrations": [
{
"uid": "",
"type": "email",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"auth_password": true
},
"settings": {
"auth_username": "alertmanager",
"from": "alertmanager@example.com",
@@ -144,18 +139,19 @@
},
"to": "team@example.com"
},
"type": "email",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"auth_password": true
}
}
],
"title": "email"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "amlyYQ",
"namespace": "default",
"uid": "7Pu4xcRXbvw4XEX279SoqyO8Ibo8cMl0vAJyYTsJ0NEX",
"resourceVersion": "deae9d34f8554205",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -163,19 +159,16 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "amlyYQ",
"namespace": "default",
"resourceVersion": "deae9d34f8554205",
"uid": "7Pu4xcRXbvw4XEX279SoqyO8Ibo8cMl0vAJyYTsJ0NEX"
}
},
"spec": {
"title": "jira",
"integrations": [
{
"uid": "",
"type": "jira",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"http_config.basic_auth.password": true
},
"settings": {
"api_url": "http://localhost/jira",
"custom_fields": {
@@ -203,18 +196,19 @@
"send_resolved": true,
"summary": "{{ template \"jira.default.summary\" . }}"
},
"type": "jira",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"http_config.basic_auth.password": true
}
}
],
"title": "jira"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "bXN0ZWFtcw",
"namespace": "default",
"uid": "z7xTMDjrk1HAHXPEx78tQb63LXYA6ivXLOtz2Z09ucIX",
"resourceVersion": "95c8d082d65466a3",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -222,19 +216,16 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "bXN0ZWFtcw",
"namespace": "default",
"resourceVersion": "95c8d082d65466a3",
"uid": "z7xTMDjrk1HAHXPEx78tQb63LXYA6ivXLOtz2Z09ucIX"
}
},
"spec": {
"title": "msteams",
"integrations": [
{
"uid": "",
"type": "teams",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"webhook_url": true
},
"settings": {
"http_config": {
"enable_http2": true,
@@ -249,18 +240,19 @@
"text": "{{ template \"msteams.default.text\" . }}",
"title": "{{ template \"msteams.default.title\" . }}"
},
"type": "teams",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"webhook_url": true
}
}
],
"title": "msteams"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "b3BzZ2VuaWU",
"namespace": "default",
"uid": "XmkZ214Dj030hvynYiwNLq8i6uRCjUYXMXjE5m19OKAX",
"resourceVersion": "8ee2957ba150ba16",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -268,19 +260,16 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "b3BzZ2VuaWU",
"namespace": "default",
"resourceVersion": "8ee2957ba150ba16",
"uid": "XmkZ214Dj030hvynYiwNLq8i6uRCjUYXMXjE5m19OKAX"
}
},
"spec": {
"title": "opsgenie",
"integrations": [
{
"uid": "",
"type": "opsgenie",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"api_key": true
},
"settings": {
"actions": "test actions",
"api_url": "http://localhost/opsgenie/",
@@ -311,18 +300,19 @@
"tags": "test-tags",
"update_alerts": true
},
"type": "opsgenie",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"api_key": true
}
}
],
"title": "opsgenie"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "cGFnZXJkdXR5",
"namespace": "default",
"uid": "QNitkUCkwzrIc7WVCCJGGDyvXLyo9csSUVqfyStyctQX",
"resourceVersion": "fe673d5dcd67ccf0",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -330,20 +320,16 @@
"grafana.com/inUse/routes": "1",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "cGFnZXJkdXR5",
"namespace": "default",
"resourceVersion": "fe673d5dcd67ccf0",
"uid": "QNitkUCkwzrIc7WVCCJGGDyvXLyo9csSUVqfyStyctQX"
}
},
"spec": {
"title": "pagerduty",
"integrations": [
{
"uid": "",
"type": "pagerduty",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"routing_key": true,
"service_key": true
},
"settings": {
"class": "test class",
"client": "Alertmanager",
@@ -383,18 +369,20 @@
"source": "test source",
"url": "http://localhost/pagerduty"
},
"type": "pagerduty",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"routing_key": true,
"service_key": true
}
}
],
"title": "pagerduty"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "cHVzaG92ZXI",
"namespace": "default",
"uid": "t2TJSktI6vyGfdbLOKmxH4eBqgcIGsAuW8Qm9m0HRycX",
"resourceVersion": "6ae076725ab463e0",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -402,21 +390,16 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "cHVzaG92ZXI",
"namespace": "default",
"resourceVersion": "6ae076725ab463e0",
"uid": "t2TJSktI6vyGfdbLOKmxH4eBqgcIGsAuW8Qm9m0HRycX"
}
},
"spec": {
"title": "pushover",
"integrations": [
{
"uid": "",
"type": "pushover",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"http_config.authorization.credentials": true,
"token": true,
"user_key": true
},
"settings": {
"expire": "1h0m0s",
"http_config": {
@@ -437,18 +420,21 @@
"title": "{{ template \"pushover.default.title\" . }}",
"url": "http://localhost/pushover"
},
"type": "pushover",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"http_config.authorization.credentials": true,
"token": true,
"user_key": true
}
}
],
"title": "pushover"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "c2xhY2s",
"namespace": "default",
"uid": "xSB0hnoc9j1CnLCHR3VgeVGXdVXILM0p2dM64bbHN9oX",
"resourceVersion": "ec0e343029ff5d8b",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -456,19 +442,16 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "c2xhY2s",
"namespace": "default",
"resourceVersion": "ec0e343029ff5d8b",
"uid": "xSB0hnoc9j1CnLCHR3VgeVGXdVXILM0p2dM64bbHN9oX"
}
},
"spec": {
"title": "slack",
"integrations": [
{
"uid": "",
"type": "slack",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"api_url": true
},
"settings": {
"actions": [
{
@@ -522,18 +505,19 @@
"title_link": "http://localhost",
"username": "Alerting Team"
},
"type": "slack",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"api_url": true
}
}
],
"title": "slack"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "c25z",
"namespace": "default",
"uid": "vSP8NtFr23hnqZqLxRgzUKfr1wOemOvZm1S6MYkfRI4X",
"resourceVersion": "77d734ad4c196d36",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -541,19 +525,16 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "c25z",
"namespace": "default",
"resourceVersion": "77d734ad4c196d36",
"uid": "vSP8NtFr23hnqZqLxRgzUKfr1wOemOvZm1S6MYkfRI4X"
}
},
"spec": {
"title": "sns",
"integrations": [
{
"uid": "",
"type": "sns",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"sigv4.SecretKey": true
},
"settings": {
"attributes": {
"key1": "value1"
@@ -577,18 +558,19 @@
"subject": "{{ template \"sns.default.subject\" . }}",
"topic_arn": "arn:aws:sns:us-east-1:123456789012:alerts"
},
"type": "sns",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"sigv4.SecretKey": true
}
}
],
"title": "sns"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "dGVsZWdyYW0",
"namespace": "default",
"uid": "XLWjtmYcjP5PiqBCwZXX3YKHV1G8niRtpCakIpcHqoYX",
"resourceVersion": "d9850878a33e302e",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -596,19 +578,16 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "dGVsZWdyYW0",
"namespace": "default",
"resourceVersion": "d9850878a33e302e",
"uid": "XLWjtmYcjP5PiqBCwZXX3YKHV1G8niRtpCakIpcHqoYX"
}
},
"spec": {
"title": "telegram",
"integrations": [
{
"uid": "",
"type": "telegram",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"token": true
},
"settings": {
"api_url": "http://localhost/telegram-default",
"chat": -1001234567890,
@@ -624,18 +603,19 @@
"parse_mode": "MarkdownV2",
"send_resolved": true
},
"type": "telegram",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"token": true
}
}
],
"title": "telegram"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "dmljdG9yb3Bz",
"namespace": "default",
"uid": "EWiwQ6TIW0GpEo46WusW7Nvg0HuD4QAbHf0JZ2OSOhEX",
"resourceVersion": "1e6886531440afc2",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -643,19 +623,16 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "dmljdG9yb3Bz",
"namespace": "default",
"resourceVersion": "1e6886531440afc2",
"uid": "EWiwQ6TIW0GpEo46WusW7Nvg0HuD4QAbHf0JZ2OSOhEX"
}
},
"spec": {
"title": "victorops",
"integrations": [
{
"uid": "",
"type": "victorops",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"api_key": true
},
"settings": {
"api_url": "http://localhost/victorops-default/",
"entity_display_name": "{{ template \"victorops.default.entity_display_name\" . }}",
@@ -674,18 +651,19 @@
"send_resolved": true,
"state_message": "{{ template \"victorops.default.state_message\" . }}"
},
"type": "victorops",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"api_key": true
}
}
],
"title": "victorops"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "d2ViZXg",
"namespace": "default",
"uid": "wDNufI44UXHWq4ERRYenZ7XgXVV3Tjxaokz9IjMRZ54X",
"resourceVersion": "08fc955a08dfe9c0",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -693,19 +671,16 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "d2ViZXg",
"namespace": "default",
"resourceVersion": "08fc955a08dfe9c0",
"uid": "wDNufI44UXHWq4ERRYenZ7XgXVV3Tjxaokz9IjMRZ54X"
}
},
"spec": {
"title": "webex",
"integrations": [
{
"uid": "",
"type": "webex",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"http_config.authorization.credentials": true
},
"settings": {
"api_url": "http://localhost/webes-default",
"http_config": {
@@ -723,18 +698,19 @@
"room_id": "Y2lzY29zcGFyazovL3VzL1JPT00v12345678",
"send_resolved": true
},
"type": "webex",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"http_config.authorization.credentials": true
}
}
],
"title": "webex"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "d2ViaG9vaw",
"namespace": "default",
"uid": "aKzigXATPp6HOh20yTrlTcuF2Y9IrPHridGIcWrJygsX",
"resourceVersion": "494392f899a7b410",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -742,19 +718,16 @@
"grafana.com/inUse/routes": "1",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "d2ViaG9vaw",
"namespace": "default",
"resourceVersion": "494392f899a7b410",
"uid": "aKzigXATPp6HOh20yTrlTcuF2Y9IrPHridGIcWrJygsX"
}
},
"spec": {
"title": "webhook",
"integrations": [
{
"uid": "",
"type": "webhook",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"url": true
},
"settings": {
"http_config": {
"enable_http2": true,
@@ -769,18 +742,19 @@
"timeout": "0s",
"url_file": ""
},
"type": "webhook",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"url": true
}
}
],
"title": "webhook"
]
}
},
{
"apiVersion": "notifications.alerting.grafana.app/v0alpha1",
"kind": "Receiver",
"metadata": {
"name": "d2VjaGF0",
"namespace": "default",
"uid": "jkXCvNrNVw7XX5nmYFyrGiA4ckAvJ282u2scW8KZq7IX",
"resourceVersion": "135913515cbc156b",
"annotations": {
"grafana.com/access/canModifyProtected": "true",
"grafana.com/access/canReadSecrets": "true",
@@ -788,19 +762,16 @@
"grafana.com/inUse/routes": "0",
"grafana.com/inUse/rules": "0",
"grafana.com/provenance": "converted_prometheus"
},
"name": "d2VjaGF0",
"namespace": "default",
"resourceVersion": "135913515cbc156b",
"uid": "jkXCvNrNVw7XX5nmYFyrGiA4ckAvJ282u2scW8KZq7IX"
}
},
"spec": {
"title": "wechat",
"integrations": [
{
"uid": "",
"type": "wechat",
"version": "v0mimir1",
"disableResolveMessage": false,
"secureFields": {
"api_secret": true
},
"settings": {
"agent_id": "1000002",
"api_url": "http://localhost/wechat/",
@@ -820,15 +791,12 @@
"to_tag": "tag1",
"to_user": "user1"
},
"type": "wechat",
"uid": "",
"version": "v0mimir1"
"secureFields": {
"api_secret": true
}
}
],
"title": "wechat"
]
}
}
],
"kind": "ReceiverList",
"metadata": {}
}
]
}
@@ -8,6 +8,7 @@ import (
"testing"
"time"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/prometheus/alertmanager/config"
"github.com/prometheus/alertmanager/pkg/labels"
"github.com/prometheus/common/model"
@@ -39,6 +40,11 @@ import (
"github.com/grafana/grafana/pkg/util/testutil"
)
var defaultTreeIdentifier = resource.Identifier{
Namespace: apis.DefaultNamespace,
Name: v0alpha1.UserDefinedRoutingTreeName,
}
func TestMain(m *testing.M) {
testsuite.Run(m)
}
@@ -52,7 +58,8 @@ func TestIntegrationNotAllowedMethods(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
client := common.NewRoutingTreeClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
route := &v0alpha1.RoutingTree{
ObjectMeta: v1.ObjectMeta{
@@ -60,11 +67,7 @@ func TestIntegrationNotAllowedMethods(t *testing.T) {
},
Spec: v0alpha1.RoutingTreeSpec{},
}
_, err := client.Create(ctx, route, v1.CreateOptions{})
assert.Error(t, err)
require.Truef(t, errors.IsMethodNotSupported(err), "Expected MethodNotSupported but got %s", err)
err = client.Client.DeleteCollection(ctx, v1.DeleteOptions{}, v1.ListOptions{})
_, err = client.Create(ctx, route, resource.CreateOptions{})
assert.Error(t, err)
require.Truef(t, errors.IsMethodNotSupported(err), "Expected MethodNotSupported but got %s", err)
}
@@ -154,50 +157,52 @@ func TestIntegrationAccessControl(t *testing.T) {
}
admin := org1.Admin
adminClient := common.NewRoutingTreeClient(t, admin)
adminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)
for _, tc := range testCases {
t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
client := common.NewRoutingTreeClient(t, tc.user)
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(tc.user.GetClientRegistry())
require.NoError(t, err)
if tc.canRead {
t.Run("should be able to list routing trees", func(t *testing.T) {
list, err := client.List(ctx, v1.ListOptions{})
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 1)
require.Equal(t, v0alpha1.UserDefinedRoutingTreeName, list.Items[0].Name)
})
t.Run("should be able to read routing trees by resource identifier", func(t *testing.T) {
_, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
_, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to list routing trees", func(t *testing.T) {
_, err := client.List(ctx, v1.ListOptions{})
_, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should be forbidden to read routing tree by name", func(t *testing.T) {
_, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
_, err := client.Get(ctx, defaultTreeIdentifier)
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
}
current, err := adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
current, err := adminClient.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
expected := current.Copy().(*v0alpha1.RoutingTree)
expected.Spec.Routes = []v0alpha1.RoutingTreeRoute{
@@ -217,7 +222,7 @@ func TestIntegrationAccessControl(t *testing.T) {
if tc.canUpdate {
t.Run("should be able to update routing tree", func(t *testing.T) {
updated, err := client.Update(ctx, expected, v1.UpdateOptions{})
updated, err := client.Update(ctx, expected, resource.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
expected = updated
@@ -225,21 +230,23 @@ func TestIntegrationAccessControl(t *testing.T) {
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
up := expected.Copy().(*v0alpha1.RoutingTree)
up.Name = "notFound"
_, err := client.Update(ctx, up, v1.UpdateOptions{})
_, err := client.Update(ctx, up, resource.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to update routing tree", func(t *testing.T) {
_, err := client.Update(ctx, expected, v1.UpdateOptions{})
_, err := client.Update(ctx, expected, resource.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
up := expected.Copy().(*v0alpha1.RoutingTree)
up.Name = "notFound"
_, err := client.Update(ctx, up, v1.UpdateOptions{})
_, err := client.Update(ctx, up, resource.UpdateOptions{
ResourceVersion: up.ResourceVersion,
})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
@@ -248,32 +255,32 @@ func TestIntegrationAccessControl(t *testing.T) {
if tc.canUpdate {
t.Run("should be able to reset routing tree", func(t *testing.T) {
err := client.Delete(ctx, expected.Name, v1.DeleteOptions{})
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.NoError(t, err)
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
require.Error(t, err)
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to reset routing tree", func(t *testing.T) {
err := client.Delete(ctx, expected.Name, v1.DeleteOptions{})
err := client.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
err := client.Delete(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "notfound"}, resource.DeleteOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
}
})
err := adminClient.Delete(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.DeleteOptions{})
err := adminClient.Delete(ctx, defaultTreeIdentifier, resource.DeleteOptions{})
require.NoError(t, err)
}
}
@@ -287,21 +294,22 @@ func TestIntegrationProvisioning(t *testing.T) {
org := helper.Org1
admin := org.Admin
adminClient := common.NewRoutingTreeClient(t, admin)
adminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
db, err := store.ProvideDBStore(env.Cfg, env.FeatureToggles, env.SQLStore, &foldertest.FakeService{}, &dashboards.FakeDashboardService{}, ac, bus.ProvideBus(tracing.InitializeTracerForTest()))
require.NoError(t, err)
current, err := adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
current, err := adminClient.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
require.Equal(t, "none", current.GetProvenanceStatus())
t.Run("should provide provenance status", func(t *testing.T) {
require.NoError(t, db.SetProvenance(ctx, &definitions.Route{}, admin.Identity.GetOrgID(), "API"))
got, err := adminClient.Get(ctx, current.Name, v1.GetOptions{})
got, err := adminClient.Get(ctx, current.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, "API", got.GetProvenanceStatus())
})
@@ -319,13 +327,13 @@ func TestIntegrationProvisioning(t *testing.T) {
},
}
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should not let delete if provisioned", func(t *testing.T) {
err := adminClient.Delete(ctx, current.Name, v1.DeleteOptions{})
err := adminClient.Delete(ctx, current.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -336,35 +344,37 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient := common.NewRoutingTreeClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
current, err := adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
current, err := adminClient.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
require.NotEmpty(t, current.ResourceVersion)
t.Run("should forbid if version does not match", func(t *testing.T) {
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.ResourceVersion = "test"
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
ResourceVersion: "test",
})
require.Error(t, err)
require.Truef(t, errors.IsConflict(err), "should get Forbidden error but got %s", err)
})
t.Run("should update if version matches", func(t *testing.T) {
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.Spec.Defaults.GroupBy = append(updated.Spec.Defaults.GroupBy, "data")
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, updated.ResourceVersion, actualUpdated.ResourceVersion)
})
t.Run("should update if version is empty", func(t *testing.T) {
current, err = adminClient.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
current, err = adminClient.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.ResourceVersion = ""
updated.Spec.Routes = append(updated.Spec.Routes, v0alpha1.RoutingTreeRoute{Continue: true})
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, current.ResourceVersion, actualUpdated.ResourceVersion)
@@ -380,20 +390,22 @@ func TestIntegrationDataConsistency(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
client := common.NewRoutingTreeClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
receiver := "grafana-default-email"
timeInterval := "test-time-interval"
createRoute := func(t *testing.T, route definitions.Route) {
t.Helper()
routeClient := common.NewRoutingTreeClient(t, helper.Org1.Admin)
routeClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
v1Route, err := routingtree.ConvertToK8sResource(helper.Org1.Admin.Identity.GetOrgID(), route, "", func(int64) string { return "default" })
require.NoError(t, err)
_, err = routeClient.Update(ctx, v1Route, v1.UpdateOptions{})
_, err = routeClient.Update(ctx, v1Route, resource.UpdateOptions{})
require.NoError(t, err)
}
_, err := common.NewTimeIntervalClient(t, helper.Org1.Admin).Create(ctx, &v0alpha1.TimeInterval{
_, err = common.NewTimeIntervalClient(t, helper.Org1.Admin).Create(ctx, &v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
},
@@ -435,7 +447,7 @@ func TestIntegrationDataConsistency(t *testing.T) {
},
}
createRoute(t, route)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
tree, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
expected := []v0alpha1.RoutingTreeMatcher{
{
@@ -503,9 +515,9 @@ func TestIntegrationDataConsistency(t *testing.T) {
ensureMatcher(t, labels.MatchNotEqual, "matchers", "v"),
}
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
tree, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
_, err = client.Update(ctx, tree, v1.UpdateOptions{})
_, err = client.Update(ctx, tree, resource.UpdateOptions{})
require.NoError(t, err)
cfg, _, _ = legacyCli.GetAlertmanagerConfigWithStatus(t)
@@ -542,7 +554,7 @@ func TestIntegrationDataConsistency(t *testing.T) {
createRoute(t, route)
t.Run("correctly reads all fields", func(t *testing.T) {
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
tree, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
assert.Equal(t, v0alpha1.RoutingTreeRouteDefaults{
Receiver: receiver,
@@ -589,10 +601,10 @@ func TestIntegrationDataConsistency(t *testing.T) {
t.Run("correctly saves all fields", func(t *testing.T) {
before, status, body := legacyCli.GetAlertmanagerConfigWithStatus(t)
require.Equalf(t, http.StatusOK, status, body)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
tree, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
tree.Spec.Defaults.GroupBy = []string{"test-123", "test-456", "test-789"}
_, err = client.Update(ctx, tree, v1.UpdateOptions{})
_, err = client.Update(ctx, tree, resource.UpdateOptions{})
require.NoError(t, err)
before.AlertmanagerConfig.Route.GroupByStr = []string{"test-123", "test-456", "test-789"}
@@ -640,7 +652,7 @@ func TestIntegrationDataConsistency(t *testing.T) {
}
createRoute(t, route)
tree, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
tree, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
assert.Equal(t, "foo🙂", tree.Spec.Routes[0].GroupBy[0])
expected := []v0alpha1.RoutingTreeMatcher{
@@ -666,7 +678,8 @@ func TestIntegrationExtraConfigsConflicts(t *testing.T) {
cliCfg := helper.Org1.Admin.NewRestConfig()
legacyCli := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
client := common.NewRoutingTreeClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
// Now upload a new extra config
testAlertmanagerConfigYAML := `
@@ -691,7 +704,7 @@ receivers:
}, headers)
require.Equal(t, "success", response.Status)
current, err := client.Get(ctx, v0alpha1.UserDefinedRoutingTreeName, v1.GetOptions{})
current, err := client.Get(ctx, defaultTreeIdentifier)
require.NoError(t, err)
updated := current.Copy().(*v0alpha1.RoutingTree)
updated.Spec.Routes = append(updated.Spec.Routes, v0alpha1.RoutingTreeRoute{
@@ -704,7 +717,7 @@ receivers:
},
})
_, err = client.Update(ctx, updated, v1.UpdateOptions{})
_, err = client.Update(ctx, updated, resource.UpdateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsBadRequest(err), "Should get BadRequest error but got: %s", err)
@@ -712,6 +725,6 @@ receivers:
legacyCli.ConvertPrometheusDeleteAlertmanagerConfig(t, headers)
// and try again
_, err = client.Update(ctx, updated, v1.UpdateOptions{})
_, err = client.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
}
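Across these hunks the migration is mechanical: the `common.New*Client` wrappers over the Kubernetes-style client (plain name plus `metav1.*Options`) become app-sdk generated clients that key every call on a `resource.Identifier{Namespace, Name}` and take `resource.*Options`, with the optimistic-concurrency precondition moving from the object's `ResourceVersion` field into `UpdateOptions`. A rough sketch of that calling convention, using stand-in types rather than the real `github.com/grafana/grafana-app-sdk/resource` API:

```go
package main

import "fmt"

// Stand-ins that mirror the shapes used in the diff; the real definitions
// live in github.com/grafana/grafana-app-sdk/resource.
type Identifier struct {
	Namespace string
	Name      string
}

type UpdateOptions struct {
	// The expected ResourceVersion travels in the options instead of being
	// set on the object (compare the "version does not match" subtests).
	ResourceVersion string
}

type object struct {
	ResourceVersion string
	Spec            string
}

type client struct{ store map[Identifier]*object }

// Get looks a resource up by its full identifier, as the generated clients do.
func (c *client) Get(id Identifier) (*object, error) {
	o, ok := c.store[id]
	if !ok {
		return nil, fmt.Errorf("not found: %v", id)
	}
	return o, nil
}

// Update enforces the precondition only when a version is supplied; an empty
// version means "no precondition", as in "should update if version is empty".
func (c *client) Update(id Identifier, o object, opts UpdateOptions) (*object, error) {
	cur, err := c.Get(id)
	if err != nil {
		return nil, err
	}
	if opts.ResourceVersion != "" && opts.ResourceVersion != cur.ResourceVersion {
		return nil, fmt.Errorf("conflict: want %q", cur.ResourceVersion)
	}
	o.ResourceVersion = cur.ResourceVersion + "'"
	c.store[id] = &o
	return &o, nil
}

func main() {
	id := Identifier{Namespace: "default", Name: "user-defined"}
	c := &client{store: map[Identifier]*object{id: {ResourceVersion: "1"}}}

	// Stale version in options: rejected with a conflict.
	if _, err := c.Update(id, object{Spec: "x"}, UpdateOptions{ResourceVersion: "stale"}); err == nil {
		panic("expected conflict")
	}
	// No precondition: succeeds and bumps the version.
	updated, err := c.Update(id, object{Spec: "x"}, UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(updated.ResourceVersion)
}
```

The tests above follow the same three cases: mismatched version yields a conflict, a matching or empty version succeeds, and the returned object carries a new `ResourceVersion`.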


@@ -6,6 +6,7 @@ import (
"path"
"testing"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.yaml.in/yaml/v3"
@@ -18,7 +19,6 @@ import (
"github.com/grafana/grafana/pkg/services/ngalert/models"
"github.com/grafana/grafana/pkg/tests/api/alerting"
"github.com/grafana/grafana/pkg/tests/apis"
"github.com/grafana/grafana/pkg/tests/apis/alerting/notifications/common"
"github.com/grafana/grafana/pkg/tests/testinfra"
"github.com/grafana/grafana/pkg/util/testutil"
)
@@ -35,7 +35,8 @@ func TestIntegrationImportedTemplates(t *testing.T) {
},
})
client := common.NewTemplateGroupClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
cliCfg := helper.Org1.Admin.NewRestConfig()
alertingApi := alerting.NewAlertingLegacyAPIClient(helper.GetEnv().Server.HTTPServer.Listener.Addr().String(), cliCfg.Username, cliCfg.Password)
@@ -57,7 +58,7 @@ func TestIntegrationImportedTemplates(t *testing.T) {
response := alertingApi.ConvertPrometheusPostAlertmanagerConfig(t, amConfig, headers)
require.Equal(t, "success", response.Status)
templates, err := client.List(context.Background(), metav1.ListOptions{})
templates, err := client.List(context.Background(), apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, templates.Items, 3)
@@ -90,12 +91,12 @@ func TestIntegrationImportedTemplates(t *testing.T) {
t.Run("should not be able to update", func(t *testing.T) {
tpl := templates.Items[1]
tpl.Spec.Content = "new content"
_, err := client.Update(context.Background(), &tpl, metav1.UpdateOptions{})
_, err := client.Update(context.Background(), &tpl, resource.UpdateOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})
t.Run("should not be able to delete", func(t *testing.T) {
err := client.Delete(context.Background(), templates.Items[1].Name, metav1.DeleteOptions{})
err := client.Delete(context.Background(), templates.Items[1].GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})
@@ -108,14 +109,14 @@ func TestIntegrationImportedTemplates(t *testing.T) {
}
tpl.Spec.Kind = v0alpha1.TemplateGroupTemplateKindGrafana
created, err := client.Create(context.Background(), &tpl, metav1.CreateOptions{})
created, err := client.Create(context.Background(), &tpl, resource.CreateOptions{})
require.NoError(t, err)
assert.NotEqual(t, templates.Items[1].Name, created.Name)
})
t.Run("sort by kind and then name", func(t *testing.T) {
templates, err := client.List(context.Background(), metav1.ListOptions{})
templates, err := client.List(context.Background(), apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, templates.Items, 4)


@@ -7,6 +7,7 @@ import (
"testing"
"github.com/grafana/alerting/templates"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
@@ -45,7 +46,8 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
client := common.NewTemplateGroupClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
newTemplate := &v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
@@ -61,23 +63,23 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
t.Run("create should fail if object name is specified", func(t *testing.T) {
template := newTemplate.Copy().(*v0alpha1.TemplateGroup)
template.Name = "new-templateGroup"
_, err := client.Create(ctx, template, v1.CreateOptions{})
_, err := client.Create(ctx, template, resource.CreateOptions{})
assert.Error(t, err)
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})
var resourceID string
var resourceID resource.Identifier
t.Run("create should succeed and provide resource name", func(t *testing.T) {
actual, err := client.Create(ctx, newTemplate, v1.CreateOptions{})
actual, err := client.Create(ctx, newTemplate, resource.CreateOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
resourceID = actual.Name
resourceID = actual.GetStaticMetadata().Identifier()
})
var existingTemplateGroup *v0alpha1.TemplateGroup
t.Run("resource should be available by the identifier", func(t *testing.T) {
actual, err := client.Get(ctx, resourceID, v1.GetOptions{})
actual, err := client.Get(ctx, resourceID)
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.Equal(t, newTemplate.Spec, actual.Spec)
@@ -90,12 +92,12 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
}
updated := existingTemplateGroup.Copy().(*v0alpha1.TemplateGroup)
updated.Spec.Title = "another-templateGroup"
actual, err := client.Update(ctx, updated, v1.UpdateOptions{})
actual, err := client.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.Equal(t, updated.Spec, actual.Spec)
require.NotEqualf(t, updated.Name, actual.Name, "Update should change the resource name but it didn't")
resource, err := client.Get(ctx, actual.Name, v1.GetOptions{})
resource, err := client.Get(ctx, actual.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, actual, resource)
@@ -104,7 +106,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
var defaultTemplateGroup *v0alpha1.TemplateGroup
t.Run("default template should be available by the identifier", func(t *testing.T) {
actual, err := client.Get(ctx, templates.DefaultTemplateName, v1.GetOptions{})
actual, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: templates.DefaultTemplateName})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
@@ -122,7 +124,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
t.Run("create with reserved default title should work", func(t *testing.T) {
template := newTemplate.Copy().(*v0alpha1.TemplateGroup)
template.Spec.Title = defaultTemplateGroup.Spec.Title
actual, err := client.Create(ctx, template, v1.CreateOptions{})
actual, err := client.Create(ctx, template, resource.CreateOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
@@ -130,7 +132,7 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
})
t.Run("default template should not be available by calculated UID", func(t *testing.T) {
actual, err := client.Get(ctx, newTemplateWithOverlappingName.Name, v1.GetOptions{})
actual, err := client.Get(ctx, newTemplateWithOverlappingName.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
@@ -215,11 +217,13 @@ func TestIntegrationAccessControl(t *testing.T) {
},
}
adminClient := common.NewTemplateGroupClient(t, org1.Admin)
adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(org1.Admin.GetClientRegistry())
require.NoError(t, err)
for _, tc := range testCases {
t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
client := common.NewTemplateGroupClient(t, tc.user)
client, err := v0alpha1.NewTemplateGroupClientFromGenerator(tc.user.GetClientRegistry())
require.NoError(t, err)
var expected = &v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
@@ -237,12 +241,12 @@ func TestIntegrationAccessControl(t *testing.T) {
if tc.canCreate {
t.Run("should be able to create template group", func(t *testing.T) {
actual, err := client.Create(ctx, expected, v1.CreateOptions{})
actual, err := client.Create(ctx, expected, resource.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.Equal(t, expected.Spec, actual.Spec)
t.Run("should fail if already exists", func(t *testing.T) {
_, err := client.Create(ctx, actual, v1.CreateOptions{})
_, err := client.Create(ctx, actual, resource.CreateOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})
@@ -250,45 +254,45 @@ func TestIntegrationAccessControl(t *testing.T) {
})
} else {
t.Run("should be forbidden to create", func(t *testing.T) {
_, err := client.Create(ctx, expected, v1.CreateOptions{})
_, err := client.Create(ctx, expected, resource.CreateOptions{})
require.Truef(t, errors.IsForbidden(err), "Payload %s", string(d))
})
// create resource to proceed with other tests
expected, err = adminClient.Create(ctx, expected, v1.CreateOptions{})
expected, err = adminClient.Create(ctx, expected, resource.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, expected)
}
if tc.canRead {
t.Run("should be able to list template groups", func(t *testing.T) {
list, err := client.List(ctx, v1.ListOptions{})
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 2) // Includes default template.
})
t.Run("should be able to read template group by resource identifier", func(t *testing.T) {
got, err := client.Get(ctx, expected.Name, v1.GetOptions{})
got, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, expected, got)
require.Equal(t, expected.Spec, got.Spec)
t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to list template groups", func(t *testing.T) {
_, err := client.List(ctx, v1.ListOptions{})
_, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should be forbidden to read template group by name", func(t *testing.T) {
_, err := client.Get(ctx, expected.Name, v1.GetOptions{})
_, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
@@ -302,7 +306,7 @@ func TestIntegrationAccessControl(t *testing.T) {
if tc.canUpdate {
t.Run("should be able to update template group", func(t *testing.T) {
updated, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
updated, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
expected = updated
@@ -310,52 +314,54 @@ func TestIntegrationAccessControl(t *testing.T) {
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.TemplateGroup)
up.Name = "notFound"
_, err := client.Update(ctx, up, v1.UpdateOptions{})
_, err := client.Update(ctx, up, resource.UpdateOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to update template group", func(t *testing.T) {
_, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
_, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.TemplateGroup)
up.Name = "notFound"
_, err := client.Update(ctx, up, v1.UpdateOptions{})
_, err := client.Update(ctx, up, resource.UpdateOptions{
ResourceVersion: up.ResourceVersion,
})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
}
deleteOptions := v1.DeleteOptions{Preconditions: &v1.Preconditions{ResourceVersion: util.Pointer(expected.ResourceVersion)}}
oldClient := common.NewTemplateGroupClient(t, tc.user) // TODO replace with normal client once delete is fixed
if tc.canDelete {
t.Run("should be able to delete template group", func(t *testing.T) {
err := client.Delete(ctx, expected.Name, deleteOptions)
err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
require.NoError(t, err)
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to delete template group", func(t *testing.T) {
err := client.Delete(ctx, expected.Name, deleteOptions)
err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
}
if tc.canRead {
t.Run("should get list with just default template if no template groups", func(t *testing.T) {
list, err := client.List(ctx, v1.ListOptions{})
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 1)
require.Equal(t, templates.DefaultTemplateName, list.Items[0].Name)
@@ -374,7 +380,8 @@ func TestIntegrationProvisioning(t *testing.T) {
org := helper.Org1
admin := org.Admin
adminClient := common.NewTemplateGroupClient(t, admin)
adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(admin.GetClientRegistry())
require.NoError(t, err)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -390,7 +397,7 @@ func TestIntegrationProvisioning(t *testing.T) {
Content: `{{ define "test" }} test {{ end }}`,
Kind: v0alpha1.TemplateGroupTemplateKindGrafana,
},
}, v1.CreateOptions{})
}, resource.CreateOptions{})
require.NoError(t, err)
require.Equal(t, "none", created.GetProvenanceStatus())
@@ -399,7 +406,7 @@ func TestIntegrationProvisioning(t *testing.T) {
Name: created.Spec.Title,
}, admin.Identity.GetOrgID(), "API"))
got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, "API", got.GetProvenanceStatus())
})
@@ -407,12 +414,12 @@ func TestIntegrationProvisioning(t *testing.T) {
updated := created.Copy().(*v0alpha1.TemplateGroup)
updated.Spec.Content = `{{ define "another-test" }} test {{ end }}`
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should not let delete if provisioned", func(t *testing.T) {
err := adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
err := adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -423,8 +430,9 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
oldClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
template := v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -436,21 +444,22 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
},
}
created, err := adminClient.Create(ctx, &template, v1.CreateOptions{})
created, err := adminClient.Create(ctx, &template, resource.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, created)
require.NotEmpty(t, created.ResourceVersion)
t.Run("should forbid if version does not match", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.TemplateGroup)
updated.ResourceVersion = "test"
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
ResourceVersion: "test",
})
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err)
})
t.Run("should update if version matches", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.TemplateGroup)
updated.Spec.Content = `{{ define "test-another" }} test {{ end }}`
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, updated.ResourceVersion, actualUpdated.ResourceVersion)
@@ -460,16 +469,16 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
updated.ResourceVersion = ""
updated.Spec.Content = `{{ define "test-another-2" }} test {{ end }}`
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, created.ResourceVersion, actualUpdated.ResourceVersion)
})
t.Run("should fail to delete if version does not match", func(t *testing.T) {
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer("something"),
},
@@ -477,10 +486,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err)
})
t.Run("should succeed if version matches", func(t *testing.T) {
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -488,10 +497,10 @@ func TestIntegrationOptimisticConcurrency(t *testing.T) {
require.NoError(t, err)
})
t.Run("should succeed if version is empty", func(t *testing.T) {
actual, err := adminClient.Create(ctx, &template, v1.CreateOptions{})
actual, err := adminClient.Create(ctx, &template, resource.CreateOptions{})
require.NoError(t, err)
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -506,7 +515,8 @@ func TestIntegrationPatch(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
template := v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
@@ -519,8 +529,10 @@ func TestIntegrationPatch(t *testing.T) {
},
}
current, err := adminClient.Create(ctx, &template, v1.CreateOptions{})
current, err := adminClient.Create(ctx, &template, resource.CreateOptions{})
require.NoError(t, err)
oldClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
require.NotNil(t, current)
require.NotEmpty(t, current.ResourceVersion)
@@ -531,7 +543,7 @@ func TestIntegrationPatch(t *testing.T) {
}
}`
result, err := adminClient.Patch(ctx, current.Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
result, err := oldClient.Patch(ctx, current.GetStaticMetadata().Identifier().Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
require.NoError(t, err)
require.Equal(t, `{{ define "test-another" }} test {{ end }}`, result.Spec.Content)
current = result
@@ -540,18 +552,15 @@ func TestIntegrationPatch(t *testing.T) {
t.Run("should patch with json patch", func(t *testing.T) {
expected := `{{ define "test-json-patch" }} test {{ end }}`
patch := []map[string]interface{}{
patch := []resource.PatchOperation{
{
"op": "replace",
"path": "/spec/content",
"value": expected,
Operation: "replace",
Path: "/spec/content",
Value: expected,
},
}
patchData, err := json.Marshal(patch)
require.NoError(t, err)
result, err := adminClient.Patch(ctx, current.Name, types.JSONPatchType, patchData, v1.PatchOptions{})
result, err := adminClient.Patch(ctx, current.GetStaticMetadata().Identifier(), resource.PatchRequest{Operations: patch}, resource.PatchOptions{})
require.NoError(t, err)
expectedSpec := current.Spec
expectedSpec.Content = expected
@@ -565,7 +574,8 @@ func TestIntegrationListSelector(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient := common.NewTemplateGroupClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
template1 := &v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
@@ -577,7 +587,7 @@ func TestIntegrationListSelector(t *testing.T) {
Kind: v0alpha1.TemplateGroupTemplateKindGrafana,
},
}
template1, err := adminClient.Create(ctx, template1, v1.CreateOptions{})
template1, err = adminClient.Create(ctx, template1, resource.CreateOptions{})
require.NoError(t, err)
template2 := &v0alpha1.TemplateGroup{
@@ -590,7 +600,7 @@ func TestIntegrationListSelector(t *testing.T) {
Kind: v0alpha1.TemplateGroupTemplateKindGrafana,
},
}
template2, err = adminClient.Create(ctx, template2, v1.CreateOptions{})
template2, err = adminClient.Create(ctx, template2, resource.CreateOptions{})
require.NoError(t, err)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -599,18 +609,18 @@ func TestIntegrationListSelector(t *testing.T) {
require.NoError(t, db.SetProvenance(ctx, &definitions.NotificationTemplate{
Name: template2.Spec.Title,
}, helper.Org1.Admin.Identity.GetOrgID(), "API"))
template2, err = adminClient.Get(ctx, template2.Name, v1.GetOptions{})
template2, err = adminClient.Get(ctx, template2.GetStaticMetadata().Identifier())
require.NoError(t, err)
tmpls, err := adminClient.List(ctx, v1.ListOptions{})
tmpls, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, tmpls.Items, 3) // Includes default template.
t.Run("should filter by template name", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "spec.title=" + template1.Spec.Title,
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{"spec.title=" + template1.Spec.Title},
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -618,8 +628,8 @@ func TestIntegrationListSelector(t *testing.T) {
})
t.Run("should filter by template metadata name", func(t *testing.T) {
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "metadata.name=" + template2.Name,
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{"metadata.name=" + template2.Name},
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -628,8 +638,8 @@ func TestIntegrationListSelector(t *testing.T) {
t.Run("should filter by multiple filters", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.name=%s,spec.title=%s", template2.Name, template2.Spec.Title),
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{fmt.Sprintf("metadata.name=%s,spec.title=%s", template2.Name, template2.Spec.Title)},
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -637,8 +647,8 @@ func TestIntegrationListSelector(t *testing.T) {
})
t.Run("should be empty when filter does not match", func(t *testing.T) {
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.name=%s", "unknown"),
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{fmt.Sprintf("metadata.name=%s", "unknown")},
})
require.NoError(t, err)
require.Empty(t, list.Items)
@@ -646,17 +656,17 @@ func TestIntegrationListSelector(t *testing.T) {
t.Run("should filter by default template name", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "spec.title=" + v0alpha1.DefaultTemplateTitle,
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{"spec.title=" + v0alpha1.DefaultTemplateTitle},
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
require.Equal(t, templates.DefaultTemplateName, list.Items[0].Name)
// Now just non-default templates
list, err = adminClient.List(ctx, v1.ListOptions{
FieldSelector: "spec.title!=" + v0alpha1.DefaultTemplateTitle,
})
list, err = adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{"spec.title!=" + v0alpha1.DefaultTemplateTitle}},
)
require.NoError(t, err)
require.Len(t, list.Items, 2)
require.NotEqualf(t, templates.DefaultTemplateName, list.Items[0].Name, "Expected non-default template but got %s", list.Items[0].Name)
@@ -669,7 +679,8 @@ func TestIntegrationKinds(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
client := common.NewTemplateGroupClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewTemplateGroupClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
newTemplate := &v0alpha1.TemplateGroup{
ObjectMeta: v1.ObjectMeta{
@@ -683,17 +694,17 @@ func TestIntegrationKinds(t *testing.T) {
}
t.Run("should not let create Mimir template", func(t *testing.T) {
_, err := client.Create(ctx, newTemplate, v1.CreateOptions{})
_, err := client.Create(ctx, newTemplate, resource.CreateOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})
t.Run("should not let change kind", func(t *testing.T) {
newTemplate.Spec.Kind = v0alpha1.TemplateGroupTemplateKindGrafana
created, err := client.Create(ctx, newTemplate, v1.CreateOptions{})
created, err := client.Create(ctx, newTemplate, resource.CreateOptions{})
require.NoError(t, err)
created.Spec.Kind = v0alpha1.TemplateGroupTemplateKindMimir
_, err = client.Update(ctx, created, v1.UpdateOptions{})
_, err = client.Update(ctx, created, resource.UpdateOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})
}
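The hunks above repeatedly migrate list calls from the k8s `v1.ListOptions.FieldSelector` (one comma-separated string) to the app-sdk's `resource.ListOptions.FieldSelectors` (a string slice, one selector per element). A minimal sketch of that conversion under the assumption that a plain comma split is sufficient; `splitFieldSelector` is a hypothetical helper, not part of either API, and real selector values may need escape handling:

```go
package main

import (
	"fmt"
	"strings"
)

// splitFieldSelector converts a k8s-style comma-separated field selector
// string into the slice form used by app-sdk-style list options.
// Hypothetical helper for illustration; not from the app-sdk.
func splitFieldSelector(selector string) []string {
	if selector == "" {
		return nil
	}
	return strings.Split(selector, ",")
}

func main() {
	old := "metadata.name=template-1,spec.title=template-title-1"
	fmt.Println(splitFieldSelector(old))
}
```

Note that the migrated tests pass each selector as its own slice element rather than joining them, which sidesteps the comma-escaping question entirely.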


@@ -10,6 +10,7 @@ import (
"slices"
"testing"
"github.com/grafana/grafana-app-sdk/resource"
"github.com/prometheus/alertmanager/config"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -57,7 +58,8 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
client := common.NewTimeIntervalClient(t, helper.Org1.Admin)
client, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
newInterval := &v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
@@ -72,22 +74,22 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
t.Run("create should fail if object name is specified", func(t *testing.T) {
interval := newInterval.Copy().(*v0alpha1.TimeInterval)
interval.Name = "time-newInterval"
_, err := client.Create(ctx, interval, v1.CreateOptions{})
_, err := client.Create(ctx, interval, resource.CreateOptions{})
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest but got %s", err)
})
var resourceID string
var resourceID resource.Identifier
t.Run("create should succeed and provide resource name", func(t *testing.T) {
actual, err := client.Create(ctx, newInterval, v1.CreateOptions{})
actual, err := client.Create(ctx, newInterval, resource.CreateOptions{})
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.NotEmptyf(t, actual.UID, "Resource UID should not be empty")
resourceID = actual.Name
resourceID = actual.GetStaticMetadata().Identifier()
})
var existingInterval *v0alpha1.TimeInterval
t.Run("resource should be available by the identifier", func(t *testing.T) {
actual, err := client.Get(ctx, resourceID, v1.GetOptions{})
actual, err := client.Get(ctx, resourceID)
require.NoError(t, err)
require.NotEmptyf(t, actual.Name, "Resource name should not be empty")
require.Equal(t, newInterval.Spec, actual.Spec)
@@ -100,13 +102,13 @@ func TestIntegrationResourceIdentifier(t *testing.T) {
}
updated := existingInterval.Copy().(*v0alpha1.TimeInterval)
updated.Spec.Name = "another-newInterval"
actual, err := client.Update(ctx, updated, v1.UpdateOptions{})
actual, err := client.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.Equal(t, updated.Spec, actual.Spec)
require.NotEqualf(t, updated.Name, actual.Name, "Update should change the resource name but it didn't")
require.NotEqualf(t, updated.ResourceVersion, actual.ResourceVersion, "Update should change the resource version but it didn't")
resource, err := client.Get(ctx, actual.Name, v1.GetOptions{})
resource, err := client.Get(ctx, actual.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, actual, resource)
})
@@ -189,11 +191,13 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
},
}
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
for _, tc := range testCases {
t.Run(fmt.Sprintf("user '%s'", tc.user.Identity.GetLogin()), func(t *testing.T) {
client := common.NewTimeIntervalClient(t, tc.user)
client, err := v0alpha1.NewTimeIntervalClientFromGenerator(tc.user.GetClientRegistry())
require.NoError(t, err)
var expected = &v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
Namespace: "default",
@@ -209,12 +213,12 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
if tc.canCreate {
t.Run("should be able to create time interval", func(t *testing.T) {
actual, err := client.Create(ctx, expected, v1.CreateOptions{})
actual, err := client.Create(ctx, expected, resource.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.Equal(t, expected.Spec, actual.Spec)
t.Run("should fail if already exists", func(t *testing.T) {
_, err := client.Create(ctx, actual, v1.CreateOptions{})
_, err := client.Create(ctx, actual, resource.CreateOptions{})
require.Truef(t, errors.IsBadRequest(err), "expected bad request but got %s", err)
})
@@ -222,45 +226,45 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
})
} else {
t.Run("should be forbidden to create", func(t *testing.T) {
_, err := client.Create(ctx, expected, v1.CreateOptions{})
_, err := client.Create(ctx, expected, resource.CreateOptions{})
require.Truef(t, errors.IsForbidden(err), "Payload %s", string(d))
})
// create resource to proceed with other tests
expected, err = adminClient.Create(ctx, expected, v1.CreateOptions{})
expected, err = adminClient.Create(ctx, expected, resource.CreateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
require.NotNil(t, expected)
}
if tc.canRead {
t.Run("should be able to list time intervals", func(t *testing.T) {
list, err := client.List(ctx, v1.ListOptions{})
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 1)
})
t.Run("should be able to read time interval by resource identifier", func(t *testing.T) {
got, err := client.Get(ctx, expected.Name, v1.GetOptions{})
got, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, expected, got)
require.Equal(t, expected.Spec, got.Spec)
t.Run("should get NotFound if resource does not exist", func(t *testing.T) {
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to list time intervals", func(t *testing.T) {
_, err := client.List(ctx, v1.ListOptions{})
_, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should be forbidden to read time interval by name", func(t *testing.T) {
_, err := client.Get(ctx, expected.Name, v1.GetOptions{})
_, err := client.Get(ctx, expected.GetStaticMetadata().Identifier())
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if name does not exist", func(t *testing.T) {
_, err := client.Get(ctx, "Notfound", v1.GetOptions{})
_, err := client.Get(ctx, resource.Identifier{Namespace: apis.DefaultNamespace, Name: "Notfound"})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
@@ -274,7 +278,7 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
if tc.canUpdate {
t.Run("should be able to update time interval", func(t *testing.T) {
updated, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
updated, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
require.NoErrorf(t, err, "Payload %s", string(d))
expected = updated
@@ -282,52 +286,54 @@ func TestIntegrationTimeIntervalAccessControl(t *testing.T) {
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.TimeInterval)
up.Name = "notFound"
_, err := client.Update(ctx, up, v1.UpdateOptions{})
_, err := client.Update(ctx, up, resource.UpdateOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to update time interval", func(t *testing.T) {
_, err := client.Update(ctx, updatedExpected, v1.UpdateOptions{})
_, err := client.Update(ctx, updatedExpected, resource.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should get forbidden even if resource does not exist", func(t *testing.T) {
up := updatedExpected.Copy().(*v0alpha1.TimeInterval)
up.Name = "notFound"
_, err := client.Update(ctx, up, v1.UpdateOptions{})
_, err := client.Update(ctx, up, resource.UpdateOptions{
ResourceVersion: up.ResourceVersion,
})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
}
deleteOptions := v1.DeleteOptions{Preconditions: &v1.Preconditions{ResourceVersion: util.Pointer(expected.ResourceVersion)}}
oldClient := common.NewTimeIntervalClient(t, tc.user)
if tc.canDelete {
t.Run("should be able to delete time interval", func(t *testing.T) {
err := client.Delete(ctx, expected.Name, deleteOptions)
err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
require.NoError(t, err)
t.Run("should get NotFound if name does not exist", func(t *testing.T) {
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Truef(t, errors.IsNotFound(err), "Should get NotFound error but got: %s", err)
})
})
} else {
t.Run("should be forbidden to delete time interval", func(t *testing.T) {
err := client.Delete(ctx, expected.Name, deleteOptions)
err := oldClient.Delete(ctx, expected.GetStaticMetadata().Identifier().Name, deleteOptions)
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
t.Run("should be forbidden even if resource does not exist", func(t *testing.T) {
err := client.Delete(ctx, "notfound", v1.DeleteOptions{})
err := oldClient.Delete(ctx, "notfound", v1.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
})
require.NoError(t, adminClient.Delete(ctx, expected.Name, v1.DeleteOptions{}))
require.NoError(t, adminClient.Delete(ctx, expected.GetStaticMetadata().Identifier(), resource.DeleteOptions{}))
}
if tc.canRead {
t.Run("should get empty list if no mute timings", func(t *testing.T) {
list, err := client.List(ctx, v1.ListOptions{})
list, err := client.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, list.Items, 0)
})
@@ -345,7 +351,8 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
org := helper.Org1
admin := org.Admin
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -360,7 +367,7 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
Name: "time-interval-1",
TimeIntervals: fakes.IntervalGenerator{}.GenerateMany(2),
},
}, v1.CreateOptions{})
}, resource.CreateOptions{})
require.NoError(t, err)
require.Equal(t, "none", created.GetProvenanceStatus())
@@ -371,7 +378,7 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
},
}, admin.Identity.GetOrgID(), "API"))
got, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
got, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
require.Equal(t, "API", got.GetProvenanceStatus())
})
@@ -379,12 +386,12 @@ func TestIntegrationTimeIntervalProvisioning(t *testing.T) {
updated := created.Copy().(*v0alpha1.TimeInterval)
updated.Spec.TimeIntervals = fakes.IntervalGenerator{}.GenerateMany(2)
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
t.Run("should not let delete if provisioned", func(t *testing.T) {
err := adminClient.Delete(ctx, created.Name, v1.DeleteOptions{})
err := adminClient.Delete(ctx, created.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsForbidden(err), "should get Forbidden error but got %s", err)
})
}
@@ -395,7 +402,9 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
oldClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
interval := v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
@@ -407,21 +416,22 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
},
}
created, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
created, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, created)
require.NotEmpty(t, created.ResourceVersion)
t.Run("should forbid if version does not match", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.TimeInterval)
updated.ResourceVersion = "test"
_, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
_, err := adminClient.Update(ctx, updated, resource.UpdateOptions{
ResourceVersion: "test",
})
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err)
})
t.Run("should update if version matches", func(t *testing.T) {
updated := created.Copy().(*v0alpha1.TimeInterval)
updated.Spec.TimeIntervals = fakes.IntervalGenerator{}.GenerateMany(2)
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, updated.ResourceVersion, actualUpdated.ResourceVersion)
@@ -431,16 +441,16 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
updated.ResourceVersion = ""
updated.Spec.TimeIntervals = fakes.IntervalGenerator{}.GenerateMany(2)
actualUpdated, err := adminClient.Update(ctx, updated, v1.UpdateOptions{})
actualUpdated, err := adminClient.Update(ctx, updated, resource.UpdateOptions{})
require.NoError(t, err)
require.EqualValues(t, updated.Spec, actualUpdated.Spec)
require.NotEqual(t, created.ResourceVersion, actualUpdated.ResourceVersion)
})
t.Run("should fail to delete if version does not match", func(t *testing.T) {
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer("something"),
},
@@ -448,10 +458,10 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
require.Truef(t, errors.IsConflict(err), "should get Conflict error but got %s", err)
})
t.Run("should succeed if version matches", func(t *testing.T) {
actual, err := adminClient.Get(ctx, created.Name, v1.GetOptions{})
actual, err := adminClient.Get(ctx, created.GetStaticMetadata().Identifier())
require.NoError(t, err)
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -459,10 +469,10 @@ func TestIntegrationTimeIntervalOptimisticConcurrency(t *testing.T) {
require.NoError(t, err)
})
t.Run("should succeed if version is empty", func(t *testing.T) {
actual, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
actual, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
require.NoError(t, err)
err = adminClient.Delete(ctx, actual.Name, v1.DeleteOptions{
err = oldClient.Delete(ctx, actual.GetStaticMetadata().Identifier().Name, v1.DeleteOptions{
Preconditions: &v1.Preconditions{
ResourceVersion: util.Pointer(actual.ResourceVersion),
},
@@ -477,7 +487,9 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
oldClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
interval := v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
@@ -489,7 +501,7 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
},
}
current, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
current, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
require.NoError(t, err)
require.NotNil(t, current)
require.NotEmpty(t, current.ResourceVersion)
@@ -501,7 +513,7 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
}
}`
result, err := adminClient.Patch(ctx, current.Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
result, err := oldClient.Patch(ctx, current.GetStaticMetadata().Identifier().Name, types.MergePatchType, []byte(patch), v1.PatchOptions{})
require.NoError(t, err)
require.Empty(t, result.Spec.TimeIntervals)
current = result
@@ -510,18 +522,15 @@ func TestIntegrationTimeIntervalPatch(t *testing.T) {
t.Run("should patch with json patch", func(t *testing.T) {
expected := fakes.IntervalGenerator{}.Generate()
patch := []map[string]interface{}{
patch := []resource.PatchOperation{
{
"op": "add",
"path": "/spec/time_intervals/-",
"value": expected,
Operation: "add",
Path: "/spec/time_intervals/-",
Value: expected,
},
}
patchData, err := json.Marshal(patch)
require.NoError(t, err)
result, err := adminClient.Patch(ctx, current.Name, types.JSONPatchType, patchData, v1.PatchOptions{})
result, err := adminClient.Patch(ctx, current.GetStaticMetadata().Identifier(), resource.PatchRequest{Operations: patch}, resource.PatchOptions{})
require.NoError(t, err)
expectedSpec := v0alpha1.TimeIntervalSpec{
Name: current.Spec.Name,
@@ -540,7 +549,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
interval1 := &v0alpha1.TimeInterval{
ObjectMeta: v1.ObjectMeta{
@@ -551,7 +561,7 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
TimeIntervals: fakes.IntervalGenerator{}.GenerateMany(2),
},
}
interval1, err := adminClient.Create(ctx, interval1, v1.CreateOptions{})
interval1, err = adminClient.Create(ctx, interval1, resource.CreateOptions{})
require.NoError(t, err)
interval2 := &v0alpha1.TimeInterval{
@@ -563,7 +573,7 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
TimeIntervals: fakes.IntervalGenerator{}.GenerateMany(2),
},
}
interval2, err = adminClient.Create(ctx, interval2, v1.CreateOptions{})
interval2, err = adminClient.Create(ctx, interval2, resource.CreateOptions{})
require.NoError(t, err)
env := helper.GetEnv()
ac := acimpl.ProvideAccessControl(env.FeatureToggles)
@@ -574,18 +584,18 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
Name: interval2.Spec.Name,
},
}, helper.Org1.Admin.Identity.GetOrgID(), "API"))
interval2, err = adminClient.Get(ctx, interval2.Name, v1.GetOptions{})
interval2, err = adminClient.Get(ctx, interval2.GetStaticMetadata().Identifier())
require.NoError(t, err)
intervals, err := adminClient.List(ctx, v1.ListOptions{})
intervals, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, intervals.Items, 2)
t.Run("should filter by interval name", func(t *testing.T) {
t.Skip("disabled until app installer supports it") // TODO revisit when custom field selectors are supported
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "spec.name=" + interval1.Spec.Name,
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{"spec.name=" + interval1.Spec.Name},
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -593,8 +603,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
})
t.Run("should filter by interval metadata name", func(t *testing.T) {
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: "metadata.name=" + interval2.Name,
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{"metadata.name=" + interval2.Name},
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -603,8 +613,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
t.Run("should filter by multiple filters", func(t *testing.T) {
t.Skip("disabled until app installer supports it")
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.name=%s,spec.name=%s", interval2.Name, interval2.Spec.Name),
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{fmt.Sprintf("metadata.name=%s", interval2.Name), fmt.Sprintf("spec.name=%s", interval2.Spec.Name)},
})
require.NoError(t, err)
require.Len(t, list.Items, 1)
@@ -612,8 +622,8 @@ func TestIntegrationTimeIntervalListSelector(t *testing.T) {
})
t.Run("should be empty when filter does not match", func(t *testing.T) {
list, err := adminClient.List(ctx, v1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.name=%s", "unknown"),
list, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{
FieldSelectors: []string{fmt.Sprintf("metadata.name=%s", "unknown")},
})
require.NoError(t, err)
require.Empty(t, list.Items)
@@ -647,18 +657,20 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
})
}
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
v1intervals, err := timeinterval.ConvertToK8sResources(orgID, mtis, func(int64) string { return "default" }, nil)
require.NoError(t, err)
for _, interval := range v1intervals.Items {
_, err := adminClient.Create(ctx, &interval, v1.CreateOptions{})
_, err := adminClient.Create(ctx, &interval, resource.CreateOptions{})
require.NoError(t, err)
}
routeClient := common.NewRoutingTreeClient(t, helper.Org1.Admin)
routeClient, err := v0alpha1.NewRoutingTreeClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
v1route, err := routingtree.ConvertToK8sResource(helper.Org1.Admin.Identity.GetOrgID(), *amConfig.AlertmanagerConfig.Route, "", func(int64) string { return "default" })
require.NoError(t, err)
_, err = routeClient.Update(ctx, v1route, v1.UpdateOptions{})
_, err = routeClient.Update(ctx, v1route, resource.UpdateOptions{})
require.NoError(t, err)
postGroupRaw, err := testData.ReadFile(path.Join("test-data", "rulegroup-1.json"))
@@ -675,7 +687,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
currentRuleGroup, status := legacyCli.GetRulesGroup(t, folderUID, ruleGroup.Name)
require.Equal(t, http.StatusAccepted, status)
intervals, err := adminClient.List(ctx, v1.ListOptions{})
intervals, err := adminClient.List(ctx, apis.DefaultNamespace, resource.ListOptions{})
require.NoError(t, err)
require.Len(t, intervals.Items, 3)
intervalIdx := slices.IndexFunc(intervals.Items, func(interval v0alpha1.TimeInterval) bool {
@@ -700,7 +712,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
renamed := interval.Copy().(*v0alpha1.TimeInterval)
renamed.Spec.Name += "-new"
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
require.NoError(t, err)
updatedRuleGroup, status := legacyCli.GetRulesGroup(t, folderUID, ruleGroup.Name)
@@ -732,20 +744,20 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
t.Cleanup(func() {
require.NoError(t, db.DeleteProvenance(ctx, &currentRoute, orgID))
})
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
t.Run("provisioned rules", func(t *testing.T) {
ruleUid := currentRuleGroup.Rules[0].GrafanaManagedAlert.UID
resource := &ngmodels.AlertRule{UID: ruleUid}
require.NoError(t, db.SetProvenance(ctx, resource, orgID, "API"))
rule := &ngmodels.AlertRule{UID: ruleUid}
require.NoError(t, db.SetProvenance(ctx, rule, orgID, "API"))
t.Cleanup(func() {
require.NoError(t, db.DeleteProvenance(ctx, resource, orgID))
require.NoError(t, db.DeleteProvenance(ctx, rule, orgID))
})
actual, err := adminClient.Update(ctx, renamed, v1.UpdateOptions{})
actual, err := adminClient.Update(ctx, renamed, resource.UpdateOptions{})
require.Errorf(t, err, "Expected error but got successful result: %v", actual)
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -754,7 +766,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
t.Run("Delete", func(t *testing.T) {
t.Run("should fail to delete if time interval is used in rule and routes", func(t *testing.T) {
err := adminClient.Delete(ctx, interval.Name, v1.DeleteOptions{})
err := adminClient.Delete(ctx, interval.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -763,7 +775,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
route.Routes[0].MuteTimeIntervals = nil
legacyCli.UpdateRoute(t, route, true)
err = adminClient.Delete(ctx, interval.Name, v1.DeleteOptions{})
err = adminClient.Delete(ctx, interval.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
@@ -773,7 +785,7 @@ func TestIntegrationTimeIntervalReferentialIntegrity(t *testing.T) {
})
intervalToDelete := intervals.Items[idx]
err = adminClient.Delete(ctx, intervalToDelete.Name, v1.DeleteOptions{})
err = adminClient.Delete(ctx, intervalToDelete.GetStaticMetadata().Identifier(), resource.DeleteOptions{})
require.Truef(t, errors.IsConflict(err), "Expected Conflict, got: %s", err)
})
})
@@ -785,7 +797,8 @@ func TestIntegrationTimeIntervalValidation(t *testing.T) {
ctx := context.Background()
helper := getTestHelper(t)
adminClient := common.NewTimeIntervalClient(t, helper.Org1.Admin)
adminClient, err := v0alpha1.NewTimeIntervalClientFromGenerator(helper.Org1.Admin.GetClientRegistry())
require.NoError(t, err)
testCases := []struct {
name string
@@ -819,7 +832,7 @@ func TestIntegrationTimeIntervalValidation(t *testing.T) {
},
Spec: tc.interval,
}
_, err := adminClient.Create(ctx, i, v1.CreateOptions{})
_, err := adminClient.Create(ctx, i, resource.CreateOptions{})
require.Error(t, err)
require.Truef(t, errors.IsBadRequest(err), "Expected BadRequest, got: %s", err)
})
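The patch-test hunk above replaces raw `[]map[string]interface{}` JSON-patch bodies with typed `resource.PatchOperation` values; both marshal to the same RFC 6902 document, so the server-side behavior is unchanged. A self-contained sketch, with a local `PatchOperation` struct standing in for the app-sdk type (the field tags here are an assumption mirroring the `op`/`path`/`value` keys used in the old map form):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PatchOperation mirrors the shape of an RFC 6902 JSON Patch operation,
// analogous to the app-sdk's resource.PatchOperation (defined locally here
// for illustration).
type PatchOperation struct {
	Operation string      `json:"op"`
	Path      string      `json:"path"`
	Value     interface{} `json:"value,omitempty"`
}

// marshalPatch renders a list of typed operations as a JSON Patch document.
func marshalPatch(ops []PatchOperation) ([]byte, error) {
	return json.Marshal(ops)
}

func main() {
	ops := []PatchOperation{
		{Operation: "add", Path: "/spec/time_intervals/-", Value: "interval"},
	}
	b, err := marshalPatch(ops)
	if err != nil {
		panic(err)
	}
	// Prints: [{"op":"add","path":"/spec/time_intervals/-","value":"interval"}]
	fmt.Println(string(b))
}
```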


@@ -14,7 +14,7 @@ import (
"testing"
"time"
githubConnection "github.com/grafana/grafana/apps/provisioning/pkg/connection/github"
appsdk_k8s "github.com/grafana/grafana-app-sdk/k8s"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/errors"
@@ -28,6 +28,8 @@ import (
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
githubConnection "github.com/grafana/grafana/apps/provisioning/pkg/connection/github"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/configprovider"
@@ -57,6 +59,8 @@ import (
const (
Org1 = "Org1"
Org2 = "OrgB"
DefaultNamespace = "default"
)
var (
@@ -445,6 +449,11 @@ func (c *User) RESTClient(t *testing.T, gv *schema.GroupVersion) *rest.RESTClien
return client
}
func (c *User) GetClientRegistry() *appsdk_k8s.ClientRegistry {
restConfig := c.NewRestConfig()
return appsdk_k8s.NewClientRegistry(*restConfig, appsdk_k8s.DefaultClientConfig())
}
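`GetClientRegistry` above wires the generated clients into the test helper, and the surrounding hunks migrate `Get`/`Delete` calls from bare name strings to full `resource.Identifier` values (namespace plus name). A toy sketch of why the full identifier is needed; `Identifier` and `store` are local stand-ins defined here for illustration, since names are only unique within a namespace:

```go
package main

import "fmt"

// Identifier mirrors the app-sdk resource.Identifier shape: a resource is
// addressed by namespace and name together, not by name alone.
type Identifier struct {
	Namespace string
	Name      string
}

// store is a toy keyed lookup illustrating identifier-based Get.
type store struct {
	items map[Identifier]string
}

// Get returns the stored value for the full identifier, if present.
func (s *store) Get(id Identifier) (string, bool) {
	v, ok := s.items[id]
	return v, ok
}

func main() {
	s := &store{items: map[Identifier]string{
		{Namespace: "default", Name: "interval-1"}: "spec-a",
		{Namespace: "org-2", Name: "interval-1"}:   "spec-b",
	}}
	// The same name resolves to different resources per namespace.
	v, _ := s.Get(Identifier{Namespace: "default", Name: "interval-1"})
	fmt.Println(v)
}
```

This is why the migrated tests call `expected.GetStaticMetadata().Identifier()` rather than passing `expected.Name`.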
type RequestParams struct {
User User
Method string // GET, POST, PATCH, etc


@@ -179,6 +179,10 @@ export default defineConfig<PluginOptions>({
name: 'cloud-plugins',
testDir: path.join(testDirRoot, '/cloud-plugins-suite'),
}),
withAuth({
name: 'alerting',
testDir: path.join(testDirRoot, '/alerting-suite'),
}),
withAuth({
name: 'dashboard-new-layouts',
testDir: path.join(testDirRoot, '/dashboard-new-layouts'),


@@ -4024,12 +4024,14 @@
},
"/dashboards/uid/{uid}/restore": {
"post": {
"description": "This API will be removed when /apis/dashboards.grafana.app/v1 is released.\nYou can restore a dashboard by reading it from history, then creating it again.",
"tags": [
"dashboards",
"versions"
],
"summary": "Restore a dashboard to a given dashboard version using UID.",
"operationId": "restoreDashboardVersionByUID",
"deprecated": true,
"parameters": [
{
"name": "Body",


@@ -0,0 +1,67 @@
import { render, screen } from 'test/test-utils';
import { KnownProvenance } from '../types/knownProvenance';
import { ProvisioningBadge } from './Provisioning';
describe('ProvisioningBadge', () => {
describe('when the provenance is file', () => {
it('should render the badge with the correct text', () => {
render(<ProvisioningBadge provenance={KnownProvenance.File} />);
expect(screen.getByText('Provisioned')).toBeInTheDocument();
expect(screen.queryByText('Imported')).not.toBeInTheDocument();
});
it('should render correct tooltip text', async () => {
const { user } = render(<ProvisioningBadge tooltip provenance={KnownProvenance.File} />);
const badge = screen.getByText('Provisioned');
await user.hover(badge);
expect(
screen.getByText('This resource has been provisioned via file and cannot be edited through the UI')
).toBeInTheDocument();
});
});
describe('when the provenance is ConvertedPrometheus', () => {
it('should render the badge with the correct text', () => {
render(<ProvisioningBadge provenance={KnownProvenance.ConvertedPrometheus} />);
expect(screen.getByText('Imported')).toBeInTheDocument();
expect(screen.queryByText('Provisioned')).not.toBeInTheDocument();
});
it('should render correct tooltip text', async () => {
const { user } = render(<ProvisioningBadge tooltip provenance={KnownProvenance.ConvertedPrometheus} />);
const badge = screen.getByText('Imported');
await user.hover(badge);
expect(
screen.getByText('This resource has been provisioned via Prometheus/Mimir and cannot be edited through the UI')
).toBeInTheDocument();
});
});
describe('when the provenance is API', () => {
it('should render the badge with the correct text', () => {
render(<ProvisioningBadge provenance={KnownProvenance.API} />);
expect(screen.getByText('Provisioned')).toBeInTheDocument();
expect(screen.queryByText('Imported')).not.toBeInTheDocument();
});
it('should render correct tooltip text', async () => {
const { user } = render(<ProvisioningBadge tooltip provenance={KnownProvenance.API} />);
const badge = screen.getByText('Provisioned');
await user.hover(badge);
expect(
screen.getByText('This resource has been provisioned via api and cannot be edited through the UI')
).toBeInTheDocument();
});
});
});


@@ -3,6 +3,8 @@ import { ComponentPropsWithoutRef } from 'react';
import { Trans, t } from '@grafana/i18n';
import { Alert, Badge, Tooltip } from '@grafana/ui';
import { KnownProvenance } from '../types/knownProvenance';
export enum ProvisionedResource {
ContactPoint = 'contact point',
Template = 'template',
@@ -64,11 +66,17 @@ export const ProvisioningBadge = ({
*/
provenance?: string;
}) => {
const badge = <Badge text={t('alerting.provisioning-badge.badge.text-provisioned', 'Provisioned')} color="purple" />;
const isConvertedPrometheus = provenance === KnownProvenance.ConvertedPrometheus;
const badgeText = isConvertedPrometheus
? t('alerting.provisioning-badge.badge.text-converted-prometheus', 'Imported')
: t('alerting.provisioning-badge.badge.text-provisioned', 'Provisioned');
const badgeColor = isConvertedPrometheus ? 'blue' : 'purple';
const badge = <Badge text={badgeText} color={badgeColor} />;
if (tooltip) {
const provenanceText = isConvertedPrometheus ? 'Prometheus/Mimir' : provenance;
const provenanceTooltip = (
<Trans i18nKey="alerting.provisioning.badge-tooltip-provenance" values={{ provenance }}>
<Trans i18nKey="alerting.provisioning.badge-tooltip-provenance" values={{ provenance: provenanceText }}>
This resource has been provisioned via {{ provenance }} and cannot be edited through the UI
</Trans>
);
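The badge selection in the hunk above reduces to a small pure mapping: a `converted_prometheus` provenance yields a blue "Imported" badge, anything else keeps the purple "Provisioned" badge. A minimal sketch of that mapping (the helper name and shape are illustrative, not part of the PR):

```typescript
// Hypothetical helper mirroring the badge logic in the diff above.
type BadgeProps = { text: string; color: 'blue' | 'purple' };

const CONVERTED_PROMETHEUS = 'converted_prometheus';

function badgePropsFor(provenance?: string): BadgeProps {
  const isConvertedPrometheus = provenance === CONVERTED_PROMETHEUS;
  return {
    text: isConvertedPrometheus ? 'Imported' : 'Provisioned',
    color: isConvertedPrometheus ? 'blue' : 'purple',
  };
}

// Usage:
badgePropsFor('converted_prometheus'); // → { text: 'Imported', color: 'blue' }
badgePropsFor('file'); // → { text: 'Provisioned', color: 'purple' }
```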

View File

@@ -0,0 +1,60 @@
import { render, screen } from 'test/test-utils';
import { AccessControlAction } from 'app/types/accessControl';
import { setupMswServer } from '../../mockApi';
import { grantUserPermissions } from '../../mocks';
import { AlertmanagerProvider } from '../../state/AlertmanagerContext';
import { KnownProvenance } from '../../types/knownProvenance';
import { ContactPointHeader } from './ContactPointHeader';
import { ContactPointWithMetadata } from './utils';
setupMswServer();
const renderWithProvider = (component: React.ReactElement, alertmanagerSourceName?: string) => {
return render(
<AlertmanagerProvider accessType="notification" alertmanagerSourceName={alertmanagerSourceName}>
{component}
</AlertmanagerProvider>
);
};
describe('ContactPointHeader', () => {
beforeEach(() => {
grantUserPermissions([
AccessControlAction.AlertingNotificationsRead,
AccessControlAction.AlertingNotificationsWrite,
]);
});
const mockContactPoint: ContactPointWithMetadata = {
id: 'test-contact-point',
name: 'Test Contact Point',
provenance: KnownProvenance.API,
policies: [],
grafana_managed_receiver_configs: [],
};
it('shows Provisioned badge when contact point has file provenance via K8s annotations', () => {
const contactPointWithFile = {
...mockContactPoint,
provenance: KnownProvenance.File,
};
renderWithProvider(<ContactPointHeader contactPoint={contactPointWithFile} onDelete={jest.fn()} />);
expect(screen.getByText('Provisioned')).toBeInTheDocument();
});
it('shows correct badge when contact point has converted_prometheus provenance', () => {
const contactPointWithConvertedPrometheus = {
...mockContactPoint,
provenance: KnownProvenance.ConvertedPrometheus,
};
renderWithProvider(<ContactPointHeader contactPoint={contactPointWithConvertedPrometheus} onDelete={jest.fn()} />);
expect(screen.getByText('Imported')).toBeInTheDocument();
});
});

View File

@@ -13,6 +13,7 @@ import {
canDeleteEntity,
canEditEntity,
getAnnotation,
isProvisionedResource,
shouldUseK8sApi,
} from 'app/features/alerting/unified/utils/k8s/utils';
@@ -31,13 +32,15 @@ interface ContactPointHeaderProps {
}
export const ContactPointHeader = ({ contactPoint, onDelete }: ContactPointHeaderProps) => {
const { name, id, provisioned, policies = [] } = contactPoint;
const { name, id, provenance, policies = [] } = contactPoint;
const styles = useStyles2(getStyles);
const [showPermissionsDrawer, setShowPermissionsDrawer] = useState(false);
const { selectedAlertmanager } = useAlertmanager();
const usingK8sApi = shouldUseK8sApi(selectedAlertmanager!);
const isProvisioned = isProvisionedResource(provenance);
const [exportSupported, exportAllowed] = useAlertmanagerAbility(AlertmanagerAction.ExportContactPoint);
const [editSupported, editAllowed] = useAlertmanagerAbility(AlertmanagerAction.UpdateContactPoint);
const [deleteSupported, deleteAllowed] = useAlertmanagerAbility(AlertmanagerAction.UpdateContactPoint);
@@ -70,14 +73,14 @@ export const ContactPointHeader = ({ contactPoint, onDelete }: ContactPointHeade
/** Does the current user have permissions to edit the contact point? */
const hasAbilityToEdit = usingK8sApi ? canEditEntity(contactPoint) : editAllowed;
/** Can the contact point actually be edited via the UI? */
const contactPointIsEditable = !provisioned;
const contactPointIsEditable = !isProvisioned;
/** Given the alertmanager, the user's permissions, and the state of the contact point - can it actually be edited? */
const canEdit = editSupported && hasAbilityToEdit && contactPointIsEditable;
/** Does the current user have permissions to delete the contact point? */
const hasAbilityToDelete = usingK8sApi ? canDeleteEntity(contactPoint) : deleteAllowed;
/** Can the contact point actually be deleted, regardless of permissions? i.e. ensuring it isn't provisioned and isn't referenced elsewhere */
const contactPointIsDeleteable = !provisioned && !numberOfPoliciesPreventingDeletion && !numberOfRules;
const contactPointIsDeleteable = !isProvisioned && !numberOfPoliciesPreventingDeletion && !numberOfRules;
/** Given the alertmanager, the user's permissions, and the state of the contact point - can it actually be deleted? */
const canBeDeleted = deleteSupported && hasAbilityToDelete && contactPointIsDeleteable;
@@ -130,7 +133,7 @@ export const ContactPointHeader = ({ contactPoint, onDelete }: ContactPointHeade
const reasonsDeleteIsDisabled = [
!hasAbilityToDelete ? cannotDeleteNoPermissions : '',
provisioned ? cannotDeleteProvisioned : '',
isProvisioned ? cannotDeleteProvisioned : '',
numberOfPoliciesPreventingDeletion > 0 ? cannotDeletePolicies : '',
numberOfRules ? cannotDeleteRules : '',
].filter(Boolean);
@@ -209,15 +212,13 @@ export const ContactPointHeader = ({ contactPoint, onDelete }: ContactPointHeade
{referencedByRulesText}
</TextLink>
)}
{provisioned && (
<ProvisioningBadge tooltip provenance={getAnnotation(contactPoint, K8sAnnotations.Provenance)} />
)}
{isProvisioned && <ProvisioningBadge tooltip provenance={provenance} />}
{!isReferencedByAnything && <UnusedContactPointBadge />}
<Spacer />
<LinkButton
tooltipPlacement="top"
tooltip={
provisioned
isProvisioned
? t(
'alerting.contact-point-header.tooltip-provisioned-contact-points',
'Provisioned contact points cannot be edited in the UI'

View File

@@ -13,6 +13,7 @@ import { setupMswServer } from '../../mockApi';
import { grantUserPermissions, mockDataSource } from '../../mocks';
import { AlertmanagerProvider } from '../../state/AlertmanagerContext';
import { setupDataSources } from '../../testSetup/datasources';
import { KnownProvenance } from '../../types/knownProvenance';
import { DataSourceType, GRAFANA_RULES_SOURCE_NAME } from '../../utils/datasource';
import { ContactPoint } from './ContactPoint';
@@ -305,7 +306,9 @@ describe('contact points', () => {
});
it('should disable buttons when provisioned', async () => {
const { user } = renderWithProvider(<ContactPoint contactPoint={{ ...basicContactPoint, provisioned: true }} />);
const { user } = renderWithProvider(
<ContactPoint contactPoint={{ ...basicContactPoint, provenance: KnownProvenance.File }} />
);
expect(screen.getByText(/provisioned/i)).toBeInTheDocument();

View File

@@ -50,7 +50,7 @@ exports[`useContactPoints should return contact points with status 1`] = `
},
},
],
"provisioned": false,
"provenance": undefined,
},
{
"grafana_managed_receiver_configs": [
@@ -93,7 +93,7 @@ exports[`useContactPoints should return contact points with status 1`] = `
},
"name": "lotsa-emails",
"policies": [],
"provisioned": false,
"provenance": undefined,
},
{
"grafana_managed_receiver_configs": [
@@ -129,7 +129,7 @@ exports[`useContactPoints should return contact points with status 1`] = `
},
"name": "OnCall Conctact point",
"policies": [],
"provisioned": false,
"provenance": undefined,
},
{
"grafana_managed_receiver_configs": [
@@ -178,7 +178,7 @@ exports[`useContactPoints should return contact points with status 1`] = `
},
},
],
"provisioned": true,
"provenance": "api",
},
{
"grafana_managed_receiver_configs": [
@@ -243,7 +243,7 @@ exports[`useContactPoints should return contact points with status 1`] = `
},
"name": "Slack with multiple channels",
"policies": [],
"provisioned": false,
"provenance": undefined,
},
],
"error": undefined,
@@ -301,7 +301,7 @@ exports[`useContactPoints when having oncall plugin installed and no alert manag
},
},
],
"provisioned": false,
"provenance": undefined,
},
{
"grafana_managed_receiver_configs": [
@@ -344,7 +344,7 @@ exports[`useContactPoints when having oncall plugin installed and no alert manag
},
"name": "lotsa-emails",
"policies": [],
"provisioned": false,
"provenance": undefined,
},
{
"grafana_managed_receiver_configs": [
@@ -383,7 +383,7 @@ exports[`useContactPoints when having oncall plugin installed and no alert manag
},
"name": "OnCall Conctact point",
"policies": [],
"provisioned": false,
"provenance": undefined,
},
{
"grafana_managed_receiver_configs": [
@@ -432,7 +432,7 @@ exports[`useContactPoints when having oncall plugin installed and no alert manag
},
},
],
"provisioned": true,
"provenance": "api",
},
{
"grafana_managed_receiver_configs": [
@@ -497,7 +497,7 @@ exports[`useContactPoints when having oncall plugin installed and no alert manag
},
"name": "Slack with multiple channels",
"policies": [],
"provisioned": false,
"provenance": undefined,
},
],
"error": undefined,

View File

@@ -6,10 +6,13 @@ import { disablePlugin } from 'app/features/alerting/unified/mocks/server/config
import { setOnCallIntegrations } from 'app/features/alerting/unified/mocks/server/handlers/plugins/configure-plugins';
import { SupportedPlugin } from 'app/features/alerting/unified/types/pluginBridges';
import { GRAFANA_RULES_SOURCE_NAME } from 'app/features/alerting/unified/utils/datasource';
import { AlertManagerCortexConfig } from 'app/plugins/datasource/alertmanager/types';
import { AccessControlAction } from 'app/types/accessControl';
import { setupMswServer } from '../../mockApi';
import { grantUserPermissions } from '../../mocks';
import { setAlertmanagerConfig } from '../../mocks/server/entities/alertmanagers';
import { KnownProvenance } from '../../types/knownProvenance';
import { useContactPointsWithStatus } from './useContactPoints';
@@ -69,4 +72,235 @@ describe('useContactPoints', () => {
expect(snapshot).toMatchSnapshot();
});
});
describe('Provenance handling', () => {
it('should extract provenance when provenance is "api"', async () => {
// Set up alertmanager config with a receiver that has API provenance
const config: AlertManagerCortexConfig = {
template_files: {},
alertmanager_config: {
receivers: [
{
name: 'api-provenance-contact-point',
grafana_managed_receiver_configs: [
{
uid: 'test-uid-1',
name: 'api-provenance-contact-point',
type: 'email',
disableResolveMessage: false,
settings: {
addresses: 'test@example.com',
},
secureFields: {},
provenance: 'api', // This will be used by the K8s mock handler
},
],
},
],
},
};
setAlertmanagerConfig(GRAFANA_RULES_SOURCE_NAME, config);
const { result } = renderHook(
() =>
useContactPointsWithStatus({
alertmanager: GRAFANA_RULES_SOURCE_NAME,
fetchPolicies: false,
fetchStatuses: false,
}),
{ wrapper }
);
await waitFor(() => {
expect(result.current.isLoading).toBe(false);
});
const contactPoint = result.current.contactPoints?.find((cp) => cp.name === 'api-provenance-contact-point');
expect(contactPoint).toBeDefined();
expect(contactPoint?.provenance).toBe(KnownProvenance.API);
});
it('should extract provenance when provenance is "file"', async () => {
const config: AlertManagerCortexConfig = {
template_files: {},
alertmanager_config: {
receivers: [
{
name: 'file-provenance-contact-point',
grafana_managed_receiver_configs: [
{
uid: 'test-uid-2',
name: 'file-provenance-contact-point',
type: 'email',
disableResolveMessage: false,
settings: {
addresses: 'test@example.com',
},
secureFields: {},
provenance: 'file',
},
],
},
],
},
};
setAlertmanagerConfig(GRAFANA_RULES_SOURCE_NAME, config);
const { result } = renderHook(
() =>
useContactPointsWithStatus({
alertmanager: GRAFANA_RULES_SOURCE_NAME,
fetchPolicies: false,
fetchStatuses: false,
}),
{ wrapper }
);
await waitFor(() => {
expect(result.current.isLoading).toBe(false);
});
const contactPoint = result.current.contactPoints?.find((cp) => cp.name === 'file-provenance-contact-point');
expect(contactPoint).toBeDefined();
expect(contactPoint?.provenance).toBe(KnownProvenance.File);
});
it('should extract provenance when provenance is "converted_prometheus"', async () => {
const config: AlertManagerCortexConfig = {
template_files: {},
alertmanager_config: {
receivers: [
{
name: 'mimir-provenance-contact-point',
grafana_managed_receiver_configs: [
{
uid: 'test-uid-3',
name: 'mimir-provenance-contact-point',
type: 'email',
disableResolveMessage: false,
settings: {
addresses: 'test@example.com',
},
secureFields: {},
provenance: 'converted_prometheus',
},
],
},
],
},
};
setAlertmanagerConfig(GRAFANA_RULES_SOURCE_NAME, config);
const { result } = renderHook(
() =>
useContactPointsWithStatus({
alertmanager: GRAFANA_RULES_SOURCE_NAME,
fetchPolicies: false,
fetchStatuses: false,
}),
{ wrapper }
);
await waitFor(() => {
expect(result.current.isLoading).toBe(false);
});
const contactPoint = result.current.contactPoints?.find((cp) => cp.name === 'mimir-provenance-contact-point');
expect(contactPoint).toBeDefined();
expect(contactPoint?.provenance).toBe(KnownProvenance.ConvertedPrometheus);
});
it('should map "none" provenance annotation to undefined', async () => {
const config: AlertManagerCortexConfig = {
template_files: {},
alertmanager_config: {
receivers: [
{
name: 'none-provenance-contact-point',
grafana_managed_receiver_configs: [
{
uid: 'test-uid-4',
name: 'none-provenance-contact-point',
type: 'email',
disableResolveMessage: false,
settings: {
addresses: 'test@example.com',
},
secureFields: {},
// No provenance field - will default to PROVENANCE_NONE in mock handler
},
],
},
],
},
};
setAlertmanagerConfig(GRAFANA_RULES_SOURCE_NAME, config);
const { result } = renderHook(
() =>
useContactPointsWithStatus({
alertmanager: GRAFANA_RULES_SOURCE_NAME,
fetchPolicies: false,
fetchStatuses: false,
}),
{ wrapper }
);
await waitFor(() => {
expect(result.current.isLoading).toBe(false);
});
const contactPoint = result.current.contactPoints?.find((cp) => cp.name === 'none-provenance-contact-point');
expect(contactPoint).toBeDefined();
// The mock handler sets PROVENANCE_NONE ('none') when no provenance is found
// parseK8sReceiver converts 'none' to undefined
expect(contactPoint?.provenance).toBeUndefined();
});
it('should handle missing annotations gracefully', async () => {
// This test verifies that when annotations are undefined, provenance is handled correctly
const config: AlertManagerCortexConfig = {
template_files: {},
alertmanager_config: {
receivers: [
{
name: 'no-annotations-contact-point',
grafana_managed_receiver_configs: [
{
uid: 'test-uid-5',
name: 'no-annotations-contact-point',
type: 'email',
disableResolveMessage: false,
settings: {
addresses: 'test@example.com',
},
secureFields: {},
},
],
},
],
},
};
setAlertmanagerConfig(GRAFANA_RULES_SOURCE_NAME, config);
const { result } = renderHook(
() =>
useContactPointsWithStatus({
alertmanager: GRAFANA_RULES_SOURCE_NAME,
fetchPolicies: false,
fetchStatuses: false,
}),
{ wrapper }
);
await waitFor(() => {
expect(result.current.isLoading).toBe(false);
});
const contactPoint = result.current.contactPoints?.find((cp) => cp.name === 'no-annotations-contact-point');
expect(contactPoint).toBeDefined();
// When annotations are missing, the mock handler should set provenance to undefined
expect(contactPoint?.provenance).toBeUndefined();
});
});
});

View File

@@ -11,7 +11,7 @@ import { ComGithubGrafanaGrafanaPkgApisAlertingNotificationsV0Alpha1Receiver } f
import { BaseAlertmanagerArgs, Skippable } from 'app/features/alerting/unified/types/hooks';
import { cloudNotifierTypes } from 'app/features/alerting/unified/utils/cloud-alertmanager-notifier-types';
import { GRAFANA_RULES_SOURCE_NAME } from 'app/features/alerting/unified/utils/datasource';
import { isK8sEntityProvisioned, shouldUseK8sApi } from 'app/features/alerting/unified/utils/k8s/utils';
import { shouldUseK8sApi } from 'app/features/alerting/unified/utils/k8s/utils';
import { GrafanaManagedContactPoint, Receiver } from 'app/plugins/datasource/alertmanager/types';
import { getAPINamespace } from '../../../../../api/utils';
@@ -21,7 +21,9 @@ import { useAsync } from '../../hooks/useAsync';
import { usePluginBridge } from '../../hooks/usePluginBridge';
import { useProduceNewAlertmanagerConfiguration } from '../../hooks/useProduceNewAlertmanagerConfig';
import { addReceiverAction, deleteReceiverAction, updateReceiverAction } from '../../reducers/alertmanager/receivers';
import { KnownProvenance } from '../../types/knownProvenance';
import { getIrmIfPresentOrOnCallPluginId } from '../../utils/config';
import { K8sAnnotations } from '../../utils/k8s/constants';
import { enhanceContactPointsWithMetadata } from './utils';
@@ -78,10 +80,13 @@ const useOnCallIntegrations = ({ skip }: Skippable = {}) => {
type K8sReceiver = ComGithubGrafanaGrafanaPkgApisAlertingNotificationsV0Alpha1Receiver;
const parseK8sReceiver = (item: K8sReceiver): GrafanaManagedContactPoint => {
const metadataProvenance = item.metadata.annotations?.[K8sAnnotations.Provenance];
const provenance = metadataProvenance === KnownProvenance.None ? undefined : metadataProvenance;
return {
id: item.metadata.name || item.metadata.uid || item.spec.title,
name: item.spec.title,
provisioned: isK8sEntityProvisioned(item),
provenance: provenance,
grafana_managed_receiver_configs: item.spec.integrations,
metadata: item.metadata,
};
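The `'none'` → `undefined` normalization in `parseK8sReceiver` can be isolated as a pure function. A sketch, under the assumption (consistent with the test comments elsewhere in this PR) that `KnownProvenance.None` is the literal string `'none'`:

```typescript
// Sketch of the provenance normalization used by parseK8sReceiver above.
// Assumption: KnownProvenance.None is the literal string 'none'.
const PROVENANCE_NONE = 'none';

function normalizeProvenance(annotation: string | undefined): string | undefined {
  // The K8s annotation reports 'none' for unprovisioned resources;
  // the UI treats that the same as a missing annotation.
  return annotation === PROVENANCE_NONE ? undefined : annotation;
}

normalizeProvenance('none'); // → undefined
normalizeProvenance('file'); // → 'file'
normalizeProvenance(undefined); // → undefined
```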

View File

@@ -16,7 +16,8 @@ import {
deleteNotificationTemplateAction,
updateNotificationTemplateAction,
} from '../../reducers/alertmanager/notificationTemplates';
import { K8sAnnotations, PROVENANCE_NONE } from '../../utils/k8s/constants';
import { KnownProvenance } from '../../types/knownProvenance';
import { K8sAnnotations } from '../../utils/k8s/constants';
import { getAnnotation, shouldUseK8sApi } from '../../utils/k8s/utils';
import { ensureDefine } from '../../utils/templates';
import { TemplateFormValues } from '../receivers/TemplateForm';
@@ -79,7 +80,7 @@ function templateGroupsToTemplates(
function templateGroupToTemplate(
templateGroup: ComGithubGrafanaGrafanaPkgApisAlertingNotificationsV0Alpha1TemplateGroup
): NotificationTemplate {
const provenance = getAnnotation(templateGroup, K8sAnnotations.Provenance) ?? PROVENANCE_NONE;
const provenance = getAnnotation(templateGroup, K8sAnnotations.Provenance) ?? KnownProvenance.None;
return {
// K8s entities should always have a metadata.name property. The type is marked as optional because it's also used in other places
uid: templateGroup.metadata.name ?? templateGroup.spec.title,
@@ -96,8 +97,8 @@ function amConfigToTemplates(config: AlertManagerCortexConfig): NotificationTemp
uid: title,
title,
content,
// Undefined, null or empty string should be converted to PROVENANCE_NONE
provenance: (config.template_file_provenances ?? {})[title] || PROVENANCE_NONE,
// Undefined, null or empty string should be converted to KnownProvenance.None
provenance: (config.template_file_provenances ?? {})[title] || KnownProvenance.None,
missing: !templates.includes(title),
}));
}
@@ -272,7 +273,7 @@ export function useValidateNotificationTemplate({
}
interface NotificationTemplateMetadata {
isProvisioned: boolean;
provenance?: string;
}
export function useNotificationTemplateMetadata(
@@ -280,11 +281,11 @@ export function useNotificationTemplateMetadata(
): NotificationTemplateMetadata {
if (!template) {
return {
isProvisioned: false,
provenance: KnownProvenance.None,
};
}
return {
isProvisioned: Boolean(template.provenance) && template.provenance !== PROVENANCE_NONE,
provenance: template.provenance,
};
}

View File

@@ -1,8 +1,12 @@
import { GrafanaManagedContactPoint } from 'app/plugins/datasource/alertmanager/types';
import { KnownProvenance } from '../../types/knownProvenance';
import { ReceiverTypes } from '../receivers/grafanaAppReceivers/onCall/onCall';
import { RECEIVER_META_KEY, RECEIVER_PLUGIN_META_KEY } from './constants';
import {
ReceiverConfigWithMetadata,
enhanceContactPointsWithMetadata,
getReceiverDescription,
isAutoGeneratedPolicy,
summarizeEmailAddresses,
@@ -128,3 +132,110 @@ describe('summarizeEmailAddresses', () => {
expect(summarizeEmailAddresses('foo@foo.com\n bar@bar.com ')).toBe(output);
});
});
describe('enhanceContactPointsWithMetadata', () => {
it('should extract provenance from receiver configs when contact point has no provenance', () => {
const contactPoint: GrafanaManagedContactPoint = {
name: 'test-contact-point',
grafana_managed_receiver_configs: [
{
uid: 'test-uid',
name: 'test-contact-point',
type: 'email',
settings: { addresses: 'test@example.com' },
secureFields: {},
provenance: KnownProvenance.API,
},
],
};
const enhanced = enhanceContactPointsWithMetadata({
contactPoints: [contactPoint],
notifiers: [],
status: [],
});
expect(enhanced[0].provenance).toBe(KnownProvenance.API);
});
it('should prefer contact point provenance over receiver config provenance', () => {
const contactPoint: GrafanaManagedContactPoint = {
name: 'test-contact-point',
provenance: KnownProvenance.File, // Provenance on contact point (from K8s)
grafana_managed_receiver_configs: [
{
uid: 'test-uid',
name: 'test-contact-point',
type: 'email',
settings: { addresses: 'test@example.com' },
secureFields: {},
provenance: KnownProvenance.API, // Different provenance on receiver config
},
],
};
const enhanced = enhanceContactPointsWithMetadata({
contactPoints: [contactPoint],
notifiers: [],
status: [],
});
expect(enhanced[0].provenance).toBe(KnownProvenance.File);
});
it('should extract provenance from first receiver config that has it', () => {
const contactPoint: GrafanaManagedContactPoint = {
name: 'test-contact-point',
grafana_managed_receiver_configs: [
{
uid: 'test-uid-1',
name: 'test-contact-point',
type: 'email',
settings: { addresses: 'test@example.com' },
secureFields: {},
// No provenance on first receiver
},
{
uid: 'test-uid-2',
name: 'test-contact-point',
type: 'slack',
settings: { recipient: '#channel' },
secureFields: {},
provenance: KnownProvenance.ConvertedPrometheus, // Provenance on second receiver
},
],
};
const enhanced = enhanceContactPointsWithMetadata({
contactPoints: [contactPoint],
notifiers: [],
status: [],
});
expect(enhanced[0].provenance).toBe(KnownProvenance.ConvertedPrometheus);
});
it('should have undefined provenance when neither contact point nor receiver configs have provenance', () => {
const contactPoint: GrafanaManagedContactPoint = {
name: 'test-contact-point',
grafana_managed_receiver_configs: [
{
uid: 'test-uid',
name: 'test-contact-point',
type: 'email',
settings: { addresses: 'test@example.com' },
secureFields: {},
// No provenance
},
],
};
const enhanced = enhanceContactPointsWithMetadata({
contactPoints: [contactPoint],
notifiers: [],
status: [],
});
expect(enhanced[0].provenance).toBeUndefined();
});
});
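The precedence exercised by the tests above (the contact point's own provenance wins; otherwise the first receiver config carrying one) can be sketched as a standalone helper (the function name is hypothetical; the real logic lives inline in `enhanceContactPointsWithMetadata`):

```typescript
// Hypothetical standalone version of the provenance-resolution step:
// prefer the contact point's own provenance (set by the K8s API layer),
// otherwise fall back to the first receiver config that carries one.
interface ReceiverConfig {
  provenance?: string;
}

interface ContactPointLike {
  provenance?: string;
  grafana_managed_receiver_configs?: ReceiverConfig[];
}

function resolveProvenance(contactPoint: ContactPointLike): string | undefined {
  if (contactPoint.provenance !== undefined) {
    return contactPoint.provenance;
  }
  const receivers = contactPoint.grafana_managed_receiver_configs ?? [];
  return receivers.find((receiver) => Boolean(receiver.provenance))?.provenance;
}

resolveProvenance({ provenance: 'file', grafana_managed_receiver_configs: [{ provenance: 'api' }] }); // → 'file'
resolveProvenance({ grafana_managed_receiver_configs: [{}, { provenance: 'converted_prometheus' }] }); // → 'converted_prometheus'
resolveProvenance({ grafana_managed_receiver_configs: [{}] }); // → undefined
```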

View File

@@ -146,9 +146,16 @@ export function enhanceContactPointsWithMetadata({
const id = getContactPointIdentifier(contactPoint);
// Extract provenance from contactPoint first; else, search in its receivers
const contactPointProvenance =
'provenance' in contactPoint && contactPoint.provenance !== undefined
? contactPoint.provenance
: receivers.find((receiver) => Boolean(receiver.provenance))?.provenance;
return {
...contactPoint,
id,
provenance: contactPointProvenance,
policies:
alertmanagerConfiguration && usedContactPointsByName && (usedContactPointsByName[contactPoint.name] ?? []),
grafana_managed_receiver_configs: receivers.map((receiver, index) => {

View File

@@ -9,7 +9,7 @@ import {
IoK8SApimachineryPkgApisMetaV1ObjectMeta,
} from 'app/features/alerting/unified/openapi/timeIntervalsApi.gen';
import { BaseAlertmanagerArgs, Skippable } from 'app/features/alerting/unified/types/hooks';
import { PROVENANCE_NONE } from 'app/features/alerting/unified/utils/k8s/constants';
import { KnownProvenance } from 'app/features/alerting/unified/types/knownProvenance';
import {
isK8sEntityProvisioned,
shouldUseK8sApi,
@@ -62,7 +62,7 @@ const parseAmTimeInterval: (interval: MuteTimeInterval, provenance: string) => M
return {
...interval,
id: interval.name,
provisioned: Boolean(provenance && provenance !== PROVENANCE_NONE),
provisioned: Boolean(provenance && provenance !== KnownProvenance.None),
};
};

View File

@@ -11,7 +11,7 @@ import { AlertmanagerAction, useAlertmanagerAbility } from 'app/features/alertin
import { FormAmRoute } from 'app/features/alerting/unified/types/amroutes';
import { addUniqueIdentifierToRoute } from 'app/features/alerting/unified/utils/amroutes';
import { getErrorCode, stringifyErrorLike } from 'app/features/alerting/unified/utils/misc';
import { ObjectMatcher, ROUTES_META_SYMBOL, RouteWithID } from 'app/plugins/datasource/alertmanager/types';
import { ObjectMatcher, RouteWithID } from 'app/plugins/datasource/alertmanager/types';
import { anyOfRequestState, isError } from '../../hooks/useAsync';
import { useAlertmanager } from '../../state/AlertmanagerContext';
@@ -27,6 +27,7 @@ import { useAddPolicyModal, useAlertGroupsModal, useDeletePolicyModal, useEditPo
import { Policy } from './Policy';
import { TIMING_OPTIONS_DEFAULTS } from './timingOptions';
import {
isRouteProvisioned,
useAddNotificationPolicy,
useDeleteNotificationPolicy,
useNotificationPolicyRoute,
@@ -99,6 +100,8 @@ export const NotificationPoliciesList = () => {
}
return;
}, [defaultPolicy]);
const routeProvenance = defaultPolicy?.provenance;
const isRootRouteProvisioned = rootRoute ? isRouteProvisioned(rootRoute) : false;
// useAsync could also work but it's hard to wait until it's done in the tests
// Combining with useEffect gives more predictable results because the condition is in useEffect
@@ -244,7 +247,8 @@ export const NotificationPoliciesList = () => {
currentRoute={defaults(rootRoute, TIMING_OPTIONS_DEFAULTS)}
contactPointsState={contactPointsState.receivers}
readOnly={!hasConfigurationAPI}
provisioned={rootRoute[ROUTES_META_SYMBOL]?.provisioned}
provisioned={isRootRouteProvisioned}
provenance={routeProvenance}
alertManagerSourceName={selectedAlertmanager}
onAddPolicy={openAddModal}
onEditPolicy={openEditModal}

View File

@@ -17,6 +17,7 @@ import {
import { useAlertmanagerAbilities } from '../../hooks/useAbilities';
import { mockReceiversState } from '../../mocks';
import { AlertmanagerProvider } from '../../state/AlertmanagerContext';
import { KnownProvenance } from '../../types/knownProvenance';
import { GRAFANA_RULES_SOURCE_NAME } from '../../utils/datasource';
import {
@@ -331,6 +332,60 @@ describe('Policy', () => {
const customPolicy = screen.getByTestId('am-route-container');
expect(within(customPolicy).getByTestId('matches-all')).toBeInTheDocument();
});
it('shows correct badge when policy has file provenance', () => {
const mockRoute: RouteWithID = {
id: 'test-route',
receiver: 'test-receiver',
routes: [],
};
renderPolicy(
<Policy
readOnly
isDefaultPolicy
currentRoute={mockRoute}
contactPointsState={mockReceiversState()}
alertManagerSourceName={GRAFANA_RULES_SOURCE_NAME}
onEditPolicy={noop}
onAddPolicy={noop}
onDeletePolicy={noop}
onShowAlertInstances={noop}
provisioned
provenance={KnownProvenance.File}
/>
);
const badge = screen.getByText('Provisioned');
expect(badge).toBeInTheDocument();
});
it('shows correct badge when policy has converted_prometheus provenance', () => {
const mockRoute: RouteWithID = {
id: 'test-route',
receiver: 'test-receiver',
routes: [],
};
renderPolicy(
<Policy
readOnly
isDefaultPolicy
currentRoute={mockRoute}
contactPointsState={mockReceiversState()}
alertManagerSourceName={GRAFANA_RULES_SOURCE_NAME}
onEditPolicy={noop}
onAddPolicy={noop}
onDeletePolicy={noop}
onShowAlertInstances={noop}
provisioned
provenance={KnownProvenance.ConvertedPrometheus}
/>
);
const badge = screen.getByText('Imported');
expect(badge).toBeInTheDocument();
});
});
// Doesn't matter which path the routes use, it just needs to match the initialEntries history entry to render the element

View File

@@ -61,6 +61,7 @@ interface PolicyComponentProps {
contactPointsState?: ReceiversState;
readOnly?: boolean;
provisioned?: boolean;
provenance?: string;
inheritedProperties?: InheritableProperties;
routesMatchingFilters?: RoutesMatchingFilters;
@@ -89,6 +90,7 @@ const Policy = (props: PolicyComponentProps) => {
contactPointsState,
readOnly = false,
provisioned = false,
provenance,
alertManagerSourceName,
currentRoute,
inheritedProperties,
@@ -255,7 +257,7 @@ const Policy = (props: PolicyComponentProps) => {
<Spacer />
{/* TODO maybe we should move errors to the gutter instead? */}
{errors.length > 0 && <Errors errors={errors} />}
{provisioned && <ProvisioningBadge />}
{provisioned && <ProvisioningBadge tooltip provenance={provenance} />}
<Stack direction="row" gap={0.5}>
{!isAutoGenerated && !readOnly && (
<Authorize actions={[AlertmanagerAction.CreateNotificationPolicy]}>

View File

@@ -1,9 +1,15 @@
import { MatcherOperator, ROUTES_META_SYMBOL, Route } from 'app/plugins/datasource/alertmanager/types';
import { ComGithubGrafanaGrafanaPkgApisAlertingNotificationsV0Alpha1Route } from '../../openapi/routesApi.gen';
import { KnownProvenance } from '../../types/knownProvenance';
import { ROOT_ROUTE_NAME } from '../../utils/k8s/constants';
import { createKubernetesRoutingTreeSpec, k8sSubRouteToRoute, routeToK8sSubRoute } from './useNotificationPolicyRoute';
import {
createKubernetesRoutingTreeSpec,
isRouteProvisioned,
k8sSubRouteToRoute,
routeToK8sSubRoute,
} from './useNotificationPolicyRoute';
test('k8sSubRouteToRoute', () => {
const input: ComGithubGrafanaGrafanaPkgApisAlertingNotificationsV0Alpha1Route = {
@@ -115,3 +121,86 @@ test('createKubernetesRoutingTreeSpec', () => {
expect(tree.metadata.name).toBe(ROOT_ROUTE_NAME);
expect(tree).toMatchSnapshot();
});
describe('isRouteProvisioned', () => {
it('returns false when route has no provenance', () => {
const route: Route = {
receiver: 'test-receiver',
};
expect(isRouteProvisioned(route)).toBeFalsy();
});
it('returns false when route has KnownProvenance.None in metadata', () => {
const route: Route = {
receiver: 'test-receiver',
[ROUTES_META_SYMBOL]: {
provenance: KnownProvenance.None,
},
};
expect(isRouteProvisioned(route)).toBeFalsy();
});
it('returns false when route has KnownProvenance.None at top level', () => {
const route: Route = {
receiver: 'test-receiver',
provenance: KnownProvenance.None,
};
expect(isRouteProvisioned(route)).toBeFalsy();
});
it('returns true when route has file provenance in metadata', () => {
const route: Route = {
receiver: 'test-receiver',
[ROUTES_META_SYMBOL]: {
provenance: KnownProvenance.File,
},
};
expect(isRouteProvisioned(route)).toBeTruthy();
});
it('returns true when route has api provenance in metadata', () => {
const route: Route = {
receiver: 'test-receiver',
[ROUTES_META_SYMBOL]: {
provenance: KnownProvenance.API,
},
};
expect(isRouteProvisioned(route)).toBeTruthy();
});
it('returns true when route has converted_prometheus provenance in metadata', () => {
const route: Route = {
receiver: 'test-receiver',
[ROUTES_META_SYMBOL]: {
provenance: KnownProvenance.ConvertedPrometheus,
},
};
expect(isRouteProvisioned(route)).toBeTruthy();
});
it('returns true when route has file provenance at top level', () => {
const route: Route = {
receiver: 'test-receiver',
provenance: KnownProvenance.File,
};
expect(isRouteProvisioned(route)).toBeTruthy();
});
it('falls back to top-level provenance when metadata provenance is missing', () => {
const route: Route = {
receiver: 'test-receiver',
provenance: KnownProvenance.File,
[ROUTES_META_SYMBOL]: {
provenance: undefined,
},
};
expect(isRouteProvisioned(route)).toBeTruthy();
});
});


@@ -22,8 +22,8 @@ import {
} from '../../reducers/alertmanager/notificationPolicyRoutes';
import { FormAmRoute } from '../../types/amroutes';
import { addUniqueIdentifierToRoute } from '../../utils/amroutes';
-import { PROVENANCE_NONE, ROOT_ROUTE_NAME } from '../../utils/k8s/constants';
-import { isK8sEntityProvisioned, shouldUseK8sApi } from '../../utils/k8s/utils';
+import { K8sAnnotations, ROOT_ROUTE_NAME } from '../../utils/k8s/constants';
+import { getAnnotation, isProvisionedResource, shouldUseK8sApi } from '../../utils/k8s/utils';
import { routeAdapter } from '../../utils/routeAdapter';
import {
InsertPosition,
@@ -33,6 +33,11 @@ import {
omitRouteFromRouteTree,
} from '../../utils/routeTree';
+export function isRouteProvisioned(route: Route): boolean {
+  const provenance = route[ROUTES_META_SYMBOL]?.provenance ?? route.provenance;
+  return isProvisionedResource(provenance);
+}
const k8sRoutesToRoutesMemoized = memoize(k8sRoutesToRoutes, { maxSize: 1 });
const {
@@ -82,7 +87,7 @@ const parseAmConfigRoute = memoize((route: Route): Route => {
return {
...route,
[ROUTES_META_SYMBOL]: {
-provisioned: Boolean(route.provenance && route.provenance !== PROVENANCE_NONE),
+provenance: route.provenance,
},
};
});
@@ -232,10 +237,11 @@ function k8sRoutesToRoutes(routes: ComGithubGrafanaGrafanaPkgApisAlertingNotific
...route.spec.defaults,
routes: route.spec.routes?.map(k8sSubRouteToRoute),
[ROUTES_META_SYMBOL]: {
-provisioned: isK8sEntityProvisioned(route),
+provenance: getAnnotation(route, K8sAnnotations.Provenance),
resourceVersion: route.metadata.resourceVersion,
name: route.metadata.name,
},
+provenance: getAnnotation(route, K8sAnnotations.Provenance),
};
});
}
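The hunks above change `isRouteProvisioned` to resolve provenance from the route's metadata symbol first, falling back to the top-level field, and then delegate to `isProvisionedResource`. A minimal self-contained sketch of that resolution logic (the `KnownProvenance` values and the body of `isProvisionedResource` are assumptions here, inferred from how the tests use them):

```typescript
// Sketch only — type names mirror the diff, but KnownProvenance values and
// the isProvisionedResource check are assumptions, not the exact Grafana code.
const ROUTES_META_SYMBOL = Symbol('routes-meta');

enum KnownProvenance {
  None = 'none',
  File = 'file',
  API = 'api',
  ConvertedPrometheus = 'converted_prometheus',
}

interface Route {
  receiver?: string;
  provenance?: string;
  [ROUTES_META_SYMBOL]?: { provenance?: string };
}

// A resource counts as provisioned when provenance is set and not "none".
function isProvisionedResource(provenance?: string): boolean {
  return Boolean(provenance) && provenance !== KnownProvenance.None;
}

function isRouteProvisioned(route: Route): boolean {
  // Metadata provenance wins; undefined falls through to the top-level field via ??
  const provenance = route[ROUTES_META_SYMBOL]?.provenance ?? route.provenance;
  return isProvisionedResource(provenance);
}

console.log(isRouteProvisioned({ receiver: 'r' })); // false
console.log(isRouteProvisioned({ receiver: 'r', provenance: KnownProvenance.File })); // true
```

Note the use of `??` rather than `||`: an explicit `KnownProvenance.None` in metadata is truthy-irrelevant but defined, so it intentionally does not fall back to the top-level field — which is exactly what the "returns false when route has KnownProvenance.None in metadata" test above exercises.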


@@ -33,6 +33,7 @@ import { AccessControlAction } from 'app/types/accessControl';
import { AITemplateButtonComponent } from '../../enterprise-components/AI/AIGenTemplateButton/addAITemplateButton';
import { GRAFANA_RULES_SOURCE_NAME } from '../../utils/datasource';
+import { isProvisionedResource } from '../../utils/k8s/utils';
import { makeAMLink, stringifyErrorLike } from '../../utils/misc';
import { EditorColumnHeader } from '../EditorColumnHeader';
import { ProvisionedResource, ProvisioningAlert } from '../Provisioning';
@@ -122,7 +123,8 @@ export const TemplateForm = ({ originalTemplate, prefill, alertmanager }: Props)
// AI feedback state
const [aiGeneratedTemplate, setAiGeneratedTemplate] = useState(false);
-const { isProvisioned } = useNotificationTemplateMetadata(originalTemplate);
+const { provenance } = useNotificationTemplateMetadata(originalTemplate);
+const isProvisioned = isProvisionedResource(provenance);
const originalTemplatePrefill: TemplateFormValues | undefined = originalTemplate
? { title: originalTemplate.title, content: originalTemplate.content }
: undefined;


@@ -0,0 +1,98 @@
import { render, screen, within } from 'test/test-utils';
import { AppNotificationList } from 'app/core/components/AppNotifications/AppNotificationList';
import { AccessControlAction } from 'app/types/accessControl';
import { setupMswServer } from '../../mockApi';
import { grantUserPermissions } from '../../mocks';
import { AlertmanagerProvider } from '../../state/AlertmanagerContext';
import { KnownProvenance } from '../../types/knownProvenance';
import { GRAFANA_RULES_SOURCE_NAME } from '../../utils/datasource';
import { NotificationTemplate } from '../contact-points/useNotificationTemplates';
import { TemplatesTable } from './TemplatesTable';
const mockTemplates: Array<Partial<NotificationTemplate>> = [
{
uid: 'mimir-template',
title: 'mimir-template',
content: '{{ define "mimir-template" }}Template from Mimir{{ end }}',
provenance: KnownProvenance.ConvertedPrometheus,
},
{
uid: 'file-template',
title: 'file-template',
content: '{{ define "file-template" }}File provisioned template{{ end }}',
provenance: KnownProvenance.File,
},
{
uid: 'api-template',
title: 'api-template',
content: '{{ define "api-template" }}API provisioned template{{ end }}',
provenance: KnownProvenance.API,
},
{
uid: 'no-provenance-template',
title: 'no-provenance-template',
content: '{{ define "no-provenance-template" }}No provenance template{{ end }}',
provenance: KnownProvenance.None,
},
{
uid: 'undefined-provenance-template',
title: 'undefined-provenance-template',
content: '{{ define "undefined-provenance-template" }}Undefined provenance template{{ end }}',
provenance: undefined,
},
];
const renderWithProvider = (templates: Array<Partial<NotificationTemplate>>) => {
return render(
<AlertmanagerProvider accessType={'notification'}>
<TemplatesTable alertManagerName={GRAFANA_RULES_SOURCE_NAME} templates={templates as NotificationTemplate[]} />
<AppNotificationList />
</AlertmanagerProvider>
);
};
setupMswServer();
describe('TemplatesTable', () => {
beforeEach(() => {
grantUserPermissions([
AccessControlAction.AlertingNotificationsRead,
AccessControlAction.AlertingNotificationsWrite,
AccessControlAction.AlertingNotificationsExternalRead,
AccessControlAction.AlertingNotificationsExternalWrite,
]);
});
it('shows "Imported" badge for templates with converted_prometheus provenance', () => {
const templates = [mockTemplates[0]]; // mimir-template
renderWithProvider(templates);
const templateRow = screen.getByRole('row', { name: /mimir-template/i });
const badge = within(templateRow).getByText('Imported');
expect(badge).toBeInTheDocument();
});
it('shows "Provisioned" badge for templates with other provenance', () => {
// api and file templates
[mockTemplates[1], mockTemplates[2]].forEach((template) => {
renderWithProvider([template]);
const templateRow = screen.getByRole('row', { name: new RegExp(template.title ?? '', 'i') });
const badge = within(templateRow).getByText('Provisioned');
expect(badge).toBeInTheDocument();
});
});
it('does not show badge for templates with KnownProvenance.None or empty string provenance', () => {
// no-provenance-template and undefined-provenance-template
[mockTemplates[3], mockTemplates[4]].forEach((template) => {
renderWithProvider([template]);
const templateRow = screen.getByRole('row', { name: new RegExp(template.title ?? '', 'i') });
expect(within(templateRow).queryByText('Provisioned')).not.toBeInTheDocument();
});
});
});
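The `TemplatesTable` tests above pin down a three-way badge decision: `converted_prometheus` provenance renders "Imported", `file` and `api` render "Provisioned", and `none`, empty, or undefined provenance renders no badge at all. A hedged sketch of that decision table (the real logic lives in `ProvisioningBadge`; `badgeText` and the string values here are illustrative assumptions):

```typescript
// Sketch of the badge decision the tests exercise — not the actual
// ProvisioningBadge implementation; names and values are assumptions.
type Provenance = string | undefined;

function badgeText(provenance: Provenance): string | null {
  if (!provenance || provenance === 'none') {
    return null; // not provisioned: render no badge
  }
  // Imported resources (converted from Prometheus) get a distinct label.
  return provenance === 'converted_prometheus' ? 'Imported' : 'Provisioned';
}

console.log(badgeText('converted_prometheus')); // "Imported"
console.log(badgeText('file')); // "Provisioned"
console.log(badgeText(undefined)); // null
```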


@@ -10,6 +10,7 @@ import { GRAFANA_RULES_SOURCE_NAME } from 'app/features/alerting/unified/utils/d
import { Authorize } from '../../components/Authorize';
import { AlertmanagerAction } from '../../hooks/useAbilities';
import { getAlertTableStyles } from '../../styles/table';
+import { isProvisionedResource } from '../../utils/k8s/utils';
import { makeAMLink, stringifyErrorLike } from '../../utils/misc';
import { CollapseToggle } from '../CollapseToggle';
import { DetailsField } from '../DetailsField';
@@ -128,7 +129,8 @@ function TemplateRow({ notificationTemplate, idx, alertManagerName, onDeleteClic
const isGrafanaAlertmanager = alertManagerName === GRAFANA_RULES_SOURCE_NAME;
const [isExpanded, setIsExpanded] = useState(false);
-const { isProvisioned } = useNotificationTemplateMetadata(notificationTemplate);
+const { provenance } = useNotificationTemplateMetadata(notificationTemplate);
+const isProvisioned = isProvisionedResource(provenance);
const { uid, title: name, content: template, missing } = notificationTemplate;
const misconfiguredBadgeText = t('alerting.templates.misconfigured-badge-text', 'Misconfigured');
@@ -139,7 +141,7 @@ function TemplateRow({ notificationTemplate, idx, alertManagerName, onDeleteClic
<CollapseToggle isCollapsed={!isExpanded} onToggle={() => setIsExpanded(!isExpanded)} />
</td>
<td>
-{name} {isProvisioned && <ProvisioningBadge />}{' '}
+{name} {isProvisioned && <ProvisioningBadge tooltip provenance={provenance} />}{' '}
{missing && !isGrafanaAlertmanager && (
<Tooltip
content={


@@ -9,7 +9,11 @@ import {
} from 'app/features/alerting/unified/components/contact-points/useContactPoints';
import { showManageContactPointPermissions } from 'app/features/alerting/unified/components/contact-points/utils';
import { GRAFANA_RULES_SOURCE_NAME } from 'app/features/alerting/unified/utils/datasource';
-import { canEditEntity, canModifyProtectedEntity } from 'app/features/alerting/unified/utils/k8s/utils';
+import {
+  canEditEntity,
+  canModifyProtectedEntity,
+  isProvisionedResource,
+} from 'app/features/alerting/unified/utils/k8s/utils';
import {
GrafanaManagedContactPoint,
GrafanaManagedReceiverConfig,
@@ -127,7 +131,8 @@ export const GrafanaReceiverForm = ({ contactPoint, readOnly = false, editMode }
// If there is no contact point it means we're creating a new one, so scoped permissions doesn't exist yet
const hasScopedEditPermissions = contactPoint ? canEditEntity(contactPoint) : true;
const hasScopedEditProtectedPermissions = contactPoint ? canModifyProtectedEntity(contactPoint) : true;
-const isEditable = !readOnly && hasScopedEditPermissions && !contactPoint?.provisioned;
+const isProvisioned = isProvisionedResource(contactPoint?.provenance);
+const isEditable = !readOnly && hasScopedEditPermissions && !isProvisioned;
const isTestable = !readOnly;
const canEditProtectedFields = editMode ? hasScopedEditProtectedPermissions : true;
@@ -170,10 +175,8 @@ export const GrafanaReceiverForm = ({ contactPoint, readOnly = false, editMode }
</Alert>
)}
-{contactPoint?.provisioned && hasLegacyIntegrations(contactPoint, grafanaNotifiers) && (
-  <ImportedContactPointAlert />
-)}
-{contactPoint?.provisioned && !hasLegacyIntegrations(contactPoint, grafanaNotifiers) && (
+{isProvisioned && hasLegacyIntegrations(contactPoint, grafanaNotifiers) && <ImportedContactPointAlert />}
+{isProvisioned && !hasLegacyIntegrations(contactPoint, grafanaNotifiers) && (
<ProvisioningAlert resource={ProvisionedResource.ContactPoint} />
)}


@@ -7,8 +7,8 @@ import { grantUserPermissions } from 'app/features/alerting/unified/mocks';
import { getAlertmanagerConfig } from 'app/features/alerting/unified/mocks/server/entities/alertmanagers';
import { AlertmanagerProvider } from 'app/features/alerting/unified/state/AlertmanagerContext';
import { NotificationChannelOption } from 'app/features/alerting/unified/types/alerting';
+import { KnownProvenance } from 'app/features/alerting/unified/types/knownProvenance';
import { GRAFANA_RULES_SOURCE_NAME } from 'app/features/alerting/unified/utils/datasource';
-import { PROVENANCE_NONE } from 'app/features/alerting/unified/utils/k8s/constants';
import { DEFAULT_TEMPLATES } from 'app/features/alerting/unified/utils/template-constants';
import { AccessControlAction } from 'app/types/accessControl';
@@ -68,7 +68,7 @@ describe('getTemplateOptions function', () => {
uid: title,
title,
content,
-provenance: PROVENANCE_NONE,
+provenance: KnownProvenance.None,
};
});
const defaultTemplates = parseTemplates(DEFAULT_TEMPLATES);


@@ -4,7 +4,8 @@ import {
ComGithubGrafanaGrafanaPkgApisAlertingNotificationsV0Alpha1Route,
ComGithubGrafanaGrafanaPkgApisAlertingNotificationsV0Alpha1RoutingTree,
} from 'app/features/alerting/unified/openapi/routesApi.gen';
-import { K8sAnnotations, PROVENANCE_NONE, ROOT_ROUTE_NAME } from 'app/features/alerting/unified/utils/k8s/constants';
+import { KnownProvenance } from 'app/features/alerting/unified/types/knownProvenance';
+import { K8sAnnotations, ROOT_ROUTE_NAME } from 'app/features/alerting/unified/utils/k8s/constants';
import { AlertManagerCortexConfig, MatcherOperator, Route } from 'app/plugins/datasource/alertmanager/types';
/**
@@ -66,7 +67,7 @@ export const getUserDefinedRoutingTree: (
name: ROOT_ROUTE_NAME,
namespace: 'default',
annotations: {
-[K8sAnnotations.Provenance]: PROVENANCE_NONE,
+[K8sAnnotations.Provenance]: KnownProvenance.None,
},
// Resource versions are much shorter than this in reality, but this is an easy way
// for us to mock the concurrency logic and check if the policies have updated since the last fetch


@@ -6,8 +6,9 @@ import {
} from 'app/features/alerting/unified/mocks/server/entities/alertmanagers';
import { ALERTING_API_SERVER_BASE_URL, getK8sResponse } from 'app/features/alerting/unified/mocks/server/utils';
import { ComGithubGrafanaGrafanaPkgApisAlertingNotificationsV0Alpha1Receiver } from 'app/features/alerting/unified/openapi/receiversApi.gen';
+import { KnownProvenance } from 'app/features/alerting/unified/types/knownProvenance';
import { GRAFANA_RULES_SOURCE_NAME } from 'app/features/alerting/unified/utils/datasource';
-import { K8sAnnotations, PROVENANCE_NONE } from 'app/features/alerting/unified/utils/k8s/constants';
+import { K8sAnnotations } from 'app/features/alerting/unified/utils/k8s/constants';
const usedByPolicies = ['grafana-default-email'];
const usedByRules = ['grafana-default-email'];
@@ -23,7 +24,7 @@ const getReceiversList = () => {
const provenance =
contactPoint.grafana_managed_receiver_configs?.find((integration) => {
return integration.provenance;
-})?.provenance || PROVENANCE_NONE;
+})?.provenance || KnownProvenance.None;
return {
metadata: {
// This isn't exactly accurate, but it's the cleanest way to use the same data for AM config and K8S responses
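The handler above derives a single provenance value for a contact point by taking the first integration with a truthy provenance, defaulting to `KnownProvenance.None`. A short sketch of that reduction (the `ReceiverConfig` shape and helper name are assumptions; only the `find(...)?.provenance || None` pattern comes from the diff):

```typescript
// Hedged sketch of the provenance reduction in the mock receiver handler —
// ReceiverConfig and contactPointProvenance are illustrative names.
interface ReceiverConfig {
  provenance?: string;
}

const NONE = 'none'; // stands in for KnownProvenance.None

function contactPointProvenance(configs?: ReceiverConfig[]): string {
  // First integration with a truthy provenance wins; || (not ??) also
  // normalizes an empty-string provenance down to "none".
  return configs?.find((integration) => integration.provenance)?.provenance || NONE;
}

console.log(contactPointProvenance(undefined)); // "none"
console.log(contactPointProvenance([{ provenance: '' }, { provenance: 'file' }])); // "file"
```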


@@ -3,8 +3,9 @@ import { HttpResponse, http } from 'msw';
import { getAlertmanagerConfig } from 'app/features/alerting/unified/mocks/server/entities/alertmanagers';
import { ALERTING_API_SERVER_BASE_URL, getK8sResponse } from 'app/features/alerting/unified/mocks/server/utils';
import { ComGithubGrafanaGrafanaPkgApisAlertingNotificationsV0Alpha1TemplateGroup } from 'app/features/alerting/unified/openapi/templatesApi.gen';
+import { KnownProvenance } from 'app/features/alerting/unified/types/knownProvenance';
import { GRAFANA_RULES_SOURCE_NAME } from 'app/features/alerting/unified/utils/datasource';
-import { PROVENANCE_ANNOTATION, PROVENANCE_NONE } from 'app/features/alerting/unified/utils/k8s/constants';
+import { PROVENANCE_ANNOTATION } from 'app/features/alerting/unified/utils/k8s/constants';
const config = getAlertmanagerConfig(GRAFANA_RULES_SOURCE_NAME);
@@ -14,7 +15,7 @@ const mappedTemplates = Object.entries(
).map<ComGithubGrafanaGrafanaPkgApisAlertingNotificationsV0Alpha1TemplateGroup>(([title, template]) => ({
metadata: {
name: titleToK8sResourceName(title), // K8s uses unique identifiers for resources
-annotations: { [PROVENANCE_ANNOTATION]: config.template_file_provenances?.[title] || PROVENANCE_NONE },
+annotations: { [PROVENANCE_ANNOTATION]: config.template_file_provenances?.[title] || KnownProvenance.None },
},
spec: {
title: title,


@@ -4,7 +4,8 @@ import { base64UrlEncode } from '@grafana/alerting';
import { filterBySelector } from 'app/features/alerting/unified/mocks/server/handlers/k8s/utils';
import { ALERTING_API_SERVER_BASE_URL, getK8sResponse } from 'app/features/alerting/unified/mocks/server/utils';
import { ComGithubGrafanaGrafanaPkgApisAlertingNotificationsV0Alpha1TimeInterval } from 'app/features/alerting/unified/openapi/timeIntervalsApi.gen';
-import { K8sAnnotations, PROVENANCE_NONE } from 'app/features/alerting/unified/utils/k8s/constants';
+import { KnownProvenance } from 'app/features/alerting/unified/types/knownProvenance';
+import { K8sAnnotations } from 'app/features/alerting/unified/utils/k8s/constants';
/** UID of a time interval that we expect to follow all happy paths within tests/mocks */
export const TIME_INTERVAL_UID_HAPPY_PATH = 'f4eae7a4895fa786';
@@ -21,7 +22,7 @@ const allTimeIntervals = getK8sResponse<ComGithubGrafanaGrafanaPkgApisAlertingNo
{
metadata: {
annotations: {
-[K8sAnnotations.Provenance]: PROVENANCE_NONE,
+[K8sAnnotations.Provenance]: KnownProvenance.None,
},
name: base64UrlEncode(TIME_INTERVAL_NAME_HAPPY_PATH),
uid: TIME_INTERVAL_UID_HAPPY_PATH,
