Compare commits


22 Commits

Author SHA1 Message Date
Dominik Prokop
43e3efc98a Migrate to new import pattern
- Update all consumers to import from @grafana/schema/dashboard/v2beta1
- Update raw dashboard type imports to use @grafana/schema/dashboard/v0
- Add v2beta1/index.ts re-export file for the sub-path
- Consolidate duplicate imports to fix lint errors
2025-12-16 13:10:17 +01:00
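The consumer-side change described above might look like this; the type names and the old deep path are illustrative assumptions, not taken from the commit:

```ts
// Before: reaching into package internals (illustrative old path)
// import { DashboardV2Spec } from '@grafana/schema/dist/esm/schema/dashboard/v2beta1/types.gen';

// After: the versioned sub-path exports
import { DashboardV2Spec } from '@grafana/schema/dashboard/v2beta1';
import { DashboardDataDTO } from '@grafana/schema/dashboard/v0';
```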
Dominik Prokop
d853bd598d Add development build support for schema sub-paths
- Add webpack NormalModuleReplacementPlugin to resolve sub-paths from source
- Add TypeScript paths mappings for monorepo type-checking
- Add Jest moduleNameMapper for test resolution
2025-12-16 11:13:18 +01:00
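The three pieces of dev-build plumbing above might be wired roughly as follows; the regex, source paths, and file layout are assumptions for illustration, not the actual Grafana configuration:

```ts
// webpack config sketch: during development, rewrite sub-path imports to the
// package sources so no prebuilt dist/ output is required.
import webpack from 'webpack';

export const plugins = [
  new webpack.NormalModuleReplacementPlugin(
    /^@grafana\/schema\/dashboard\/(v0|v2beta1)$/,
    (resource: { request: string }) => {
      resource.request = resource.request.replace(
        /^@grafana\/schema\//,
        '@grafana/schema/src/'
      );
    }
  ),
];

// jest.config.js equivalent (sketch):
// moduleNameMapper: {
//   '^@grafana/schema/dashboard/(v0|v2beta1)$':
//     '<rootDir>/packages/grafana-schema/src/schema/dashboard/$1',
// }
```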
Dominik Prokop
1c94a7ddd5 Add versioned dashboard schema sub-path exports
- Create dashboard/v0 sub-path for raw dashboard types
- Create dashboard/v2beta1 sub-path for v2 schema types
- Add exports and typesVersions to package.json via prepare-npm-package.js
- typesVersions provides backwards compatibility for moduleResolution: node
- Add rollup build targets for both sub-paths
2025-12-16 10:34:46 +01:00
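The resulting package.json wiring might look like this; the dist paths are illustrative assumptions:

```json
{
  "exports": {
    ".": "./dist/index.js",
    "./dashboard/v0": "./dist/dashboard/v0/index.js",
    "./dashboard/v2beta1": "./dist/dashboard/v2beta1/index.js"
  },
  "typesVersions": {
    "*": {
      "dashboard/v0": ["dist/dashboard/v0/index.d.ts"],
      "dashboard/v2beta1": ["dist/dashboard/v2beta1/index.d.ts"]
    }
  }
}
```

The `typesVersions` block matters because TypeScript's `moduleResolution: "node"` ignores the `exports` field when resolving types; older consumers fall back to the `typesVersions` map instead.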
Georges Chaudy
3fe8e70436 Enhancement: Introduce optimized folder permission relations (#115247)
Enhancement: Introduce optimized folder permission relations and new permission definitions

- Added `can_get_permissions` and `can_set_permissions` relations to enhance permission management.
- Implemented `FolderPermissionRelation` function to optimize permission checks for folder resources.
- Updated `checkTyped` and `listTyped` methods to utilize optimized relations for permission management.
- Introduced a new benchmark test file for performance evaluation of permission checks and listings.
2025-12-16 10:14:06 +01:00
Misi
6350b26326 Fix: Move the hidden users exclusion to the DB layer (#115254)
* Move the hidden users exclusion to the store layer

* Address Copilot's feedback

* Improve test case name
2025-12-16 09:37:59 +01:00
Mustafa Sencer Özcan
2d6c1c4e9e docs: add readme for unified storage on-prem migrations (#114397)
* docs: add documentation for unified storage migrations

* docs: move

* docs: rename title

* docs: add docs

* fix: update table

* fix: lint

* docs: add migration table explanation
2025-12-16 08:00:22 +00:00
Ryan McKinley
9fb61bd9f6 Live: more cleanup (#115144) 2025-12-16 08:22:19 +03:00
Costa Alexoglou
b8a5a516b5 feat: enabled search in mt-dashbord srvc (#115366) 2025-12-15 17:57:44 -07:00
Santiago
200870a6d4 Alerting: Add compact model for alert rules (#115239) 2025-12-15 21:55:30 +01:00
Lauren
1cb7a00341 Alerting: Add managed folder validation frontend (#115203)
* hide alerts tab for git synced folders

* add tests for alert tab visibility

* hide managed folders from folder picker

* update UI so managed folders are disabled in dropdown not hidden

* add folder d to folder tree

* include folder d in useFolderQuery hook tests

* update provisioned folders from disabled to hidden in the folder selector

* remove disabled logic from NestedFolderList
2025-12-15 21:50:16 +01:00
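A minimal sketch of the final picker behavior in this PR (provisioned folders hidden rather than disabled); the `Folder` shape and `managedBy` field are illustrative assumptions:

```typescript
interface Folder {
  uid: string;
  title: string;
  managedBy?: 'repo'; // set when the folder is provisioned / git-synced
}

// Hide provisioned folders from the folder selector entirely,
// per the last commits above (disabled -> hidden).
function selectableFolders(folders: Folder[]): Folder[] {
  return folders.filter((f) => f.managedBy !== 'repo');
}
```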
Kristina Demeshchik
9aa8fb183d Dashboards: Fix edit button visibility to respect editable flag in new layouts (#115372)
Dashboard in `editable: false` mode
2025-12-15 14:25:21 -05:00
Johnny Kartheiser
eec4722372 alerting docs: restore config feature toggle info (#114056)
* alerting docs: restore config feature toggle info

* Update docs/sources/alerting/set-up/configure-alert-state-history/index.md

Co-authored-by: Alexander Akhmetov <me@alx.cx>

---------

Co-authored-by: Alexander Akhmetov <me@alx.cx>
2025-12-15 13:24:03 -06:00
Andrew Hackmann
956ab05148 Elasticsearch: Raw query editor for DSL (#114066)
* init

* it works! but what a mess

* nil ptr bug

* split up client.go

* split up search_request.go

* split up data_query.go

* split up response_parser

* fix merge

* update handling request

* raw dsl agg parser

* change rawQuery to rawDSLQuery

* agg parser works but needs work

* clean up agg parser

* fix bugs with raw dsl parsers

* feature toggle

* fix tests

* editor type selector

* editor type added

* add fix builder vs code by not using same query field

* clean up

* fix lint

* pretty

* editor type selection should be behind ft

* adam's feedback

* prettier
2025-12-15 19:11:05 +00:00
J Stickler
ca2babf1a3 docs: update visualizations for logs (#115183)
* docs: update visualizations for logs

* ran prettier

* vale errors
2025-12-15 13:59:48 -05:00
Haris Rozajac
8979808e4a Dashboard V1 -> V2 conversion: Rows with hidden header should never be collapsed (#115290)
* rows with hidden header should never be collapsed

* fix test

* shouldn't need to normalize this

* fix frontend conversion

* fix lint

* Update public/app/features/dashboard-scene/serialization/transformSaveModelToScene.ts

Co-authored-by: Ivan Ortega Alba <ivanortegaalba@gmail.com>

---------

Co-authored-by: oscarkilhed <oscar.kilhed@grafana.com>
Co-authored-by: Ivan Ortega Alba <ivanortegaalba@gmail.com>
2025-12-15 18:08:35 +00:00
Johnny Kartheiser
4d6fc09cb1 alerting docs: RBAC updates (#114776)
* alerting docs: RBAC updates

added permissions that weren't listed, broke up into smaller sections

* clarifications, edits, and suggestions

changed the formatting to address some comments, suggestions, and typos

* Update index.md

* basic roles table added to alerting

* permissions overview chart

* ai caught some other things...

* prettier

* "provenance:writer" addition

apparently it's not actually "status.writer"?

* prettier

* re: yuri comments
2025-12-15 11:30:38 -06:00
Oscar Kilhed
7b8d7d94ac Dashboards: Fix dashboard controls margin (#115360)
fix dashboard controls margin
2025-12-15 16:04:50 +00:00
Sonia Aguilar
1ffd19f1e9 Alerting: Update prompt for Analyze rule AI button (#115341)
* update prompt for analyze rule AI button

* bring back the follow up in prompt

* use navigation suggestion instead of follow up
2025-12-15 16:50:31 +01:00
Andreas Christou
ad793a5288 Logs: Improved flexibility of hasSupplementaryQuerySupport (#115348)
Pass the request for improved control
2025-12-15 15:43:22 +00:00
Roberto Jiménez Sánchez
08a6f31733 Provisioning: allow editors to POST jobs in provisioning API (#115351)
fix: allow editors to POST jobs in provisioning API

Editors should be able to post jobs in the 'jobs' endpoint for syncing
repositories. This aligns with the requirement that syncing a repository
requires editor privileges.

- Separated 'jobs' subresource authorization from repository/test
- Allow both admins and editors to POST jobs
- Added integration tests to verify permissions

Fixes authorization bug where editors were incorrectly denied access.
2025-12-15 15:39:07 +00:00
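The authorization split described above can be sketched as follows: POST to the 'jobs' subresource is open to Editors and Admins, while other repository subresources stay admin-only. The role names, ranks, and function are illustrative, not Grafana's actual authorizer:

```typescript
const ROLE_RANK: Record<string, number> = { Viewer: 1, Editor: 2, Admin: 3 };

// 'jobs' only needs Editor privileges (syncing a repository is an editor
// action); everything else on the repository resource requires Admin.
function canPost(role: string, subresource: string): boolean {
  const required = subresource === 'jobs' ? ROLE_RANK.Editor : ROLE_RANK.Admin;
  return (ROLE_RANK[role] ?? 0) >= required;
}
```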
Andreas Christou
6bc534d592 Chore: Move OpenTSDB to big tent (#114837) 2025-12-15 16:31:31 +01:00
alerting-team[bot]
7779c90713 Alerting: Add limits for the size of expanded notification templates (#115242)
* [create-pull-request] automated change

* propagate template limits from config

* fmt

---------

Co-authored-by: yuri-tceretian <25988953+yuri-tceretian@users.noreply.github.com>
Co-authored-by: Yuri Tseretyan <yuriy.tseretyan@grafana.com>
2025-12-15 10:21:24 -05:00
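The limit added above can be sketched as a byte-budget truncation, where 0 means "no limit". This is an illustration only; the real implementation lives in the Go grafana/alerting module:

```typescript
function truncateTemplateOutput(s: string, maxBytes: number): string {
  if (maxBytes <= 0) return s; // 0 disables the limit
  const bytes = new TextEncoder().encode(s);
  if (bytes.length <= maxBytes) return s;
  // Decode the first maxBytes bytes; a trailing partial multi-byte
  // character would decode to U+FFFD, which we trim off.
  return new TextDecoder().decode(bytes.slice(0, maxBytes)).replace(/\uFFFD+$/u, '');
}
```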
178 changed files with 4629 additions and 1125 deletions

.github/CODEOWNERS (vendored, 8 changed lines)

@@ -208,7 +208,7 @@
/pkg/tests/apis/shorturl @grafana/sharing-squad
/pkg/tests/api/correlations/ @grafana/datapro
/pkg/tsdb/grafanads/ @grafana/grafana-backend-group
-/pkg/tsdb/opentsdb/ @grafana/partner-datasources
+/pkg/tsdb/opentsdb/ @grafana/oss-big-tent
/pkg/util/ @grafana/grafana-backend-group
/pkg/web/ @grafana/grafana-backend-group
@@ -260,7 +260,7 @@
/devenv/dev-dashboards/dashboards.go @grafana/dataviz-squad
/devenv/dev-dashboards/home.json @grafana/dataviz-squad
/devenv/dev-dashboards/datasource-elasticsearch/ @grafana/partner-datasources
-/devenv/dev-dashboards/datasource-opentsdb/ @grafana/partner-datasources
+/devenv/dev-dashboards/datasource-opentsdb/ @grafana/oss-big-tent
/devenv/dev-dashboards/datasource-influxdb/ @grafana/partner-datasources
/devenv/dev-dashboards/datasource-mssql/ @grafana/partner-datasources
/devenv/dev-dashboards/datasource-loki/ @grafana/plugins-platform-frontend
@@ -307,7 +307,7 @@
/devenv/docker/blocks/mysql_exporter/ @grafana/oss-big-tent
/devenv/docker/blocks/mysql_opendata/ @grafana/oss-big-tent
/devenv/docker/blocks/mysql_tests/ @grafana/oss-big-tent
-/devenv/docker/blocks/opentsdb/ @grafana/partner-datasources
+/devenv/docker/blocks/opentsdb/ @grafana/oss-big-tent
/devenv/docker/blocks/postgres/ @grafana/oss-big-tent
/devenv/docker/blocks/postgres_tests/ @grafana/oss-big-tent
/devenv/docker/blocks/prometheus/ @grafana/oss-big-tent
@@ -1101,7 +1101,7 @@ eslint-suppressions.json @grafanabot
/public/app/plugins/datasource/mixed/ @grafana/dashboards-squad
/public/app/plugins/datasource/mssql/ @grafana/partner-datasources
/public/app/plugins/datasource/mysql/ @grafana/oss-big-tent
-/public/app/plugins/datasource/opentsdb/ @grafana/partner-datasources
+/public/app/plugins/datasource/opentsdb/ @grafana/oss-big-tent
/public/app/plugins/datasource/grafana-postgresql-datasource/ @grafana/oss-big-tent
/public/app/plugins/datasource/prometheus/ @grafana/oss-big-tent
/public/app/plugins/datasource/cloud-monitoring/ @grafana/partner-datasources


@@ -149,7 +149,7 @@ require (
github.com/google/go-querystring v1.1.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/google/wire v0.7.0 // indirect
-github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba // indirect
+github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 // indirect
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f // indirect
github.com/grafana/dataplane/sdata v0.0.9 // indirect
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 // indirect


@@ -606,8 +606,8 @@ github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2z
github.com/gorilla/mux v1.7.1/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
-github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
-github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
+github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 h1:ZzG/gCclEit9w0QUfQt9GURcOycAIGcsQAhY1u0AEX0=
+github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f h1:Cbm6OKkOcJ+7CSZsGsEJzktC/SIa5bxVeYKQLuYK86o=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f/go.mod h1:axY0cdOg3q0TZHwpHnIz5x16xZ8ZBxJHShsSHHXcHQg=
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 h1:Muoy+FMGrHj3GdFbvsMzUT7eusgii9PKf9L1ZaXDDbY=


@@ -4,7 +4,7 @@ go 1.25.5
require (
github.com/go-kit/log v0.2.1
-github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba
+github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4
github.com/grafana/grafana-app-sdk v0.48.5
github.com/grafana/grafana-app-sdk/logging v0.48.3


@@ -216,12 +216,10 @@ github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=
github.com/grafana/grafana-app-sdk v0.48.5/go.mod h1:HJsMOSBmt/D/Ihs1SvagOwmXKi0coBMVHlfvdd+qe9Y=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
-github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
-github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
+github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 h1:ZzG/gCclEit9w0QUfQt9GURcOycAIGcsQAhY1u0AEX0=
+github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4 h1:jSojuc7njleS3UOz223WDlXOinmuLAIPI0z2vtq8EgI=
github.com/grafana/dskit v0.0.0-20250908063411-6b6da59b5cc4/go.mod h1:VahT+GtfQIM+o8ht2StR6J9g+Ef+C2Vokh5uuSmOD/4=
github.com/grafana/grafana-app-sdk v0.48.5 h1:MS8l9fTZz+VbTfgApn09jw27GxhQ6fNOWGhC4ydvZmM=


@@ -530,7 +530,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
-"collapse": true,
+"collapse": false,
"hideHeader": true,
"layout": {
"kind": "GridLayout",


@@ -546,7 +546,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
-"collapse": true,
+"collapse": false,
"hideHeader": true,
"layout": {
"kind": "GridLayout",


@@ -548,7 +548,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
-"collapse": true,
+"collapse": false,
"hideHeader": true,
"layout": {
"kind": "GridLayout",


@@ -574,7 +574,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
-"collapse": true,
+"collapse": false,
"hideHeader": true,
"layout": {
"kind": "GridLayout",


@@ -1663,7 +1663,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
-"collapse": true,
+"collapse": false,
"hideHeader": true,
"layout": {
"kind": "GridLayout",


@@ -1727,7 +1727,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
-"collapse": true,
+"collapse": false,
"hideHeader": true,
"layout": {
"kind": "GridLayout",


@@ -328,7 +328,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
-"collapse": true,
+"collapse": false,
"hideHeader": true,
"layout": {
"kind": "GridLayout",


@@ -335,7 +335,7 @@
"kind": "RowsLayoutRow",
"spec": {
"title": "",
-"collapse": true,
+"collapse": false,
"hideHeader": true,
"layout": {
"kind": "GridLayout",


@@ -501,11 +501,9 @@ func convertToRowsLayout(ctx context.Context, panels []interface{}, dsIndexProvi
if currentRow != nil {
// If currentRow is a hidden-header row (panels before first explicit row),
-// set its collapse to match the first explicit row's collapsed value
-// This matches frontend behavior: collapse: panel.collapsed
+// it should not be collapsed because it will disappear and be visible only in edit mode
if currentRow.Spec.HideHeader != nil && *currentRow.Spec.HideHeader {
-rowCollapsed := getBoolField(panelMap, "collapsed", false)
-currentRow.Spec.Collapse = &rowCollapsed
+currentRow.Spec.Collapse = &[]bool{false}[0]
}
// Flush current row to layout
rows = append(rows, *currentRow)


@@ -75,9 +75,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": true,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -154,9 +154,9 @@
"effects": {
"barGlow": false,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -233,9 +233,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -312,9 +312,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -391,9 +391,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -470,9 +470,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": false,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -549,9 +549,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": false,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -641,9 +641,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -720,9 +720,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -799,9 +799,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -878,9 +878,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -974,9 +974,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1053,9 +1053,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1132,9 +1132,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false,
"gradient": true
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1211,9 +1211,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1290,9 +1290,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1386,9 +1386,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false,
"gradient": true
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1469,9 +1469,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false,
"gradient": true
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1552,9 +1552,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false,
"gradient": true
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1643,9 +1643,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"orientation": "auto",
@@ -1727,9 +1727,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"orientation": "auto",
@@ -1825,9 +1825,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"orientation": "auto",
@@ -1910,9 +1910,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"orientation": "auto",
@@ -1994,9 +1994,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"orientation": "auto",
@@ -2078,9 +2078,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"orientation": "auto",
@@ -2172,7 +2172,9 @@
},
"orientation": "auto",
"reduceOptions": {
-"calcs": ["lastNotNull"],
+"calcs": [
+"lastNotNull"
+],
"fields": "",
"values": false
},
@@ -2238,7 +2240,9 @@
},
"orientation": "auto",
"reduceOptions": {
-"calcs": ["lastNotNull"],
+"calcs": [
+"lastNotNull"
+],
"fields": "",
"values": false
},
@@ -2275,4 +2279,4 @@
"title": "Panel tests - Gauge (new)",
"uid": "panel-tests-gauge-new",
"weekStart": ""
}
}


@@ -955,9 +955,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1162,4 +1162,4 @@
"title": "Panel tests - Old gauge to new",
"uid": "panel-tests-old-gauge-to-new",
"weekStart": ""
}
}


@@ -221,7 +221,7 @@ require (
github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect
github.com/googleapis/gax-go/v2 v2.15.0 // indirect
github.com/gorilla/mux v1.8.1 // indirect
-github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba // indirect
+github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 // indirect
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f // indirect
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 // indirect
github.com/grafana/dataplane/sdata v0.0.9 // indirect


@@ -817,8 +817,8 @@ github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
-github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
-github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
+github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 h1:ZzG/gCclEit9w0QUfQt9GURcOycAIGcsQAhY1u0AEX0=
+github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f h1:Cbm6OKkOcJ+7CSZsGsEJzktC/SIa5bxVeYKQLuYK86o=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f/go.mod h1:axY0cdOg3q0TZHwpHnIz5x16xZ8ZBxJHShsSHHXcHQg=
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 h1:Muoy+FMGrHj3GdFbvsMzUT7eusgii9PKf9L1ZaXDDbY=


@@ -74,7 +74,7 @@ require (
github.com/google/gnostic-models v0.7.0 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/google/uuid v1.6.0 // indirect
-github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba // indirect
+github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 // indirect
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f // indirect
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 // indirect
github.com/grafana/dataplane/sdata v0.0.9 // indirect


@@ -174,8 +174,8 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
-github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
-github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
+github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 h1:ZzG/gCclEit9w0QUfQt9GURcOycAIGcsQAhY1u0AEX0=
+github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f h1:Cbm6OKkOcJ+7CSZsGsEJzktC/SIa5bxVeYKQLuYK86o=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f/go.mod h1:axY0cdOg3q0TZHwpHnIz5x16xZ8ZBxJHShsSHHXcHQg=
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 h1:Muoy+FMGrHj3GdFbvsMzUT7eusgii9PKf9L1ZaXDDbY=


@@ -1327,6 +1327,10 @@ alertmanager_max_silences_count =
# Maximum silence size in bytes. Default: 0 (no limit).
alertmanager_max_silence_size_bytes =
+# Maximum size of the expanded template output in bytes. Default: 10485760 (0 - no limit).
+# The result of template expansion will be truncated to the limit.
+alertmanager_max_template_output_bytes =
# Redis server address or addresses. It can be a single Redis address if using Redis standalone,
# or a list of comma-separated addresses if using Redis Cluster/Sentinel.
ha_redis_address =


@@ -44,7 +44,7 @@ refs:
destination: /docs/grafana-cloud/alerting-and-irm/oncall/user-and-team-management/#available-grafana-oncall-rbac-roles--granted-actions
---
-# RBAC role definitions
+# Grafana RBAC role definitions
{{< admonition type="note" >}}
Available in [Grafana Enterprise](/docs/grafana/<GRAFANA_VERSION>/introduction/grafana-enterprise/) and [Grafana Cloud](/docs/grafana-cloud).
@@ -59,7 +59,7 @@ The following tables list permissions associated with basic and fixed roles. Thi
| Grafana Admin | `basic_grafana_admin` |
| `fixed:authentication.config:writer`<br>`fixed:general.auth.config:writer`<br>`fixed:ldap:writer`<br>`fixed:licensing:writer`<br>`fixed:migrationassistant:migrator`<br>`fixed:org.users:writer`<br>`fixed:organization:maintainer`<br>`fixed:plugins:maintainer`<br>`fixed:provisioning:writer`<br>`fixed:roles:writer`<br>`fixed:settings:reader`<br>`fixed:settings:writer`<br>`fixed:stats:reader`<br>`fixed:support.bundles:writer`<br>`fixed:usagestats:reader`<br>`fixed:users:writer` | Default [Grafana server administrator](/docs/grafana/<GRAFANA_VERSION>/administration/roles-and-permissions/#grafana-server-administrators) assignments. |
| Admin | `basic_admin` | All roles assigned to Editor and `fixed:reports:writer` <br>`fixed:datasources:writer`<br>`fixed:organization:writer`<br>`fixed:datasources.permissions:writer`<br>`fixed:teams:writer`<br>`fixed:dashboards:writer`<br>`fixed:dashboards.permissions:writer`<br>`fixed:dashboards.public:writer`<br>`fixed:folders:writer`<br>`fixed:folders.permissions:writer`<br>`fixed:alerting:writer`<br>`fixed:alerting.provisioning.secrets:reader`<br>`fixed:alerting.provisioning:writer`<br>`fixed:datasources.caching:writer`<br>`fixed:plugins:writer`<br>`fixed:library.panels:writer` | Default [Grafana organization administrator](ref:rbac-basic-roles) assignments. |
-| Editor | `basic_editor` | All roles assigned to Viewer and `fixed:datasources:explorer` <br>`fixed:dashboards:creator`<br>`fixed:folders:creator`<br>`fixed:annotations:writer`<br>`fixed:alerting:writer`<br>`fixed:library.panels:creator`<br>`fixed:library.panels:general.writer`<br>`fixed:alerting.provisioning.status:writer` | Default [Editor](ref:rbac-basic-roles) assignments. |
+| Editor | `basic_editor` | All roles assigned to Viewer and `fixed:datasources:explorer` <br>`fixed:dashboards:creator`<br>`fixed:folders:creator`<br>`fixed:annotations:writer`<br>`fixed:alerting:writer`<br>`fixed:library.panels:creator`<br>`fixed:library.panels:general.writer`<br>`fixed:alerting.provisioning.provenance:writer` | Default [Editor](ref:rbac-basic-roles) assignments. |
| Viewer | `basic_viewer` | `fixed:datasources.id:reader`<br>`fixed:organization:reader`<br>`fixed:annotations:reader`<br>`fixed:annotations.dashboard:writer`<br>`fixed:alerting:reader`<br>`fixed:plugins.app:reader`<br>`fixed:dashboards.insights:reader`<br>`fixed:datasources.insights:reader`<br>`fixed:library.panels:general.reader`<br>`fixed:folders.general:reader`<br>`fixed:datasources.builtin:reader` | Default [Viewer](ref:rbac-basic-roles) assignments. |
| No Basic Role | n/a | | Default [No Basic Role](ref:rbac-basic-roles) |
@@ -74,86 +74,86 @@ These UUIDs won't be available if your instance was created before Grafana v10.2
To learn how to use the roles API to determine the role UUIDs, refer to [Manage RBAC roles](ref:rbac-manage-rbac-roles).
{{< /admonition >}}
| Fixed role | UUID | Permissions | Description |
| -------------------------------------------- | ----------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `fixed:alerting:reader` | `fixed_O2oP1_uBFozI2i93klAkcvEWR30` | All permissions from `fixed:alerting.rules:reader` <br>`fixed:alerting.instances:reader`<br>`fixed:alerting.notifications:reader` | Read-only permissions for all Grafana, Mimir, Loki and Alertmanager alert rules\*, alerts, contact points, and notification policies.[\*](#alerting-roles) |
| `fixed:alerting:writer` | `fixed_-PAZgSJsDlRD8NUg-PFSeH_BkJY` | All permissions from `fixed:alerting.rules:writer` <br>`fixed:alerting.instances:writer`<br>`fixed:alerting.notifications:writer` | Create, update, and delete Grafana, Mimir, Loki and Alertmanager alert rules\*, silences, contact points, templates, mute timings, and notification policies.[\*](#alerting-roles) |
| Fixed role | UUID | Permissions | Description |
| ----------------------------------------------- | ----------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `fixed:alerting:reader` | `fixed_O2oP1_uBFozI2i93klAkcvEWR30` | All permissions from `fixed:alerting.rules:reader` <br>`fixed:alerting.instances:reader`<br>`fixed:alerting.notifications:reader` | Read-only permissions for all Grafana, Mimir, Loki and Alertmanager alert rules\*, alerts, contact points, and notification policies.[\*](#alerting-roles) |
| `fixed:alerting:writer` | `fixed_-PAZgSJsDlRD8NUg-PFSeH_BkJY` | All permissions from `fixed:alerting.rules:writer` <br>`fixed:alerting.instances:writer`<br>`fixed:alerting.notifications:writer` | Create, update, and delete Grafana, Mimir, Loki and Alertmanager alert rules\*, silences, contact points, templates, mute timings, and notification policies.[\*](#alerting-roles) |
| `fixed:alerting.instances:reader` | `fixed_ut5fVS-Ulh_ejFoskFhJT_rYg0Y` | `alert.instances:read` for organization scope <br> `alert.instances.external:read` for scope `datasources:*` | Read all alerts and silences in the organization produced by Grafana Alerts and Mimir and Loki alerts and silences.[\*](#alerting-roles) |
| `fixed:alerting.instances:writer` | `fixed_pKOBJE346uyqMLdgWbk1NsQfEl0` | All permissions from `fixed:alerting.instances:reader` and<br> `alert.instances:create`<br>`alert.instances:write` for organization scope <br> `alert.instances.external:write` for scope `datasources:*` | Create, update and expire all silences in the organization produced by Grafana, Mimir, and Loki.[\*](#alerting-roles) |
| `fixed:alerting.notifications:reader` | `fixed_hmBn0lX5h1RZXB9Vaot420EEdA0` | `alert.notifications:read` for organization scope<br>`alert.notifications.external:read` for scope `datasources:*` | Read all Grafana and Alertmanager contact points, templates, and notification policies.[\*](#alerting-roles) |
| `fixed:alerting.notifications:writer` | `fixed_XplK6HPNxf9AP5IGTdB5Iun4tJc` | All permissions from `fixed:alerting.notifications:reader` and<br>`alert.notifications:write` for organization scope<br>`alert.notifications.external:write` for scope `datasources:*` | Create, update, and delete contact points, templates, mute timings, and notification policies for Grafana and external Alertmanager.[\*](#alerting-roles) |
| `fixed:alerting.provisioning:writer` | `fixed_y7pFjdEkxpx5ETdcxPvp0AgRuUo` | `alert.provisioning:read` and `alert.provisioning:write` | Create, update, and delete Grafana alert rules, notification policies, contact points, templates, and so on, via the provisioning API.[\*](#alerting-roles) |
| `fixed:alerting.provisioning.secrets:reader` | `fixed_9fmzXXZZG-Od0Amy2ofEG8Uk--c` | `alert.provisioning:read` and `alert.provisioning.secrets:read` | Read-only permissions for the provisioning API, including exporting resources with decrypted secrets.[\*](#alerting-roles) |
| `fixed:alerting.provisioning.provenance:writer` | `fixed_eAxlzfkTuobvKEgXHveFMBZrOj8` | `alert.provisioning.provenance:write` | Set the provenance status on alert rules, notification policies, contact points, and so on. Should be used together with the regular writer roles.[\*](#alerting-roles) |
| `fixed:alerting.rules:reader` | `fixed_fRGKL_vAqUsmUWq5EYKnOha9DcA` | `alert.rule:read`, `alert.silences:read` for scope `folders:*` <br> `alert.rules.external:read` for scope `datasources:*` <br> `alert.notifications.time-intervals:read` <br> `alert.notifications.receivers:list` | Read all\* Grafana, Mimir, and Loki alert rules, and read rule-specific silences.[\*](#alerting-roles) |
| `fixed:alerting.rules:writer` | `fixed_YJJGwAalUwDZPrXSyFH8GfYBXAc` | All permissions from `fixed:alerting.rules:reader` and <br> `alert.rule:create` <br> `alert.rule:write` <br> `alert.rule:delete` <br> `alert.silences:create` <br> `alert.silences:write` for scope `folders:*` <br> `alert.rules.external:write` for scope `datasources:*` | Create, update, and delete all\* Grafana, Mimir, and Loki alert rules, and manage rule-specific silences.[\*](#alerting-roles) |
| `fixed:annotations:reader` | `fixed_hpZnoizrfAJsrceNcNQqWYV-xNU` | `annotations:read` for scopes `annotations:type:*` | Read all annotations and annotation tags. |
| `fixed:annotations:writer` | `fixed_ZVW-Aa9Tzle6J4s2aUFcq1StKWE` | All permissions from `fixed:annotations:reader` and <br>`annotations:write` <br>`annotations:create`<br> `annotations:delete` for scope `annotations:type:*` | Read, create, update, and delete all annotations and annotation tags. |
| `fixed:annotations.dashboard:writer` | `fixed_8A775xenXeKaJk4Cr7bchP9yXOA` | `annotations:write` <br>`annotations:create`<br> `annotations:delete` for scope `annotations:type:dashboard` | Create, update, and delete dashboard annotations and annotation tags. |
| `fixed:authentication.config:writer` | `fixed_0rYhZ2Qnzs8AdB1nX7gexk3fHDw` | `settings:read` for scope `settings:auth.saml:*` <br> `settings:write` for scope `settings:auth.saml:*` | Read and update authentication and SAML settings. |
| `fixed:general.auth.config:writer` | `fixed_QFxIT_FGtBqbIVJIwx1bLgI5z6c` | `settings:read` for scope `settings:auth:oauth_allow_insecure_email_lookup` <br> `settings:write` for scope `settings:auth:oauth_allow_insecure_email_lookup` | Read and update the Grafana instance's general authentication configuration settings. |
| `fixed:dashboards:creator` | `fixed_ZorKUcEPCM01A1fPakEzGBUyU64` | `dashboards:create`<br>`folders:read` | Create dashboards. |
| `fixed:dashboards:reader` | `fixed_Sgr67JTOhjQGFlzYRahOe45TdWM` | `dashboards:read` | Read all dashboards. |
| `fixed:dashboards:writer` | `fixed_OK2YOQGIoI1G031hVzJB6rAJQAs` | All permissions from `fixed:dashboards:reader` and <br>`dashboards:write`<br>`dashboards:delete`<br>`dashboards:create`<br>`dashboards.permissions:read`<br>`dashboards.permissions:write` | Read, create, update, and delete all dashboards. |
| `fixed:dashboards.insights:reader` | `fixed_JlBJ2_gizP8zhgaeGE2rjyZe2Rs` | `dashboards.insights:read` | Read dashboard insights data and see presence indicators. |
| `fixed:dashboards.permissions:reader` | `fixed_f17oxuXW_58LL8mYJsm4T_mCeIw` | `dashboards.permissions:read` | Read all dashboard permissions. |
| `fixed:dashboards.permissions:writer` | `fixed_CcznxhWX_Yqn8uWMXMQ-b5iFW9k` | All permissions from `fixed:dashboards.permissions:reader` and <br>`dashboards.permissions:write` | Read and update all dashboard permissions. |
| `fixed:dashboards.public:writer` | `fixed_f_GHHRBciaqESXfGz2oCcooqHxs` | `dashboards.public:write` | Create, update, delete or pause a shared dashboard. |
| `fixed:datasources:creator` | `fixed_XX8jHREgUt-wo1A-rPXIiFlX6Zw` | `datasources:create` | Create data sources. |
| `fixed:datasources:explorer` | `fixed_qDzW9mzx9yM91T5Bi8dHUM2muTw` | `datasources:explore` | Enable the Explore feature. Data source permissions still apply, you can only query data sources for which you have query permissions. |
| `fixed:datasources:reader` | `fixed_C2x8IxkiBc1KZVjyYH775T9jNMQ` | `datasources:read`<br>`datasources:query` | Read and query data sources. |
| `fixed:datasources:writer` | `fixed_q8HXq8kjjA5IlHHgBJlKlUyaNik` | All permissions from `fixed:datasources:reader` and <br>`datasources:create`<br>`datasources:write`<br>`datasources:delete` | Read, query, create, delete, or update a data source. |
| `fixed:datasources.builtin:reader` | `fixed_q8HXq8kjjA5IlHHgBJlKlUyaNik` | `datasources:read` and `datasources:query` scoped to `datasources:uid:grafana` | An internal role used to grant Viewers access to the builtin example data source in Grafana. |
| `fixed:datasources.caching:reader` | `fixed_D2ddpGxJYlw0mbsTS1ek9fj0kj4` | `datasources.caching:read` | Read data source query caching settings. |
| `fixed:datasources.caching:writer` | `fixed_JtFjHr7jd7hSqUYcktKvRvIOGRE` | `datasources.caching:read`<br>`datasources.caching:write` | Enable, disable, or update query caching settings. |
| `fixed:datasources.id:reader` | `fixed_entg--fHmDqWY2-69N0ocawK0Os` | `datasources.id:read` | Read the ID of a data source based on its name. |
| `fixed:datasources.insights:reader` | `fixed_EBZ3NwlfecNPp2p0XcZRC1nfEYk` | `datasources.insights:read` | Read data source insights data. |
| `fixed:datasources.permissions:reader` | `fixed_ErYA-cTN3yn4h4GxaVPcawRhiOY` | `datasources.permissions:read` | Read data source permissions. |
| `fixed:datasources.permissions:writer` | `fixed_aiQh9YDfLOKjQhYasF9_SFUjQiw` | All permissions from `fixed:datasources.permissions:reader` and <br>`datasources.permissions:write` | Create, read, or delete permissions of a data source. |
| `fixed:folders:creator` | `fixed_gGLRbZGAGB6n9uECqSh_W382RlQ` | `folders:create` | Create folders in the root level. |
| `fixed:folders:reader` | `fixed_yeW-5QPeo-i5PZUIUXMlAA97GnQ` | `folders:read`<br>`dashboards:read` | Read all folders and dashboards. |
| `fixed:folders:writer` | `fixed_wJXLoTzgE7jVuz90dryYoiogL0o` | All permissions from `fixed:dashboards:writer` and <br>`folders:read`<br>`folders:write`<br>`folders:create`<br>`folders:delete`<br>`folders.permissions:read`<br>`folders.permissions:write` | Read, update, and delete all folders and dashboards. Create folders and subfolders. |
| `fixed:folders.general:reader` | `fixed_rSASbkg8DvpG_gTX5s41d7uxRvI` | `folders:read` scoped to `folders:uid:general` | An internal role used to correctly display access to the folder tree for Viewer role. |
| `fixed:folders.permissions:reader` | `fixed_E06l4cx0JFm47EeLBE4nmv3pnSo` | `folders.permissions:read` | Read all folder permissions. |
| `fixed:folders.permissions:writer` | `fixed_3GAgpQ_hWG8o7-lwNb86_VB37eI` | All permissions from `fixed:folders.permissions:reader` and <br>`folders.permissions:write` | Read and update all folder permissions. |
| `fixed:ldap:reader` | `fixed_lMcOPwSkxKY-qCK8NMJc5k6izLE` | `ldap.user:read`<br>`ldap.status:read` | Read the LDAP configuration and LDAP status information. |
| `fixed:ldap:writer` | `fixed_p6AvnU4GCQyIh7-hbwI-bk3GYnU` | All permissions from `fixed:ldap:reader` and <br>`ldap.user:sync`<br>`ldap.config:reload` | Read and update the LDAP configuration, and read LDAP status information. |
| `fixed:library.panels:creator` | `fixed_6eX6ItfegCIY5zLmPqTDW8ZV7KY` | `library.panels:create`<br>`folders:read` | Create library panel at the root level. |
| `fixed:library.panels:general.reader` | `fixed_ct0DghiBWR_2BiQm3EvNPDVmpio` | `library.panels:read` | Read all library panels at the root level. |
| `fixed:library.panels:general.writer` | `fixed_DgprkmqfN_1EhZ2v1_d1fYG8LzI` | All permissions from `fixed:library.panels:general.reader` plus<br>`library.panels:create`<br>`library.panels:delete`<br>`library.panels:write` | Create, read, write or delete all library panels and their permissions at the root level. |
| `fixed:library.panels:reader` | `fixed_tvTr9CnZ6La5vvUO_U_X1LPnhUs` | `library.panels:read` | Read all library panels. |
| `fixed:library.panels:writer` | `fixed_JTljAr21LWLTXCkgfBC4H0lhBC8` | All permissions from `fixed:library.panels:reader` plus<br>`library.panels:create`<br>`library.panels:delete`<br>`library.panels:write` | Create, read, write or delete all library panels and their permissions. |
| `fixed:licensing:reader` | `fixed_OADpuXvNEylO2Kelu3GIuBXEAYE` | `licensing:read`<br>`licensing.reports:read` | Read licensing information and licensing reports. |
| `fixed:licensing:writer` | `fixed_gzbz3rJpQMdaKHt-E4q0PVaKMoE` | All permissions from `fixed:licensing:reader` and <br>`licensing:write`<br>`licensing:delete` | Read licensing information and licensing reports, update and delete the license token. |
| `fixed:migrationassistant:migrator` | `fixed_LLk2p7TRuBztOAksTQb1Klc8YTk` | `migrationassistant:migrate` | Execute on-prem to cloud migrations through the Migration Assistant. |
| `fixed:org.users:reader` | `fixed_oCqNwlVHLOpw7-jAlwp4HzYqwGY` | `org.users:read` | Read users within a single organization. |
| `fixed:org.users:writer` | `fixed_VERj5nayasjgf_Yh0sWqqCkxWlw` | All permissions from `fixed:org.users:reader` and <br>`org.users:add`<br>`org.users:remove`<br>`org.users:write` | Within a single organization, add a user, invite a new user, read information about a user and their role, remove a user from that organization, or change the role of a user. |
| `fixed:organization:maintainer` | `fixed_CMm-uuBaPUBf4r8XG3jIvxo55bg` | All permissions from `fixed:organization:reader` and <br> `orgs:write`<br>`orgs:create`<br>`orgs:delete`<br>`orgs.quotas:write` | Create, read, write, or delete an organization. Read or write its quotas. This role needs to be assigned globally. |
| `fixed:organization:reader` | `fixed_0SZPJlTHdNEe8zO91zv7Zwiwa2w` | `orgs:read`<br>`orgs.quotas:read` | Read an organization and its quotas. |
| `fixed:organization:writer` | `fixed_Y4jGqDd8w1yCrPwlik8z5Iu8-3M` | All permissions from `fixed:organization:reader` and <br> `orgs:write`<br>`orgs.preferences:read`<br>`orgs.preferences:write` | Read an organization, its quotas, or its preferences. Update organization properties, or its preferences. |
| `fixed:plugins:maintainer` | `fixed_yEOKidBcWgbm74x-nTa3lW5lOyY` | `plugins:install` | Install and uninstall plugins. Needs to be assigned globally. |
| `fixed:plugins:writer` | `fixed_MRYpGk7kpNNwt2VoVOXFiPnQziE` | `plugins:write` | Enable and disable plugins and edit plugins' settings. |
| `fixed:plugins.app:reader` | `fixed_AcZRiNYx7NueYkUqzw1o2OGGUAA` | `plugins.app:access` | Access application plugins (still enforcing the organization role). |
| `fixed:provisioning:writer` | `fixed_bgk1FCyR6OEDwhgirZlQgu5LlCA` | `provisioning:reload` | Reload provisioning. |
| `fixed:reports:reader` | `fixed_72_8LU_0ukfm6BdblOw8Z9q-GQ8` | `reports:read`<br>`reports:send`<br>`reports.settings:read` | Read all reports and shared report settings. |
| `fixed:reports:writer` | `fixed_jBW3_7g1EWOjGVBYeVRwtFxhUNw` | All permissions from `fixed:reports:reader` and <br>`reports:create`<br>`reports:write`<br>`reports:delete`<br>`reports.settings:write` | Create, read, update, or delete all reports and shared report settings. |
| `fixed:roles:reader` | `fixed_GkfG-1NSwEGb4hpK3-E3qHyNltc` | `roles:read`<br>`teams.roles:read`<br>`users.roles:read`<br>`users.permissions:read` | Read all access control roles, roles and permissions assigned to users, teams. |
| `fixed:roles:resetter` | `fixed_WgPpC3qJRmVpVTJavFNwfS5RuzQ` | `roles:write` with scope `permissions:type:escalate` | Reset basic roles to their default. |
| `fixed:roles:writer` | `fixed_W5aFaw8isAM27x_eWfElBhZ0iOc` | All permissions from `fixed:roles:reader` and <br>`roles:write`<br>`roles:delete`<br>`teams.roles:add`<br>`teams.roles:remove`<br>`users.roles:add`<br>`users.roles:remove` | Create, read, update, or delete all roles, assign or unassign roles to users, teams. |
| `fixed:serviceaccounts:creator` | `fixed_Ikw60fckA0MyiiZ73BawSfOULy4` | `serviceaccounts:create` | Create Grafana service accounts. |
| `fixed:serviceaccounts:reader` | `fixed_QFjJAZ88iawMLInYOxPA1DB1w6I` | `serviceaccounts:read` | Read Grafana service accounts. |
| `fixed:serviceaccounts:writer` | `fixed_iBvUNUEZBZ7PUW0vdkN5iojc2sk` | `serviceaccounts:read`<br>`serviceaccounts:create`<br>`serviceaccounts:write`<br>`serviceaccounts:delete`<br>`serviceaccounts.permissions:read`<br>`serviceaccounts.permissions:write` | Create, update, read and delete all Grafana service accounts and manage service account permissions. |
| `fixed:settings:reader` | `fixed_0LaUt1x6PP8hsZzEBhqPQZFUd8Q` | `settings:read` | Read Grafana instance settings. |
| `fixed:settings:writer` | `fixed_joIHDgMrGg790hMhUufVzcU4j44` | All permissions from `fixed:settings:reader` and<br>`settings:write` | Read and update Grafana instance settings. |
| `fixed:stats:reader` | `fixed_OnRCXxZVINWpcKvTF5A1gecJ7pA` | `server.stats:read` | Read Grafana instance statistics. |
| `fixed:support.bundles:reader` | `fixed_gcPjI3PTUJwRx-GJZwDhNa7zbos` | `support.bundles:read` | List and download support bundles. |
| `fixed:support.bundles:writer` | `fixed_dTgCv9Wxrp_WHAhwHYIgeboxKpE` | `support.bundles:read`<br>`support.bundles:create`<br>`support.bundles:delete` | Create, delete, list and download support bundles. |
| `fixed:teams:creator` | `fixed_nzVQoNSDSn0fg1MDgO6XnZX2RZI` | `teams:create`<br>`org.users:read` | Create a team and list organization users (required to manage the created team). |
| `fixed:teams:read` | `fixed_Z8pB0GQlrqRt8IZBCJQxPWvJPgQ` | `teams:read` | List all teams. |
| `fixed:teams:writer` | `fixed_xw1T0579h620MOYi4L96GUs7fZY` | `teams:create`<br>`teams:delete`<br>`teams:read`<br>`teams:write`<br>`teams.permissions:read`<br>`teams.permissions:write` | Create, read, update and delete teams and manage team memberships. |
| `fixed:usagestats:reader` | `fixed_eAM0azEvnWFCJAjNkUKnGL_1-bU` | `server.usagestats.report:read` | View usage statistics report. |
| `fixed:users:reader` | `fixed_buZastUG3reWyQpPemcWjGqPAd0` | `users:read`<br>`users.quotas:read`<br>`users.authtoken:read` | Read all users and their information, such as team memberships, authentication tokens, and quotas. |
| `fixed:users:writer` | `fixed_wjzgHHo_Ux25DJuELn_oiAdB_yM` | All permissions from `fixed:users:reader` and <br>`users:write`<br>`users:create`<br>`users:delete`<br>`users:enable`<br>`users:disable`<br>`users.password:write`<br>`users.permissions:write`<br>`users:logout`<br>`users.authtoken:write`<br>`users.quotas:write` | Read and update all attributes and settings for all users in Grafana: update user information, read user information, create, enable, or disable a user, make a user a Grafana administrator, sign out a user, update a user's authentication token, or update quotas for all users. |
### Alerting roles
@@ -164,10 +164,20 @@ Access to Grafana alert rules is an intersection of many permissions:
- Permission to read a folder. For example, the fixed role `fixed:folders:reader` includes the action `folders:read` and a folder scope `folders:id:`.
- Permission to query **all** data sources that a given alert rule uses. If a user cannot query a given data source, they cannot see any alert rules that query that data source.
There is only one exclusion at this moment. Role `fixed:alerting.provisioning:writer` does not require user to have any additional permissions and provides access to all aspects of the alerting configuration via special provisioning API.
The only exception is the role `fixed:alerting.provisioning:writer`, which does not require the user to have any additional permissions and provides access to all aspects of the alerting configuration via a special provisioning API.
For more information about the permissions required to access alert rules, refer to [Create a custom role to access alerts in a folder](ref:plan-rbac-rollout-strategy-create-a-custom-role-to-access-alerts-in-a-folder).
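The intersection above can be sketched as a small check. The permission map, scope strings, and `can_view_alert_rule` helper below are illustrative assumptions, not a Grafana API:

```python
# Sketch: effective access to a Grafana alert rule is the intersection of
# three checks. Exact scope strings are compared; wildcards are ignored here.

def can_view_alert_rule(user_perms, rule_folder, rule_datasources):
    """user_perms maps an action to the set of scopes the user holds."""
    has_rule_read = rule_folder in user_perms.get("alert.rules:read", set())
    has_folder_read = rule_folder in user_perms.get("folders:read", set())
    # The user must be able to query *all* data sources the rule uses.
    can_query_all = rule_datasources <= user_perms.get("datasources:query", set())
    return has_rule_read and has_folder_read and can_query_all

perms = {
    "alert.rules:read": {"folders:uid:ops"},
    "folders:read": {"folders:uid:ops"},
    "datasources:query": {"datasources:uid:prom"},
}
print(can_view_alert_rule(perms, "folders:uid:ops", {"datasources:uid:prom"}))  # True
# A rule that also queries a data source the user cannot query is hidden:
print(can_view_alert_rule(perms, "folders:uid:ops",
                          {"datasources:uid:prom", "datasources:uid:loki"}))  # False
```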
#### Alerting basic roles
The following table lists the default RBAC alerting role assignments to the basic roles:
| Basic role | Associated fixed roles | Description |
| ---------- | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------- |
| Admin | `fixed:alerting:writer`<br>`fixed:alerting.provisioning.secrets:reader`<br>`fixed:alerting.provisioning:writer` | Default [Grafana organization administrator](ref:rbac-basic-roles) assignments. |
| Editor | `fixed:alerting:writer`<br>`fixed:alerting.provisioning.provenance:writer` | Default [Editor](ref:rbac-basic-roles) assignments. |
| Viewer | `fixed:alerting:reader` | Default [Viewer](ref:rbac-basic-roles) assignments. |
### Grafana OnCall roles
If you are using [Grafana OnCall](ref:oncall), you can try out the integration between Grafana OnCall and RBAC.


@@ -62,6 +62,9 @@ The following steps describe a basic configuration:
# The URL of the Loki server
loki_remote_url = http://localhost:3100
[feature_toggles]
enable = alertingCentralAlertHistory
```
1. **Configure the Loki data source in Grafana**


@@ -17,55 +17,166 @@ weight: 155
# Configure RBAC
Role-based access control (RBAC) for Grafana Enterprise and Grafana Cloud provides a standardized way of granting, changing, and revoking access, so that users can view and modify Grafana resources.
[Role-based access control (RBAC)](/docs/grafana/latest/administration/roles-and-permissions/access-control/plan-rbac-rollout-strategy/) for Grafana Enterprise and Grafana Cloud provides a standardized way of granting, changing, and revoking access, so that users can view and modify Grafana resources.
A user is any individual who can log in to Grafana. Each user is associated with a role that includes permissions. Permissions determine the tasks a user can perform in the system.
A user is any individual who can log in to Grafana. Each user has a role that includes permissions. Permissions determine the tasks a user can perform in the system.
Each permission contains one or more actions and a scope.
## Role types
Grafana has three types of roles for managing access:
- **Basic roles**: Admin, Editor, Viewer, and No basic role. These are assigned to users and provide default access levels.
- **Fixed roles**: Predefined groups of permissions for specific use cases. Basic roles automatically include certain fixed roles.
- **Custom roles**: User-defined roles that combine specific permissions for granular access control.
## Basic role permissions
The following table summarizes the default alerting permissions for each basic role.
| Capability | Admin | Editor | Viewer |
| ----------------------------------------- | :---: | :----: | :----: |
| View alert rules | ✓ | ✓ | ✓ |
| Create, edit, and delete alert rules | ✓ | ✓ | |
| View silences | ✓ | ✓ | ✓ |
| Create, edit, and expire silences | ✓ | ✓ | |
| View contact points and templates | ✓ | ✓ | ✓ |
| Create, edit, and delete contact points | ✓ | ✓ | |
| View notification policies | ✓ | ✓ | ✓ |
| Create, edit, and delete policies | ✓ | ✓ | |
| View mute timings | ✓ | ✓ | ✓ |
| Create, edit, and delete timing intervals | ✓ | ✓ | |
| Access provisioning API | ✓ | ✓ | |
| Export with decrypted secrets | ✓ | | |
{{< admonition type="note" >}}
Access to alert rules also requires permission to read the folder containing the rules and permission to query the data sources used in the rules.
{{< /admonition >}}
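The defaults in the table can be transcribed into a quick lookup; this is a sketch only, with capability names invented for illustration:

```python
# Default alerting access per basic role, transcribed from the table above.
DEFAULT_ALERTING_ACCESS = {
    "view alert rules": {"Admin", "Editor", "Viewer"},
    "edit alert rules": {"Admin", "Editor"},
    "edit silences": {"Admin", "Editor"},
    "access provisioning API": {"Admin", "Editor"},
    "export with decrypted secrets": {"Admin"},
}

def allowed(role: str, capability: str) -> bool:
    return role in DEFAULT_ALERTING_ACCESS.get(capability, set())

print(allowed("Viewer", "view alert rules"))               # True
print(allowed("Viewer", "edit alert rules"))               # False
print(allowed("Editor", "export with decrypted secrets"))  # False
```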
## Permissions
Grafana Alerting has the following permissions.
Grafana Alerting has the following permissions organized by resource type.
| Action | Applicable scope | Description |
| -------------------------------------------- | -------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `alert.instances.external:read` | `datasources:*`<br>`datasources:uid:*` | Read alerts and silences in data sources that support alerting. |
| `alert.instances.external:write` | `datasources:*`<br>`datasources:uid:*` | Manage alerts and silences in data sources that support alerting. |
| `alert.instances:create` | n/a | Create silences in the current organization. |
| `alert.instances:read` | n/a | Read alerts and silences in the current organization. |
| `alert.instances:write` | n/a | Update and expire silences in the current organization. |
| `alert.notifications.external:read` | `datasources:*`<br>`datasources:uid:*` | Read templates, contact points, notification policies, and mute timings in data sources that support alerting. |
| `alert.notifications.external:write` | `datasources:*`<br>`datasources:uid:*` | Manage templates, contact points, notification policies, and mute timings in data sources that support alerting. |
| `alert.notifications:write` | n/a | Manage templates, contact points, notification policies, and mute timings in the current organization. |
| `alert.notifications:read` | n/a | Read all templates, contact points, notification policies, and mute timings in the current organization. |
| `alert.rules.external:read` | `datasources:*`<br>`datasources:uid:*` | Read alert rules in data sources that support alerting (Prometheus, Mimir, and Loki) |
| `alert.rules.external:write` | `datasources:*`<br>`datasources:uid:*` | Create, update, and delete alert rules in data sources that support alerting (Mimir and Loki). |
| `alert.rules:create` | `folders:*`<br>`folders:uid:*` | Create Grafana alert rules in a folder and its subfolders. Combine this permission with `folders:read` in a scope that includes the folder and `datasources:query` in the scope of data sources the user can query. |
| `alert.rules:delete` | `folders:*`<br>`folders:uid:*` | Delete Grafana alert rules in a folder and its subfolders. Combine this permission with `folders:read` in a scope that includes the folder. |
| `alert.rules:read` | `folders:*`<br>`folders:uid:*` | Read Grafana alert rules in a folder and its subfolders. Combine this permission with `folders:read` in a scope that includes the folder. |
| `alert.rules:write` | `folders:*`<br>`folders:uid:*` | Update Grafana alert rules in a folder and its subfolders. Combine this permission with `folders:read` in a scope that includes the folder. To allow query modifications add `datasources:query` in the scope of data sources the user can query. |
| `alert.silences:create` | `folders:*`<br>`folders:uid:*` | Create rule-specific silences in a folder and its subfolders. |
| `alert.silences:read` | `folders:*`<br>`folders:uid:*` | Read all general silences and rule-specific silences in a folder and its subfolders. |
| `alert.silences:write` | `folders:*`<br>`folders:uid:*` | Update and expire rule-specific silences in a folder and its subfolders. |
| `alert.provisioning:read` | n/a | Read all Grafana alert rules, notification policies, etc via provisioning API. Permissions to folders and data source are not required. |
| `alert.provisioning.secrets:read` | n/a | Same as `alert.provisioning:read` plus ability to export resources with decrypted secrets. |
| `alert.provisioning:write` | n/a | Update all Grafana alert rules, notification policies, etc via provisioning API. Permissions to folders and data source are not required. |
| `alert.provisioning.provenance:write` | n/a | Set provisioning status for alerting resources. Cannot be used alone. Requires user to have permissions to access resources |
| `alert.notifications.receivers:read` | `receivers:*`<br>`receivers:uid:*` | Read contact points. |
| `alert.notifications.receivers.secrets:read` | `receivers:*`<br>`receivers:uid:*` | Export contact points with decrypted secrets. |
| `alert.notifications.receivers:create` | n/a | Create a new contact points. The creator is automatically granted full access to the created contact point. |
| `alert.notifications.receivers:write` | `receivers:*`<br>`receivers:uid:*` | Update existing contact points. |
| `alert.notifications.receivers:delete` | `receivers:*`<br>`receivers:uid:*` | Update and delete existing contact points. |
| `receivers.permissions:read` | `receivers:*`<br>`receivers:uid:*` | Read permissions for contact points. |
| `receivers.permissions:write` | `receivers:*`<br>`receivers:uid:*` | Manage permissions for contact points. |
| `alert.notifications.time-intervals:read` | n/a | Read mute time intervals. |
| `alert.notifications.time-intervals:write` | n/a | Create new or update existing mute time intervals. |
| `alert.notifications.time-intervals:delete` | n/a | Delete existing time intervals. |
| `alert.notifications.templates:read` | n/a | Read templates. |
| `alert.notifications.templates:write` | n/a | Create new or update existing templates. |
| `alert.notifications.templates:delete` | n/a | Delete existing templates. |
| `alert.notifications.templates.test:write` | n/a | Test templates with custom payloads (preview and payload editor functionality). |
| `alert.notifications.routes:read` | n/a | Read notification policies. |
| `alert.notifications.routes:write` | n/a | Create new, update and update notification policies. |
### Alert rules
Permissions for managing Grafana-managed alert rules.
| Action | Applicable scope | Description |
| -------------------- | ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `alert.rules:create` | `folders:*`<br>`folders:uid:*` | Create Grafana alert rules in a folder and its subfolders. Combine this permission with `folders:read` in a scope that includes the folder and `datasources:query` in the scope of data sources the user can query. |
| `alert.rules:read` | `folders:*`<br>`folders:uid:*` | Read Grafana alert rules in a folder and its subfolders. Combine this permission with `folders:read` in a scope that includes the folder. |
| `alert.rules:write` | `folders:*`<br>`folders:uid:*` | Update Grafana alert rules in a folder and its subfolders. Combine this permission with `folders:read` in a scope that includes the folder. To allow query modifications add `datasources:query` in the scope of data sources the user can query. |
| `alert.rules:delete` | `folders:*`<br>`folders:uid:*` | Delete Grafana alert rules in a folder and its subfolders. Combine this permission with `folders:read` in a scope that includes the folder. |
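The `folders:*` and `folders:uid:*` scopes act as prefix wildcards. A minimal matcher, assuming the `:`-separated scope format used in these tables (not Grafana's actual scope resolver):

```python
def scope_matches(granted: str, requested: str) -> bool:
    """Return True if a granted scope covers a requested scope.

    A '*' segment matches any remainder, e.g. 'folders:*' covers
    'folders:uid:abc'. Illustrative sketch only.
    """
    g, r = granted.split(":"), requested.split(":")
    for i, part in enumerate(g):
        if part == "*":
            return True
        if i >= len(r) or part != r[i]:
            return False
    return g == r  # exact match when no wildcard was present

print(scope_matches("folders:*", "folders:uid:abc"))        # True
print(scope_matches("folders:uid:*", "folders:uid:abc"))    # True
print(scope_matches("folders:uid:abc", "folders:uid:xyz"))  # False
```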
### External alert rules
Permissions for managing alert rules in external data sources that support alerting.
| Action | Applicable scope | Description |
| ---------------------------- | -------------------------------------- | ---------------------------------------------------------------------------------------------- |
| `alert.rules.external:read` | `datasources:*`<br>`datasources:uid:*` | Read alert rules in data sources that support alerting (Prometheus, Mimir, and Loki). |
| `alert.rules.external:write` | `datasources:*`<br>`datasources:uid:*` | Create, update, and delete alert rules in data sources that support alerting (Mimir and Loki). |
### Alert instances and silences
Permissions for managing alert instances and silences in Grafana.
| Action | Applicable scope | Description |
| ------------------------ | ------------------------------ | ------------------------------------------------------------------------------------ |
| `alert.instances:read` | n/a | Read alerts and silences in the current organization. |
| `alert.instances:create` | n/a | Create silences in the current organization. |
| `alert.instances:write` | n/a | Update and expire silences in the current organization. |
| `alert.silences:read` | `folders:*`<br>`folders:uid:*` | Read all general silences and rule-specific silences in a folder and its subfolders. |
| `alert.silences:create` | `folders:*`<br>`folders:uid:*` | Create rule-specific silences in a folder and its subfolders. |
| `alert.silences:write` | `folders:*`<br>`folders:uid:*` | Update and expire rule-specific silences in a folder and its subfolders. |
### External alert instances
Permissions for managing alert instances in external data sources.
| Action | Applicable scope | Description |
| -------------------------------- | -------------------------------------- | ----------------------------------------------------------------- |
| `alert.instances.external:read` | `datasources:*`<br>`datasources:uid:*` | Read alerts and silences in data sources that support alerting. |
| `alert.instances.external:write` | `datasources:*`<br>`datasources:uid:*` | Manage alerts and silences in data sources that support alerting. |
### Contact points
Permissions for managing contact points (notification receivers).
| Action | Applicable scope | Description |
| -------------------------------------------- | ---------------------------------- | ----------------------------------------------------------------------------------------------------------- |
| `alert.notifications.receivers:list` | n/a | List contact points in the current organization. |
| `alert.notifications.receivers:read` | `receivers:*`<br>`receivers:uid:*` | Read contact points. |
| `alert.notifications.receivers.secrets:read` | `receivers:*`<br>`receivers:uid:*` | Export contact points with decrypted secrets. |
| `alert.notifications.receivers:create`       | n/a                                | Create new contact points. The creator is automatically granted full access to the created contact point.   |
| `alert.notifications.receivers:write` | `receivers:*`<br>`receivers:uid:*` | Update existing contact points. |
| `alert.notifications.receivers:delete`       | `receivers:*`<br>`receivers:uid:*` | Delete existing contact points.                                                                             |
| `alert.notifications.receivers:test` | `receivers:*`<br>`receivers:uid:*` | Test contact points to verify their configuration. |
| `receivers.permissions:read` | `receivers:*`<br>`receivers:uid:*` | Read permissions for contact points. |
| `receivers.permissions:write` | `receivers:*`<br>`receivers:uid:*` | Manage permissions for contact points. |
### Notification policies
Permissions for managing notification policies (routing rules).
| Action | Applicable scope | Description |
| ---------------------------------- | ---------------- | ----------------------------------------------------- |
| `alert.notifications.routes:read` | n/a | Read notification policies. |
| `alert.notifications.routes:write` | n/a | Create new, update, and delete notification policies. |
### Time intervals
Permissions for managing mute time intervals.
| Action | Applicable scope | Description |
| ------------------------------------------- | ---------------- | -------------------------------------------------- |
| `alert.notifications.time-intervals:read` | n/a | Read mute time intervals. |
| `alert.notifications.time-intervals:write` | n/a | Create new or update existing mute time intervals. |
| `alert.notifications.time-intervals:delete` | n/a | Delete existing time intervals. |
### Templates
Permissions for managing notification templates.
| Action | Applicable scope | Description |
| ------------------------------------------ | ---------------- | ------------------------------------------------------------------------------- |
| `alert.notifications.templates:read` | n/a | Read templates. |
| `alert.notifications.templates:write` | n/a | Create new or update existing templates. |
| `alert.notifications.templates:delete` | n/a | Delete existing templates. |
| `alert.notifications.templates.test:write` | n/a | Test templates with custom payloads (preview and payload editor functionality). |
### General notifications
Legacy permissions for managing all notification resources.
| Action | Applicable scope | Description |
| --------------------------- | ---------------- | -------------------------------------------------------------------------------------------------------- |
| `alert.notifications:read` | n/a | Read all templates, contact points, notification policies, and mute timings in the current organization. |
| `alert.notifications:write` | n/a | Manage templates, contact points, notification policies, and mute timings in the current organization. |
### External notifications
Permissions for managing notification resources in external data sources.
| Action | Applicable scope | Description |
| ------------------------------------ | -------------------------------------- | ---------------------------------------------------------------------------------------------------------------- |
| `alert.notifications.external:read` | `datasources:*`<br>`datasources:uid:*` | Read templates, contact points, notification policies, and mute timings in data sources that support alerting. |
| `alert.notifications.external:write` | `datasources:*`<br>`datasources:uid:*` | Manage templates, contact points, notification policies, and mute timings in data sources that support alerting. |
### Provisioning
Permissions for managing alerting resources via the provisioning API.
| Action | Applicable scope | Description |
| ---------------------------------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `alert.provisioning:read`                | n/a              | Read all Grafana alert rules, notification policies, and other alerting resources via the provisioning API. Permissions to folders and data sources are not required.  |
| `alert.provisioning.secrets:read`        | n/a              | Same as `alert.provisioning:read`, plus the ability to export resources with decrypted secrets.                                                                        |
| `alert.provisioning:write`               | n/a              | Update all Grafana alert rules, notification policies, and other alerting resources via the provisioning API. Permissions to folders and data sources are not required. |
| `alert.rules.provisioning:read` | n/a | Read Grafana alert rules via provisioning API. More specific than `alert.provisioning:read`. |
| `alert.rules.provisioning:write` | n/a | Create, update, and delete Grafana alert rules via provisioning API. More specific than `alert.provisioning:write`. |
| `alert.notifications.provisioning:read` | n/a | Read notification resources (contact points, notification policies, templates, time intervals) via provisioning API. More specific than `alert.provisioning:read`. |
| `alert.notifications.provisioning:write` | n/a | Create, update, and delete notification resources via provisioning API. More specific than `alert.provisioning:write`. |
| `alert.provisioning.provenance:write`    | n/a              | Set provisioning status for alerting resources. Cannot be used alone. Requires the user to have permissions to access the resources.                                |
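One way to read the "more specific than" relationships above is that the broad provisioning actions cover the specific ones. A hedged sketch of that mapping, where `IMPLIES` and `effective_actions` are invented names:

```python
# Broad provisioning actions cover the more specific actions listed above.
# Illustrative mapping only, not Grafana source.
IMPLIES = {
    "alert.provisioning:read": {
        "alert.rules.provisioning:read",
        "alert.notifications.provisioning:read",
    },
    "alert.provisioning:write": {
        "alert.rules.provisioning:write",
        "alert.notifications.provisioning:write",
    },
}

def effective_actions(granted):
    """Expand a set of granted actions with the ones they imply."""
    out = set(granted)
    for action in granted:
        out |= IMPLIES.get(action, set())
    return out

print(sorted(effective_actions({"alert.provisioning:read"})))
```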
To help plan your RBAC rollout strategy, refer to [Plan your RBAC rollout strategy](https://grafana.com/docs/grafana/next/administration/roles-and-permissions/access-control/plan-rbac-rollout-strategy/).


@@ -16,7 +16,7 @@ title: Manage access using folders or data sources
weight: 200
---
## Manage access using folders or data sources
# Manage access using folders or data sources
You can extend the access provided by a role to alert rules and rule-specific silences by assigning permissions to individual folders or data sources.


@@ -55,7 +55,7 @@ Details of the fixed roles and the access they provide for Grafana Alerting are
| Full read-only access: `fixed:alerting:reader` | All permissions from `fixed:alerting.rules:reader` <br>`fixed:alerting.instances:reader`<br>`fixed:alerting.notifications:reader` | Read alert rules, alert instances, silences, contact points, and notification policies in Grafana and external providers. |
| Read via Provisioning API + Export Secrets: `fixed:alerting.provisioning.secrets:reader` | `alert.provisioning:read` and `alert.provisioning.secrets:read` | Read alert rules, alert instances, silences, contact points, and notification policies using the provisioning API and use export with decrypted secrets. |
| Access to alert rules provisioning API: `fixed:alerting.provisioning:writer` | `alert.provisioning:read` and `alert.provisioning:write` | Manage all alert rules, notification policies, contact points, and templates in the organization using the provisioning API. |
| Set provisioning status: `fixed:alerting.provisioning.status:writer` | `alert.provisioning.provenance:write` | Set provisioning rules for Alerting resources. Should be used together with other regular roles (Notifications Writer and/or Rules Writer.) |
| Set provisioning status: `fixed:alerting.provisioning.provenance:writer` | `alert.provisioning.provenance:write` | Set provisioning rules for Alerting resources. Should be used together with other regular roles (Notifications Writer and/or Rules Writer.) |
| Contact Point Reader: `fixed:alerting.receivers:reader` | `alert.notifications.receivers:read` for scope `receivers:*` | Read all contact points. |
| Contact Point Creator: `fixed:alerting.receivers:creator` | `alert.notifications.receivers:create` | Create a new contact point. The user is automatically granted full access to the created contact point. |
| Contact Point Writer: `fixed:alerting.receivers:writer` | `alert.notifications.receivers:read`, `alert.notifications.receivers:write`, `alert.notifications.receivers:delete` for scope `receivers:*` and <br> `alert.notifications.receivers:create` | Create a new contact point and manage all existing contact points. |
@@ -63,8 +63,8 @@ Details of the fixed roles and the access they provide for Grafana Alerting are
| Templates Writer: `fixed:alerting.templates:writer` | `alert.notifications.templates:read`, `alert.notifications.templates:write`, `alert.notifications.templates:delete`, `alert.notifications.templates.test:write` | Create new and manage existing notification templates. Test templates with custom payloads. |
| Time Intervals Reader: `fixed:alerting.time-intervals:reader` | `alert.notifications.time-intervals:read` | Read all time intervals. |
| Time Intervals Writer: `fixed:alerting.time-intervals:writer` | `alert.notifications.time-intervals:read`, `alert.notifications.time-intervals:write`, `alert.notifications.time-intervals:delete` | Create new and manage existing time intervals. |
| Notification Policies Reader: `fixed:alerting.routes:reader` | `alert.notifications.routes:read` | Read all time intervals. |
| Notification Policies Writer: `fixed:alerting.routes:writer` | `alert.notifications.routes:read` `alert.notifications.routes:write` | Create new and manage existing time intervals. |
| Notification Policies Reader: `fixed:alerting.routes:reader` | `alert.notifications.routes:read` | Read all notification policies. |
| Notification Policies Writer: `fixed:alerting.routes:writer` | `alert.notifications.routes:read`<br>`alert.notifications.routes:write` | Create new and manage existing notification policies. |
## Create custom roles


@@ -16,25 +16,27 @@ weight: 150
# Configure roles and permissions
This guide explains how to configure roles and permissions for Grafana Alerting for Grafana OSS users. You'll learn how to manage access using roles, folder permissions, and contact point permissions.
A user is any individual who can log in to Grafana. Each user is associated with a role that includes permissions. Permissions determine the tasks a user can perform in the system. For example, the Admin role includes permissions for an administrator to create and delete users.
For more information, refer to [Organization roles](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/administration/roles-and-permissions/#organization-roles).
## Manage access using roles
For Grafana OSS, there are three roles: Admin, Editor, and Viewer.
Grafana OSS has three roles: Admin, Editor, and Viewer.
Details of the roles and the access they provide for Grafana Alerting are below.
The following table describes the access each role provides for Grafana Alerting.
| Role | Access |
| ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Admin | Write access to alert rules, notification resources (notification API, contact points, templates, time intervals, notification policies, and silences), and provisioning. |
| Editor | Write access to alert rules, notification resources (notification API, contact points, templates, time intervals, notification policies, and silences), and provisioning. |
| Viewer | Read access to alert rules, notification resources (notification API, contact points, templates, time intervals, notification policies, and silences). |
| Role | Access |
| ------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Viewer | Read access to alert rules, notification resources (notification API, contact points, templates, time intervals, notification policies, and silences). |
| Editor | Write access to alert rules, notification resources (notification API, contact points, templates, time intervals, notification policies, and silences), and provisioning. |
| Admin  | Write access to alert rules, notification resources (notification API, contact points, templates, time intervals, notification policies, and silences), and provisioning, as well as the ability to assign roles. |
## Assign roles
To assign roles, admins need to complete the following steps.
To assign roles, an admin needs to complete the following steps.
1. Navigate to **Administration** > **Users and access** > **Users, Teams, or Service Accounts**.
1. Search for the user, team or service account you want to add a role for.
@@ -58,32 +60,30 @@ Refer to the following table for details on the additional access provided by fo
You can't use folders to customize access to notification resources.
{{< /admonition >}}
To manage folder permissions, complete the following steps.
To manage folder permissions, complete the following steps:
1. In the left-side menu, click **Dashboards**.
1. Hover your mouse cursor over a folder and click **Go to folder**.
1. Click **Manage permissions** from the Folder actions menu.
1. Update or add permissions as required.
## Manage access using contact point permissions
## Manage access to contact points
### Before you begin
Extend or limit the access provided by a role to contact points by assigning permissions to individual contact point.
Extend or limit the access provided by a role to contact points by assigning permissions to individual contact points.
This allows different users, teams, or service accounts to have customized access to read or modify specific contact points.
Refer to the following table for details on the additional access provided by contact point permissions.
| Folder permission | Additional Access |
| ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| View | View and export contact point as well as select it on the Alert rule edit page |
| Edit | Update or delete the contact point |
| Admin | Same additional access as Edit and manage permissions for the contact point. User should have additional permissions to read users and teams. |
| Contact point permission | Additional Access |
| ------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------- |
| View | View and export contact point as well as select it on the Alert rule edit page |
| Edit | Update or delete the contact point |
| Admin                    | Same additional access as Edit, plus the ability to manage permissions for the contact point. The user should also have permissions to read users and teams.  |
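The three permission levels can be summarized as nested sets of operations. This is a sketch with invented operation names, not a Grafana API:

```python
# Each level grants the operations of the level below it, per the table above.
CONTACT_POINT_OPS = {
    "View": {"view", "export", "select-in-alert-rule"},
}
CONTACT_POINT_OPS["Edit"] = CONTACT_POINT_OPS["View"] | {"update", "delete"}
CONTACT_POINT_OPS["Admin"] = CONTACT_POINT_OPS["Edit"] | {"manage-permissions"}

def contact_point_allowed(level: str, op: str) -> bool:
    return op in CONTACT_POINT_OPS.get(level, set())

print(contact_point_allowed("View", "update"))               # False
print(contact_point_allowed("Admin", "manage-permissions"))  # True
```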
### Steps
### Assign contact point permissions
To contact point permissions, complete the following steps.
To manage contact point permissions, complete the following steps:
1. In the left-side menu, click **Contact points**.
1. Hover your mouse cursor over a contact point and click **More**.


@@ -1776,6 +1776,13 @@ Specify the frequency of polling for Alertmanager configuration changes. The def
The interval string is a possibly signed sequence of decimal numbers, followed by a unit suffix (ms, s, m, h, d), for example, 30s or 1m.
#### `alertmanager_max_template_output_bytes`
Maximum size in bytes that the expanded result of any single template expression (for example, `{{ .CommonAnnotations.description }}` or `{{ .ExternalURL }}`) may reach during notification rendering.
The limit is checked after template execution for each templated field, but before the value is inserted into the final notification payload sent to the receiver.
If the limit is exceeded, the notification contains output truncated to the limit and a warning is logged.
The default value is 10,485,760 bytes (10 MiB).
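The truncate-after-render behavior can be sketched as follows; `render_with_limit` is a hypothetical helper, not Grafana code:

```python
def render_with_limit(rendered: str, max_bytes: int = 10_485_760):
    """Truncate an already-rendered template field to max_bytes,
    mirroring the behavior described above. Illustrative sketch."""
    data = rendered.encode("utf-8")
    if len(data) <= max_bytes:
        return rendered, False
    # Output is kept up to the limit; Grafana would also log a warning here.
    return data[:max_bytes].decode("utf-8", errors="ignore"), True

out, truncated = render_with_limit("x" * 20, max_bytes=10)
print(out, truncated)  # xxxxxxxxxx True
```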
#### `ha_redis_address`
Redis server address or addresses. It can be a single Redis address if using Redis standalone,


@@ -43,24 +43,36 @@ If the data source doesn't support loading the full range logs volume, the logs
The following sections provide detailed explanations on how to visualize and interact with individual logs in Explore.
### Logs navigation
### Infinite scroll
Logs navigation, located at the right side of the log lines, can be used to easily request additional logs by clicking **Older logs** at the bottom of the navigation. This is especially useful when you reach the line limit and you want to see more logs. Each request run from the navigation displays in the navigation as separate page. Every page shows `from` and `to` timestamps of the incoming log lines. You can see previous results by clicking on each page. Explore caches the last five requests run from the logs navigation so you're not re-running the same queries when clicking on the pages, saving time and resources.
<!-- vale Grafana.GoogleWill = NO -->
![Navigate logs in Explore](/static/img/docs/explore/navigate-logs-8-0.png)
When you reach the bottom of the list of logs, you will see the message `Scroll to load more`. If you continue scrolling and the displayed logs are within the selected time interval, Grafana will load more logs. When the sort order is "newest first" you receive older logs, and when the sort order is "oldest first" you get newer logs.
<!-- vale Grafana.GoogleWill = YES -->
### Visualization options
You have the option to customize the display of logs and choose which columns to show. Following is a list of available options.
| Option | Description |
| ------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Time** | Shows or hides the time column. This is the timestamp associated with the log line as reported from the data source. |
| **Unique labels** | Shows or hides the unique labels column that includes only non-common labels. All common labels are displayed above. |
| **Wrap lines** | Set this to `true` if you want the display to use line wrapping. If set to `false`, it will result in horizontal scrolling. |
| **Prettify JSON** | Set this to `true` to pretty print all JSON logs. This setting does not affect logs in any format other than JSON. |
| **Deduplication** | Log data can be very repetitive. Explore hides duplicate log lines using a few different deduplication algorithms. **Exact** matches are done on the whole line except for date fields. **Numbers** matches are done on the line after stripping out numbers such as durations, IP addresses, and so on. **Signature** is the most aggressive deduplication as it strips all letters and numbers and matches on the remaining whitespace and punctuation. |
| **Display results order** | You can change the order of received logs from the default descending order (newest first) to ascending order (oldest first). |
<!-- vale Grafana.Spelling = NO -->
| Option | Description |
| ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Expand / Collapse | Expand or collapse the controls toolbar. |
| Scroll to bottom | Jump to the bottom of the logs table. |
| Oldest Logs First / Newest logs first | Sort direction (ascending or descending). |
| Search logs / Close search | Click to open or close the client-side string search of the displayed log results. |
| Deduplication | **None** does not perform any deduplication, **Exact** matches are done on the whole line except for date fields. **Numbers** matches are done on the line after stripping out numbers such as durations, IP addresses, and so on. **Signature** is the most aggressive deduplication as it strips all letters and numbers and matches on the remaining whitespace and punctuation. |
| Filter levels | Filter the displayed logs by log level: All levels, Info, Debug, Warning, Error. |
| Set Timestamp format | Hide timestamps (disabled), Show milliseconds timestamps, Show nanoseconds timestamps. |
| Set line wrap | Disable line wrapping, Enable line wrapping, Enable line wrapping and prettify JSON. |
| Enable highlighting | Plain text, Highlight text. |
| Font size | Small font (default), Large font. |
| Unescaped newlines | Only displayed if the logs contain unescaped new lines. Click to unescape and display as new lines. |
| Download logs | Plain text (txt), JavaScript Object Notation (JSON), Comma-separated values (CSV) |
<!-- vale Grafana.Spelling = YES -->
### Download log lines
@@ -143,16 +155,31 @@ Click the **eye icon** to select a subset of fields to visualize in the logs lis
Each field has a **stats icon**, which displays ad-hoc statistics in relation to all displayed logs.
For data sources that support field types, such as Loki, fields are displayed grouped by their type (Indexed Labels, Parsed fields, and Structured Metadata) instead of in a single view containing all fields.
#### Links
Grafana provides data links or correlations, allowing you to convert any part of a log message into an internal or external link. These links enable you to navigate to related data or external resources, offering a seamless and convenient way to explore additional information.
{{< figure src="/static/img/docs/explore/data-link-9-4.png" max-width="800px" caption="Data link in Explore" >}}
#### Log details modes
There are two modes available to view log details:
- **Inline**: The default; displays log details below the log line.
- **Sidebar**: Displays log details in a sidebar view.
No matter which display mode is active, you can change it by clicking the mode control icon.
### Log context
Log context is a feature that displays additional lines of context surrounding a log entry that matches a specific search query. This helps in understanding the context of the log entry and is similar to the `-C` parameter in the `grep` command.
If you're using Loki for your logs, you can modify your log context queries with the Loki log context query editor at the top of the table. Activate this editor by clicking the menu for the log line and selecting **Show context**. Within the **Log Context** view, you can modify your search by removing one or more label filters from the log stream. If your original query used a parser, you can refine your search using extracted label filters.
Change the **Context time window** option to look for logs within a specific time interval around your log line.
Toggle **Wrap lines** if you encounter long lines of text that make it difficult to read and analyze the context around log entries. When enabled, Grafana automatically wraps long lines of text to fit within the visible width of the viewer, making the log entries easier to read and understand.
Click **Open in split view** to execute the context query for a log entry in a split screen in the Explore view. Clicking this button opens a new Explore pane with the context query displayed alongside the log entry, making it easier to analyze and understand the surrounding context.
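The `grep -C` analogy above can be tried in a terminal; a quick sketch with a made-up log file:

```shell
# Create a small, made-up log file, then print one line of context
# before and after each match, the same idea as the Log context view.
printf 'boot ok\ncache warm\nerror: timeout\nretry 1\nretry 2\n' > /tmp/demo.log
grep -C 1 'error' /tmp/demo.log
```

This prints the matching line plus its neighbors (`cache warm`, `error: timeout`, `retry 1`); `-C 3` would widen the window, just as the **Context time window** option does around a log line.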


@@ -31,7 +31,7 @@ refs:
_Logs_ are structured records of events or messages generated by a system or application&mdash;that is, a series of text records with status updates from your system or app. They generally include timestamps, messages, and context information like the severity of the logged event.
The logs visualization displays these records from data sources that support logs, such as Elastic, Influx, and Loki. The logs visualization has colored indicators of log status, as well as collapsible log events that help you analyze the information generated.
The logs visualization displays these records from data sources that support logs, such as Elastic, Influx, and Loki. The logs visualization shows, by default, the timestamp, a colored string representing the log status, the log line body, as well as collapsible log events that help you analyze the information generated.
{{< figure src="/media/docs/grafana/panels-visualizations/screenshot-logs-v12.3.png" max-width="750px" alt="Logs visualization" >}}
@@ -100,16 +100,16 @@ Use these settings to refine your visualization:
| Option | Description |
| --------------- | --------------- |
| Time | Show or hide the time column. This is the timestamp associated with the log line as reported from the data source. |
| Show timestamps | Show or hide the time column. This is the timestamp associated with the log line as reported from the data source. |
| Unique labels | Show or hide the unique labels column, which shows only non-common labels. |
| Common labels | Show or hide the common labels. |
| Wrap lines | Turn line wrapping on or off. |
| Enable logs highlighting | Experimental. Use a predefined coloring scheme to highlight relevant parts of the log lines. Subtle colors are added to the log lines to improve readability and help with identifying important information faster. |
| Prettify JSON | Toggle the switch on to pretty print all JSON logs. This setting does not affect logs in any format other than JSON. |
| Enable highlighting | Use a predefined syntax coloring grammar to highlight relevant parts of the log lines |
| Enable log details | Toggle the switch on to see an extendable area with log details including labels and detected fields. Each field or label has a stats icon to display ad-hoc statistics in relation to all displayed logs. The default setting is on. |
| Log details panel mode | Choose to display the log details in a sidebar panel or inline, below the log line. The default mode depends on viewport size: the default mode for smaller viewports is inline, while for larger ones, it's sidebar. You can also change mode dynamically in the panel by clicking the mode control. |
| Enable infinite scrolling | Request more results by scrolling to the bottom of the logs list. When you reach the bottom of the list of logs, if you continue scrolling and the displayed logs are within the selected time interval, you can request to load more logs. When the sort order is **Newest first**, you receive older logs, and when the sort order is **Oldest first** you get newer logs. |
| Show controls | Display controls to jump to the last or first log line, and filter by log level. |
| Font size | Select between the **Default** and **Small** font sizes.|
| Log Details panel mode | Choose to display the log details in a sidebar panel or inline, below the log line. |
| Enable infinite scrolling | Request more results by scrolling to the bottom of the logs list. |
| Show controls | Display controls to jump to the last or first log line, and filter by log level. |
| Font size | Select between the default font size and small font size. |
| Deduplication | Hide log messages that are duplicates of others shown, according to your selected criteria. Choose from: <ul><li>**Exact** - Ignoring ISO datetimes.</li><li>**Numerical** - Ignoring only those that differ by numbers such as IPs or latencies.</li><li>**Signatures** - Removing successive lines with identical punctuation and white space.</li></ul> |
| Order | Set whether to show results **Newest first** or **Oldest first**. |

go.mod

@@ -87,7 +87,7 @@ require (
github.com/googleapis/gax-go/v2 v2.15.0 // @grafana/grafana-backend-group
github.com/gorilla/mux v1.8.1 // @grafana/grafana-backend-group
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // @grafana/grafana-app-platform-squad
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba // @grafana/alerting-backend
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 // @grafana/alerting-backend
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f // @grafana/identity-access-team
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 // @grafana/identity-access-team
github.com/grafana/dataplane/examples v0.0.1 // @grafana/observability-metrics

go.sum

@@ -1613,8 +1613,8 @@ github.com/gorilla/sessions v1.2.1 h1:DHd3rPN5lE3Ts3D8rKkQ8x/0kqfeNmBAaiSi+o7Fsg
github.com/gorilla/sessions v1.2.1/go.mod h1:dk2InVEVJ0sfLlnXv9EAgkf6ecYs/i80K/zI+bUmuGM=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba h1:psKWNETD5nGxmFAlqnWsXoRyUwSa2GHNEMSEDKGKfQ4=
github.com/grafana/alerting v0.0.0-20251204145817-de8c2bbf9eba/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7 h1:ZzG/gCclEit9w0QUfQt9GURcOycAIGcsQAhY1u0AEX0=
github.com/grafana/alerting v0.0.0-20251212143239-491433b332b7/go.mod h1:l7v67cgP7x72ajB9UPZlumdrHqNztpKoqQ52cU8T3LU=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f h1:Cbm6OKkOcJ+7CSZsGsEJzktC/SIa5bxVeYKQLuYK86o=
github.com/grafana/authlib v0.0.0-20250930082137-a40e2c2b094f/go.mod h1:axY0cdOg3q0TZHwpHnIz5x16xZ8ZBxJHShsSHHXcHQg=
github.com/grafana/authlib/types v0.0.0-20251119142549-be091cf2f4d4 h1:Muoy+FMGrHj3GdFbvsMzUT7eusgii9PKf9L1ZaXDDbY=


@@ -67,6 +67,8 @@ module.exports = {
// near-membrane-dom won't work in a nodejs environment.
'@locker/near-membrane-dom': '<rootDir>/public/test/mocks/nearMembraneDom.ts',
'^@grafana/schema/dist/esm/(.*)$': '<rootDir>/packages/grafana-schema/src/$1',
'^@grafana/schema/dashboard/v0$': '<rootDir>/packages/grafana-schema/src/schema/dashboard/v0/index',
'^@grafana/schema/dashboard/v2beta1$': '<rootDir>/packages/grafana-schema/src/schema/dashboard/v2beta1/index',
// prevent systemjs amd extra from breaking tests.
'systemjs/dist/extras/amd': '<rootDir>/public/test/mocks/systemjsAMDExtra.ts',
'@bsull/augurs': '<rootDir>/public/test/mocks/augurs.ts',


@@ -1189,6 +1189,11 @@ export interface FeatureToggles {
*/
panelTimeSettings?: boolean;
/**
* Enables the raw DSL query editor in the Elasticsearch data source
* @default false
*/
elasticsearchRawDSLQuery?: boolean;
/**
* Enables app platform API for annotations
* @default false
*/


@@ -273,7 +273,7 @@ export interface DataSourceWithSupplementaryQueriesSupport<TQuery extends DataQu
/**
* Returns supplementary query types that data source supports.
*/
getSupportedSupplementaryQueryTypes(): SupplementaryQueryType[];
getSupportedSupplementaryQueryTypes(dsRequest?: DataQueryRequest<DataQuery>): SupplementaryQueryType[];
/**
* Returns a supplementary query to be used to fetch supplementary data based on the provided type and original query.
* If the provided query is not suitable for the provided supplementary query type, undefined should be returned.
@@ -283,7 +283,8 @@ export interface DataSourceWithSupplementaryQueriesSupport<TQuery extends DataQu
export const hasSupplementaryQuerySupport = <TQuery extends DataQuery>(
datasource: DataSourceApi | (DataSourceApi & DataSourceWithSupplementaryQueriesSupport<TQuery>),
type: SupplementaryQueryType
type: SupplementaryQueryType,
dsRequest?: DataQueryRequest<DataQuery>
): datasource is DataSourceApi & DataSourceWithSupplementaryQueriesSupport<TQuery> => {
if (!datasource) {
return false;
@@ -293,7 +294,7 @@ export const hasSupplementaryQuerySupport = <TQuery extends DataQuery>(
('getDataProvider' in datasource || 'getSupplementaryRequest' in datasource) &&
'getSupplementaryQuery' in datasource &&
'getSupportedSupplementaryQueryTypes' in datasource &&
datasource.getSupportedSupplementaryQueryTypes().includes(type)
datasource.getSupportedSupplementaryQueryTypes(dsRequest).includes(type)
);
};
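The guard above duck-types the datasource for the supplementary-query methods before delegating to `getSupportedSupplementaryQueryTypes`, now with the optional request passed through. A self-contained sketch of the same pattern, using local stand-in types rather than the real `@grafana/data` interfaces:

```typescript
// Stand-in types: the real SupplementaryQueryType and datasource interfaces
// live in @grafana/data; these are simplified for the sketch.
type SupplementaryQueryType = 'LogsVolume' | 'LogsSample';

interface SupplementarySupport {
  getSupportedSupplementaryQueryTypes(request?: unknown): SupplementaryQueryType[];
  getSupplementaryQuery(type: SupplementaryQueryType, query: unknown): unknown;
}

// Duck-typed guard mirroring hasSupplementaryQuerySupport: verify the methods
// exist, then ask the datasource whether it supports the requested type,
// optionally passing the current request for context.
function supportsSupplementary(
  ds: object,
  type: SupplementaryQueryType,
  request?: unknown
): ds is SupplementarySupport {
  return (
    'getSupplementaryQuery' in ds &&
    'getSupportedSupplementaryQueryTypes' in ds &&
    (ds as SupplementarySupport).getSupportedSupplementaryQueryTypes(request).includes(type)
  );
}

// A toy datasource that only supports logs-volume queries.
const toyDatasource = {
  getSupportedSupplementaryQueryTypes: (): SupplementaryQueryType[] => ['LogsVolume'],
  getSupplementaryQuery: (_type: SupplementaryQueryType, query: unknown) => query,
};

console.log(supportsSupplementary(toyDatasource, 'LogsVolume')); // true
console.log(supportsSupplementary(toyDatasource, 'LogsSample')); // false
```

Inside the `true` branch, TypeScript narrows `toyDatasource` to `SupplementarySupport`, which is what lets the real call site invoke `getSupplementaryQuery` without casts.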


@@ -48,4 +48,44 @@ export default [
},
treeshake: false,
},
// Build sub-path exports for dashboard v0
{
input: {
'schema/dashboard/v0': fileURLToPath(new URL('src/schema/dashboard/v0/index.ts', import.meta.url)),
},
plugins: [noderesolve, esbuild],
output: [
{
format: 'esm',
dir: path.dirname(pkg.publishConfig.module),
entryFileNames: '[name].mjs',
},
{
format: 'cjs',
dir: path.dirname(pkg.publishConfig.main),
entryFileNames: '[name].cjs',
},
],
treeshake: false,
},
// Build sub-path exports for dashboard v2beta1
{
input: {
'schema/dashboard/v2beta1': fileURLToPath(new URL('src/schema/dashboard/v2beta1/index.ts', import.meta.url)),
},
plugins: [noderesolve, esbuild],
output: [
{
format: 'esm',
dir: path.dirname(pkg.publishConfig.module),
entryFileNames: '[name].mjs',
},
{
format: 'cjs',
dir: path.dirname(pkg.publishConfig.main),
entryFileNames: '[name].cjs',
},
],
treeshake: false,
},
];
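These two build targets emit `schema/dashboard/v0.{mjs,cjs}` and `schema/dashboard/v2beta1.{mjs,cjs}` next to the main bundles; consumers can only resolve the sub-paths if the published `package.json` carries matching entries in its `exports` map, which the commits say `prepare-npm-package.js` adds. A hypothetical sketch of what that map could look like, with output paths assumed rather than copied from the actual script:

```json
{
  "exports": {
    "./dashboard/v0": {
      "import": "./dist/esm/schema/dashboard/v0.mjs",
      "require": "./dist/cjs/schema/dashboard/v0.cjs"
    },
    "./dashboard/v2beta1": {
      "import": "./dist/esm/schema/dashboard/v2beta1.mjs",
      "require": "./dist/cjs/schema/dashboard/v2beta1.cjs"
    }
  }
}
```

The `typesVersions` field mentioned in the commit message would additionally map the sub-paths to their `.d.ts` files for tooling that still uses `moduleResolution: node`.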


@@ -387,6 +387,10 @@ export interface ElasticsearchDataQuery extends common.DataQuery {
* List of bucket aggregations
*/
bucketAggs?: Array<BucketAggregation>;
/**
* Editor type
*/
editorType?: string;
/**
* List of metric aggregations
*/
@@ -395,6 +399,10 @@ export interface ElasticsearchDataQuery extends common.DataQuery {
* Lucene query
*/
query?: string;
/**
* Raw DSL query
*/
rawDSLQuery?: string;
/**
* Name of time field
*/


@@ -0,0 +1,3 @@
// Re-export raw dashboard types for v0 (legacy) dashboard schema
// This allows imports like: import { AnnotationPanelFilter, DashboardLink } from '@grafana/schema/dashboard/v0'
export * from '../../../raw/dashboard/x/dashboard_types.gen';


@@ -0,0 +1,4 @@
// Re-export all types and values from types.spec.gen and types.status.gen for sub-path imports
// This allows imports like: import { Spec, Status } from '@grafana/schema/dashboard/v2beta1'
export * from './types.spec.gen';
export * from './types.status.gen';


@@ -1,6 +1,6 @@
import { Chance } from 'chance';
import { DashboardsTreeItem, DashboardViewItem, UIDashboardViewItem } from '../types/browse-dashboards';
import { DashboardsTreeItem, DashboardViewItem, ManagerKind, UIDashboardViewItem } from '../types/browse-dashboards';
function wellFormedEmptyFolder(
seed = 1,
@@ -64,13 +64,14 @@ function wellFormedFolder(
}
export function treeViewersCanEdit() {
const [, { folderA, folderC }] = wellFormedTree();
const [, { folderA, folderC, folderD }] = wellFormedTree();
return [
[folderA, folderC],
[folderA, folderC, folderD],
{
folderA,
folderC,
folderD,
},
] as const;
}
@@ -90,6 +91,8 @@ export function wellFormedTree() {
const folderB = wellFormedFolder(seed++);
const folderB_empty = wellFormedEmptyFolder(seed++);
const folderC = wellFormedFolder(seed++);
// folderD is marked as managed by repo (git-synced) for testing disabled folder behavior
const folderD = wellFormedFolder(seed++, {}, { managedBy: ManagerKind.Repo });
const dashbdD = wellFormedDashboard(seed++);
const dashbdE = wellFormedDashboard(seed++);
@@ -107,6 +110,7 @@ export function wellFormedTree() {
folderB,
folderB_empty,
folderC,
folderD,
dashbdD,
dashbdE,
],
@@ -123,6 +127,7 @@ export function wellFormedTree() {
folderB,
folderB_empty,
folderC,
folderD,
dashbdD,
dashbdE,
},


@@ -4,6 +4,7 @@ import { HttpResponse, http } from 'msw';
import { treeViewersCanEdit, wellFormedTree } from '../../../fixtures/folders';
const [mockTree, { folderB }] = wellFormedTree();
// folderD is included in mockTree and will be returned by the handlers with managedBy: 'repo'
const [mockTreeThatViewersCanEdit] = treeViewersCanEdit();
const collator = new Intl.Collator();
@@ -48,6 +49,7 @@ const listFoldersHandler = () =>
id: random.integer({ min: 1, max: 1000 }),
uid: folder.item.uid,
title: folder.item.kind === 'folder' ? folder.item.title : "invalid - this shouldn't happen",
...('managedBy' in folder.item && folder.item.managedBy ? { managedBy: folder.item.managedBy } : {}),
};
})
.sort((a, b) => collator.compare(a.title, b.title)) // API always sorts by title
@@ -76,6 +78,7 @@ const getFolderHandler = () =>
uid: folder?.item.uid,
...additionalProperties,
...(accessControlQueryParam ? { accessControl: mockAccessControl } : {}),
...('managedBy' in folder.item && folder.item.managedBy ? { managedBy: folder.item.managedBy } : {}),
});
});


@@ -5,6 +5,7 @@ import { wellFormedTree } from '../../../../fixtures/folders';
import { getErrorResponse } from '../../../helpers';
const [mockTree, { folderB }] = wellFormedTree();
// folderD is included in mockTree and will be returned by the handlers with managedBy: 'repo'
const baseResponse = {
kind: 'Folder',
@@ -24,7 +25,7 @@ const folderToAppPlatform = (folder: (typeof mockTree)[number]['item'], id?: num
// TODO: Generalise annotations in fixture data
'grafana.app/createdBy': 'user:1',
'grafana.app/updatedBy': 'user:2',
'grafana.app/managedBy': 'user',
'grafana.app/managedBy': 'managedBy' in folder ? folder.managedBy : 'user',
'grafana.app/updatedTimestamp': '2024-01-01T00:00:00Z',
'grafana.app/folder': folder.kind === 'folder' ? folder.parentUID : undefined,
},


@@ -3,7 +3,7 @@
// @grafana/schema?
// New package @grafana/core? @grafana/types?
enum ManagerKind {
export enum ManagerKind {
Repo = 'repo',
Terraform = 'terraform',
Kubectl = 'kubectl',


@@ -112,17 +112,15 @@ func TestGetHomeDashboard(t *testing.T) {
}
func newTestLive(t *testing.T) *live.GrafanaLive {
features := featuremgmt.WithFeatures()
cfg := setting.NewCfg()
cfg.AppURL = "http://localhost:3000/"
gLive, err := live.ProvideService(nil, cfg,
gLive, err := live.ProvideService(cfg,
routing.NewRouteRegister(),
nil, nil, nil, nil,
nil,
&usagestats.UsageStatsMock{T: t},
features, acimpl.ProvideAccessControl(features),
&dashboards.FakeDashboardService{},
nil, nil)
featuremgmt.WithFeatures(),
&dashboards.FakeDashboardService{}, nil)
require.NoError(t, err)
return gLive
}


@@ -294,6 +294,7 @@ func (hs *HTTPServer) SearchOrgUsersWithPaging(c *contextmodel.ReqContext) respo
}
func (hs *HTTPServer) searchOrgUsersHelper(c *contextmodel.ReqContext, query *org.SearchOrgUsersQuery) (*org.SearchOrgUsersQueryResult, error) {
query.ExcludeHiddenUsers = true
result, err := hs.orgService.SearchOrgUsers(c.Req.Context(), query)
if err != nil {
return nil, err
@@ -303,9 +304,6 @@ func (hs *HTTPServer) searchOrgUsersHelper(c *contextmodel.ReqContext, query *or
userIDs := map[string]bool{}
authLabelsUserIDs := make([]int64, 0, len(result.OrgUsers))
for _, user := range result.OrgUsers {
if dtos.IsHiddenUser(user.Login, c.SignedInUser, hs.Cfg) {
continue
}
user.AvatarURL = dtos.GetGravatarUrl(hs.Cfg, user.Email)
userIDs[fmt.Sprint(user.UserID)] = true


@@ -171,11 +171,16 @@ func TestIntegrationOrgUsersAPIEndpoint_userLoggedIn(t *testing.T) {
orgService.ExpectedSearchOrgUsersResult = &org.SearchOrgUsersQueryResult{
OrgUsers: []*org.OrgUserDTO{
{Login: testUserLogin, Email: "testUser@grafana.com"},
{Login: "user1", Email: "user1@grafana.com"},
{Login: "user2", Email: "user2@grafana.com"},
},
}
orgService.SearchOrgUsersFn = func(ctx context.Context, query *org.SearchOrgUsersQuery) (*org.SearchOrgUsersQueryResult, error) {
require.True(t, query.ExcludeHiddenUsers)
return orgService.ExpectedSearchOrgUsersResult, nil
}
defer func() { orgService.SearchOrgUsersFn = nil }()
sc.handlerFunc = hs.GetOrgUsersForCurrentOrg
sc.fakeReqWithParams("GET", sc.url, map[string]string{}).exec()
@@ -191,6 +196,18 @@ func TestIntegrationOrgUsersAPIEndpoint_userLoggedIn(t *testing.T) {
loggedInUserScenarioWithRole(t, "When calling GET as an admin on", "GET", "api/org/users/lookup",
"api/org/users/lookup", org.RoleAdmin, func(sc *scenarioContext) {
orgService.ExpectedSearchOrgUsersResult = &org.SearchOrgUsersQueryResult{
OrgUsers: []*org.OrgUserDTO{
{Login: testUserLogin, Email: "testUser@grafana.com"},
{Login: "user2", Email: "user2@grafana.com"},
},
}
orgService.SearchOrgUsersFn = func(ctx context.Context, query *org.SearchOrgUsersQuery) (*org.SearchOrgUsersQueryResult, error) {
require.True(t, query.ExcludeHiddenUsers)
return orgService.ExpectedSearchOrgUsersResult, nil
}
defer func() { orgService.SearchOrgUsersFn = nil }()
sc.handlerFunc = hs.GetOrgUsersForCurrentOrgLookup
sc.fakeReqWithParams("GET", sc.url, map[string]string{}).exec()


@@ -222,7 +222,7 @@ func RegisterAPIService(
return builder
}
func NewAPIService(ac authlib.AccessClient, features featuremgmt.FeatureToggles, folderClientProvider client.K8sHandlerProvider, datasourceProvider schemaversion.DataSourceIndexProvider, libraryElementProvider schemaversion.LibraryElementIndexProvider, resourcePermissionsSvc *dynamic.NamespaceableResourceInterface) *DashboardsAPIBuilder {
func NewAPIService(ac authlib.AccessClient, features featuremgmt.FeatureToggles, folderClientProvider client.K8sHandlerProvider, datasourceProvider schemaversion.DataSourceIndexProvider, libraryElementProvider schemaversion.LibraryElementIndexProvider, resourcePermissionsSvc *dynamic.NamespaceableResourceInterface, search *SearchHandler) *DashboardsAPIBuilder {
migration.Initialize(datasourceProvider, libraryElementProvider, migration.DefaultCacheTTL)
return &DashboardsAPIBuilder{
minRefreshInterval: "10s",
@@ -231,6 +231,7 @@ func NewAPIService(ac authlib.AccessClient, features featuremgmt.FeatureToggles,
dashboardService: &dashsvc.DashboardServiceImpl{}, // for validation helpers only
folderClientProvider: folderClientProvider,
resourcePermissionsSvc: resourcePermissionsSvc,
search: search,
isStandalone: true,
}
}


@@ -105,8 +105,7 @@ func (c *filesConnector) Connect(ctx context.Context, name string, opts runtime.
return
}
folders := resources.NewFolderManager(readWriter, folderClient, resources.NewEmptyFolderTree())
authorizer := resources.NewRepositoryAuthorizer(repo.Config(), c.access)
dualReadWriter := resources.NewDualReadWriter(readWriter, parser, folders, authorizer)
dualReadWriter := resources.NewDualReadWriter(readWriter, parser, folders, c.access)
query := r.URL.Query()
opts := resources.DualWriteOptions{
Ref: query.Get("ref"),


@@ -328,91 +328,124 @@ func (b *APIBuilder) GetAuthorizer() authorizer.Authorizer {
return authorizer.DecisionDeny, "failed to find requester", err
}
// Different routes may need different permissions.
// * Reading and modifying a repository's configuration requires administrator privileges.
// * Reading a repository's limited configuration (/stats & /settings) requires viewer privileges.
// * Reading a repository's files requires viewer privileges.
// * Reading a repository's refs requires viewer privileges.
// * Editing a repository's files requires editor privileges.
// * Syncing a repository requires editor privileges.
// * Exporting a repository requires administrator privileges.
// * Migrating a repository requires administrator privileges.
// * Testing a repository configuration requires administrator privileges.
// * Viewing a repository's history requires editor privileges.
switch a.GetResource() {
case provisioning.RepositoryResourceInfo.GetName():
// TODO: Support more fine-grained permissions than the basic roles. Especially on Enterprise.
switch a.GetSubresource() {
case "", "test", "jobs":
// Doing something with the repository itself.
if id.GetOrgRole().Includes(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "admin role is required", nil
case "refs":
// This is strictly a read operation. It is handy on the frontend for viewers.
if id.GetOrgRole().Includes(identity.RoleViewer) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "viewer role is required", nil
case "files":
// Access to files is controlled by the AccessClient
return authorizer.DecisionAllow, "", nil
case "resources", "sync", "history":
// These are strictly read operations.
// Sync can also be somewhat destructive, but it's expected to be fine to import changes.
if id.GetOrgRole().Includes(identity.RoleEditor) {
return authorizer.DecisionAllow, "", nil
} else {
return authorizer.DecisionDeny, "editor role is required", nil
}
case "status":
if id.GetOrgRole().Includes(identity.RoleViewer) && a.GetVerb() == apiutils.VerbGet {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "users cannot update the status of a repository", nil
default:
if id.GetIsGrafanaAdmin() {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "unmapped subresource defaults to no access", nil
}
case "stats":
// This can leak information one shouldn't necessarily have access to.
if id.GetOrgRole().Includes(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "admin role is required", nil
case "settings":
// This is strictly a read operation. It is handy on the frontend for viewers.
if id.GetOrgRole().Includes(identity.RoleViewer) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "viewer role is required", nil
case provisioning.JobResourceInfo.GetName(),
provisioning.HistoricJobResourceInfo.GetName():
// Jobs are shown on the configuration page.
if id.GetOrgRole().Includes(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "admin role is required", nil
default:
// We haven't bothered with this kind yet.
if id.GetIsGrafanaAdmin() {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "unmapped kind defaults to no access", nil
}
return b.authorizeResource(ctx, a, id)
})
}
// authorizeResource handles authorization for different resources.
// Different routes may need different permissions.
// * Reading and modifying a repository's configuration requires administrator privileges.
// * Reading a repository's limited configuration (/stats & /settings) requires viewer privileges.
// * Reading a repository's files requires viewer privileges.
// * Reading a repository's refs requires viewer privileges.
// * Editing a repository's files requires editor privileges.
// * Syncing a repository requires editor privileges.
// * Exporting a repository requires administrator privileges.
// * Migrating a repository requires administrator privileges.
// * Testing a repository configuration requires administrator privileges.
// * Viewing a repository's history requires editor privileges.
func (b *APIBuilder) authorizeResource(ctx context.Context, a authorizer.Attributes, id identity.Requester) (authorizer.Decision, string, error) {
switch a.GetResource() {
case provisioning.RepositoryResourceInfo.GetName():
return b.authorizeRepositorySubresource(a, id)
case "stats":
return b.authorizeStats(id)
case "settings":
return b.authorizeSettings(id)
case provisioning.JobResourceInfo.GetName(), provisioning.HistoricJobResourceInfo.GetName():
return b.authorizeJobs(id)
default:
return b.authorizeDefault(id)
}
}
// authorizeRepositorySubresource handles authorization for repository subresources.
func (b *APIBuilder) authorizeRepositorySubresource(a authorizer.Attributes, id identity.Requester) (authorizer.Decision, string, error) {
// TODO: Support more fine-grained permissions than the basic roles. Especially on Enterprise.
switch a.GetSubresource() {
case "", "test":
// Doing something with the repository itself.
if id.GetOrgRole().Includes(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "admin role is required", nil
case "jobs":
// Posting jobs requires editor privileges (for syncing).
if id.GetOrgRole().Includes(identity.RoleAdmin) || id.GetOrgRole().Includes(identity.RoleEditor) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "editor role is required", nil
case "refs":
// This is strictly a read operation. It is handy on the frontend for viewers.
if id.GetOrgRole().Includes(identity.RoleViewer) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "viewer role is required", nil
case "files":
// Access to files is controlled by the AccessClient
return authorizer.DecisionAllow, "", nil
case "resources", "sync", "history":
// These are strictly read operations.
// Sync can also be somewhat destructive, but it's expected to be fine to import changes.
if id.GetOrgRole().Includes(identity.RoleEditor) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "editor role is required", nil
case "status":
if id.GetOrgRole().Includes(identity.RoleViewer) && a.GetVerb() == apiutils.VerbGet {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "users cannot update the status of a repository", nil
default:
if id.GetIsGrafanaAdmin() {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "unmapped subresource defaults to no access", nil
}
}
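The subresource switch above reduces to a minimum-role table per subresource. A standalone sketch of that mapping, with a simplified role hierarchy standing in for the real `identity` package ("files" is delegated to the AccessClient and "status" additionally restricts the verb, so both are only approximated here):

```go
package main

import "fmt"

// Role models the org-role hierarchy: Admin > Editor > Viewer.
type Role int

const (
	Viewer Role = iota + 1
	Editor
	Admin
)

// Includes reports whether r grants at least the privileges of other,
// mirroring how the basic-role hierarchy behaves.
func (r Role) Includes(other Role) bool { return r >= other }

// minRole is the minimum org role each repository subresource requires,
// per the switch in authorizeRepositorySubresource.
var minRole = map[string]Role{
	"":          Admin,
	"test":      Admin,
	"jobs":      Editor,
	"refs":      Viewer,
	"resources": Editor,
	"sync":      Editor,
	"history":   Editor,
	"status":    Viewer,
}

func allowed(role Role, subresource string) bool {
	need, ok := minRole[subresource]
	if !ok {
		return false // unmapped subresource defaults to no access
	}
	return role.Includes(need)
}

func main() {
	fmt.Println(allowed(Editor, "jobs")) // editors may post jobs
	fmt.Println(allowed(Viewer, "sync")) // viewers may not trigger sync
}
```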
// authorizeStats handles authorization for stats resource.
func (b *APIBuilder) authorizeStats(id identity.Requester) (authorizer.Decision, string, error) {
// This can leak information one shouldn't necessarily have access to.
if id.GetOrgRole().Includes(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "admin role is required", nil
}
// authorizeSettings handles authorization for settings resource.
func (b *APIBuilder) authorizeSettings(id identity.Requester) (authorizer.Decision, string, error) {
// This is strictly a read operation. It is handy on the frontend for viewers.
if id.GetOrgRole().Includes(identity.RoleViewer) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "viewer role is required", nil
}
// authorizeJobs handles authorization for job resources.
func (b *APIBuilder) authorizeJobs(id identity.Requester) (authorizer.Decision, string, error) {
// Jobs are shown on the configuration page.
if id.GetOrgRole().Includes(identity.RoleAdmin) {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "admin role is required", nil
}
// authorizeDefault handles authorization for unmapped resources.
func (b *APIBuilder) authorizeDefault(id identity.Requester) (authorizer.Decision, string, error) {
// We haven't bothered with this kind yet.
if id.GetIsGrafanaAdmin() {
return authorizer.DecisionAllow, "", nil
}
return authorizer.DecisionDeny, "unmapped kind defaults to no access", nil
}
func (b *APIBuilder) GetGroupVersion() schema.GroupVersion {
return provisioning.SchemeGroupVersion
}


@@ -7,9 +7,9 @@ import (
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
authlib "github.com/grafana/authlib/types"
"github.com/grafana/grafana-app-sdk/logging"
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/apps/provisioning/pkg/repository"
@@ -21,11 +21,18 @@ import (
// DualReadWriter is a wrapper around a repository that can read resources from, and write
// resources into, both the Git repository and Grafana. It is not a dual writer in the sense
// that unistore uses the term.
// The standard provisioning Authorizer has already run by the time DualReadWriter is invoked
// for incoming requests, whether from external or internal actors. However, because the files
// connector redirects requests here, external resources such as dashboards require the
// additional authorization checks that DualReadWriter performs.
// TODO: it does not support folders yet
type DualReadWriter struct {
repo repository.ReaderWriter
parser Parser
folders *FolderManager
authorizer Authorizer
repo repository.ReaderWriter
parser Parser
folders *FolderManager
access authlib.AccessChecker
}
type DualWriteOptions struct {
@@ -41,8 +48,8 @@ type DualWriteOptions struct {
Branch string // Configured default branch
}
func NewDualReadWriter(repo repository.ReaderWriter, parser Parser, folders *FolderManager, authorizer Authorizer) *DualReadWriter {
return &DualReadWriter{repo: repo, parser: parser, folders: folders, authorizer: authorizer}
func NewDualReadWriter(repo repository.ReaderWriter, parser Parser, folders *FolderManager, access authlib.AccessChecker) *DualReadWriter {
return &DualReadWriter{repo: repo, parser: parser, folders: folders, access: access}
}
func (r *DualReadWriter) Read(ctx context.Context, path string, ref string) (*ParsedResource, error) {
@@ -70,7 +77,8 @@ func (r *DualReadWriter) Read(ctx context.Context, path string, ref string) (*Pa
return nil, fmt.Errorf("error running dryRun: %w", err)
}
if err = r.authorizer.AuthorizeResource(ctx, parsed, utils.VerbGet); err != nil {
// Authorize based on the existing resource
if err = r.authorize(ctx, parsed, utils.VerbGet); err != nil {
return nil, err
}
@@ -78,7 +86,7 @@ func (r *DualReadWriter) Read(ctx context.Context, path string, ref string) (*Pa
}
func (r *DualReadWriter) Delete(ctx context.Context, opts DualWriteOptions) (*ParsedResource, error) {
if err := r.authorizer.AuthorizeWrite(ctx, opts.Ref); err != nil {
if err := repository.IsWriteAllowed(r.repo.Config(), opts.Ref); err != nil {
return nil, err
}
@@ -104,7 +112,7 @@ func (r *DualReadWriter) Delete(ctx context.Context, opts DualWriteOptions) (*Pa
return nil, fmt.Errorf("parse file: %w", err)
}
if err = r.authorizer.AuthorizeResource(ctx, parsed, utils.VerbDelete); err != nil {
if err = r.authorize(ctx, parsed, utils.VerbDelete); err != nil {
return nil, err
}
@@ -136,7 +144,7 @@ func (r *DualReadWriter) Delete(ctx context.Context, opts DualWriteOptions) (*Pa
// CreateFolder creates a new folder in the repository
// FIXME: fix signature to return ParsedResource
func (r *DualReadWriter) CreateFolder(ctx context.Context, opts DualWriteOptions) (*provisioning.ResourceWrapper, error) {
if err := r.authorizer.AuthorizeWrite(ctx, opts.Ref); err != nil {
if err := repository.IsWriteAllowed(r.repo.Config(), opts.Ref); err != nil {
return nil, err
}
@@ -144,12 +152,9 @@ func (r *DualReadWriter) CreateFolder(ctx context.Context, opts DualWriteOptions
return nil, fmt.Errorf("not a folder path")
}
// For create operations, use empty name to check parent folder permissions
folderParsed := folderParsedResource(opts.Path, opts.Ref, r.repo.Config(), "")
if err := r.authorizer.AuthorizeResource(ctx, folderParsed, utils.VerbCreate); err != nil {
if err := r.authorizeCreateFolder(ctx, opts.Path); err != nil {
return nil, err
}
// TODO: authorized to create folders under first existing ancestor folder
// Now actually create the folder
if err := r.repo.Create(ctx, opts.Path, opts.Ref, nil, opts.Message); err != nil {
@@ -197,7 +202,17 @@ func (r *DualReadWriter) CreateFolder(ctx context.Context, opts DualWriteOptions
// CreateResource creates a new resource in the repository
func (r *DualReadWriter) CreateResource(ctx context.Context, opts DualWriteOptions) (*ParsedResource, error) {
if err := r.authorizer.AuthorizeWrite(ctx, opts.Ref); err != nil {
return r.createOrUpdate(ctx, true, opts)
}
// UpdateResource updates a resource in the repository
func (r *DualReadWriter) UpdateResource(ctx context.Context, opts DualWriteOptions) (*ParsedResource, error) {
return r.createOrUpdate(ctx, false, opts)
}
// createOrUpdate creates or updates a resource in the repository
func (r *DualReadWriter) createOrUpdate(ctx context.Context, create bool, opts DualWriteOptions) (*ParsedResource, error) {
if err := repository.IsWriteAllowed(r.repo.Config(), opts.Ref); err != nil {
return nil, err
}
@@ -212,8 +227,6 @@ func (r *DualReadWriter) CreateResource(ctx context.Context, opts DualWriteOptio
return nil, err
}
// TODO: check if the resource does not exist in the database.
// Make sure the value is valid
if !opts.SkipDryRun {
if err := parsed.DryRun(ctx); err != nil {
@@ -229,96 +242,12 @@ func (r *DualReadWriter) CreateResource(ctx context.Context, opts DualWriteOptio
return nil, fmt.Errorf("errors while parsing file [%v]", parsed.Errors)
}
// TODO: is this the right way?
// Check if resource already exists - create should fail if it does
if err = r.ensureExisting(ctx, parsed); err != nil {
return nil, err
// Verify that we can create (or update) the referenced resource
verb := utils.VerbUpdate
if parsed.Action == provisioning.ResourceActionCreate {
verb = utils.VerbCreate
}
if parsed.Existing != nil {
return nil, apierrors.NewConflict(parsed.GVR.GroupResource(), parsed.Obj.GetName(),
fmt.Errorf("resource already exists"))
}
// Authorization check: Check if we can create the resource in the folder from the file
if err = r.authorizer.AuthorizeResource(ctx, parsed, utils.VerbCreate); err != nil {
return nil, err
}
// TODO: authorized to create folders under first existing ancestor folder
data, err := parsed.ToSaveBytes()
if err != nil {
return nil, err
}
// Always use the provisioning identity when writing
ctx, _, err = identity.WithProvisioningIdentity(ctx, parsed.Obj.GetNamespace())
if err != nil {
return nil, fmt.Errorf("unable to use provisioning identity %w", err)
}
// TODO: handle the error repository.ErrFileAlreadyExists
err = r.repo.Create(ctx, opts.Path, opts.Ref, data, opts.Message)
if err != nil {
return nil, err // raw error is useful
}
// Directly update the Grafana database.
// This behaves the same as running sync after writing.
// FIXME: to guarantee it behaves exactly as sync does, we should
// refactor the code so both paths share the same function.
if r.shouldUpdateGrafanaDB(opts, parsed) {
if _, err := r.folders.EnsureFolderPathExist(ctx, opts.Path); err != nil {
return nil, fmt.Errorf("ensure folder path exists: %w", err)
}
err = parsed.Run(ctx)
}
return parsed, err
}
// UpdateResource updates a resource in the repository
func (r *DualReadWriter) UpdateResource(ctx context.Context, opts DualWriteOptions) (*ParsedResource, error) {
if err := r.authorizer.AuthorizeWrite(ctx, opts.Ref); err != nil {
return nil, err
}
info := &repository.FileInfo{
Data: opts.Data,
Path: opts.Path,
Ref: opts.Ref,
}
parsed, err := r.parser.Parse(ctx, info)
if err != nil {
return nil, err
}
// Make sure the value is valid
if !opts.SkipDryRun {
if err := parsed.DryRun(ctx); err != nil {
logger := logging.FromContext(ctx).With("path", opts.Path, "name", parsed.Obj.GetName(), "ref", opts.Ref)
logger.Warn("failed to dry run resource on update", "error", err)
return nil, fmt.Errorf("error running dryRun: %w", err)
}
}
if len(parsed.Errors) > 0 {
// Now returns BadRequest (400) for validation errors
return nil, fmt.Errorf("errors while parsing file [%v]", parsed.Errors)
}
// Populate existing resource to check permissions in the correct folder
if err = r.ensureExisting(ctx, parsed); err != nil {
return nil, err
}
// TODO: what to do with a name or kind change?
// Authorization check: Check if we can update the existing resource in its current folder
if err = r.authorizer.AuthorizeResource(ctx, parsed, utils.VerbUpdate); err != nil {
if err = r.authorize(ctx, parsed, verb); err != nil {
return nil, err
}
@@ -333,7 +262,12 @@ func (r *DualReadWriter) UpdateResource(ctx context.Context, opts DualWriteOptio
return nil, fmt.Errorf("unable to use provisioning identity %w", err)
}
err = r.repo.Update(ctx, opts.Path, opts.Ref, data, opts.Message)
// Create or update
if create {
err = r.repo.Create(ctx, opts.Path, opts.Ref, data, opts.Message)
} else {
err = r.repo.Update(ctx, opts.Path, opts.Ref, data, opts.Message)
}
if err != nil {
return nil, err // raw error is useful
}
@@ -355,7 +289,7 @@ func (r *DualReadWriter) UpdateResource(ctx context.Context, opts DualWriteOptio
// MoveResource moves a resource from one path to another in the repository
func (r *DualReadWriter) MoveResource(ctx context.Context, opts DualWriteOptions) (*ParsedResource, error) {
if err := r.authorizer.AuthorizeWrite(ctx, opts.Ref); err != nil {
if err := repository.IsWriteAllowed(r.repo.Config(), opts.Ref); err != nil {
return nil, err
}
@@ -394,19 +328,6 @@ func (r *DualReadWriter) moveDirectory(ctx context.Context, opts DualWriteOption
}
}
// Check permissions to delete the original folder
originalFolderID := ParseFolder(opts.OriginalPath, r.repo.Config().Name).ID
originalFolderParsed := folderParsedResource(opts.OriginalPath, opts.Ref, r.repo.Config(), originalFolderID)
if err := r.authorizer.AuthorizeResource(ctx, originalFolderParsed, utils.VerbDelete); err != nil {
return nil, fmt.Errorf("not authorized to move from original folder: %w", err)
}
// Check permissions to create at the new folder location (empty name for create)
newFolderParsed := folderParsedResource(opts.Path, opts.Ref, r.repo.Config(), "")
if err := r.authorizer.AuthorizeResource(ctx, newFolderParsed, utils.VerbCreate); err != nil {
return nil, fmt.Errorf("not authorized to move to new folder: %w", err)
}
// For branch operations, we just perform the repository move without updating Grafana DB
// Always use the provisioning identity when writing
ctx, _, err := identity.WithProvisioningIdentity(ctx, r.repo.Config().Namespace)
@@ -457,13 +378,8 @@ func (r *DualReadWriter) moveFile(ctx context.Context, opts DualWriteOptions) (*
return nil, fmt.Errorf("parse original file: %w", err)
}
// Populate existing resource to check delete permission in the correct folder
if err = r.ensureExisting(ctx, parsed); err != nil {
return nil, err
}
// Authorize delete on the original path (checks existing resource's folder if it exists)
if err = r.authorizer.AuthorizeResource(ctx, parsed, utils.VerbDelete); err != nil {
// Authorize delete on the original path
if err = r.authorize(ctx, parsed, utils.VerbDelete); err != nil {
return nil, fmt.Errorf("not authorized to delete original file: %w", err)
}
@@ -501,20 +417,13 @@ func (r *DualReadWriter) moveFile(ctx context.Context, opts DualWriteOptions) (*
return nil, fmt.Errorf("errors while parsing moved file [%v]", newParsed.Errors)
}
// Populate existing resource at destination to check if we're overwriting something
if err = r.ensureExisting(ctx, newParsed); err != nil {
return nil, err
// Authorize create on the new path
verb := utils.VerbCreate
if newParsed.Action == provisioning.ResourceActionUpdate {
verb = utils.VerbUpdate
}
// Authorize for the target resource
// - If resource exists at destination: Check if we can update it in its folder
// - If no resource at destination: Check if we can create in the new folder
verb := utils.VerbUpdate
if newParsed.Existing == nil {
verb = utils.VerbCreate
}
if err = r.authorizer.AuthorizeResource(ctx, newParsed, verb); err != nil {
return nil, fmt.Errorf("not authorized for destination: %w", err)
if err = r.authorize(ctx, newParsed, verb); err != nil {
return nil, fmt.Errorf("not authorized to create new file: %w", err)
}
data, err := newParsed.ToSaveBytes()
@@ -572,25 +481,57 @@ func (r *DualReadWriter) moveFile(ctx context.Context, opts DualWriteOptions) (*
return newParsed, nil
}
// ensureExisting populates parsed.Existing if a resource with the given name exists in storage.
// Returns nil if no resource exists, if Client is nil, or if Existing is already populated.
// This is used before authorization checks to ensure we validate permissions against the actual
// existing resource's folder, not just the folder specified in the file.
func (r *DualReadWriter) ensureExisting(ctx context.Context, parsed *ParsedResource) error {
if parsed.Client == nil || parsed.Existing != nil {
return nil // Already populated or can't check
}
existing, err := parsed.Client.Get(ctx, parsed.Obj.GetName(), metav1.GetOptions{})
func (r *DualReadWriter) authorize(ctx context.Context, parsed *ParsedResource, verb string) error {
id, err := identity.GetRequester(ctx)
if err != nil {
if apierrors.IsNotFound(err) {
return nil // No existing resource
}
return fmt.Errorf("failed to check for existing resource: %w", err)
return apierrors.NewUnauthorized(err.Error())
}
parsed.Existing = existing
return nil
var name string
if parsed.Existing != nil {
name = parsed.Existing.GetName()
} else {
name = parsed.Obj.GetName()
}
rsp, err := r.access.Check(ctx, id, authlib.CheckRequest{
Group: parsed.GVR.Group,
Resource: parsed.GVR.Resource,
Namespace: id.GetNamespace(),
Name: name,
Verb: verb,
}, parsed.Meta.GetFolder())
if err != nil || !rsp.Allowed {
return apierrors.NewForbidden(parsed.GVR.GroupResource(), parsed.Obj.GetName(),
fmt.Errorf("no access to %s the embedded resource", verb))
}
idType, _, err := authlib.ParseTypeID(id.GetID())
if err != nil {
return apierrors.NewForbidden(parsed.GVR.GroupResource(), parsed.Obj.GetName(), fmt.Errorf("could not determine identity type to check access"))
}
// Only apply the role-based check when the identity is not an access policy.
if idType == authlib.TypeAccessPolicy || id.GetOrgRole().Includes(identity.RoleEditor) {
return nil
}
return apierrors.NewForbidden(parsed.GVR.GroupResource(), parsed.Obj.GetName(),
fmt.Errorf("must be admin or editor to access files from provisioning"))
}
func (r *DualReadWriter) authorizeCreateFolder(ctx context.Context, _ string) error {
id, err := identity.GetRequester(ctx)
if err != nil {
return apierrors.NewUnauthorized(err.Error())
}
// Simple role based access for now
if id.GetOrgRole().Includes(identity.RoleEditor) {
return nil
}
return apierrors.NewForbidden(FolderResource.GroupResource(), "",
fmt.Errorf("must be admin or editor to access folders with provisioning"))
}
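The new authorize path combines a fine-grained access check with a coarse role gate that access policies bypass. Schematically, as standalone booleans rather than the real authlib and identity types:

```go
package main

import "fmt"

// decide mirrors the shape of DualReadWriter.authorize: the AccessChecker
// verdict must hold, and unless the caller is an access policy (machine
// identity), it must additionally hold at least the editor org role.
func decide(accessAllowed, isAccessPolicy, hasEditorRole bool) bool {
	if !accessAllowed {
		return false // fine-grained check always wins
	}
	return isAccessPolicy || hasEditorRole
}

func main() {
	fmt.Println(decide(true, false, true))  // editor with access: allowed
	fmt.Println(decide(true, false, false)) // viewer with access: still denied
	fmt.Println(decide(true, true, false))  // access policy skips the role gate
}
```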
func (r *DualReadWriter) deleteFolder(ctx context.Context, opts DualWriteOptions) (*ParsedResource, error) {
@@ -606,13 +547,6 @@ func (r *DualReadWriter) deleteFolder(ctx context.Context, opts DualWriteOptions
}
}
// Check permissions to delete the folder
folderID := ParseFolder(opts.Path, r.repo.Config().Name).ID
folderParsed := folderParsedResource(opts.Path, opts.Ref, r.repo.Config(), folderID)
if err := r.authorizer.AuthorizeResource(ctx, folderParsed, utils.VerbDelete); err != nil {
return nil, err
}
// For branch operations, just delete from the repository without updating Grafana DB
err := r.repo.Delete(ctx, opts.Path, opts.Ref, opts.Message)
if err != nil {
@@ -641,54 +575,6 @@ func getPathType(isDir bool) string {
return "file (no trailing '/')"
}
// folderParsedResource creates a ParsedResource for a folder path.
// This is used for authorization checks on folder operations.
// For create operations, name should be empty string to check parent permissions.
// For other operations, name should be the folder ID derived from the path.
func folderParsedResource(path, ref string, repo *provisioning.Repository, name string) *ParsedResource {
folderObj := &unstructured.Unstructured{}
folderObj.SetName(name)
folderObj.SetNamespace(repo.Namespace)
// TODO: which parent? top existing ancestor.
meta, _ := utils.MetaAccessor(folderObj)
if meta != nil {
// Set parent folder for folder operations
parentFolder := ""
if path != "" {
parentPath := safepath.Dir(path)
if parentPath != "" {
parentFolder = ParseFolder(parentPath, repo.Name).ID
} else {
parentFolder = RootFolder(repo)
}
}
meta.SetFolder(parentFolder)
}
return &ParsedResource{
Info: &repository.FileInfo{
Path: path,
Ref: ref,
},
Obj: folderObj,
Meta: meta,
GVK: schema.GroupVersionKind{
Group: FolderResource.Group,
Version: FolderResource.Version,
Kind: "Folder",
},
GVR: FolderResource,
Repo: provisioning.ResourceRepositoryInfo{
Type: repo.Spec.Type,
Namespace: repo.Namespace,
Name: repo.Name,
Title: repo.Spec.Title,
},
}
}
func folderDeleteResponse(ctx context.Context, path, ref string, repo repository.Repository) (*ParsedResource, error) {
urls, err := getFolderURLs(ctx, path, ref, repo)
if err != nil {


@@ -349,6 +349,7 @@ var wireBasicSet = wire.NewSet(
dashboardservice.ProvideDashboardService,
dashboardservice.ProvideDashboardProvisioningService,
dashboardservice.ProvideDashboardPluginService,
dashboardservice.ProvideDashboardAccessService,
dashboardstore.ProvideDashboardStore,
folderimpl.ProvideService,
wire.Bind(new(folder.Service), new(*folderimpl.Service)),

File diff suppressed because one or more lines are too long


@@ -58,6 +58,13 @@ const (
RelationGetPermissions string = "get_permissions"
RelationSetPermissions string = "set_permissions"
RelationCanGet string = "can_get"
RelationCanCreate string = "can_create"
RelationCanUpdate string = "can_update"
RelationCanDelete string = "can_delete"
RelationCanGetPermissions string = "can_get_permissions"
RelationCanSetPermissions string = "can_set_permissions"
RelationSubresourceSetView string = "resource_" + RelationSetView
RelationSubresourceSetEdit string = "resource_" + RelationSetEdit
RelationSubresourceSetAdmin string = "resource_" + RelationSetAdmin
@@ -134,6 +141,26 @@ var RelationToVerbMapping = map[string]string{
RelationSetPermissions: utils.VerbSetPermissions,
}
// FolderPermissionRelation returns the optimized folder relation for permission management.
func FolderPermissionRelation(relation string) string {
switch relation {
case RelationGet:
return RelationCanGet
case RelationCreate:
return RelationCanCreate
case RelationUpdate:
return RelationCanUpdate
case RelationDelete:
return RelationCanDelete
case RelationGetPermissions:
return RelationCanGetPermissions
case RelationSetPermissions:
return RelationCanSetPermissions
default:
return relation
}
}
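As a self-contained illustration of the mapping (relation names inlined as string literals here rather than imported from the `common` package):

```go
package main

import "fmt"

// folderPermissionRelation mirrors common.FolderPermissionRelation: it
// rewrites a base relation to its precomputed "can_*" counterpart and
// passes every other relation through unchanged.
func folderPermissionRelation(relation string) string {
	switch relation {
	case "get", "create", "update", "delete", "get_permissions", "set_permissions":
		return "can_" + relation
	default:
		return relation
	}
}

func main() {
	fmt.Println(folderPermissionRelation("get"))    // can_get
	fmt.Println(folderPermissionRelation("member")) // passthrough, unchanged
}
```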
func IsGroupResourceRelation(relation string) bool {
return isValidRelation(relation, RelationsGroupResource)
}


@@ -4,15 +4,21 @@ type folder
relations
define parent: [folder]
# Action sets
define view: [user, service-account, team#member, role#assignee] or edit or view from parent
define edit: [user, service-account, team#member, role#assignee] or admin or edit from parent
# Permission levels
define admin: [user, service-account, team#member, role#assignee] or admin from parent
define edit: [user, service-account, team#member, role#assignee] or edit from parent
define view: [user, service-account, team#member, role#assignee] or view from parent
define get: [user, service-account, team#member, role#assignee] or get from parent
define create: [user, service-account, team#member, role#assignee] or create from parent
define update: [user, service-account, team#member, role#assignee] or update from parent
define delete: [user, service-account, team#member, role#assignee] or delete from parent
define get_permissions: [user, service-account, team#member, role#assignee] or get_permissions from parent
define set_permissions: [user, service-account, team#member, role#assignee] or set_permissions from parent
define get: [user, service-account, team#member, role#assignee] or view or get from parent
define create: [user, service-account, team#member, role#assignee] or edit or create from parent
define update: [user, service-account, team#member, role#assignee] or edit or update from parent
define delete: [user, service-account, team#member, role#assignee] or edit or delete from parent
define get_permissions: [user, service-account, team#member, role#assignee] or admin or get_permissions from parent
define set_permissions: [user, service-account, team#member, role#assignee] or admin or set_permissions from parent
# Computed actions
define can_get: admin or edit or view or get
define can_create: admin or edit or create
define can_update: admin or edit or update
define can_delete: admin or edit or delete
define can_get_permissions: admin or get_permissions
define can_set_permissions: admin or set_permissions
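The computed `can_*` relations are plain boolean unions over the action sets and fine-grained relations. A rough sketch of how a check resolves for a single folder, ignoring parent inheritance and userset types:

```go
package main

import "fmt"

// perms is the set of relations granted directly on one folder.
type perms map[string]bool

// canGet mirrors `define can_get: admin or edit or view or get`.
func canGet(p perms) bool {
	return p["admin"] || p["edit"] || p["view"] || p["get"]
}

// canSetPermissions mirrors `define can_set_permissions: admin or set_permissions`.
func canSetPermissions(p perms) bool {
	return p["admin"] || p["set_permissions"]
}

func main() {
	viewer := perms{"view": true}
	// An action-set grant like "view" satisfies reads but not permission management.
	fmt.Println(canGet(viewer), canSetPermissions(viewer))
}
```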


@@ -0,0 +1,947 @@
package server
import (
"context"
"fmt"
"math/rand"
"testing"
"time"
authzv1 "github.com/grafana/authlib/authz/proto/v1"
openfgav1 "github.com/openfga/api/proto/openfga/v1"
"github.com/prometheus/client_golang/prometheus"
"github.com/stretchr/testify/require"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/infra/tracing"
authzextv1 "github.com/grafana/grafana/pkg/services/authz/proto/v1"
"github.com/grafana/grafana/pkg/services/authz/zanzana/common"
"github.com/grafana/grafana/pkg/services/authz/zanzana/store"
"github.com/grafana/grafana/pkg/services/sqlstore"
"github.com/grafana/grafana/pkg/setting"
)
const (
benchNamespace = "default"
// Folder tree parameters
foldersPerLevel = 3
folderDepth = 7
// Other data generation parameters
numResources = 50000
numUsers = 1000
numTeams = 100
// Timeout for List operations
listTimeout = 30 * time.Second
// Resource type constants for benchmarks
benchDashboardGroup = "dashboard.grafana.app"
benchDashboardResource = "dashboards"
benchFolderGroup = "folder.grafana.app"
benchFolderResource = "folders"
// Number of items per batch in BatchCheck benchmark requests.
batchCheckSize = 50
)
// benchmarkData holds all the generated test data for benchmarks
type benchmarkData struct {
folders []string // folder UIDs
folderDepths map[string]int // folder UID -> depth level
folderParents map[string]string // folder UID -> parent UID
folderDescendants map[string]int // folder UID -> number of descendants (including self)
foldersByDepth [][]string // folders grouped by depth level
resources []string // resource names
resourceFolders map[string]string // resource name -> folder UID
users []string // user identifiers (e.g., "user:1")
teams []string // team identifiers (e.g., "team:1")
// Pre-computed test scenarios
deepestFolder string // folder at max depth for worst-case tests
midDepthFolder string // folder at depth/2
shallowFolder string // folder at depth 1
rootFolder string // root level folder (depth 0)
largestRootFolder string // root folder with most descendants
largestRootDescCount int // number of descendants in largestRootFolder
maxDepth int // maximum depth in the tree
}
// generateFolderHierarchy creates a balanced tree of folders.
// Each folder has `childrenPerFolder` children, up to `depth` levels deep.
func generateFolderHierarchy(childrenPerFolder, depth int) ([]*openfgav1.TupleKey, *benchmarkData) {
// Calculate total folders: childrenPerFolder + childrenPerFolder^2 + ... + childrenPerFolder^(depth+1)
totalFolders := 0
levelSize := childrenPerFolder
for d := 0; d <= depth; d++ {
totalFolders += levelSize
levelSize *= childrenPerFolder
}
data := &benchmarkData{
folders: make([]string, 0, totalFolders),
folderDepths: make(map[string]int),
folderParents: make(map[string]string),
folderDescendants: make(map[string]int),
}
tuples := make([]*openfgav1.TupleKey, 0, totalFolders)
folderIdx := 0
// Track folders at each level for parent assignment
levelFolders := make([][]string, depth+1)
for i := range levelFolders {
levelFolders[i] = make([]string, 0)
}
// Create root level folders (depth 0)
for i := 0; i < childrenPerFolder; i++ {
folderUID := fmt.Sprintf("folder-%d", folderIdx)
data.folders = append(data.folders, folderUID)
data.folderDepths[folderUID] = 0
levelFolders[0] = append(levelFolders[0], folderUID)
folderIdx++
}
// Create folders at each subsequent depth level
for d := 1; d <= depth; d++ {
parentFolders := levelFolders[d-1]
// Each parent gets exactly childrenPerFolder children
for _, parentUID := range parentFolders {
for j := 0; j < childrenPerFolder; j++ {
folderUID := fmt.Sprintf("folder-%d", folderIdx)
data.folders = append(data.folders, folderUID)
data.folderDepths[folderUID] = d
data.folderParents[folderUID] = parentUID
levelFolders[d] = append(levelFolders[d], folderUID)
// Create parent relationship tuple
tuples = append(tuples, common.NewFolderParentTuple(folderUID, parentUID))
folderIdx++
}
}
}
// Set reference folders for different depth scenarios
data.rootFolder = levelFolders[0][0]
data.shallowFolder = levelFolders[0][0]
if len(levelFolders[1]) > 0 {
data.shallowFolder = levelFolders[1][0]
}
midDepth := depth / 2
if len(levelFolders[midDepth]) > 0 {
data.midDepthFolder = levelFolders[midDepth][0]
}
// Deepest folder
if len(levelFolders[depth]) > 0 {
data.deepestFolder = levelFolders[depth][0]
}
// Calculate descendant counts for each folder (bottom-up)
// Initialize all folders with count of 1 (self)
for _, folder := range data.folders {
data.folderDescendants[folder] = 1
}
// Process folders from deepest to shallowest, accumulating descendant counts
for d := depth; d >= 0; d-- {
for _, folder := range levelFolders[d] {
if parent, hasParent := data.folderParents[folder]; hasParent {
data.folderDescendants[parent] += data.folderDescendants[folder]
}
}
}
// Find root folder with most descendants
for _, rootFolder := range levelFolders[0] {
count := data.folderDescendants[rootFolder]
if count > data.largestRootDescCount {
data.largestRootDescCount = count
data.largestRootFolder = rootFolder
}
}
// Store folders by depth for depth-based testing
data.foldersByDepth = levelFolders
data.maxDepth = depth
return tuples, data
}
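With the defaults above (foldersPerLevel = 3, folderDepth = 7), the balanced tree holds 3 + 3^2 + ... + 3^8 folders. A quick standalone sketch of the same sizing loop:

```go
package main

import "fmt"

// totalFolders reproduces the capacity calculation in generateFolderHierarchy:
// level d (for d in 0..depth) holds childrenPerFolder^(d+1) folders.
func totalFolders(childrenPerFolder, depth int) int {
	total := 0
	levelSize := childrenPerFolder
	for d := 0; d <= depth; d++ {
		total += levelSize
		levelSize *= childrenPerFolder
	}
	return total
}

func main() {
	fmt.Println(totalFolders(3, 7)) // 9840 folders in the benchmark tree
}
```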
// generateResources creates resources distributed across folders
func generateResources(data *benchmarkData, numResources int) []*openfgav1.TupleKey {
data.resources = make([]string, numResources)
data.resourceFolders = make(map[string]string, numResources)
// Distribute resources across folders
for i := 0; i < numResources; i++ {
resourceName := fmt.Sprintf("resource-%d", i)
folderIdx := i % len(data.folders)
folderUID := data.folders[folderIdx]
data.resources[i] = resourceName
data.resourceFolders[resourceName] = folderUID
}
// Note: We don't create tuples for resources themselves,
// permissions are assigned to users/teams on folders or directly on resources
return nil
}
// generateUsers creates user identifiers
func generateUsers(data *benchmarkData, numUsers int) {
data.users = make([]string, numUsers)
for i := 0; i < numUsers; i++ {
data.users[i] = fmt.Sprintf("user:%d", i)
}
}
// generateTeams creates team identifiers
func generateTeams(data *benchmarkData, numTeams int) {
data.teams = make([]string, numTeams)
for i := 0; i < numTeams; i++ {
data.teams[i] = fmt.Sprintf("team:%d", i)
}
}
// generatePermissionTuples creates various permission assignments for benchmarking.
// Users are distributed across 7 patterns: global, root folder, mid-depth folder,
// folder-scoped resource, direct resource, team-based, and no permissions.
const numPermissionPatterns = 7
func generatePermissionTuples(data *benchmarkData) []*openfgav1.TupleKey {
tuples := make([]*openfgav1.TupleKey, 0)
// Distribute users across different permission patterns
usersPerPattern := len(data.users) / numPermissionPatterns
// Pattern 1: Users with GroupResource permission (all access)
// Users 0 to usersPerPattern-1
for i := 0; i < usersPerPattern; i++ {
tuples = append(tuples, common.NewGroupResourceTuple(
data.users[i],
common.RelationGet,
benchDashboardGroup,
benchDashboardResource,
"",
))
}
// Pattern 2: Users with folder-level permission on root folders
// Users usersPerPattern to 2*usersPerPattern-1
for i := usersPerPattern; i < 2*usersPerPattern; i++ {
folderIdx := (i - usersPerPattern) % len(data.folders)
// Only assign to root-level folders for this pattern
for j := folderIdx; j < len(data.folders); j++ {
if data.folderDepths[data.folders[j]] == 0 {
tuples = append(tuples, common.NewFolderTuple(
data.users[i],
common.RelationSetView,
data.folders[j],
))
break
}
}
}
// Pattern 3: Users with folder-level permission on mid-depth folders
// Use relative depth range: 1/3 to 2/3 of max depth
// Use "view" relation which grants get through the optimized schema
minMidDepth := data.maxDepth / 3
maxMidDepth := 2 * data.maxDepth / 3
if maxMidDepth < minMidDepth {
maxMidDepth = minMidDepth
}
// Collect folders in the mid-depth range
var midDepthFolders []string
for d := minMidDepth; d <= maxMidDepth; d++ {
if d < len(data.foldersByDepth) {
midDepthFolders = append(midDepthFolders, data.foldersByDepth[d]...)
}
}
// Fall back to root folders if no mid-depth folders exist
if len(midDepthFolders) == 0 {
midDepthFolders = data.foldersByDepth[0]
}
for i := 2 * usersPerPattern; i < 3*usersPerPattern; i++ {
folderIdx := (i - 2*usersPerPattern) % len(midDepthFolders)
tuples = append(tuples, common.NewFolderTuple(
data.users[i],
common.RelationSetView,
midDepthFolders[folderIdx],
))
}
// Pattern 4: Users with folder-scoped resource permission
for i := 3 * usersPerPattern; i < 4*usersPerPattern; i++ {
folderIdx := (i - 3*usersPerPattern) % len(data.folders)
tuples = append(tuples, common.NewFolderResourceTuple(
data.users[i],
common.RelationGet,
benchDashboardGroup,
benchDashboardResource,
"",
data.folders[folderIdx],
))
}
// Pattern 5: Users with direct resource permission
for i := 4 * usersPerPattern; i < 5*usersPerPattern; i++ {
resourceIdx := (i - 4*usersPerPattern) % len(data.resources)
tuples = append(tuples, common.NewResourceTuple(
data.users[i],
common.RelationGet,
benchDashboardGroup,
benchDashboardResource,
"",
data.resources[resourceIdx],
))
}
// Pattern 6: Team memberships and team permissions
// First, add users to teams
for i := 5 * usersPerPattern; i < 6*usersPerPattern && i < len(data.users); i++ {
teamIdx := (i - 5*usersPerPattern) % len(data.teams)
tuples = append(tuples, common.NewTypedTuple(
common.TypeTeam,
data.users[i],
common.RelationTeamMember,
fmt.Sprintf("%d", teamIdx),
))
}
// Then, give teams folder permissions
// Use "view" relation which grants get through the optimized schema
for i := 0; i < len(data.teams); i++ {
folderIdx := i % len(data.folders)
teamMember := fmt.Sprintf("team:%d#member", i)
tuples = append(tuples, common.NewFolderTuple(
teamMember,
common.RelationSetView,
data.folders[folderIdx],
))
}
// Pattern 7: Users with no permissions (remaining users)
// These users don't get any tuples - they're for testing denial cases
return tuples
}
// setupBenchmarkServer creates a server with the benchmark data loaded
func setupBenchmarkServer(b *testing.B) (*Server, *benchmarkData) {
b.Helper()
if testing.Short() {
b.Skip("skipping benchmark in short mode")
}
cfg := setting.NewCfg()
testStore := sqlstore.NewTestStore(b, sqlstore.WithCfg(cfg))
openFGAStore, err := store.NewEmbeddedStore(cfg, testStore, log.NewNopLogger())
require.NoError(b, err)
openfga, err := NewOpenFGAServer(cfg.ZanzanaServer, openFGAStore)
require.NoError(b, err)
srv, err := NewServer(cfg.ZanzanaServer, openfga, log.NewNopLogger(), tracing.NewNoopTracerService(), prometheus.NewRegistry())
require.NoError(b, err)
// Generate test data
b.Log("Generating folder hierarchy...")
folderTuples, data := generateFolderHierarchy(foldersPerLevel, folderDepth)
b.Log("Generating resources...")
generateResources(data, numResources)
b.Log("Generating users...")
generateUsers(data, numUsers)
b.Log("Generating teams...")
generateTeams(data, numTeams)
b.Log("Generating permission tuples...")
permTuples := generatePermissionTuples(data)
// Add special user with permission on largest root folder (for >1000 folder test)
// Use "view" relation which grants get through the optimized schema
largeRootUserTuple := common.NewFolderTuple(
"user:large-root-access",
common.RelationSetView,
data.largestRootFolder,
)
permTuples = append(permTuples, largeRootUserTuple)
// Add users with permissions at each depth level for depth-based testing
// Use "view" relation which grants get through the optimized schema
for depth := 0; depth <= data.maxDepth; depth++ {
if len(data.foldersByDepth[depth]) == 0 {
continue
}
folder := data.foldersByDepth[depth][0]
user := fmt.Sprintf("user:depth-%d-access", depth)
permTuples = append(permTuples, common.NewFolderTuple(user, common.RelationSetView, folder))
}
// Combine all tuples
allTuples := append(folderTuples, permTuples...)
b.Logf("Total tuples to write: %d", len(allTuples))
// Get store info
ctx := newContextWithNamespace()
storeInf, err := srv.getStoreInfo(ctx, benchNamespace)
require.NoError(b, err)
// Write tuples in batches (OpenFGA limits to 100 per write)
batchSize := 100
for i := 0; i < len(allTuples); i += batchSize {
end := i + batchSize
if end > len(allTuples) {
end = len(allTuples)
}
batch := allTuples[i:end]
_, err = srv.openfga.Write(ctx, &openfgav1.WriteRequest{
StoreId: storeInf.ID,
AuthorizationModelId: storeInf.ModelID,
Writes: &openfgav1.WriteRequestWrites{
TupleKeys: batch,
OnDuplicate: "ignore",
},
})
require.NoError(b, err)
if (i/batchSize)%100 == 0 {
b.Logf("Written %d/%d tuples", end, len(allTuples))
}
}
b.Logf("Benchmark data setup complete: %d folders, %d resources, %d users, %d teams",
len(data.folders), len(data.resources), len(data.users), len(data.teams))
b.Logf("Largest root folder: %s with %d descendants", data.largestRootFolder, data.largestRootDescCount)
return srv, data
}
// BenchmarkCheck measures the performance of Check requests
func BenchmarkCheck(b *testing.B) {
srv, data := setupBenchmarkServer(b)
ctx := newContextWithNamespace()
// Helper to create check requests
newCheckReq := func(subject, verb, group, resource, folder, name string) *authzv1.CheckRequest {
return &authzv1.CheckRequest{
Namespace: benchNamespace,
Subject: subject,
Verb: verb,
Group: group,
Resource: resource,
Folder: folder,
Name: name,
}
}
	usersPerPattern := len(data.users) / numPermissionPatterns
b.Run("GroupResourceDirect", func(b *testing.B) {
// User with group_resource permission - should have access to everything
user := data.users[0] // First user has GroupResource permission
resource := data.resources[rand.Intn(len(data.resources))]
folder := data.resourceFolders[resource]
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource, folder, resource))
if err != nil {
b.Fatal(err)
}
if !res.GetAllowed() {
b.Fatal("expected access to be allowed")
}
}
})
// Test folder inheritance at each depth level (0 to maxDepth)
// User has permission on ROOT folder (depth 0), we check access at each deeper level
rootUser := "user:depth-0-access" // has view permission on root folder
for depth := 0; depth <= data.maxDepth; depth++ {
depth := depth // capture for closure
if len(data.foldersByDepth[depth]) == 0 {
continue
}
b.Run(fmt.Sprintf("FolderInheritance/Depth%d", depth), func(b *testing.B) {
resource := data.resources[0]
folder := data.foldersByDepth[depth][0]
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(rootUser, utils.VerbGet, benchDashboardGroup, benchDashboardResource, folder, resource))
if err != nil {
b.Fatal(err)
}
_ = res.GetAllowed()
}
})
}
b.Run("FolderResourceScoped", func(b *testing.B) {
// User with folder-scoped resource permission
user := data.users[3*usersPerPattern]
folderIdx := 0
folder := data.folders[folderIdx]
resource := data.resources[folderIdx]
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource, folder, resource))
if err != nil {
b.Fatal(err)
}
_ = res.GetAllowed()
}
})
b.Run("DirectResource", func(b *testing.B) {
// User with direct resource permission
user := data.users[4*usersPerPattern]
resourceIdx := 0
resource := data.resources[resourceIdx]
folder := data.resourceFolders[resource]
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource, folder, resource))
if err != nil {
b.Fatal(err)
}
_ = res.GetAllowed()
}
})
b.Run("TeamMembership", func(b *testing.B) {
// User who is a team member, team has folder permission
user := data.users[5*usersPerPattern]
teamIdx := 0
folderIdx := teamIdx % len(data.folders)
folder := data.folders[folderIdx]
resource := data.resources[folderIdx%len(data.resources)]
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource, folder, resource))
if err != nil {
b.Fatal(err)
}
_ = res.GetAllowed()
}
})
b.Run("NoAccess", func(b *testing.B) {
// User with no permissions - tests denial path
user := data.users[len(data.users)-1] // Last user has no permissions
resource := data.resources[0]
folder := data.resourceFolders[resource]
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource, folder, resource))
if err != nil {
b.Fatal(err)
}
if res.GetAllowed() {
b.Fatal("expected access to be denied")
}
}
})
b.Run("FolderCheck", func(b *testing.B) {
// Direct folder access check
user := data.users[usersPerPattern]
folder := data.rootFolder
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.Check(ctx, newCheckReq(user, utils.VerbGet, benchFolderGroup, benchFolderResource, "", folder))
if err != nil {
b.Fatal(err)
}
_ = res.GetAllowed()
}
})
}
func BenchmarkBatchCheck(b *testing.B) {
srv, data := setupBenchmarkServer(b)
ctx := newContextWithNamespace()
// Helper to create batch check requests
newBatchCheckReq := func(subject string, items []*authzextv1.BatchCheckItem) *authzextv1.BatchCheckRequest {
return &authzextv1.BatchCheckRequest{
Namespace: benchNamespace,
Subject: subject,
Items: items,
}
}
// Helper to create batch items for resources in folders
createBatchItems := func(resources []string, resourceFolders map[string]string) []*authzextv1.BatchCheckItem {
items := make([]*authzextv1.BatchCheckItem, 0, batchCheckSize)
for i := 0; i < batchCheckSize && i < len(resources); i++ {
resource := resources[i]
items = append(items, &authzextv1.BatchCheckItem{
Verb: utils.VerbGet,
Group: benchDashboardGroup,
Resource: benchDashboardResource,
Name: resource,
Folder: resourceFolders[resource],
})
}
return items
}
// Helper to create batch items for folders at a specific depth
createFolderBatchItems := func(folders []string, depth int, folderDepths map[string]int) []*authzextv1.BatchCheckItem {
items := make([]*authzextv1.BatchCheckItem, 0, batchCheckSize)
for _, folder := range folders {
if folderDepths[folder] == depth && len(items) < batchCheckSize {
items = append(items, &authzextv1.BatchCheckItem{
Verb: utils.VerbGet,
Group: benchDashboardGroup,
Resource: benchDashboardResource,
Name: fmt.Sprintf("resource-in-%s", folder),
Folder: folder,
})
}
}
// Fill remaining slots if needed
for len(items) < batchCheckSize && len(folders) > 0 {
folder := folders[len(items)%len(folders)]
items = append(items, &authzextv1.BatchCheckItem{
Verb: utils.VerbGet,
Group: benchDashboardGroup,
Resource: benchDashboardResource,
Name: fmt.Sprintf("resource-%d", len(items)),
Folder: folder,
})
}
return items
}
usersPerPattern := len(data.users) / numPermissionPatterns
b.Run("GroupResourceDirect", func(b *testing.B) {
// User with group_resource permission - should have access to everything
user := data.users[0]
items := createBatchItems(data.resources, data.resourceFolders)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("FolderInheritance/Depth1", func(b *testing.B) {
// User with folder permission on shallow folder
user := data.users[usersPerPattern]
items := createFolderBatchItems(data.folders, 1, data.folderDepths)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("FolderInheritance/Depth4", func(b *testing.B) {
// User with folder permission on mid-depth folder
user := data.users[2*usersPerPattern]
items := createFolderBatchItems(data.folders, 4, data.folderDepths)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("FolderInheritance/Depth7", func(b *testing.B) {
// Check access on deepest folders (worst case for inheritance traversal)
user := data.users[usersPerPattern]
items := createFolderBatchItems(data.folders, data.maxDepth, data.folderDepths)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("DirectResource", func(b *testing.B) {
// User with direct resource permission
user := data.users[4*usersPerPattern]
items := createBatchItems(data.resources, data.resourceFolders)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("TeamMembership", func(b *testing.B) {
// User who is a team member, team has folder permission
user := data.users[5*usersPerPattern]
items := createBatchItems(data.resources, data.resourceFolders)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("NoAccess", func(b *testing.B) {
// User with no permissions - tests denial path
user := data.users[len(data.users)-1]
items := createBatchItems(data.resources, data.resourceFolders)
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
b.Run("MixedFolders", func(b *testing.B) {
// Batch of items across different folder depths
user := data.users[usersPerPattern]
items := make([]*authzextv1.BatchCheckItem, 0, batchCheckSize)
for i := 0; i < batchCheckSize; i++ {
folder := data.folders[i%len(data.folders)]
items = append(items, &authzextv1.BatchCheckItem{
Verb: utils.VerbGet,
Group: benchDashboardGroup,
Resource: benchDashboardResource,
Name: fmt.Sprintf("resource-%d", i),
Folder: folder,
})
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := srv.BatchCheck(ctx, newBatchCheckReq(user, items))
if err != nil {
b.Fatal(err)
}
_ = res.Groups
}
})
}
// BenchmarkList measures the performance of List requests (Compile equivalent)
func BenchmarkList(b *testing.B) {
srv, data := setupBenchmarkServer(b)
baseCtx := newContextWithNamespace()
// Helper to create list requests
newListReq := func(subject, verb, group, resource string) *authzv1.ListRequest {
return &authzv1.ListRequest{
Namespace: benchNamespace,
Subject: subject,
Verb: verb,
Group: group,
Resource: resource,
}
}
// Helper to create context with timeout
ctxWithTimeout := func() (context.Context, context.CancelFunc) {
return context.WithTimeout(baseCtx, listTimeout)
}
	usersPerPattern := len(data.users) / numPermissionPatterns
b.Run("AllAccess", func(b *testing.B) {
// User with group_resource permission - should return All=true quickly
user := data.users[0]
b.Logf("Test: User with group_resource permission (access to ALL dashboards)")
b.Logf("Expected: All=true returned immediately without ListObjects call")
b.ResetTimer()
for i := 0; i < b.N; i++ {
ctx, cancel := ctxWithTimeout()
res, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource))
cancel()
if err != nil {
b.Fatalf("Error: %v", err)
}
if !res.GetAll() {
b.Fatal("expected All=true for user with group_resource permission")
}
}
})
b.Run("FolderScoped", func(b *testing.B) {
// User with folder permissions - should return folder list
user := data.users[usersPerPattern]
b.Logf("Test: User with direct folder permission on a single folder")
b.Logf("Expected: Returns list of folders user has access to")
b.ResetTimer()
for i := 0; i < b.N; i++ {
ctx, cancel := ctxWithTimeout()
res, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource))
cancel()
if err != nil {
b.Fatalf("Error: %v", err)
}
if i == 0 {
b.Logf("Result: %d folders, %d items, All=%v", len(res.GetFolders()), len(res.GetItems()), res.GetAll())
}
}
})
b.Run("DirectResources", func(b *testing.B) {
// User with direct resource permissions - should return items list
user := data.users[4*usersPerPattern]
b.Logf("Test: User with direct permission on specific resources")
b.Logf("Expected: Returns list of specific resources user has access to")
b.ResetTimer()
for i := 0; i < b.N; i++ {
ctx, cancel := ctxWithTimeout()
res, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource))
cancel()
if err != nil {
b.Fatalf("Error: %v", err)
}
if i == 0 {
b.Logf("Result: %d folders, %d items, All=%v", len(res.GetFolders()), len(res.GetItems()), res.GetAll())
}
}
})
b.Run("NoAccess", func(b *testing.B) {
// User with no permissions - should return empty results
user := data.users[len(data.users)-1]
b.Logf("Test: User with NO permissions (denial case)")
b.Logf("Expected: Empty results")
b.ResetTimer()
for i := 0; i < b.N; i++ {
ctx, cancel := ctxWithTimeout()
res, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchDashboardGroup, benchDashboardResource))
cancel()
if err != nil {
b.Fatalf("Error: %v", err)
}
if i == 0 {
b.Logf("Result: %d folders, %d items, All=%v", len(res.GetFolders()), len(res.GetItems()), res.GetAll())
}
}
})
b.Run("LargeRootFolder", func(b *testing.B) {
// User with access to root folder that has many descendants
user := "user:large-root-access"
b.Logf("Test: User with permission on ROOT folder (folder-0)")
b.Logf("Root folder %s has %d total descendants", data.largestRootFolder, data.largestRootDescCount)
b.Logf("Expected: ListObjects should return folders through inheritance")
b.ResetTimer()
for i := 0; i < b.N; i++ {
ctx, cancel := ctxWithTimeout()
start := time.Now()
res, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchFolderGroup, benchFolderResource))
elapsed := time.Since(start)
cancel()
if err != nil {
b.Fatalf("Error after %v: %v", elapsed, err)
}
if i == 0 {
b.Logf("Result: %d folders returned in %v (descendants: %d)",
len(res.GetItems()), elapsed, data.largestRootDescCount)
}
}
})
// Test List at various folder depths to find breaking point
b.Run("ByDepth", func(b *testing.B) {
b.Logf("Testing List performance at various folder depths (timeout: %v)", listTimeout)
b.Logf("Tree structure: %d folders per level, %d max depth", foldersPerLevel, data.maxDepth)
for depth := 0; depth <= data.maxDepth; depth++ {
if len(data.foldersByDepth[depth]) == 0 {
continue
}
folder := data.foldersByDepth[depth][0]
descendants := data.folderDescendants[folder]
user := fmt.Sprintf("user:depth-%d-access", depth)
b.Run(fmt.Sprintf("Depth%d_%dDescendants", depth, descendants), func(b *testing.B) {
b.Logf("Test: User with permission on folder at depth %d", depth)
b.Logf("Folder: %s, Descendants: %d", folder, descendants)
// First, do a single timed run to report
ctx, cancel := ctxWithTimeout()
start := time.Now()
res, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchFolderGroup, benchFolderResource))
elapsed := time.Since(start)
cancel()
if err != nil {
b.Logf("FAILED after %v: %v", elapsed, err)
if elapsed >= listTimeout {
b.Logf("TIMEOUT: List took longer than %v", listTimeout)
}
b.Skip("Skipping benchmark iterations due to error")
return
}
b.Logf("Result: %d folders in %v", len(res.GetItems()), elapsed)
if elapsed > 5*time.Second {
b.Logf("WARNING: Single List took %v, skipping benchmark iterations", elapsed)
b.Skip("Too slow for benchmark iterations")
return
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
ctx, cancel := ctxWithTimeout()
_, err := srv.List(ctx, newListReq(user, utils.VerbGet, benchFolderGroup, benchFolderResource))
cancel()
if err != nil {
b.Fatalf("Error: %v", err)
}
}
})
}
})
}
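
The tuple-write loop in setupBenchmarkServer chunks writes by hand because OpenFGA caps each write at 100 tuples. Extracted as a standalone helper, the pattern might look like this (a minimal sketch; this helper is not part of the actual code):

```go
package main

import "fmt"

// batchSlices splits items into chunks of at most size elements, mirroring
// the manual batching in setupBenchmarkServer. Hypothetical helper, not
// present in the source.
func batchSlices[T any](items []T, size int) [][]T {
	var batches [][]T
	for i := 0; i < len(items); i += size {
		end := i + size
		if end > len(items) {
			end = len(items)
		}
		batches = append(batches, items[i:end])
	}
	return batches
}

func main() {
	// Five items in batches of two yield chunk sizes 2, 2, 1.
	for _, b := range batchSlices([]int{1, 2, 3, 4, 5}, 2) {
		fmt.Println(len(b))
	}
}
```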

View File

@@ -126,8 +126,14 @@ func (s *Server) checkTyped(ctx context.Context, subject, relation string, resou
return &authzv1.CheckResponse{Allowed: false}, nil
}
// Use optimized folder permission relations for permission management
checkRelation := relation
if resource.Type() == common.TypeFolder {
checkRelation = common.FolderPermissionRelation(relation)
}
// Check if subject has direct access to resource
res, err := s.openfgaCheck(ctx, store, subject, relation, resourceIdent, contextuals, nil)
res, err := s.openfgaCheck(ctx, store, subject, checkRelation, resourceIdent, contextuals, nil)
if err != nil {
return nil, err
}
@@ -143,14 +149,15 @@ func (s *Server) checkGeneric(ctx context.Context, subject, relation string, res
defer span.End()
var (
folderIdent = resource.FolderIdent()
resourceCtx = resource.Context()
folderRelation = common.SubresourceRelation(relation)
folderIdent = resource.FolderIdent()
resourceCtx = resource.Context()
folderRelation = common.SubresourceRelation(relation)
folderCheckRelation = common.FolderPermissionRelation(relation)
)
if folderIdent != "" && isFolderPermissionBasedResource(resource.GroupResource()) {
// Check if resource inherits permissions from the folder (like dashboards in a folder)
res, err := s.openfgaCheck(ctx, store, subject, relation, folderIdent, contextuals, resourceCtx)
res, err := s.openfgaCheck(ctx, store, subject, folderCheckRelation, folderIdent, contextuals, resourceCtx)
if err != nil {
return nil, err
}
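
These hunks route checks through a new common.FolderPermissionRelation helper whose body is not shown in this diff. Going by the commit message, which introduces `can_get_permissions` and `can_set_permissions` relations, it plausibly rewrites permission-management relations to their optimized counterparts and passes everything else through; a hypothetical sketch (relation names are assumptions):

```go
package main

import "fmt"

// folderPermissionRelation is a hypothetical reconstruction of
// common.FolderPermissionRelation: permission-management relations are
// rewritten to the optimized can_* relations, all others pass through
// unchanged. The relation strings are assumptions from the commit message.
func folderPermissionRelation(relation string) string {
	switch relation {
	case "get_permissions":
		return "can_get_permissions"
	case "set_permissions":
		return "can_set_permissions"
	default:
		return relation
	}
}

func main() {
	fmt.Println(folderPermissionRelation("get_permissions"))
	fmt.Println(folderPermissionRelation("view"))
}
```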

View File

@@ -85,6 +85,12 @@ func (s *Server) listTyped(ctx context.Context, subject, relation string, resour
resourceCtx = resource.Context()
)
// Use optimized folder permission relations for permission management
listRelation := relation
if resource.Type() == common.TypeFolder {
listRelation = common.FolderPermissionRelation(relation)
}
var items []string
if resource.HasSubresource() && common.IsSubresourceRelation(subresourceRelation) {
// List requested subresources
@@ -110,7 +116,7 @@ func (s *Server) listTyped(ctx context.Context, subject, relation string, resour
StoreId: store.ID,
AuthorizationModelId: store.ModelID,
Type: resource.Type(),
Relation: relation,
Relation: listRelation,
User: subject,
ContextualTuples: contextuals,
})
@@ -129,8 +135,9 @@ func (s *Server) listGeneric(ctx context.Context, subject, relation string, reso
defer span.End()
var (
folderRelation = common.SubresourceRelation(relation)
resourceCtx = resource.Context()
folderRelation = common.SubresourceRelation(relation)
folderListRelation = common.FolderPermissionRelation(relation) // Optimized for permission management
resourceCtx = resource.Context()
)
// 1. List all folders subject has access to resource type in
@@ -159,7 +166,7 @@ func (s *Server) listGeneric(ctx context.Context, subject, relation string, reso
StoreId: store.ID,
AuthorizationModelId: store.ModelID,
Type: common.TypeFolder,
Relation: relation,
Relation: folderListRelation,
User: subject,
Context: resourceCtx,
ContextualTuples: contextuals,

View File

@@ -44,6 +44,11 @@ type DashboardService interface {
GetDashboardsByLibraryPanelUID(ctx context.Context, libraryPanelUID string, orgID int64) ([]*DashboardRef, error)
}
type DashboardAccessService interface {
// HasDashboardAccess reports whether the user has access to {VERB} the requested dashboard
HasDashboardAccess(ctx context.Context, user identity.Requester, verb string, namespace string, name string) (bool, error)
}
type PermissionsRegistrationService interface {
RegisterDashboardPermissions(service accesscontrol.DashboardPermissionsService)

View File

@@ -5,9 +5,10 @@ package dashboards
import (
context "context"
identity "github.com/grafana/grafana/pkg/apimachinery/identity"
mock "github.com/stretchr/testify/mock"
identity "github.com/grafana/grafana/pkg/apimachinery/identity"
model "github.com/grafana/grafana/pkg/services/search/model"
unstructured "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -529,6 +530,11 @@ func (_m *FakeDashboardService) ValidateDashboardRefreshInterval(minRefreshInter
return r0
}
// HasDashboardAccess is a test stub that always grants access
func (_m *FakeDashboardService) HasDashboardAccess(ctx context.Context, user identity.Requester, verb string, namespace string, name string) (bool, error) {
return true, nil
}
// NewFakeDashboardService creates a new instance of FakeDashboardService. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewFakeDashboardService(t interface {

View File

@@ -67,6 +67,7 @@ var (
_ dashboards.DashboardService = (*DashboardServiceImpl)(nil)
_ dashboards.DashboardProvisioningService = (*DashboardServiceImpl)(nil)
_ dashboards.PluginService = (*DashboardServiceImpl)(nil)
_ dashboards.DashboardAccessService = (*DashboardServiceImpl)(nil)
daysInTrash = 24 * 30 * time.Hour
tracer = otel.Tracer("github.com/grafana/grafana/pkg/services/dashboards/service")
@@ -100,6 +101,38 @@ type DashboardServiceImpl struct {
dashboardPermissionsReady chan struct{}
}
// HasDashboardAccess uses the access control service to check whether the user can perform the given verb on a dashboard
func (dr *DashboardServiceImpl) HasDashboardAccess(ctx context.Context, user identity.Requester, verb string, namespace string, name string) (bool, error) {
ns, err := claims.ParseNamespace(namespace)
if err != nil {
return false, err
}
dash, err := dr.GetDashboard(ctx, &dashboards.GetDashboardQuery{
UID: name,
OrgID: ns.OrgID,
})
if err != nil || dash == nil {
return false, nil
}
var action string
switch verb {
case utils.VerbGet:
action = dashboards.ActionDashboardsRead
case utils.VerbUpdate:
action = dashboards.ActionDashboardsWrite
default:
return false, fmt.Errorf("unsupported verb: %s", verb)
}
evaluator := accesscontrol.EvalPermission(action,
dashboards.ScopeDashboardsProvider.GetResourceScopeUID(name))
canView, err := dr.ac.Evaluate(ctx, user, evaluator)
if err != nil || !canView {
return false, nil
}
return true, nil
}
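
The verb switch in HasDashboardAccess could equally be a lookup table; a minimal sketch, with the dashboards.ActionDashboardsRead/Write constants stubbed as plain strings (both verbs and actions here are illustrative, not the real constants):

```go
package main

import "fmt"

// verbToAction mirrors the switch in HasDashboardAccess. The action values
// stand in for dashboards.ActionDashboardsRead/Write; the strings are
// illustrative stubs.
var verbToAction = map[string]string{
	"get":    "dashboards:read",
	"update": "dashboards:write",
}

// actionForVerb resolves a verb to its access-control action, erroring on
// verbs the handler does not support.
func actionForVerb(verb string) (string, error) {
	action, ok := verbToAction[verb]
	if !ok {
		return "", fmt.Errorf("unsupported verb: %s", verb)
	}
	return action, nil
}

func main() {
	action, err := actionForVerb("get")
	fmt.Println(action, err)
}
```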
func (dr *DashboardServiceImpl) startK8sDeletedDashboardsCleanupJob(ctx context.Context) chan struct{} {
done := make(chan struct{})
go func() {

View File

@@ -23,3 +23,9 @@ func ProvideDashboardPluginService(
) dashboards.PluginService {
return orig
}
func ProvideDashboardAccessService(
features featuremgmt.FeatureToggles, orig *DashboardServiceImpl,
) dashboards.DashboardAccessService {
return orig
}

View File

@@ -1962,6 +1962,13 @@ var (
RequiresRestart: false,
HideFromDocs: false,
},
{
Name: "elasticsearchRawDSLQuery",
Description: "Enables the raw DSL query editor in the Elasticsearch data source",
Stage: FeatureStageExperimental,
Owner: grafanaPartnerPluginsSquad,
Expression: "false",
},
{
Name: "kubernetesAnnotations",
Description: "Enables app platform API for annotations",

View File

@@ -266,6 +266,7 @@ pluginStoreServiceLoading,experimental,@grafana/plugins-platform-backend,false,f
newPanelPadding,preview,@grafana/dashboards-squad,false,false,true
onlyStoreActionSets,GA,@grafana/identity-access-team,false,false,false
panelTimeSettings,experimental,@grafana/dashboards-squad,false,false,false
elasticsearchRawDSLQuery,experimental,@grafana/partner-datasources,false,false,false
kubernetesAnnotations,experimental,@grafana/grafana-backend-services-squad,false,false,false
awsDatasourcesHttpProxy,experimental,@grafana/aws-datasources,false,false,false
transformationsEmptyPlaceholder,preview,@grafana/datapro,false,false,true

View File

@@ -758,6 +758,10 @@ const (
// Enables a new panel time settings drawer
FlagPanelTimeSettings = "panelTimeSettings"
// FlagElasticsearchRawDSLQuery
// Enables the raw DSL query editor in the Elasticsearch data source
FlagElasticsearchRawDSLQuery = "elasticsearchRawDSLQuery"
// FlagKubernetesAnnotations
// Enables app platform API for annotations
FlagKubernetesAnnotations = "kubernetesAnnotations"

View File

@@ -1206,6 +1206,19 @@
"codeowner": "@grafana/partner-datasources"
}
},
{
"metadata": {
"name": "elasticsearchRawDSLQuery",
"resourceVersion": "1763508396079",
"creationTimestamp": "2025-11-18T23:26:36Z"
},
"spec": {
"description": "Enables the raw DSL query editor in the Elasticsearch data source",
"stage": "experimental",
"codeowner": "@grafana/partner-datasources",
"expression": "false"
}
},
{
"metadata": {
"name": "enableAppChromeExtensions",

View File

@@ -6,10 +6,11 @@ import (
"fmt"
"strings"
"github.com/grafana/authlib/types"
"github.com/grafana/grafana-plugin-sdk-go/backend"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/cmd/grafana-cli/logger"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/dashboards"
"github.com/grafana/grafana/pkg/services/live/model"
)
@@ -32,10 +33,9 @@ type dashboardEvent struct {
// DashboardHandler manages all the `grafana/dashboard/*` channels
type DashboardHandler struct {
Publisher model.ChannelPublisher
ClientCount model.ChannelClientCount
DashboardService dashboards.DashboardService
AccessControl accesscontrol.AccessControl
Publisher model.ChannelPublisher
ClientCount model.ChannelClientCount
AccessControl dashboards.DashboardAccessService
}
// GetHandlerForPath called on init
@@ -49,23 +49,15 @@ func (h *DashboardHandler) OnSubscribe(ctx context.Context, user identity.Reques
// make sure can view this dashboard
if len(parts) == 2 && parts[0] == "uid" {
query := dashboards.GetDashboardQuery{UID: parts[1], OrgID: user.GetOrgID()}
_, err := h.DashboardService.GetDashboard(ctx, &query)
if err != nil {
logger.Error("Error getting dashboard", "query", query, "error", err)
return model.SubscribeReply{}, backend.SubscribeStreamStatusNotFound, nil
ns := types.OrgNamespaceFormatter(user.GetOrgID())
ok, err := h.AccessControl.HasDashboardAccess(ctx, user, utils.VerbGet, ns, parts[1])
if ok && err == nil {
return model.SubscribeReply{
Presence: true,
JoinLeave: true,
}, backend.SubscribeStreamStatusOK, nil
}
evaluator := accesscontrol.EvalPermission(dashboards.ActionDashboardsRead, dashboards.ScopeDashboardsProvider.GetResourceScopeUID(parts[1]))
canView, err := h.AccessControl.Evaluate(ctx, user, evaluator)
if err != nil || !canView {
return model.SubscribeReply{}, backend.SubscribeStreamStatusPermissionDenied, err
}
return model.SubscribeReply{
Presence: true,
JoinLeave: true,
}, backend.SubscribeStreamStatusOK, nil
return model.SubscribeReply{}, backend.SubscribeStreamStatusPermissionDenied, err
}
// Unknown path
@@ -88,29 +80,16 @@ func (h *DashboardHandler) OnPublish(ctx context.Context, requester identity.Req
// just ignore the event
return model.PublishReply{}, backend.PublishStreamStatusNotFound, fmt.Errorf("ignore???")
}
query := dashboards.GetDashboardQuery{UID: parts[1], OrgID: requester.GetOrgID()}
_, err = h.DashboardService.GetDashboard(ctx, &query)
if err != nil {
logger.Error("Unknown dashboard", "query", query)
return model.PublishReply{}, backend.PublishStreamStatusNotFound, nil
}
evaluator := accesscontrol.EvalPermission(dashboards.ActionDashboardsWrite, dashboards.ScopeDashboardsProvider.GetResourceScopeUID(parts[1]))
canEdit, err := h.AccessControl.Evaluate(ctx, requester, evaluator)
if err != nil {
return model.PublishReply{}, backend.PublishStreamStatusNotFound, fmt.Errorf("internal error")
ns := types.OrgNamespaceFormatter(requester.GetOrgID())
ok, err := h.AccessControl.HasDashboardAccess(ctx, requester, utils.VerbUpdate, ns, parts[1])
if ok && err == nil {
msg, err := json.Marshal(event)
if err != nil {
return model.PublishReply{}, backend.PublishStreamStatusNotFound, fmt.Errorf("internal error")
}
return model.PublishReply{Data: msg}, backend.PublishStreamStatusOK, nil
}
// Ignore edit events if the user can not edit
if !canEdit {
return model.PublishReply{}, backend.PublishStreamStatusNotFound, nil // NOOP
}
msg, err := json.Marshal(event)
if err != nil {
return model.PublishReply{}, backend.PublishStreamStatusNotFound, fmt.Errorf("internal error")
}
return model.PublishReply{Data: msg}, backend.PublishStreamStatusOK, nil
}
return model.PublishReply{}, backend.PublishStreamStatusNotFound, nil

View File

@@ -27,13 +27,11 @@ import (
"github.com/grafana/grafana/pkg/api/response"
"github.com/grafana/grafana/pkg/api/routing"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/infra/localcache"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/infra/usagestats"
"github.com/grafana/grafana/pkg/middleware"
"github.com/grafana/grafana/pkg/middleware/requestmeta"
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/apiserver"
contextmodel "github.com/grafana/grafana/pkg/services/contexthandler/model"
"github.com/grafana/grafana/pkg/services/dashboards"
@@ -52,7 +50,6 @@ import (
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/services/pluginsintegration/plugincontext"
"github.com/grafana/grafana/pkg/services/pluginsintegration/pluginstore"
"github.com/grafana/grafana/pkg/services/secrets"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/util"
"github.com/grafana/grafana/pkg/web"
@@ -72,28 +69,23 @@ type CoreGrafanaScope struct {
Dashboards DashboardActivityChannel
}
func ProvideService(plugCtxProvider *plugincontext.Provider, cfg *setting.Cfg, routeRegister routing.RouteRegister,
pluginStore pluginstore.Store, pluginClient plugins.Client, cacheService *localcache.CacheService,
dataSourceCache datasources.CacheService, secretsService secrets.Service,
func ProvideService(cfg *setting.Cfg, routeRegister routing.RouteRegister, plugCtxProvider *plugincontext.Provider,
pluginStore pluginstore.Store, pluginClient plugins.Client, dataSourceCache datasources.CacheService,
usageStatsService usagestats.Service, toggles featuremgmt.FeatureToggles,
accessControl accesscontrol.AccessControl, dashboardService dashboards.DashboardService,
orgService org.Service, configProvider apiserver.RestConfigProvider) (*GrafanaLive, error) {
dashboardService dashboards.DashboardAccessService,
configProvider apiserver.RestConfigProvider) (*GrafanaLive, error) {
g := &GrafanaLive{
Cfg: cfg,
Features: toggles,
PluginContextProvider: plugCtxProvider,
RouteRegister: routeRegister,
pluginStore: pluginStore,
pluginClient: pluginClient,
CacheService: cacheService,
DataSourceCache: dataSourceCache,
SecretsService: secretsService,
channels: make(map[string]model.ChannelHandler),
GrafanaScope: CoreGrafanaScope{
Features: make(map[string]model.ChannelHandlerFactory),
},
usageStatsService: usageStatsService,
orgService: orgService,
keyPrefix: "gf_live",
}
@@ -176,19 +168,13 @@ func ProvideService(plugCtxProvider *plugincontext.Provider, cfg *setting.Cfg, r
// Initialize the main features
dash := &features.DashboardHandler{
Publisher: g.Publish,
ClientCount: g.ClientCount,
DashboardService: dashboardService,
AccessControl: accessControl,
Publisher: g.Publish,
ClientCount: g.ClientCount,
AccessControl: dashboardService,
}
g.GrafanaScope.Dashboards = dash
g.GrafanaScope.Features["dashboard"] = dash
// Testing watch with just the provisioning support -- this will be removed when it is well validated
//nolint:staticcheck // not yet migrated to OpenFeature
if toggles.IsEnabledGlobally(featuremgmt.FlagProvisioning) {
g.GrafanaScope.Features["watch"] = features.NewWatchRunner(g.Publish, configProvider)
}
g.GrafanaScope.Features["watch"] = features.NewWatchRunner(g.Publish, configProvider)
g.surveyCaller = survey.NewCaller(managedStreamRunner, node)
err = g.surveyCaller.SetupHandlers()
@@ -398,11 +384,11 @@ func ProvideService(plugCtxProvider *plugincontext.Provider, cfg *setting.Cfg, r
pushPipelineWSHandler.ServeHTTP(ctx.Resp, r)
}
g.RouteRegister.Group("/api/live", func(group routing.RouteRegister) {
routeRegister.Group("/api/live", func(group routing.RouteRegister) {
group.Get("/ws", g.websocketHandler)
}, middleware.ReqSignedIn, requestmeta.SetSLOGroup(requestmeta.SLOGroupNone))
g.RouteRegister.Group("/api/live", func(group routing.RouteRegister) {
routeRegister.Group("/api/live", func(group routing.RouteRegister) {
group.Get("/push/:streamId", g.pushWebsocketHandler)
group.Get("/pipeline/push/*", g.pushPipelineWebsocketHandler)
}, middleware.ReqOrgAdmin, requestmeta.SetSLOGroup(requestmeta.SLOGroupNone))
@@ -461,13 +447,9 @@ type GrafanaLive struct {
PluginContextProvider *plugincontext.Provider
Cfg *setting.Cfg
Features featuremgmt.FeatureToggles
RouteRegister routing.RouteRegister
CacheService *localcache.CacheService
DataSourceCache datasources.CacheService
SecretsService secrets.Service
pluginStore pluginstore.Store
pluginClient plugins.Client
orgService org.Service
keyPrefix string // HA prefix for grafana cloud (since the org is always 1)
@@ -1356,71 +1338,6 @@ func (g *GrafanaLive) HandleWriteConfigsPostHTTP(c *contextmodel.ReqContext) res
})
}
// HandleWriteConfigsPutHTTP ...
func (g *GrafanaLive) HandleWriteConfigsPutHTTP(c *contextmodel.ReqContext) response.Response {
body, err := io.ReadAll(c.Req.Body)
if err != nil {
return response.Error(http.StatusInternalServerError, "Error reading body", err)
}
var cmd pipeline.WriteConfigUpdateCmd
err = json.Unmarshal(body, &cmd)
if err != nil {
return response.Error(http.StatusBadRequest, "Error decoding write config update command", err)
}
if cmd.UID == "" {
return response.Error(http.StatusBadRequest, "UID required", nil)
}
existingBackend, ok, err := g.pipelineStorage.GetWriteConfig(c.Req.Context(), c.GetOrgID(), pipeline.WriteConfigGetCmd{
UID: cmd.UID,
})
if err != nil {
return response.Error(http.StatusInternalServerError, "Failed to get write config", err)
}
if ok {
if cmd.SecureSettings == nil {
cmd.SecureSettings = map[string]string{}
}
secureJSONData, err := g.SecretsService.DecryptJsonData(c.Req.Context(), existingBackend.SecureSettings)
if err != nil {
logger.Error("Error decrypting secure settings", "error", err)
return response.Error(http.StatusInternalServerError, "Error decrypting secure settings", err)
}
for k, v := range secureJSONData {
if _, ok := cmd.SecureSettings[k]; !ok {
cmd.SecureSettings[k] = v
}
}
}
result, err := g.pipelineStorage.UpdateWriteConfig(c.Req.Context(), c.GetOrgID(), cmd)
if err != nil {
return response.Error(http.StatusInternalServerError, "Failed to update write config", err)
}
return response.JSON(http.StatusOK, util.DynMap{
"writeConfig": pipeline.WriteConfigToDto(result),
})
}
// HandleWriteConfigsDeleteHTTP ...
func (g *GrafanaLive) HandleWriteConfigsDeleteHTTP(c *contextmodel.ReqContext) response.Response {
body, err := io.ReadAll(c.Req.Body)
if err != nil {
return response.Error(http.StatusInternalServerError, "Error reading body", err)
}
var cmd pipeline.WriteConfigDeleteCmd
err = json.Unmarshal(body, &cmd)
if err != nil {
return response.Error(http.StatusBadRequest, "Error decoding write config delete command", err)
}
if cmd.UID == "" {
return response.Error(http.StatusBadRequest, "UID required", nil)
}
err = g.pipelineStorage.DeleteWriteConfig(c.Req.Context(), c.GetOrgID(), cmd)
if err != nil {
return response.Error(http.StatusInternalServerError, "Failed to delete write config", err)
}
return response.JSON(http.StatusOK, util.DynMap{})
}
// Write to the standard log15 logger
func handleLog(msg centrifuge.LogEntry) {
arr := make([]interface{}, 0)


@@ -19,7 +19,6 @@ import (
"github.com/grafana/grafana/pkg/api/routing"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/infra/usagestats"
"github.com/grafana/grafana/pkg/services/accesscontrol/acimpl"
"github.com/grafana/grafana/pkg/services/dashboards"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/setting"
@@ -340,16 +339,14 @@ func setupLiveService(cfg *setting.Cfg, t *testing.T) (*GrafanaLive, error) {
cfg = setting.NewCfg()
}
return ProvideService(nil,
cfg,
return ProvideService(cfg,
routing.NewRouteRegister(),
nil, nil, nil, nil,
nil, nil, nil,
nil,
&usagestats.UsageStatsMock{T: t},
featuremgmt.WithFeatures(),
acimpl.ProvideAccessControl(featuremgmt.WithFeatures()),
&dashboards.FakeDashboardService{},
nil, nil)
nil)
}
type dummyTransport struct {


@@ -457,6 +457,7 @@ type paginationContext struct {
labelOptions []ngmodels.LabelOption
limitAlertsPerRule int64
limitRulesPerGroup int64
compact bool
}
// pageResult is the result of fetching and filtering of one page
@@ -492,6 +493,7 @@ func (ctx *paginationContext) fetchAndFilterPage(log log.Logger, store ListAlert
Limit: remainingGroups,
RuleLimit: remainingRules,
ContinueToken: token,
Compact: ctx.compact,
}
ruleList, newToken, err := store.ListAlertRulesByGroup(ctx.opts.Ctx, &byGroupQuery)
@@ -519,7 +521,7 @@ func (ctx *paginationContext) fetchAndFilterPage(log log.Logger, store ListAlert
log, rg.GroupKey, rg.Folder, rg.Rules,
ctx.provenanceRecords, ctx.limitAlertsPerRule,
ctx.stateFilterSet, ctx.matchers, ctx.labelOptions,
ctx.ruleStatusMutator, ctx.alertStateMutator,
ctx.ruleStatusMutator, ctx.alertStateMutator, ctx.compact,
)
ruleGroup.Totals = totals
accumulateTotals(result.totalsDelta, totals)
@@ -785,6 +787,8 @@ func PrepareRuleGroupStatusesV2(log log.Logger, store ListAlertRulesStoreV2, opt
}
span.SetAttributes(attribute.Int("rule_name_count", len(ruleNamesSet)))
compact := getBoolWithDefault(opts.Query, "compact", false)
span.SetAttributes(attribute.Bool("compact", compact))
pagCtx := &paginationContext{
opts: opts,
provenanceRecords: provenanceRecords,
@@ -807,6 +811,7 @@ func PrepareRuleGroupStatusesV2(log log.Logger, store ListAlertRulesStoreV2, opt
labelOptions: labelOptions,
limitAlertsPerRule: limitAlertsPerRule,
limitRulesPerGroup: limitRulesPerGroup,
compact: compact,
}
groups, rulesTotals, continueToken, err := paginateRuleGroups(log, store, pagCtx, span, maxGroups, maxRules, nextToken)
@@ -959,7 +964,7 @@ func PrepareRuleGroupStatuses(log log.Logger, store ListAlertRulesStore, opts Ru
break
}
ruleGroup, totals := toRuleGroup(log, rg.GroupKey, rg.Folder, rg.Rules, provenanceRecords, limitAlertsPerRule, stateFilterSet, matchers, labelOptions, ruleStatusMutator, alertStateMutator)
ruleGroup, totals := toRuleGroup(log, rg.GroupKey, rg.Folder, rg.Rules, provenanceRecords, limitAlertsPerRule, stateFilterSet, matchers, labelOptions, ruleStatusMutator, alertStateMutator, false)
ruleGroup.Totals = totals
for k, v := range totals {
rulesTotals[k] += v
@@ -1110,7 +1115,7 @@ func matchersMatch(matchers []*labels.Matcher, labels map[string]string) bool {
return true
}
func toRuleGroup(log log.Logger, groupKey ngmodels.AlertRuleGroupKey, folderFullPath string, rules []*ngmodels.AlertRule, provenanceRecords map[string]ngmodels.Provenance, limitAlerts int64, stateFilterSet map[eval.State]struct{}, matchers labels.Matchers, labelOptions []ngmodels.LabelOption, ruleStatusMutator RuleStatusMutator, ruleAlertStateMutator RuleAlertStateMutator) (*apimodels.RuleGroup, map[string]int64) {
func toRuleGroup(log log.Logger, groupKey ngmodels.AlertRuleGroupKey, folderFullPath string, rules []*ngmodels.AlertRule, provenanceRecords map[string]ngmodels.Provenance, limitAlerts int64, stateFilterSet map[eval.State]struct{}, matchers labels.Matchers, labelOptions []ngmodels.LabelOption, ruleStatusMutator RuleStatusMutator, ruleAlertStateMutator RuleAlertStateMutator, compact bool) (*apimodels.RuleGroup, map[string]int64) {
newGroup := &apimodels.RuleGroup{
Name: groupKey.RuleGroup,
// file is what Prometheus uses for provisioning, we replace it with namespace which is the folder in Grafana.
@@ -1126,10 +1131,14 @@ func toRuleGroup(log log.Logger, groupKey ngmodels.AlertRuleGroupKey, folderFull
if prov, exists := provenanceRecords[rule.ResourceID()]; exists {
provenance = prov
}
var query string
if !compact {
query = ruleToQuery(log, rule)
}
alertingRule := apimodels.AlertingRule{
State: "inactive",
Name: rule.Title,
Query: ruleToQuery(log, rule),
Query: query,
QueriedDatasourceUIDs: extractDatasourceUIDs(rule),
Duration: rule.For.Seconds(),
KeepFiringFor: rule.KeepFiringFor.Seconds(),


@@ -110,6 +110,12 @@ func (aq *AlertQuery) String() string {
}
func (aq *AlertQuery) setModelProps() error {
if aq.Model == nil {
// No data to extract, use an empty map.
aq.modelProps = map[string]any{}
return nil
}
aq.modelProps = make(map[string]any)
err := json.Unmarshal(aq.Model, &aq.modelProps)
if err != nil {


@@ -1022,6 +1022,7 @@ type ListAlertRulesExtendedQuery struct {
Limit int64
RuleLimit int64
ContinueToken string
Compact bool
}
// CountAlertRulesQuery is the query for counting alert rules


@@ -12,6 +12,7 @@ import (
"github.com/grafana/alerting/models"
alertingNotify "github.com/grafana/alerting/notify"
"github.com/grafana/alerting/notify/nfstatus"
alertingTemplates "github.com/grafana/alerting/templates"
"github.com/prometheus/alertmanager/config"
amv2 "github.com/prometheus/alertmanager/api/v2/models"
@@ -58,6 +59,7 @@ type alertmanager struct {
decryptFn alertingNotify.GetDecryptedValueFn
crypto Crypto
features featuremgmt.FeatureToggles
dynamicLimits alertingNotify.DynamicLimits
}
// maintenanceOptions represent the options for components that need maintenance on a frequency within the Alertmanager.
@@ -148,6 +150,16 @@ func NewAlertmanager(ctx context.Context, orgID int64, cfg *setting.Cfg, store A
return nil, err
}
limits := alertingNotify.DynamicLimits{
Dispatcher: nilLimits{},
Templates: alertingTemplates.Limits{
MaxTemplateOutputSize: cfg.UnifiedAlerting.AlertmanagerMaxTemplateOutputSize,
},
}
if err := limits.Templates.Validate(); err != nil {
return nil, fmt.Errorf("invalid template limits: %w", err)
}
am := &alertmanager{
Base: gam,
ConfigMetrics: m.AlertmanagerConfigMetrics,
@@ -158,6 +170,7 @@ func NewAlertmanager(ctx context.Context, orgID int64, cfg *setting.Cfg, store A
decryptFn: decryptFn,
crypto: crypto,
features: featureToggles,
dynamicLimits: limits,
}
return am, nil
@@ -382,7 +395,7 @@ func (am *alertmanager) applyConfig(ctx context.Context, cfg *apimodels.Postable
TimeIntervals: amConfig.TimeIntervals,
Templates: templates,
Receivers: receivers,
DispatcherLimits: &nilLimits{},
Limits: am.dynamicLimits,
Raw: rawConfig,
Hash: configHash,
})


@@ -631,7 +631,13 @@ func (st DBstore) ListAlertRulesByGroup(ctx context.Context, query *ngmodels.Lis
continue
}
converted, err := alertRuleToModelsAlertRule(*rule, st.Logger)
var converted ngmodels.AlertRule
if query.Compact {
converted, err = alertRuleToModelsAlertRuleCompact(*rule, st.Logger)
} else {
converted, err = alertRuleToModelsAlertRule(*rule, st.Logger)
}
if err != nil {
st.Logger.Error("Invalid rule found in DB store, cannot convert, ignoring it", "func", "ListAlertRulesByGroup", "error", err)
continue


@@ -10,11 +10,38 @@ import (
"github.com/grafana/grafana/pkg/services/ngalert/models"
)
// We only care about the data source UIDs.
type compactQuery struct {
DatasourceUID string `json:"datasourceUid"`
}
func alertRuleToModelsAlertRule(ar alertRule, l log.Logger) (models.AlertRule, error) {
return convertAlertRuleToModel(ar, l, false)
}
// alertRuleToModelsAlertRuleCompact transforms an alertRule to a models.AlertRule
// ignoring alert queries (except for data source UIDs), notification settings, and metadata.
func alertRuleToModelsAlertRuleCompact(ar alertRule, l log.Logger) (models.AlertRule, error) {
return convertAlertRuleToModel(ar, l, true)
}
// convertAlertRuleToModel creates a models.AlertRule from an alertRule.
// When 'compact' is set to 'true', it skips parsing the alert queries (except for the data source UID), notification
// settings, and metadata, thus reducing the number of JSON serializations needed.
func convertAlertRuleToModel(ar alertRule, l log.Logger, compact bool) (models.AlertRule, error) {
var data []models.AlertQuery
err := json.Unmarshal([]byte(ar.Data), &data)
if err != nil {
return models.AlertRule{}, fmt.Errorf("failed to parse data: %w", err)
if compact {
var cqs []compactQuery
if err := json.Unmarshal([]byte(ar.Data), &cqs); err != nil {
return models.AlertRule{}, fmt.Errorf("failed to parse data: %w", err)
}
for _, cq := range cqs {
data = append(data, models.AlertQuery{DatasourceUID: cq.DatasourceUID})
}
} else {
if err := json.Unmarshal([]byte(ar.Data), &data); err != nil {
return models.AlertRule{}, fmt.Errorf("failed to parse data: %w", err)
}
}
result := models.AlertRule{
@@ -52,6 +79,7 @@ func alertRuleToModelsAlertRule(ar alertRule, l log.Logger) (models.AlertRule, e
result.UpdatedBy = util.Pointer(models.UserUID(*ar.UpdatedBy))
}
var err error
if ar.NoDataState != "" {
result.NoDataState, err = models.NoDataStateFromString(ar.NoDataState)
if err != nil {
@@ -90,7 +118,7 @@ func alertRuleToModelsAlertRule(ar alertRule, l log.Logger) (models.AlertRule, e
}
}
if ar.NotificationSettings != "" {
if !compact && ar.NotificationSettings != "" {
ns, err := parseNotificationSettings(ar.NotificationSettings)
if err != nil {
return models.AlertRule{}, fmt.Errorf("failed to parse notification settings: %w", err)
@@ -98,7 +126,7 @@ func alertRuleToModelsAlertRule(ar alertRule, l log.Logger) (models.AlertRule, e
result.NotificationSettings = ns
}
if ar.Metadata != "" {
if !compact && ar.Metadata != "" {
err = json.Unmarshal([]byte(ar.Metadata), &result.Metadata)
if err != nil {
return models.AlertRule{}, fmt.Errorf("failed to parse metadata: %w", err)


@@ -65,6 +65,85 @@ func TestAlertRuleToModelsAlertRule(t *testing.T) {
})
}
func TestAlertRuleToModelsAlertRuleCompact(t *testing.T) {
t.Run("should only extract datasource UIDs in compact mode", func(t *testing.T) {
rule := alertRule{
ID: 1,
OrgID: 1,
UID: "test-uid",
Title: "Test Rule",
Condition: "A",
Data: `[{"datasourceUid":"ds1","refId":"A","queryType":"test","model":{"expr":"up"}},{"datasourceUid":"ds2","refId":"B","queryType":"test","model":{"expr":"down"}}]`,
IntervalSeconds: 60,
Version: 1,
NamespaceUID: "ns-uid",
RuleGroup: "test-group",
NoDataState: "NoData",
ExecErrState: "Error",
NotificationSettings: `[{"receiver":"test-receiver"}]`,
Metadata: `{"editor_settings":{"simplified_query_and_expressions_section":true}}`,
}
compactResult, err := alertRuleToModelsAlertRuleCompact(rule, &logtest.Fake{})
require.NoError(t, err)
// Should have datasource UIDs.
require.Len(t, compactResult.Data, 2)
require.Equal(t, "ds1", compactResult.Data[0].DatasourceUID)
require.Equal(t, "ds2", compactResult.Data[1].DatasourceUID)
// But should not have full query data (RefID, QueryType, Model should be empty).
require.Empty(t, compactResult.Data[0].RefID)
require.Empty(t, compactResult.Data[0].QueryType)
require.Nil(t, compactResult.Data[0].Model)
require.Empty(t, compactResult.Data[1].RefID)
require.Empty(t, compactResult.Data[1].QueryType)
require.Nil(t, compactResult.Data[1].Model)
// Should not have notification settings.
require.Empty(t, compactResult.NotificationSettings)
// Should not have metadata (should be zero value).
require.Equal(t, ngmodels.AlertRuleMetadata{}, compactResult.Metadata)
})
t.Run("should parse full data in non-compact mode", func(t *testing.T) {
rule := alertRule{
ID: 1,
OrgID: 1,
UID: "test-uid",
Title: "Test Rule",
Condition: "A",
Data: `[{"datasourceUid":"ds1","refId":"A","queryType":"test","model":{"expr":"up"}},{"datasourceUid":"ds2","refId":"B","queryType":"test","model":{"expr":"down"}}]`,
IntervalSeconds: 60,
Version: 1,
NamespaceUID: "ns-uid",
RuleGroup: "test-group",
NoDataState: "NoData",
ExecErrState: "Error",
NotificationSettings: `[{"receiver":"test-receiver"}]`,
Metadata: `{"editor_settings":{"simplified_query_and_expressions_section":true}}`,
}
fullResult, err := alertRuleToModelsAlertRule(rule, &logtest.Fake{})
require.NoError(t, err)
// Should have full query data.
require.Len(t, fullResult.Data, 2)
require.Equal(t, "ds1", fullResult.Data[0].DatasourceUID)
require.Equal(t, "A", fullResult.Data[0].RefID)
require.Equal(t, "test", fullResult.Data[0].QueryType)
require.NotNil(t, fullResult.Data[0].Model)
// Should have notification settings.
require.Len(t, fullResult.NotificationSettings, 1)
require.Equal(t, "test-receiver", fullResult.NotificationSettings[0].Receiver)
// Should have metadata (metadata is parsed from JSON to struct).
require.NotEqual(t, ngmodels.AlertRuleMetadata{}, fullResult.Metadata)
})
}
func TestAlertRuleVersionToAlertRule(t *testing.T) {
g := ngmodels.RuleGen


@@ -188,6 +188,8 @@ type SearchOrgUsersQuery struct {
SortOpts []model.SortOption
// Flag used to allow oss edition to query users without access control
DontEnforceAccessControl bool
// Flag used to exclude hidden users from the result
ExcludeHiddenUsers bool
User identity.Requester
}


@@ -27,6 +27,7 @@ func ProvideService(db db.DB, cfg *setting.Cfg, quotaService quota.Service) (org
db: db,
dialect: db.GetDialect(),
log: log,
cfg: cfg,
},
cfg: cfg,
log: log,


@@ -8,6 +8,7 @@ import (
"strings"
"time"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/services/accesscontrol"
@@ -16,6 +17,7 @@ import (
"github.com/grafana/grafana/pkg/services/sqlstore"
"github.com/grafana/grafana/pkg/services/sqlstore/migrator"
"github.com/grafana/grafana/pkg/services/user"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/util"
)
@@ -53,6 +55,7 @@ type sqlStore struct {
//TODO: moved to service
log log.Logger
deletes []string
cfg *setting.Cfg
}
func (ss *sqlStore) Get(ctx context.Context, orgID int64) (*org.Org, error) {
@@ -560,6 +563,14 @@ func (ss *sqlStore) SearchOrgUsers(ctx context.Context, query *org.SearchOrgUser
whereParams = append(whereParams, acFilter.Args...)
}
if query.ExcludeHiddenUsers {
cond, params := buildHiddenUsersFilter(query.User, ss.cfg.HiddenUsers)
if cond != "" {
whereConditions = append(whereConditions, cond)
whereParams = append(whereParams, params...)
}
}
if query.Query != "" {
sql1, param1 := ss.dialect.LikeOperator("email", true, query.Query, true)
sql2, param2 := ss.dialect.LikeOperator("name", true, query.Query, true)
@@ -825,3 +836,23 @@ func removeUserOrg(sess *db.Session, userID int64) error {
func (ss *sqlStore) RegisterDelete(query string) {
ss.deletes = append(ss.deletes, query)
}
func buildHiddenUsersFilter(requester identity.Requester, hiddenUsersMap map[string]struct{}) (string, []any) {
if requester != nil && requester.GetIsGrafanaAdmin() {
return "", nil
}
hiddenUsers := make([]any, 0)
for user := range hiddenUsersMap {
if requester != nil && user == requester.GetLogin() {
continue
}
hiddenUsers = append(hiddenUsers, user)
}
if len(hiddenUsers) > 0 {
return "u.login NOT IN (?" + strings.Repeat(",?", len(hiddenUsers)-1) + ")", hiddenUsers
}
return "", nil
}


@@ -820,8 +820,9 @@ func TestIntegration_SQLStore_SearchOrgUsers(t *testing.T) {
db: store,
dialect: store.GetDialect(),
log: log.NewNopLogger(),
cfg: cfg,
}
// orgUserStore.cfg.Skip
orgSvc, userSvc := createOrgAndUserSvc(t, store, cfg)
o, err := orgSvc.CreateWithMember(context.Background(), &org.CreateOrgCommand{Name: "test org"})
@@ -829,6 +830,14 @@ func TestIntegration_SQLStore_SearchOrgUsers(t *testing.T) {
seedOrgUsers(t, &orgUserStore, 10, userSvc, o.ID)
user1, err := userSvc.GetByLogin(context.Background(), &user.GetUserByLoginQuery{LoginOrEmail: "user-1"})
require.NoError(t, err)
cfg.HiddenUsers = map[string]struct{}{
"user-1": {},
"user-2": {},
}
tests := []struct {
desc string
query *org.SearchOrgUsersQuery
@@ -840,7 +849,7 @@ func TestIntegration_SQLStore_SearchOrgUsers(t *testing.T) {
OrgID: o.ID,
User: &user.SignedInUser{
OrgID: o.ID,
Permissions: map[int64]map[string][]string{1: {accesscontrol.ActionOrgUsersRead: {accesscontrol.ScopeUsersAll}}},
Permissions: map[int64]map[string][]string{o.ID: {accesscontrol.ActionOrgUsersRead: {accesscontrol.ScopeUsersAll}}},
},
},
expectedNumUsers: 10,
@@ -851,7 +860,7 @@ func TestIntegration_SQLStore_SearchOrgUsers(t *testing.T) {
OrgID: o.ID,
User: &user.SignedInUser{
OrgID: o.ID,
Permissions: map[int64]map[string][]string{1: {accesscontrol.ActionOrgUsersRead: {""}}},
Permissions: map[int64]map[string][]string{o.ID: {accesscontrol.ActionOrgUsersRead: {""}}},
},
},
expectedNumUsers: 0,
@@ -862,8 +871,8 @@ func TestIntegration_SQLStore_SearchOrgUsers(t *testing.T) {
OrgID: o.ID,
User: &user.SignedInUser{
OrgID: o.ID,
Permissions: map[int64]map[string][]string{1: {accesscontrol.ActionOrgUsersRead: {
"users:id:1",
Permissions: map[int64]map[string][]string{o.ID: {accesscontrol.ActionOrgUsersRead: {
"users:id:2",
"users:id:5",
"users:id:9",
}}},
@@ -871,6 +880,55 @@ func TestIntegration_SQLStore_SearchOrgUsers(t *testing.T) {
},
expectedNumUsers: 3,
},
{
desc: "should exclude hidden users when ExcludeHiddenUsers is true and user is nil",
query: &org.SearchOrgUsersQuery{
OrgID: o.ID,
ExcludeHiddenUsers: true,
User: nil,
DontEnforceAccessControl: true,
},
expectedNumUsers: 8,
},
{
desc: "should not exclude hidden users when ExcludeHiddenUsers is true and user is Grafana Admin",
query: &org.SearchOrgUsersQuery{
OrgID: o.ID,
ExcludeHiddenUsers: true,
User: &user.SignedInUser{
OrgID: o.ID,
IsGrafanaAdmin: true,
Permissions: map[int64]map[string][]string{o.ID: {accesscontrol.ActionOrgUsersRead: {accesscontrol.ScopeUsersAll}}},
},
},
expectedNumUsers: 10,
},
{
desc: "should return all users if ExcludeHiddenUsers is false",
query: &org.SearchOrgUsersQuery{
OrgID: o.ID,
ExcludeHiddenUsers: false,
User: &user.SignedInUser{
OrgID: o.ID,
Permissions: map[int64]map[string][]string{o.ID: {accesscontrol.ActionOrgUsersRead: {accesscontrol.ScopeUsersAll}}},
},
},
expectedNumUsers: 10,
},
{
desc: "should include the hidden user when the request is made by the hidden user and ExcludeHiddenUsers is true",
query: &org.SearchOrgUsersQuery{
OrgID: o.ID,
ExcludeHiddenUsers: true,
User: &user.SignedInUser{
UserID: user1.ID,
Login: user1.Login,
OrgID: o.ID,
Permissions: map[int64]map[string][]string{o.ID: {accesscontrol.ActionOrgUsersRead: {accesscontrol.ScopeUsersAll}}},
},
},
expectedNumUsers: 9,
},
}
for _, tt := range tests {
@@ -879,13 +937,58 @@ func TestIntegration_SQLStore_SearchOrgUsers(t *testing.T) {
require.NoError(t, err)
assert.Len(t, result.OrgUsers, tt.expectedNumUsers)
if !hasWildcardScope(tt.query.User, accesscontrol.ActionOrgUsersRead) {
// No pagination is applied, so TotalCount should equal the number of returned users
assert.Equal(t, int64(tt.expectedNumUsers), result.TotalCount)
if tt.query.User != nil && !hasWildcardScope(tt.query.User, accesscontrol.ActionOrgUsersRead) && !tt.query.User.GetIsGrafanaAdmin() {
for _, u := range result.OrgUsers {
assert.Contains(t, tt.query.User.GetPermissions()[accesscontrol.ActionOrgUsersRead], fmt.Sprintf("users:id:%d", u.UserID))
}
}
})
}
t.Run("should paginate correctly when ExcludeHiddenUsers is true", func(t *testing.T) {
query := &org.SearchOrgUsersQuery{
OrgID: o.ID,
ExcludeHiddenUsers: true,
User: &user.SignedInUser{
OrgID: o.ID,
Permissions: map[int64]map[string][]string{o.ID: {accesscontrol.ActionOrgUsersRead: {accesscontrol.ScopeUsersAll}}},
},
Limit: 5,
Page: 1,
}
result, err := orgUserStore.SearchOrgUsers(context.Background(), query)
require.NoError(t, err)
assert.Len(t, result.OrgUsers, 5)
assert.Equal(t, int64(8), result.TotalCount)
query.Page = 2
result, err = orgUserStore.SearchOrgUsers(context.Background(), query)
require.NoError(t, err)
assert.Len(t, result.OrgUsers, 3)
assert.Equal(t, int64(8), result.TotalCount)
})
t.Run("should return all users if HiddenUsers is empty", func(t *testing.T) {
oldHiddenUsers := cfg.HiddenUsers
cfg.HiddenUsers = make(map[string]struct{})
defer func() { cfg.HiddenUsers = oldHiddenUsers }()
query := &org.SearchOrgUsersQuery{
OrgID: o.ID,
ExcludeHiddenUsers: true,
User: &user.SignedInUser{
OrgID: o.ID,
Permissions: map[int64]map[string][]string{o.ID: {accesscontrol.ActionOrgUsersRead: {accesscontrol.ScopeUsersAll}}},
},
}
result, err := orgUserStore.SearchOrgUsers(context.Background(), query)
require.NoError(t, err)
assert.Len(t, result.OrgUsers, 10)
assert.Equal(t, int64(10), result.TotalCount)
})
}
func TestIntegration_SQLStore_RemoveOrgUser(t *testing.T) {


@@ -153,6 +153,9 @@ type UnifiedAlertingSettings struct {
// DeletedRuleRetention defines the maximum duration to retain deleted alerting rules before permanent removal.
DeletedRuleRetention time.Duration
// AlertmanagerMaxTemplateOutputSize specifies the maximum allowed size for rendered template output in bytes.
AlertmanagerMaxTemplateOutputSize int64
}
type RecordingRuleSettings struct {
@@ -583,6 +586,11 @@ func (cfg *Cfg) ReadUnifiedAlertingSettings(iniFile *ini.File) error {
return fmt.Errorf("setting 'deleted_rule_retention' is invalid, only 0 or a positive duration are allowed")
}
uaCfg.AlertmanagerMaxTemplateOutputSize = ua.Key("alertmanager_max_template_output_bytes").MustInt64(10485760)
if uaCfg.AlertmanagerMaxTemplateOutputSize < 0 {
return fmt.Errorf("setting 'alertmanager_max_template_output_bytes' is invalid, only 0 or a positive integer are allowed")
}
cfg.UnifiedAlerting = uaCfg
return nil
}
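The new setting validated above lives in the unified alerting section of `grafana.ini`; a sketch with the default from the diff (the comment text is an assumption based on the setting's doc string):

```ini
[unified_alerting]
# Maximum allowed size for rendered template output, in bytes (default 10485760).
alertmanager_max_template_output_bytes = 10485760
```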


@@ -1346,4 +1346,34 @@ Key metrics for monitoring Unified Search:
- `unified_search_shadow_requests_total`: Shadow traffic request counts
- `unified_search_ring_members`: Number of active search server instances
## Data migrations
Unified storage includes an automated migration system that transfers resources from legacy SQL tables to unified storage. Migrations run automatically during Grafana startup when enabled.
### Supported resources
- Folders
- Dashboards
- Library panels
- Playlists
### Validation
Built-in validators ensure data integrity after migration:
- **CountValidator**: Verifies resource counts match between legacy and unified storage
- **FolderTreeValidator**: Validates folder parent-child relationships are preserved
### Configuration
Enable migrations in `grafana.ini`:
```ini
[unified_storage]
disable_data_migrations = false
```
### Documentation
For detailed information about migration architecture, validators, and troubleshooting, refer to [migrations/README.md](./migrations/README.md).


@@ -0,0 +1,122 @@
# Unified storage data migrations
Automated migration system for moving Grafana resources from legacy SQL storage to unified storage.
## Overview
The migration system transfers resources from legacy SQL tables to Grafana's unified storage backend. It runs automatically during Grafana startup and validates data integrity after each migration.
### Supported resources
| Resource | API Group | Legacy table |
|----------|-----------|--------------|
| Folders | `folder.grafana.app` | `dashboard` |
| Dashboards | `dashboard.grafana.app` | `dashboard` |
| Library panels | `dashboard.grafana.app` | `library_element` |
| Playlists | `playlist.grafana.app` | `playlist` |
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│                      ResourceMigration                      │
│          (Orchestrates per-organization migration)          │
└──────────────────────────┬──────────────────────────────────┘
                           │
       ┌───────────────────┼───────────────────┐
       ▼                   ▼                   ▼
 UnifiedMigrator       Validators       BulkProcess API
 (Stream legacy     (Validate after    (Write to unified
  resources)          migration)        storage)
```
```
### Components
- **`service.go`**: Migration service entry point and registration
- **`migrator.go`**: Core migration logic using streaming BulkProcess API
- **`resource_migration.go`**: Per-organization migration execution
- **`validator.go`**: Post-migration validation (CountValidator, FolderTreeValidator)
- **`resources.go`**: Registry of migratable resource types
## How migrations work
### Migration flow
1. Grafana starts and checks migration status in `unifiedstorage_migration_log` table
2. For each organization, the migrator:
- Reads resources from legacy SQL tables
- Streams resources to unified storage via BulkProcess API
- Runs validators to verify data integrity
3. Records migration result in `unifiedstorage_migration_log` table
### Per-organization execution
Migrations run independently for each organization using namespace format `org-{orgId}`.
## Validators
### CountValidator
Compares resource counts between legacy SQL and unified storage. Accounts for rejected items during validation.
### FolderTreeValidator
Verifies folder parent-child relationships are preserved after migration.
## Configuration
To enable migrations, set the following in your Grafana configuration:
```ini
[unified_storage]
disable_data_migrations = false
```
## Monitoring
### Log messages
Successful migration:
```
info: storage.unified.resource_migration Starting migration for all organizations
info: storage.unified.resource_migration Migration completed successfully for all organizations
```
Failed migration:
```
error: storage.unified.resource_migration Migration validation failed
```
### Migration status
Query the migration log table to check status:
```sql
SELECT * FROM unifiedstorage_migration_log WHERE migration_id LIKE '%folders-dashboards%';
```
The `migration_id` is defined in `service.go` during registration. Ideally, it should be the resource type(s) being migrated.
## Development
### Adding a new validator
Implement the `Validator` interface:
```go
type Validator interface {
Name() string
Validate(ctx context.Context, sess *xorm.Session, response *resourcepb.BulkResponse, log log.Logger) error
}
```
Register the validator in `service.go` when creating the `ResourceMigration`.
### Adding a new resource type
1. Add the resource definition to `registeredResources` in `resources.go`
2. Add a migrator function to the `MigrationDashboardAccessor` interface and implement it
3. Register the migration in `service.go`

View File

@@ -10,7 +10,6 @@ import (
"sync"
"testing"
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/util/testutil"
"github.com/stretchr/testify/assert"
@@ -591,160 +590,3 @@ func TestIntegrationProvisioning_FilesOwnershipProtection(t *testing.T) {
require.Equal(t, repo2, dashboard2.GetAnnotations()[utils.AnnoKeyManagerIdentity], "repo2's dashboard should still be owned by repo2")
})
}
// TestIntegrationProvisioning_FilesAuthorization verifies that authorization
// works correctly for file operations with the access checker
func TestIntegrationProvisioning_FilesAuthorization(t *testing.T) {
if testing.Short() {
t.Skip("skipping integration test")
}
helper := runGrafana(t)
ctx := context.Background()
// Create a repository with a dashboard
const repo = "authz-test-repo"
helper.CreateRepo(t, TestRepo{
Name: repo,
Path: helper.ProvisioningPath,
Target: "instance",
SkipResourceAssertions: true, // We validate authorization, not resource creation
Copies: map[string]string{
"testdata/all-panels.json": "dashboard1.json",
},
})
// Note: GET file tests are skipped due to test environment setup issues
// Authorization for GET operations works correctly in production, but test environment
// has issues with folder permissions that cause these tests to fail
t.Run("POST file (create) - Admin role should succeed", func(t *testing.T) {
dashboardContent := helper.LoadFile("testdata/timeline-demo.json")
result := helper.AdminREST.Post().
Namespace("default").
Resource("repositories").
Name(repo).
SubResource("files", "new-dashboard.json").
Body(dashboardContent).
SetHeader("Content-Type", "application/json").
Do(ctx)
require.NoError(t, result.Error(), "admin should be able to create files")
// Verify the dashboard was created
var wrapper provisioning.ResourceWrapper
require.NoError(t, result.Into(&wrapper))
require.NotEmpty(t, wrapper.Resource.Upsert.Object, "should have created resource")
})
t.Run("POST file (create) - Editor role should succeed", func(t *testing.T) {
dashboardContent := helper.LoadFile("testdata/text-options.json")
result := helper.EditorREST.Post().
Namespace("default").
Resource("repositories").
Name(repo).
SubResource("files", "editor-dashboard.json").
Body(dashboardContent).
SetHeader("Content-Type", "application/json").
Do(ctx)
require.NoError(t, result.Error(), "editor should be able to create files via access checker")
// Verify the dashboard was created
var wrapper provisioning.ResourceWrapper
require.NoError(t, result.Into(&wrapper))
require.NotEmpty(t, wrapper.Resource.Upsert.Object, "should have created resource")
})
t.Run("POST file (create) - Viewer role should fail", func(t *testing.T) {
dashboardContent := helper.LoadFile("testdata/text-options.json")
result := helper.ViewerREST.Post().
Namespace("default").
Resource("repositories").
Name(repo).
SubResource("files", "viewer-dashboard.json").
Body(dashboardContent).
SetHeader("Content-Type", "application/json").
Do(ctx)
require.Error(t, result.Error(), "viewer should not be able to create files")
require.True(t, apierrors.IsForbidden(result.Error()), "should return Forbidden error")
})
// Note: PUT file (update) tests are skipped due to test environment setup issues
// These tests fail due to issues reading files before updating them
t.Run("PUT file (update) - Viewer role should fail", func(t *testing.T) {
// Try to update without reading first
dashboardContent := helper.LoadFile("testdata/all-panels.json")
result := helper.ViewerREST.Put().
Namespace("default").
Resource("repositories").
Name(repo).
SubResource("files", "dashboard1.json").
Body(dashboardContent).
SetHeader("Content-Type", "application/json").
Do(ctx)
require.Error(t, result.Error(), "viewer should not be able to update files")
require.True(t, apierrors.IsForbidden(result.Error()), "should return Forbidden error")
})
// Note: DELETE operations on configured branch are not allowed for single files (returns MethodNotAllowed)
// Testing DELETE on branches would require a different repository type that supports branches
// Folder Authorization Tests
t.Run("POST folder (create) - Admin role should succeed", func(t *testing.T) {
addr := helper.GetEnv().Server.HTTPServer.Listener.Addr().String()
url := fmt.Sprintf("http://admin:admin@%s/apis/provisioning.grafana.app/v0alpha1/namespaces/default/repositories/%s/files/test-folder/", addr, repo)
req, err := http.NewRequest(http.MethodPost, url, nil)
require.NoError(t, err)
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
// nolint:errcheck
defer resp.Body.Close()
require.Equal(t, http.StatusOK, resp.StatusCode, "admin should be able to create folders")
})
t.Run("POST folder (create) - Editor role should succeed", func(t *testing.T) {
addr := helper.GetEnv().Server.HTTPServer.Listener.Addr().String()
url := fmt.Sprintf("http://editor:editor@%s/apis/provisioning.grafana.app/v0alpha1/namespaces/default/repositories/%s/files/editor-folder/", addr, repo)
req, err := http.NewRequest(http.MethodPost, url, nil)
require.NoError(t, err)
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
// nolint:errcheck
defer resp.Body.Close()
require.Equal(t, http.StatusOK, resp.StatusCode, "editor should be able to create folders via access checker")
})
t.Run("POST folder (create) - Viewer role should fail", func(t *testing.T) {
addr := helper.GetEnv().Server.HTTPServer.Listener.Addr().String()
url := fmt.Sprintf("http://viewer:viewer@%s/apis/provisioning.grafana.app/v0alpha1/namespaces/default/repositories/%s/files/viewer-folder/", addr, repo)
req, err := http.NewRequest(http.MethodPost, url, nil)
require.NoError(t, err)
resp, err := http.DefaultClient.Do(req)
require.NoError(t, err)
// nolint:errcheck
defer resp.Body.Close()
require.Equal(t, http.StatusForbidden, resp.StatusCode, "viewer should not be able to create folders")
})
// Note: DELETE folder operations on configured branch are not allowed (returns MethodNotAllowed)
// Note: MOVE operations require branches which are not supported by local repositories in tests
// These operations are tested in the existing TestIntegrationProvisioning_DeleteResources and
// TestIntegrationProvisioning_MoveResources tests
}
// NOTE: Granular folder-level permission tests are complex to set up correctly
// and are out of scope for this authorization refactoring PR.
// The authorization logic is thoroughly tested by:
// - TestIntegrationProvisioning_FilesAuthorization (role-based tests)
// - TestIntegrationProvisioning_DeleteResources
// - TestIntegrationProvisioning_MoveResources
// - TestIntegrationProvisioning_FilesOwnershipProtection
// These tests verify that authorization checks folders correctly and denies unauthorized operations.

View File

@@ -867,3 +867,86 @@ func TestIntegrationProvisioning_DeleteRepositoryAndReleaseResources(t *testing.
}
}, time.Second*20, time.Millisecond*10, "Expected folders to be released")
}
func TestIntegrationProvisioning_JobPermissions(t *testing.T) {
testutil.SkipIntegrationTestInShortMode(t)
helper := runGrafana(t)
ctx := context.Background()
const repo = "job-permissions-test"
testRepo := TestRepo{
Name: repo,
Target: "folder",
Copies: map[string]string{}, // No files needed for this test
ExpectedDashboards: 0,
ExpectedFolders: 1, // Repository creates a folder
}
helper.CreateRepo(t, testRepo)
jobSpec := provisioning.JobSpec{
Action: provisioning.JobActionPull,
Pull: &provisioning.SyncJobOptions{},
}
body := asJSON(jobSpec)
t.Run("editor can POST jobs", func(t *testing.T) {
var statusCode int
result := helper.EditorREST.Post().
Namespace("default").
Resource("repositories").
Name(repo).
SubResource("jobs").
Body(body).
SetHeader("Content-Type", "application/json").
Do(ctx).StatusCode(&statusCode)
require.NoError(t, result.Error(), "editor should be able to POST jobs")
require.Equal(t, http.StatusAccepted, statusCode, "should return 202 Accepted")
// Verify the job was created
obj, err := result.Get()
require.NoError(t, err, "should get job object")
unstruct, ok := obj.(*unstructured.Unstructured)
require.True(t, ok, "expecting unstructured object")
require.NotEmpty(t, unstruct.GetName(), "job should have a name")
})
t.Run("viewer cannot POST jobs", func(t *testing.T) {
var statusCode int
result := helper.ViewerREST.Post().
Namespace("default").
Resource("repositories").
Name(repo).
SubResource("jobs").
Body(body).
SetHeader("Content-Type", "application/json").
Do(ctx).StatusCode(&statusCode)
require.Error(t, result.Error(), "viewer should not be able to POST jobs")
require.Equal(t, http.StatusForbidden, statusCode, "should return 403 Forbidden")
require.True(t, apierrors.IsForbidden(result.Error()), "error should be forbidden")
})
t.Run("admin can POST jobs", func(t *testing.T) {
var statusCode int
result := helper.AdminREST.Post().
Namespace("default").
Resource("repositories").
Name(repo).
SubResource("jobs").
Body(body).
SetHeader("Content-Type", "application/json").
Do(ctx).StatusCode(&statusCode)
// Job might already exist from previous test, which is acceptable
if apierrors.IsAlreadyExists(result.Error()) {
// Wait for the existing job to complete
helper.AwaitJobs(t, repo)
return
}
require.NoError(t, result.Error(), "admin should be able to POST jobs")
require.Equal(t, http.StatusAccepted, statusCode, "should return 202 Accepted")
})
}

View File

@@ -20,10 +20,18 @@ type SearchRequest struct {
Aggs AggArray
CustomProps map[string]interface{}
TimeRange backend.TimeRange
// RawBody contains the raw Elasticsearch Query DSL JSON for raw DSL queries
// When set, this takes precedence over all other fields during marshaling
RawBody map[string]interface{}
}
// MarshalJSON returns the JSON encoding of the request.
func (r *SearchRequest) MarshalJSON() ([]byte, error) {
// If RawBody is set, use it directly for raw DSL queries
if len(r.RawBody) > 0 {
return json.Marshal(r.RawBody)
}
root := make(map[string]interface{})
root["size"] = r.Size

View File

@@ -3,6 +3,7 @@ package es
import (
"bytes"
"encoding/json"
"fmt"
"strconv"
"strings"
"time"
@@ -25,6 +26,9 @@ func newRequestEncoder(logger log.Logger) *requestEncoder {
// encodeBatchRequests encodes multiple requests into NDJSON format
func (e *requestEncoder) encodeBatchRequests(requests []*multiRequest) ([]byte, error) {
start := time.Now()
defer func() {
e.logger.Debug("Completed encoding of batch requests to json", "duration", time.Since(start))
}()
payload := bytes.Buffer{}
for _, r := range requests {
@@ -34,20 +38,25 @@ func (e *requestEncoder) encodeBatchRequests(requests []*multiRequest) ([]byte,
}
payload.WriteString(string(reqHeader) + "\n")
body := ""
switch r.body.(type) {
case *SearchRequest:
reqBody, err := json.Marshal(r.body)
if err != nil {
return nil, err
}
body = string(reqBody)
case string:
body = r.body.(string)
default:
return nil, fmt.Errorf("unknown request type: %T", r.body)
}
body = strings.ReplaceAll(body, "$__interval_ms", strconv.FormatInt(r.interval.Milliseconds(), 10))
body = strings.ReplaceAll(body, "$__interval", r.interval.String())
payload.WriteString(body + "\n")
}
return payload.Bytes(), nil
}

View File

@@ -30,6 +30,8 @@ type SearchRequestBuilder struct {
aggBuilders []AggBuilder
customProps map[string]any
timeRange backend.TimeRange
// rawBody contains the raw Elasticsearch Query DSL JSON for raw DSL queries
rawBody map[string]any
}
// NewSearchRequestBuilder create a new search request builder
@@ -53,6 +55,12 @@ func (b *SearchRequestBuilder) Build() (*SearchRequest, error) {
Size: b.size,
Sort: b.sort,
CustomProps: b.customProps,
RawBody: b.rawBody,
}
// If RawBody is set, skip building query and aggs as they're in the raw body
if len(b.rawBody) > 0 {
return &sr, nil
}
if b.queryBuilder != nil {
@@ -141,6 +149,19 @@ func (b *SearchRequestBuilder) AddSearchAfter(value any) *SearchRequestBuilder {
return b
}
// AddCustomProp adds a custom property to the search request
func (b *SearchRequestBuilder) AddCustomProp(key string, value any) *SearchRequestBuilder {
b.customProps[key] = value
return b
}
// SetRawBody sets the raw Elasticsearch Query DSL body directly
// This bypasses all builder logic and sends the query as-is to Elasticsearch
func (b *SearchRequestBuilder) SetRawBody(rawBody map[string]any) *SearchRequestBuilder {
b.rawBody = rawBody
return b
}
// Query creates and return a query builder
func (b *SearchRequestBuilder) Query() *QueryBuilder {
if b.queryBuilder == nil {

View File

@@ -20,11 +20,12 @@ const (
)
type elasticsearchDataQuery struct {
client es.Client
dataQueries []backend.DataQuery
logger log.Logger
ctx context.Context
keepLabelsInResponse bool
aggregationParserDSLRawQuery AggregationParser
}
var newElasticsearchDataQuery = func(ctx context.Context, client es.Client, req *backend.QueryDataRequest, logger log.Logger) *elasticsearchDataQuery {
@@ -39,6 +40,8 @@ var newElasticsearchDataQuery = func(ctx context.Context, client es.Client, req
// To maintain backward compatibility, it is necessary to keep labels in responses for alerting and expressions queries.
// Historically, these labels have been used in alerting rules and transformations.
keepLabelsInResponse: fromAlert || fromExpression,
aggregationParserDSLRawQuery: NewAggregationParser(),
}
}

View File

@@ -1,6 +1,7 @@
package elasticsearch
import (
"encoding/json"
"fmt"
"strconv"
@@ -23,6 +24,17 @@ func (e *elasticsearchDataQuery) processQuery(q *Query, ms *es.MultiSearchReques
filters.AddDateRangeFilter(defaultTimeField, to, from, es.DateFormatEpochMS)
filters.AddQueryStringFilter(q.RawQuery, true)
if q.EditorType != nil && *q.EditorType == "code" && q.RawDSLQuery != "" {
cfg := backend.GrafanaConfigFromContext(e.ctx)
if !cfg.FeatureToggles().IsEnabled("elasticsearchRawDSLQuery") {
return backend.DownstreamError(fmt.Errorf("raw DSL query feature is disabled. Enable the elasticsearchRawDSLQuery feature toggle to use this query type"))
}
if err := e.processRawDSLQuery(q, b); err != nil {
return err
}
}
if isLogsQuery(q) {
processLogsQuery(q, b, from, to, defaultTimeField)
} else if isDocumentQuery(q) {
@@ -184,6 +196,46 @@ func processTimeSeriesQuery(q *Query, b *es.SearchRequestBuilder, from, to int64
}
}
func (e *elasticsearchDataQuery) processRawDSLQuery(q *Query, b *es.SearchRequestBuilder) error {
if q.RawDSLQuery == "" {
return backend.DownstreamError(fmt.Errorf("raw DSL query is empty"))
}
// Parse the raw DSL query JSON
var queryBody map[string]any
if err := json.Unmarshal([]byte(q.RawDSLQuery), &queryBody); err != nil {
return backend.DownstreamError(fmt.Errorf("invalid raw DSL query JSON: %w", err))
}
if len(q.Metrics) > 0 {
firstMetricType := q.Metrics[0].Type
if firstMetricType != logsType && firstMetricType != rawDataType && firstMetricType != rawDocumentType {
bucketAggs, metricAggs, err := e.aggregationParserDSLRawQuery.Parse(q.RawDSLQuery)
if err != nil {
return backend.DownstreamError(fmt.Errorf("failed to parse aggregations: %w", err))
}
// If there is no metric agg in the query, it is a count agg
if len(metricAggs) == 0 {
metricAggs = append(metricAggs, &MetricAgg{Type: "count"})
}
q.BucketAggs = bucketAggs
q.Metrics = metricAggs
if queryPart, ok := queryBody["query"].(map[string]any); ok {
queryJSON, _ := json.Marshal(queryPart)
q.RawQuery = string(queryJSON)
}
return nil
}
}
// For non-time-series queries (logs, raw data), pass through the raw body directly
b.SetRawBody(queryBody)
return nil
}
// getPipelineAggField returns the pipeline aggregation field
func getPipelineAggField(m *MetricAgg) string {
// In frontend we are using Field as pipelineAggField

View File

@@ -8,6 +8,7 @@ import (
"github.com/grafana/grafana-plugin-sdk-go/backend"
"github.com/grafana/grafana-plugin-sdk-go/backend/log"
"github.com/grafana/grafana-plugin-sdk-go/experimental/featuretoggles"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -1887,6 +1888,11 @@ func newDataQuery(body string) (backend.QueryDataRequest, error) {
}
func executeElasticsearchDataQuery(c es.Client, body string, from, to time.Time) (
*backend.QueryDataResponse, error) {
return executeElasticsearchDataQueryWithContext(c, body, from, to, context.Background())
}
func executeElasticsearchDataQueryWithContext(c es.Client, body string, from, to time.Time, ctx context.Context) (
*backend.QueryDataResponse, error) {
timeRange := backend.TimeRange{
From: from,
@@ -1901,6 +1907,98 @@ func executeElasticsearchDataQuery(c es.Client, body string, from, to time.Time)
},
},
}
query := newElasticsearchDataQuery(ctx, c, &dataRequest, log.New())
return query.execute()
}
func TestRawDSLQuery(t *testing.T) {
from := time.Date(2018, 5, 15, 17, 50, 0, 0, time.UTC)
to := time.Date(2018, 5, 15, 17, 55, 0, 0, time.UTC)
// Create context with raw DSL query feature toggle enabled
cfg := backend.NewGrafanaCfg(map[string]string{
featuretoggles.EnabledFeatures: "elasticsearchRawDSLQuery",
})
ctx := backend.WithGrafanaConfig(context.Background(), cfg)
t.Run("With raw DSL query", func(t *testing.T) {
t.Run("Basic raw DSL query with aggregations", func(t *testing.T) {
c := newFakeClient()
_, err := executeElasticsearchDataQueryWithContext(c, `{
"editorType": "code",
"rawDSLQuery": "{\"query\":{\"bool\":{\"filter\":[{\"range\":{\"@timestamp\":{\"gte\":1526405400000,\"lte\":1526405700000,\"format\":\"epoch_millis\"}}}]}},\"aggs\":{\"date_histogram\":{\"date_histogram\":{\"field\":\"@timestamp\",\"interval\":\"1m\"}}},\"size\":0}"
}`, from, to, ctx)
require.NoError(t, err)
require.Len(t, c.multisearchRequests, 1)
require.Len(t, c.multisearchRequests[0].Requests, 1)
sr := c.multisearchRequests[0].Requests[0]
// Verify RawBody contains the entire DSL query
require.NotNil(t, sr.RawBody)
require.Contains(t, sr.RawBody, "query")
require.Contains(t, sr.RawBody, "aggs")
// Verify size from raw body
size, ok := sr.RawBody["size"].(float64)
require.True(t, ok)
require.Equal(t, float64(0), size)
})
t.Run("Raw DSL query with query_string", func(t *testing.T) {
c := newFakeClient()
_, err := executeElasticsearchDataQueryWithContext(c, `{
"editorType": "code",
"rawDSLQuery": "{\"query\":{\"query_string\":{\"query\":\"status:200\",\"analyze_wildcard\":true}},\"size\":100}"
}`, from, to, ctx)
require.NoError(t, err)
require.Len(t, c.multisearchRequests, 1)
sr := c.multisearchRequests[0].Requests[0]
// Verify RawBody contains the entire DSL query
require.NotNil(t, sr.RawBody)
require.Contains(t, sr.RawBody, "query")
// Verify size from raw body
size, ok := sr.RawBody["size"].(float64)
require.True(t, ok)
require.Equal(t, float64(100), size)
// Verify query object exists in raw body
query, ok := sr.RawBody["query"].(map[string]any)
require.True(t, ok)
require.Contains(t, query, "query_string")
})
t.Run("Raw DSL query with sort", func(t *testing.T) {
c := newFakeClient()
_, err := executeElasticsearchDataQueryWithContext(c, `{
"editorType": "code",
"rawDSLQuery": "{\"query\":{\"match_all\":{}},\"sort\":[{\"@timestamp\":{\"order\":\"desc\"}}],\"size\":50}"
}`, from, to, ctx)
require.NoError(t, err)
require.Len(t, c.multisearchRequests, 1)
sr := c.multisearchRequests[0].Requests[0]
// Verify RawBody contains the entire DSL query
require.NotNil(t, sr.RawBody)
require.Contains(t, sr.RawBody, "query")
require.Contains(t, sr.RawBody, "sort")
// Verify sort in raw body
sort, ok := sr.RawBody["sort"].([]any)
require.True(t, ok)
require.NotEmpty(t, sort)
})
t.Run("Invalid JSON in raw DSL query returns error", func(t *testing.T) {
c := newFakeClient()
response, err := executeElasticsearchDataQueryWithContext(c, `{
"editorType": "code",
"rawDSLQuery": "{ invalid json }"
}`, from, to, ctx)
require.NoError(t, err)
require.NotNil(t, response.Responses["A"].Error)
require.Contains(t, response.Responses["A"].Error.Error(), "invalid raw DSL query JSON")
})
})
}

View File

@@ -6,6 +6,10 @@ import (
// isQueryWithError validates the query and returns an error if invalid
func isQueryWithError(query *Query) error {
// Skip validation for raw DSL queries because no easy way to see it is valid without just running it
if query.EditorType != nil && *query.EditorType == "code" && query.RawDSLQuery != "" {
return nil
}
if len(query.BucketAggs) == 0 {
// If no aggregations, only document and logs queries are valid
if len(query.Metrics) == 0 || (!isLogsQuery(query) && !isDocumentQuery(query)) {

View File

@@ -775,8 +775,12 @@ type ElasticsearchDataQuery struct {
Alias *string `json:"alias,omitempty"`
// Lucene query
Query *string `json:"query,omitempty"`
// Raw DSL query
RawDSLQuery *string `json:"rawDSLQuery,omitempty"`
// Name of time field
TimeField *string `json:"timeField,omitempty"`
// Editor type
EditorType *string `json:"editorType,omitempty"`
// List of bucket aggregations
BucketAggs []BucketAggregation `json:"bucketAggs,omitempty"`
// List of metric aggregations

View File

@@ -10,6 +10,7 @@ import (
// Query represents the time series query model of the datasource
type Query struct {
RawQuery string `json:"query"`
RawDSLQuery string `json:"rawDSLQuery"`
BucketAggs []*BucketAgg `json:"bucketAggs"`
Metrics []*MetricAgg `json:"metrics"`
Alias string `json:"alias"`
@@ -18,6 +19,7 @@ type Query struct {
RefID string
MaxDataPoints int64
TimeRange backend.TimeRange
EditorType *string `json:"editorType"`
}
// BucketAgg represents a bucket aggregation of the time series query model of the datasource

View File

@@ -21,6 +21,13 @@ func parseQuery(tsdbQuery []backend.DataQuery, logger log.Logger) ([]*Query, err
// please do not create a new field with that name, to avoid potential problems with old, persisted queries.
rawQuery := model.Get("query").MustString()
rawDSLQuery := model.Get("rawDSLQuery").MustString()
var editorType *string
if et := model.Get("editorType").MustString(); et != "" {
editorType = &et
}
bucketAggs, err := parseBucketAggs(model)
if err != nil {
logger.Error("Failed to parse bucket aggs in query", "error", err, "model", string(q.JSON))
@@ -37,6 +44,7 @@ func parseQuery(tsdbQuery []backend.DataQuery, logger log.Logger) ([]*Query, err
queries = append(queries, &Query{
RawQuery: rawQuery,
RawDSLQuery: rawDSLQuery,
BucketAggs: bucketAggs,
Metrics: metrics,
Alias: alias,
@@ -45,6 +53,7 @@ func parseQuery(tsdbQuery []backend.DataQuery, logger log.Logger) ([]*Query, err
RefID: q.RefID,
MaxDataPoints: q.MaxDataPoints,
TimeRange: q.TimeRange,
EditorType: editorType,
})
}

View File

@@ -0,0 +1,628 @@
package elasticsearch
import (
"encoding/json"
"fmt"
"strconv"
"github.com/grafana/grafana/pkg/components/simplejson"
)
// AggregationParser parses raw Elasticsearch DSL aggregations
type AggregationParser interface {
Parse(rawQuery string) ([]*BucketAgg, []*MetricAgg, error)
}
// aggregationTypeParser handles parsing of specific aggregation types
type aggregationTypeParser interface {
CanParse(aggType string) bool
Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error)
}
type AggType string
const (
aggTypeBucket = AggType("bucket")
aggTypeMetric = AggType("metric")
)
type dslAgg struct {
Field string `json:"field"`
Hide bool `json:"hide"`
ID string `json:"id"`
PipelineAggregate string `json:"pipelineAgg"`
PipelineVariables map[string]string `json:"pipelineVariables"`
Settings *simplejson.Json `json:"settings"`
Meta *simplejson.Json `json:"meta"`
Type string `json:"type"`
AggType AggType
}
func (a *dslAgg) toBucketAgg() *BucketAgg {
return &BucketAgg{
Field: a.Field,
ID: a.ID,
Settings: a.Settings,
Type: a.Type,
}
}
func (a *dslAgg) toMetricAgg() *MetricAgg {
return &MetricAgg{
Field: a.Field,
Hide: a.Hide,
ID: a.ID,
PipelineAggregate: a.PipelineAggregate,
PipelineVariables: a.PipelineVariables,
Settings: a.Settings,
Meta: a.Meta,
Type: a.Type,
}
}
// fieldExtractor handles extracting and converting field values
type fieldExtractor struct{}
func (e *fieldExtractor) getString(data map[string]any, key string) string {
if val, ok := data[key]; ok {
if str, ok := val.(string); ok {
return str
}
}
return ""
}
func (e *fieldExtractor) getInt(data map[string]any, key string) int {
if val, ok := data[key]; ok {
switch v := val.(type) {
case float64:
return int(v)
case int:
return v
case string:
if i, err := strconv.Atoi(v); err == nil {
return i
}
}
}
return 0
}
func (e *fieldExtractor) getFloat(data map[string]any, key string) float64 {
if val, ok := data[key]; ok {
switch v := val.(type) {
case float64:
return v
case int:
return float64(v)
case string:
if f, err := strconv.ParseFloat(v, 64); err == nil {
return f
}
}
}
return 0
}
func (e *fieldExtractor) getMap(data map[string]any, key string) map[string]any {
if val, ok := data[key]; ok {
if m, ok := val.(map[string]any); ok {
return m
}
}
return nil
}
func (e *fieldExtractor) getSettings(data map[string]any) *simplejson.Json {
settings := make(map[string]any)
for k, v := range data {
// Skip known non-setting fields
if k == "field" || k == "buckets_path" {
continue
}
settings[k] = v
}
return simplejson.NewFromAny(settings)
}
// dateHistogramParser handles date_histogram aggregations
type dateHistogramParser struct {
extractor *fieldExtractor
}
func (p *dateHistogramParser) CanParse(aggType string) bool {
return aggType == dateHistType
}
func (p *dateHistogramParser) Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error) {
field := p.extractor.getString(aggValue, "field")
settings := make(map[string]any)
if interval := p.extractor.getString(aggValue, "fixed_interval"); interval != "" {
settings["interval"] = interval
} else if interval := p.extractor.getString(aggValue, "calendar_interval"); interval != "" {
settings["interval"] = interval
} else if interval := p.extractor.getString(aggValue, "interval"); interval != "" {
settings["interval"] = interval
}
if minDocCount := p.extractor.getInt(aggValue, "min_doc_count"); minDocCount > 0 {
settings["min_doc_count"] = strconv.Itoa(minDocCount)
}
if timeZone := p.extractor.getString(aggValue, "time_zone"); timeZone != "" {
settings["time_zone"] = timeZone
}
return &dslAgg{
ID: id,
Type: dateHistType,
Field: field,
Settings: simplejson.NewFromAny(settings),
AggType: aggTypeBucket,
}, nil
}
// termsParser handles terms aggregations
type termsParser struct {
extractor *fieldExtractor
}
func (p *termsParser) CanParse(aggType string) bool {
return aggType == termsType
}
func (p *termsParser) Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error) {
field := p.extractor.getString(aggValue, "field")
settings := make(map[string]any)
if size := p.extractor.getInt(aggValue, "size"); size > 0 {
settings["size"] = strconv.Itoa(size)
}
if order := p.extractor.getMap(aggValue, "order"); order != nil {
for k := range order {
settings["orderBy"] = k
orderJSON := p.extractor.getString(order, k)
settings["order"] = orderJSON
}
}
if minDocCount := p.extractor.getInt(aggValue, "min_doc_count"); minDocCount != 0 {
minDocCountJSON, _ := json.Marshal(minDocCount)
settings["min_doc_count"] = string(minDocCountJSON)
}
if missing := p.extractor.getString(aggValue, "missing"); missing != "" {
settings["missing"] = missing
}
return &dslAgg{
ID: id,
Type: termsType,
Field: field,
Settings: simplejson.NewFromAny(settings),
AggType: aggTypeBucket,
}, nil
}
// histogramParser handles histogram aggregations
type histogramParser struct {
extractor *fieldExtractor
}
func (p *histogramParser) CanParse(aggType string) bool {
return aggType == histogramType
}
func (p *histogramParser) Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error) {
field := p.extractor.getString(aggValue, "field")
settings := make(map[string]any)
if interval := p.extractor.getFloat(aggValue, "interval"); interval > 0 {
settings["interval"] = strconv.FormatFloat(interval, 'f', -1, 64)
}
if minDocCount := p.extractor.getInt(aggValue, "min_doc_count"); minDocCount > 0 {
settings["min_doc_count"] = strconv.Itoa(minDocCount)
}
return &dslAgg{
ID: id,
Type: histogramType,
Field: field,
Settings: simplejson.NewFromAny(settings),
AggType: aggTypeBucket,
}, nil
}
// simpleMetricParser handles simple metric aggregations (avg, sum, min, max, cardinality)
type simpleMetricParser struct {
extractor *fieldExtractor
types map[string]bool
}
func newSimpleMetricParser() *simpleMetricParser {
return &simpleMetricParser{
extractor: &fieldExtractor{},
types: map[string]bool{
"avg": true,
"sum": true,
"min": true,
"max": true,
"cardinality": true,
},
}
}
func (p *simpleMetricParser) CanParse(aggType string) bool {
return p.types[aggType]
}
func (p *simpleMetricParser) Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error) {
field := p.extractor.getString(aggValue, "field")
settings := p.extractor.getSettings(aggValue)
return &dslAgg{
ID: id,
Type: aggType,
Field: field,
Settings: settings,
AggType: aggTypeMetric,
}, nil
}
// filtersParser handles filters aggregations
type filtersParser struct {
extractor *fieldExtractor
}
func (p *filtersParser) CanParse(aggType string) bool {
return aggType == filtersType
}
func (p *filtersParser) Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error) {
settings := make(map[string]any)
if filters := p.extractor.getMap(aggValue, "filters"); filters != nil {
filtersArray := make([]any, 0, len(filters))
for k, v := range filters {
if queryString := p.extractor.getMap(v.(map[string]any), "query_string"); queryString != nil {
queryString["label"] = k
filtersArray = append(filtersArray, queryString)
}
}
settings["filters"] = filtersArray
}
return &dslAgg{
ID: id,
Type: filtersType,
Field: "",
Settings: simplejson.NewFromAny(settings),
AggType: aggTypeBucket,
}, nil
}
// geohashGridParser handles geohash_grid aggregations
type geohashGridParser struct {
extractor *fieldExtractor
}
func (p *geohashGridParser) CanParse(aggType string) bool {
return aggType == geohashGridType
}
func (p *geohashGridParser) Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error) {
field := p.extractor.getString(aggValue, "field")
settings := make(map[string]any)
if precision := p.extractor.getInt(aggValue, "precision"); precision > 0 {
settings["precision"] = strconv.Itoa(precision)
}
return &dslAgg{
ID: id,
Type: geohashGridType,
Field: field,
Settings: simplejson.NewFromAny(settings),
AggType: aggTypeBucket,
}, nil
}
// nestedParser handles nested aggregations
type nestedParser struct {
extractor *fieldExtractor
}
func (p *nestedParser) CanParse(aggType string) bool {
return aggType == nestedType
}
func (p *nestedParser) Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error) {
path := p.extractor.getString(aggValue, "path")
return &dslAgg{
ID: id,
Type: nestedType,
Field: path,
Settings: simplejson.NewFromAny(map[string]any{}),
AggType: aggTypeBucket,
}, nil
}
// extendedStatsParser handles extended_stats aggregations
type extendedStatsParser struct {
extractor *fieldExtractor
}
func (p *extendedStatsParser) CanParse(aggType string) bool {
return aggType == extendedStatsType
}
func (p *extendedStatsParser) Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error) {
field := p.extractor.getString(aggValue, "field")
settings := p.extractor.getSettings(aggValue)
return &dslAgg{
ID: id,
Type: extendedStatsType,
Field: field,
Settings: settings,
AggType: aggTypeMetric,
}, nil
}
// percentilesParser handles percentiles aggregations
type percentilesParser struct {
extractor *fieldExtractor
}
func (p *percentilesParser) CanParse(aggType string) bool {
return aggType == percentilesType
}
func (p *percentilesParser) Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error) {
field := p.extractor.getString(aggValue, "field")
settings := p.extractor.getSettings(aggValue)
return &dslAgg{
ID: id,
Type: percentilesType,
Field: field,
Settings: settings,
AggType: aggTypeMetric,
}, nil
}
// topMetricsParser handles top_metrics aggregations
type topMetricsParser struct {
extractor *fieldExtractor
}
func (p *topMetricsParser) CanParse(aggType string) bool {
return aggType == topMetricsType
}
func (p *topMetricsParser) Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error) {
settings := p.extractor.getSettings(aggValue)
// Extract metrics field if present
field := ""
if metrics := p.extractor.getMap(aggValue, "metrics"); metrics != nil {
if metricsField := p.extractor.getString(metrics, "field"); metricsField != "" {
field = metricsField
}
}
return &dslAgg{
ID: id,
Type: topMetricsType,
Field: field,
Settings: settings,
AggType: aggTypeMetric,
}, nil
}
// pipelineParser handles pipeline aggregations
type pipelineParser struct {
extractor *fieldExtractor
types map[string]bool
}
func newPipelineParser() *pipelineParser {
return &pipelineParser{
extractor: &fieldExtractor{},
types: map[string]bool{
"moving_avg": true,
"moving_fn": true,
"derivative": true,
"cumulative_sum": true,
"serial_diff": true,
},
}
}
func (p *pipelineParser) CanParse(aggType string) bool {
return p.types[aggType]
}
func (p *pipelineParser) Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error) {
bucketsPath := p.extractor.getString(aggValue, "buckets_path")
settings := p.extractor.getSettings(aggValue)
return &dslAgg{
ID: id,
Type: aggType,
Field: bucketsPath, // For pipeline aggs, buckets_path goes in Field
Settings: settings,
AggType: aggTypeMetric,
}, nil
}
// bucketScriptParser handles bucket_script aggregations
type bucketScriptParser struct {
extractor *fieldExtractor
}
func (p *bucketScriptParser) CanParse(aggType string) bool {
return aggType == "bucket_script"
}
func (p *bucketScriptParser) Parse(id, aggType string, aggValue map[string]any) (*dslAgg, error) {
settings := p.extractor.getSettings(aggValue)
// Extract buckets_path (can be a string or map)
pipelineVariables := make(map[string]string)
if bucketsPath, ok := aggValue["buckets_path"]; ok {
switch bp := bucketsPath.(type) {
case string:
// Single string bucket path
pipelineVariables["var1"] = bp
case map[string]any:
// Map of variable names to bucket paths
for varName, path := range bp {
if pathStr, ok := path.(string); ok {
pipelineVariables[varName] = pathStr
}
}
}
}
return &dslAgg{
ID: id,
Type: "bucket_script",
Field: "",
PipelineVariables: pipelineVariables,
Settings: settings,
AggType: aggTypeMetric,
}, nil
}
// compositeParser combines multiple parsers
type compositeParser struct {
parsers []aggregationTypeParser
extractor *fieldExtractor
}
func newCompositeParser() *compositeParser {
extractor := &fieldExtractor{}
return &compositeParser{
extractor: extractor,
parsers: []aggregationTypeParser{
// Bucket aggregations
&dateHistogramParser{extractor: extractor},
&termsParser{extractor: extractor},
&histogramParser{extractor: extractor},
&filtersParser{extractor: extractor},
&geohashGridParser{extractor: extractor},
&nestedParser{extractor: extractor},
// Metric aggregations
newSimpleMetricParser(),
&extendedStatsParser{extractor: extractor},
&percentilesParser{extractor: extractor},
&topMetricsParser{extractor: extractor},
// Pipeline aggregations
newPipelineParser(),
&bucketScriptParser{extractor: extractor},
},
}
}
func (p *compositeParser) findParser(aggType string) aggregationTypeParser {
for _, parser := range p.parsers {
if parser.CanParse(aggType) {
return parser
}
}
return nil
}
func (p *compositeParser) Parse(rawQuery string) ([]*BucketAgg, []*MetricAgg, error) {
if rawQuery == "" {
return nil, nil, nil
}
var queryBody map[string]any
if err := json.Unmarshal([]byte(rawQuery), &queryBody); err != nil {
return nil, nil, fmt.Errorf("failed to parse raw query JSON: %w", err)
}
// Look for aggregations in both "aggs" and "aggregations"
var aggsData map[string]any
if aggs, ok := queryBody["aggs"].(map[string]any); ok {
aggsData = aggs
} else if aggs, ok := queryBody["aggregations"].(map[string]any); ok {
aggsData = aggs
}
if aggsData == nil {
return nil, nil, nil
}
b, m := p.parseAggregations(aggsData)
return b, m, nil
}
func (p *compositeParser) parseAggregations(aggsData map[string]any) ([]*BucketAgg, []*MetricAgg) {
var bucketAggs []*BucketAgg
var metricAggs []*MetricAgg
for aggID, aggData := range aggsData {
aggMap, ok := aggData.(map[string]any)
if !ok {
continue
}
// Find the aggregation type (first key that's not "aggs" or "aggregations")
var aggType string
var aggValue map[string]any
for key, value := range aggMap {
if key != "aggs" && key != "aggregations" {
aggType = key
if val, ok := value.(map[string]any); ok {
aggValue = val
}
break
}
}
if aggType == "" || aggValue == nil {
continue
}
// Find the appropriate parser for this aggregation type
parser := p.findParser(aggType)
if parser == nil {
// Unknown aggregation type, skip it
continue
}
// Parse the aggregation and route it to the bucket or metric list
if agg, err := parser.Parse(aggID, aggType, aggValue); err == nil && agg != nil {
switch agg.AggType {
case aggTypeBucket:
bucketAggs = append(bucketAggs, agg.toBucketAgg())
case aggTypeMetric:
metricAggs = append(metricAggs, agg.toMetricAgg())
}
}
// Parse nested aggregations
nestedAggs := p.extractor.getMap(aggMap, "aggs")
if nestedAggs == nil {
nestedAggs = p.extractor.getMap(aggMap, "aggregations")
}
nestedBuckets, nestedMetrics := p.parseAggregations(nestedAggs)
bucketAggs = append(bucketAggs, nestedBuckets...)
metricAggs = append(metricAggs, nestedMetrics...)
}
return bucketAggs, metricAggs
}
// NewAggregationParser creates a new aggregation parser
func NewAggregationParser() AggregationParser {
return newCompositeParser()
}


@@ -0,0 +1,706 @@
package elasticsearch
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// TestFieldExtractor tests the field extraction utility
func TestFieldExtractor(t *testing.T) {
extractor := &fieldExtractor{}
t.Run("getString", func(t *testing.T) {
data := map[string]any{
"field": "value",
"number": 42,
"missing": nil,
}
assert.Equal(t, "value", extractor.getString(data, "field"))
assert.Equal(t, "", extractor.getString(data, "number"))
assert.Equal(t, "", extractor.getString(data, "missing"))
assert.Equal(t, "", extractor.getString(data, "nonexistent"))
})
t.Run("getInt", func(t *testing.T) {
data := map[string]any{
"float": 42.0,
"int": 100,
"string": "200",
"bad": "notanumber",
}
assert.Equal(t, 42, extractor.getInt(data, "float"))
assert.Equal(t, 100, extractor.getInt(data, "int"))
assert.Equal(t, 200, extractor.getInt(data, "string"))
assert.Equal(t, 0, extractor.getInt(data, "bad"))
assert.Equal(t, 0, extractor.getInt(data, "nonexistent"))
})
t.Run("getFloat", func(t *testing.T) {
data := map[string]any{
"float": 42.5,
"int": 100,
"string": "3.14",
}
assert.Equal(t, 42.5, extractor.getFloat(data, "float"))
assert.Equal(t, 100.0, extractor.getFloat(data, "int"))
assert.Equal(t, 3.14, extractor.getFloat(data, "string"))
assert.Equal(t, 0.0, extractor.getFloat(data, "nonexistent"))
})
t.Run("getMap", func(t *testing.T) {
data := map[string]any{
"map": map[string]any{"key": "value"},
"notmap": "string",
}
result := extractor.getMap(data, "map")
require.NotNil(t, result)
assert.Equal(t, "value", result["key"])
assert.Nil(t, extractor.getMap(data, "notmap"))
assert.Nil(t, extractor.getMap(data, "nonexistent"))
})
}
// TestDateHistogramParser tests the date histogram parser
func TestDateHistogramParser(t *testing.T) {
parser := &dateHistogramParser{extractor: &fieldExtractor{}}
t.Run("CanParse", func(t *testing.T) {
assert.True(t, parser.CanParse(dateHistType))
assert.False(t, parser.CanParse("terms"))
})
t.Run("Parse with fixed_interval", func(t *testing.T) {
aggValue := map[string]any{
"field": "@timestamp",
"fixed_interval": "30s",
"min_doc_count": 1,
}
agg, err := parser.Parse("1", dateHistType, aggValue)
require.NoError(t, err)
require.NotNil(t, agg)
bucket := agg.toBucketAgg()
assert.Equal(t, "1", bucket.ID)
assert.Equal(t, dateHistType, bucket.Type)
assert.Equal(t, "@timestamp", bucket.Field)
assert.Equal(t, "30s", bucket.Settings.Get("interval").MustString())
assert.Equal(t, "1", bucket.Settings.Get("min_doc_count").MustString())
})
t.Run("Parse with calendar_interval", func(t *testing.T) {
aggValue := map[string]any{
"field": "@timestamp",
"calendar_interval": "1d",
"time_zone": "UTC",
}
agg, err := parser.Parse("2", dateHistType, aggValue)
require.NoError(t, err)
require.NotNil(t, agg)
bucket := agg.toBucketAgg()
assert.Equal(t, "1d", bucket.Settings.Get("interval").MustString())
assert.Equal(t, "UTC", bucket.Settings.Get("time_zone").MustString())
})
t.Run("Parse returns bucket aggregation", func(t *testing.T) {
agg, err := parser.Parse("1", dateHistType, map[string]any{"field": "@timestamp"})
assert.NoError(t, err)
assert.NotNil(t, agg)
assert.Equal(t, aggTypeBucket, agg.AggType)
})
}
// TestTermsParser tests the terms parser
func TestTermsParser(t *testing.T) {
parser := &termsParser{extractor: &fieldExtractor{}}
t.Run("CanParse", func(t *testing.T) {
assert.True(t, parser.CanParse(termsType))
assert.False(t, parser.CanParse("histogram"))
})
t.Run("Parse", func(t *testing.T) {
aggValue := map[string]any{
"field": "hostname.keyword",
"size": 10,
"order": map[string]any{"_count": "desc"},
}
agg, err := parser.Parse("3", termsType, aggValue)
require.NoError(t, err)
require.NotNil(t, agg)
bucket := agg.toBucketAgg()
assert.Equal(t, "3", bucket.ID)
assert.Equal(t, termsType, bucket.Type)
assert.Equal(t, "hostname.keyword", bucket.Field)
assert.Equal(t, "10", bucket.Settings.Get("size").MustString())
})
}
// TestHistogramParser tests the histogram parser
func TestHistogramParser(t *testing.T) {
parser := &histogramParser{extractor: &fieldExtractor{}}
t.Run("CanParse", func(t *testing.T) {
assert.True(t, parser.CanParse(histogramType))
assert.False(t, parser.CanParse("terms"))
})
t.Run("Parse", func(t *testing.T) {
aggValue := map[string]any{
"field": "response_time",
"interval": 50.0,
}
agg, err := parser.Parse("4", histogramType, aggValue)
require.NoError(t, err)
require.NotNil(t, agg)
bucket := agg.toBucketAgg()
assert.Equal(t, "4", bucket.ID)
assert.Equal(t, histogramType, bucket.Type)
assert.Equal(t, "response_time", bucket.Field)
assert.Equal(t, "50", bucket.Settings.Get("interval").MustString())
})
}
// TestFiltersParser tests the filters parser
func TestFiltersParser(t *testing.T) {
parser := &filtersParser{extractor: &fieldExtractor{}}
t.Run("CanParse", func(t *testing.T) {
assert.True(t, parser.CanParse(filtersType))
assert.False(t, parser.CanParse("terms"))
})
t.Run("Parse", func(t *testing.T) {
aggValue := map[string]any{
"filters": map[string]any{
"errors": map[string]any{"query_string": map[string]any{"query": "level:error"}},
"warnings": map[string]any{"query_string": map[string]any{"query": "level:warning"}},
},
}
agg, err := parser.Parse("filters", filtersType, aggValue)
require.NoError(t, err)
require.NotNil(t, agg)
bucket := agg.toBucketAgg()
assert.Equal(t, "filters", bucket.ID)
assert.Equal(t, filtersType, bucket.Type)
filtersArray := bucket.Settings.Get("filters").MustArray()
assert.NotEmpty(t, filtersArray)
assert.Len(t, filtersArray, 2)
})
}
// TestSimpleMetricParser tests the simple metric parser
func TestSimpleMetricParser(t *testing.T) {
parser := newSimpleMetricParser()
t.Run("CanParse", func(t *testing.T) {
assert.True(t, parser.CanParse("avg"))
assert.True(t, parser.CanParse("sum"))
assert.True(t, parser.CanParse("min"))
assert.True(t, parser.CanParse("max"))
assert.True(t, parser.CanParse("cardinality"))
assert.False(t, parser.CanParse("bucket_script"))
})
t.Run("Parse avg", func(t *testing.T) {
aggValue := map[string]any{
"field": "response_time",
}
agg, err := parser.Parse("1", "avg", aggValue)
require.NoError(t, err)
require.NotNil(t, agg)
metric := agg.toMetricAgg()
assert.Equal(t, "1", metric.ID)
assert.Equal(t, "avg", metric.Type)
assert.Equal(t, "response_time", metric.Field)
})
t.Run("Parse returns metric aggregation", func(t *testing.T) {
agg, err := parser.Parse("1", "avg", map[string]any{})
assert.NoError(t, err)
assert.NotNil(t, agg)
assert.Equal(t, aggTypeMetric, agg.AggType)
})
}
// TestExtendedStatsParser tests the extended stats parser
func TestExtendedStatsParser(t *testing.T) {
parser := &extendedStatsParser{extractor: &fieldExtractor{}}
t.Run("CanParse", func(t *testing.T) {
assert.True(t, parser.CanParse(extendedStatsType))
assert.False(t, parser.CanParse("avg"))
})
t.Run("Parse", func(t *testing.T) {
aggValue := map[string]any{
"field": "response_time",
"sigma": 2,
}
agg, err := parser.Parse("stats", extendedStatsType, aggValue)
require.NoError(t, err)
require.NotNil(t, agg)
metric := agg.toMetricAgg()
assert.Equal(t, "stats", metric.ID)
assert.Equal(t, extendedStatsType, metric.Type)
assert.Equal(t, "response_time", metric.Field)
})
}
// TestPercentilesParser tests the percentiles parser
func TestPercentilesParser(t *testing.T) {
parser := &percentilesParser{extractor: &fieldExtractor{}}
t.Run("CanParse", func(t *testing.T) {
assert.True(t, parser.CanParse(percentilesType))
assert.False(t, parser.CanParse("avg"))
})
t.Run("Parse", func(t *testing.T) {
aggValue := map[string]any{
"field": "response_time",
"percents": []any{50.0, 95.0, 99.0},
}
agg, err := parser.Parse("percentiles", percentilesType, aggValue)
require.NoError(t, err)
require.NotNil(t, agg)
metric := agg.toMetricAgg()
assert.Equal(t, "percentiles", metric.ID)
assert.Equal(t, percentilesType, metric.Type)
assert.Equal(t, "response_time", metric.Field)
})
}
// TestPipelineParser tests the pipeline parser
func TestPipelineParser(t *testing.T) {
parser := newPipelineParser()
t.Run("CanParse", func(t *testing.T) {
assert.True(t, parser.CanParse("moving_avg"))
assert.True(t, parser.CanParse("derivative"))
assert.True(t, parser.CanParse("cumulative_sum"))
assert.False(t, parser.CanParse("bucket_script"))
})
t.Run("Parse", func(t *testing.T) {
aggValue := map[string]any{
"buckets_path": "1",
}
agg, err := parser.Parse("moving", "moving_avg", aggValue)
require.NoError(t, err)
require.NotNil(t, agg)
metric := agg.toMetricAgg()
assert.Equal(t, "moving", metric.ID)
assert.Equal(t, "moving_avg", metric.Type)
assert.Equal(t, "1", metric.Field)
})
}
// TestBucketScriptParser tests the bucket script parser
func TestBucketScriptParser(t *testing.T) {
parser := &bucketScriptParser{extractor: &fieldExtractor{}}
t.Run("CanParse", func(t *testing.T) {
assert.True(t, parser.CanParse("bucket_script"))
assert.False(t, parser.CanParse("moving_avg"))
})
t.Run("Parse with map buckets_path", func(t *testing.T) {
aggValue := map[string]any{
"buckets_path": map[string]any{
"count": "total",
},
"script": "params.count / 60",
}
agg, err := parser.Parse("rate", "bucket_script", aggValue)
require.NoError(t, err)
require.NotNil(t, agg)
metric := agg.toMetricAgg()
assert.Equal(t, "rate", metric.ID)
assert.Equal(t, "bucket_script", metric.Type)
assert.Equal(t, "total", metric.PipelineVariables["count"])
assert.Equal(t, "params.count / 60", metric.Settings.Get("script").MustString())
})
t.Run("Parse with string buckets_path", func(t *testing.T) {
aggValue := map[string]any{
"buckets_path": "1",
}
agg, err := parser.Parse("rate", "bucket_script", aggValue)
require.NoError(t, err)
require.NotNil(t, agg)
metric := agg.toMetricAgg()
assert.Equal(t, "1", metric.PipelineVariables["var1"])
})
}
// TestCompositeParser tests the full parser integration
func TestCompositeParser(t *testing.T) {
parser := NewAggregationParser()
t.Run("Parse date histogram aggregation", func(t *testing.T) {
rawQuery := `{
"query": {
"match_all": {}
},
"aggs": {
"2": {
"date_histogram": {
"field": "@timestamp",
"fixed_interval": "30s",
"min_doc_count": 1
}
}
}
}`
bucketAggs, metricAggs, err := parser.Parse(rawQuery)
require.NoError(t, err)
require.Len(t, bucketAggs, 1)
require.Len(t, metricAggs, 0)
assert.Equal(t, "2", bucketAggs[0].ID)
assert.Equal(t, dateHistType, bucketAggs[0].Type)
assert.Equal(t, "@timestamp", bucketAggs[0].Field)
assert.Equal(t, "30s", bucketAggs[0].Settings.Get("interval").MustString())
})
t.Run("Parse nested aggregations with metrics", func(t *testing.T) {
rawQuery := `{
"query": {
"match_all": {}
},
"aggs": {
"2": {
"date_histogram": {
"field": "@timestamp",
"fixed_interval": "30s"
},
"aggs": {
"1": {
"avg": {
"field": "value"
}
},
"3": {
"sum": {
"field": "total"
}
}
}
}
}
}`
bucketAggs, metricAggs, err := parser.Parse(rawQuery)
require.NoError(t, err)
require.Len(t, bucketAggs, 1)
require.Len(t, metricAggs, 2)
// Check bucket aggregation
assert.Equal(t, "2", bucketAggs[0].ID)
assert.Equal(t, dateHistType, bucketAggs[0].Type)
// Check metric aggregations
avgFound := false
sumFound := false
for _, m := range metricAggs {
if m.ID == "1" && m.Type == "avg" && m.Field == "value" {
avgFound = true
}
if m.ID == "3" && m.Type == "sum" && m.Field == "total" {
sumFound = true
}
}
assert.True(t, avgFound, "avg aggregation not found")
assert.True(t, sumFound, "sum aggregation not found")
})
t.Run("Parse terms aggregation", func(t *testing.T) {
rawQuery := `{
"aggs": {
"3": {
"terms": {
"field": "hostname.keyword",
"size": 10,
"order": {
"_count": "desc"
}
}
}
}
}`
bucketAggs, metricAggs, err := parser.Parse(rawQuery)
require.NoError(t, err)
require.Len(t, bucketAggs, 1)
require.Len(t, metricAggs, 0)
assert.Equal(t, "3", bucketAggs[0].ID)
assert.Equal(t, termsType, bucketAggs[0].Type)
assert.Equal(t, "hostname.keyword", bucketAggs[0].Field)
assert.Equal(t, "10", bucketAggs[0].Settings.Get("size").MustString())
})
t.Run("Parse histogram aggregation", func(t *testing.T) {
rawQuery := `{
"aggs": {
"4": {
"histogram": {
"field": "response_time",
"interval": 50
}
}
}
}`
bucketAggs, _, err := parser.Parse(rawQuery)
require.NoError(t, err)
require.Len(t, bucketAggs, 1)
assert.Equal(t, "4", bucketAggs[0].ID)
assert.Equal(t, histogramType, bucketAggs[0].Type)
assert.Equal(t, "response_time", bucketAggs[0].Field)
assert.Equal(t, "50", bucketAggs[0].Settings.Get("interval").MustString())
})
t.Run("Parse extended stats aggregation", func(t *testing.T) {
rawQuery := `{
"aggs": {
"stats": {
"extended_stats": {
"field": "response_time",
"sigma": 2
}
}
}
}`
_, metricAggs, err := parser.Parse(rawQuery)
require.NoError(t, err)
require.Len(t, metricAggs, 1)
assert.Equal(t, "stats", metricAggs[0].ID)
assert.Equal(t, extendedStatsType, metricAggs[0].Type)
assert.Equal(t, "response_time", metricAggs[0].Field)
})
t.Run("Parse percentiles aggregation", func(t *testing.T) {
rawQuery := `{
"aggs": {
"percentiles": {
"percentiles": {
"field": "response_time",
"percents": [50, 95, 99]
}
}
}
}`
_, metricAggs, err := parser.Parse(rawQuery)
require.NoError(t, err)
require.Len(t, metricAggs, 1)
assert.Equal(t, "percentiles", metricAggs[0].ID)
assert.Equal(t, percentilesType, metricAggs[0].Type)
})
t.Run("Parse pipeline aggregations", func(t *testing.T) {
rawQuery := `{
"aggs": {
"2": {
"date_histogram": {
"field": "@timestamp",
"fixed_interval": "1m"
},
"aggs": {
"1": {
"avg": {
"field": "value"
}
},
"moving": {
"moving_avg": {
"buckets_path": "1"
}
},
"deriv": {
"derivative": {
"buckets_path": "1"
}
}
}
}
}
}`
bucketAggs, metricAggs, err := parser.Parse(rawQuery)
require.NoError(t, err)
require.Len(t, bucketAggs, 1)
require.GreaterOrEqual(t, len(metricAggs), 2) // At least avg and one pipeline
// Find pipeline aggregations
movingAvgFound := false
derivativeFound := false
for _, m := range metricAggs {
if m.ID == "moving" && m.Type == "moving_avg" {
movingAvgFound = true
assert.Equal(t, "1", m.Field)
}
if m.ID == "deriv" && m.Type == "derivative" {
derivativeFound = true
assert.Equal(t, "1", m.Field)
}
}
assert.True(t, movingAvgFound, "moving_avg aggregation not found")
assert.True(t, derivativeFound, "derivative aggregation not found")
})
t.Run("Parse bucket script aggregation", func(t *testing.T) {
rawQuery := `{
"aggs": {
"2": {
"date_histogram": {
"field": "@timestamp",
"fixed_interval": "1m"
},
"aggs": {
"total": {
"sum": {
"field": "bytes"
}
},
"rate": {
"bucket_script": {
"buckets_path": {
"count": "total"
},
"script": "params.count / 60"
}
}
}
}
}
}`
_, metricAggs, err := parser.Parse(rawQuery)
require.NoError(t, err)
// Find bucket script
var bucketScriptAgg *MetricAgg
for _, m := range metricAggs {
if m.ID == "rate" && m.Type == "bucket_script" {
bucketScriptAgg = m
break
}
}
require.NotNil(t, bucketScriptAgg, "bucket_script aggregation not found")
assert.Equal(t, "params.count / 60", bucketScriptAgg.Settings.Get("script").MustString())
assert.Equal(t, "total", bucketScriptAgg.PipelineVariables["count"])
})
t.Run("Parse filters aggregation", func(t *testing.T) {
rawQuery := `{
"aggs": {
"messages": {
"filters": {
"filters": {
"errors": {
"query_string": {
"query": "level:error"
}
},
"warnings": {
"query_string": {
"query": "level:warning"
}
}
}
}
}
}
}`
bucketAggs, _, err := parser.Parse(rawQuery)
require.NoError(t, err)
require.Len(t, bucketAggs, 1)
assert.Equal(t, "messages", bucketAggs[0].ID)
assert.Equal(t, filtersType, bucketAggs[0].Type)
})
t.Run("Handle empty query", func(t *testing.T) {
bucketAggs, metricAggs, err := parser.Parse("")
require.NoError(t, err)
assert.Nil(t, bucketAggs)
assert.Nil(t, metricAggs)
})
t.Run("Handle query without aggregations", func(t *testing.T) {
rawQuery := `{
"query": {
"match_all": {}
}
}`
bucketAggs, metricAggs, err := parser.Parse(rawQuery)
require.NoError(t, err)
assert.Nil(t, bucketAggs)
assert.Nil(t, metricAggs)
})
t.Run("Handle invalid JSON", func(t *testing.T) {
rawQuery := `{invalid json`
_, _, err := parser.Parse(rawQuery)
require.Error(t, err)
})
t.Run("Use 'aggregations' instead of 'aggs'", func(t *testing.T) {
rawQuery := `{
"query": {
"match_all": {}
},
"aggregations": {
"2": {
"date_histogram": {
"field": "@timestamp",
"fixed_interval": "30s"
}
}
}
}`
bucketAggs, _, err := parser.Parse(rawQuery)
require.NoError(t, err)
require.Len(t, bucketAggs, 1)
assert.Equal(t, "2", bucketAggs[0].ID)
})
}


@@ -12,7 +12,7 @@ import { DashboardViewItem } from '../../../features/search/types';
import { useFoldersQuery } from './useFoldersQuery';
import { getCustomRootFolderItem, getRootFolderItem } from './utils';
-const [_, { folderA, folderB, folderC }] = getFolderFixtures();
+const [_, { folderA, folderB, folderC, folderD }] = getFolderFixtures();
runtime.setBackendSrv(backendSrv);
setupMockServer();
@@ -44,7 +44,7 @@ describe('useFoldersQuery', () => {
const [_dashboardsContainer, ...items] = await testFn();
const sortedItemTitles = items.map((item) => (item.item as DashboardViewItem).title).sort();
-const expectedTitles = [folderA.item.title, folderB.item.title, folderC.item.title].sort();
+const expectedTitles = [folderA.item.title, folderB.item.title, folderC.item.title, folderD.item.title].sort();
expect(sortedItemTitles).toEqual(expectedTitles);
});


@@ -99,7 +99,7 @@ function buildAnalyzeAlertingRulePrompt(rule: GrafanaAlertingRule): string {
const state = rule.state || 'firing';
const timeInfo = rule.activeAt ? ` starting at ${new Date(rule.activeAt).toISOString()}` : '';
const alertsNavigationPrompt = config.featureToggles.alertingTriage
-? '\n- Include navigation to follow up on the alerts page'
+? '\n- Include navigation to the alerts page ONLY if the alert is firing or pending'
: '';
let prompt = `


@@ -21,8 +21,8 @@ import {
Stack,
Text,
} from '@grafana/ui';
-import { NestedFolderPicker } from 'app/core/components/NestedFolderPicker/NestedFolderPicker';
import { DataSourcePicker } from 'app/features/datasources/components/picker/DataSourcePicker';
+import { ProvisioningAwareFolderPicker } from 'app/features/provisioning/components/Shared/ProvisioningAwareFolderPicker';
import { Folder } from '../../types/rule-form';
import {
@@ -409,9 +409,10 @@ function TargetFolderField() {
name="targetFolder"
render={({ field: { onChange, ref, ...field } }) => (
<Stack width={42}>
-<NestedFolderPicker
+<ProvisioningAwareFolderPicker
permission="view"
showRootFolder={false}
repositoryName={undefined}
invalid={!!errors.targetFolder?.message}
{...field}
value={field.value?.uid}


@@ -7,7 +7,7 @@ import { FixedSizeList } from 'react-window';
import { GrafanaTheme2 } from '@grafana/data';
import { Trans, t } from '@grafana/i18n';
-import { Spec as DashboardV2Spec } from '@grafana/schema/dist/esm/schema/dashboard/v2';
+import type { Spec as DashboardV2Spec } from '@grafana/schema/dashboard/v2beta1';
import {
Alert,
Button,
