Compare commits

...

73 Commits

Author SHA1 Message Date
Roberto Jimenez Sanchez
c9c0e7ace8 Regenerate client and openapi 2025-12-13 10:32:25 +01:00
Roberto Jimenez Sanchez
919308f835 feat(provisioning): add backend support for bulk dashboard export
This commit extracts the backend changes from the provisioning export feature,
implementing support for exporting specific dashboard resources to repositories.

Changes include:
- Add Resources field to ExportJobOptions for specifying dashboards to export
- Implement resource validation in export job validator
- Add specific resource export functionality
- Update worker to handle resource-specific exports
- Add comprehensive tests for export validation and functionality
- Update repository local storage to support custom paths

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-12 19:10:19 +01:00
Alexander Zobnin
629570926d Zanzana: Fix resource translation for dashboards (#115077) 2025-12-12 11:05:10 -06:00
Will Assis
1b59c82b74 Revert "Unified-storage: sql backend key path backfill (#115033)" (#115257)
This reverts commit b2dd095bd8.
2025-12-12 17:00:08 +00:00
Matias Chomicki
f35447435f LogLineContext: remove broken permalink prop (#115252) 2025-12-12 16:24:36 +00:00
Paul Marbach
c0dc92e8cd Gauge: Fit-and-finish tweaks to glows, text position, and sparkline size (#115173)
* Gauge: Fit-and-finish tweaks to glows, text position, and sparkline size

* adjust text height and positions a little more

* cohesive no data handling

* more tweaks

* fix migration test

* Fix JSON formatting by adding missing newline

* remove new line
2025-12-12 11:10:56 -05:00
Matias Chomicki
7114b9cd3b Log Line Details: Fix width calculation in dashboards (#115248)
* FieldSelector: rename functions to be more explicit

* LogDetailsContext: calculate width based on field selector visibility

* LogLineDetails: Fix sidebar max width calculation

* Update functions usage

* Add regression and fix context calculation
2025-12-12 16:56:23 +01:00
Kristina Demeshchik
b40d0e6ff4 Dashboards: Fix accessible color palettes not being saved in v2 schema (#115244)
* Fix palette color v2 conversion

* v2->v1 conversion
2025-12-12 10:36:31 -05:00
Yunwen Zheng
584615cf3f RecentlyViewedDashboards: Set up container on browsing dashboards page (#115164)
* RecentlyViewedDashboards: Set up container on browsing dashboards page
2025-12-12 10:32:05 -05:00
William Wernert
5f80a29a28 Alerting: Prevent users from saving rules to git-synced folders (#114944)
---------

Co-authored-by: Yuri Tseretyan <yuriy.tseretyan@grafana.com>
2025-12-12 15:25:08 +00:00
Bogdan Matei
eab5d2b30e Dashboard: Fix rogue modal when exiting edit mode (#115240)
* Dashboard: Fix rogue modal when exiting edit mode

* Remove unnecessary change
2025-12-12 17:17:34 +02:00
Anna Urbiztondo
f3421b9718 Docs: Git Sync scenarios (#115199)
* WIP

* Review

* Move scenarios

* Structure fix

* Edits, fix

* Vale, x-refs

* Feedback, tweaks

* Consolidate HA, titles

* Prettier

* Prettier

* Adding missing content

* Minor edits

* Links

* Prettier
2025-12-12 16:08:28 +01:00
Alex Khomenko
1addfd69b4 Provisioning: Fix duplicated breadcrumb (#115234)
* Provisioning: Fix duplicated breadcrumb

* translations
2025-12-12 15:00:40 +00:00
Gonzalo Trigueros Manzanas
d4a627c5fc Provisioning: Add resource-level warning support. (#115023) 2025-12-12 15:59:45 +01:00
Johnny Kartheiser
46ef9aaa0a Alerting docs: Links fix (#115044)
* alerting docs: links fix

fixes 404 errors

* Alerting docs: Fix Slack integration links

Fixes Slack links and clarifies the first two steps.

* prettier
2025-12-12 08:58:10 -06:00
Serge Zaitsev
6ce672dd00 Chore: Fix mysql query for annotation migration (#115222)
fix mysql query for annotation migration
2025-12-12 15:37:43 +01:00
Matheus Macabu
403f4d41de APIServer: Add wiring for audit backend and policy rule evaluator (#115212) 2025-12-12 15:17:44 +01:00
Juan Cabanas
6512259acc DashboardLibrary: Restore New dashboard naming (#115184) 2025-12-12 10:10:05 -03:00
Will Assis
b2dd095bd8 Unified-storage: sql backend key path backfill (#115033)
* unified-storage: add migration to backfill key_path in resource_history
2025-12-12 08:09:51 -05:00
Charandas
e525b529a8 fix: Add panic for nil authorizer in installer (#115186) 2025-12-12 05:01:03 -08:00
Paul Marbach
7805e18368 Sparkline: Export a class component for now (#115189) 2025-12-12 07:56:31 -05:00
Levente Balogh
7a07a49ecc Dashboard: Update toolbar layout (option 2.) (#115210)
fix: dashboard toolbar layout updates
2025-12-12 12:22:04 +00:00
beejeebus
9a4e13800d Guard config CRUD metrics so it's safe for grafana-enterprise
Previous attempt to land this required this PR and a grafana-enterprise
PR to land at the ~same time.

This PR guards the use of `dsConfigHandlerRequestsDuration` with a nil
check, and doesn't change any existing APIs, so we can land it without
any timing issues with grafana-enterprise.

Once this has landed, we'll make a follow-up PR for grafana-enterprise.
2025-12-12 07:21:29 -05:00
Nathan Marrs
a0c4e8b4f4 Suggested Dashboards: Add missing loaded event tracking for v1 of feature (#115195)
## Summary

Fixes a regression where the `loaded` analytics event was not being tracked for the `BasicProvisionedDashboardsEmptyPage` component, which is the component shown in production when the `suggestedDashboards` feature toggle is disabled (i.e. community dashboards disabled but v1 of feature enabled)

## Problem

Regression introduced by https://github.com/grafana/grafana/pull/112808/changes#diff-3a19d2e887a3344cb0bcd2449b570bd50a7d78d1d473f4a3cf623f9fe40f35fc, which added community dashboard support to `SuggestedDashboards`: the `BasicProvisionedDashboardsEmptyPage` component was missing the `loaded` event tracking. The component is mounted here: https://github.com/grafana/grafana/pull/112808/changes#diff-fba79ed6f8bfb5f712bdd529155158977a3e081d1d6a5932a5fa90fb57a243e6R82. This caused analytics discrepancies; in the past 7 days (the issue has been present for the last several weeks, but here is a sample of data from the previous week):

- 106 provisioned dashboard items were clicked
- Only 1 `loaded` event was received (from `SuggestedDashboards` when the feature toggle is enabled)
- The `loaded` events are missing for the production v1 flow (when `suggestedDashboards` feature toggle is off)

## Root Cause

The `BasicProvisionedDashboardsEmptyPage` component (used in v1 flow in production) was never updated with the `loaded` event tracking that was added to `SuggestedDashboards` in PR #113417. Since the `suggestedDashboards` feature toggle is not enabled in production, users were seeing `BasicProvisionedDashboardsEmptyPage` which had no tracking, resulting in missing analytics events.

## Solution

Added the `loaded` event tracking to `BasicProvisionedDashboardsEmptyPage` using the same approach that was previously used (tracking inside the async callback when dashboards are loaded). This ensures consistency with the existing pattern and restores analytics tracking for the production flow.

## Changes

- Added `DashboardLibraryInteractions.loaded()` call in `BasicProvisionedDashboardsEmptyPage` when dashboards are successfully loaded
- Uses the same tracking pattern as the original implementation (tracking inside async callback)
- Matches the event structure used in `SuggestedDashboards` for consistency

## Testing

- Verified that `loaded` events are now tracked when `BasicProvisionedDashboardsEmptyPage` loads dashboards
- Confirmed the event includes correct `contentKinds`, `datasourceTypes`, and `eventLocation` values
- No duplicate events are sent (tracking only occurs once per load)

## Related

- Original analytics implementation: #113417
- Related PR: #112808
- Component: [`BasicProvisionedDashboardsEmptyPage.tsx`](https://github.com/grafana/grafana/blob/main/public/app/features/dashboard/dashgrid/DashboardLibrary/BasicProvisionedDashboardsEmptyPage.tsx)
2025-12-12 09:16:55 -03:00
Victor Marin
fa62113b41 Dashboards: Fix custom variable legacy model to return options when flag is set (#115154)
* fix custom var legacy model options property

* add test
2025-12-12 12:12:46 +00:00
Roberto Jiménez Sánchez
b863acab05 Provisioning: Fix race condition causing unhealthy repository message to be lost (#115150)
* Fix race condition causing unhealthy repository message to be lost

This commit fixes a race condition in the provisioning repository controller
where the "Repository is unhealthy" message in the sync status could be lost
due to status updates being based on stale repository objects.

## Problem

The issue occurred in the `process` function when:
1. Repository object was fetched from cache with old status
2. `RefreshHealth` immediately patched the health status to "unhealthy"
3. `determineSyncStatusOps` used the stale object to check if unhealthy
   message was already set
4. A second patch operation based on stale data would overwrite the
   health status update

## Solution

Introduced `RefreshHealthWithPatchOps` method that returns patch operations
instead of immediately applying them. This allows batching all status updates
(health + sync) into a single atomic patch operation, eliminating the race
condition.

## Changes

- Added `HealthCheckerInterface` for better testability
- Added `RefreshHealthWithPatchOps` method to return patch ops without applying
- Updated `process` function to batch health and sync status updates
- Added comprehensive unit tests for the fix

Fixes the issue where unhealthy repositories don't show the "Repository is
unhealthy" message in their sync status.

* Fix staticcheck lint error: remove unnecessary nil check for slice
2025-12-12 13:24:58 +02:00
Ezequiel Victorero
c7c052480d Chore: Bump grafana/llm 1.0.1 (#115175) 2025-12-12 11:22:37 +00:00
Gabriel MABILLE
478ae15f0e grafana-iam: Use parent folder to authorize ResourcePermissions (#115008)
* `grafana-iam`: Fetch target parent folder

* WIP add different ParentProviders

* Add version

* Move code to a different file

* Instantiate resourceParentProvider

* same import name

* imports

* Add tests

* Remove unnecessary test

* forgot wire

* WIP integration tests

* Add test to cover list

* Fix caching problem in integration tests

* comments

* Logger and comments

* Add lazy creation and caching

* Instantiate clients only once

* Rerun wire gen
2025-12-12 11:43:12 +01:00
Erik Sundell
8ebb1c2bc9 NPM: Remove dist-tag code (#115209)
remove dist-tag
2025-12-12 11:41:57 +01:00
Marc M.
5572ce966a DynamicDashboards: Hide variables from outline in view mode (#115142) 2025-12-12 10:34:47 +00:00
Marc M.
e3510f6eb3 DynamicDashboards: Replace discard changes modal (#114789) 2025-12-12 11:24:53 +01:00
Mihai Doarna
a832e5f222 IAM: Add missing params to team search request (#115208)
add missing params to team search request
2025-12-12 12:13:43 +02:00
Levente Balogh
c5a5482d7d Doc: Add docs for displaying links in the dashboard-controls menu (#115201)
* docs: add docs for displaying links in the dashboard-controls menu

* Update docs/sources/as-code/observability-as-code/schema-v2/links-schema.md

Co-authored-by: Anna Urbiztondo <anna.urbiztondo@grafana.com>

---------

Co-authored-by: Anna Urbiztondo <anna.urbiztondo@grafana.com>
2025-12-12 09:57:35 +00:00
Gareth
169ffc15c6 OpenTSDB: Run suggest queries through the data source backend (#114990)
* OpenTSDB: Run suggest queries through the data source backend

* use mux
2025-12-12 18:36:52 +09:00
Levente Balogh
296fe610ba Docs: Add docs for displaying variables in the dashboard-controls (#115205)
docs: update docs for adding a template variable
2025-12-12 10:34:13 +01:00
grafana-pr-automation[bot]
eceff8d26e I18n: Download translations from Crowdin (#115193)
New Crowdin translations by GitHub Action

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-12 09:30:01 +00:00
Lauren
3cdfe34ec8 Alerting: Fix Alerts page filtering (#115178)
* Alerting: fix filtering on alerts page

* exclude __name__ from label filter dropdown list
2025-12-12 08:16:55 +00:00
Erik Sundell
35c214249f E2E Selectors: Fix comment typo (#115197)
fix typo
2025-12-12 08:59:10 +01:00
Erik Sundell
c3224411c0 NPM: Use env var for OIDC token auth instead of direct npmrc (#115153)
* use env var

* ignore spellcheck
2025-12-12 07:45:04 +01:00
Steve Simpson
b407f0062d Alerting: Add an authorizer to the historian app (#115188)
historian: add an authorizer

Co-authored-by: Charandas Batra <charandas.batra@grafana.com>
2025-12-11 23:34:37 +00:00
Haris Rozajac
0385a7a4a4 Dashboard Import: disable importing V2 dashboards when dashboardNewLayouts is disabled (#114188)
* Disable importing v2 dashboards when dynamic dashboards are disabled

* clean up

* Update error messaging
2025-12-11 15:54:06 -07:00
Jack Baldry
1611489b84 Fix path to generation and source content (#115095)
Signed-off-by: Jack Baldry <jack.baldry@grafana.com>
2025-12-11 21:40:35 +00:00
Eric Hilse
e8039d1c3d fix(topbar): remove minWidth property for better layout handling (#115166) 2025-12-11 13:28:30 -07:00
Andres Torres
652b4f2fab fix(setting): Add default scheme to handle k8s api errors (#115177) 2025-12-11 20:12:25 +00:00
Ezequiel Victorero
c35642b04d Chore: Bump nodemailer with forced resolution (#115172) 2025-12-11 16:40:23 -03:00
Larissa Wandzura
91a72f2572 DOCS: Updates to Elasticsearch data source docs (#115021)
* created new configure folder, rewrote intro page

* updated configure doc

* updated query editor

* updates to template variables

* added troubleshooting doc, fixed heading issues

* fix linter issues

* added alerting doc

* corrected title

* final edits

* fixed linter issue

* added deprecation comment per feedback

* ran prettier
2025-12-11 19:21:33 +00:00
Bogdan Matei
f8027e4d75 Dashboard: Implement modal to confirm layout change (#111093) 2025-12-11 19:17:23 +00:00
Paul Marbach
f5b2dde4a1 Suggestions: Add keyboard support (#114517)
* Suggestions: hashes on suggestions, update logic to select first suggestion

* fix types

* Suggestions: New UI style updates

* update some styles

* getting styles just right

* remove grouping when not on flag

* adjust minimum width for sidebar

* CI cleanups

* updates from ad hoc review

* add loading and error states to suggestions

* remove unused import

* update header ui for panel editor

* restore back button to vizpicker

* fix e2e test

* fix e2e

* add i18n update

* use new util for setVisualization operation

* Apply suggestions from code review

Co-authored-by: Torkel Ödegaard <torkel@grafana.com>

* comments from review

* updates from review

* Suggestions: Add keyboard support

* fix selector for PluginVisualization.item

---------

Co-authored-by: Torkel Ödegaard <torkel@grafana.com>
2025-12-11 14:13:33 -05:00
Misi
0c264b7a5f IAM: Add user search endpoint (#114542)
* wip: initial changes, api registration

* wip

* LegacySearch working with sorting

* Revert mapper change for now

* Clean up

* Cleanup, add integration tests

* Improve tests

* OpenAPI def regen

* Use wildcard search, fix lastSeenAt handling, add lastSeenAtAge

* Add missing files

* Fix merge

* Fixes

* Add tests, regen openapi def

* Address feedback

* Address feedback batch 2

* Chores

* regen openapidef

* Address feedback

* Add tests for paging

* gen apis

* Revert go.mod, go.sum. go.work.sum

* Fix + remove extra tracer parameter
2025-12-11 19:54:48 +01:00
Ashley Harrison
d83b216a32 FS: Fix rendering of public dashboards in MT frontend service (#115162)
* pass publicDashboardAccessToken to ST backend via bootdata

* slightly cleaner

* slightly tidy up go templating

* add HandleView middleware
2025-12-11 17:56:40 +00:00
Anna Urbiztondo
ada14df9fd Add new glossary word (#115070)
* Docs: Add grafanactl term to glossary

* Edit to adapt to Glossary def length

* Fix

* Real fix

* Fix link

---------

Co-authored-by: Jack Baldry <jack.baldry@grafana.com>
2025-12-11 17:05:21 +00:00
Tobias Skarhed
f63c2cb2dd Scopes: Don't use redirect if you're on an active scope navigation (#115149)
* Don't use redirectUrl if we are on an active scope navigation

* Remove superfluous test
2025-12-11 17:42:47 +01:00
Tobias Skarhed
fe4c615b3d Scopes: Sync nested scopes navigation open folders to URL (#114786)
* Sync nav_scope_path with url

* Let the current active scope remain if it is a child of the selected subscope

* Remove location updates based on nav_scope_path to maintain expanded folders

* Fix folder tests

* Remove console logs

* Better mock for changeScopes

* Update test to support the new calls

* Update test with function inputs

* Fix failing test

* Add tests and add isEqual check for fetching new subscopes
2025-12-11 17:34:21 +01:00
grafana-pr-automation[bot]
02d3fd7b31 I18n: Download translations from Crowdin (#115123)
New Crowdin translations by GitHub Action

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-11 16:31:02 +00:00
Jesse David Peterson
5dcfc19060 Table: Add title attribute to make truncated headings legible (#115155)
* fix(table): add HTML title attribute to make truncated headings legible

* fix(table): avoid redundant display name calculation

Co-authored-by: Paul Marbach <paul.marbach@grafana.com>

---------

Co-authored-by: Paul Marbach <paul.marbach@grafana.com>
2025-12-11 12:22:10 -04:00
Roberto Jiménez Sánchez
5bda17be3f Provisioning: Update provisioning docs to reflect kubernetesDashboards defaults to true (#115159)
Docs: Update provisioning docs to reflect kubernetesDashboards defaults to true

The kubernetesDashboards feature toggle now defaults to true, so users
don't need to explicitly enable it in their configuration. Updated
documentation and UI to reflect this:

- Removed kubernetesDashboards from configuration examples
- Added notes explaining it's enabled by default
- Clarified that users only need to take action if they've explicitly
  disabled it
- Kept validation checks to catch explicit disables

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-11 17:08:57 +01:00
Usman Ahmad
bc88796e6e Created Troubleshooting guide for MySQL data source plugin (#114737)
* created troubleshooting guide for mysql data source plugin

Signed-off-by: Usman Ahmad <usman.ahmad@grafana.com>

* Apply suggestions from code review

thanks for the code review

Co-authored-by: Christopher Moyer <35463610+chri2547@users.noreply.github.com>

* rename file from _index.md to index.md

Signed-off-by: Usman Ahmad <usman.ahmad@grafana.com>

* Update docs/sources/datasources/mysql/troubleshoot/index.md

---------

Signed-off-by: Usman Ahmad <usman.ahmad@grafana.com>
Co-authored-by: Christopher Moyer <35463610+chri2547@users.noreply.github.com>
2025-12-11 16:42:09 +01:00
Andres Torres
5d7b9c5050 fix(setting): Replacing dynamic client to reduce memory footprint (#115125) 2025-12-11 10:24:01 -05:00
Alexander Akhmetov
73bcfbcc74 Alerting: Collate alert_rule.namespace_uid column as binary (#115152)
Alerting: Collate namespace_uid column as binary
2025-12-11 16:05:13 +01:00
Erik Sundell
4ab198b201 E2E Selectors: Fix package description (#115148)
dummy change
2025-12-11 14:00:54 +00:00
Erik Sundell
0c82f92539 NPM: Attempt to fix e2e-selectors dist-tag after OIDC migration (#115012)
* fetch oidc token from github

* use same approach as electron
2025-12-11 14:35:27 +01:00
Ivana Huckova
73de5f98e1 Assistant: Update origin for analyze-rule-menu-item (#115147)
* Assistant: Update origin for analyze-rule-menu-item

* Update origin, not test id
2025-12-11 13:06:09 +00:00
Oscar Kilhed
b6ba8a0fd4 Dashboards: Make variables selectable in controls menu (#115092)
* Dashboard: Make variables selectable in controls menu and improve spacing

- Add selection support for variables in controls menu (onPointerDown handler and selection classes)
- Add padding to variables and annotations in controls menu (theme.spacing(1))
- Reduce menu container padding from 1.5 to 1
- Remove margins between menu items

* fix: remove unused imports in DashboardControlsMenu
2025-12-11 13:55:03 +01:00
Oscar Kilhed
350c3578c7 Dynamic dashboards: Update variable set state when variable hide property changes (#115094)
fix: update variable set state when variable hide property changes

When changing a variable's positioning to show in controls menu using the edit side pane, the state of dashboardControls does not immediately update. This makes it seem to the user that nothing was changed.

The issue was that when a variable's hide property changes, only the variable's state was updated, but not the parent SceneVariableSet state. Components that subscribe to the variable set state (like useDashboardControls) didn't detect the change because the variables array reference remained the same.

This fix updates the parent SceneVariableSet state when a variable's hide property changes, ensuring components that subscribe to the variable set will re-render immediately.

Co-authored-by: grafakus <marc.mignonsin@grafana.com>
2025-12-11 13:54:30 +01:00
Andres Martinez Gotor
e6b5ece559 Plugins Preinstall: Fix URL parsing when includes basic auth (#115143)
Preinstall: Fix URL setting when includes basic auth
2025-12-11 13:38:02 +01:00
Ryan McKinley
eef14d2cee Dependencies: update glob@npm for dependabot (#115146) 2025-12-11 12:33:34 +00:00
Anna Urbiztondo
c71c0b33ee Docs: Configure Git Sync using CLI (#115068)
* WIP

* WIP

* Edits, Claude

* Prettier

* Update docs/sources/as-code/observability-as-code/provision-resources/git-sync-setup.md

Co-authored-by: Roberto Jiménez Sánchez <roberto.jimenez@grafana.com>

* Update docs/sources/as-code/observability-as-code/provision-resources/git-sync-setup.md

Co-authored-by: Roberto Jiménez Sánchez <roberto.jimenez@grafana.com>

* WIP

* Restructuring

* Minor tweaks

* Fix

* Update docs/sources/as-code/observability-as-code/provision-resources/git-sync-setup.md

Co-authored-by: Roberto Jiménez Sánchez <roberto.jimenez@grafana.com>

* Feedback

* Prettier

* Links

---------

Co-authored-by: Roberto Jiménez Sánchez <roberto.jimenez@grafana.com>
2025-12-11 11:27:36 +00:00
Lauren
d568798c64 Alerting: Improve instance count display (#114997)
* Update button text to Show All if filters are enabled

* Show state in text if filters enabled

* resolve PR comments
2025-12-11 11:01:53 +00:00
Ryan McKinley
9bec62a080 Live: simplify dependencies (#115130) 2025-12-11 13:37:45 +03:00
Roberto Jiménez Sánchez
7fe3214f16 Provisioning: Add fieldSelector regression tests for Repository and Jobs (#115135) 2025-12-11 13:36:01 +03:00
Alexander Zobnin
e2d12f4cce Zanzana: Refactor remote client initialization (#114142)
* Zanzana: Refactor remote client

* rename config field URL to Addr

* Instrument grpc queries

* fix duplicated field
2025-12-11 10:55:12 +01:00
Victor Marin
d48455cd20 Dashboards: Panel non applicable filters optimization (#115132)
optimisations
2025-12-11 11:31:19 +02:00
Alexander Akhmetov
439d2c806c Alerting: Add folder_uid label to the grafana_alerting_rule_group_rules metric (#115129) 2025-12-11 09:30:55 +01:00
307 changed files with 12591 additions and 3068 deletions

View File

@@ -1603,7 +1603,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -1671,7 +1670,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"hide": false,
"max": 98,
"min": 5,
"noise": 22,
@@ -1689,7 +1687,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -1757,7 +1754,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"hide": false,
"max": 98,
"min": 5,
"noise": 22,
@@ -1788,7 +1784,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -1857,7 +1852,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"hide": false,
"max": 8,
"min": 1,
"noise": 2,
@@ -1875,7 +1869,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -1944,7 +1937,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"hide": false,
"max": 12,
"min": 1,
"noise": 2,
@@ -1962,7 +1954,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -2030,7 +2021,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"hide": false,
"max": 100,
"min": 10,
"noise": 22,
@@ -2048,7 +2038,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -2116,7 +2105,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"hide": false,
"max": 100,
"min": 10,
"noise": 22,
@@ -2129,6 +2117,147 @@
],
"title": "Backend",
"type": "radialbar"
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 66
},
"id": 35,
"panels": [],
"title": "Empty data",
"type": "row"
},
{
"datasource": {
"type": "grafana-testdata-datasource"
},
"fieldConfig": {
"defaults": {
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": 0
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 6,
"x": 0,
"y": 67
},
"id": 36,
"options": {
"barWidthFactor": 0.5,
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"segmentCount": 1,
"segmentSpacing": 0.3,
"shape": "gauge",
"showThresholdLabels": false,
"showThresholdMarkers": true,
"sparkline": true
},
"pluginVersion": "13.0.0-pre",
"targets": [
{
"refId": "A",
"scenarioId": "random_walk",
"seriesCount": 0
}
],
"title": "Numeric, no series",
"type": "gauge"
},
{
"datasource": {
"type": "grafana-testdata-datasource"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": 0
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 6,
"x": 6,
"y": 67
},
"id": 37,
"options": {
"barWidthFactor": 0.5,
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"segmentCount": 1,
"segmentSpacing": 0.3,
"shape": "gauge",
"showThresholdLabels": false,
"showThresholdMarkers": true,
"sparkline": true
},
"pluginVersion": "13.0.0-pre",
"targets": [
{
"refId": "A",
"scenarioId": "logs"
}
],
"title": "Non-numeric",
"type": "gauge"
}
],
"preload": false,

View File

@@ -377,10 +377,10 @@ github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyY
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
-github.com/centrifugal/centrifuge v0.37.2 h1:rerQNvDfYN2FZEkVtb/hvGV7SIrJfEQrKF3MaE8GDlo=
-github.com/centrifugal/centrifuge v0.37.2/go.mod h1:aj4iRJGhzi3SlL8iUtVezxway1Xf8g+hmNQkLLO7sS8=
-github.com/centrifugal/protocol v0.16.2 h1:KoIHgDeX1fFxyxQoKW+6E8ZTCf5mwGm8JyGoJ5NBMbQ=
-github.com/centrifugal/protocol v0.16.2/go.mod h1:Q7OpS/8HMXDnL7f9DpNx24IhG96MP88WPpVTTCdrokI=
+github.com/centrifugal/centrifuge v0.38.0 h1:UJTowwc5lSwnpvd3vbrTseODbU7osSggN67RTrJ8EfQ=
+github.com/centrifugal/centrifuge v0.38.0/go.mod h1:rcZLARnO5GXOeE9qG7iIPMvERxESespqkSX4cGLCAzo=
+github.com/centrifugal/protocol v0.17.0 h1:hD0WczyiG7zrVJcgkQsd5/nhfFXt0Y04SJHV2Z7B1rg=
+github.com/centrifugal/protocol v0.17.0/go.mod h1:9MdiYyjw5Bw1+d5Sp4Y0NK+qiuTNyd88nrHJsUUh8k4=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
@@ -1376,11 +1376,13 @@ github.com/puzpuzpuz/xsync/v2 v2.5.1 h1:mVGYAvzDSu52+zaGyNjC+24Xw2bQi3kTr4QJ6N9p
github.com/puzpuzpuz/xsync/v2 v2.5.1/go.mod h1:gD2H2krq/w52MfPLE+Uy64TzJDVY7lP2znR9qmR35kU=
github.com/puzpuzpuz/xsync/v4 v4.2.0 h1:dlxm77dZj2c3rxq0/XNvvUKISAmovoXF4a4qM6Wvkr0=
github.com/puzpuzpuz/xsync/v4 v4.2.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo=
+github.com/quagmt/udecimal v1.9.0 h1:TLuZiFeg0HhS6X8VDa78Y6XTaitZZfh+z5q4SXMzpDQ=
+github.com/quagmt/udecimal v1.9.0/go.mod h1:ScmJ/xTGZcEoYiyMMzgDLn79PEJHcMBiJ4NNRT3FirA=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/redis/go-redis/v9 v9.14.0 h1:u4tNCjXOyzfgeLN+vAZaW1xUooqWDqVEsZN0U01jfAE=
github.com/redis/go-redis/v9 v9.14.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
-github.com/redis/rueidis v1.0.64 h1:XqgbueDuNV3qFdVdQwAHJl1uNt90zUuAJuzqjH4cw6Y=
-github.com/redis/rueidis v1.0.64/go.mod h1:Lkhr2QTgcoYBhxARU7kJRO8SyVlgUuEkcJO1Y8MCluA=
+github.com/redis/rueidis v1.0.68 h1:gept0E45JGxVigWb3zoWHvxEc4IOC7kc4V/4XvN8eG8=
+github.com/redis/rueidis v1.0.68/go.mod h1:Lkhr2QTgcoYBhxARU7kJRO8SyVlgUuEkcJO1Y8MCluA=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=

View File

@@ -22,13 +22,40 @@ v0alpha1: {
serviceaccountv0alpha1,
externalGroupMappingv0alpha1
]
routes: {
namespaced: {
"/searchUsers": {
"GET": {
request: {
query: {
query?: string
limit?: int64 | 10
offset?: int64 | 0
page?: int64 | 1
}
}
response: {
offset: int64
totalHits: int64
hits: [...#UserHit]
queryCost: float64
maxScore: float64
}
responseMetadata: {
typeMeta: false
objectMeta: false
}
}
}
"/searchTeams": {
"GET": {
request: {
query: {
query?: string
limit?: int64 | 50
offset?: int64 | 0
page?: int64 | 1
}
}
response: {
@@ -51,3 +78,15 @@ v0alpha1: {
}
}
}
#UserHit: {
name: string
title: string
login: string
email: string
role: string
lastSeenAt: int64
lastSeenAtAge: string
provisioned: bool
score: float64
}

View File

@@ -29,6 +29,9 @@ userv0alpha1: userKind & {
// }
schema: {
spec: v0alpha1.UserSpec
+status: {
+lastSeenAt: int64 | 0
+}
}
// TODO: Uncomment when the custom routes implementation is done
// routes: {

View File

@@ -3,7 +3,10 @@
package v0alpha1
type GetSearchTeamsRequestParams struct {
-Query *string `json:"query,omitempty"`
+Query *string `json:"query,omitempty"`
+Limit int64 `json:"limit,omitempty"`
+Offset int64 `json:"offset,omitempty"`
+Page int64 `json:"page,omitempty"`
}
// NewGetSearchTeamsRequestParams creates a new GetSearchTeamsRequestParams object.

View File

@@ -0,0 +1,33 @@
// Code generated - EDITING IS FUTILE. DO NOT EDIT.
package v0alpha1
import (
"github.com/grafana/grafana-app-sdk/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
)
type GetSearchUsersRequestParamsObject struct {
metav1.TypeMeta `json:",inline"`
GetSearchUsersRequestParams `json:",inline"`
}
func NewGetSearchUsersRequestParamsObject() *GetSearchUsersRequestParamsObject {
return &GetSearchUsersRequestParamsObject{}
}
func (o *GetSearchUsersRequestParamsObject) DeepCopyObject() runtime.Object {
dst := NewGetSearchUsersRequestParamsObject()
o.DeepCopyInto(dst)
return dst
}
func (o *GetSearchUsersRequestParamsObject) DeepCopyInto(dst *GetSearchUsersRequestParamsObject) {
dst.TypeMeta.APIVersion = o.TypeMeta.APIVersion
dst.TypeMeta.Kind = o.TypeMeta.Kind
dstGetSearchUsersRequestParams := GetSearchUsersRequestParams{}
_ = resource.CopyObjectInto(&dstGetSearchUsersRequestParams, &o.GetSearchUsersRequestParams)
dst.GetSearchUsersRequestParams = dstGetSearchUsersRequestParams
}
var _ runtime.Object = NewGetSearchUsersRequestParamsObject()

View File

@@ -0,0 +1,15 @@
// Code generated - EDITING IS FUTILE. DO NOT EDIT.
package v0alpha1
type GetSearchUsersRequestParams struct {
Query *string `json:"query,omitempty"`
Limit int64 `json:"limit,omitempty"`
Offset int64 `json:"offset,omitempty"`
Page int64 `json:"page,omitempty"`
}
// NewGetSearchUsersRequestParams creates a new GetSearchUsersRequestParams object.
func NewGetSearchUsersRequestParams() *GetSearchUsersRequestParams {
return &GetSearchUsersRequestParams{}
}

View File

@@ -0,0 +1,37 @@
// Code generated - EDITING IS FUTILE. DO NOT EDIT.
package v0alpha1
// +k8s:openapi-gen=true
type UserHit struct {
Name string `json:"name"`
Title string `json:"title"`
Login string `json:"login"`
Email string `json:"email"`
Role string `json:"role"`
LastSeenAt int64 `json:"lastSeenAt"`
LastSeenAtAge string `json:"lastSeenAtAge"`
Provisioned bool `json:"provisioned"`
Score float64 `json:"score"`
}
// NewUserHit creates a new UserHit object.
func NewUserHit() *UserHit {
return &UserHit{}
}
// +k8s:openapi-gen=true
type GetSearchUsers struct {
Offset int64 `json:"offset"`
TotalHits int64 `json:"totalHits"`
Hits []UserHit `json:"hits"`
QueryCost float64 `json:"queryCost"`
MaxScore float64 `json:"maxScore"`
}
// NewGetSearchUsers creates a new GetSearchUsers object.
func NewGetSearchUsers() *GetSearchUsers {
return &GetSearchUsers{
Hits: []UserHit{},
}
}
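The `GetSearchUsers`/`UserHit` types map one-to-one onto the `/searchUsers` JSON payload via their struct tags. A self-contained sketch of decoding such a payload — the types are trimmed local copies (the real `UserHit` carries more fields) and the response body is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed local copies of the generated types, enough to show the decode path.
type UserHit struct {
	Name  string  `json:"name"`
	Login string  `json:"login"`
	Score float64 `json:"score"`
}

type GetSearchUsers struct {
	Offset    int64     `json:"offset"`
	TotalHits int64     `json:"totalHits"`
	Hits      []UserHit `json:"hits"`
	MaxScore  float64   `json:"maxScore"`
}

// decode unmarshals a /searchUsers response body into the result type.
func decode(body []byte) (GetSearchUsers, error) {
	var res GetSearchUsers
	err := json.Unmarshal(body, &res)
	return res, err
}

func main() {
	// Hypothetical response body; field names match the generated JSON tags.
	body := []byte(`{"offset":0,"totalHits":1,"maxScore":1.5,"hits":[{"name":"u1","login":"admin","score":1.5}]}`)
	res, err := decode(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(res.TotalHits, res.Hits[0].Login)
}
```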

View File

@@ -4,6 +4,7 @@ import (
"context"
"github.com/grafana/grafana-app-sdk/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type UserClient struct {
@@ -75,6 +76,24 @@ func (c *UserClient) Patch(ctx context.Context, identifier resource.Identifier,
return c.client.Patch(ctx, identifier, req, opts)
}
func (c *UserClient) UpdateStatus(ctx context.Context, identifier resource.Identifier, newStatus UserStatus, opts resource.UpdateOptions) (*User, error) {
return c.client.Update(ctx, &User{
TypeMeta: metav1.TypeMeta{
Kind: UserKind().Kind(),
APIVersion: GroupVersion.Identifier(),
},
ObjectMeta: metav1.ObjectMeta{
ResourceVersion: opts.ResourceVersion,
Namespace: identifier.Namespace,
Name: identifier.Name,
},
Status: newStatus,
}, resource.UpdateOptions{
Subresource: "status",
ResourceVersion: opts.ResourceVersion,
})
}
func (c *UserClient) Delete(ctx context.Context, identifier resource.Identifier, opts resource.DeleteOptions) error {
return c.client.Delete(ctx, identifier, opts)
}

View File

@@ -21,11 +21,14 @@ type User struct {
// Spec is the spec of the User
Spec UserSpec `json:"spec" yaml:"spec"`
Status UserStatus `json:"status" yaml:"status"`
}
func NewUser() *User {
return &User{
Spec: *NewUserSpec(),
Spec: *NewUserSpec(),
Status: *NewUserStatus(),
}
}
@@ -43,11 +46,15 @@ func (o *User) SetSpec(spec any) error {
}
func (o *User) GetSubresources() map[string]any {
return map[string]any{}
return map[string]any{
"status": o.Status,
}
}
func (o *User) GetSubresource(name string) (any, bool) {
switch name {
case "status":
return o.Status, true
default:
return nil, false
}
@@ -55,6 +62,13 @@ func (o *User) GetSubresource(name string) (any, bool) {
func (o *User) SetSubresource(name string, value any) error {
switch name {
case "status":
cast, ok := value.(UserStatus)
if !ok {
return fmt.Errorf("cannot set status type %#v, not of type UserStatus", value)
}
o.Status = cast
return nil
default:
return fmt.Errorf("subresource '%s' does not exist", name)
}
@@ -226,6 +240,7 @@ func (o *User) DeepCopyInto(dst *User) {
dst.TypeMeta.Kind = o.TypeMeta.Kind
o.ObjectMeta.DeepCopyInto(&dst.ObjectMeta)
o.Spec.DeepCopyInto(&dst.Spec)
o.Status.DeepCopyInto(&dst.Status)
}
// Interface compliance compile-time check
@@ -297,3 +312,15 @@ func (s *UserSpec) DeepCopy() *UserSpec {
func (s *UserSpec) DeepCopyInto(dst *UserSpec) {
resource.CopyObjectInto(dst, s)
}
// DeepCopy creates a full deep copy of UserStatus
func (s *UserStatus) DeepCopy() *UserStatus {
cpy := &UserStatus{}
s.DeepCopyInto(cpy)
return cpy
}
// DeepCopyInto deep copies UserStatus into another UserStatus object
func (s *UserStatus) DeepCopyInto(dst *UserStatus) {
resource.CopyObjectInto(dst, s)
}
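The `GetSubresource`/`SetSubresource` pair is the generic access path the app-sdk uses for the new status subresource: setting requires an exact `UserStatus` value, and unknown names are rejected. A standalone sketch of the same pattern, with the types simplified to just what the example needs:

```go
package main

import "fmt"

// Minimal stand-ins for the generated types.
type UserStatus struct {
	LastSeenAt int64
}

type User struct {
	Status UserStatus
}

// SetSubresource mirrors the generated method: only "status" exists, and the
// value must already be a UserStatus.
func (o *User) SetSubresource(name string, value any) error {
	switch name {
	case "status":
		cast, ok := value.(UserStatus)
		if !ok {
			return fmt.Errorf("cannot set status type %#v, not of type UserStatus", value)
		}
		o.Status = cast
		return nil
	default:
		return fmt.Errorf("subresource '%s' does not exist", name)
	}
}

func (o *User) GetSubresource(name string) (any, bool) {
	if name == "status" {
		return o.Status, true
	}
	return nil, false
}

func main() {
	u := &User{}
	_ = u.SetSubresource("status", UserStatus{LastSeenAt: 1700000000})
	got, ok := u.GetSubresource("status")
	fmt.Println(ok, got.(UserStatus).LastSeenAt)
}
```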

View File

@@ -2,43 +2,12 @@
package v0alpha1
// +k8s:openapi-gen=true
type UserstatusOperatorState struct {
// lastEvaluation is the ResourceVersion last evaluated
LastEvaluation string `json:"lastEvaluation"`
// state describes the state of the lastEvaluation.
// It is limited to three possible states for machine evaluation.
State UserStatusOperatorStateState `json:"state"`
// descriptiveState is an optional more descriptive state field which has no requirements on format
DescriptiveState *string `json:"descriptiveState,omitempty"`
// details contains any extra information that is operator-specific
Details map[string]interface{} `json:"details,omitempty"`
}
// NewUserstatusOperatorState creates a new UserstatusOperatorState object.
func NewUserstatusOperatorState() *UserstatusOperatorState {
return &UserstatusOperatorState{}
}
// +k8s:openapi-gen=true
type UserStatus struct {
// operatorStates is a map of operator ID to operator state evaluations.
// Any operator which consumes this kind SHOULD add its state evaluation information to this field.
OperatorStates map[string]UserstatusOperatorState `json:"operatorStates,omitempty"`
// additionalFields is reserved for future use
AdditionalFields map[string]interface{} `json:"additionalFields,omitempty"`
LastSeenAt int64 `json:"lastSeenAt"`
}
// NewUserStatus creates a new UserStatus object.
func NewUserStatus() *UserStatus {
return &UserStatus{}
}
// +k8s:openapi-gen=true
type UserStatusOperatorStateState string
const (
UserStatusOperatorStateStateSuccess UserStatusOperatorStateState = "success"
UserStatusOperatorStateStateInProgress UserStatusOperatorStateState = "in_progress"
UserStatusOperatorStateStateFailed UserStatusOperatorStateState = "failed"
)

View File

@@ -21,6 +21,7 @@ func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenA
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GetGroupsBody": schema_pkg_apis_iam_v0alpha1_GetGroupsBody(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GetSearchTeams": schema_pkg_apis_iam_v0alpha1_GetSearchTeams(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GetSearchTeamsBody": schema_pkg_apis_iam_v0alpha1_GetSearchTeamsBody(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GetSearchUsers": schema_pkg_apis_iam_v0alpha1_GetSearchUsers(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GlobalRole": schema_pkg_apis_iam_v0alpha1_GlobalRole(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GlobalRoleBinding": schema_pkg_apis_iam_v0alpha1_GlobalRoleBinding(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GlobalRoleBindingList": schema_pkg_apis_iam_v0alpha1_GlobalRoleBindingList(ref),
@@ -72,10 +73,10 @@ func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenA
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.TeamStatus": schema_pkg_apis_iam_v0alpha1_TeamStatus(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.TeamstatusOperatorState": schema_pkg_apis_iam_v0alpha1_TeamstatusOperatorState(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.User": schema_pkg_apis_iam_v0alpha1_User(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserHit": schema_pkg_apis_iam_v0alpha1_UserHit(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserList": schema_pkg_apis_iam_v0alpha1_UserList(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserSpec": schema_pkg_apis_iam_v0alpha1_UserSpec(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserStatus": schema_pkg_apis_iam_v0alpha1_UserStatus(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserstatusOperatorState": schema_pkg_apis_iam_v0alpha1_UserstatusOperatorState(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.VersionsV0alpha1Kinds7RoutesGroupsGETResponseExternalGroupMapping": schema_pkg_apis_iam_v0alpha1_VersionsV0alpha1Kinds7RoutesGroupsGETResponseExternalGroupMapping(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.VersionsV0alpha1RoutesNamespacedSearchTeamsGETResponseTeamHit": schema_pkg_apis_iam_v0alpha1_VersionsV0alpha1RoutesNamespacedSearchTeamsGETResponseTeamHit(ref),
}
@@ -688,6 +689,62 @@ func schema_pkg_apis_iam_v0alpha1_GetSearchTeamsBody(ref common.ReferenceCallbac
}
}
func schema_pkg_apis_iam_v0alpha1_GetSearchUsers(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"offset": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
"totalHits": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
"hits": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref("github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserHit"),
},
},
},
},
},
"queryCost": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"number"},
Format: "double",
},
},
"maxScore": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"number"},
Format: "double",
},
},
},
Required: []string{"offset", "totalHits", "hits", "queryCost", "maxScore"},
},
},
Dependencies: []string{
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserHit"},
}
}
func schema_pkg_apis_iam_v0alpha1_GlobalRole(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
@@ -2833,12 +2890,94 @@ func schema_pkg_apis_iam_v0alpha1_User(ref common.ReferenceCallback) common.Open
Ref: ref("github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserSpec"),
},
},
"status": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref("github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserStatus"),
},
},
},
Required: []string{"metadata", "spec"},
Required: []string{"metadata", "spec", "status"},
},
},
Dependencies: []string{
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserSpec", "k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta"},
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserSpec", "github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserStatus", "k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta"},
}
}
func schema_pkg_apis_iam_v0alpha1_UserHit(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"title": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"login": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"email": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"role": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"lastSeenAt": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
"lastSeenAtAge": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"provisioned": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"score": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"number"},
Format: "double",
},
},
},
Required: []string{"name", "title", "login", "email", "role", "lastSeenAt", "lastSeenAtAge", "provisioned", "score"},
},
},
}
}
@@ -2965,90 +3104,15 @@ func schema_pkg_apis_iam_v0alpha1_UserStatus(ref common.ReferenceCallback) commo
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"operatorStates": {
"lastSeenAt": {
SchemaProps: spec.SchemaProps{
Description: "operatorStates is a map of operator ID to operator state evaluations. Any operator which consumes this kind SHOULD add its state evaluation information to this field.",
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref("github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserstatusOperatorState"),
},
},
},
},
},
"additionalFields": {
SchemaProps: spec.SchemaProps{
Description: "additionalFields is reserved for future use",
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Format: "",
},
},
},
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
},
},
},
Dependencies: []string{
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserstatusOperatorState"},
}
}
func schema_pkg_apis_iam_v0alpha1_UserstatusOperatorState(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"lastEvaluation": {
SchemaProps: spec.SchemaProps{
Description: "lastEvaluation is the ResourceVersion last evaluated",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"state": {
SchemaProps: spec.SchemaProps{
Description: "state describes the state of the lastEvaluation. It is limited to three possible states for machine evaluation.",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"descriptiveState": {
SchemaProps: spec.SchemaProps{
Description: "descriptiveState is an optional more descriptive state field which has no requirements on format",
Type: []string{"string"},
Format: "",
},
},
"details": {
SchemaProps: spec.SchemaProps{
Description: "details contains any extra information that is operator-specific",
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Format: "",
},
},
},
},
},
},
Required: []string{"lastEvaluation", "state"},
Required: []string{"lastSeenAt"},
},
},
}

View File

@@ -173,6 +173,36 @@ var appManifestData = app.ManifestData{
Parameters: []*spec3.Parameter{
{
ParameterProps: spec3.ParameterProps{
Name: "limit",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{},
},
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "offset",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{},
},
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "page",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{},
},
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "query",
@@ -261,6 +291,118 @@ var appManifestData = app.ManifestData{
},
},
},
"/searchUsers": {
Get: &spec3.Operation{
OperationProps: spec3.OperationProps{
OperationId: "getSearchUsers",
Parameters: []*spec3.Parameter{
{
ParameterProps: spec3.ParameterProps{
Name: "limit",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{},
},
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "offset",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{},
},
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "page",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{},
},
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "query",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
},
},
},
Responses: &spec3.Responses{
ResponsesProps: spec3.ResponsesProps{
Default: &spec3.Response{
ResponseProps: spec3.ResponseProps{
Description: "Default OK response",
Content: map[string]*spec3.MediaType{
"application/json": {
MediaTypeProps: spec3.MediaTypeProps{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"hits": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Ref: spec.MustCreateRef("#/components/schemas/getSearchUsersUserHit"),
}},
},
},
},
"maxScore": {
SchemaProps: spec.SchemaProps{
Type: []string{"number"},
},
},
"offset": {
SchemaProps: spec.SchemaProps{
Type: []string{"integer"},
},
},
"queryCost": {
SchemaProps: spec.SchemaProps{
Type: []string{"number"},
},
},
"totalHits": {
SchemaProps: spec.SchemaProps{
Type: []string{"integer"},
},
},
},
Required: []string{
"offset",
"totalHits",
"hits",
"queryCost",
"maxScore",
},
}},
}},
},
},
},
}},
},
},
},
},
Cluster: map[string]spec3.PathProps{},
Schemas: map[string]spec.Schema{
@@ -303,6 +445,69 @@ var appManifestData = app.ManifestData{
},
},
},
"getSearchUsersUserHit": {
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"email": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
"lastSeenAt": {
SchemaProps: spec.SchemaProps{
Type: []string{"integer"},
},
},
"lastSeenAtAge": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
"login": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
"name": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
"provisioned": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
},
},
"role": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
"score": {
SchemaProps: spec.SchemaProps{
Type: []string{"number"},
},
},
"title": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
},
Required: []string{
"name",
"title",
"login",
"email",
"role",
"lastSeenAt",
"lastSeenAtAge",
"provisioned",
"score",
},
},
},
},
},
},
@@ -342,6 +547,7 @@ var customRouteToGoResponseType = map[string]any{
"v0alpha1|Team|groups|GET": v0alpha1.GetGroups{},
"v0alpha1||<namespace>/searchTeams|GET": v0alpha1.GetSearchTeams{},
"v0alpha1||<namespace>/searchUsers|GET": v0alpha1.GetSearchUsers{},
}
// ManifestCustomRouteResponsesAssociator returns the associated response go type for a given kind, version, custom route path, and method, if one exists.

View File

@@ -4,6 +4,8 @@ import (
"context"
"fmt"
"github.com/prometheus/client_golang/prometheus"
"github.com/grafana/grafana-app-sdk/app"
"github.com/grafana/grafana-app-sdk/logging"
"github.com/grafana/grafana-app-sdk/operator"
@@ -12,7 +14,6 @@ import (
foldersKind "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
"github.com/grafana/grafana/apps/iam/pkg/reconcilers"
"github.com/grafana/grafana/pkg/services/authz"
"github.com/prometheus/client_golang/prometheus"
)
var appManifestData = app.ManifestData{
@@ -78,7 +79,7 @@ func New(cfg app.Config) (app.App, error) {
folderReconciler, err := reconcilers.NewFolderReconciler(reconcilers.ReconcilerConfig{
ZanzanaCfg: appSpecificConfig.ZanzanaClientCfg,
Metrics: metrics,
})
}, appSpecificConfig.MetricsRegisterer)
if err != nil {
return nil, fmt.Errorf("unable to create FolderReconciler: %w", err)
}

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"time"
"github.com/prometheus/client_golang/prometheus"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
@@ -35,9 +36,9 @@ type FolderReconciler struct {
metrics *ReconcilerMetrics
}
func NewFolderReconciler(cfg ReconcilerConfig) (operator.Reconciler, error) {
func NewFolderReconciler(cfg ReconcilerConfig, reg prometheus.Registerer) (operator.Reconciler, error) {
// Create Zanzana client
zanzanaClient, err := authz.NewRemoteZanzanaClient("*", cfg.ZanzanaCfg)
zanzanaClient, err := authz.NewRemoteZanzanaClient(cfg.ZanzanaCfg, reg)
if err != nil {
return nil, fmt.Errorf("unable to create zanzana client: %w", err)

View File

@@ -133,6 +133,12 @@ type ExportJobOptions struct {
// FIXME: we should validate this in admission hooks
// Prefix in target file system
Path string `json:"path,omitempty"`
// Resources to export
// This option has been created because currently the frontend does not use
// standardized app platform APIs. For performance and API consistency reasons, the preferred option
// is to use the resources.
Resources []ResourceRef `json:"resources,omitempty"`
}
type MigrateJobOptions struct {
@@ -198,6 +204,7 @@ type JobStatus struct {
Finished int64 `json:"finished,omitempty"`
Message string `json:"message,omitempty"`
Errors []string `json:"errors,omitempty"`
Warnings []string `json:"warnings,omitempty"`
// Optional value 0-100 that can be set while running
Progress float64 `json:"progress,omitempty"`
@@ -225,18 +232,20 @@ type JobResourceSummary struct {
Kind string `json:"kind,omitempty"`
Total int64 `json:"total,omitempty"` // the count (if known)
Create int64 `json:"create,omitempty"`
Update int64 `json:"update,omitempty"`
Delete int64 `json:"delete,omitempty"`
Write int64 `json:"write,omitempty"` // Create or update (export)
Error int64 `json:"error,omitempty"` // The error count
Create int64 `json:"create,omitempty"`
Update int64 `json:"update,omitempty"`
Delete int64 `json:"delete,omitempty"`
Write int64 `json:"write,omitempty"` // Create or update (export)
Error int64 `json:"error,omitempty"` // The error count
Warning int64 `json:"warning,omitempty"` // The warning count
// No action required (useful for sync)
Noop int64 `json:"noop,omitempty"`
// Report errors for this resource type
// Report errors/warnings for this resource type
// This may not be an exhaustive list and recommend looking at the logs for more info
Errors []string `json:"errors,omitempty"`
Errors []string `json:"errors,omitempty"`
Warnings []string `json:"warnings,omitempty"`
}
// HistoricJob is an append only log, saving all jobs that have been processed.

View File

@@ -88,6 +88,11 @@ func (in *ErrorDetails) DeepCopy() *ErrorDetails {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ExportJobOptions) DeepCopyInto(out *ExportJobOptions) {
*out = *in
if in.Resources != nil {
in, out := &in.Resources, &out.Resources
*out = make([]ResourceRef, len(*in))
copy(*out, *in)
}
return
}
@@ -401,6 +406,11 @@ func (in *JobResourceSummary) DeepCopyInto(out *JobResourceSummary) {
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Warnings != nil {
in, out := &in.Warnings, &out.Warnings
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}
@@ -425,7 +435,7 @@ func (in *JobSpec) DeepCopyInto(out *JobSpec) {
if in.Push != nil {
in, out := &in.Push, &out.Push
*out = new(ExportJobOptions)
**out = **in
(*in).DeepCopyInto(*out)
}
if in.Pull != nil {
in, out := &in.Pull, &out.Pull
@@ -468,6 +478,11 @@ func (in *JobStatus) DeepCopyInto(out *JobStatus) {
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Warnings != nil {
in, out := &in.Warnings, &out.Warnings
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Summary != nil {
in, out := &in.Summary, &out.Summary
*out = make([]*JobResourceSummary, len(*in))

View File

@@ -258,9 +258,25 @@ func schema_pkg_apis_provisioning_v0alpha1_ExportJobOptions(ref common.Reference
Format: "",
},
},
"resources": {
SchemaProps: spec.SchemaProps{
Description: "Resources to export This option has been created because currently the frontend does not use standardized app platform APIs. For performance and API consistency reasons, the preferred option is to use the resources.",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref("github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1.ResourceRef"),
},
},
},
},
},
},
},
},
Dependencies: []string{
"github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1.ResourceRef"},
}
}
@@ -889,6 +905,13 @@ func schema_pkg_apis_provisioning_v0alpha1_JobResourceSummary(ref common.Referen
Format: "int64",
},
},
"warning": {
SchemaProps: spec.SchemaProps{
Description: "The warning count",
Type: []string{"integer"},
Format: "int64",
},
},
"noop": {
SchemaProps: spec.SchemaProps{
Description: "No action required (useful for sync)",
@@ -898,7 +921,7 @@ func schema_pkg_apis_provisioning_v0alpha1_JobResourceSummary(ref common.Referen
},
"errors": {
SchemaProps: spec.SchemaProps{
Description: "Report errors for this resource type This may not be an exhaustive list and recommend looking at the logs for more info",
Description: "Report errors/warnings for this resource type This may not be an exhaustive list and recommend looking at the logs for more info",
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
@@ -911,6 +934,20 @@ func schema_pkg_apis_provisioning_v0alpha1_JobResourceSummary(ref common.Referen
},
},
},
"warnings": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
},
},
},
@@ -1029,6 +1066,20 @@ func schema_pkg_apis_provisioning_v0alpha1_JobStatus(ref common.ReferenceCallbac
},
},
},
"warnings": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
},
},
},
"progress": {
SchemaProps: spec.SchemaProps{
Description: "Optional value 0-100 that can be set while running",

View File

@@ -1,10 +1,13 @@
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,DeleteJobOptions,Paths
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,DeleteJobOptions,Resources
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,ExportJobOptions,Resources
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,FileList,Items
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,HistoryList,Items
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,JobResourceSummary,Errors
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,JobResourceSummary,Warnings
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,JobStatus,Errors
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,JobStatus,Summary
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,JobStatus,Warnings
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,ManagerStats,Stats
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,MoveJobOptions,Paths
API rule violation: list_type_missing,github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1,MoveJobOptions,Resources

View File

@@ -7,10 +7,11 @@ package v0alpha1
// ExportJobOptionsApplyConfiguration represents a declarative configuration of the ExportJobOptions type for use
// with apply.
type ExportJobOptionsApplyConfiguration struct {
Message *string `json:"message,omitempty"`
Folder *string `json:"folder,omitempty"`
Branch *string `json:"branch,omitempty"`
Path *string `json:"path,omitempty"`
Message *string `json:"message,omitempty"`
Folder *string `json:"folder,omitempty"`
Branch *string `json:"branch,omitempty"`
Path *string `json:"path,omitempty"`
Resources []ResourceRefApplyConfiguration `json:"resources,omitempty"`
}
// ExportJobOptionsApplyConfiguration constructs a declarative configuration of the ExportJobOptions type for use with
@@ -50,3 +51,16 @@ func (b *ExportJobOptionsApplyConfiguration) WithPath(value string) *ExportJobOp
b.Path = &value
return b
}
// WithResources adds the given value to the Resources field in the declarative configuration
// and returns the receiver, so that objects can be built by chaining "With" function invocations.
// If called multiple times, values provided by each call will be appended to the Resources field.
func (b *ExportJobOptionsApplyConfiguration) WithResources(values ...*ResourceRefApplyConfiguration) *ExportJobOptionsApplyConfiguration {
for i := range values {
if values[i] == nil {
panic("nil value passed to WithResources")
}
b.Resources = append(b.Resources, *values[i])
}
return b
}
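The `With*` helpers are plain set-or-append builders that return the receiver, so calls chain. A minimal sketch of that style on local stand-ins for the apply configurations (only `Path` and `Resources` are modeled here):

```go
package main

import "fmt"

// Local stand-ins for the generated apply configurations.
type ResourceRefApplyConfiguration struct {
	Name *string
}

type ExportJobOptionsApplyConfiguration struct {
	Path      *string
	Resources []ResourceRefApplyConfiguration
}

// WithPath sets Path; the last call wins.
func (b *ExportJobOptionsApplyConfiguration) WithPath(value string) *ExportJobOptionsApplyConfiguration {
	b.Path = &value
	return b
}

// WithResources appends each value; repeated calls accumulate.
func (b *ExportJobOptionsApplyConfiguration) WithResources(values ...*ResourceRefApplyConfiguration) *ExportJobOptionsApplyConfiguration {
	for i := range values {
		if values[i] == nil {
			panic("nil value passed to WithResources")
		}
		b.Resources = append(b.Resources, *values[i])
	}
	return b
}

func main() {
	name := "my-dashboard"
	opts := (&ExportJobOptionsApplyConfiguration{}).
		WithPath("grafana/").
		WithResources(&ResourceRefApplyConfiguration{Name: &name})
	fmt.Println(*opts.Path, len(opts.Resources))
}
```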

View File

@@ -7,16 +7,18 @@ package v0alpha1
// JobResourceSummaryApplyConfiguration represents a declarative configuration of the JobResourceSummary type for use
// with apply.
type JobResourceSummaryApplyConfiguration struct {
Group *string `json:"group,omitempty"`
Kind *string `json:"kind,omitempty"`
Total *int64 `json:"total,omitempty"`
Create *int64 `json:"create,omitempty"`
Update *int64 `json:"update,omitempty"`
Delete *int64 `json:"delete,omitempty"`
Write *int64 `json:"write,omitempty"`
Error *int64 `json:"error,omitempty"`
Noop *int64 `json:"noop,omitempty"`
Errors []string `json:"errors,omitempty"`
Group *string `json:"group,omitempty"`
Kind *string `json:"kind,omitempty"`
Total *int64 `json:"total,omitempty"`
Create *int64 `json:"create,omitempty"`
Update *int64 `json:"update,omitempty"`
Delete *int64 `json:"delete,omitempty"`
Write *int64 `json:"write,omitempty"`
Error *int64 `json:"error,omitempty"`
Warning *int64 `json:"warning,omitempty"`
Noop *int64 `json:"noop,omitempty"`
Errors []string `json:"errors,omitempty"`
Warnings []string `json:"warnings,omitempty"`
}
// JobResourceSummaryApplyConfiguration constructs a declarative configuration of the JobResourceSummary type for use with
@@ -89,6 +91,14 @@ func (b *JobResourceSummaryApplyConfiguration) WithError(value int64) *JobResour
return b
}
// WithWarning sets the Warning field in the declarative configuration to the given value
// and returns the receiver, so that objects can be built by chaining "With" function invocations.
// If called multiple times, the Warning field is set to the value of the last call.
func (b *JobResourceSummaryApplyConfiguration) WithWarning(value int64) *JobResourceSummaryApplyConfiguration {
b.Warning = &value
return b
}
// WithNoop sets the Noop field in the declarative configuration to the given value
// and returns the receiver, so that objects can be built by chaining "With" function invocations.
// If called multiple times, the Noop field is set to the value of the last call.
@@ -106,3 +116,13 @@ func (b *JobResourceSummaryApplyConfiguration) WithErrors(values ...string) *Job
}
return b
}
// WithWarnings adds the given value to the Warnings field in the declarative configuration
// and returns the receiver, so that objects can be built by chaining "With" function invocations.
// If called multiple times, values provided by each call will be appended to the Warnings field.
func (b *JobResourceSummaryApplyConfiguration) WithWarnings(values ...string) *JobResourceSummaryApplyConfiguration {
for i := range values {
b.Warnings = append(b.Warnings, values[i])
}
return b
}

View File

@@ -16,6 +16,7 @@ type JobStatusApplyConfiguration struct {
Finished *int64 `json:"finished,omitempty"`
Message *string `json:"message,omitempty"`
Errors []string `json:"errors,omitempty"`
Warnings []string `json:"warnings,omitempty"`
Progress *float64 `json:"progress,omitempty"`
Summary []*provisioningv0alpha1.JobResourceSummary `json:"summary,omitempty"`
URLs *RepositoryURLsApplyConfiguration `json:"url,omitempty"`
@@ -69,6 +70,16 @@ func (b *JobStatusApplyConfiguration) WithErrors(values ...string) *JobStatusApp
return b
}
// WithWarnings adds the given value to the Warnings field in the declarative configuration
// and returns the receiver, so that objects can be built by chaining "With" function invocations.
// If called multiple times, values provided by each call will be appended to the Warnings field.
func (b *JobStatusApplyConfiguration) WithWarnings(values ...string) *JobStatusApplyConfiguration {
for i := range values {
b.Warnings = append(b.Warnings, values[i])
}
return b
}
// WithProgress sets the Progress field in the declarative configuration to the given value
// and returns the receiver, so that objects can be built by chaining "With" function invocations.
// If called multiple times, the Progress field is set to the value of the last call.

View File

@@ -7,6 +7,7 @@ import (
provisioning "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
"github.com/grafana/grafana/apps/provisioning/pkg/repository/git"
"github.com/grafana/grafana/apps/provisioning/pkg/safepath"
"github.com/grafana/grafana/pkg/registry/apis/provisioning/resources"
)
// ValidateJob performs validation on the Job specification and returns an error if validation fails
@@ -99,6 +100,40 @@ func validateExportJobOptions(opts *provisioning.ExportJobOptions) field.ErrorLi
}
}
// Validate resources if specified
if len(opts.Resources) > 0 {
for i, r := range opts.Resources {
resourcePath := field.NewPath("spec", "push", "resources").Index(i)
// Validate required fields
if r.Name == "" {
list = append(list, field.Required(resourcePath.Child("name"), "resource name is required"))
}
if r.Kind == "" {
list = append(list, field.Required(resourcePath.Child("kind"), "resource kind is required"))
}
if r.Group == "" {
list = append(list, field.Required(resourcePath.Child("group"), "resource group is required"))
}
// Validate that folders are not allowed
if r.Kind == resources.FolderKind.Kind || r.Group == resources.FolderResource.Group {
list = append(list, field.Invalid(resourcePath, r, "folders are not supported for export"))
continue // Skip further validation for folders
}
// Validate that only supported resources are allowed
// Currently only Dashboard resources are supported (folders are rejected above)
if r.Kind != "" && r.Group != "" {
// Check if it's a Dashboard resource
isDashboard := r.Group == resources.DashboardResource.Group && r.Kind == "Dashboard"
if !isDashboard {
list = append(list, field.Invalid(resourcePath, r, "resource type is not supported for export"))
}
}
}
}
return list
}

View File

@@ -575,6 +575,242 @@ func TestValidateJob(t *testing.T) {
},
wantErr: false,
},
{
name: "push action with valid dashboard resources",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Resources: []provisioning.ResourceRef{
{
Name: "dashboard-1",
Kind: "Dashboard",
Group: "dashboard.grafana.app",
},
{
Name: "dashboard-2",
Kind: "Dashboard",
Group: "dashboard.grafana.app",
},
},
Path: "dashboards/",
Message: "Export dashboards",
},
},
},
wantErr: false,
},
{
name: "push action with resource missing name",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Resources: []provisioning.ResourceRef{
{
Kind: "Dashboard",
Group: "dashboard.grafana.app",
},
},
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.push.resources[0].name")
require.Contains(t, err.Error(), "Required value")
},
},
{
name: "push action with resource missing kind",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Resources: []provisioning.ResourceRef{
{
Name: "dashboard-1",
Group: "dashboard.grafana.app",
},
},
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.push.resources[0].kind")
require.Contains(t, err.Error(), "Required value")
},
},
{
name: "push action with resource missing group",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Resources: []provisioning.ResourceRef{
{
Name: "dashboard-1",
Kind: "Dashboard",
},
},
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.push.resources[0].group")
require.Contains(t, err.Error(), "Required value")
},
},
{
name: "push action with folder resource by kind",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Resources: []provisioning.ResourceRef{
{
Name: "my-folder",
Kind: "Folder",
Group: "folder.grafana.app",
},
},
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.push.resources[0]")
require.Contains(t, err.Error(), "folders are not supported for export")
},
},
{
name: "push action with folder resource by group",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Resources: []provisioning.ResourceRef{
{
Name: "my-folder",
Kind: "SomeKind",
Group: "folder.grafana.app",
},
},
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.push.resources[0]")
require.Contains(t, err.Error(), "folders are not supported for export")
},
},
{
name: "push action with unsupported resource type",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Resources: []provisioning.ResourceRef{
{
Name: "my-resource",
Kind: "AlertRule",
Group: "alerting.grafana.app",
},
},
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.push.resources[0]")
require.Contains(t, err.Error(), "resource type is not supported for export")
},
},
{
name: "push action with valid folder (old behavior)",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Folder: "my-folder",
Path: "dashboards/",
Message: "Export folder",
},
},
},
wantErr: false,
},
{
name: "push action with multiple resources including invalid ones",
job: &provisioning.Job{
ObjectMeta: metav1.ObjectMeta{
Name: "test-job",
},
Spec: provisioning.JobSpec{
Action: provisioning.JobActionPush,
Repository: "test-repo",
Push: &provisioning.ExportJobOptions{
Resources: []provisioning.ResourceRef{
{
Name: "dashboard-1",
Kind: "Dashboard",
Group: "dashboard.grafana.app",
},
{
Name: "my-folder",
Kind: "Folder",
Group: "folder.grafana.app",
},
{
Name: "dashboard-2",
Kind: "Dashboard",
Group: "dashboard.grafana.app",
},
},
},
},
},
wantErr: true,
validateError: func(t *testing.T, err error) {
require.Contains(t, err.Error(), "spec.push.resources[1]")
require.Contains(t, err.Error(), "folders are not supported for export")
},
},
}
for _, tt := range tests {

View File

@@ -288,18 +288,18 @@ func (r *localRepository) calculateFileHash(path string) (string, int64, error)
return hex.EncodeToString(hasher.Sum(nil)), size, nil
}
func (r *localRepository) Create(ctx context.Context, filepath string, ref string, data []byte, comment string) error {
func (r *localRepository) Create(ctx context.Context, filePath string, ref string, data []byte, comment string) error {
if err := r.validateRequest(ref); err != nil {
return err
}
fpath := safepath.Join(r.path, filepath)
fpath := safepath.Join(r.path, filePath)
_, err := os.Stat(fpath)
if !errors.Is(err, os.ErrNotExist) {
if err != nil {
return apierrors.NewInternalError(fmt.Errorf("failed to check if file exists: %w", err))
}
return apierrors.NewAlreadyExists(schema.GroupResource{}, filepath)
return apierrors.NewAlreadyExists(schema.GroupResource{}, filePath)
}
if safepath.IsDir(fpath) {
@@ -314,7 +314,7 @@ func (r *localRepository) Create(ctx context.Context, filepath string, ref strin
return nil
}
if err := os.MkdirAll(path.Dir(fpath), 0700); err != nil {
if err := os.MkdirAll(filepath.Dir(fpath), 0700); err != nil {
return apierrors.NewInternalError(fmt.Errorf("failed to create path: %w", err))
}
@@ -352,7 +352,7 @@ func (r *localRepository) Write(ctx context.Context, fpath, ref string, data []b
return os.MkdirAll(fpath, 0700)
}
if err := os.MkdirAll(path.Dir(fpath), 0700); err != nil {
if err := os.MkdirAll(filepath.Dir(fpath), 0700); err != nil {
return apierrors.NewInternalError(fmt.Errorf("failed to create path: %w", err))
}

View File

@@ -75,9 +75,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": true,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -152,9 +152,9 @@
"effects": {
"barGlow": false,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -229,9 +229,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -306,9 +306,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -383,9 +383,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -460,9 +460,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": false,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -537,9 +537,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": false,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -627,9 +627,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -704,9 +704,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -781,9 +781,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -858,9 +858,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": false,
"rounded": true,
"spotlight": true,
"gradient": false
"spotlight": true
},
"orientation": "auto",
"reduceOptions": {
@@ -952,9 +952,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1029,9 +1029,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1106,9 +1106,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false,
"gradient": true
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1183,9 +1183,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1260,9 +1260,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": false,
"rounded": false,
"spotlight": false,
"gradient": false
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1354,9 +1354,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false,
"gradient": true
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1435,9 +1435,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false,
"gradient": true
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1516,9 +1516,9 @@
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false,
"gradient": true
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
@@ -1565,7 +1565,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -1606,9 +1605,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"orientation": "auto",
@@ -1631,7 +1630,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"hide": false,
"max": 98,
"min": 5,
"noise": 22,
@@ -1649,7 +1647,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -1690,9 +1687,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"orientation": "auto",
@@ -1715,7 +1712,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"hide": false,
"max": 98,
"min": 5,
"noise": 22,
@@ -1746,7 +1742,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -1788,9 +1783,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"orientation": "auto",
@@ -1813,7 +1808,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"hide": false,
"max": 8,
"min": 1,
"noise": 2,
@@ -1831,7 +1825,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -1873,9 +1866,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"orientation": "auto",
@@ -1898,7 +1891,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"hide": false,
"max": 12,
"min": 1,
"noise": 2,
@@ -1916,7 +1908,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -1957,9 +1948,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"orientation": "auto",
@@ -1982,7 +1973,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"hide": false,
"max": 100,
"min": 10,
"noise": 22,
@@ -2000,7 +1990,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
@@ -2041,9 +2030,9 @@
"effects": {
"barGlow": true,
"centerGlow": true,
"gradient": true,
"rounded": true,
"spotlight": true,
"gradient": true
"spotlight": true
},
"glow": "both",
"orientation": "auto",
@@ -2066,7 +2055,6 @@
"datasource": {
"type": "grafana-testdata-datasource"
},
"hide": false,
"max": 100,
"min": 10,
"noise": 22,
@@ -2079,6 +2067,147 @@
],
"title": "Backend",
"type": "radialbar"
},
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 66
},
"id": 35,
"panels": [],
"title": "Empty data",
"type": "row"
},
{
"datasource": {
"type": "grafana-testdata-datasource"
},
"fieldConfig": {
"defaults": {
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": 0
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 6,
"x": 0,
"y": 67
},
"id": 36,
"options": {
"barWidthFactor": 0.5,
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"segmentCount": 1,
"segmentSpacing": 0.3,
"shape": "gauge",
"showThresholdLabels": false,
"showThresholdMarkers": true,
"sparkline": true
},
"pluginVersion": "13.0.0-pre",
"targets": [
{
"refId": "A",
"scenarioId": "random_walk",
"seriesCount": 0
}
],
"title": "Numeric, no series",
"type": "gauge"
},
{
"datasource": {
"type": "grafana-testdata-datasource"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": 0
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 8,
"w": 6,
"x": 6,
"y": 67
},
"id": 37,
"options": {
"barWidthFactor": 0.5,
"effects": {
"barGlow": false,
"centerGlow": false,
"gradient": true,
"rounded": false,
"spotlight": false
},
"orientation": "auto",
"reduceOptions": {
"calcs": ["lastNotNull"],
"fields": "",
"values": false
},
"segmentCount": 1,
"segmentSpacing": 0.3,
"shape": "gauge",
"showThresholdLabels": false,
"showThresholdMarkers": true,
"sparkline": true
},
"pluginVersion": "13.0.0-pre",
"targets": [
{
"refId": "A",
"scenarioId": "logs"
}
],
"title": "Non-numeric",
"type": "gauge"
}
],
"preload": false,
@@ -2095,5 +2224,5 @@
"timezone": "browser",
"title": "Panel tests - Gauge (new)",
"uid": "panel-tests-gauge-new",
"version": 6
"version": 9
}

View File

@@ -83,6 +83,12 @@ tree:
nodeType: leaf
linkId: test-case-2
linkType: scope
test-case-redirect:
title: Test case with redirect
nodeType: leaf
linkId: shoe-org
linkType: scope
redirectPath: /d/dcb9f5e9-8066-4397-889e-864b99555dbb #Reliability dashboard
clusters:
title: Clusters
nodeType: container

View File

@@ -67,10 +67,12 @@ type ScopeFilterConfig struct {
type TreeNode struct {
Title string `yaml:"title"`
SubTitle string `yaml:"subTitle,omitempty"`
Description string `yaml:"description,omitempty"`
NodeType string `yaml:"nodeType"`
LinkID string `yaml:"linkId,omitempty"`
LinkType string `yaml:"linkType,omitempty"`
DisableMultiSelect bool `yaml:"disableMultiSelect,omitempty"`
RedirectPath string `yaml:"redirectPath,omitempty"`
Children map[string]TreeNode `yaml:"children,omitempty"`
}
@@ -259,6 +261,7 @@ func (c *Client) createScopeNode(name string, node TreeNode, parentName string)
spec := v0alpha1.ScopeNodeSpec{
Title: node.Title,
SubTitle: node.SubTitle,
Description: node.Description,
NodeType: nodeType,
DisableMultiSelect: node.DisableMultiSelect,
}
@@ -272,6 +275,10 @@ func (c *Client) createScopeNode(name string, node TreeNode, parentName string)
spec.LinkType = linkType
}
if node.RedirectPath != "" {
spec.RedirectPath = node.RedirectPath
}
resource := v0alpha1.ScopeNode{
TypeMeta: metav1.TypeMeta{
APIVersion: apiVersion,

View File

@@ -7,8 +7,8 @@ MAKEFLAGS += --no-builtin-rule
include docs.mk
.PHONY: sources/panels-visualizations/query-transform-data/transform-data/index.md
sources/panels-visualizations/query-transform-data/transform-data/index.md: ## Generate the Transform Data page source.
.PHONY: sources/visualizations/panels-visualizations/query-transform-data/transform-data/index.md
sources/visualizations/panels-visualizations/query-transform-data/transform-data/index.md: ## Generate the Transform Data page source.
cd $(CURDIR)/.. && \
npx tsx ./scripts/docs/generate-transformations.ts && \
npx prettier -w $(CURDIR)/$@

View File

@@ -59,9 +59,9 @@ For more details on contact points, including how to test them and enable notifi
## Alertmanager settings
| Option | Description |
| ------ | ---------------------------------------------------------------------------------------------------------------------------------- |
| URL | The Alertmanager URL. This field is [protected](ref:configure-contact-points#protected-fields) from modification in Grafana Cloud. |
| Option | Description |
| ------ | ----------------------------------------------------------------------------------------------------------------- |
| URL | The Alertmanager URL. This field is [protected](ref:configure-contact-points) from modification in Grafana Cloud. |
#### Optional settings

View File

@@ -49,14 +49,14 @@ For more details on contact points, including how to test them and enable notifi
### Required Settings
| Key | Description |
| ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| URL | The URL of the REST API of your Jira instance. Supported versions: `2` and `3` (e.g., `https://your-domain.atlassian.net/rest/api/3`). This field is [protected](ref:configure-contact-points#protected-fields) from modification in Grafana Cloud. |
| Basic Auth User | Username for authentication. For Jira Cloud, use your email address. |
| Basic Auth Password | Password or personal token. For Jira Cloud, you need to obtain a personal token [here](https://id.atlassian.com/manage-profile/security/api-tokens) and use it as the password. |
| API Token | An alternative to basic authentication, a bearer token is used to authorize the API requests. See [Jira documentation](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html) for more information. |
| Project Key | The project key identifying the project where issues will be created. Project keys are unique identifiers for a project. |
| Issue Type | The type of issue to create (e.g., `Task`, `Bug`, `Incident`). Make sure that you specify a type that is available in your project. |
| Key | Description |
| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| URL | The URL of the REST API of your Jira instance. Supported versions: `2` and `3` (e.g., `https://your-domain.atlassian.net/rest/api/3`). This field is [protected](ref:configure-contact-points) from modification in Grafana Cloud. |
| Basic Auth User | Username for authentication. For Jira Cloud, use your email address. |
| Basic Auth Password | Password or personal token. For Jira Cloud, you need to obtain a personal token [here](https://id.atlassian.com/manage-profile/security/api-tokens) and use it as the password. |
| API Token | An alternative to basic authentication, a bearer token is used to authorize the API requests. See [Jira documentation](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html) for more information. |
| Project Key | The project key identifying the project where issues will be created. Project keys are unique identifiers for a project. |
| Issue Type | The type of issue to create (e.g., `Task`, `Bug`, `Incident`). Make sure that you specify a type that is available in your project. |
### Optional Settings

View File

@@ -54,10 +54,10 @@ For more details on contact points, including how to test them and enable notifi
### Required Settings
| Option | Description |
| ---------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| Broker URL | The URL of the MQTT broker. This field is [protected](ref:configure-contact-points#protected-fields) from modification in Grafana Cloud. |
| Topic | The topic to which the message will be sent. |
| Option | Description |
| ---------- | ----------------------------------------------------------------------------------------------------------------------- |
| Broker URL | The URL of the MQTT broker. This field is [protected](ref:configure-contact-points) from modification in Grafana Cloud. |
| Topic | The topic to which the message will be sent. |
### Optional Settings

View File

@@ -51,8 +51,8 @@ You can customize the `title` and `body` of the Slack message using [notificatio
If you are using a Slack API Token, complete the following steps.
1. Follow steps 1 and 2 of the [Slack API Quickstart](https://api.slack.com/start/quickstart).
1. Add the [chat:write.public](https://api.slack.com/scopes/chat:write.public) scope to give your app the ability to post in all public channels without joining.
1. Follow step 1 of the [Slack API Quickstart](https://docs.slack.dev/app-management/quickstart-app-settings/#creating) to create the app.
1. Continue onto the second step of the [Slack API Quickstart](https://docs.slack.dev/app-management/quickstart-app-settings/#scopes) and add the [chat:write.public](https://api.slack.com/scopes/chat:write.public) scope as described to give your app the ability to post in all public channels without joining.
1. In OAuth Tokens for Your Workspace, copy the Bot User OAuth Token.
1. Open your Slack workplace.
1. Right click the channel you want to receive notifications in.

View File

@@ -62,9 +62,9 @@ For more details on contact points, including how to test them and enable notifi
## Webhook settings
| Option | Description |
| ------ | ----------------------------------------------------------------------------------------------------------------------------- |
| URL | The Webhook URL. This field is [protected](ref:configure-contact-points#protected-fields) from modification in Grafana Cloud. |
| Option | Description |
| ------ | ------------------------------------------------------------------------------------------------------------ |
| URL | The Webhook URL. This field is [protected](ref:configure-contact-points) from modification in Grafana Cloud. |
#### Optional settings

View File

@@ -81,7 +81,7 @@ Replace the placeholders with your values:
In your `grafana` directory, create a sub-folder called `dashboards`.
This guide shows you how to creates three separate dashboards. For all dashboard configurations, replace the placeholders with your values:
This guide shows you how to create three separate dashboards. For all dashboard configurations, replace the placeholders with your values:
- _`<GRAFANA_CLOUD_STACK_NAME>`_: Name of your Grafana Cloud Stack
- _`<GRAFANA_OPERATOR_NAMESPACE>`_: Namespace where the `grafana-operator` is deployed in your Kubernetes cluster

View File

@@ -54,7 +54,7 @@ For production systems, use the `folderFromFilesStructure` capability instead of
## Before you begin
{{< admonition type="note" >}}
Enable the `provisioning` and `kubernetesDashboards` feature toggles in Grafana to use this feature.
Enable the `provisioning` feature toggle in Grafana to use this feature.
{{< /admonition >}}
To set up file provisioning, you need:
@@ -67,7 +67,7 @@ To set up file provisioning, you need:
## Enable required feature toggles and configure permitted paths
To activate local file provisioning in Grafana, you need to enable the `provisioning` and `kubernetesDashboards` feature toggles.
To activate local file provisioning in Grafana, you need to enable the `provisioning` feature toggle.
For additional information about feature toggles, refer to [Configure feature toggles](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/feature-toggles).
The local path setting must be relative, and its base path must be listed in the `permitted_provisioning_paths` configuration option.
@@ -82,12 +82,11 @@ Any subdirectories are automatically included.
The values that you enter for the `permitted_provisioning_paths` become the base paths for those entered when you enter a local path in the **Connect to local storage** wizard.
1. Open your Grafana configuration file, either `grafana.ini` or `custom.ini`. For file location based on operating system, refer to [Configuration file location](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/feature-toggles/#experimental-feature-toggles).
1. Locate or add a `[feature_toggles]` section. Add these values:
1. Locate or add a `[feature_toggles]` section. Add this value:
```ini
[feature_toggles]
provisioning = true
kubernetesDashboards = true ; use k8s from browser
```
1. Locate or add a `[paths]` section. To add more than one location, use the pipe character (`|`) to separate the paths. The list should not include empty paths or trailing pipes. Add these values:
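The concrete `[paths]` values are elided in this hunk; as an illustrative sketch only (key name taken from the surrounding docs, directory paths hypothetical), a pipe-separated list might look like:

```ini
[paths]
permitted_provisioning_paths = conf/provisioning-dashboards|conf/extra-dashboards
```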

View File

@@ -0,0 +1,147 @@
---
title: Git Sync deployment scenarios
menuTitle: Deployment scenarios
description: Learn about common Git Sync deployment patterns and configurations for different organizational needs
weight: 450
keywords:
- git sync
- deployment patterns
- scenarios
- multi-environment
- teams
---
# Git Sync deployment scenarios
This guide shows practical deployment scenarios for Grafana's Git Sync. Learn how to configure bidirectional synchronization between Grafana and Git repositories for teams, environments, and regions.
{{< admonition type="caution" >}}
Git Sync is an experimental feature. It reflects Grafana's approach to Observability as Code and might include limitations or breaking changes. For current status and known limitations, refer to the [Git Sync introduction](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/intro-git-sync/).
{{< /admonition >}}
## Understand the relationship between key Git Sync components
Before you explore the scenarios, understand how the key Git Sync components relate:
- [Grafana instance](#grafana-instance)
- [Git repository structure](#git-repository-structure)
- [Git Sync repository resource](#git-sync-repository-resource)
### Grafana instance
A Grafana instance is a running Grafana server. Multiple instances can:
- Connect to the same Git repository using different Repository configurations.
- Sync from different branches of the same repository.
- Sync from different paths within the same repository.
- Sync from different repositories.
### Git repository structure
You can organize your Git repository in several ways:
- Single branch, multiple paths: Use different directories for different purposes (for example, `dev/`, `prod/`, `team-a/`).
- Multiple branches: Use different branches for different environments or teams (for example, `main`, `develop`, `team-a`).
- Multiple repositories: Use separate repositories for different teams or environments.
### Git Sync repository resource
A repository resource is a Grafana configuration object that defines:
- Which Git repository to sync with.
- Which branch to use.
- Which directory path to synchronize.
- Sync behavior and workflows.
Each repository resource creates bidirectional synchronization between a Grafana instance and a specific location in Git.
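For orientation, a repository resource can be pictured as a small configuration object carrying those four pieces of information. The field names below are illustrative assumptions, not the exact schema; consult the Git Sync reference for the real resource definition:

```yaml
apiVersion: provisioning.grafana.app/v0alpha1
kind: Repository
metadata:
  name: grafana-manifests
spec:
  type: github
  github:
    url: https://github.com/your-org/grafana-manifests  # which repository
    branch: main                                        # which branch
    path: team-platform/grafana/                        # which directory
  sync:
    enabled: true            # sync behavior (interval, workflows) goes here
    intervalSeconds: 60
```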
## How does repository sync behave?
With Git Sync you configure a repository resource to sync with your Grafana instance:
1. Grafana monitors the specified Git location (repository, branch, and path).
2. Grafana creates a folder in Dashboards (typically named after the repository).
3. Grafana creates dashboards from dashboard JSON files in Git within this folder.
4. Grafana commits dashboard changes made in the UI back to Git.
5. Grafana pulls dashboard changes made in Git and updates dashboards in the UI.
6. Synchronization occurs at regular intervals (configurable), or instantly if you use webhooks.
You can find the provisioned dashboards organized in folders under **Dashboards**.
## Example: Relationship between repository, branch, and path
Here's a concrete example showing how the three parameters work together:
**Configuration:**
- **Repository**: `your-org/grafana-manifests`
- **Branch**: `main`
- **Path**: `team-platform/grafana/`
**In Git (on branch `main`):**
```
your-org/grafana-manifests/
├── .git/
├── README.md
├── team-platform/
│ └── grafana/
│ ├── cpu-metrics.json ← Synced
│ ├── memory-usage.json ← Synced
│ └── disk-io.json ← Synced
├── team-data/
│ └── grafana/
│ └── pipeline-stats.json ← Not synced (different path)
└── other-files.txt ← Not synced (outside path)
```
**In Grafana Dashboards view:**
```
Dashboards
└── 📁 grafana-manifests/
├── CPU Metrics Dashboard
├── Memory Usage Dashboard
└── Disk I/O Dashboard
```
**Key points:**
- Grafana only synchronizes files within the specified path (`team-platform/grafana/`).
- Grafana ignores files in other paths or at the repository root.
- The folder name in Grafana comes from the repository name.
- Dashboard titles come from the JSON file content, not the filename.
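Because titles come from the file content rather than the filename, a file named `cpu-metrics.json` can surface in Grafana under any title. A minimal dashboard JSON illustrating this (fields trimmed to the essentials; a real dashboard contains many more):

```json
{
  "uid": "cpu-metrics",
  "title": "CPU Metrics Dashboard",
  "panels": [],
  "schemaVersion": 39
}
```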
## Repository configuration flexibility
Git Sync repositories support different combinations of repository URL, branch, and path:
- Different Git repositories: Each environment or team can use its own repository.
- Instance A: `repository: your-org/grafana-prod`.
- Instance B: `repository: your-org/grafana-dev`.
- Different branches: Use separate branches within the same repository.
- Instance A: `repository: your-org/grafana-manifests, branch: main`.
- Instance B: `repository: your-org/grafana-manifests, branch: develop`.
- Different paths: Use different directory paths within the same repository.
- Instance A: `repository: your-org/grafana-manifests, branch: main, path: production/`.
- Instance B: `repository: your-org/grafana-manifests, branch: main, path: development/`.
- Any combination: Mix and match based on your workflow requirements.
## Scenarios
Use these deployment scenarios to plan your Git Sync setup:
- [Single instance](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/git-sync-deployment-scenarios/single-instance/)
- [Git Sync for development and production environments](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/git-sync-deployment-scenarios/dev-prod/)
- [Git Sync with regional replication](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/git-sync-deployment-scenarios/multi-region/)
- [High availability](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/git-sync-deployment-scenarios/high-availability/)
- [Git Sync in a shared Grafana instance](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/git-sync-deployment-scenarios/multi-team/)
## Learn more
Refer to the following documents to learn more:
- [Git Sync introduction](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/intro-git-sync/)
- [Git Sync setup guide](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/git-sync-setup/)
- [Dashboard provisioning](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/administration/provisioning/)
- [Observability as Code](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/)

---
title: Git Sync for development and production environments
menuTitle: Across environments
description: Use separate Grafana instances for development and production with Git-controlled promotion
weight: 20
---
# Git Sync for development and production environments
Use separate Grafana instances for development and production. Each instance syncs with a different Git location so you can test dashboards before they reach production.
## Use it for
- **Staged deployments**: You need to test dashboard changes before production deployment.
- **Change control**: You require approvals before dashboards reach production.
- **Quality assurance**: You verify dashboard functionality in a non-production environment.
- **Risk mitigation**: You minimize the risk of breaking production dashboards.
## Architecture
```
┌────────────────────────────────────────────────────────────┐
│ GitHub Repository │
│ Repository: your-org/grafana-manifests │
│ Branch: main │
│ │
│ grafana-manifests/ │
│ ├── dev/ │
│ │ ├── dashboard-new.json ← Development dashboards │
│ │ └── dashboard-test.json │
│ │ │
│ └── prod/ │
│ ├── dashboard-stable.json ← Production dashboards │
│ └── dashboard-approved.json │
└────────────────────────────────────────────────────────────┘
↕ ↕
Git Sync (dev/) Git Sync (prod/)
↕ ↕
┌─────────────────────┐ ┌─────────────────────┐
│ Dev Grafana │ │ Prod Grafana │
│ │ │ │
│ Repository: │ │ Repository: │
│ - path: dev/ │ │ - path: prod/ │
│ │ │ │
│ Creates folder: │ │ Creates folder: │
│ "grafana-manifests"│ │ "grafana-manifests"│
└─────────────────────┘ └─────────────────────┘
```
## Repository structure
**In Git:**
```
your-org/grafana-manifests
├── dev/
│ ├── dashboard-new.json
│ └── dashboard-test.json
└── prod/
├── dashboard-stable.json
└── dashboard-approved.json
```
**In Grafana Dashboards view:**
**Dev instance:**
```
Dashboards
└── 📁 grafana-manifests/
├── New Dashboard
└── Test Dashboard
```
**Prod instance:**
```
Dashboards
└── 📁 grafana-manifests/
├── Stable Dashboard
└── Approved Dashboard
```
- Both instances create a folder named "grafana-manifests" (from the repository name).
- Each instance only shows dashboards from its configured path (`dev/` or `prod/`).
- Dashboards appear with the titles from their JSON files.
## Configuration parameters
Development:
- Repository: `your-org/grafana-manifests`
- Branch: `main`
- Path: `dev/`
Production:
- Repository: `your-org/grafana-manifests`
- Branch: `main`
- Path: `prod/`
## How it works
1. Developers create and modify dashboards in development.
2. Git Sync commits changes to `dev/`.
3. You review changes in Git.
4. You promote approved dashboards from `dev/` to `prod/`.
5. Production syncs from `prod/`.
6. Production dashboards update.
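The promotion in step 4 is an ordinary Git operation. For example, assuming the repository layout above (file names are illustrative):

```shell
# Promote a reviewed dashboard from dev/ to prod/.
git checkout main && git pull
cp dev/dashboard-new.json prod/dashboard-new.json
git add prod/dashboard-new.json
git commit -m "Promote dashboard-new to production"
git push   # production Grafana picks this up on its next sync
```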
## Alternative: Use branches
Instead of using different paths, you can configure instances to use different branches:
**Development instance:**
- **Repository**: `your-org/grafana-manifests`
- **Branch**: `develop`
- **Path**: `grafana/`
**Production instance:**
- **Repository**: `your-org/grafana-manifests`
- **Branch**: `main`
- **Path**: `grafana/`
With this approach:
- Development changes go to the `develop` branch.
- Use Git merge or pull request workflows to promote changes from `develop` to `main`.
- Production automatically syncs from the `main` branch.
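With branches, promotion becomes a plain merge (or the equivalent pull request):

```shell
# Promote everything on develop into main.
git checkout main && git pull
git merge --no-ff develop -m "Promote dashboards from develop"
git push   # production syncs the main branch
```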
## Alternative: Use separate repositories for stricter isolation
For stricter isolation, use completely separate repositories:
**Development instance:**
- **Repository**: `your-org/grafana-manifests-dev`
- **Branch**: `main`
- **Path**: `grafana/`
**Production instance:**
- **Repository**: `your-org/grafana-manifests-prod`
- **Branch**: `main`
- **Path**: `grafana/`

---
title: Git Sync for high availability environments
menuTitle: High availability
description: Run multiple Grafana instances serving traffic simultaneously, synchronized via Git Sync
weight: 50
---
# Git Sync for high availability environments
## Primary-replica scenario
Use a primary Grafana instance and one or more replicas synchronized with the same Git location to enable failover.
### Use it for
- **Automatic failover**: You need service continuity when the primary instance fails.
- **High availability**: Your organization requires guaranteed dashboard availability.
- **Simple HA setup**: You want high availability without the complexity of active-active.
- **Maintenance windows**: You perform updates while another instance serves traffic.
- **Business continuity**: Dashboard access can't tolerate downtime.
### Architecture
```
┌─────────────────────────────────────────────────────┐
│ GitHub Repository │
│ Repository: your-org/grafana-manifests │
│ Branch: main │
│ │
│ grafana-manifests/ │
│ └── shared/ │
│ ├── dashboard-metrics.json │
│ ├── dashboard-alerts.json │
│ └── dashboard-logs.json │
└─────────────────────────────────────────────────────┘
↕ ↕
Git Sync (shared/) Git Sync (shared/)
↕ ↕
┌────────────────────┐ ┌────────────────────┐
│ Master Grafana │ │ Replica Grafana │
│ (Active) │ │ (Standby) │
│ │ │ │
│ Repository: │ │ Repository: │
│ - path: shared/ │ │ - path: shared/ │
└────────────────────┘ └────────────────────┘
│ │
└───────────┬───────────────────┘
┌──────────────────────┐
│ Reverse Proxy │
│ (Failover) │
└──────────────────────┘
```
### Repository structure
**In Git:**
```
your-org/grafana-manifests
└── shared/
├── dashboard-metrics.json
├── dashboard-alerts.json
└── dashboard-logs.json
```
**In Grafana Dashboards view (both instances):**
```
Dashboards
└── 📁 grafana-manifests/
├── Metrics Dashboard
├── Alerts Dashboard
└── Logs Dashboard
```
- Primary and replica instances show an identical folder structure.
- Both sync from the same `shared/` path.
- The reverse proxy routes traffic to the primary (active) instance.
- If the primary fails, the proxy automatically fails over to the replica (standby).
- Users see the same dashboards regardless of which instance is serving traffic.
### Configuration parameters
Both the primary and replica instances use identical parameters:
**Primary instance:**
- **Repository**: `your-org/grafana-manifests`
- **Branch**: `main`
- **Path**: `shared/`
**Replica instance:**
- **Repository**: `your-org/grafana-manifests`
- **Branch**: `main`
- **Path**: `shared/`
### How it works
1. Both instances stay synchronized through Git.
2. Reverse proxy routes traffic to primary.
3. Users edit on primary. Git Sync commits changes.
4. Both instances pull latest changes to keep replica in sync.
5. On primary failure, proxy fails over to replica.
### Failover considerations
- Health checks and monitoring.
- Continuous syncing to minimize data loss.
- Plan failback (automatic or manual).
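The failover itself happens in the proxy, not in Grafana. As one possible sketch, an NGINX upstream using the `backup` directive (hostnames and ports are placeholders):

```nginx
# Route traffic to the primary; fail over to the replica when it is down.
upstream grafana {
    server grafana-primary.example.com:3000 max_fails=3 fail_timeout=10s;
    server grafana-replica.example.com:3000 backup;  # used only on failover
}

server {
    listen 80;
    location / {
        proxy_pass http://grafana;
        proxy_set_header Host $host;
    }
}
```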
## Load balancer scenario
Run multiple active Grafana instances behind a load balancer. All instances sync from the same Git location.
### Use it for
- **High traffic**: Your deployment needs to handle significant user load.
- **Load distribution**: You want to distribute user requests across instances.
- **Maximum availability**: You need service continuity during maintenance or failures.
- **Scalability**: You want to add instances as load increases.
- **Performance**: Users need fast response times under heavy load.
### Architecture
```
┌─────────────────────────────────────────────────────┐
│ GitHub Repository │
│ Repository: your-org/grafana-manifests │
│ Branch: main │
│ │
│ grafana-manifests/ │
│ └── shared/ │
│ ├── dashboard-metrics.json │
│ ├── dashboard-alerts.json │
│ └── dashboard-logs.json │
└─────────────────────────────────────────────────────┘
↕ ↕
Git Sync (shared/) Git Sync (shared/)
↕ ↕
┌────────────────────┐ ┌────────────────────┐
│ Grafana Instance 1│ │ Grafana Instance 2│
│ (Active) │ │ (Active) │
│ │ │ │
│ Repository: │ │ Repository: │
│ - path: shared/ │ │ - path: shared/ │
└────────────────────┘ └────────────────────┘
│ │
└───────────┬───────────────────┘
┌──────────────────────┐
│ Load Balancer │
│ (Round Robin) │
└──────────────────────┘
```
### Repository structure
**In Git:**
```
your-org/grafana-manifests
└── shared/
├── dashboard-metrics.json
├── dashboard-alerts.json
└── dashboard-logs.json
```
**In Grafana Dashboards view (all instances):**
```
Dashboards
└── 📁 grafana-manifests/
├── Metrics Dashboard
├── Alerts Dashboard
└── Logs Dashboard
```
- All instances show identical folder structure.
- All instances sync from the same `shared/` path.
- Load balancer distributes requests across all active instances.
- Any instance can serve read requests.
- Any instance can accept dashboard modifications.
- Changes propagate to all instances through Git.
### Configuration parameters
All instances use identical parameters:
**Instance 1:**
- **Repository**: `your-org/grafana-manifests`
- **Branch**: `main`
- **Path**: `shared/`
**Instance 2:**
- **Repository**: `your-org/grafana-manifests`
- **Branch**: `main`
- **Path**: `shared/`
### How it works
1. All instances stay synchronized through Git.
2. Load balancer distributes incoming traffic across all active instances.
3. Users can view dashboards from any instance.
4. When a user modifies a dashboard on any instance, Git Sync commits the change.
5. All other instances pull the updated dashboard during their next sync cycle, or instantly if webhooks are configured.
6. If one instance fails, load balancer stops routing traffic to it and remaining instances continue serving.
### Important considerations
- **Eventually consistent**: Due to sync intervals, instances may briefly have different dashboard versions.
- **Concurrent edits**: Multiple users editing the same dashboard on different instances can cause conflicts.
- **Database sharing**: Instances should share the same backend database for user sessions, preferences, and annotations.
- **Stateless design**: Design for stateless operation where possible to maximize load balancing effectiveness.
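Session and preference sharing comes from Grafana's own configuration, not from Git Sync. A sketch of pointing all instances at one database in `grafana.ini` (connection details are placeholders):

```ini
[database]
type = mysql
host = shared-db.example.com:3306
name = grafana
user = grafana
; read the password from an environment variable
password = $__env{GF_DATABASE_PASSWORD}
```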

---
title: Git Sync with regional replication
menuTitle: Regional replication
description: Synchronize multiple regional Grafana instances from a shared Git location
weight: 30
---
# Git Sync with regional replication
Deploy multiple Grafana instances across regions. Synchronize them with the same Git location to ensure consistent dashboards everywhere.
## Use it for
- **Geographic distribution**: You deploy Grafana close to users in different regions.
- **Latency reduction**: Users need fast dashboard access from their location.
- **Data sovereignty**: You keep dashboard data in specific regions.
- **High availability**: You need dashboard availability across regions.
- **Consistent experience**: All users see the same dashboards regardless of region.
## Architecture
```
┌─────────────────────────────────────────────────────┐
│ GitHub Repository │
│ Repository: your-org/grafana-manifests │
│ Branch: main │
│ │
│ grafana-manifests/ │
│ └── shared/ │
│ ├── dashboard-global.json │
│ ├── dashboard-metrics.json │
│ └── dashboard-logs.json │
└─────────────────────────────────────────────────────┘
↕ ↕
Git Sync (shared/) Git Sync (shared/)
↕ ↕
┌────────────────────┐ ┌────────────────────┐
│ US Region │ │ EU Region │
│ Grafana │ │ Grafana │
│ │ │ │
│ Repository: │ │ Repository: │
│ - path: shared/ │ │ - path: shared/ │
└────────────────────┘ └────────────────────┘
```
## Repository structure
**In Git:**
```
your-org/grafana-manifests
└── shared/
├── dashboard-global.json
├── dashboard-metrics.json
└── dashboard-logs.json
```
**In Grafana Dashboards view (all regions):**
```
Dashboards
└── 📁 grafana-manifests/
├── Global Dashboard
├── Metrics Dashboard
└── Logs Dashboard
```
- All regional instances (US, EU, and so on) show an identical folder structure.
- The same folder name "grafana-manifests" appears in every region.
- The same dashboards synced from the `shared/` path appear everywhere.
- Users in any region see the exact same dashboards with the same titles.
## Configuration parameters
All regions:
- Repository: `your-org/grafana-manifests`
- Branch: `main`
- Path: `shared/`
## How it works
1. All regional instances pull dashboards from `shared/`.
2. When users in any region change a dashboard, Git Sync commits the change to Git.
3. Other regions pull updates during the next sync (or via webhooks).
4. Changes propagate across all regions within the configured sync interval.
## Considerations
- **Write conflicts**: If users in different regions modify the same dashboard simultaneously, Git uses last-write-wins.
- **Primary region**: Consider designating one region as the primary location for making dashboard changes.
- **Propagation time**: Changes propagate to all regions within the configured sync interval, or instantly if webhooks are configured.
- **Network reliability**: Ensure all regions have reliable connectivity to the Git repository.

---
title: Multiple team Git Sync
menuTitle: Shared instance
description: Use multiple Git repositories with one Grafana instance, one repository per team
weight: 60
---
# Git Sync in a Grafana instance shared by multiple teams
Use a single Grafana instance with multiple Repository resources, one per team. Each team manages its own dashboards while sharing Grafana.
## Use it for
- **Team autonomy**: Different teams manage their own dashboards independently.
- **Organizational structure**: Dashboard organization aligns with team structure.
- **Resource efficiency**: Multiple teams share Grafana infrastructure.
- **Cost optimization**: You reduce infrastructure costs while maintaining team separation.
- **Collaboration**: Teams can view each other's dashboards while managing their own.
## Architecture
```
┌─────────────────────────┐ ┌─────────────────────────┐
│ Platform Team Repo │ │ Data Team Repo │
│ platform-dashboards │ │ data-dashboards │
│ │ │ │
│ platform-dashboards/ │ │ data-dashboards/ │
│ └── grafana/ │ │ └── grafana/ │
│ ├── k8s.json │ │ ├── pipeline.json │
│ └── infra.json │ │ └── analytics.json │
└─────────────────────────┘ └─────────────────────────┘
↕ ↕
Git Sync (grafana/) Git Sync (grafana/)
↕ ↕
┌──────────────────────────────────────┐
│ Grafana Instance │
│ │
│ Repository 1: │
│ - repo: platform-dashboards │
│ → Creates "platform-dashboards" │
│ │
│ Repository 2: │
│ - repo: data-dashboards │
│ → Creates "data-dashboards" │
└──────────────────────────────────────┘
```
## Repository structure
**In Git (separate repositories):**
**Platform team repository:**
```
your-org/platform-dashboards
└── grafana/
├── dashboard-k8s.json
└── dashboard-infra.json
```
**Data team repository:**
```
your-org/data-dashboards
└── grafana/
├── dashboard-pipeline.json
└── dashboard-analytics.json
```
**In Grafana Dashboards view:**
```
Dashboards
├── 📁 platform-dashboards/
│ ├── Kubernetes Dashboard
│ └── Infrastructure Dashboard
└── 📁 data-dashboards/
├── Pipeline Dashboard
└── Analytics Dashboard
```
- Two separate folders created (one per Repository resource).
- Folder names derived from repository names.
- Each team has complete control over their own repository.
- Teams can independently manage permissions, branches, and workflows in their repos.
- All teams can view each other's dashboards in Grafana but manage only their own.
## Configuration parameters
**Platform team repository:**
- **Repository**: `your-org/platform-dashboards`
- **Branch**: `main`
- **Path**: `grafana/`
**Data team repository:**
- **Repository**: `your-org/data-dashboards`
- **Branch**: `main`
- **Path**: `grafana/`
## How it works
1. Each team has their own Git repository for complete autonomy.
2. Each repository resource in Grafana creates a separate folder.
3. Platform team dashboards sync from `your-org/platform-dashboards` repository.
4. Data team dashboards sync from `your-org/data-dashboards` repository.
5. Teams can independently manage their repository settings, access controls, and workflows.
6. All teams can view each other's dashboards in Grafana but edit only their own.
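Expressed as configuration, the two connections above are simply two repository resources in the same instance. The sketch below is illustrative only; the field names are assumptions and the exact schema depends on your Grafana version:

```yaml
# Two repository resources, one per team (illustrative field names).
- kind: Repository
  metadata:
    name: platform-dashboards
  spec:
    type: github
    github:
      url: https://github.com/your-org/platform-dashboards
      branch: main
      path: grafana/
- kind: Repository
  metadata:
    name: data-dashboards
  spec:
    type: github
    github:
      url: https://github.com/your-org/data-dashboards
      branch: main
      path: grafana/
```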
## Scale to more teams
Adding more teams is straightforward. For a third team, create a new repository and configure:
- **Repository**: `your-org/security-dashboards`
- **Branch**: `main`
- **Path**: `grafana/`
This creates a new "security-dashboards" folder in the same Grafana instance.
## Alternative: Shared repository with different paths
For teams that prefer sharing a single repository, use different paths to separate team dashboards:
**In Git:**
```
your-org/grafana-manifests
├── team-platform/
│ ├── dashboard-k8s.json
│ └── dashboard-infra.json
└── team-data/
├── dashboard-pipeline.json
└── dashboard-analytics.json
```
**Configuration:**
**Platform team:**
- **Repository**: `your-org/grafana-manifests`
- **Branch**: `main`
- **Path**: `team-platform/`
**Data team:**
- **Repository**: `your-org/grafana-manifests`
- **Branch**: `main`
- **Path**: `team-data/`
This approach provides simpler repository management but less isolation between teams.
## Alternative: Different branches per team
For teams wanting their own branch in a shared repository:
**Platform team:**
- **Repository**: `your-org/grafana-manifests`
- **Branch**: `team-platform`
- **Path**: `grafana/`
**Data team:**
- **Repository**: `your-org/grafana-manifests`
- **Branch**: `team-data`
- **Path**: `grafana/`
This allows teams to use Git branch workflows for collaboration while sharing the same repository.

---
title: Single instance Git Sync
menuTitle: Single instance
description: Synchronize a single Grafana instance with a Git repository
weight: 10
---
# Single instance Git Sync
Use a single Grafana instance synchronized with a Git repository. This is the foundation for Git Sync and helps you understand bidirectional synchronization.
## Use it for
- **Getting started**: You want to learn how Git Sync works before implementing complex scenarios.
- **Personal projects**: Individual developers manage their own dashboards.
- **Small teams**: You have a simple setup without multiple environments or complex workflows.
- **Development environments**: You need quick prototyping and testing.
## Architecture
```
┌─────────────────────────────────────────────────────┐
│ GitHub Repository │
│ Repository: your-org/grafana-manifests │
│ Branch: main │
│ │
│ grafana-manifests/ │
│ └── grafana/ │
│ ├── dashboard-1.json │
│ ├── dashboard-2.json │
│ └── dashboard-3.json │
└─────────────────────────────────────────────────────┘
Git Sync (bidirectional)
┌─────────────────────────────┐
│ Grafana Instance │
│ │
│ Repository Resource: │
│ - url: grafana-manifests │
│ - branch: main │
│ - path: grafana/ │
│ │
│ Creates folder: │
│ "grafana-manifests" │
└─────────────────────────────┘
```
## Repository structure
**In Git:**
```
your-org/grafana-manifests
└── grafana/
├── dashboard-1.json
├── dashboard-2.json
└── dashboard-3.json
```
**In Grafana Dashboards view:**
```
Dashboards
└── 📁 grafana-manifests/
├── Dashboard 1
├── Dashboard 2
└── Dashboard 3
```
- A folder named "grafana-manifests" (from repository name) contains all synced dashboards.
- Each JSON file becomes a dashboard with its title displayed in the folder.
- Users browse dashboards organized under this folder structure.
## Configuration parameters
Configure your Grafana instance to synchronize with:
- **Repository**: `your-org/grafana-manifests`
- **Branch**: `main`
- **Path**: `grafana/`
## How it works
1. **From Grafana to Git**: When users create or modify dashboards in Grafana, Git Sync commits changes to the `grafana/` directory on the `main` branch.
2. **From Git to Grafana**: When dashboard JSON files are added or modified in the `grafana/` directory, Git Sync pulls these changes into Grafana.

You can sign up to the private preview using the [Git Sync early access form](ht
{{< /admonition >}}
Git Sync lets you manage Grafana dashboards as code by storing dashboard JSON files and folders in a remote GitHub repository.
Optionally, you can [extend Git Sync](#extend-git-sync-for-real-time-notification-and-image-rendering) by enabling pull request notifications and image previews of dashboard changes.
| Capability | Benefit | Requires |
| ----------------------------------------------------- | ------------------------------------------------------------------------------- | -------------------------------------- |
| Adds a table summarizing changes to your pull request | Provides a convenient way to save changes back to GitHub. | Webhooks configured |
| Add a dashboard preview image to a PR | View a snapshot of dashboard changes to a pull request without opening Grafana. | Image renderer and webhooks configured |
This guide shows you how to set up Git Sync to synchronize your Grafana dashboards and folders with a GitHub repository. You'll set up Git Sync to enable version-controlled dashboard management either [using the UI](#set-up-git-sync-using-grafana-ui) or [as code](#set-up-git-sync-as-code).
### Known limitations
Refer to [Known limitations](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/intro-git-sync#known-limitations) before using Git Sync.
Refer to [Supported resources](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/intro-git-sync#supported-resources) for details about which resources you can sync.
### Performance considerations
When Git Sync is enabled, the database load might increase, especially for instances with many folders and nested folders. Evaluate the performance impact, if any, in a non-production environment.
Git Sync is under continuous development. [Report any issues](https://grafana.com/help/) you encounter to help us improve Git Sync.
## Set up Git Sync
To set up Git Sync and synchronize with a GitHub repository, follow these steps:
1. [Enable feature toggles in Grafana](#enable-required-feature-toggles) (first time setup)
1. [Create a GitHub access token](#create-a-github-access-token)
1. Set up Git Sync [using the UI](#set-up-git-sync-using-grafana-ui) or [as code](#set-up-git-sync-as-code)
After setup, you can [verify your dashboards](#verify-your-dashboards-in-grafana).
Optionally, you can also [extend Git Sync with webhooks and image rendering](#extend-git-sync-for-real-time-notification-and-image-rendering).
{{< admonition type="note" >}}
Alternatively, you can configure a local file system instead of using GitHub. Refer to [Set up file provisioning](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/file-path-setup/) for more information.
{{< /admonition >}}
### Requirements
To set up Git Sync, you need:
- Administration rights in your Grafana organization.
- Enable the required feature toggles in your Grafana instance. Refer to [Enable required feature toggles](#enable-required-feature-toggles) for instructions.
- A GitHub repository to store your dashboards in.
- If you want to use a local file path, refer to [the local file path guide](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/provision-resources/file-path-setup/).
- A GitHub access token. The Grafana UI will prompt you during setup.
- Optional: A public Grafana instance.
- Optional: The [Image Renderer service](https://github.com/grafana/grafana-image-renderer) to save image previews with your PRs.
## Enable required feature toggles
To activate Git Sync in Grafana, you need to enable the `provisioning` feature toggle. For more information about feature toggles, refer to [Configure feature toggles](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/feature-toggles/#experimental-feature-toggles).
To enable the required feature toggle:
1. Open your Grafana configuration file, either `grafana.ini` or `custom.ini`. For the file location on your operating system, refer to [Configuration file location](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/#configuration-file-location).
1. Locate or add a `[feature_toggles]` section. Add this value:
```ini
[feature_toggles]
provisioning = true
```
1. Save the changes to the file and restart Grafana.
## Create a GitHub access token
Whenever you connect to a GitHub repository, you need to create a GitHub access token with specific repository permissions. Add this token to your Git Sync configuration to enable read and write access between Grafana and your GitHub repository.
To create a GitHub access token:
1. Create a new token using [Create new fine-grained personal access token](https://github.com/settings/personal-access-tokens/new). Refer to [Managing your personal access tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) for instructions.
1. Under **Permissions**, expand **Repository permissions**.
@@ -112,19 +106,23 @@ This token needs to be added to your Git Sync configuration to enable read and w
1. Verify the options and select **Generate token**.
1. Copy the access token. Leave the browser window available with the token until you've completed configuration.
GitHub Apps aren't currently supported.
## Set up Git Sync using Grafana UI
Use **Provisioning** to guide you through setting up Git Sync to use a GitHub repository.
1. [Configure a connection to your GitHub repository](#set-up-the-connection-to-github)
1. [Choose what content to sync with Grafana](#choose-what-to-synchronize)
1. [Choose additional settings](#choose-additional-settings)
### Set up the connection to GitHub
1. Log in to your Grafana server with an account that has the Grafana Admin flag set.
1. Select **Administration** in the left-side menu and then **Provisioning**.
1. Select **Configure Git Sync**.
### Connect to external storage
To connect your GitHub repository:
1. Paste your GitHub personal access token into **Enter your access token**. Refer to [Create a GitHub access token](#create-a-github-access-token) for instructions.
1. Paste the **Repository URL** for your GitHub repository into the text box.
### Choose what to synchronize
In this step, you can decide which elements to synchronize. The available options depend on the status of your Grafana instance:
- If the instance contains resources in an incompatible data format, you'll have to migrate all the data using instance sync. Folder sync won't be supported.
- If there's already another connection using folder sync, instance sync won't be offered.
#### Synchronization limitations
{{< admonition type="caution" >}}
Refer to [Known limitations](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/provision-resources/intro-git-sync#known-limitations) before using Git Sync. Refer to [Supported resources](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/provision-resources/intro-git-sync#supported-resources) for details about which resources you can sync.
{{< /admonition >}}
#### Set up synchronization
To set up synchronization:
- Choose **Sync all resources with external storage** if you want to sync and manage your entire Grafana instance through external storage. With this option, all of your dashboards are synced to that one repository. You can only have one provisioned connection with this selection, and you won't have the option of setting up additional repositories to connect to.
- Choose **Sync external storage to new Grafana folder** to sync external resources into a new folder without affecting the rest of your instance. You can repeat this process for up to 10 connections.
Next, enter a **Display name** for the repository connection.
Finally, you can set up how often your configured storage is polled for updates.
To configure additional settings:
1. For **Update instance interval (seconds)**, enter how often you want the instance to pull updates from GitHub. The default value is 60 seconds.
1. Optional: Select **Read only** to ensure resources can't be modified in Grafana.
1. Optional: If you have the Grafana Image Renderer plugin configured, you can **Enable dashboards previews in pull requests**. If image rendering isn't available, then you can't select this option. For more information, refer to the [Image Renderer service](https://github.com/grafana/grafana-image-renderer).
1. Select **Finish** to proceed.
### Modify your configuration after setup is complete
To update your repository configuration after you've completed setup:
1. Log in to your Grafana server with an account that has the Grafana Admin flag set.
1. Select **Administration** in the left-side menu and then **Provisioning**.
1. Select **Settings** for the repository you wish to modify.
1. Use the **Configure repository** screen to update any of the settings.
1. Select **Save** to preserve the updates.
## Set up Git Sync as code
Alternatively, you can configure Git Sync using `grafanactl`. Because Git Sync configuration is managed as code using Custom Resource Definitions (CRDs), you can create a Repository CRD in a YAML file and use `grafanactl` to push it to Grafana. This approach enables automated, GitOps-style workflows for managing Git Sync configuration instead of using the Grafana UI.
To set up Git Sync with `grafanactl`, follow these steps:
1. [Create the repository CRD](#create-the-repository-crd)
1. [Push the repository CRD to Grafana](#push-the-repository-crd-to-grafana)
1. [Manage repository resources](#manage-repository-resources)
1. [Verify setup](#verify-setup)
For more information, refer to the following documents:
- [grafanactl Documentation](https://grafana.github.io/grafanactl/)
- [Repository CRD Reference](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/git-sync-setup/)
- [Dashboard CRD Format](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/export-resources/)
### Create the repository CRD
Create a `repository.yaml` file defining your Git Sync configuration:
```yaml
apiVersion: provisioning.grafana.app/v0alpha1
kind: Repository
metadata:
  name: <REPOSITORY_NAME>
spec:
  title: <REPOSITORY_TITLE>
  type: github
  github:
    url: <GITHUB_REPO_URL>
    branch: <BRANCH>
    path: grafana/
    generateDashboardPreviews: true
  sync:
    enabled: true
    intervalSeconds: 60
    target: folder
  workflows:
    - write
    - branch
secure:
  token:
    create: <GITHUB_PAT>
```
Replace the placeholders with your values:
- _`<REPOSITORY_NAME>`_: Unique identifier for this repository resource
- _`<REPOSITORY_TITLE>`_: Human-readable name displayed in Grafana UI
- _`<GITHUB_REPO_URL>`_: GitHub repository URL
- _`<BRANCH>`_: Branch to sync
- _`<GITHUB_PAT>`_: GitHub Personal Access Token
{{< admonition type="note" >}}
Only `target: folder` is currently supported for Git Sync.
{{< /admonition >}}
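As a concrete illustration, here's the same CRD with hypothetical values filled in (the organization, repository name, and token are examples only):

```yaml
apiVersion: provisioning.grafana.app/v0alpha1
kind: Repository
metadata:
  name: team-dashboards
spec:
  title: Team Dashboards
  type: github
  github:
    url: https://github.com/example-org/dashboards
    branch: main
    path: grafana/
    generateDashboardPreviews: true
  sync:
    enabled: true
    intervalSeconds: 60
    target: folder
  workflows:
    - write
    - branch
secure:
  token:
    create: ghp_exampleTokenValue
```

Keep the real token out of version control, for example by templating this file from a secrets manager before pushing it with `grafanactl`.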
#### Configuration parameters
The following configuration parameters are available:
| Field | Description |
| --------------------------------------- | ----------------------------------------------------------- |
| `metadata.name` | Unique identifier for this repository resource |
| `spec.title` | Human-readable name displayed in Grafana UI |
| `spec.type` | Repository type (`github`) |
| `spec.github.url` | GitHub repository URL |
| `spec.github.branch` | Branch to sync |
| `spec.github.path` | Directory path containing dashboards |
| `spec.github.generateDashboardPreviews` | Generate preview images (true/false) |
| `spec.sync.enabled` | Enable synchronization (true/false) |
| `spec.sync.intervalSeconds` | Sync interval in seconds |
| `spec.sync.target` | Where to place synced dashboards (`folder`) |
| `spec.workflows` | Enabled workflows: `write` (direct commits), `branch` (PRs) |
| `secure.token.create` | GitHub Personal Access Token |
### Push the repository CRD to Grafana
Before pushing any resources, configure `grafanactl` with your Grafana instance details. Refer to the [grafanactl configuration documentation](https://grafana.github.io/grafanactl/) for setup instructions.
Push the repository configuration:
```sh
grafanactl resources push --path <DIRECTORY>
```
The `--path` parameter must point to the directory containing your `repository.yaml` file.
After pushing, Grafana will:
1. Create the repository resource
1. Connect to your GitHub repository
1. Pull dashboards from the specified path
1. Begin syncing at the configured interval
### Manage repository resources
#### List repositories
To list all repositories:
```sh
grafanactl resources get repositories
```
#### Get repository details
To get details for a specific repository:
```sh
grafanactl resources get repository/<REPOSITORY_NAME>
grafanactl resources get repository/<REPOSITORY_NAME> -o json
grafanactl resources get repository/<REPOSITORY_NAME> -o yaml
```
#### Update the repository
To update a repository:
```sh
grafanactl resources edit repository/<REPOSITORY_NAME>
```
#### Delete the repository
To delete a repository:
```sh
grafanactl resources delete repository/<REPOSITORY_NAME>
```
### Verify setup
Check that Git Sync is working:
```sh
# List repositories
grafanactl resources get repositories
# Check Grafana UI
# Navigate to: Administration → Provisioning → Git Sync
```
## Verify your dashboards in Grafana
To verify that your dashboards are available at the location that you specified, click **Dashboards**. The name of the dashboard is listed in the **Name** column.
Now that your dashboards have been synced from a repository, you can customize the name, change the branch, and create a pull request (PR) for it. Refer to [Manage provisioned repositories with Git Sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/use-git-sync/) for more information.
## Extend Git Sync for real-time notification and image rendering
Optionally, you can extend Git Sync by enabling pull request notifications and image previews of dashboard changes.
| Capability                                        | Benefit                                                      | Requires                               |
| ------------------------------------------------- | ------------------------------------------------------------ | -------------------------------------- |
| Add a table summarizing changes to a pull request | Provides a convenient way to save changes back to GitHub     | Webhooks configured                    |
| Add a dashboard preview image to a pull request   | View a snapshot of dashboard changes without opening Grafana | Image renderer and webhooks configured |
### Set up webhooks for real-time notification and pull request integration
When connecting to a GitHub repository, Git Sync uses webhooks to enable real-time notification and pull request integration.
You can use whichever service or tooling you prefer, such as Cloudflare Tunnels with a Cloudflare-managed domain, port forwarding and DNS, or a tool such as `ngrok`.
To set up webhooks, you need to expose your Grafana instance to the public Internet. You can do this via port forwarding and DNS, a tool such as `ngrok`, or any other method you prefer. The permissions set in your GitHub access token provide the authorization for this communication.
After you have the public URL, you can add it to your Grafana configuration file:
```ini
[server]
root_url = https://<PUBLIC_DOMAIN>
```
Replace _`<PUBLIC_DOMAIN>`_ with your public domain.
To check the configured webhooks, go to **Administration** > **Provisioning** and click the **View** link for your GitHub repository.
#### Expose necessary paths only
If your security setup doesn't permit publicly exposing the Grafana instance, you can either choose to allowlist the GitHub IP addresses, or expose only the necessary paths.
The necessary paths, expressed as a regular expression, are:
- `/apis/provisioning\.grafana\.app/v0(alpha1)?/namespaces/[^/]+/repositories/[^/]+/(webhook|render/.*)$`
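To sanity-check a gateway or proxy rule against this pattern, you can test sample request paths with `grep` (the namespace and repository names below are illustrative):

```shell
# The provisioning path pattern from above
pattern='/apis/provisioning\.grafana\.app/v0(alpha1)?/namespaces/[^/]+/repositories/[^/]+/(webhook|render/.*)$'

# A webhook delivery path matches, so it must be exposed
echo '/apis/provisioning.grafana.app/v0alpha1/namespaces/default/repositories/my-repo/webhook' \
  | grep -qE "$pattern" && echo 'exposed'

# An unrelated Grafana path doesn't match, so it can stay private
echo '/api/dashboards/home' \
  | grep -qE "$pattern" || echo 'not exposed'
```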
### Set up image rendering for dashboard previews
Set up image rendering to add visual previews of dashboard updates directly in pull requests.
To enable this capability, install the Grafana Image Renderer in your Grafana instance. For more information and installation instructions, refer to the [Image Renderer service](https://github.com/grafana/grafana-image-renderer).
## Next steps
You've successfully set up Git Sync to manage your Grafana dashboards through version control. Your dashboards are now synchronized with a GitHub repository, enabling collaborative development and change tracking.
To learn more about using Git Sync:
- [Work with provisioned dashboards](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/provisioned-dashboards/)
- [Manage provisioned repositories with Git Sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/use-git-sync/)
- [Git Sync deployment scenarios](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/git-sync-deployment-scenarios)
- [Export resources](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/export-resources/)
- [grafanactl documentation](https://grafana.github.io/grafanactl/)
## Common use cases
{{< admonition type="note" >}}
Refer to [Git Sync deployment scenarios](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/git-sync-deployment-scenarios) for sample scenarios, including architecture and configuration details.
{{< /admonition >}}
You can use Git Sync for the following use cases:
### Version control and auditing
labels:
- cloud
title: Manage provisioned repositories with Git Sync
menuTitle: Manage repositories with Git Sync
weight: 400
canonical: https://grafana.com/docs/grafana/latest/as-code/observability-as-code/provision-resources/use-git-sync/
aliases:
- ../../../observability-as-code/provision-resources/use-git-sync/ # /docs/grafana/next/observability-as-code/provision-resources/use-git-sync/
The table includes default and other fields:
| Field       | Description |
| ----------- | ----------- |
| targetBlank | bool. If true, the link will be opened in a new tab. Default is `false`. |
| includeVars | bool. If true, includes current template variable values in the link as query params. Default is `false`. |
| keepTime    | bool. If true, includes current time range in the link as query params. Default is `false`. |
| placement?  | string. Use placement to display the link somewhere else on the dashboard other than above the visualizations. Use the `inControlsMenu` parameter to render the link in the dashboard controls dropdown menu. |
<!-- prettier-ignore-end -->
menuTitle: Elasticsearch
title: Elasticsearch data source
weight: 325
refs:
explore:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/
Elasticsearch is a search and analytics engine used for a variety of use cases.
You can create many types of queries to visualize logs or metrics stored in Elasticsearch, and annotate graphs with log events stored in Elasticsearch.
The following resources will help you get started with Elasticsearch and Grafana:
- [What is Elasticsearch?](https://www.elastic.co/guide/en/elasticsearch/reference/current/elasticsearch-intro.html)
- [Configure the Elasticsearch data source](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/configure/)
- [Elasticsearch query editor](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/query-editor/)
- [Elasticsearch template variables](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/template-variables/)
- [Elasticsearch annotations](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/annotations/)
- [Elasticsearch alerting](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/alerting/)
- [Troubleshooting issues with the Elasticsearch data source](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/troubleshooting/)
## Key capabilities
The Elasticsearch data source supports:
- **Metrics queries:** Aggregate and visualize numeric data using bucket and metric aggregations.
- **Log queries:** Search, filter, and explore log data with Lucene query syntax.
- **Annotations:** Overlay Elasticsearch events on your dashboard graphs.
- **Alerting:** Create alerts based on Elasticsearch query results.
## Before you begin
Before you configure the Elasticsearch data source, you need:
- An Elasticsearch instance (v7.17+, v8.x, or v9.x)
- Network access from Grafana to your Elasticsearch server
- Appropriate user credentials or API keys with read access
{{< admonition type="note" >}}
If you use Amazon OpenSearch Service (the successor to Amazon Elasticsearch Service), use the [OpenSearch data source](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/opensearch/) instead.
{{< /admonition >}}
## Supported Elasticsearch versions
This data source supports these versions of Elasticsearch:

- v7.17+
- v8.x
- v9.x
The Grafana maintenance policy for the Elasticsearch data source aligns with [Elastic Product End of Life Dates](https://www.elastic.co/support/eol). Grafana ensures proper functionality for supported versions only. If you use an EOL version of Elasticsearch, you can still run queries, but the query builder displays a warning. Grafana doesn't guarantee functionality or provide fixes for EOL versions.
## Additional resources
Once you have configured the Elasticsearch data source, you can:
- Use [Explore](ref:explore) to run ad-hoc queries against your Elasticsearch data.
- Configure and use [template variables](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/template-variables/) for dynamic dashboards.
- Add [Transformations](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/transform-data/) to process query results.
- [Build dashboards](ref:build-dashboards) to visualize your Elasticsearch data.
## Related data sources
- [OpenSearch](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/opensearch/) - For Amazon OpenSearch Service.
- [Loki](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/loki/) - Grafana's log aggregation system.
---
aliases:
- ../../data-sources/elasticsearch/alerting/
description: Using Grafana Alerting with the Elasticsearch data source
keywords:
- grafana
- elasticsearch
- alerting
- alerts
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Alerting
title: Elasticsearch alerting
weight: 550
refs:
alerting:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/
create-alert-rule:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-grafana-managed-rule/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-grafana-managed-rule/
---
# Elasticsearch alerting
You can use Grafana Alerting with Elasticsearch to create alerts based on your Elasticsearch data. This allows you to monitor metrics, detect anomalies, and receive notifications when specific conditions are met.
For general information about Grafana Alerting, refer to [Grafana Alerting](ref:alerting).
## Before you begin
Before creating alerts with Elasticsearch, ensure you have:
- An Elasticsearch data source configured in Grafana
- Appropriate permissions to create alert rules
- Understanding of the metrics you want to monitor
## Supported query types
Elasticsearch alerting works best with **metrics queries** that return time series data. To create a valid alert query:
- Use a **Date histogram** as the last bucket aggregation (under **Group by**)
- Select appropriate metric aggregations (Count, Average, Sum, Min, Max, etc.)
Queries that return time series data allow Grafana to evaluate values over time and trigger alerts when thresholds are crossed.
### Query types and alerting compatibility
| Query type | Alerting support | Notes |
| ------------------------------ | ---------------- | ----------------------------------------------------------- |
| Metrics with Date histogram | ✅ Full support | Recommended for alerting |
| Metrics without Date histogram | ⚠️ Limited | May not evaluate correctly over time |
| Logs | ❌ Not supported | Use metrics queries instead |
| Raw data | ❌ Not supported | Use metrics queries instead |
| Raw document (deprecated) | ❌ Not supported | Deprecated since Grafana v10.1. Use metrics queries instead |
## Create an alert rule
To create an alert rule using Elasticsearch:
1. Navigate to **Alerting** > **Alert rules**.
1. Click **New alert rule**.
1. Enter a name for the alert rule.
1. Select your **Elasticsearch** data source.
1. Build your query using the query editor:
- Add metric aggregations (for example, Average, Count, Sum)
- Add a Date histogram under **Group by**
- Optionally add filters using Lucene query syntax
1. Configure the alert condition (for example, when the average is above a threshold).
1. Set the evaluation interval and pending period.
1. Configure notifications and labels.
1. Click **Save rule**.
For detailed instructions, refer to [Create a Grafana-managed alert rule](ref:create-alert-rule).
## Example alert queries
The following examples show common alerting scenarios with Elasticsearch.
### Alert on high error count
Monitor the number of error-level log entries:
1. **Query:** `level:error`
1. **Metric:** Count
1. **Group by:** Date histogram (interval: 1m)
1. **Condition:** When count is above 100
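For context, the query this builds roughly corresponds to the following Elasticsearch request body (a sketch; the `@timestamp` field name and aggregation name are assumptions based on a typical index mapping):

```json
{
  "size": 0,
  "query": {
    "query_string": { "query": "level:error" }
  },
  "aggs": {
    "errors_over_time": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "1m"
      }
    }
  }
}
```

Grafana evaluates the per-bucket document counts against the alert condition, in this case a count above 100.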
### Alert on average response time
Monitor API response times:
1. **Query:** `type:api_request`
1. **Metric:** Average on field `response_time`
1. **Group by:** Date histogram (interval: 5m)
1. **Condition:** When average is above 500 (milliseconds)
### Alert on unique user count drop
Detect drops in active users:
1. **Query:** `*` (all documents)
1. **Metric:** Unique count on field `user_id`
1. **Group by:** Date histogram (interval: 1h)
1. **Condition:** When unique count is below 100
## Limitations
When using Elasticsearch with Grafana Alerting, be aware of the following limitations:
### Template variables not supported
Alert queries cannot contain template variables. Grafana evaluates alert rules on the backend without dashboard context, so variables like `$hostname` or `$environment` won't be resolved.
If your dashboard query uses template variables, create a separate query for alerting with hard-coded values.
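For example, the `environment` field and its values below are illustrative. A dashboard query might use a template variable, which isn't valid for alerting:

```
level:error AND environment:$environment
```

The corresponding alert query hard-codes the value instead:

```
level:error AND environment:production
```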
### Logs queries not supported
Queries using the **Logs** metric type cannot be used for alerting. Convert your query to use metric aggregations with a Date histogram instead.
### Query complexity
Complex queries with many nested aggregations may time out or fail to evaluate. Simplify queries for alerting by:
- Reducing the number of bucket aggregations
- Using appropriate time intervals
- Adding filters to limit the data scanned
## Best practices
Follow these best practices when creating Elasticsearch alerts:
- **Use specific filters:** Add Lucene query filters to focus on relevant data and improve query performance.
- **Choose appropriate intervals:** Match the Date histogram interval to your evaluation frequency.
- **Test queries first:** Verify your query returns expected results in Explore before creating an alert.
- **Set realistic thresholds:** Base alert thresholds on historical data patterns.
- **Use meaningful names:** Give alert rules descriptive names that indicate what they monitor.
---
aliases:
- ../../data-sources/elasticsearch/annotations/
description: Using annotations with Elasticsearch in Grafana
keywords:
- grafana
- elasticsearch
- annotations
- events
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Annotations
title: Elasticsearch annotations
weight: 500
refs:
annotate-visualizations:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/annotate-visualizations/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/annotate-visualizations/
---
# Elasticsearch annotations
Annotations overlay event data on your dashboard graphs, helping you correlate log events with metrics.
You can use Elasticsearch as a data source for annotations to display events such as deployments, alerts, or other significant occurrences on your visualizations.
For general information about annotations, refer to [Annotate visualizations](ref:annotate-visualizations).
## Before you begin
Before creating Elasticsearch annotations, ensure you have:
- An Elasticsearch data source configured in Grafana
- Documents in Elasticsearch containing event data with timestamp fields
- Read access to the Elasticsearch index containing your events
## Create an annotation query
To add an Elasticsearch annotation to your dashboard:
1. Navigate to your dashboard and click **Dashboard settings** (gear icon).
1. Select **Annotations** in the left menu.
1. Click **Add annotation query**.
1. Enter a **Name** for the annotation.
1. Select your **Elasticsearch** data source from the **Data source** drop-down.
1. Configure the annotation query and field mappings.
1. Click **Save dashboard**.
## Query
Use the query field to filter which Elasticsearch documents appear as annotations. The query uses [Lucene query syntax](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-string-syntax).
**Examples:**
| Query | Description |
| ---------------------------------------- | ---------------------------------------------------- |
| `*` | Matches all documents. |
| `type:deployment` | Shows only deployment events. |
| `level:error OR level:critical` | Shows error and critical events. |
| `service:api AND environment:production` | Shows events for a specific service and environment. |
| `tags:release` | Shows events tagged as releases. |
You can use template variables in your annotation queries. For example, `service:$service` filters annotations based on the selected service variable.
## Field mappings
Field mappings tell Grafana which Elasticsearch fields contain the annotation data.
### Time
The **Time** field specifies which field contains the annotation timestamp.
- **Default:** `@timestamp`
- **Format:** The field must contain a date value that Elasticsearch recognizes.
### Time End
The **Time End** field specifies a field containing the end time for range annotations. Range annotations display as a shaded region on the graph instead of a single vertical line.
- **Default:** Empty (single-point annotations)
- **Use case:** Display maintenance windows, incidents, or any event with a duration.
### Text
The **Text** field specifies which field contains the annotation description displayed when you hover over the annotation.
- **Default:** `tags`
- **Tip:** Use a descriptive field like `message`, `description`, or `summary`.
### Tags
The **Tags** field specifies which field contains tags for the annotation. Tags help categorize and filter annotations.
- **Default:** Empty
- **Format:** The field can contain either a comma-separated string or an array of strings.
## Example: Deployment annotations
To display deployment events as annotations:
1. Create an annotation query with the following settings:
- **Query:** `type:deployment`
- **Time:** `@timestamp`
- **Text:** `message`
- **Tags:** `environment`
This configuration displays deployment events with their messages as the annotation text and environments as tags.
## Example: Range annotations for incidents
To display incidents with duration:
1. Create an annotation query with the following settings:
- **Query:** `type:incident`
- **Time:** `start_time`
- **Time End:** `end_time`
- **Text:** `description`
- **Tags:** `severity`
This configuration displays incidents as shaded regions from their start time to end time.


@@ -1,209 +0,0 @@
---
aliases:
- ../data-sources/elasticsearch/
- ../features/datasources/elasticsearch/
description: Guide for configuring the Elasticsearch data source in Grafana
keywords:
- grafana
- elasticsearch
- guide
- data source
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Configure Elasticsearch
title: Configure the Elasticsearch data source
weight: 200
refs:
administration-documentation:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/data-source-management/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/data-source-management/
supported-expressions:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/logs-integration/#log-level
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/logs-integration/#log-level
query-and-transform-data:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data/
provisioning-data-source:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/#provision-the-data-source
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/connect-externally-hosted/data-sources/elasticsearch/#provision-the-data-source
---
# Configure the Elasticsearch data source
Grafana ships with built-in support for Elasticsearch.
You can create a variety of queries to visualize logs or metrics stored in Elasticsearch, and annotate graphs with log events stored in Elasticsearch.
For instructions on how to add a data source to Grafana, refer to the [administration documentation](ref:administration-documentation).
Only users with the organization `administrator` role can add data sources.
Administrators can also [configure the data source via YAML](ref:provisioning-data-source) with Grafana's provisioning system.
## Configuring permissions
When Elasticsearch security features are enabled, you must configure the following cluster privileges for the user that Grafana uses to connect:
- **monitor** - Necessary to retrieve the version information of the connected Elasticsearch instance.
- **view_index_metadata** - Required for accessing mapping definitions of indices.
- **read** - Grants the ability to perform search and retrieval operations on indices. This is essential for querying and extracting data from the cluster.
## Add the data source
To add the Elasticsearch data source, complete the following steps:
1. Click **Connections** in the left-side menu.
1. Under **Connections**, click **Add new connection**.
1. Enter `Elasticsearch` in the search bar.
1. Click **Elasticsearch** under the **Data source** section.
1. Click **Add new data source** in the upper right.
You will be taken to the **Settings** tab where you will set up your Elasticsearch configuration.
## Configuration options
The following is a list of configuration options for Elasticsearch.
The first option to configure is the name of your connection:
- **Name** - The data source name. This is how you refer to the data source in panels and queries. Examples: `elastic-1`, `elasticsearch_metrics`.
- **Default** - Toggle on to make this the default data source. When you open a dashboard panel or Explore, this data source is selected by default.
## Connection
Connect the Elasticsearch data source by specifying a URL.
- **URL** - The URL of your Elasticsearch server. If your Elasticsearch server is local, use `http://localhost:9200`. If it is on a server within a network, this is the URL with the port where you are running Elasticsearch. Example: `http://elasticsearch.example.orgname:9200`.
## Authentication
There are several authentication methods you can choose in the Authentication section.
Select one of the following authentication methods from the dropdown menu.
- **Basic authentication** - The most common authentication method. Enter the username and password for your Elasticsearch user.
- **Forward OAuth identity** - Forward the OAuth access token (and the OIDC ID token if available) of the user querying the data source.
- **No authentication** - Make the data source available without authentication. Grafana recommends using some type of authentication method.
<!-- - **With credentials** - Toggle to enable credentials such as cookies or auth headers to be sent with cross-site requests. -->
### TLS settings
{{< admonition type="note" >}}
Use TLS (Transport Layer Security) for an additional layer of security when working with Elasticsearch. For information on setting up TLS encryption with Elasticsearch see [Configure TLS](https://www.elastic.co/guide/en/elasticsearch/reference/8.8/configuring-tls.html#configuring-tls). You must add TLS settings to your Elasticsearch configuration file **prior** to setting these options in Grafana.
{{< /admonition >}}
- **Add self-signed certificate** - Check the box to authenticate with a CA certificate. Follow the instructions of the CA (Certificate Authority) to download the certificate file. Required for verifying self-signed TLS certificates.
- **TLS client authentication** - Check the box to authenticate with the TLS client, where the server authenticates the client. Add the `Server name`, `Client certificate` and `Client key`. The **ServerName** is used to verify the hostname on the returned certificate. The **Client certificate** can be generated from a Certificate Authority (CA) or be self-signed. The **Client key** can also be generated from a Certificate Authority (CA) or be self-signed. The client key encrypts the data between client and server.
- **Skip TLS certificate validation** - Check the box to bypass TLS certificate validation. Skipping TLS certificate validation is not recommended unless absolutely necessary or for testing purposes.
### HTTP headers
Click **+ Add header** to add one or more HTTP headers. HTTP headers pass additional context and metadata about the request/response.
- **Header** - Add a custom header. This allows custom headers to be passed based on the needs of your Elasticsearch instance.
- **Value** - The value of the header.
## Additional settings
Additional settings are optional settings that can be configured for more control over your data source.
### Advanced HTTP settings
- **Allowed cookies** - Specify cookies by name that should be forwarded to the data source. The Grafana proxy deletes all forwarded cookies by default.
- **Timeout** - The HTTP request timeout in seconds. There is no default value.
### Elasticsearch details
The following settings are specific to the Elasticsearch data source.
- **Index name** - Use the index settings to specify a default for the `time field` and your Elasticsearch index's name. You can use a time pattern, for example `[logstash-]YYYY.MM.DD`, or a wildcard for the index name. When specifying a time pattern, the fixed part(s) of the pattern should be wrapped in square brackets.
- **Pattern** - Select the matching pattern if using one in your index name. Options include:
- no pattern
- hourly
- daily
- weekly
- monthly
- yearly
Only select a pattern option if you have specified a time pattern in the Index name field.
- **Time field name** - Name of the time field. The default value is `@timestamp`. You can enter a different name.
- **Max concurrent shard requests** - Sets the number of shards being queried at the same time. The default is `5`. For more information on shards see [Elasticsearch's documentation](https://www.elastic.co/guide/en/elasticsearch/reference/8.9/scalability.html#scalability).
- **Min time interval** - Defines a lower limit for the auto group-by time interval. This value **must** be formatted as a number followed by a valid time identifier:
| Identifier | Description |
| ---------- | ----------- |
| `y` | year |
| `M` | month |
| `w` | week |
| `d` | day |
| `h` | hour |
| `m` | minute |
| `s` | second |
| `ms` | millisecond |
We recommend setting this value to match your Elasticsearch write frequency.
For example, set this to `1m` if Elasticsearch writes data every minute.
You can also override this setting in a dashboard panel under its data source options. The default is `10s`.
- **X-Pack enabled** - Toggle to enable `X-Pack`-specific features and options, which provide the [query editor](../query-editor/) with additional aggregations, such as `Rate` and `Top Metrics`.
- **Include frozen indices** - Toggle on when the `X-Pack enabled` setting is active. Includes frozen indices in searches. You can configure Grafana to include [frozen indices](https://www.elastic.co/guide/en/elasticsearch/reference/7.13/frozen-indices.html) when performing search requests.
{{< admonition type="note" >}}
Frozen indices are [deprecated in Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/frozen-indices.html) since v7.14.
{{< /admonition >}}
- **Default query mode** - Specifies which query mode the data source uses by default. Options are `Metrics`, `Logs`, `Raw data`, and `Raw document`. The default is `Metrics`.
### Logs
In this section you can configure which fields the data source uses for log messages and log levels.
- **Message field name** - Grabs the actual log message from the default source.
- **Level field name** - Name of the field with log level/severity information. When a level label is specified, its value determines the log level and the color of each log line. If the log doesn't have a specified level label, Grafana tries to match its content against any of the [supported expressions](ref:supported-expressions). The first match always determines the log level. If Grafana cannot infer a log-level field, the log line is visualized with an unknown log level.
### Data links
Data links create a link from a specified field that can be accessed in Explore's logs view. You can add multiple data links by clicking **+ Add**.
Each data link configuration consists of:
- **Field** - Sets the name of the field used by the data link.
- **URL/query** - Sets the full link URL if the link is external. If the link is internal, this input serves as a query for the target data source.<br/>In both cases, you can interpolate the value from the field with the `${__value.raw}` macro.
- **URL Label** (Optional) - Sets a custom display label for the link. The link label defaults to the full external URL or name of the linked internal data source and is overridden by this setting.
- **Internal link** - Toggle on to set an internal link. For an internal link, you can select the target data source with a data source selector. This supports only tracing data sources.
## Private data source connect (PDC) and Elasticsearch
Use private data source connect (PDC) to connect to and query data within a secure network without opening that network to inbound traffic from Grafana Cloud. See [Private data source connect](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/) for more information on how PDC works and [Configure Grafana private data source connect (PDC)](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/configure-pdc/#configure-grafana-private-data-source-connect-pdc) for steps on setting up a PDC connection.
If you use PDC with SigV4 (AWS Signature Version 4 Authentication), the PDC agent must allow internet egress to `sts.<region>.amazonaws.com:443`.
- **Private data source connect** - Click in the box to set the default PDC connection from the dropdown menu or create a new connection.
Once you have configured your Elasticsearch data source options, click **Save & test** at the bottom to test out your data source connection. You can also remove a connection by clicking **Delete**.


@@ -0,0 +1,377 @@
---
aliases:
- ../configure-elasticsearch-data-source/
description: Guide for configuring the Elasticsearch data source in Grafana
keywords:
- grafana
- elasticsearch
- guide
- data source
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Configure
title: Configure the Elasticsearch data source
weight: 200
refs:
administration-documentation:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/data-source-management/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/data-source-management/
supported-expressions:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/logs-integration/#log-level
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/logs-integration/#log-level
query-and-transform-data:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data/
provisioning-data-source:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/configure/#provision-the-data-source
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/connect-externally-hosted/data-sources/elasticsearch/configure/#provision-the-data-source
configuration:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/#sigv4_auth_enabled
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/#sigv4_auth_enabled
provisioning-grafana:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/provisioning/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/provisioning/
---
# Configure the Elasticsearch data source
Grafana ships with built-in support for Elasticsearch.
You can create a variety of queries to visualize logs or metrics stored in Elasticsearch, and annotate graphs with log events stored in Elasticsearch.
For instructions on how to add a data source to Grafana, refer to the [administration documentation](ref:administration-documentation).
Administrators can also [configure the data source via YAML](ref:provisioning-data-source) with Grafana's provisioning system.
## Before you begin
To configure the Elasticsearch data source, you need:
- **Grafana administrator permissions:** Only users with the organization `administrator` role can add data sources.
- **A supported Elasticsearch version:** v7.17 or later, v8.x, or v9.x. Elastic Cloud Serverless isn't supported.
- **Elasticsearch server URL:** The HTTP or HTTPS endpoint for your Elasticsearch instance, including the port (default: `9200`).
- **Authentication credentials:** Depending on your Elasticsearch security configuration, you need one of the following:
- Username and password for basic authentication
- API key
- No credentials (if Elasticsearch security is disabled)
- **Network access:** Grafana must be able to reach your Elasticsearch server. For Grafana Cloud, consider using [Private data source connect (PDC)](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/) if your Elasticsearch instance is in a private network.
## Elasticsearch permissions
When Elasticsearch security features are enabled, you must configure the following cluster privileges for the user or API key that Grafana uses to connect:
- **monitor** - Necessary to retrieve the version information of the connected Elasticsearch instance.
- **view_index_metadata** - Required for accessing mapping definitions of indices.
- **read** - Grants the ability to perform search and retrieval operations on indices. This is essential for querying and extracting data from the cluster.
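As an illustrative sketch, these three privileges can be granted with a file-based role in Elasticsearch's `roles.yml`. The `grafana_access` role name and the `logs-*` index pattern are placeholders; you can also create an equivalent role through the Elasticsearch security API.

```yaml
# roles.yml sketch: a role granting Grafana the minimum privileges listed above.
# "grafana_access" and "logs-*" are placeholders; substitute your own values.
grafana_access:
  cluster:
    - monitor
  indices:
    - names:
        - 'logs-*'
      privileges:
        - read
        - view_index_metadata
```

Assign this role to the user or API key that Grafana authenticates with.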
## Add the data source
To add the Elasticsearch data source, complete the following steps:
1. Click **Connections** in the left-side menu.
1. Under **Connections**, click **Add new connection**.
1. Enter `Elasticsearch` in the search bar.
1. Click **Elasticsearch** under the **Data source** section.
1. Click **Add new data source** in the upper right.
You will be taken to the **Settings** tab where you will set up your Elasticsearch configuration.
## Configuration options
Configure the following basic settings for the Elasticsearch data source:
- **Name** - The data source name. This is how you refer to the data source in panels and queries. Examples: `elastic-1`, `elasticsearch_metrics`.
- **Default** - Toggle on to make this the default data source. New panels and Explore queries use the default data source.
## Connection
- **URL** - The URL of your Elasticsearch server, including the port. Examples: `http://localhost:9200`, `http://elasticsearch.example.com:9200`.
## Authentication
Select an authentication method from the drop-down menu:
- **Basic authentication** - Enter the username and password for your Elasticsearch user.
- **Forward OAuth identity** - Forward the OAuth access token (and the OIDC ID token if available) of the user querying the data source.
- **No authentication** - Connect without credentials. Only use this option if your Elasticsearch instance doesn't require authentication.
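If you manage data sources with provisioning rather than the UI, basic authentication maps to the following YAML fields. This is a hedged sketch: `grafana_user`, the index name, and the `$ELASTIC_PASSWORD` environment variable are placeholders.

```yaml
apiVersion: 1

datasources:
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: http://localhost:9200
    basicAuth: true
    basicAuthUser: grafana_user # placeholder user
    secureJsonData:
      basicAuthPassword: $ELASTIC_PASSWORD # expanded from the environment
    jsonData:
      index: 'logs-*'
      timeField: '@timestamp'
```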
### API key authentication
To authenticate using an Elasticsearch API key, select **No authentication** and configure the API key using HTTP headers:
1. In the **HTTP headers** section, click **+ Add header**.
1. Set **Header** to `Authorization`.
1. Set **Value** to `ApiKey <your-api-key>`, replacing `<your-api-key>` with your base64-encoded Elasticsearch API key.
For information about creating API keys, refer to the [Elasticsearch API keys documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html).
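In provisioning YAML, the same header can be supplied with the `httpHeaderName1`/`httpHeaderValue1` convention, which keeps the key in `secureJsonData` so it is stored encrypted. A sketch, with the API key value left as a placeholder:

```yaml
apiVersion: 1

datasources:
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: http://localhost:9200
    jsonData:
      index: 'logs-*'
      timeField: '@timestamp'
      httpHeaderName1: Authorization
    secureJsonData:
      # Placeholder: paste your base64-encoded Elasticsearch API key.
      httpHeaderValue1: 'ApiKey <your-api-key>'
```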
### Amazon Elasticsearch Service
If you use Amazon Elasticsearch Service, you can use Grafana's Elasticsearch data source to visualize data from it.
If you use an AWS Identity and Access Management (IAM) policy to control access to your Amazon Elasticsearch Service domain, you must use AWS Signature Version 4 (AWS SigV4) to sign all requests to that domain.
For details on AWS SigV4, refer to the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html).
To sign requests to your Amazon Elasticsearch Service domain, you can enable SigV4 in Grafana's [configuration](ref:configuration).
Once AWS SigV4 is enabled, you can configure it on the Elasticsearch data source configuration page.
For more information about AWS authentication options, refer to [AWS authentication](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch/aws-authentication/).
{{< figure src="/static/img/docs/v73/elasticsearch-sigv4-config-editor.png" max-width="500px" class="docs-image--no-shadow" caption="SigV4 configuration for AWS Elasticsearch Service" >}}
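When provisioning, SigV4 is configured through `jsonData` and `secureJsonData`. The sketch below assumes static access keys (`sigV4AuthType: keys`); the domain URL, region, and credential environment variables are placeholders, and other auth types such as instance profiles are also available.

```yaml
apiVersion: 1

datasources:
  - name: Elasticsearch (AWS)
    type: elasticsearch
    access: proxy
    # Placeholder domain endpoint; use your Amazon Elasticsearch Service URL.
    url: https://search-example.us-east-1.es.amazonaws.com
    jsonData:
      index: 'logs-*'
      timeField: '@timestamp'
      sigV4Auth: true
      sigV4AuthType: keys
      sigV4Region: us-east-1
    secureJsonData:
      sigV4AccessKey: $AWS_ACCESS_KEY_ID
      sigV4SecretKey: $AWS_SECRET_ACCESS_KEY
```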
### TLS settings
{{< admonition type="note" >}}
Use TLS (Transport Layer Security) for an additional layer of security when working with Elasticsearch. For information on setting up TLS encryption with Elasticsearch, refer to [Configure TLS](https://www.elastic.co/guide/en/elasticsearch/reference/8.8/configuring-tls.html#configuring-tls). You must add TLS settings to your Elasticsearch configuration file **prior** to setting these options in Grafana.
{{< /admonition >}}
- **Add self-signed certificate** - Check the box to authenticate with a CA certificate. Follow the instructions of the CA (Certificate Authority) to download the certificate file. Required for verifying self-signed TLS certificates.
- **TLS client authentication** - Check the box to authenticate with the TLS client, where the server authenticates the client. Add the `Server name`, `Client certificate` and `Client key`. The **ServerName** is used to verify the hostname on the returned certificate. The **Client certificate** can be generated from a Certificate Authority (CA) or be self-signed. The **Client key** can also be generated from a Certificate Authority (CA) or be self-signed. The client key encrypts the data between client and server.
- **Skip TLS certificate validation** - Check the box to bypass TLS certificate validation. Skipping TLS certificate validation is not recommended unless absolutely necessary or for testing purposes.
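The TLS options above map to provisioning fields as follows. This sketch assumes you verify the server with a self-signed CA and also use TLS client authentication; the hostname and PEM contents are placeholders.

```yaml
apiVersion: 1

datasources:
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: https://elasticsearch.example.com:9200
    jsonData:
      index: 'logs-*'
      timeField: '@timestamp'
      tlsAuth: true # TLS client authentication
      tlsAuthWithCACert: true # verify the server against a self-signed CA
      serverName: elasticsearch.example.com
    secureJsonData:
      tlsCACert: |
        -----BEGIN CERTIFICATE-----
        <CA certificate, placeholder>
        -----END CERTIFICATE-----
      tlsClientCert: |
        -----BEGIN CERTIFICATE-----
        <client certificate, placeholder>
        -----END CERTIFICATE-----
      tlsClientKey: |
        -----BEGIN PRIVATE KEY-----
        <client key, placeholder>
        -----END PRIVATE KEY-----
```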
### HTTP headers
Click **+ Add header** to add one or more HTTP headers. HTTP headers pass additional context and metadata about the request/response.
- **Header** - Add a custom header. This allows custom headers to be passed based on the needs of your Elasticsearch instance.
- **Value** - The value of the header.
## Additional settings
Additional settings are optional settings that can be configured for more control over your data source.
### Advanced HTTP settings
- **Allowed cookies** - Specify cookies by name that should be forwarded to the data source. The Grafana proxy deletes all forwarded cookies by default.
- **Timeout** - The HTTP request timeout in seconds. There is no default value.
### Elasticsearch details
The following settings are specific to the Elasticsearch data source.
- **Index name** - The name of your Elasticsearch index. You can use the following formats:
- **Wildcard patterns** - Use `*` to match multiple indices. Examples: `logs-*`, `metrics-*`, `filebeat-*`.
- **Time patterns** - Use date placeholders for time-based indices. Wrap the fixed portion in square brackets. Examples: `[logstash-]YYYY.MM.DD`, `[metrics-]YYYY.MM`.
- **Specific index** - Enter the exact index name. Example: `application-logs`.
- **Pattern** - Select the matching pattern if you use a time pattern in your index name. Options include:
- no pattern
- hourly
- daily
- weekly
- monthly
- yearly
Only select a pattern option if you have specified a time pattern in the Index name field.
- **Time field name** - Name of the time field. The default value is `@timestamp`. You can enter a different name.
- **Max concurrent shard requests** - Sets the number of shards being queried at the same time. The default is `5`. For more information on shards, refer to the [Elasticsearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/8.9/scalability.html#scalability).
- **Min time interval** - Defines a lower limit for the auto group-by time interval. This value **must** be formatted as a number followed by a valid time identifier:
| Identifier | Description |
| ---------- | ----------- |
| `y` | year |
| `M` | month |
| `w` | week |
| `d` | day |
| `h` | hour |
| `m` | minute |
| `s` | second |
| `ms` | millisecond |
We recommend setting this value to match your Elasticsearch write frequency.
For example, set this to `1m` if Elasticsearch writes data every minute.
You can also override this setting in a dashboard panel under its data source options. The default is `10s`.
- **X-Pack enabled** - Toggle to enable `X-Pack`-specific features and options, which provide the [query editor](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/query-editor/) with additional aggregations, such as `Rate` and `Top Metrics`.
- **Include frozen indices** - Toggle on when the `X-Pack enabled` setting is active. Includes frozen indices in searches. You can configure Grafana to include [frozen indices](https://www.elastic.co/guide/en/elasticsearch/reference/7.13/frozen-indices.html) when performing search requests.
{{< admonition type="note" >}}
Frozen indices are [deprecated in Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/frozen-indices.html) since v7.14.
{{< /admonition >}}
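The Elasticsearch-specific settings above correspond to `jsonData` keys when provisioning. A hedged sketch with illustrative values:

```yaml
apiVersion: 1

datasources:
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: http://localhost:9200
    jsonData:
      index: '[logstash-]YYYY.MM.DD' # time-pattern index name
      interval: Daily # matching Pattern option
      timeField: '@timestamp'
      maxConcurrentShardRequests: 5
      timeInterval: 1m # Min time interval
      xpack: true # X-Pack enabled
      includeFrozen: false
```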
### Logs
Configure which fields the data source uses for log messages and log levels.
- **Message field name** - The field that contains the log message content.
- **Level field name** - The field that contains log level or severity information. When specified, Grafana uses this field to determine the log level and color-code each log line. If the log doesn't have a level field, Grafana tries to match the content against [supported expressions](ref:supported-expressions). If Grafana can't determine the log level, it displays as unknown.
### Data links
Data links create a link from a specified field that can be accessed in Explore's logs view. You can add multiple data links by clicking **+ Add**.
Each data link configuration consists of:
- **Field** - Sets the name of the field used by the data link.
- **URL/query** - Sets the full link URL if the link is external. If the link is internal, this input serves as a query for the target data source.<br/>In both cases, you can interpolate the value from the field with the `${__value.raw}` macro.
- **URL Label** (Optional) - Sets a custom display label for the link. The link label defaults to the full external URL or name of the linked internal data source and is overridden by this setting.
- **Internal link** - Toggle on to set an internal link. For an internal link, you can select the target data source with a data source selector. This supports only tracing data sources.
## Private data source connect (PDC) and Elasticsearch
Use private data source connect (PDC) to connect to and query data within a secure network without opening that network to inbound traffic from Grafana Cloud. Refer to [Private data source connect](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/) for more information on how PDC works and [Configure Grafana private data source connect (PDC)](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/configure-pdc/#configure-grafana-private-data-source-connect-pdc) for steps on setting up a PDC connection.
If you use PDC with SigV4 (AWS Signature Version 4 Authentication), the PDC agent must allow internet egress to `sts.<region>.amazonaws.com:443`.
- **Private data source connect** - Click in the box to set the default PDC connection from the drop-down menu or create a new connection.
Once you have configured your Elasticsearch data source options, click **Save & test** to test the connection. A successful connection displays the following message:
`Elasticsearch data source is healthy.`
## Provision the data source
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana](ref:provisioning-grafana).
{{< admonition type="note" >}}
The previously used `database` field has now been [deprecated](https://github.com/grafana/grafana/pull/58647).
Use the `index` field in `jsonData` to store the index name.
Refer to the examples below.
{{< /admonition >}}
### Basic provisioning
```yaml
apiVersion: 1
datasources:
- name: Elastic
type: elasticsearch
access: proxy
url: http://localhost:9200
jsonData:
index: '[metrics-]YYYY.MM.DD'
interval: Daily
timeField: '@timestamp'
```
### Provision for logs
```yaml
apiVersion: 1
datasources:
- name: elasticsearch-v7-filebeat
type: elasticsearch
access: proxy
url: http://localhost:9200
jsonData:
index: '[filebeat-]YYYY.MM.DD'
interval: Daily
timeField: '@timestamp'
logMessageField: message
logLevelField: fields.level
dataLinks:
- datasourceUid: my_jaeger_uid # Target UID needs to be known
field: traceID
url: '$${__value.raw}' # Careful about the double "$$" because of env var expansion
```
## Provision the data source using Terraform
You can provision the Elasticsearch data source using [Terraform](https://www.terraform.io/) with the [Grafana Terraform provider](https://registry.terraform.io/providers/grafana/grafana/latest/docs).
For more information about provisioning resources with Terraform, refer to the [Grafana as code using Terraform](https://grafana.com/docs/grafana-cloud/developer-resources/infrastructure-as-code/terraform/) documentation.
### Basic Terraform example
The following example creates a basic Elasticsearch data source for metrics:
```hcl
resource "grafana_data_source" "elasticsearch" {
name = "Elasticsearch"
type = "elasticsearch"
url = "http://localhost:9200"
json_data_encoded = jsonencode({
index = "[metrics-]YYYY.MM.DD"
interval = "Daily"
timeField = "@timestamp"
})
}
```
### Terraform example for logs
The following example creates an Elasticsearch data source configured for logs with a data link to Jaeger:
```hcl
resource "grafana_data_source" "elasticsearch_logs" {
name = "Elasticsearch Logs"
type = "elasticsearch"
url = "http://localhost:9200"
json_data_encoded = jsonencode({
index = "[filebeat-]YYYY.MM.DD"
interval = "Daily"
timeField = "@timestamp"
logMessageField = "message"
logLevelField = "fields.level"
dataLinks = [
{
datasourceUid = grafana_data_source.jaeger.uid
field = "traceID"
url = "$${__value.raw}"
}
]
})
}
```
### Terraform example with basic authentication
The following example includes basic authentication:
```hcl
resource "grafana_data_source" "elasticsearch_auth" {
name = "Elasticsearch"
type = "elasticsearch"
url = "http://localhost:9200"
basic_auth_enabled = true
basic_auth_username = "elastic_user"
secure_json_data_encoded = jsonencode({
basicAuthPassword = var.elasticsearch_password
})
json_data_encoded = jsonencode({
index = "[metrics-]YYYY.MM.DD"
interval = "Daily"
timeField = "@timestamp"
})
}
```
For all available configuration options, refer to the [Grafana provider data source resource documentation](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/data_source).


@@ -30,7 +30,7 @@ refs:
# Elasticsearch query editor
Grafana provides a query editor for Elasticsearch. Elasticsearch queries are in Lucene format.
For more information about query syntax, refer to [Lucene query syntax](https://www.elastic.co/guide/en/kibana/current/lucene-query.html) and [Query string syntax](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-string-syntax).
{{< admonition type="note" >}}
When composing Lucene queries, ensure that you use uppercase boolean operators: `AND`, `OR`, and `NOT`. Lowercase versions of these operators are not supported by the Lucene query syntax.
@@ -38,17 +38,17 @@ When composing Lucene queries, ensure that you use uppercase boolean operators:
{{< figure src="/static/img/docs/elasticsearch/elastic-query-editor-10.1.png" max-width="800px" class="docs-image--no-shadow" caption="Elasticsearch query editor" >}}
For general documentation on querying data sources in Grafana, including options and functions common to all query editors, refer to [Query and transform data](ref:query-and-transform-data).
## Aggregation types
Elasticsearch groups aggregations into three categories:
- **Bucket** - Bucket aggregations don't calculate metrics, they create buckets of documents based on field values, ranges and a variety of other criteria. Refer to [Bucket aggregations](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket.html) for additional information. Use bucket aggregations under `Group by` when creating a metrics query in the query builder.
- **Metrics** - Metrics aggregations perform calculations such as sum, average, min, etc. They can be single-value or multi-value. Refer to [Metrics aggregations](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics.html) for additional information. Use metrics aggregations in the metrics query type in the query builder.
- **Pipeline** - Pipeline aggregations work on the output of other aggregations rather than on documents or fields. Refer to [Pipeline aggregations](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline.html) for additional information.
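As a sketch of how these categories combine, the request body below nests a metrics aggregation (`avg`) inside a bucket aggregation (`date_histogram`); the `@timestamp` and `duration` field names are placeholders:

```json
{
  "aggs": {
    "per_minute": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1m" },
      "aggs": {
        "avg_duration": { "avg": { "field": "duration" } }
      }
    }
  }
}
```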
## Select a query type
There are three types of queries you can create with the Elasticsearch query builder:
### Metrics query type
Metrics queries aggregate data and produce calculations such as count, min, max, and more. Click the metric box to view options in the drop-down menu. The default is `count`.
- **Alias** - Aliasing only applies to **time series queries**, where the last group is `date histogram`. This is ignored for any other type of query.
- **Metric** - Metrics aggregations include:
- count - refer to [Value count aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-valuecount-aggregation.html)
- average - refer to [Avg aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-avg-aggregation.html)
- sum - refer to [Sum aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-sum-aggregation.html)
- max - refer to [Max aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-max-aggregation.html)
- min - refer to [Min aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-min-aggregation.html)
- extended stats - refer to [Extended stats aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-extendedstats-aggregation.html)
- percentiles - refer to [Percentiles aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-percentile-aggregation.html)
- unique count - refer to [Cardinality aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-cardinality-aggregation.html)
- top metrics - refer to [Top metrics aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-top-metrics.html)
- rate - refer to [Rate aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-rate-aggregation.html)
- **Pipeline aggregations** - Pipeline aggregations work on the output of other aggregations rather than on documents. The following pipeline aggregations are available:
- moving function - Calculates a value based on a sliding window of aggregated values. Refer to [Moving function aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-movfn-aggregation.html).
- derivative - Calculates the derivative of a metric. Refer to [Derivative aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-derivative-aggregation.html).
- cumulative sum - Calculates the cumulative sum of a metric. Refer to [Cumulative sum aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-cumulative-sum-aggregation.html).
- serial difference - Calculates the difference between values in a time series. Refer to [Serial differencing aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-serialdiff-aggregation.html).
- bucket script - Executes a script on metric values from other aggregations. Refer to [Bucket script aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-bucket-script-aggregation.html).
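For example, a derivative pipeline aggregation reads the output of a sibling `sum` metric inside each date histogram bucket, referenced through `buckets_path` (field names are placeholders):

```json
{
  "aggs": {
    "per_hour": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1h" },
      "aggs": {
        "total_bytes": { "sum": { "field": "bytes" } },
        "bytes_deriv": { "derivative": { "buckets_path": "total_bytes" } }
      }
    }
  }
}
```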
You can select multiple metrics and group by multiple terms or filters when using the Elasticsearch query editor.
Use the **+ sign** to the right to add multiple metrics to your query. Click the **eye icon** next to **Metric** to hide metrics, and the **garbage can icon** to remove them.
- **Group by options** - Create multiple group by options when constructing your Elasticsearch query. Date histogram is the default option. The following options are available in the drop-down menu:
- terms - refer to [Terms aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html).
- filter - refer to [Filter aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-filter-aggregation.html).
- geo hash grid - refer to [Geohash grid aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-geohashgrid-aggregation.html).
- date histogram - for time series queries. Refer to [Date histogram aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-datehistogram-aggregation.html).
- histogram - Depicts frequency distributions. Refer to [Histogram aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-histogram-aggregation.html).
- nested (experimental) - Refer to [Nested aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-nested-aggregation.html).
Each group by option will have a different subset of options to further narrow your query.
The following options are specific to the **date histogram** bucket aggregation option.
- **Time field** - The field used for time-based queries. The default can be set when configuring the data source in the **Time field name** setting under [Elasticsearch details](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/configure/#elasticsearch-details). The default is `@timestamp`.
- **Interval** - The time interval for grouping data. Select from the drop-down menu or enter a custom interval such as `30d` (30 days). The default is `Auto`.
- **Min doc count** - The minimum number of documents required to include a bucket. The default is `0`.
- **Trim edges** - Removes partial buckets at the edges of the time range. The default is `0`.
- **Offset** - Shifts the start of each bucket by the specified duration. Use positive (`+`) or negative (`-`) values. Examples: `1h`, `5s`, `1d`.
- **Timezone** - The timezone for date calculations. The default is `Coordinated Universal Time`.
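As a rough sketch, these options map to parameters of the Elasticsearch `date_histogram` aggregation (the values shown are illustrative):

```json
{
  "date_histogram": {
    "field": "@timestamp",
    "fixed_interval": "30m",
    "min_doc_count": 0,
    "offset": "+1h",
    "time_zone": "UTC"
  }
}
```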
Configure the following options for the **terms** bucket aggregation option:
- **Size** - Limits the number of documents, or size of the data set. You can set a custom number or `no limit`.
- **Min doc count** - The minimum number of documents required to include a term. The default is `0`.
- **Order by** - Order terms by `term value`, `doc count` or `count`.
- **Missing** - Defines how documents missing a value should be treated. Missing values are ignored by default, but they can be treated as if they had a value. Refer to [Missing value](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#_missing_value_5) in the Elasticsearch documentation for more information.
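These options roughly correspond to parameters of the Elasticsearch `terms` aggregation (the values shown are illustrative):

```json
{
  "terms": {
    "field": "hostname.keyword",
    "size": 10,
    "min_doc_count": 1,
    "order": { "_count": "desc" },
    "missing": "unknown"
  }
}
```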
Configure the following options for the **filters** bucket aggregation option:
Configure the following options for the **histogram** bucket aggregation option:
- **Interval** - The numeric interval for grouping values into buckets.
- **Min doc count** - The minimum number of documents required to include a bucket. The default is `0`.
The **nested** group by option is currently experimental. You can select a field and then configure settings specific to that field.
The option to run a **raw document query** is deprecated as of Grafana v10.1.
## Use template variables
You can also augment queries by using [template variables](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/template-variables/).
Queries of `terms` have a 500-result limit by default.
To set a custom limit, set the `size` property in your query.


refs:
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/
add-template-variables-add-ad-hoc-filters:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/add-template-variables/#add-ad-hoc-filters
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/add-template-variables/#add-ad-hoc-filters
add-template-variables-multi-value-variables:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/add-template-variables/#multi-value-variables
# Elasticsearch template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in drop-down select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For an introduction to templating and template variables, refer to the [Templating](ref:variables) and [Add and manage variables](ref:add-template-variables) documentation.
## Use ad hoc filters
Elasticsearch supports the **Ad hoc filters** variable type.
You can use this variable type to specify any number of key/value filters, and Grafana applies them automatically to all of your Elasticsearch queries.
Ad hoc filters support the following operators:
| Operator | Description |
| -------- | ------------------------------------------------------------- |
| `=` | Equals. Adds `AND field:"value"` to the query. |
| `!=` | Not equals. Adds `AND -field:"value"` to the query. |
| `=~` | Matches regex. Adds `AND field:/value/` to the query. |
| `!~` | Does not match regex. Adds `AND -field:/value/` to the query. |
| `>` | Greater than. Adds `AND field:>value` to the query. |
| `<` | Less than. Adds `AND field:<value` to the query. |
For more information, refer to [Add ad hoc filters](ref:add-template-variables-add-ad-hoc-filters).
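For example, applying the hypothetical filters `hostname = web-01` and `environment != staging` to the base query `level:error` produces a Lucene query along these lines:

```
level:error AND hostname:"web-01" AND -environment:"staging"
```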
## Choose a variable syntax
The Elasticsearch data source supports two variable syntaxes for use in the **Query** field:
- `[[varname]]`, such as `hostname:[[hostname]]`
When the _Multi-value_ or _Include all value_ options are enabled, Grafana converts the labels from plain text to a Lucene-compatible condition.
For details, refer to the [Multi-value variables](ref:add-template-variables-multi-value-variables) documentation.
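For example, selecting multiple values for a hypothetical `$hostname` variable expands to a Lucene-compatible condition along these lines:

```
hostname:("web-01" OR "web-02" OR "web-03")
```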
## Use variables in queries
You can use variables in the Lucene query field, metric aggregation fields, bucket aggregation fields, and the alias field.
### Variables in Lucene queries
Use variables to filter your Elasticsearch queries dynamically:
```
hostname:$hostname AND level:$level
```
### Chain or nest variables
You can create nested variables, where one variable's values depend on another variable's selection.
This example defines a variable named `$host` that only shows hosts matching the selected `$environment`:
```json
{ "find": "terms", "field": "hostname", "query": "environment:$environment" }
```
Whenever you change the value of the `$environment` variable via the drop-down, Grafana triggers an update of the `$host` variable to contain only hostnames filtered by the selected environment.
### Variables in aggregations
You can use variables in bucket aggregation fields to dynamically change how data is grouped. For example, use a variable in the **Terms** group by field to let users switch between grouping by `hostname`, `service`, or `datacenter`.
## Template variable examples
Write the query using a custom JSON string, with the field mapped as a keyword in your Elasticsearch index.
If the query is [multi-field](https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-fields.html) with both a `text` and `keyword` type, use `"field":"fieldname.keyword"` (sometimes `fieldname.raw`) to specify the keyword field in your query.
| Query | Description |
| ------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------ |
| `{"find": "fields", "type": "keyword"}` | Returns a list of field names with the index type `keyword`. |
| `{"find": "fields", "type": "number"}` | Returns a list of numeric field names (includes `float`, `double`, `integer`, `long`, `scaled_float`). |
| `{"find": "fields", "type": "date"}` | Returns a list of date field names. |
| `{"find": "terms", "field": "hostname.keyword", "size": 1000}` | Returns a list of values for a keyword field. Uses the current dashboard time range. |
| `{"find": "terms", "field": "hostname", "query": "<Lucene query>"}` | Returns a list of values filtered by a Lucene query. Uses the current dashboard time range. |
| `{"find": "terms", "field": "status", "orderBy": "doc_count"}` | Returns values sorted by document count (descending by default). |
| `{"find": "terms", "field": "status", "orderBy": "doc_count", "order": "asc"}` | Returns values sorted by document count in ascending order. |
Queries of `terms` have a 500-result limit by default. To set a custom limit, set the `size` property in your query.
### Sort query results
By default, queries return results in term order (which can then be sorted alphabetically or numerically using the variable's Sort setting).
To produce a list of terms sorted by document count (a top-N values list), add an `orderBy` property of `doc_count`. This automatically selects a descending sort:
```json
{ "find": "terms", "field": "status", "orderBy": "doc_count" }
```
You can also use the `order` property to explicitly set ascending or descending sort:
```json
{ "find": "terms", "field": "hostname", "orderBy": "doc_count", "order": "asc" }
```
{{< admonition type="note" >}}
Elasticsearch [discourages](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#search-aggregations-bucket-terms-aggregation-order) sorting by ascending doc count because it can return inaccurate results.
{{< /admonition >}}
To keep terms in the document count order, set the variable's Sort drop-down to **Disabled**. You can alternatively use other sorting criteria, such as **Alphabetical**, to re-sort them.


---
aliases:
- ../../data-sources/elasticsearch/troubleshooting/
description: Troubleshooting the Elasticsearch data source in Grafana
keywords:
- grafana
- elasticsearch
- troubleshooting
- errors
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Troubleshooting
title: Troubleshoot issues with the Elasticsearch data source
weight: 600
---
# Troubleshoot issues with the Elasticsearch data source
This document provides troubleshooting information for common errors you may encounter when using the Elasticsearch data source in Grafana.
## Connection errors
The following errors occur when Grafana cannot establish or maintain a connection to Elasticsearch.
### Failed to connect to Elasticsearch
**Error message:** "Health check failed: Failed to connect to Elasticsearch"
**Cause:** Grafana cannot establish a network connection to the Elasticsearch server.
**Solution:**
1. Verify that the Elasticsearch URL is correct in the data source configuration.
1. Check that Elasticsearch is running and accessible from the Grafana server.
1. Ensure there are no firewall rules blocking the connection.
1. If using a proxy, verify the proxy settings are correct.
1. For Grafana Cloud, ensure you have configured [Private data source connect](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/) if your Elasticsearch instance is not publicly accessible.
### Request timed out
**Error message:** "Health check failed: Elasticsearch data source is not healthy. Request timed out"
**Cause:** The connection to Elasticsearch timed out before receiving a response.
**Solution:**
1. Check the network latency between Grafana and Elasticsearch.
1. Verify that Elasticsearch is not overloaded or experiencing performance issues.
1. Increase the timeout setting in the data source configuration if needed.
1. Check if any network devices (load balancers, proxies) are timing out the connection.
### Failed to parse data source URL
**Error message:** "Failed to parse data source URL"
**Cause:** The URL entered in the data source configuration is not valid.
**Solution:**
1. Verify the URL format is correct (for example, `http://localhost:9200` or `https://elasticsearch.example.com:9200`).
1. Ensure the URL includes the protocol (`http://` or `https://`).
1. Remove any trailing slashes or invalid characters from the URL.
## Authentication errors
The following errors occur when there are issues with authentication credentials or permissions.
### Unauthorized (401)
**Error message:** "Health check failed: Elasticsearch data source is not healthy. Status: 401 Unauthorized"
**Cause:** The authentication credentials are invalid or missing.
**Solution:**
1. Verify that the username and password are correct.
1. If using an API key, ensure the key is valid and has not expired.
1. Check that the authentication method selected matches your Elasticsearch configuration.
1. Verify the user has the required permissions to access the Elasticsearch cluster.
### Forbidden (403)
**Error message:** "Health check failed: Elasticsearch data source is not healthy. Status: 403 Forbidden"
**Cause:** The authenticated user does not have permission to access the requested resource.
**Solution:**
1. Verify the user has read access to the specified index.
1. Check Elasticsearch security settings and role mappings.
1. Ensure the user has permission to access the `_cluster/health` endpoint.
1. If using AWS Elasticsearch Service with SigV4 authentication, verify the IAM policy grants the required permissions.
## Cluster health errors
The following errors occur when the Elasticsearch cluster is unhealthy or unavailable.
### Cluster status is red
**Error message:** "Health check failed: Elasticsearch data source is not healthy"
**Cause:** The Elasticsearch cluster health status is red, indicating one or more primary shards are not allocated.
**Solution:**
1. Check the Elasticsearch cluster health using `GET /_cluster/health`.
1. Review Elasticsearch logs for errors.
1. Verify all nodes in the cluster are running and connected.
1. Check for unassigned shards using `GET /_cat/shards?v&h=index,shard,prirep,state,unassigned.reason`.
1. Consider increasing the cluster's resources or reducing the number of shards.
### Bad Gateway (502)
**Error message:** "Health check failed: Elasticsearch data source is not healthy. Status: 502 Bad Gateway"
**Cause:** A proxy or load balancer between Grafana and Elasticsearch returned an error.
**Solution:**
1. Check the health of any proxies or load balancers in the connection path.
1. Verify Elasticsearch is running and accepting connections.
1. Review proxy/load balancer logs for more details.
1. Ensure the proxy timeout is configured appropriately for Elasticsearch requests.
## Index errors
The following errors occur when there are issues with the configured index or index pattern.
### Index not found
**Error message:** "Error validating index: index_not_found"
**Cause:** The specified index or index pattern does not match any existing indices.
**Solution:**
1. Verify the index name or pattern in the data source configuration.
1. Check that the index exists using `GET /_cat/indices`.
1. If using a time-based index pattern (for example, `[logs-]YYYY.MM.DD`), ensure indices exist for the selected time range.
1. Verify the user has permission to access the index.
### Time field not found
**Error message:** "Could not find time field '@timestamp' with type date in index"
**Cause:** The specified time field does not exist in the index or is not of type `date`.
**Solution:**
1. Verify the time field name in the data source configuration matches the field in your index.
1. Check the field mapping using `GET /<index>/_mapping`.
1. Ensure the time field is mapped as a `date` type, not `text` or `keyword`.
1. If the field name is different (for example, `timestamp` instead of `@timestamp`), update the data source configuration.
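For reference, a minimal index mapping sketch that defines `@timestamp` as a `date` field looks like this:

```json
{
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" }
    }
  }
}
```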
## Query errors
The following errors occur when there are issues with query syntax or configuration.
### Too many buckets
**Error message:** "Trying to create too many buckets. Must be less than or equal to: [65536]."
**Cause:** The query is generating more aggregation buckets than Elasticsearch allows.
**Solution:**
1. Reduce the time range of your query.
1. Increase the date histogram interval (for example, change from `10s` to `1m`).
1. Add filters to reduce the number of documents being aggregated.
1. Increase the `search.max_buckets` setting in Elasticsearch (requires cluster admin access).
### Required field missing
**Error message:** "Required one of fields [field, script], but none were specified."
**Cause:** A metric aggregation (such as Average, Sum, or Min) was added without specifying a field.
**Solution:**
1. Select a field for the metric aggregation in the query editor.
1. Ensure the selected field exists in your index and contains numeric data.
### Unsupported interval
**Error message:** "unsupported interval '<interval>'"
**Cause:** The interval specified for the index pattern is not valid.
**Solution:**
1. Use a supported interval: `Hourly`, `Daily`, `Weekly`, `Monthly`, or `Yearly`.
1. If you don't need a time-based index pattern, use `No pattern` and specify the exact index name.
## Version errors
The following errors occur when there are Elasticsearch version compatibility issues.
### Unsupported Elasticsearch version
**Error message:** "Support for Elasticsearch versions after their end-of-life (currently versions < 7.16) was removed. Using unsupported version of Elasticsearch may lead to unexpected and incorrect results."
**Cause:** The Elasticsearch version is no longer supported by the Grafana data source.
**Solution:**
1. Upgrade Elasticsearch to a supported version (7.17+, 8.x, or 9.x).
1. Refer to [Elastic Product End of Life Dates](https://www.elastic.co/support/eol) for version support information.
1. Note that queries may still work, but Grafana does not guarantee functionality for unsupported versions.
## Other common issues
The following issues don't produce specific error messages but are commonly encountered.
### Empty query results
**Cause:** The query returns no data.
**Solution:**
1. Verify the time range includes data in your index.
1. Check the Lucene query syntax for errors.
1. Test the query directly in Elasticsearch using the `_search` API.
1. Ensure the index contains documents matching your query filters.
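One way to test the query outside Grafana is to send the same time range and filters to the `_search` API directly. A sketch of such a request body (the index, field, and query string are hypothetical); send it with `POST /my-index/_search`:

```python
import json

# Reproduce the panel's query: a time-range filter plus the Lucene query string
body = {
    "size": 1,
    "query": {
        "bool": {
            "filter": [
                {"range": {"@timestamp": {"gte": "now-6h", "lte": "now"}}},
                {"query_string": {"query": "level:error"}},
            ]
        }
    },
}
print(json.dumps(body, indent=2))
```

If this request returns hits but the panel stays empty, the problem is on the Grafana side (time range, field mapping, or data source configuration) rather than in the data.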
### Slow query performance
**Cause:** Queries take a long time to execute.
**Solution:**
1. Reduce the time range of your query.
1. Add more specific filters to limit the data scanned.
1. Increase the date histogram interval.
1. Check Elasticsearch cluster performance and resource utilization.
1. Consider using index aliases or data streams for better query routing.
### CORS errors in browser console
**Cause:** Cross-Origin Resource Sharing (CORS) is blocking requests from the browser to Elasticsearch.
**Solution:**
1. Use Server (proxy) access mode instead of Browser access mode in the data source configuration.
1. If Browser access is required, configure CORS settings in Elasticsearch:
```yaml
http.cors.enabled: true
http.cors.allow-origin: '<your-grafana-url>'
http.cors.allow-headers: 'Authorization, Content-Type'
http.cors.allow-credentials: true
```
{{< admonition type="note" >}}
Server (proxy) access mode is recommended for security and reliability.
{{< /admonition >}}
## Get additional help
If you continue to experience issues after following this troubleshooting guide:
1. Check the [Elasticsearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html) for API-specific guidance.
1. Review the [Grafana community forums](https://community.grafana.com/) for similar issues.
1. Contact Grafana Support if you have an Enterprise license.


@@ -0,0 +1,80 @@
---
description: Learn how to troubleshoot common problems with the Grafana MySQL data source plugin
keywords:
- grafana
- mysql
- query
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Troubleshoot
title: Troubleshoot common problems with the Grafana MySQL data source plugin
weight: 40
refs:
variables:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/variables/
variable-syntax-advanced-variable-format-options:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/variable-syntax/#advanced-variable-format-options
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/variables/variable-syntax/#advanced-variable-format-options
annotate-visualizations:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/annotate-visualizations/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/build-dashboards/annotate-visualizations/
explore:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/
query-transform-data:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data/
panel-inspector:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/panel-inspector/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/panel-inspector/
query-editor:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/#query-editors
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data/#query-editors
alert-rules:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/
template-annotations-and-labels:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/templates/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/templates/
configure-standard-options:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/configure-standard-options/
---
# Troubleshoot common problems with the Grafana MySQL data source plugin
This page lists common issues you might experience when setting up the Grafana MySQL data source plugin.
## My data source connection fails when using the Grafana MySQL data source plugin
- Check if the MySQL server is up and running.
- Make sure that your firewall is open for MySQL server (default port is `3306`).
- Ensure that you have the correct permissions to access the MySQL server and the target database.
- If the error persists, create a new user for the Grafana MySQL data source plugin with the correct permissions and try connecting with that user.
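To separate network problems from permission problems, first confirm that the MySQL port is reachable at all. A minimal sketch (the host name is an assumption; this checks TCP reachability only, not credentials or grants):

```python
import socket

def mysql_port_open(host: str, port: int = 3306, timeout: float = 3.0) -> bool:
    # Reachability check only: confirms the host accepts TCP connections on the
    # MySQL port; it does not validate credentials or database permissions.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not mysql_port_open("db.example.com"):
    print("MySQL port not reachable: check the server, the firewall, and port 3306")
```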
## What should I do if I see "An unexpected error happened" or "Could not connect to MySQL" after trying all of the above?
- Check the Grafana logs for more details about the error.
- For Grafana Cloud customers, contact support.


@@ -83,6 +83,11 @@ This topic lists words and abbreviations that are commonly used in the Grafana d
A commonly-used visualization that displays data as points, lines, or bars.
</td>
</tr>
<tr>
<td style="vertical-align: top"><code>grafanactl</code></td>
<td>
A command-line tool that enables users to authenticate, manage multiple environments, and perform administrative tasks through Grafana's REST API.
</td>
</tr>
<tr>
<td style="vertical-align: top">mixin</td>
<td>


@@ -99,6 +99,7 @@ Add links to other dashboards at the top of your current dashboard.
- **Include current time range** Select this option to include the dashboard time range in the link. When the user clicks the link, the linked dashboard opens with the indicated time range already set. **Example:** https://play.grafana.org/d/000000010/annotations?orgId=1&from=now-3h&to=now
- **Include current template variable values** Select this option to include template variables currently used as query parameters in the link. When the user clicks the link, any matching templates in the linked dashboard are set to the values from the link. For more information, see [Dashboard URL variables](ref:dashboard-url-variables).
- **Open link in new tab** Select this option if you want the dashboard link to open in a new tab or window.
- **Show in controls menu** Select this option to display the link in the dashboard controls menu instead of at the top of the dashboard. The dashboard controls menu appears as a button in the dashboard toolbar.
1. Click **Save dashboard** in the top-right corner.
1. Click **Back to dashboard** and then **Exit edit**.
@@ -121,6 +122,7 @@ Add a link to a URL at the top of your current dashboard. You can link to any av
- **Include current time range** Select this option to include the dashboard time range in the link. When the user clicks the link, the linked dashboard opens with the indicated time range already set. **Example:** https://play.grafana.org/d/000000010/annotations?orgId=1&from=now-3h&to=now
- **Include current template variable values** Select this option to include template variables currently used as query parameters in the link. When the user clicks the link, any matching templates in the linked dashboard are set to the values from the link.
- **Open link in new tab** Select this option if you want the dashboard link to open in a new tab or window.
- **Show in controls menu** Select this option to display the link in the dashboard controls menu instead of at the top of the dashboard. The dashboard controls menu appears as a button in the dashboard toolbar.
1. Click **Save dashboard** in the top-right corner.
1. Click **Back to dashboard** and then **Exit edit**.


@@ -123,10 +123,11 @@ To create a variable, follow these steps:
If you don't enter a display name, then the drop-down list label is the variable name.
1. Choose a **Show on dashboard** option:
- **Label and value** - The variable drop-down list displays the variable **Name** or **Label** value. This is the default.
- **Value:** The variable drop-down list only displays the selected variable value and a down arrow.
- **Nothing:** No variable drop-down list is displayed on the dashboard.
1. Choose a **Display** option:
- **Above dashboard** - The variable drop-down list displays above the dashboard with the variable **Name** or **Label** value. This is the default.
- **Above dashboard, label hidden** - The variable drop-down list displays above the dashboard, but without showing the name of the variable.
- **Controls menu** - The variable is displayed in the dashboard controls menu instead of above the dashboard. The dashboard controls menu appears as a button in the dashboard toolbar.
- **Hidden** - No variable drop-down list is displayed on the dashboard.
1. Click one of the following links to complete the steps for adding your selected variable type:
- [Query](#add-a-query-variable)


@@ -12,12 +12,13 @@ comments: |
To build this Markdown, do the following:
$ cd /docs (from the root of the repository)
$ make sources/panels-visualizations/query-transform-data/transform-data/index.md
$ make sources/visualizations/panels-visualizations/query-transform-data/transform-data/index.md
$ make docs
Browse to http://localhost:3003/docs/grafana/latest/panels-visualizations/query-transform-data/transform-data/
Refer to ./docs/README.md "Content guidelines" for more information about editing and building these docs.
aliases:
- ../../../panels/transform-data/ # /docs/grafana/next/panels/transform-data/
- ../../../panels/transform-data/about-transformation/ # /docs/grafana/next/panels/transform-data/about-transformation/


@@ -8,6 +8,7 @@ test.use({
scopeFilters: true,
groupByVariable: true,
reloadDashboardsOnParamsChange: true,
useScopesNavigationEndpoint: true,
},
});
@@ -61,31 +62,6 @@ test.describe('Scope Redirect Functionality', () => {
});
});
test('should fall back to scope navigation when no redirectUrl', async ({ page, gotoDashboardPage }) => {
const scopes = testScopesWithRedirect();
await test.step('Navigate to dashboard and open scopes selector', async () => {
await gotoDashboardPage({ uid: 'cuj-dashboard-1' });
await openScopesSelector(page, scopes);
});
await test.step('Select scope without redirectUrl', async () => {
// Select the scope without redirectUrl directly
await selectScope(page, 'sn-redirect-fallback', scopes[1]);
});
await test.step('Apply scopes and verify fallback behavior', async () => {
await applyScopes(page, [scopes[1]]);
// Should stay on current dashboard since no redirectUrl is provided
// The scope navigation fallback should not redirect (as per existing behavior)
await expect(page).toHaveURL(/\/d\/cuj-dashboard-1/);
// Verify the scope was applied
await expect(page).toHaveURL(/scopes=scope-sn-redirect-fallback/);
});
});
test('should not redirect when reloading page on dashboard not in dashboard list', async ({
page,
gotoDashboardPage,
@@ -171,4 +147,47 @@ test.describe('Scope Redirect Functionality', () => {
await expect(page).not.toHaveURL(/scopes=/);
});
});
test('should not redirect to redirectPath when on active scope navigation', async ({ page, gotoDashboardPage }) => {
const scopes = testScopesWithRedirect();
await test.step('Set up scope navigation to dashboard-1', async () => {
// First, apply a scope that creates scope navigation to dashboard-1 (without redirectPath)
await gotoDashboardPage({ uid: 'cuj-dashboard-1' });
await openScopesSelector(page, scopes);
await selectScope(page, 'sn-redirect-setup', scopes[2]);
await applyScopes(page, [scopes[2]]);
// Verify we're on dashboard-1 with the scope applied
await expect(page).toHaveURL(/\/d\/cuj-dashboard-1/);
await expect(page).toHaveURL(/scopes=scope-sn-redirect-setup/);
});
await test.step('Navigate to dashboard-1 to be on active scope navigation', async () => {
// Navigate to dashboard-1 which is now a scope navigation target
await gotoDashboardPage({
uid: 'cuj-dashboard-1',
queryParams: new URLSearchParams({ scopes: 'scope-sn-redirect-setup' }),
});
// Verify we're on dashboard-1
await expect(page).toHaveURL(/\/d\/cuj-dashboard-1/);
});
await test.step('Apply scope with redirectPath and verify no redirect', async () => {
// Now apply a different scope that has redirectPath
// Since we're on an active scope navigation, it should NOT redirect
await openScopesSelector(page, scopes);
await selectScope(page, 'sn-redirect-with-navigation', scopes[3]);
await applyScopes(page, [scopes[3]]);
// Verify the new scope was applied
await expect(page).toHaveURL(/scopes=scope-sn-redirect-with-navigation/);
// Since we're already on the active scope navigation (dashboard-1),
// we should NOT redirect to redirectPath (dashboard-3)
await expect(page).toHaveURL(/\/d\/cuj-dashboard-1/);
await expect(page).not.toHaveURL(/\/d\/cuj-dashboard-3/);
});
});
});


@@ -419,6 +419,9 @@ test.describe(
// Select tabs layout
await page.getByLabel('layout-selection-option-Tabs').click();
// confirm layout change
await dashboardPage.getByGrafanaSelector(selectors.pages.ConfirmModal.delete).click();
await expect(dashboardPage.getByGrafanaSelector(selectors.components.Tab.title('New row'))).toBeVisible();
await expect(dashboardPage.getByGrafanaSelector(selectors.components.Tab.title('New row 1'))).toBeVisible();
await expect(
@@ -757,6 +760,9 @@ test.describe(
// Select rows layout
await page.getByLabel('layout-selection-option-Rows').click();
// confirm layout change
await dashboardPage.getByGrafanaSelector(selectors.pages.ConfirmModal.delete).click();
await dashboardPage
.getByGrafanaSelector(selectors.components.DashboardRow.wrapper('New tab 1'))
.scrollIntoViewIfNeeded();


@@ -4,6 +4,8 @@ import { test, expect, E2ESelectorGroups, DashboardPage } from '@grafana/plugin-
import testV2Dashboard from '../dashboards/TestV2Dashboard.json';
import { switchToAutoGrid } from './utils';
test.use({
featureToggles: {
kubernetesDashboards: true,
@@ -33,7 +35,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
await expect(
dashboardPage.getByGrafanaSelector(selectors.components.Panels.Panel.title('New panel'))
@@ -64,7 +67,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
// Get initial positions - standard width should have panels on different rows
const firstPanelTop = await getPanelTop(dashboardPage, selectors);
@@ -124,7 +128,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
await dashboardPage
.getByGrafanaSelector(selectors.components.PanelEditor.ElementEditPane.AutoGridLayout.minColumnWidth)
@@ -181,7 +186,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
await dashboardPage
.getByGrafanaSelector(selectors.components.PanelEditor.ElementEditPane.AutoGridLayout.maxColumns)
@@ -216,7 +222,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
const regularRowHeight = await getPanelHeight(dashboardPage, selectors);
@@ -271,7 +278,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
const regularRowHeight = await getPanelHeight(dashboardPage, selectors);
@@ -328,7 +336,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
// Set narrow column width first to ensure panels fit horizontally
await dashboardPage


@@ -1,6 +1,6 @@
import { Page } from 'playwright-core';
import { test, expect } from '@grafana/plugin-e2e';
import { test, expect, DashboardPage } from '@grafana/plugin-e2e';
import testV2DashWithRepeats from '../dashboards/V2DashWithRepeats.json';
@@ -12,6 +12,7 @@ import {
getPanelPosition,
importTestDashboard,
goToEmbeddedPanel,
switchToAutoGrid,
} from './utils';
const repeatTitleBase = 'repeat - ';
@@ -42,7 +43,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await dashboardPage.getByGrafanaSelector(selectors.components.Panels.Panel.title('New panel')).first().click();
@@ -78,7 +79,8 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await saveDashboard(dashboardPage, page, selectors);
await page.reload();
@@ -117,7 +119,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
// select first/original repeat panel to activate edit pane
await dashboardPage
@@ -148,7 +150,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await saveDashboard(dashboardPage, page, selectors);
await page.reload();
@@ -214,7 +216,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await saveDashboard(dashboardPage, page, selectors);
// loading directly into panel editor
@@ -271,7 +273,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
// this moving repeated panel between two normal panels
await movePanel(dashboardPage, selectors, `${repeatTitleBase}${repeatOptions.at(0)}`, 'New panel');
@@ -319,7 +321,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await saveDashboard(dashboardPage, page, selectors);
await page.reload();
@@ -382,7 +384,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await saveDashboard(dashboardPage, page, selectors);
await page.reload();
@@ -410,7 +412,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await saveDashboard(dashboardPage, page, selectors);
await page.reload();
@@ -462,7 +464,3 @@ test.describe(
});
}
);
async function switchToAutoGrid(page: Page) {
await page.getByLabel('layout-selection-option-Auto grid').click();
}


@@ -1,5 +1,6 @@
import { Page } from '@playwright/test';
import { selectors } from '@grafana/e2e-selectors';
import { DashboardPage, E2ESelectorGroups, expect } from '@grafana/plugin-e2e';
import testV2Dashboard from '../dashboards/TestV2Dashboard.json';
@@ -239,3 +240,12 @@ export async function getTabPosition(dashboardPage: DashboardPage, selectors: E2
const boundingBox = await tab.boundingBox();
return boundingBox;
}
export async function switchToAutoGrid(page: Page, dashboardPage: DashboardPage) {
await page.getByLabel('layout-selection-option-Auto grid').click();
// confirm layout change if applicable
const confirmModal = dashboardPage.getByGrafanaSelector(selectors.pages.ConfirmModal.delete);
if (await confirmModal.isVisible()) {
await confirmModal.click();
}
}


@@ -156,13 +156,18 @@ export async function applyScopes(page: Page, scopes?: TestScope[]) {
return;
}
const url: string =
const dashboardBindingsUrl: string =
'**/apis/scope.grafana.app/v0alpha1/namespaces/*/find/scope_dashboard_bindings?' +
scopes.map((scope) => `scope=scope-${scope.name}`).join('&');
const scopeNavigationsUrl: string =
'**/apis/scope.grafana.app/v0alpha1/namespaces/*/find/scope_navigations?' +
scopes.map((scope) => `scope=scope-${scope.name}`).join('&');
const groups: string[] = ['Most relevant', 'Dashboards', 'Something else', ''];
await page.route(url, async (route) => {
// Mock scope_dashboard_bindings endpoint
await page.route(dashboardBindingsUrl, async (route) => {
await route.fulfill({
status: 200,
contentType: 'application/json',
@@ -215,7 +220,52 @@ export async function applyScopes(page: Page, scopes?: TestScope[]) {
});
});
const responsePromise = page.waitForResponse((response) => response.url().includes(`/find/scope_dashboard_bindings`));
// Mock scope_navigations endpoint
await page.route(scopeNavigationsUrl, async (route) => {
await route.fulfill({
status: 200,
contentType: 'application/json',
body: JSON.stringify({
apiVersion: 'scope.grafana.app/v0alpha1',
items: scopes.flatMap((scope) => {
const navigations: Array<{
kind: string;
apiVersion: string;
metadata: { name: string; resourceVersion: string; creationTimestamp: string };
spec: { url: string; scope: string };
status: { title: string };
}> = [];
// Create a scope navigation if dashboardUid is provided
if (scope.dashboardUid && scope.addLinks) {
navigations.push({
kind: 'ScopeNavigation',
apiVersion: 'scope.grafana.app/v0alpha1',
metadata: {
name: `scope-${scope.name}-nav`,
resourceVersion: '1',
creationTimestamp: 'stamp',
},
spec: {
url: `/d/${scope.dashboardUid}`,
scope: `scope-${scope.name}`,
},
status: {
title: scope.dashboardTitle ?? scope.title,
},
});
}
return navigations;
}),
}),
});
});
const responsePromise = page.waitForResponse(
(response) =>
response.url().includes(`/find/scope_dashboard_bindings`) || response.url().includes(`/find/scope_navigations`)
);
const scopeRequestPromises: Array<Promise<Response>> = [];
for (const scope of scopes) {


@@ -124,5 +124,23 @@ export const testScopesWithRedirect = (): TestScope[] => {
dashboardTitle: 'CUJ Dashboard 2',
addLinks: true,
},
{
name: 'sn-redirect-setup',
title: 'Setup Navigation',
// No redirectPath - used to set up scope navigation to dashboard-1
filters: [{ key: 'namespace', operator: 'equals', value: 'setup-nav' }],
dashboardUid: 'cuj-dashboard-1', // Creates scope navigation to this dashboard
dashboardTitle: 'CUJ Dashboard 1',
addLinks: true,
},
{
name: 'sn-redirect-with-navigation',
title: 'Redirect With Navigation',
redirectPath: '/d/cuj-dashboard-3', // Redirect target
filters: [{ key: 'namespace', operator: 'equals', value: 'redirect-with-nav' }],
dashboardUid: 'cuj-dashboard-1', // Creates scope navigation to this dashboard
dashboardTitle: 'CUJ Dashboard 1',
addLinks: true,
},
];
};


@@ -2882,11 +2882,6 @@
"count": 1
}
},
"public/app/features/panel/components/VizTypePicker/PanelTypeCard.tsx": {
"@grafana/no-aria-label-selectors": {
"count": 1
}
},
"public/app/features/panel/panellinks/linkSuppliers.ts": {
"@typescript-eslint/no-explicit-any": {
"count": 1

go.mod

@@ -48,7 +48,7 @@ require (
github.com/blugelabs/bluge_segment_api v0.2.0 // @grafana/grafana-backend-group
github.com/bradfitz/gomemcache v0.0.0-20230905024940-24af94b03874 // @grafana/grafana-backend-group
github.com/bwmarrin/snowflake v0.3.0 // @grafana/grafana-app-platform-squad
github.com/centrifugal/centrifuge v0.37.2 // @grafana/grafana-app-platform-squad
github.com/centrifugal/centrifuge v0.38.0 // @grafana/grafana-app-platform-squad
github.com/crewjam/saml v0.4.14 // @grafana/identity-access-team
github.com/dgraph-io/badger/v4 v4.7.0 // @grafana/grafana-search-and-storage
github.com/dlmiddlecote/sqlstats v1.0.2 // @grafana/grafana-backend-group
@@ -386,7 +386,7 @@ require (
github.com/caio/go-tdigest v3.1.0+incompatible // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // @grafana/alerting-backend
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/centrifugal/protocol v0.16.2 // indirect
github.com/centrifugal/protocol v0.17.0 // indirect
github.com/cespare/xxhash v1.1.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cheekybits/genny v1.0.0 // indirect
@@ -562,7 +562,7 @@ require (
github.com/prometheus/procfs v0.16.1 // indirect
github.com/protocolbuffers/txtpbfmt v0.0.0-20241112170944-20d2c9ebc01d // indirect
github.com/puzpuzpuz/xsync/v2 v2.5.1 // indirect
github.com/redis/rueidis v1.0.64 // indirect
github.com/redis/rueidis v1.0.68 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/rogpeppe/go-internal v1.14.1 // indirect
@@ -687,6 +687,7 @@ require (
github.com/moby/term v0.5.0 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
github.com/quagmt/udecimal v1.9.0 // indirect
github.com/shirou/gopsutil/v4 v4.25.3 // indirect
github.com/tklauser/go-sysconf v0.3.14 // indirect
github.com/tklauser/numcpus v0.8.0 // indirect

go.sum

@@ -1006,10 +1006,10 @@ github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F9
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.4.1/go.mod h1:4T9NM4+4Vw91VeyqjLS6ao50K5bOcLKN6Q42XnYaRYw=
github.com/centrifugal/centrifuge v0.37.2 h1:rerQNvDfYN2FZEkVtb/hvGV7SIrJfEQrKF3MaE8GDlo=
github.com/centrifugal/centrifuge v0.37.2/go.mod h1:aj4iRJGhzi3SlL8iUtVezxway1Xf8g+hmNQkLLO7sS8=
github.com/centrifugal/protocol v0.16.2 h1:KoIHgDeX1fFxyxQoKW+6E8ZTCf5mwGm8JyGoJ5NBMbQ=
github.com/centrifugal/protocol v0.16.2/go.mod h1:Q7OpS/8HMXDnL7f9DpNx24IhG96MP88WPpVTTCdrokI=
github.com/centrifugal/centrifuge v0.38.0 h1:UJTowwc5lSwnpvd3vbrTseODbU7osSggN67RTrJ8EfQ=
github.com/centrifugal/centrifuge v0.38.0/go.mod h1:rcZLARnO5GXOeE9qG7iIPMvERxESespqkSX4cGLCAzo=
github.com/centrifugal/protocol v0.17.0 h1:hD0WczyiG7zrVJcgkQsd5/nhfFXt0Y04SJHV2Z7B1rg=
github.com/centrifugal/protocol v0.17.0/go.mod h1:9MdiYyjw5Bw1+d5Sp4Y0NK+qiuTNyd88nrHJsUUh8k4=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
@@ -2334,11 +2334,13 @@ github.com/puzpuzpuz/xsync/v2 v2.5.1 h1:mVGYAvzDSu52+zaGyNjC+24Xw2bQi3kTr4QJ6N9p
github.com/puzpuzpuz/xsync/v2 v2.5.1/go.mod h1:gD2H2krq/w52MfPLE+Uy64TzJDVY7lP2znR9qmR35kU=
github.com/puzpuzpuz/xsync/v4 v4.2.0 h1:dlxm77dZj2c3rxq0/XNvvUKISAmovoXF4a4qM6Wvkr0=
github.com/puzpuzpuz/xsync/v4 v4.2.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo=
github.com/quagmt/udecimal v1.9.0 h1:TLuZiFeg0HhS6X8VDa78Y6XTaitZZfh+z5q4SXMzpDQ=
github.com/quagmt/udecimal v1.9.0/go.mod h1:ScmJ/xTGZcEoYiyMMzgDLn79PEJHcMBiJ4NNRT3FirA=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/redis/go-redis/v9 v9.14.0 h1:u4tNCjXOyzfgeLN+vAZaW1xUooqWDqVEsZN0U01jfAE=
github.com/redis/go-redis/v9 v9.14.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
github.com/redis/rueidis v1.0.64 h1:XqgbueDuNV3qFdVdQwAHJl1uNt90zUuAJuzqjH4cw6Y=
github.com/redis/rueidis v1.0.64/go.mod h1:Lkhr2QTgcoYBhxARU7kJRO8SyVlgUuEkcJO1Y8MCluA=
github.com/redis/rueidis v1.0.68 h1:gept0E45JGxVigWb3zoWHvxEc4IOC7kc4V/4XvN8eG8=
github.com/redis/rueidis v1.0.68/go.mod h1:Lkhr2QTgcoYBhxARU7kJRO8SyVlgUuEkcJO1Y8MCluA=
github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=


@@ -708,6 +708,8 @@ github.com/envoyproxy/go-control-plane/envoy v1.32.3/go.mod h1:F6hWupPfh75TBXGKA
github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw=
github.com/envoyproxy/protoc-gen-validate v1.0.4/go.mod h1:qys6tmnRsYrQqIhm2bvKZH4Blx/1gTIZ2UKVY1M+Yew=
github.com/envoyproxy/protoc-gen-validate v1.1.0/go.mod h1:sXRDRVmzEbkM7CVcM06s9shE/m23dg3wzjl0UWqJ2q4=
github.com/ericlagergren/decimal v0.0.0-20240411145413-00de7ca16731 h1:R/ZjJpjQKsZ6L/+Gf9WHbt31GG8NMVcpRqUE+1mMIyo=
github.com/ericlagergren/decimal v0.0.0-20240411145413-00de7ca16731/go.mod h1:M9R1FoZ3y//hwwnJtO51ypFGwm8ZfpxPT/ZLtO1mcgQ=
github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM=
github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw=
@@ -1330,6 +1332,7 @@ github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e h1:aoZm08cpOy4WuID//EZDgc
github.com/pkg/sftp v1.13.1 h1:I2qBYMChEhIjOgazfJmV3/mZM256btk6wkCDRmW7JYs=
github.com/pkg/xattr v0.4.10 h1:Qe0mtiNFHQZ296vRgUjRCoPHPqH7VdTOrZx3g0T+pGA=
github.com/pkg/xattr v0.4.10/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
github.com/planetscale/vtprotobuf v0.6.0/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
github.com/posener/complete v1.2.3 h1:NP0eAhjcjImqslEwo/1hq7gpajME0fTLTezBKDqfXqo=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/pquerna/cachecontrol v0.1.0 h1:yJMy84ti9h/+OEWa752kBTKv4XC30OtVVHYv/8cTqKc=
@@ -1397,6 +1400,7 @@ github.com/schollz/closestmatch v2.1.0+incompatible/go.mod h1:RtP1ddjLong6gTkbtm
github.com/schollz/progressbar/v3 v3.14.6 h1:GyjwcWBAf+GFDMLziwerKvpuS7ZF+mNTAXIB2aspiZs=
github.com/schollz/progressbar/v3 v3.14.6/go.mod h1:Nrzpuw3Nl0srLY0VlTvC4V6RL50pcEymjy6qyJAaLa0=
github.com/sclevine/spec v1.4.0/go.mod h1:LvpgJaFyvQzRvc1kaDs0bulYwzC70PbiYjC4QnFHkOM=
github.com/segmentio/asm v1.1.4/go.mod h1:Ld3L4ZXGNcSLRg4JBsZ3//1+f/TjYl0Mzen/DQy1EJg=
github.com/segmentio/fasthash v1.0.3 h1:EI9+KE1EwvMLBWwjpRDc+fEM+prwxDYbslddQGtrmhM=
github.com/segmentio/fasthash v1.0.3/go.mod h1:waKX8l2N8yckOgmSsXJi7x1ZfdKZ4x7KRMzBtS3oedY=
github.com/segmentio/parquet-go v0.0.0-20220811205829-7efc157d28af/go.mod h1:PxYdAI6cGd+s1j4hZDQbz3VFgobF5fDA0weLeNWKTE4=
@@ -1935,6 +1939,7 @@ golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT
golang.org/x/net v0.0.0-20211123203042-d83791d6bcd9/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.3.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/net v0.16.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/net v0.24.0/go.mod h1:2Q7sJY5mzlzWjKtYUEXSlBWCdyaioyXzRB2RtU8KVE8=
@@ -2001,6 +2006,7 @@ golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ=
golang.org/x/term v0.33.0/go.mod h1:s18+ql9tYWp1IfpV9DmCtQDDSRBUjKaw9M1eAv5UeF0=
golang.org/x/term v0.34.0/go.mod h1:5jC53AEywhIVebHgPVeg0mj8OD3VO9OzclacVrqpaAw=
golang.org/x/term v0.35.0/go.mod h1:TPGtkTLesOwf2DE8CgVYiZinHAOuy5AYUYT1lENIZnA=
golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.17.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
@@ -2077,6 +2083,7 @@ google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5/go.
google.golang.org/genproto/googleapis/api v0.0.0-20250929231259-57b25ae835d4/go.mod h1:NnuHhy+bxcg30o7FnVAZbXsPHUDQ9qKWAQKCD7VxFtk=
google.golang.org/genproto/googleapis/bytestream v0.0.0-20250603155806-513f23925822 h1:zWFRixYR5QlotL+Uv3YfsPRENIrQFXiGs+iwqel6fOQ=
google.golang.org/genproto/googleapis/bytestream v0.0.0-20250603155806-513f23925822/go.mod h1:h6yxum/C2qRb4txaZRLDHK8RyS0H/o2oEDeKY4onY/Y=
google.golang.org/genproto/googleapis/rpc v0.0.0-20230822172742-b8732ec3820d/go.mod h1:+Bk1OCOj40wS2hwAMA+aCW9ypzm63QTBBHp6lQ3p+9M=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231002182017-d307bd883b97/go.mod h1:v7nGkzlmW8P3n/bKmWBn2WpBjpOEx8Q6gMueudAmKfY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231106174013-bbf56f31fb17/go.mod h1:oQ5rr10WTTMvP4A36n8JpR1OrO1BEiV4f78CneXZxkA=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80/go.mod h1:PAREbraiVEVGVdTZsVWjSbbTtSyGbAgIIvni8a8CD5s=
@@ -2107,6 +2114,7 @@ google.golang.org/genproto/googleapis/rpc v0.0.0-20251014184007-4626949a642f/go.
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
google.golang.org/grpc v1.58.2/go.mod h1:tgX3ZQDlNJGU96V6yHh1T/JeoBQ2TXdr43YbYSsCJk0=
google.golang.org/grpc v1.59.0/go.mod h1:aUPDwccQo6OTjy7Hct4AfBPD1GptF4fyUjIkQ9YtF98=
google.golang.org/grpc v1.61.0/go.mod h1:VUbo7IFqmF1QtCAstipjG0GIoq49KvMe9+h1jFLBNJs=
google.golang.org/grpc v1.62.1/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE=

View File

@@ -124,7 +124,6 @@
"@types/eslint": "9.6.1",
"@types/eslint-scope": "^8.0.0",
"@types/file-saver": "2.0.7",
"@types/glob": "^9.0.0",
"@types/google.analytics": "^0.0.46",
"@types/gtag.js": "^0.0.20",
"@types/history": "4.7.11",
@@ -290,7 +289,7 @@
"@grafana/google-sdk": "0.3.5",
"@grafana/i18n": "workspace:*",
"@grafana/lezer-logql": "0.2.9",
"@grafana/llm": "0.22.1",
"@grafana/llm": "1.0.1",
"@grafana/monaco-logql": "^0.0.8",
"@grafana/o11y-ds-frontend": "workspace:*",
"@grafana/plugin-ui": "^0.11.1",
@@ -460,7 +459,8 @@
"gitconfiglocal": "2.1.0",
"tmp@npm:^0.0.33": "~0.2.1",
"js-yaml@npm:4.1.0": "^4.1.0",
"js-yaml@npm:=4.1.0": "^4.1.0"
"js-yaml@npm:=4.1.0": "^4.1.0",
"nodemailer": "7.0.7"
},
"workspaces": {
"packages": [

View File

@@ -165,6 +165,19 @@ const injectedRtkApi = api
}),
providesTags: ['Search'],
}),
getSearchUsers: build.query<GetSearchUsersApiResponse, GetSearchUsersApiArg>({
query: (queryArg) => ({
url: `/searchUsers`,
params: {
query: queryArg.query,
limit: queryArg.limit,
page: queryArg.page,
offset: queryArg.offset,
sort: queryArg.sort,
},
}),
providesTags: ['Search'],
}),
listServiceAccount: build.query<ListServiceAccountApiResponse, ListServiceAccountApiArg>({
query: (queryArg) => ({
url: `/serviceaccounts`,
@@ -896,6 +909,18 @@ export type GetSearchTeamsApiArg = {
/** page number to start from */
page?: number;
};
export type GetSearchUsersApiResponse = unknown;
export type GetSearchUsersApiArg = {
query?: string;
/** number of results to return */
limit?: number;
/** page number (starting from 1) */
page?: number;
/** number of results to skip */
offset?: number;
/** sortable field */
sort?: string;
};
export type ListServiceAccountApiResponse = /** status 200 OK */ ServiceAccountList;
export type ListServiceAccountApiArg = {
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
@@ -2067,6 +2092,9 @@ export type UserSpec = {
role: string;
title: string;
};
export type UserStatus = {
lastSeenAt: number;
};
export type User = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */
apiVersion?: string;
@@ -2075,6 +2103,7 @@ export type User = {
metadata: ObjectMeta;
/** Spec is the spec of the User */
spec: UserSpec;
status: UserStatus;
};
export type UserList = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */
@@ -2120,6 +2149,8 @@ export const {
useUpdateExternalGroupMappingMutation,
useGetSearchTeamsQuery,
useLazyGetSearchTeamsQuery,
useGetSearchUsersQuery,
useLazyGetSearchUsersQuery,
useListServiceAccountQuery,
useLazyListServiceAccountQuery,
useCreateServiceAccountMutation,

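The generated `getSearchUsers` endpoint above forwards its arguments as query parameters. A minimal sketch of that mapping — the `buildSearchUsersUrl` helper is illustrative only, not part of the generated client, and undefined values are skipped here for illustration:

```typescript
// Mirrors GetSearchUsersApiArg from the generated client above.
type GetSearchUsersApiArg = {
  query?: string;
  /** number of results to return */
  limit?: number;
  /** page number (starting from 1) */
  page?: number;
  /** number of results to skip */
  offset?: number;
  /** sortable field */
  sort?: string;
};

// Hypothetical helper showing how the endpoint's args serialize onto the URL.
function buildSearchUsersUrl(arg: GetSearchUsersApiArg): string {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(arg)) {
    if (value !== undefined) {
      params.set(key, String(value));
    }
  }
  const qs = params.toString();
  return qs ? `/searchUsers?${qs}` : '/searchUsers';
}
```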
View File

@@ -1108,6 +1108,8 @@ export type ExportJobOptions = {
message?: string;
/** Prefix in the target file system. FIXME: we should validate this in admission hooks */
path?: string;
/** Resources to export. This option was created because the frontend does not currently use standardized app platform APIs. For performance and API consistency reasons, the preferred option is to use the resources. */
resources?: ResourceRef[];
};
export type JobSpec = {
/** Possible enum values:
@@ -1138,7 +1140,7 @@ export type JobResourceSummary = {
delete?: number;
/** Create or update (export) */
error?: number;
/** Report errors for this resource type This may not be an exhaustive list and recommend looking at the logs for more info */
/** Report errors/warnings for this resource type This may not be an exhaustive list and recommend looking at the logs for more info */
errors?: string[];
group?: string;
kind?: string;
@@ -1146,6 +1148,9 @@ export type JobResourceSummary = {
noop?: number;
total?: number;
update?: number;
/** The warning count */
warning?: number;
warnings?: string[];
write?: number;
};
export type RepositoryUrLs = {
@@ -1176,6 +1181,7 @@ export type JobStatus = {
summary?: JobResourceSummary[];
/** URLs contains URLs for the reference branch or commit if applicable. */
url?: RepositoryUrLs;
warnings?: string[];
};
export type Job = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */

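Taken together, the new `resources` field lets a single export job target specific dashboards rather than the whole instance. A hedged sketch of such a payload — the `ResourceRef` shape (`group`/`kind`/`name`) and the group/kind values are assumptions and should be checked against the generated types; other `ExportJobOptions` fields are elided:

```typescript
// Assumed shape; verify against the generated ResourceRef type.
type ResourceRef = { group?: string; kind?: string; name?: string };

type ExportJobOptions = {
  message?: string;
  path?: string;
  resources?: ResourceRef[];
};

// Export two specific dashboards (hypothetical names) to a repository subpath.
const exportJob: ExportJobOptions = {
  path: 'dashboards',
  resources: [
    { group: 'dashboard.grafana.app', kind: 'Dashboard', name: 'payments-overview' },
    { group: 'dashboard.grafana.app', kind: 'Dashboard', name: 'payments-errors' },
  ],
};
```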
View File

@@ -1,5 +1,5 @@
/**
* A library containing the different design components of the Grafana ecosystem.
* A library containing e2e selectors for the Grafana ecosystem.
*
* @packageDocumentation
*/

View File

@@ -3,7 +3,7 @@
// (a <button> with clear text, for example, does not need an aria-label as it's already labeled)
// but you still might need to select it for testing,
// in that case please add the attribute data-testid={selector} in the component and
// prefix your selector string with 'data-testid' so that when create the selectors we know to search for it on the right attribute
// prefix your selector string with 'data-testid' so that when we create the selectors we know to search for it on the right attribute
import { VersionedSelectorGroup } from '../types';
@@ -1057,6 +1057,7 @@ export const versionedComponents = {
},
PluginVisualization: {
item: {
'12.4.0': (title: string) => `data-testid Plugin visualization item ${title}`,
[MIN_GRAFANA_VERSION]: (title: string) => `Plugin visualization item ${title}`,
},
current: {

View File

@@ -17,6 +17,10 @@ export interface Options {
* Controls the height of the rows
*/
cellHeight?: ui.TableCellHeight;
/**
* If true, disables all keyboard events in the table. This is used when previewing a table (i.e. suggestions)
*/
disableKeyboardEvents?: boolean;
/**
* Enable pagination on the table
*/

View File

@@ -13,6 +13,7 @@ import * as common from '@grafana/schema';
export const pluginVersion = "12.4.0-pre";
export interface Options extends common.OptionsWithTimezones, common.OptionsWithAnnotations {
disableKeyboardEvents?: boolean;
legend: common.VizLegendOptions;
orientation?: common.VizOrientation;
timeCompare?: common.TimeCompareOptions;

View File

@@ -106,6 +106,11 @@ export function RadialGauge(props: RadialGaugeProps) {
const gaugeId = useId();
const styles = useStyles2(getStyles);
let effectiveTextMode = textMode;
if (effectiveTextMode === 'auto') {
effectiveTextMode = vizCount === 1 ? 'value' : 'value_and_name';
}
const startAngle = shape === 'gauge' ? 250 : 0;
const endAngle = shape === 'gauge' ? 110 : 360;
@@ -188,7 +193,7 @@ export function RadialGauge(props: RadialGaugeProps) {
// These elements are only added for first value / bar
if (barIndex === 0) {
if (glowBar) {
defs.push(<GlowGradient key="glow-filter" id={glowFilterId} radius={dimensions.radius} />);
defs.push(<GlowGradient key="glow-filter" id={glowFilterId} barWidth={dimensions.barWidth} />);
}
if (glowCenter) {
@@ -198,14 +203,14 @@ export function RadialGauge(props: RadialGaugeProps) {
graphics.push(
<RadialText
key="radial-text"
vizCount={vizCount}
textMode={textMode}
textMode={effectiveTextMode}
displayValue={displayValue.display}
dimensions={dimensions}
theme={theme}
valueManualFontSize={props.valueManualFontSize}
nameManualFontSize={props.nameManualFontSize}
shape={shape}
sparkline={displayValue.sparkline}
/>
);
@@ -254,6 +259,7 @@ export function RadialGauge(props: RadialGaugeProps) {
theme={theme}
color={color}
shape={shape}
textMode={effectiveTextMode}
/>
);
}

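The `'auto'` text-mode resolution hoisted into `RadialGauge` above can be read as a small pure function — a sketch, with the `RadialTextMode` union assumed from the diff (there may be additional members):

```typescript
type RadialTextMode = 'auto' | 'value' | 'value_and_name';

// With a single gauge the name is redundant, so 'auto' shows only the value;
// with multiple gauges the name disambiguates them.
function resolveTextMode(textMode: RadialTextMode, vizCount: number): Exclude<RadialTextMode, 'auto'> {
  if (textMode === 'auto') {
    return vizCount === 1 ? 'value' : 'value_and_name';
  }
  return textMode;
}
```

Resolving once in `RadialGauge` and passing the concrete mode down means `RadialText` and `RadialSparkline` no longer need to handle `'auto'` themselves.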
View File

@@ -1,11 +1,9 @@
import { css } from '@emotion/css';
import { FieldDisplay, GrafanaTheme2, FieldConfig } from '@grafana/data';
import { GraphFieldConfig, GraphGradientMode, LineInterpolation } from '@grafana/schema';
import { Sparkline } from '../Sparkline/Sparkline';
import { RadialShape } from './RadialGauge';
import { RadialShape, RadialTextMode } from './RadialGauge';
import { GaugeDimensions } from './utils';
interface RadialSparklineProps {
@@ -14,23 +12,22 @@ interface RadialSparklineProps {
theme: GrafanaTheme2;
color?: string;
shape?: RadialShape;
textMode: Exclude<RadialTextMode, 'auto'>;
}
export function RadialSparkline({ sparkline, dimensions, theme, color, shape }: RadialSparklineProps) {
export function RadialSparkline({ sparkline, dimensions, theme, color, shape, textMode }: RadialSparklineProps) {
const { radius, barWidth } = dimensions;
if (!sparkline) {
return null;
}
const { radius, barWidth } = dimensions;
const height = radius / 4;
const widthFactor = shape === 'gauge' ? 1.6 : 1.4;
const width = radius * widthFactor - barWidth;
const topPos = shape === 'gauge' ? `${dimensions.gaugeBottomY - height}px` : `calc(50% + ${radius / 2.8}px)`;
const styles = css({
position: 'absolute',
top: topPos,
});
const showNameAndValue = textMode === 'value_and_name';
const height = radius / (showNameAndValue ? 4 : 3);
const width = radius * (shape === 'gauge' ? 1.6 : 1.4) - barWidth;
const topPos =
shape === 'gauge'
? `${dimensions.gaugeBottomY - height}px`
: `calc(50% + ${radius / (showNameAndValue ? 3.3 : 4)}px)`;
const config: FieldConfig<GraphFieldConfig> = {
color: {
@@ -45,7 +42,7 @@ export function RadialSparkline({ sparkline, dimensions, theme, color, shape }:
};
return (
<div className={styles}>
<div style={{ position: 'absolute', top: topPos }}>
<Sparkline height={height} width={width} sparkline={sparkline} theme={theme} config={config} />
</div>
);

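The sparkline sizing in the diff above can be recomputed in isolation. A sketch under stated assumptions: `radius` and `barWidth` come from `GaugeDimensions` in the diff, and `isGauge` stands in for `shape === 'gauge'`:

```typescript
// Recomputes the sparkline box from the diff: without a name line the
// sparkline gets a third of the radius instead of a quarter, and the gauge
// shape uses a slightly wider factor than the full circle.
function sparklineGeometry(radius: number, barWidth: number, isGauge: boolean, showNameAndValue: boolean) {
  const height = radius / (showNameAndValue ? 4 : 3);
  const width = radius * (isGauge ? 1.6 : 1.4) - barWidth;
  return { height, width };
}
```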
View File

@@ -1,6 +1,12 @@
import { css } from '@emotion/css';
import { DisplayValue, DisplayValueAlignmentFactors, formattedValueToString, GrafanaTheme2 } from '@grafana/data';
import {
DisplayValue,
DisplayValueAlignmentFactors,
FieldSparkline,
formattedValueToString,
GrafanaTheme2,
} from '@grafana/data';
import { useStyles2 } from '../../themes/ThemeContext';
import { calculateFontSize } from '../../utils/measureText';
@@ -8,21 +14,13 @@ import { calculateFontSize } from '../../utils/measureText';
import { RadialShape, RadialTextMode } from './RadialGauge';
import { GaugeDimensions } from './utils';
// function toCartesian(centerX: number, centerY: number, radius: number, angleInDegrees: number) {
// let radian = ((angleInDegrees - 90) * Math.PI) / 180.0;
// return {
// x: centerX + radius * Math.cos(radian),
// y: centerY + radius * Math.sin(radian),
// };
// }
interface RadialTextProps {
displayValue: DisplayValue;
theme: GrafanaTheme2;
dimensions: GaugeDimensions;
textMode: RadialTextMode;
vizCount: number;
textMode: Exclude<RadialTextMode, 'auto'>;
shape: RadialShape;
sparkline?: FieldSparkline;
alignmentFactors?: DisplayValueAlignmentFactors;
valueManualFontSize?: number;
nameManualFontSize?: number;
@@ -33,8 +31,8 @@ export function RadialText({
theme,
dimensions,
textMode,
vizCount,
shape,
sparkline,
alignmentFactors,
valueManualFontSize,
nameManualFontSize,
@@ -46,10 +44,6 @@ export function RadialText({
return null;
}
if (textMode === 'auto') {
textMode = vizCount === 1 ? 'value' : 'value_and_name';
}
const nameToAlignTo = (alignmentFactors ? alignmentFactors.title : displayValue.title) ?? '';
const valueToAlignTo = formattedValueToString(alignmentFactors ? alignmentFactors : displayValue);
@@ -59,7 +53,7 @@ export function RadialText({
// Not sure where this comes from but svg text is not using body line-height
const lineHeight = 1.21;
const valueWidthToRadiusFactor = 0.85;
const valueWidthToRadiusFactor = 0.82;
const nameToHeightFactor = 0.45;
const largeRadiusScalingDecay = 0.86;
@@ -98,18 +92,23 @@ export function RadialText({
const valueHeight = valueFontSize * lineHeight;
const nameHeight = nameFontSize * lineHeight;
const valueY = showName ? centerY - nameHeight / 2 : centerY;
const valueNameSpacing = valueHeight / 3.5;
const nameY = showValue ? valueY + valueHeight / 2 + valueNameSpacing : centerY;
const valueY = showName ? centerY - nameHeight * 0.3 : centerY;
const nameY = showValue ? valueY + valueHeight * 0.7 : centerY;
const nameColor = showValue ? theme.colors.text.secondary : theme.colors.text.primary;
const suffixShift = (valueFontSize - unitFontSize * 1.2) / 2;
// For gauge shape we shift text up a bit
const valueDy = shape === 'gauge' ? -valueFontSize * 0.3 : 0;
const nameDy = shape === 'gauge' ? -nameFontSize * 0.7 : 0;
// adjust the text up on gauges and when sparklines are present
let yOffset = 0;
if (shape === 'gauge') {
// we render from the center of the gauge, so move up by half of half of the total height
yOffset -= (valueHeight + nameHeight) / 4;
}
if (sparkline) {
yOffset -= 8;
}
return (
<g>
<g transform={`translate(0, ${yOffset})`}>
{showValue && (
<text
x={centerX}
@@ -119,7 +118,6 @@ export function RadialText({
className={styles.text}
textAnchor="middle"
dominantBaseline="middle"
dy={valueDy}
>
<tspan fontSize={unitFontSize}>{displayValue.prefix ?? ''}</tspan>
<tspan>{displayValue.text}</tspan>
@@ -133,7 +131,6 @@ export function RadialText({
fontSize={nameFontSize}
x={centerX}
y={nameY}
dy={nameDy}
textAnchor="middle"
dominantBaseline="middle"
fill={nameColor}

View File

@@ -4,11 +4,12 @@ import { GaugeDimensions } from './utils';
export interface GlowGradientProps {
id: string;
radius: number;
barWidth: number;
}
export function GlowGradient({ id, radius }: GlowGradientProps) {
const glowSize = 0.02 * radius;
export function GlowGradient({ id, barWidth }: GlowGradientProps) {
// 0.75 is the minimum glow size, and it scales with bar width
const glowSize = 0.75 + barWidth * 0.08;
return (
<filter id={id} filterUnits="userSpaceOnUse">
@@ -82,7 +83,7 @@ export function MiddleCircleGlow({ dimensions, gaugeId, color }: CenterGlowProps
<>
<defs>
<radialGradient id={gradientId} r={'50%'} fr={'0%'}>
<stop offset="0%" stopColor={color} stopOpacity={0.2} />
<stop offset="0%" stopColor={color} stopOpacity={0.15} />
<stop offset="90%" stopColor={color} stopOpacity={0} />
</radialGradient>
</defs>

View File

@@ -16,7 +16,7 @@ export interface SparklineProps extends Themeable2 {
sparkline: FieldSparkline;
}
export const Sparkline: React.FC<SparklineProps> = memo((props) => {
const SparklineFn: React.FC<SparklineProps> = memo((props) => {
const { sparkline, config: fieldConfig, theme, width, height } = props;
const { frame: alignedDataFrame, warning } = prepareSeries(sparkline, fieldConfig);
@@ -30,4 +30,14 @@ export const Sparkline: React.FC<SparklineProps> = memo((props) => {
return <UPlotChart data={data} config={configBuilder} width={width} height={height} />;
});
Sparkline.displayName = 'Sparkline';
SparklineFn.displayName = 'Sparkline';
// we converted to function component above, but some apps extend Sparkline, so we need
// to keep exporting a class component until those apps are all rolled out.
// see https://github.com/grafana/app-observability-plugin/pull/2079
// eslint-disable-next-line react-prefer-function-component/react-prefer-function-component
export class Sparkline extends React.PureComponent<SparklineProps> {
render() {
return <SparklineFn {...this.props} />;
}
}

View File

@@ -451,6 +451,19 @@ describe('TableNG', () => {
expect(screen.getByText('A1')).toBeInTheDocument();
expect(screen.getByText('1')).toBeInTheDocument();
});
it('shows full column name in title attribute for truncated headers', () => {
const { container } = render(
<TableNG enableVirtualization={false} data={createBasicDataFrame()} width={800} height={600} />
);
const headers = container.querySelectorAll('[role="columnheader"]');
const firstHeaderSpan = headers[0].querySelector('span');
const secondHeaderSpan = headers[1].querySelector('span');
expect(firstHeaderSpan).toHaveAttribute('title', 'Column A');
expect(secondHeaderSpan).toHaveAttribute('title', 'Column B');
});
});
describe('Footer options', () => {

View File

@@ -105,6 +105,7 @@ export function TableNG(props: TableNGProps) {
const {
cellHeight,
data,
disableKeyboardEvents,
disableSanitizeHtml,
enablePagination = false,
enableSharedCrosshair = false,
@@ -819,9 +820,9 @@ export function TableNG(props: TableNGProps) {
}
}}
onCellKeyDown={
hasNestedFrames
hasNestedFrames || disableKeyboardEvents
? (_, event) => {
if (event.isDefaultPrevented()) {
if (disableKeyboardEvents || event.isDefaultPrevented()) {
// skip parent grid keyboard navigation if nested grid handled it
event.preventGridDefault();
}

View File

@@ -55,7 +55,9 @@ const HeaderCell: React.FC<HeaderCellProps> = ({
{showTypeIcons && (
<Icon className={styles.headerCellIcon} name={getFieldTypeIcon(field)} title={field?.type} size="sm" />
)}
<span className={styles.headerCellLabel}>{getDisplayName(field)}</span>
<span className={styles.headerCellLabel} title={displayName}>
{displayName}
</span>
{direction && (
<Icon
className={cx(styles.headerCellIcon, styles.headerSortIcon)}

View File

@@ -138,6 +138,8 @@ export interface BaseTableProps {
enableVirtualization?: boolean;
// for MarkdownCell, this flag disables sanitization of HTML content. Configured via config.ini.
disableSanitizeHtml?: boolean;
// if true, disables all keyboard events in the table. This is used when previewing a table (i.e. suggestions)
disableKeyboardEvents?: boolean;
}
/* ---------------------------- Table cell props ---------------------------- */

View File

@@ -187,6 +187,15 @@ func (hs *HTTPServer) registerRoutes() {
publicdashboardsapi.CountPublicDashboardRequest(),
hs.Index,
)
r.Get("/bootdata/:accessToken",
reqNoAuth,
hs.PublicDashboardsApi.Middleware.HandleView,
publicdashboardsapi.SetPublicDashboardAccessToken,
publicdashboardsapi.SetPublicDashboardOrgIdOnContext(hs.PublicDashboardsApi.PublicDashboardService),
publicdashboardsapi.CountPublicDashboardRequest(),
hs.GetBootdata,
)
}
r.Get("/explore", authorize(ac.EvalPermission(ac.ActionDatasourcesExplore)), hs.Index)

View File

@@ -111,17 +111,15 @@ func TestGetHomeDashboard(t *testing.T) {
}
}
func newTestLive(t *testing.T, store db.DB) *live.GrafanaLive {
func newTestLive(t *testing.T) *live.GrafanaLive {
features := featuremgmt.WithFeatures()
cfg := setting.NewCfg()
cfg.AppURL = "http://localhost:3000/"
gLive, err := live.ProvideService(nil, cfg,
routing.NewRouteRegister(),
nil, nil, nil, nil,
store,
nil,
&usagestats.UsageStatsMock{T: t},
nil,
features, acimpl.ProvideAccessControl(features),
&dashboards.FakeDashboardService{},
nil, nil)
@@ -751,7 +749,7 @@ func TestIntegrationDashboardAPIEndpoint(t *testing.T) {
hs := HTTPServer{
Cfg: cfg,
ProvisioningService: provisioning.NewProvisioningServiceMock(context.Background()),
Live: newTestLive(t, db.InitTestDB(t)),
Live: newTestLive(t),
QuotaService: quotatest.New(false, nil),
LibraryElementService: &libraryelementsfake.LibraryElementService{},
DashboardService: dashboardService,
@@ -1003,7 +1001,7 @@ func postDashboardScenario(t *testing.T, desc string, url string, routePattern s
hs := HTTPServer{
Cfg: cfg,
ProvisioningService: provisioning.NewProvisioningServiceMock(context.Background()),
Live: newTestLive(t, db.InitTestDB(t)),
Live: newTestLive(t),
QuotaService: quotatest.New(false, nil),
pluginStore: &pluginstore.FakePluginStore{},
LibraryElementService: &libraryelementsfake.LibraryElementService{},
@@ -1043,7 +1041,7 @@ func restoreDashboardVersionScenario(t *testing.T, desc string, url string, rout
hs := HTTPServer{
Cfg: cfg,
ProvisioningService: provisioning.NewProvisioningServiceMock(context.Background()),
Live: newTestLive(t, db.InitTestDB(t)),
Live: newTestLive(t),
QuotaService: quotatest.New(false, nil),
LibraryElementService: &libraryelementsfake.LibraryElementService{},
DashboardService: mock,

View File

@@ -343,7 +343,7 @@ func TestUpdateDataSourceByID_DataSourceNameExists(t *testing.T) {
Cfg: setting.NewCfg(),
AccessControl: acimpl.ProvideAccessControl(featuremgmt.WithFeatures()),
accesscontrolService: actest.FakeService{},
Live: newTestLive(t, nil),
Live: newTestLive(t),
}
sc := setupScenarioContext(t, "/api/datasources/1")
@@ -450,7 +450,7 @@ func TestAPI_datasources_AccessControl(t *testing.T) {
hs.Cfg = setting.NewCfg()
hs.DataSourcesService = &dataSourcesServiceMock{expectedDatasource: &datasources.DataSource{}}
hs.accesscontrolService = actest.FakeService{}
hs.Live = newTestLive(t, hs.SQLStore)
hs.Live = newTestLive(t)
hs.promRegister, hs.dsConfigHandlerRequestsDuration = setupDsConfigHandlerMetrics()
})

View File

@@ -1,11 +0,0 @@
package dtos
import "encoding/json"
type LivePublishCmd struct {
Channel string `json:"channel"`
Data json.RawMessage `json:"data,omitempty"`
}
type LivePublishResponse struct {
}

View File

@@ -0,0 +1,29 @@
package auditing
import (
auditinternal "k8s.io/apiserver/pkg/apis/audit"
"k8s.io/apiserver/pkg/audit"
"k8s.io/apiserver/pkg/authorization/authorizer"
)
// NoopBackend is a no-op implementation of audit.Backend
type NoopBackend struct{}
func ProvideNoopBackend() audit.Backend { return &NoopBackend{} }
func (b *NoopBackend) ProcessEvents(k8sEvents ...*auditinternal.Event) bool { return false }
func (NoopBackend) Run(stopCh <-chan struct{}) error { return nil }
func (NoopBackend) Shutdown() {}
func (NoopBackend) String() string { return "" }
// NoopPolicyRuleEvaluator is a no-op implementation of audit.PolicyRuleEvaluator
type NoopPolicyRuleEvaluator struct{}
func ProvideNoopPolicyRuleEvaluator() audit.PolicyRuleEvaluator { return &NoopPolicyRuleEvaluator{} }
func (NoopPolicyRuleEvaluator) EvaluatePolicyRule(authorizer.Attributes) audit.RequestAuditConfig {
return audit.RequestAuditConfig{Level: auditinternal.LevelNone}
}

View File

@@ -11,15 +11,16 @@ import (
"os/signal"
"syscall"
"github.com/prometheus/client_golang/prometheus"
"k8s.io/client-go/rest"
"k8s.io/client-go/transport"
"github.com/grafana/grafana-app-sdk/logging"
"github.com/grafana/grafana-app-sdk/operator"
folder "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
"github.com/grafana/grafana/apps/iam/pkg/app"
"github.com/grafana/grafana/pkg/server"
"github.com/grafana/grafana/pkg/setting"
"github.com/prometheus/client_golang/prometheus"
"k8s.io/client-go/rest"
"k8s.io/client-go/transport"
"github.com/grafana/authlib/authn"
utilnet "k8s.io/apimachinery/pkg/util/net"
@@ -95,7 +96,7 @@ func buildIAMConfigFromSettings(cfg *setting.Cfg, registerer prometheus.Register
if zanzanaURL == "" {
return nil, fmt.Errorf("zanzana_url is required in [operator] section")
}
iamCfg.AppConfig.ZanzanaClientCfg.URL = zanzanaURL
iamCfg.AppConfig.ZanzanaClientCfg.Addr = zanzanaURL
iamCfg.AppConfig.InformerConfig.MaxConcurrentWorkers = operatorSec.Key("max_concurrent_workers").MustUint64(20)

View File

@@ -61,20 +61,24 @@ func (s *legacyStorage) List(ctx context.Context, options *internalversion.ListO
}
func (s *legacyStorage) Get(ctx context.Context, name string, options *metav1.GetOptions) (runtime.Object, error) {
start := time.Now()
defer func() {
metricutil.ObserveWithExemplar(ctx, s.dsConfigHandlerRequestsDuration.WithLabelValues("new", "Get"), time.Since(start).Seconds())
}()
if s.dsConfigHandlerRequestsDuration != nil {
start := time.Now()
defer func() {
metricutil.ObserveWithExemplar(ctx, s.dsConfigHandlerRequestsDuration.WithLabelValues("new", "Get"), time.Since(start).Seconds())
}()
}
return s.datasources.GetDataSource(ctx, name)
}
// Create implements rest.Creater.
func (s *legacyStorage) Create(ctx context.Context, obj runtime.Object, createValidation rest.ValidateObjectFunc, options *metav1.CreateOptions) (runtime.Object, error) {
start := time.Now()
defer func() {
metricutil.ObserveWithExemplar(ctx, s.dsConfigHandlerRequestsDuration.WithLabelValues("new", "Create"), time.Since(start).Seconds())
}()
if s.dsConfigHandlerRequestsDuration != nil {
start := time.Now()
defer func() {
metricutil.ObserveWithExemplar(ctx, s.dsConfigHandlerRequestsDuration.WithLabelValues("new", "Create"), time.Since(start).Seconds())
}()
}
ds, ok := obj.(*v0alpha1.DataSource)
if !ok {
@@ -85,10 +89,12 @@ func (s *legacyStorage) Create(ctx context.Context, obj runtime.Object, createVa
// Update implements rest.Updater.
func (s *legacyStorage) Update(ctx context.Context, name string, objInfo rest.UpdatedObjectInfo, createValidation rest.ValidateObjectFunc, updateValidation rest.ValidateObjectUpdateFunc, forceAllowCreate bool, options *metav1.UpdateOptions) (runtime.Object, bool, error) {
start := time.Now()
defer func() {
metricutil.ObserveWithExemplar(ctx, s.dsConfigHandlerRequestsDuration.WithLabelValues("new", "Create"), time.Since(start).Seconds())
}()
if s.dsConfigHandlerRequestsDuration != nil {
start := time.Now()
defer func() {
metricutil.ObserveWithExemplar(ctx, s.dsConfigHandlerRequestsDuration.WithLabelValues("new", "Create"), time.Since(start).Seconds())
}()
}
old, err := s.Get(ctx, name, &metav1.GetOptions{})
if err != nil {
@@ -126,10 +132,12 @@ func (s *legacyStorage) Update(ctx context.Context, name string, objInfo rest.Up
// Delete implements rest.GracefulDeleter.
func (s *legacyStorage) Delete(ctx context.Context, name string, deleteValidation rest.ValidateObjectFunc, options *metav1.DeleteOptions) (runtime.Object, bool, error) {
start := time.Now()
defer func() {
metricutil.ObserveWithExemplar(ctx, s.dsConfigHandlerRequestsDuration.WithLabelValues("new", "Create"), time.Since(start).Seconds())
}()
if s.dsConfigHandlerRequestsDuration != nil {
start := time.Now()
defer func() {
metricutil.ObserveWithExemplar(ctx, s.dsConfigHandlerRequestsDuration.WithLabelValues("new", "Create"), time.Since(start).Seconds())
}()
}
err := s.datasources.DeleteDataSource(ctx, name)
return nil, false, err

View File

@@ -3,6 +3,7 @@ package datasource
import (
"context"
"encoding/json"
"errors"
"fmt"
"maps"
@@ -38,14 +39,14 @@ var (
// DataSourceAPIBuilder is used just so wire has something unique to return
type DataSourceAPIBuilder struct {
datasourceResourceInfo utils.ResourceInfo
pluginJSON plugins.JSONData
client PluginClient // will only ever be called with the same plugin id!
datasources PluginDatasourceProvider
contextProvider PluginContextWrapper
accessControl accesscontrol.AccessControl
queryTypes *queryV0.QueryTypeDefinitionList
configCrudUseNewApis bool
pluginJSON plugins.JSONData
client PluginClient // will only ever be called with the same plugin id!
datasources PluginDatasourceProvider
contextProvider PluginContextWrapper
accessControl accesscontrol.AccessControl
queryTypes *queryV0.QueryTypeDefinitionList
configCrudUseNewApis bool
dataSourceCRUDMetric *prometheus.HistogramVec
}
func RegisterAPIService(
@@ -66,6 +67,16 @@ func RegisterAPIService(
var err error
var builder *DataSourceAPIBuilder
dataSourceCRUDMetric := metricutil.NewHistogramVec(prometheus.HistogramOpts{
Namespace: "grafana",
Name: "ds_config_handler_requests_duration_seconds",
Help: "Duration of requests handled by datasource configuration handlers",
}, []string{"code_path", "handler"})
regErr := reg.Register(dataSourceCRUDMetric)
if regErr != nil && !errors.As(regErr, &prometheus.AlreadyRegisteredError{}) {
return nil, regErr
}
pluginJSONs, err := getDatasourcePlugins(pluginSources)
if err != nil {
return nil, fmt.Errorf("error getting list of datasource plugins: %s", err)
@@ -91,6 +102,7 @@ func RegisterAPIService(
if err != nil {
return nil, err
}
builder.SetDataSourceCRUDMetrics(dataSourceCRUDMetric)
apiRegistrar.RegisterAPI(builder)
}
@@ -161,6 +173,10 @@ func (b *DataSourceAPIBuilder) GetGroupVersion() schema.GroupVersion {
return b.datasourceResourceInfo.GroupVersion()
}
func (b *DataSourceAPIBuilder) SetDataSourceCRUDMetrics(datasourceCRUDMetric *prometheus.HistogramVec) {
b.dataSourceCRUDMetric = datasourceCRUDMetric
}
func addKnownTypes(scheme *runtime.Scheme, gv schema.GroupVersion) {
scheme.AddKnownTypes(gv,
&datasourceV0.DataSource{},
@@ -218,13 +234,9 @@ func (b *DataSourceAPIBuilder) UpdateAPIGroupInfo(apiGroupInfo *genericapiserver
if b.configCrudUseNewApis {
legacyStore := &legacyStorage{
datasources: b.datasources,
resourceInfo: &ds,
dsConfigHandlerRequestsDuration: metricutil.NewHistogramVec(prometheus.HistogramOpts{
Namespace: "grafana",
Name: "ds_config_handler_requests_duration_seconds",
Help: "Duration of requests handled by datasource configuration handlers",
}, []string{"code_path", "handler"}),
datasources: b.datasources,
resourceInfo: &ds,
dsConfigHandlerRequestsDuration: b.dataSourceCRUDMetric,
}
unified, err := grafanaregistry.NewRegistryStore(opts.Scheme, ds, opts.OptsGetter)
if err != nil {

View File
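The hunk above moves histogram construction out of the per-storage path: the metric is registered once during `RegisterAPIService` and injected into each builder via `SetDataSourceCRUDMetrics`, and registration tolerates `prometheus.AlreadyRegisteredError`. A common idiom in that situation is to reuse the collector that won the first registration. A stdlib-only sketch of that idiom (the `RegistryStub`, `AlreadyRegisteredError`, and `Histogram` types below are hypothetical stand-ins modeling `prometheus.Registerer`, not the real library):

```go
package main

import (
	"errors"
	"fmt"
)

// Histogram is a hypothetical stand-in for *prometheus.HistogramVec.
type Histogram struct{ Name string }

// AlreadyRegisteredError mirrors prometheus.AlreadyRegisteredError: it
// carries the collector registered first so callers can reuse it.
type AlreadyRegisteredError struct{ Existing *Histogram }

func (e AlreadyRegisteredError) Error() string {
	return "duplicate metrics collector registration attempted"
}

// RegistryStub models a registerer keyed by metric name.
type RegistryStub struct{ byName map[string]*Histogram }

func (r *RegistryStub) Register(h *Histogram) error {
	if existing, ok := r.byName[h.Name]; ok {
		return AlreadyRegisteredError{Existing: existing}
	}
	r.byName[h.Name] = h
	return nil
}

// registerOrReuse returns the registered collector, reusing the existing
// one when registration reports a duplicate -- the same errors.As shape
// as the check in the diff above.
func registerOrReuse(r *RegistryStub, h *Histogram) (*Histogram, error) {
	if err := r.Register(h); err != nil {
		var are AlreadyRegisteredError
		if errors.As(err, &are) {
			return are.Existing, nil // share the first registration
		}
		return nil, err
	}
	return h, nil
}

func main() {
	reg := &RegistryStub{byName: map[string]*Histogram{}}
	first, _ := registerOrReuse(reg, &Histogram{Name: "ds_config_handler_requests_duration_seconds"})
	second, _ := registerOrReuse(reg, &Histogram{Name: "ds_config_handler_requests_duration_seconds"})
	fmt.Println(first == second) // both callers observe one collector
}
```

This keeps duplicate registrations harmless without double-counting observations, since every code path records against the same underlying collector.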

@@ -22,6 +22,7 @@ type iamAuthorizer struct {
func newIAMAuthorizer(accessClient authlib.AccessClient, legacyAccessClient authlib.AccessClient) authorizer.Authorizer {
resourceAuthorizer := make(map[string]authorizer.Authorizer)
serviceAuthorizer := gfauthorizer.NewServiceAuthorizer()
// Authorizer that allows any authenticated user
// To be used when authorization is handled at the storage layer
allowAuthorizer := authorizer.AuthorizerFunc(func(
@@ -50,8 +51,7 @@ func newIAMAuthorizer(accessClient authlib.AccessClient, legacyAccessClient auth
resourceAuthorizer[iamv0.UserResourceInfo.GetName()] = authorizer
resourceAuthorizer[iamv0.ExternalGroupMappingResourceInfo.GetName()] = authorizer
resourceAuthorizer[iamv0.TeamResourceInfo.GetName()] = authorizer
serviceAuthorizer := gfauthorizer.NewServiceAuthorizer()
resourceAuthorizer["searchUsers"] = serviceAuthorizer
resourceAuthorizer["searchTeams"] = serviceAuthorizer
return &iamAuthorizer{resourceAuthorizer: resourceAuthorizer}

View File

@@ -0,0 +1,164 @@
package authorizer
import (
"context"
"errors"
"fmt"
"net/http"
"sync"
"github.com/grafana/authlib/authn"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
dashboardv1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v1beta1"
folderv1 "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
"github.com/grafana/grafana/apps/provisioning/pkg/auth"
"github.com/grafana/grafana/pkg/apimachinery/utils"
)
var (
ErrNoConfigProvider = errors.New("no config provider for group resource")
ErrNoVersionInfo = errors.New("no version info for group resource")
Versions = map[schema.GroupResource]string{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: folderv1.VERSION,
{Group: dashboardv1.GROUP, Resource: dashboardv1.DASHBOARD_RESOURCE}: dashboardv1.VERSION,
}
)
// ConfigProvider is a function that provides a rest.Config for a given context.
type ConfigProvider func(ctx context.Context) (*rest.Config, error)
// DynamicClientFactory is a function that creates a dynamic.Interface from a rest.Config.
// This can be overridden in tests.
type DynamicClientFactory func(config *rest.Config) (dynamic.Interface, error)
// ParentProviderImpl is a ParentProvider implementation that fetches parent folder information from remote API servers.
type ParentProviderImpl struct {
configProviders map[schema.GroupResource]ConfigProvider
versions map[schema.GroupResource]string
dynamicClientFactory DynamicClientFactory
// Cache of dynamic clients for each group resource
// This is used to avoid creating a new dynamic client for each request
// and to reuse the same client for the same group resource.
clients map[schema.GroupResource]dynamic.Interface
clientsMu sync.Mutex
}
// DialConfig holds the configuration for dialing a remote API server.
type DialConfig struct {
Host string
Insecure bool
CAFile string
Audience string
}
// NewLocalConfigProvider creates a map of ConfigProviders that return the same given config for local API servers.
func NewLocalConfigProvider(
configProvider ConfigProvider,
) map[schema.GroupResource]ConfigProvider {
return map[schema.GroupResource]ConfigProvider{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: configProvider,
{Group: dashboardv1.GROUP, Resource: dashboardv1.DASHBOARD_RESOURCE}: configProvider,
}
}
// NewRemoteConfigProvider creates a map of ConfigProviders for remote API servers based on the given DialConfig.
func NewRemoteConfigProvider(cfg map[schema.GroupResource]DialConfig, exchangeClient authn.TokenExchanger) map[schema.GroupResource]ConfigProvider {
configProviders := make(map[schema.GroupResource]ConfigProvider, len(cfg))
for gr, dialConfig := range cfg {
configProviders[gr] = func(ctx context.Context) (*rest.Config, error) {
return &rest.Config{
Host: dialConfig.Host,
WrapTransport: func(rt http.RoundTripper) http.RoundTripper {
return auth.NewRoundTripper(exchangeClient, rt, dialConfig.Audience)
},
TLSClientConfig: rest.TLSClientConfig{
Insecure: dialConfig.Insecure,
CAFile: dialConfig.CAFile,
},
QPS: 50,
Burst: 100,
}, nil
}
}
return configProviders
}
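Each closure built in the loop above captures `dialConfig`. Since Go 1.22, `range` variables are scoped per iteration, but on earlier toolchains every closure would observe the final iteration's value. Re-declaring the variable inside the loop body makes the per-iteration copy explicit and version-independent; a minimal sketch with illustrative names (not from the diff):

```go
package main

import "fmt"

// buildGetters returns one getter per key. The per-entry value is
// shadowed inside the loop body (v := v) so each closure captures its
// own copy regardless of Go version; on Go >= 1.22 the shadowing is a
// harmless no-op.
func buildGetters(cfg map[string]string) map[string]func() string {
	getters := make(map[string]func() string, len(cfg))
	for key, v := range cfg {
		v := v // explicit per-iteration copy
		getters[key] = func() string { return v }
	}
	return getters
}

func main() {
	g := buildGetters(map[string]string{
		"folders":    "https://folders.local",
		"dashboards": "https://dash.local",
	})
	fmt.Println(g["folders"]())    // https://folders.local
	fmt.Println(g["dashboards"]()) // https://dash.local
}
```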
// NewApiParentProvider creates a new ParentProviderImpl with the given config providers and version info.
func NewApiParentProvider(
configProviders map[schema.GroupResource]ConfigProvider,
version map[schema.GroupResource]string,
) *ParentProviderImpl {
return &ParentProviderImpl{
configProviders: configProviders,
versions: version,
dynamicClientFactory: func(config *rest.Config) (dynamic.Interface, error) {
return dynamic.NewForConfig(config)
},
clients: make(map[schema.GroupResource]dynamic.Interface),
}
}
func (p *ParentProviderImpl) HasParent(gr schema.GroupResource) bool {
_, ok := p.configProviders[gr]
return ok
}
func (p *ParentProviderImpl) getClient(ctx context.Context, gr schema.GroupResource) (dynamic.Interface, error) {
p.clientsMu.Lock()
client, ok := p.clients[gr]
p.clientsMu.Unlock()
if ok {
return client, nil
}
provider, ok := p.configProviders[gr]
if !ok {
return nil, fmt.Errorf("%w: %s", ErrNoConfigProvider, gr.String())
}
restConfig, err := provider(ctx)
if err != nil {
return nil, err
}
client, err = p.dynamicClientFactory(restConfig)
if err != nil {
return nil, err
}
p.clientsMu.Lock()
p.clients[gr] = client
p.clientsMu.Unlock()
return client, nil
}
func (p *ParentProviderImpl) GetParent(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error) {
client, err := p.getClient(ctx, gr)
if err != nil {
return "", err
}
version, ok := p.versions[gr]
if !ok {
return "", fmt.Errorf("%w: %s", ErrNoVersionInfo, gr.String())
}
resourceClient := client.Resource(schema.GroupVersionResource{
Group: gr.Group,
Resource: gr.Resource,
Version: version,
}).Namespace(namespace)
unstructObj, err := resourceClient.Get(ctx, name, metav1.GetOptions{})
if err != nil {
return "", err
}
return unstructObj.GetAnnotations()[utils.AnnoKeyFolder], nil
}
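The `getClient` cache above takes `clientsMu` twice, so two concurrent first requests for the same group resource may each build a client, with the second store overwriting the first; that is benign here because the clients are stateless and interchangeable. The pattern can be sketched with a plain map-plus-mutex cache (the types below are simplified stand-ins, not the real `dynamic.Interface`; the sketch also adds a re-check under the second lock so the first built client wins):

```go
package main

import (
	"fmt"
	"sync"
)

type fakeClient struct{ key string }

// clientCache memoizes one client per key, mirroring the
// ParentProviderImpl.clients map guarded by clientsMu.
type clientCache struct {
	mu      sync.Mutex
	clients map[string]*fakeClient
	factory func(key string) *fakeClient
	builds  int // counts factory invocations, for illustration
}

func (c *clientCache) get(key string) *fakeClient {
	c.mu.Lock()
	if cl, ok := c.clients[key]; ok {
		c.mu.Unlock()
		return cl
	}
	c.mu.Unlock()

	// Built outside the lock, like the rest.Config + dynamic client
	// setup in the diff: slow work must not block other group resources.
	cl := c.factory(key)

	c.mu.Lock()
	c.builds++
	if existing, ok := c.clients[key]; ok {
		cl = existing // another goroutine won the race; reuse its client
	} else {
		c.clients[key] = cl
	}
	c.mu.Unlock()
	return cl
}

func main() {
	cache := &clientCache{
		clients: map[string]*fakeClient{},
		factory: func(key string) *fakeClient { return &fakeClient{key: key} },
	}
	a := cache.get("folder.grafana.app/folders")
	b := cache.get("folder.grafana.app/folders")
	fmt.Println(a == b) // second call hits the cache
}
```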

View File

@@ -0,0 +1,198 @@
package authorizer
import (
"context"
"errors"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
folderv1 "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
"github.com/grafana/grafana/pkg/apimachinery/utils"
)
var configProvider = func(ctx context.Context) (*rest.Config, error) {
return &rest.Config{}, nil
}
func TestParentProviderImpl_GetParent(t *testing.T) {
tests := []struct {
name string
gr schema.GroupResource
namespace string
resourceName string
parentFolder string
setupFake func(*fakeDynamicClient, *fakeResourceInterface)
configProviders map[schema.GroupResource]ConfigProvider
versions map[schema.GroupResource]string
expectedError string
expectedParent string
}{
{
name: "successfully get parent folder",
gr: schema.GroupResource{Group: folderv1.GROUP, Resource: folderv1.RESOURCE},
namespace: "org-1",
resourceName: "dash1",
parentFolder: "fold1",
setupFake: func(fakeClient *fakeDynamicClient, fakeResource *fakeResourceInterface) {
fakeClient.resourceInterface = fakeResource
fakeResource.getFunc = func(ctx context.Context, name string, opts metav1.GetOptions, subresources ...string) (*unstructured.Unstructured, error) {
obj := &unstructured.Unstructured{}
obj.SetAnnotations(map[string]string{utils.AnnoKeyFolder: "fold1"})
return obj, nil
}
},
configProviders: map[schema.GroupResource]ConfigProvider{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: configProvider,
},
versions: Versions,
expectedParent: "fold1",
},
{
name: "resource without parent annotation returns empty",
gr: schema.GroupResource{Group: folderv1.GROUP, Resource: folderv1.RESOURCE},
namespace: "org-1",
resourceName: "dash1",
setupFake: func(fakeClient *fakeDynamicClient, fakeResource *fakeResourceInterface) {
fakeClient.resourceInterface = fakeResource
fakeResource.getFunc = func(ctx context.Context, name string, opts metav1.GetOptions, subresources ...string) (*unstructured.Unstructured, error) {
obj := &unstructured.Unstructured{}
obj.SetAnnotations(map[string]string{})
return obj, nil
}
},
configProviders: map[schema.GroupResource]ConfigProvider{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: configProvider,
},
versions: Versions,
expectedParent: "",
},
{
name: "no config provider returns error",
gr: schema.GroupResource{Group: "unknown.group", Resource: "unknown"},
namespace: "org-1",
resourceName: "resource-1",
configProviders: map[schema.GroupResource]ConfigProvider{},
versions: Versions,
expectedError: ErrNoConfigProvider.Error(),
},
{
name: "config provider returns error",
gr: schema.GroupResource{Group: folderv1.GROUP, Resource: folderv1.RESOURCE},
namespace: "org-1",
resourceName: "resource-1",
configProviders: map[schema.GroupResource]ConfigProvider{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: func(ctx context.Context) (*rest.Config, error) {
return nil, errors.New("config provider error")
},
},
versions: Versions,
expectedError: "config provider error",
},
{
name: "no version info returns error",
gr: schema.GroupResource{Group: folderv1.GROUP, Resource: folderv1.RESOURCE},
namespace: "org-1",
resourceName: "resource-1",
configProviders: map[schema.GroupResource]ConfigProvider{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: func(ctx context.Context) (*rest.Config, error) {
return &rest.Config{}, nil
},
},
versions: map[schema.GroupResource]string{},
expectedError: ErrNoVersionInfo.Error(),
},
{
name: "resource get returns error",
gr: schema.GroupResource{Group: folderv1.GROUP, Resource: folderv1.RESOURCE},
namespace: "org-1",
resourceName: "resource-1",
setupFake: func(fakeClient *fakeDynamicClient, fakeResource *fakeResourceInterface) {
fakeClient.resourceInterface = fakeResource
fakeResource.getFunc = func(ctx context.Context, name string, opts metav1.GetOptions, subresources ...string) (*unstructured.Unstructured, error) {
return nil, errors.New("resource not found")
}
},
configProviders: map[schema.GroupResource]ConfigProvider{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: func(ctx context.Context) (*rest.Config, error) {
return &rest.Config{}, nil
},
},
versions: Versions,
expectedError: "resource not found",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
fakeClient := &fakeDynamicClient{}
fakeResource := &fakeResourceInterface{}
if tt.setupFake != nil {
tt.setupFake(fakeClient, fakeResource)
}
provider := &ParentProviderImpl{
configProviders: tt.configProviders,
versions: tt.versions,
dynamicClientFactory: func(config *rest.Config) (dynamic.Interface, error) {
return fakeClient, nil
},
clients: make(map[schema.GroupResource]dynamic.Interface),
}
parent, err := provider.GetParent(context.Background(), tt.gr, tt.namespace, tt.resourceName)
if tt.expectedError != "" {
require.Error(t, err)
assert.Contains(t, err.Error(), tt.expectedError)
assert.Empty(t, parent)
} else {
require.NoError(t, err)
assert.Equal(t, tt.expectedParent, parent)
}
})
}
}
// fakeDynamicClient is a fake implementation of dynamic.Interface
type fakeDynamicClient struct {
resourceInterface dynamic.ResourceInterface
}
func (f *fakeDynamicClient) Resource(resource schema.GroupVersionResource) dynamic.NamespaceableResourceInterface {
return &fakeNamespaceableResourceInterface{
resourceInterface: f.resourceInterface,
}
}
// fakeNamespaceableResourceInterface is a fake implementation of dynamic.NamespaceableResourceInterface
type fakeNamespaceableResourceInterface struct {
dynamic.NamespaceableResourceInterface
resourceInterface dynamic.ResourceInterface
}
func (f *fakeNamespaceableResourceInterface) Namespace(namespace string) dynamic.ResourceInterface {
if f.resourceInterface != nil {
return f.resourceInterface
}
return &fakeResourceInterface{}
}
// fakeResourceInterface is a fake implementation of dynamic.ResourceInterface
type fakeResourceInterface struct {
dynamic.ResourceInterface
getFunc func(ctx context.Context, name string, opts metav1.GetOptions, subresources ...string) (*unstructured.Unstructured, error)
}
func (f *fakeResourceInterface) Get(ctx context.Context, name string, opts metav1.GetOptions, subresources ...string) (*unstructured.Unstructured, error) {
if f.getFunc != nil {
return f.getFunc(ctx, name, opts, subresources...)
}
return &unstructured.Unstructured{}, nil
}

View File

@@ -10,24 +10,44 @@ import (
iamv0 "github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/services/apiserver/auth/authorizer/storewrapper"
)
// TODO: Logs, Metrics, Traces?
// ParentProvider is the interface for fetching parent information of resources.
type ParentProvider interface {
// HasParent checks if the given GroupResource has a parent folder
HasParent(gr schema.GroupResource) bool
// GetParent fetches the parent folder name for the given resource
GetParent(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error)
}
// ResourcePermissionsAuthorizer authorizes access to ResourcePermission objects at the storage layer.
type ResourcePermissionsAuthorizer struct {
accessClient types.AccessClient
accessClient types.AccessClient
parentProvider ParentProvider
logger log.Logger
}
var _ storewrapper.ResourceStorageAuthorizer = (*ResourcePermissionsAuthorizer)(nil)
func NewResourcePermissionsAuthorizer(accessClient types.AccessClient) *ResourcePermissionsAuthorizer {
func NewResourcePermissionsAuthorizer(
accessClient types.AccessClient,
parentProvider ParentProvider,
) *ResourcePermissionsAuthorizer {
return &ResourcePermissionsAuthorizer{
accessClient: accessClient,
accessClient: accessClient,
parentProvider: parentProvider,
logger: log.New("iam.resource-permissions-authorizer"),
}
}
func isAccessPolicy(authInfo types.AuthInfo) bool {
return types.IsIdentityType(authInfo.GetIdentityType(), types.TypeAccessPolicy)
}
// AfterGet implements ResourceStorageAuthorizer.
func (r *ResourcePermissionsAuthorizer) AfterGet(ctx context.Context, obj runtime.Object) error {
authInfo, ok := types.AuthInfoFrom(ctx)
@@ -37,9 +57,24 @@ func (r *ResourcePermissionsAuthorizer) AfterGet(ctx context.Context, obj runtim
switch o := obj.(type) {
case *iamv0.ResourcePermission:
target := o.Spec.Resource
targetGR := schema.GroupResource{Group: target.ApiGroup, Resource: target.Resource}
// TODO: Fetch the resource to retrieve its parent folder.
parent := ""
// Fetch the parent of the resource
// Access Policies have global scope, so no parent check needed
if !isAccessPolicy(authInfo) && r.parentProvider.HasParent(targetGR) {
p, err := r.parentProvider.GetParent(ctx, targetGR, o.Namespace, target.Name)
if err != nil {
r.logger.Error("after get: error fetching parent", "error", err.Error(),
"namespace", o.Namespace,
"group", target.ApiGroup,
"resource", target.Resource,
"name", target.Name,
)
return err
}
parent = p
}
checkReq := types.CheckRequest{
Namespace: o.Namespace,
@@ -72,9 +107,24 @@ func (r *ResourcePermissionsAuthorizer) beforeWrite(ctx context.Context, obj run
switch o := obj.(type) {
case *iamv0.ResourcePermission:
target := o.Spec.Resource
targetGR := schema.GroupResource{Group: target.ApiGroup, Resource: target.Resource}
// TODO: Fetch the resource to retrieve its parent folder.
parent := ""
// Fetch the parent of the resource
// Access Policies have global scope, so no parent check needed
if !isAccessPolicy(authInfo) && r.parentProvider.HasParent(targetGR) {
p, err := r.parentProvider.GetParent(ctx, targetGR, o.Namespace, target.Name)
if err != nil {
r.logger.Error("before write: error fetching parent", "error", err.Error(),
"namespace", o.Namespace,
"group", target.ApiGroup,
"resource", target.Resource,
"name", target.Name,
)
return err
}
parent = p
}
checkReq := types.CheckRequest{
Namespace: o.Namespace,
@@ -153,8 +203,29 @@ func (r *ResourcePermissionsAuthorizer) FilterList(ctx context.Context, list run
canViewFuncs[gr] = canView
}
// TODO : Fetch the resource to retrieve its parent folder.
target := item.Spec.Resource
targetGR := schema.GroupResource{Group: target.ApiGroup, Resource: target.Resource}
parent := ""
// Fetch the parent of the resource
// It's not efficient to do this for every item in the list, but it's a good starting point.
// Access Policies have global scope, so no parent check needed
if !isAccessPolicy(authInfo) && r.parentProvider.HasParent(targetGR) {
p, err := r.parentProvider.GetParent(ctx, targetGR, item.Namespace, target.Name)
if err != nil {
// Skip item on error fetching parent
r.logger.Warn("filter list: error fetching parent, skipping item",
"error", err.Error(),
"namespace", item.Namespace,
"group", target.ApiGroup,
"resource", target.Resource,
"name", target.Name,
)
continue
}
parent = p
}
allowed := canView(item.Spec.Resource.Name, parent)
if allowed {

View File
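The `FilterList` comment above acknowledges that fetching the parent for every list item is a known inefficiency. One low-risk mitigation is memoizing `GetParent` results for the duration of a single list call, so repeated targets trigger one remote fetch. A sketch with simplified types (`parentKey` and `memoizedParents` are hypothetical helpers, not part of the diff; lookup errors are deliberately not cached):

```go
package main

import "fmt"

// parentKey identifies one resource whose parent folder was resolved.
type parentKey struct{ group, resource, namespace, name string }

// memoizedParents wraps a parent-lookup function with a per-call cache,
// so repeated items in one list trigger a single underlying fetch.
func memoizedParents(lookup func(parentKey) (string, error)) func(parentKey) (string, error) {
	cache := map[parentKey]string{}
	return func(k parentKey) (string, error) {
		if p, ok := cache[k]; ok {
			return p, nil
		}
		p, err := lookup(k)
		if err != nil {
			return "", err // errors are not cached; the next call retries
		}
		cache[k] = p
		return p, nil
	}
}

func main() {
	calls := 0
	lookup := memoizedParents(func(k parentKey) (string, error) {
		calls++ // counts remote fetches, for illustration
		return "fold-1", nil
	})
	k := parentKey{"dashboard.grafana.app", "dashboards", "org-1", "dash-2"}
	p1, _ := lookup(k)
	p2, _ := lookup(k)
	fmt.Println(p1, p2, calls) // fold-1 fold-1 1
}
```

Building the memoized lookup inside `FilterList` keeps the cache scoped to one request, so stale parents never outlive the list call that created them.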

@@ -5,13 +5,15 @@ import (
"testing"
"github.com/go-jose/go-jose/v4/jwt"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/grafana/authlib/authn"
"github.com/grafana/authlib/types"
iamv0 "github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
var (
@@ -63,6 +65,7 @@ func TestResourcePermissions_AfterGet(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
parent := "fold-1"
checkFunc := func(id types.AuthInfo, req *types.CheckRequest, folder string) (types.CheckResponse, error) {
require.NotNil(t, id)
// Check is called with the user's identity
@@ -74,12 +77,18 @@ func TestResourcePermissions_AfterGet(t *testing.T) {
require.Equal(t, fold1.Spec.Resource.Resource, req.Resource)
require.Equal(t, fold1.Spec.Resource.Name, req.Name)
require.Equal(t, utils.VerbGetPermissions, req.Verb)
require.Equal(t, parent, folder)
return types.CheckResponse{Allowed: tt.shouldAllow}, nil
}
getParentFunc := func(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error) {
// For this test, we can return a fixed parent folder ID
return parent, nil
}
accessClient := &fakeAccessClient{checkFunc: checkFunc}
resPermAuthz := NewResourcePermissionsAuthorizer(accessClient)
fakeParentProvider := &fakeParentProvider{hasParent: true, getParentFunc: getParentFunc}
resPermAuthz := NewResourcePermissionsAuthorizer(accessClient, fakeParentProvider)
ctx := types.WithAuthInfo(context.Background(), user)
err := resPermAuthz.AfterGet(ctx, fold1)
@@ -89,6 +98,7 @@ func TestResourcePermissions_AfterGet(t *testing.T) {
require.Error(t, err, "expected error for denied access")
}
require.True(t, accessClient.checkCalled, "accessClient.Check should be called")
require.True(t, fakeParentProvider.getParentCalled, "parentProvider.GetParent should be called")
})
}
}
@@ -121,23 +131,32 @@ func TestResourcePermissions_FilterList(t *testing.T) {
require.Equal(t, "dashboards", req.Resource)
}
// Return a checker that allows only specific resources: fold-1 and dash-2
// Return a checker that allows access to fold-1 and its content
return func(name, folder string) bool {
if name == "fold-1" || name == "dash-2" {
if name == "fold-1" || folder == "fold-1" {
return true
}
return false
}, &types.NoopZookie{}, nil
}
getParentFunc := func(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error) {
if name == "dash-2" {
return "fold-1", nil
}
return "", nil
}
accessClient := &fakeAccessClient{compileFunc: compileFunc}
resPermAuthz := NewResourcePermissionsAuthorizer(accessClient)
fakeParentProvider := &fakeParentProvider{hasParent: true, getParentFunc: getParentFunc}
resPermAuthz := NewResourcePermissionsAuthorizer(accessClient, fakeParentProvider)
ctx := types.WithAuthInfo(context.Background(), user)
obj, err := resPermAuthz.FilterList(ctx, list)
require.NoError(t, err)
require.NotNil(t, list)
require.True(t, accessClient.compileCalled, "accessClient.Compile should be called")
require.True(t, fakeParentProvider.getParentCalled, "parentProvider.GetParent should be called")
filtered, ok := obj.(*iamv0.ResourcePermissionList)
require.True(t, ok, "response should be of type ResourcePermissionList")
@@ -165,6 +184,7 @@ func TestResourcePermissions_beforeWrite(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
parent := "fold-1"
checkFunc := func(id types.AuthInfo, req *types.CheckRequest, folder string) (types.CheckResponse, error) {
require.NotNil(t, id)
// Check is called with the user's identity
@@ -176,12 +196,18 @@ func TestResourcePermissions_beforeWrite(t *testing.T) {
require.Equal(t, fold1.Spec.Resource.Resource, req.Resource)
require.Equal(t, fold1.Spec.Resource.Name, req.Name)
require.Equal(t, utils.VerbSetPermissions, req.Verb)
require.Equal(t, parent, folder)
return types.CheckResponse{Allowed: tt.shouldAllow}, nil
}
getParentFunc := func(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error) {
return parent, nil
}
accessClient := &fakeAccessClient{checkFunc: checkFunc}
resPermAuthz := NewResourcePermissionsAuthorizer(accessClient)
fakeParentProvider := &fakeParentProvider{hasParent: true, getParentFunc: getParentFunc}
resPermAuthz := NewResourcePermissionsAuthorizer(accessClient, fakeParentProvider)
ctx := types.WithAuthInfo(context.Background(), user)
err := resPermAuthz.beforeWrite(ctx, fold1)
@@ -191,6 +217,7 @@ func TestResourcePermissions_beforeWrite(t *testing.T) {
require.Error(t, err, "expected error for denied delete")
}
require.True(t, accessClient.checkCalled, "accessClient.Check should be called")
require.True(t, fakeParentProvider.getParentCalled, "parentProvider.GetParent should be called")
})
}
}
@@ -214,3 +241,18 @@ func (m *fakeAccessClient) Compile(ctx context.Context, id types.AuthInfo, req t
}
var _ types.AccessClient = (*fakeAccessClient)(nil)
type fakeParentProvider struct {
hasParent bool
getParentCalled bool
getParentFunc func(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error)
}
func (f *fakeParentProvider) HasParent(gr schema.GroupResource) bool {
return f.hasParent
}
func (f *fakeParentProvider) GetParent(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error) {
f.getParentCalled = true
return f.getParentFunc(ctx, gr, namespace, name)
}

View File

@@ -7,6 +7,7 @@ import (
"github.com/grafana/authlib/types"
"github.com/grafana/grafana/pkg/infra/log"
iamauthorizer "github.com/grafana/grafana/pkg/registry/apis/iam/authorizer"
"github.com/grafana/grafana/pkg/registry/apis/iam/externalgroupmapping"
"github.com/grafana/grafana/pkg/registry/apis/iam/legacy"
"github.com/grafana/grafana/pkg/registry/apis/iam/serviceaccount"
@@ -60,6 +61,10 @@ type IdentityAccessManagementAPIBuilder struct {
roleBindingsStorage RoleBindingStorageBackend
externalGroupMappingStorage ExternalGroupMappingStorageBackend
// Required for resource permissions authorization
// fetches resources' parent folders
resourceParentProvider iamauthorizer.ParentProvider
// Access Control
authorizer authorizer.Authorizer
// legacyAccessClient is used for the identity apis, we need to migrate to the access client
@@ -77,10 +82,11 @@ type IdentityAccessManagementAPIBuilder struct {
reg prometheus.Registerer
logger log.Logger
dual dualwrite.Service
unified resource.ResourceClient
userSearchClient resourcepb.ResourceIndexClient
teamSearch *TeamSearchHandler
dual dualwrite.Service
unified resource.ResourceClient
userSearchClient resourcepb.ResourceIndexClient
userSearchHandler *user.SearchHandler
teamSearch *TeamSearchHandler
teamGroupsHandler externalgroupmapping.TeamGroupsHandler

Some files were not shown because too many files have changed in this diff.