Compare commits

...

50 Commits

Author SHA1 Message Date
Jayclifford345
f37986e97b prettier 2025-12-12 13:10:11 +00:00
Jayclifford345
29ad717011 make sure component is registered to show side bar 2025-12-12 13:01:49 +00:00
Nathan Marrs
a0c4e8b4f4 Suggested Dashboards: Add missing loaded event tracking for v1 of feature (#115195)
## Summary

Fixes a regression where the `loaded` analytics event was not being tracked for the `BasicProvisionedDashboardsEmptyPage` component, which is the component shown in production when the `suggestedDashboards` feature toggle is disabled (i.e. community dashboards disabled but v1 of the feature enabled).

## Problem

The regression was introduced by https://github.com/grafana/grafana/pull/112808/changes#diff-3a19d2e887a3344cb0bcd2449b570bd50a7d78d1d473f4a3cf623f9fe40f35fc, which added community dashboard support to `SuggestedDashboards` but left the `BasicProvisionedDashboardsEmptyPage` component without the `loaded` event tracking. The component is mounted here: https://github.com/grafana/grafana/pull/112808/changes#diff-fba79ed6f8bfb5f712bdd529155158977a3e081d1d6a5932a5fa90fb57a243e6R82. This caused analytics discrepancies; over the past 7 days (the issue has been present for several weeks, but here is a sample of data from the previous week):

- 106 provisioned dashboard items were clicked
- Only 1 `loaded` event was received (from `SuggestedDashboards` when the feature toggle is enabled)
- The `loaded` events are missing for the production v1 flow (when `suggestedDashboards` feature toggle is off)

## Root Cause

The `BasicProvisionedDashboardsEmptyPage` component (used in v1 flow in production) was never updated with the `loaded` event tracking that was added to `SuggestedDashboards` in PR #113417. Since the `suggestedDashboards` feature toggle is not enabled in production, users were seeing `BasicProvisionedDashboardsEmptyPage` which had no tracking, resulting in missing analytics events.

## Solution

Added the `loaded` event tracking to `BasicProvisionedDashboardsEmptyPage` using the same approach that was previously used (tracking inside the async callback when dashboards are loaded). This ensures consistency with the existing pattern and restores analytics tracking for the production flow.

## Changes

- Added `DashboardLibraryInteractions.loaded()` call in `BasicProvisionedDashboardsEmptyPage` when dashboards are successfully loaded
- Uses the same tracking pattern as the original implementation (tracking inside async callback)
- Matches the event structure used in `SuggestedDashboards` for consistency
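The change above can be sketched as follows. This is a minimal illustration, not the actual component code: `DashboardLibraryInteractions.loaded` and the payload fields (`contentKinds`, `datasourceTypes`, `eventLocation`) come from this PR description, while the surrounding helper names and the stub implementation are assumptions.

```typescript
// Hypothetical sketch: fire the `loaded` event once, inside the async
// callback that loads the dashboards, mirroring SuggestedDashboards.
interface LoadedEvent {
  contentKinds: string[];
  datasourceTypes: string[];
  eventLocation: string;
}

// Stand-in for the real analytics helper; here it only logs.
const DashboardLibraryInteractions = {
  loaded: (event: LoadedEvent) => {
    console.log('loaded', JSON.stringify(event));
  },
};

let tracked = false;

async function loadProvisionedDashboards(
  fetchDashboards: () => Promise<Array<{ kind: string; datasourceType: string }>>
) {
  const dashboards = await fetchDashboards();
  // Track exactly once per load, so no duplicate events are sent.
  if (!tracked) {
    tracked = true;
    DashboardLibraryInteractions.loaded({
      contentKinds: Array.from(new Set(dashboards.map((d) => d.kind))),
      datasourceTypes: Array.from(new Set(dashboards.map((d) => d.datasourceType))),
      eventLocation: 'dashboard_library',
    });
  }
  return dashboards;
}
```

Calling the loader twice would still emit a single `loaded` event, which matches the "tracking only occurs once per load" behavior verified below.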

## Testing

- Verified that `loaded` events are now tracked when `BasicProvisionedDashboardsEmptyPage` loads dashboards
- Confirmed the event includes correct `contentKinds`, `datasourceTypes`, and `eventLocation` values
- No duplicate events are sent (tracking only occurs once per load)

## Related

- Original analytics implementation: #113417
- Related PR: #112808
- Component: [`BasicProvisionedDashboardsEmptyPage.tsx`](https://github.com/grafana/grafana/blob/main/public/app/features/dashboard/dashgrid/DashboardLibrary/BasicProvisionedDashboardsEmptyPage.tsx)
2025-12-12 09:16:55 -03:00
Victor Marin
fa62113b41 Dashboards: Fix custom variable legacy model to return options when flag is set (#115154)
* fix custom var legacy model options property

* add test
2025-12-12 12:12:46 +00:00
Roberto Jiménez Sánchez
b863acab05 Provisioning: Fix race condition causing unhealthy repository message to be lost (#115150)
* Fix race condition causing unhealthy repository message to be lost

This commit fixes a race condition in the provisioning repository controller
where the "Repository is unhealthy" message in the sync status could be lost
due to status updates being based on stale repository objects.

## Problem

The issue occurred in the `process` function when:
1. Repository object was fetched from cache with old status
2. `RefreshHealth` immediately patched the health status to "unhealthy"
3. `determineSyncStatusOps` used the stale object to check if unhealthy
   message was already set
4. A second patch operation based on stale data would overwrite the
   health status update

## Solution

Introduced `RefreshHealthWithPatchOps` method that returns patch operations
instead of immediately applying them. This allows batching all status updates
(health + sync) into a single atomic patch operation, eliminating the race
condition.

## Changes

- Added `HealthCheckerInterface` for better testability
- Added `RefreshHealthWithPatchOps` method to return patch ops without applying
- Updated `process` function to batch health and sync status updates
- Added comprehensive unit tests for the fix

Fixes the issue where unhealthy repositories don't show the "Repository is
unhealthy" message in their sync status.
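The batching idea can be illustrated with a small sketch (the real controller is written in Go; every name and path below is hypothetical): instead of applying the health patch immediately and then computing the sync status from a stale cached object, both sets of patch operations are collected and applied in a single atomic patch, so neither update can overwrite the other.

```typescript
// Illustrative sketch of RefreshHealthWithPatchOps-style batching.
type PatchOp = { op: 'replace' | 'add'; path: string; value: unknown };

interface RepoStatus {
  healthy: boolean;
  syncMessage: string[];
}

// Returns patch ops describing the new health status instead of applying them.
function refreshHealthWithPatchOps(healthy: boolean): { healthy: boolean; ops: PatchOp[] } {
  return { healthy, ops: [{ op: 'replace', path: '/status/health/healthy', value: healthy }] };
}

// Computes sync-status ops from the freshly determined health,
// not from a stale cached repository object.
function determineSyncStatusOps(healthy: boolean): PatchOp[] {
  return healthy
    ? []
    : [{ op: 'replace', path: '/status/sync/message', value: ['Repository is unhealthy'] }];
}

// A single patch applies all collected ops at once.
function applyPatch(status: RepoStatus, ops: PatchOp[]): RepoStatus {
  const next = { ...status };
  for (const op of ops) {
    if (op.path === '/status/health/healthy') next.healthy = op.value as boolean;
    if (op.path === '/status/sync/message') next.syncMessage = op.value as string[];
  }
  return next;
}

const health = refreshHealthWithPatchOps(false);
const allOps = [...health.ops, ...determineSyncStatusOps(health.healthy)];
const updated = applyPatch({ healthy: true, syncMessage: [] }, allOps);
// Because there is only one patch, the unhealthy message cannot be lost
// to a second patch computed from stale data.
```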

* Fix staticcheck lint error: remove unnecessary nil check for slice
2025-12-12 13:24:58 +02:00
Ezequiel Victorero
c7c052480d Chore: Bump grafana/llm 1.0.1 (#115175) 2025-12-12 11:22:37 +00:00
Gabriel MABILLE
478ae15f0e grafana-iam: Use parent folder to authorize ResourcePermissions (#115008)
* `grafana-iam`: Fetch target parent folder

* WIP add different ParentProviders

* Add version

* Move code to a different file

* Instantiate resourceParentProvider

* same import name

* imports

* Add tests

* Remove unnecessary test

* forgot wire

* WIP integration tests

* Add test to cover list

* Fix caching problem in integration tests

* comments

* Logger and comments

* Add lazy creation and caching

* Instantiate clients only once

* Rerun wire gen
2025-12-12 11:43:12 +01:00
Erik Sundell
8ebb1c2bc9 NPM: Remove dist-tag code (#115209)
remove dist-tag
2025-12-12 11:41:57 +01:00
Marc M.
5572ce966a DynamicDashboards: Hide variables from outline in view mode (#115142) 2025-12-12 10:34:47 +00:00
Marc M.
e3510f6eb3 DynamicDashboards: Replace discard changes modal (#114789) 2025-12-12 11:24:53 +01:00
Mihai Doarna
a832e5f222 IAM: Add missing params to team search request (#115208)
add missing params to team search request
2025-12-12 12:13:43 +02:00
Levente Balogh
c5a5482d7d Doc: Add docs for displaying links in the dashboard-controls menu (#115201)
* docs: add docs for displaying links in the dashboard-controls menu

* Update docs/sources/as-code/observability-as-code/schema-v2/links-schema.md

Co-authored-by: Anna Urbiztondo <anna.urbiztondo@grafana.com>

---------

Co-authored-by: Anna Urbiztondo <anna.urbiztondo@grafana.com>
2025-12-12 09:57:35 +00:00
Gareth
169ffc15c6 OpenTSDB: Run suggest queries through the data source backend (#114990)
* OpenTSDB: Run suggest queries through the data source backend

* use mux
2025-12-12 18:36:52 +09:00
Levente Balogh
296fe610ba Docs: Add docs for displaying variables in the dashboard-controls (#115205)
docs: update docs for adding a template variable
2025-12-12 10:34:13 +01:00
grafana-pr-automation[bot]
eceff8d26e I18n: Download translations from Crowdin (#115193)
New Crowdin translations by GitHub Action

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-12 09:30:01 +00:00
Lauren
3cdfe34ec8 Alerting: Fix Alerts page filtering (#115178)
* Alerting: fix filtering on alerts page

* exclude __name__ from label filter dropdown list
2025-12-12 08:16:55 +00:00
Erik Sundell
35c214249f E2E Selectors: Fix comment typo (#115197)
fix typo
2025-12-12 08:59:10 +01:00
Erik Sundell
c3224411c0 NPM: Use env var for OIDC token auth instead of direct npmrc (#115153)
* use env var

* ignore spellcheck
2025-12-12 07:45:04 +01:00
Steve Simpson
b407f0062d Alerting: Add an authorizer to the historian app (#115188)
historian: add an authorizer

Co-authored-by: Charandas Batra <charandas.batra@grafana.com>
2025-12-11 23:34:37 +00:00
Haris Rozajac
0385a7a4a4 Dashboard Import: disable importing V2 dashboards when dashboardNewLayouts is disabled (#114188)
* Disable importing v2 dashboards when dynamic dashboards are disabled

* clean up

* Update error messaging
2025-12-11 15:54:06 -07:00
Jack Baldry
1611489b84 Fix path to generation and source content (#115095)
Signed-off-by: Jack Baldry <jack.baldry@grafana.com>
2025-12-11 21:40:35 +00:00
Eric Hilse
e8039d1c3d fix(topbar): remove minWidth property for better layout handling (#115166) 2025-12-11 13:28:30 -07:00
Andres Torres
652b4f2fab fix(setting): Add default scheme to handle k8s api errors (#115177) 2025-12-11 20:12:25 +00:00
Ezequiel Victorero
c35642b04d Chore: Bump nodemailer with forced resolution (#115172) 2025-12-11 16:40:23 -03:00
Larissa Wandzura
91a72f2572 DOCS: Updates to Elasticsearch data source docs (#115021)
* created new configure folder, rewrote intro page

* updated configure doc

* updated query editor

* updates to template variables

* added troubleshooting doc, fixed heading issues

* fix linter issues

* added alerting doc

* corrected title

* final edits

* fixed linter issue

* added deprecation comment per feedback

* ran prettier
2025-12-11 19:21:33 +00:00
Bogdan Matei
f8027e4d75 Dashboard: Implement modal to confirm layout change (#111093) 2025-12-11 19:17:23 +00:00
Paul Marbach
f5b2dde4a1 Suggestions: Add keyboard support (#114517)
* Suggestions: hashes on suggestions, update logic to select first suggestion

* fix types

* Suggestions: New UI style updates

* update some styles

* getting styles just right

* remove grouping when not on flag

* adjust minimum width for sidebar

* CI cleanups

* updates from ad hoc review

* add loading and error states to suggestions

* remove unused import

* update header ui for panel editor

* restore back button to vizpicker

* fix e2e test

* fix e2e

* add i18n update

* use new util for setVisualization operation

* Apply suggestions from code review

Co-authored-by: Torkel Ödegaard <torkel@grafana.com>

* comments from review

* updates from review

* Suggestions: Add keyboard support

* fix selector for PluginVisualization.item

---------

Co-authored-by: Torkel Ödegaard <torkel@grafana.com>
2025-12-11 14:13:33 -05:00
Misi
0c264b7a5f IAM: Add user search endpoint (#114542)
* wip: initial changes, api registration

* wip

* LegacySearch working with sorting

* Revert mapper change for now

* Clean up

* Cleanup, add integration tests

* Improve tests

* OpenAPI def regen

* Use wildcard search, fix lastSeenAt handling, add lastSeenAtAge

* Add missing files

* Fix merge

* Fixes

* Add tests, regen openapi def

* Address feedback

* Address feedback batch 2

* Chores

* regen openapidef

* Address feedback

* Add tests for paging

* gen apis

* Revert go.mod, go.sum, go.work.sum

* Fix + remove extra tracer parameter
2025-12-11 19:54:48 +01:00
Ashley Harrison
d83b216a32 FS: Fix rendering of public dashboards in MT frontend service (#115162)
* pass publicDashboardAccessToken to ST backend via bootdata

* slightly cleaner

* slightly tidy up go templating

* add HandleView middleware
2025-12-11 17:56:40 +00:00
Anna Urbiztondo
ada14df9fd Add new glossary word (#115070)
* Docs: Add grafanactl term to glossary

* Edit to adapt to Glossary def length

* Fix

* Real fix

* Fix link

---------

Co-authored-by: Jack Baldry <jack.baldry@grafana.com>
2025-12-11 17:05:21 +00:00
Tobias Skarhed
f63c2cb2dd Scopes: Don't use redirect if you're on an active scope navigation (#115149)
* Don't use redirectUrl if we are on an active scope navigation

* Remove superfluous test
2025-12-11 17:42:47 +01:00
Tobias Skarhed
fe4c615b3d Scopes: Sync nested scopes navigation open folders to URL (#114786)
* Sync nav_scope_path with url

* Let the current active scope remain if it is a child of the selected subscope

* Remove location updates based on nav_scope_path to maintain expanded folders

* Fix folder tests

* Remove console logs

* Better mock for changeScopes

* Update test to support the new calls

* Update test with function inputs

* Fix failing test

* Add tests and add isEqual check for fetching new subscopes
2025-12-11 17:34:21 +01:00
grafana-pr-automation[bot]
02d3fd7b31 I18n: Download translations from Crowdin (#115123)
New Crowdin translations by GitHub Action

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-12-11 16:31:02 +00:00
Jesse David Peterson
5dcfc19060 Table: Add title attribute to make truncated headings legible (#115155)
* fix(table): add HTML title attribute to make truncated headings legible

* fix(table): avoid redundant display name calculation

Co-authored-by: Paul Marbach <paul.marbach@grafana.com>

---------

Co-authored-by: Paul Marbach <paul.marbach@grafana.com>
2025-12-11 12:22:10 -04:00
Roberto Jiménez Sánchez
5bda17be3f Provisioning: Update provisioning docs to reflect kubernetesDashboards defaults to true (#115159)
Docs: Update provisioning docs to reflect kubernetesDashboards defaults to true

The kubernetesDashboards feature toggle now defaults to true, so users
don't need to explicitly enable it in their configuration. Updated
documentation and UI to reflect this:

- Removed kubernetesDashboards from configuration examples
- Added notes explaining it's enabled by default
- Clarified that users only need to take action if they've explicitly
  disabled it
- Kept validation checks to catch explicit disables

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-11 17:08:57 +01:00
Usman Ahmad
bc88796e6e Created Troubleshooting guide for MySQL data source plugin (#114737)
* created troubleshooting guide for mysql data source plugin

Signed-off-by: Usman Ahmad <usman.ahmad@grafana.com>

* Apply suggestions from code review

thanks for the code review

Co-authored-by: Christopher Moyer <35463610+chri2547@users.noreply.github.com>

* rename file from _index.md to index.md

Signed-off-by: Usman Ahmad <usman.ahmad@grafana.com>

* Update docs/sources/datasources/mysql/troubleshoot/index.md

---------

Signed-off-by: Usman Ahmad <usman.ahmad@grafana.com>
Co-authored-by: Christopher Moyer <35463610+chri2547@users.noreply.github.com>
2025-12-11 16:42:09 +01:00
Andres Torres
5d7b9c5050 fix(setting): Replacing dynamic client to reduce memory footprint (#115125) 2025-12-11 10:24:01 -05:00
Alexander Akhmetov
73bcfbcc74 Alerting: Collate alert_rule.namespace_uid column as binary (#115152)
Alerting: Collate namespace_uid column as binary
2025-12-11 16:05:13 +01:00
Erik Sundell
4ab198b201 E2E Selectors: Fix package description (#115148)
dummy change
2025-12-11 14:00:54 +00:00
Erik Sundell
0c82f92539 NPM: Attempt to fix e2e-selectors dist-tag after OIDC migration (#115012)
* fetch oidc token from github

* use same approach as electron
2025-12-11 14:35:27 +01:00
Ivana Huckova
73de5f98e1 Assistant: Update origin for analyze-rule-menu-item (#115147)
* Assistant: Update origin for analyze-rule-menu-item

* Update origin, not test id
2025-12-11 13:06:09 +00:00
Oscar Kilhed
b6ba8a0fd4 Dashboards: Make variables selectable in controls menu (#115092)
* Dashboard: Make variables selectable in controls menu and improve spacing

- Add selection support for variables in controls menu (onPointerDown handler and selection classes)
- Add padding to variables and annotations in controls menu (theme.spacing(1))
- Reduce menu container padding from 1.5 to 1
- Remove margins between menu items

* fix: remove unused imports in DashboardControlsMenu
2025-12-11 13:55:03 +01:00
Oscar Kilhed
350c3578c7 Dynamic dashboards: Update variable set state when variable hide property changes (#115094)
fix: update variable set state when variable hide property changes

When changing a variable's positioning to show in the controls menu using the edit side pane, the state of dashboardControls does not immediately update, making it appear to the user that nothing changed.

The issue was that when a variable's hide property changes, only the variable's state was updated, but not the parent SceneVariableSet state. Components that subscribe to the variable set state (like useDashboardControls) didn't detect the change because the variables array reference remained the same.

This fix updates the parent SceneVariableSet state when a variable's hide property changes, ensuring components that subscribe to the variable set will re-render immediately.
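The reference-equality problem described above can be sketched with simplified stand-ins (real Grafana code uses `@grafana/scenes`; these classes and names are illustrative): subscribers of the parent set only re-render when the set publishes new state, so updating a child variable alone is invisible to them.

```typescript
// Minimal model of scene-style state objects with subscriptions.
type Listener<T> = (state: T) => void;

class StateObject<T extends object> {
  private listeners: Listener<T>[] = [];
  constructor(public state: T) {}
  subscribe(fn: Listener<T>) { this.listeners.push(fn); }
  setState(partial: Partial<T>) {
    this.state = { ...this.state, ...partial };
    this.listeners.forEach((fn) => fn(this.state));
  }
}

interface VariableState { name: string; hide: boolean; }
interface VariableSetState { variables: StateObject<VariableState>[]; }

const variable = new StateObject<VariableState>({ name: 'env', hide: true });
const variableSet = new StateObject<VariableSetState>({ variables: [variable] });

let renders = 0;
// Stand-in for a component like useDashboardControls subscribing to the set.
variableSet.subscribe(() => renders++);

// Updating only the variable does NOT notify set subscribers:
variable.setState({ hide: false });
// renders is still 0 at this point.

// The fix: also publish a new variables array reference on the parent set,
// so subscribers detect the change and re-render immediately.
variableSet.setState({ variables: [...variableSet.state.variables] });
```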

Co-authored-by: grafakus <marc.mignonsin@grafana.com>
2025-12-11 13:54:30 +01:00
Andres Martinez Gotor
e6b5ece559 Plugins Preinstall: Fix URL parsing when it includes basic auth (#115143)
Preinstall: Fix URL setting when it includes basic auth
2025-12-11 13:38:02 +01:00
Ryan McKinley
eef14d2cee Dependencies: update glob@npm for dependabot (#115146) 2025-12-11 12:33:34 +00:00
Anna Urbiztondo
c71c0b33ee Docs: Configure Git Sync using CLI (#115068)
* WIP

* WIP

* Edits, Claude

* Prettier

* Update docs/sources/as-code/observability-as-code/provision-resources/git-sync-setup.md

Co-authored-by: Roberto Jiménez Sánchez <roberto.jimenez@grafana.com>

* Update docs/sources/as-code/observability-as-code/provision-resources/git-sync-setup.md

Co-authored-by: Roberto Jiménez Sánchez <roberto.jimenez@grafana.com>

* WIP

* Restructuring

* Minor tweaks

* Fix

* Update docs/sources/as-code/observability-as-code/provision-resources/git-sync-setup.md

Co-authored-by: Roberto Jiménez Sánchez <roberto.jimenez@grafana.com>

* Feedback

* Prettier

* Links

---------

Co-authored-by: Roberto Jiménez Sánchez <roberto.jimenez@grafana.com>
2025-12-11 11:27:36 +00:00
Lauren
d568798c64 Alerting: Improve instance count display (#114997)
* Update button text to Show All if filters are enabled

* Show state in text if filters enabled

* resolve PR comments
2025-12-11 11:01:53 +00:00
Ryan McKinley
9bec62a080 Live: simplify dependencies (#115130) 2025-12-11 13:37:45 +03:00
Roberto Jiménez Sánchez
7fe3214f16 Provisioning: Add fieldSelector regression tests for Repository and Jobs (#115135) 2025-12-11 13:36:01 +03:00
Alexander Zobnin
e2d12f4cce Zanzana: Refactor remote client initialization (#114142)
* Zanzana: Refactor remote client

* rename config field URL to Addr

* Instrument grpc queries

* fix duplicated field
2025-12-11 10:55:12 +01:00
209 changed files with 8698 additions and 2603 deletions

View File

@@ -377,10 +377,10 @@ github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyY
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/centrifugal/centrifuge v0.37.2 h1:rerQNvDfYN2FZEkVtb/hvGV7SIrJfEQrKF3MaE8GDlo=
github.com/centrifugal/centrifuge v0.37.2/go.mod h1:aj4iRJGhzi3SlL8iUtVezxway1Xf8g+hmNQkLLO7sS8=
github.com/centrifugal/protocol v0.16.2 h1:KoIHgDeX1fFxyxQoKW+6E8ZTCf5mwGm8JyGoJ5NBMbQ=
github.com/centrifugal/protocol v0.16.2/go.mod h1:Q7OpS/8HMXDnL7f9DpNx24IhG96MP88WPpVTTCdrokI=
github.com/centrifugal/centrifuge v0.38.0 h1:UJTowwc5lSwnpvd3vbrTseODbU7osSggN67RTrJ8EfQ=
github.com/centrifugal/centrifuge v0.38.0/go.mod h1:rcZLARnO5GXOeE9qG7iIPMvERxESespqkSX4cGLCAzo=
github.com/centrifugal/protocol v0.17.0 h1:hD0WczyiG7zrVJcgkQsd5/nhfFXt0Y04SJHV2Z7B1rg=
github.com/centrifugal/protocol v0.17.0/go.mod h1:9MdiYyjw5Bw1+d5Sp4Y0NK+qiuTNyd88nrHJsUUh8k4=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
@@ -1376,11 +1376,13 @@ github.com/puzpuzpuz/xsync/v2 v2.5.1 h1:mVGYAvzDSu52+zaGyNjC+24Xw2bQi3kTr4QJ6N9p
github.com/puzpuzpuz/xsync/v2 v2.5.1/go.mod h1:gD2H2krq/w52MfPLE+Uy64TzJDVY7lP2znR9qmR35kU=
github.com/puzpuzpuz/xsync/v4 v4.2.0 h1:dlxm77dZj2c3rxq0/XNvvUKISAmovoXF4a4qM6Wvkr0=
github.com/puzpuzpuz/xsync/v4 v4.2.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo=
github.com/quagmt/udecimal v1.9.0 h1:TLuZiFeg0HhS6X8VDa78Y6XTaitZZfh+z5q4SXMzpDQ=
github.com/quagmt/udecimal v1.9.0/go.mod h1:ScmJ/xTGZcEoYiyMMzgDLn79PEJHcMBiJ4NNRT3FirA=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/redis/go-redis/v9 v9.14.0 h1:u4tNCjXOyzfgeLN+vAZaW1xUooqWDqVEsZN0U01jfAE=
github.com/redis/go-redis/v9 v9.14.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
github.com/redis/rueidis v1.0.64 h1:XqgbueDuNV3qFdVdQwAHJl1uNt90zUuAJuzqjH4cw6Y=
github.com/redis/rueidis v1.0.64/go.mod h1:Lkhr2QTgcoYBhxARU7kJRO8SyVlgUuEkcJO1Y8MCluA=
github.com/redis/rueidis v1.0.68 h1:gept0E45JGxVigWb3zoWHvxEc4IOC7kc4V/4XvN8eG8=
github.com/redis/rueidis v1.0.68/go.mod h1:Lkhr2QTgcoYBhxARU7kJRO8SyVlgUuEkcJO1Y8MCluA=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=

View File

@@ -22,13 +22,40 @@ v0alpha1: {
serviceaccountv0alpha1,
externalGroupMappingv0alpha1
]
routes: {
namespaced: {
"/searchUsers": {
"GET": {
request: {
query: {
query?: string
limit?: int64 | 10
offset?: int64 | 0
page?: int64 | 1
}
}
response: {
offset: int64
totalHits: int64
hits: [...#UserHit]
queryCost: float64
maxScore: float64
}
responseMetadata: {
typeMeta: false
objectMeta: false
}
}
}
"/searchTeams": {
"GET": {
request: {
query: {
query?: string
limit?: int64 | 50
offset?: int64 | 0
page?: int64 | 1
}
}
response: {
@@ -51,3 +78,15 @@ v0alpha1: {
}
}
}
#UserHit: {
name: string
title: string
login: string
email: string
role: string
lastSeenAt: int64
lastSeenAtAge: string
provisioned: bool
score: float64
}

View File

@@ -29,6 +29,9 @@ userv0alpha1: userKind & {
// }
schema: {
spec: v0alpha1.UserSpec
status: {
lastSeenAt: int64 | 0
}
}
// TODO: Uncomment when the custom routes implementation is done
// routes: {

View File

@@ -3,7 +3,10 @@
package v0alpha1
type GetSearchTeamsRequestParams struct {
Query *string `json:"query,omitempty"`
Query *string `json:"query,omitempty"`
Limit int64 `json:"limit,omitempty"`
Offset int64 `json:"offset,omitempty"`
Page int64 `json:"page,omitempty"`
}
// NewGetSearchTeamsRequestParams creates a new GetSearchTeamsRequestParams object.

View File

@@ -0,0 +1,33 @@
// Code generated - EDITING IS FUTILE. DO NOT EDIT.
package v0alpha1
import (
"github.com/grafana/grafana-app-sdk/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
)
type GetSearchUsersRequestParamsObject struct {
metav1.TypeMeta `json:",inline"`
GetSearchUsersRequestParams `json:",inline"`
}
func NewGetSearchUsersRequestParamsObject() *GetSearchUsersRequestParamsObject {
return &GetSearchUsersRequestParamsObject{}
}
func (o *GetSearchUsersRequestParamsObject) DeepCopyObject() runtime.Object {
dst := NewGetSearchUsersRequestParamsObject()
o.DeepCopyInto(dst)
return dst
}
func (o *GetSearchUsersRequestParamsObject) DeepCopyInto(dst *GetSearchUsersRequestParamsObject) {
dst.TypeMeta.APIVersion = o.TypeMeta.APIVersion
dst.TypeMeta.Kind = o.TypeMeta.Kind
dstGetSearchUsersRequestParams := GetSearchUsersRequestParams{}
_ = resource.CopyObjectInto(&dstGetSearchUsersRequestParams, &o.GetSearchUsersRequestParams)
}
var _ runtime.Object = NewGetSearchUsersRequestParamsObject()

View File

@@ -0,0 +1,15 @@
// Code generated - EDITING IS FUTILE. DO NOT EDIT.
package v0alpha1
type GetSearchUsersRequestParams struct {
Query *string `json:"query,omitempty"`
Limit int64 `json:"limit,omitempty"`
Offset int64 `json:"offset,omitempty"`
Page int64 `json:"page,omitempty"`
}
// NewGetSearchUsersRequestParams creates a new GetSearchUsersRequestParams object.
func NewGetSearchUsersRequestParams() *GetSearchUsersRequestParams {
return &GetSearchUsersRequestParams{}
}

View File

@@ -0,0 +1,37 @@
// Code generated - EDITING IS FUTILE. DO NOT EDIT.
package v0alpha1
// +k8s:openapi-gen=true
type UserHit struct {
Name string `json:"name"`
Title string `json:"title"`
Login string `json:"login"`
Email string `json:"email"`
Role string `json:"role"`
LastSeenAt int64 `json:"lastSeenAt"`
LastSeenAtAge string `json:"lastSeenAtAge"`
Provisioned bool `json:"provisioned"`
Score float64 `json:"score"`
}
// NewUserHit creates a new UserHit object.
func NewUserHit() *UserHit {
return &UserHit{}
}
// +k8s:openapi-gen=true
type GetSearchUsers struct {
Offset int64 `json:"offset"`
TotalHits int64 `json:"totalHits"`
Hits []UserHit `json:"hits"`
QueryCost float64 `json:"queryCost"`
MaxScore float64 `json:"maxScore"`
}
// NewGetSearchUsers creates a new GetSearchUsers object.
func NewGetSearchUsers() *GetSearchUsers {
return &GetSearchUsers{
Hits: []UserHit{},
}
}

View File

@@ -4,6 +4,7 @@ import (
"context"
"github.com/grafana/grafana-app-sdk/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type UserClient struct {
@@ -75,6 +76,24 @@ func (c *UserClient) Patch(ctx context.Context, identifier resource.Identifier,
return c.client.Patch(ctx, identifier, req, opts)
}
func (c *UserClient) UpdateStatus(ctx context.Context, identifier resource.Identifier, newStatus UserStatus, opts resource.UpdateOptions) (*User, error) {
return c.client.Update(ctx, &User{
TypeMeta: metav1.TypeMeta{
Kind: UserKind().Kind(),
APIVersion: GroupVersion.Identifier(),
},
ObjectMeta: metav1.ObjectMeta{
ResourceVersion: opts.ResourceVersion,
Namespace: identifier.Namespace,
Name: identifier.Name,
},
Status: newStatus,
}, resource.UpdateOptions{
Subresource: "status",
ResourceVersion: opts.ResourceVersion,
})
}
func (c *UserClient) Delete(ctx context.Context, identifier resource.Identifier, opts resource.DeleteOptions) error {
return c.client.Delete(ctx, identifier, opts)
}

View File

@@ -21,11 +21,14 @@ type User struct {
// Spec is the spec of the User
Spec UserSpec `json:"spec" yaml:"spec"`
Status UserStatus `json:"status" yaml:"status"`
}
func NewUser() *User {
return &User{
Spec: *NewUserSpec(),
Spec: *NewUserSpec(),
Status: *NewUserStatus(),
}
}
@@ -43,11 +46,15 @@ func (o *User) SetSpec(spec any) error {
}
func (o *User) GetSubresources() map[string]any {
return map[string]any{}
return map[string]any{
"status": o.Status,
}
}
func (o *User) GetSubresource(name string) (any, bool) {
switch name {
case "status":
return o.Status, true
default:
return nil, false
}
@@ -55,6 +62,13 @@ func (o *User) GetSubresource(name string) (any, bool) {
func (o *User) SetSubresource(name string, value any) error {
switch name {
case "status":
cast, ok := value.(UserStatus)
if !ok {
return fmt.Errorf("cannot set status type %#v, not of type UserStatus", value)
}
o.Status = cast
return nil
default:
return fmt.Errorf("subresource '%s' does not exist", name)
}
@@ -226,6 +240,7 @@ func (o *User) DeepCopyInto(dst *User) {
dst.TypeMeta.Kind = o.TypeMeta.Kind
o.ObjectMeta.DeepCopyInto(&dst.ObjectMeta)
o.Spec.DeepCopyInto(&dst.Spec)
o.Status.DeepCopyInto(&dst.Status)
}
// Interface compliance compile-time check
@@ -297,3 +312,15 @@ func (s *UserSpec) DeepCopy() *UserSpec {
func (s *UserSpec) DeepCopyInto(dst *UserSpec) {
resource.CopyObjectInto(dst, s)
}
// DeepCopy creates a full deep copy of UserStatus
func (s *UserStatus) DeepCopy() *UserStatus {
cpy := &UserStatus{}
s.DeepCopyInto(cpy)
return cpy
}
// DeepCopyInto deep copies UserStatus into another UserStatus object
func (s *UserStatus) DeepCopyInto(dst *UserStatus) {
resource.CopyObjectInto(dst, s)
}

View File

@@ -2,43 +2,12 @@
package v0alpha1
// +k8s:openapi-gen=true
type UserstatusOperatorState struct {
// lastEvaluation is the ResourceVersion last evaluated
LastEvaluation string `json:"lastEvaluation"`
// state describes the state of the lastEvaluation.
// It is limited to three possible states for machine evaluation.
State UserStatusOperatorStateState `json:"state"`
// descriptiveState is an optional more descriptive state field which has no requirements on format
DescriptiveState *string `json:"descriptiveState,omitempty"`
// details contains any extra information that is operator-specific
Details map[string]interface{} `json:"details,omitempty"`
}
// NewUserstatusOperatorState creates a new UserstatusOperatorState object.
func NewUserstatusOperatorState() *UserstatusOperatorState {
return &UserstatusOperatorState{}
}
// +k8s:openapi-gen=true
type UserStatus struct {
// operatorStates is a map of operator ID to operator state evaluations.
// Any operator which consumes this kind SHOULD add its state evaluation information to this field.
OperatorStates map[string]UserstatusOperatorState `json:"operatorStates,omitempty"`
// additionalFields is reserved for future use
AdditionalFields map[string]interface{} `json:"additionalFields,omitempty"`
LastSeenAt int64 `json:"lastSeenAt"`
}
// NewUserStatus creates a new UserStatus object.
func NewUserStatus() *UserStatus {
return &UserStatus{}
}
// +k8s:openapi-gen=true
type UserStatusOperatorStateState string
const (
UserStatusOperatorStateStateSuccess UserStatusOperatorStateState = "success"
UserStatusOperatorStateStateInProgress UserStatusOperatorStateState = "in_progress"
UserStatusOperatorStateStateFailed UserStatusOperatorStateState = "failed"
)

View File

@@ -21,6 +21,7 @@ func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenA
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GetGroupsBody": schema_pkg_apis_iam_v0alpha1_GetGroupsBody(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GetSearchTeams": schema_pkg_apis_iam_v0alpha1_GetSearchTeams(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GetSearchTeamsBody": schema_pkg_apis_iam_v0alpha1_GetSearchTeamsBody(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GetSearchUsers": schema_pkg_apis_iam_v0alpha1_GetSearchUsers(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GlobalRole": schema_pkg_apis_iam_v0alpha1_GlobalRole(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GlobalRoleBinding": schema_pkg_apis_iam_v0alpha1_GlobalRoleBinding(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GlobalRoleBindingList": schema_pkg_apis_iam_v0alpha1_GlobalRoleBindingList(ref),
@@ -72,10 +73,10 @@ func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenA
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.TeamStatus": schema_pkg_apis_iam_v0alpha1_TeamStatus(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.TeamstatusOperatorState": schema_pkg_apis_iam_v0alpha1_TeamstatusOperatorState(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.User": schema_pkg_apis_iam_v0alpha1_User(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserHit": schema_pkg_apis_iam_v0alpha1_UserHit(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserList": schema_pkg_apis_iam_v0alpha1_UserList(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserSpec": schema_pkg_apis_iam_v0alpha1_UserSpec(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserStatus": schema_pkg_apis_iam_v0alpha1_UserStatus(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserstatusOperatorState": schema_pkg_apis_iam_v0alpha1_UserstatusOperatorState(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.VersionsV0alpha1Kinds7RoutesGroupsGETResponseExternalGroupMapping": schema_pkg_apis_iam_v0alpha1_VersionsV0alpha1Kinds7RoutesGroupsGETResponseExternalGroupMapping(ref),
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.VersionsV0alpha1RoutesNamespacedSearchTeamsGETResponseTeamHit": schema_pkg_apis_iam_v0alpha1_VersionsV0alpha1RoutesNamespacedSearchTeamsGETResponseTeamHit(ref),
}
@@ -688,6 +689,62 @@ func schema_pkg_apis_iam_v0alpha1_GetSearchTeamsBody(ref common.ReferenceCallbac
}
}
func schema_pkg_apis_iam_v0alpha1_GetSearchUsers(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"offset": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
"totalHits": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
"hits": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref("github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserHit"),
},
},
},
},
},
"queryCost": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"number"},
Format: "double",
},
},
"maxScore": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"number"},
Format: "double",
},
},
},
Required: []string{"offset", "totalHits", "hits", "queryCost", "maxScore"},
},
},
Dependencies: []string{
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserHit"},
}
}
func schema_pkg_apis_iam_v0alpha1_GlobalRole(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
@@ -2833,12 +2890,94 @@ func schema_pkg_apis_iam_v0alpha1_User(ref common.ReferenceCallback) common.Open
Ref: ref("github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserSpec"),
},
},
"status": {
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref("github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserStatus"),
},
},
},
Required: []string{"metadata", "spec"},
Required: []string{"metadata", "spec", "status"},
},
},
Dependencies: []string{
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserSpec", "k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta"},
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserSpec", "github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserStatus", "k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta"},
}
}
func schema_pkg_apis_iam_v0alpha1_UserHit(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"name": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"title": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"login": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"email": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"role": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"lastSeenAt": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
"lastSeenAtAge": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
Format: "",
},
},
"provisioned": {
SchemaProps: spec.SchemaProps{
Default: false,
Type: []string{"boolean"},
Format: "",
},
},
"score": {
SchemaProps: spec.SchemaProps{
Default: 0,
Type: []string{"number"},
Format: "double",
},
},
},
Required: []string{"name", "title", "login", "email", "role", "lastSeenAt", "lastSeenAtAge", "provisioned", "score"},
},
},
}
}
@@ -2965,90 +3104,15 @@ func schema_pkg_apis_iam_v0alpha1_UserStatus(ref common.ReferenceCallback) commo
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"operatorStates": {
"lastSeenAt": {
SchemaProps: spec.SchemaProps{
Description: "operatorStates is a map of operator ID to operator state evaluations. Any operator which consumes this kind SHOULD add its state evaluation information to this field.",
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Default: map[string]interface{}{},
Ref: ref("github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserstatusOperatorState"),
},
},
},
},
},
"additionalFields": {
SchemaProps: spec.SchemaProps{
Description: "additionalFields is reserved for future use",
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Format: "",
},
},
},
Default: 0,
Type: []string{"integer"},
Format: "int64",
},
},
},
},
},
Dependencies: []string{
"github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.UserstatusOperatorState"},
}
}
func schema_pkg_apis_iam_v0alpha1_UserstatusOperatorState(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"lastEvaluation": {
SchemaProps: spec.SchemaProps{
Description: "lastEvaluation is the ResourceVersion last evaluated",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"state": {
SchemaProps: spec.SchemaProps{
Description: "state describes the state of the lastEvaluation. It is limited to three possible states for machine evaluation.",
Default: "",
Type: []string{"string"},
Format: "",
},
},
"descriptiveState": {
SchemaProps: spec.SchemaProps{
Description: "descriptiveState is an optional more descriptive state field which has no requirements on format",
Type: []string{"string"},
Format: "",
},
},
"details": {
SchemaProps: spec.SchemaProps{
Description: "details contains any extra information that is operator-specific",
Type: []string{"object"},
AdditionalProperties: &spec.SchemaOrBool{
Allows: true,
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Format: "",
},
},
},
},
},
},
Required: []string{"lastEvaluation", "state"},
Required: []string{"lastSeenAt"},
},
},
}

View File

@@ -173,6 +173,36 @@ var appManifestData = app.ManifestData{
Parameters: []*spec3.Parameter{
{
ParameterProps: spec3.ParameterProps{
Name: "limit",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{},
},
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "offset",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{},
},
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "page",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{},
},
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "query",
@@ -261,6 +291,118 @@ var appManifestData = app.ManifestData{
},
},
},
"/searchUsers": {
Get: &spec3.Operation{
OperationProps: spec3.OperationProps{
OperationId: "getSearchUsers",
Parameters: []*spec3.Parameter{
{
ParameterProps: spec3.ParameterProps{
Name: "limit",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{},
},
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "offset",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{},
},
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "page",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{},
},
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "query",
In: "query",
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
},
},
},
Responses: &spec3.Responses{
ResponsesProps: spec3.ResponsesProps{
Default: &spec3.Response{
ResponseProps: spec3.ResponseProps{
Description: "Default OK response",
Content: map[string]*spec3.MediaType{
"application/json": {
MediaTypeProps: spec3.MediaTypeProps{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"hits": {
SchemaProps: spec.SchemaProps{
Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
Ref: spec.MustCreateRef("#/components/schemas/getSearchUsersUserHit"),
}},
},
},
},
"maxScore": {
SchemaProps: spec.SchemaProps{
Type: []string{"number"},
},
},
"offset": {
SchemaProps: spec.SchemaProps{
Type: []string{"integer"},
},
},
"queryCost": {
SchemaProps: spec.SchemaProps{
Type: []string{"number"},
},
},
"totalHits": {
SchemaProps: spec.SchemaProps{
Type: []string{"integer"},
},
},
},
Required: []string{
"offset",
"totalHits",
"hits",
"queryCost",
"maxScore",
},
}},
}},
},
},
},
}},
},
},
},
},
Cluster: map[string]spec3.PathProps{},
Schemas: map[string]spec.Schema{
@@ -303,6 +445,69 @@ var appManifestData = app.ManifestData{
},
},
},
"getSearchUsersUserHit": {
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
"email": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
"lastSeenAt": {
SchemaProps: spec.SchemaProps{
Type: []string{"integer"},
},
},
"lastSeenAtAge": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
"login": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
"name": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
"provisioned": {
SchemaProps: spec.SchemaProps{
Type: []string{"boolean"},
},
},
"role": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
"score": {
SchemaProps: spec.SchemaProps{
Type: []string{"number"},
},
},
"title": {
SchemaProps: spec.SchemaProps{
Type: []string{"string"},
},
},
},
Required: []string{
"name",
"title",
"login",
"email",
"role",
"lastSeenAt",
"lastSeenAtAge",
"provisioned",
"score",
},
},
},
},
},
},
@@ -342,6 +547,7 @@ var customRouteToGoResponseType = map[string]any{
"v0alpha1|Team|groups|GET": v0alpha1.GetGroups{},
"v0alpha1||<namespace>/searchTeams|GET": v0alpha1.GetSearchTeams{},
"v0alpha1||<namespace>/searchUsers|GET": v0alpha1.GetSearchUsers{},
}
// ManifestCustomRouteResponsesAssociator returns the associated response go type for a given kind, version, custom route path, and method, if one exists.

View File

@@ -4,6 +4,8 @@ import (
"context"
"fmt"
"github.com/prometheus/client_golang/prometheus"
"github.com/grafana/grafana-app-sdk/app"
"github.com/grafana/grafana-app-sdk/logging"
"github.com/grafana/grafana-app-sdk/operator"
@@ -12,7 +14,6 @@ import (
foldersKind "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
"github.com/grafana/grafana/apps/iam/pkg/reconcilers"
"github.com/grafana/grafana/pkg/services/authz"
"github.com/prometheus/client_golang/prometheus"
)
var appManifestData = app.ManifestData{
@@ -78,7 +79,7 @@ func New(cfg app.Config) (app.App, error) {
folderReconciler, err := reconcilers.NewFolderReconciler(reconcilers.ReconcilerConfig{
ZanzanaCfg: appSpecificConfig.ZanzanaClientCfg,
Metrics: metrics,
})
}, appSpecificConfig.MetricsRegisterer)
if err != nil {
return nil, fmt.Errorf("unable to create FolderReconciler: %w", err)
}

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"time"
"github.com/prometheus/client_golang/prometheus"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
@@ -35,9 +36,9 @@ type FolderReconciler struct {
metrics *ReconcilerMetrics
}
func NewFolderReconciler(cfg ReconcilerConfig) (operator.Reconciler, error) {
func NewFolderReconciler(cfg ReconcilerConfig, reg prometheus.Registerer) (operator.Reconciler, error) {
// Create Zanzana client
zanzanaClient, err := authz.NewRemoteZanzanaClient("*", cfg.ZanzanaCfg)
zanzanaClient, err := authz.NewRemoteZanzanaClient(cfg.ZanzanaCfg, reg)
if err != nil {
return nil, fmt.Errorf("unable to create zanzana client: %w", err)

View File

@@ -83,6 +83,12 @@ tree:
nodeType: leaf
linkId: test-case-2
linkType: scope
test-case-redirect:
title: Test case with redirect
nodeType: leaf
linkId: shoe-org
linkType: scope
redirectPath: /d/dcb9f5e9-8066-4397-889e-864b99555dbb #Reliability dashboard
clusters:
title: Clusters
nodeType: container

View File

@@ -67,10 +67,12 @@ type ScopeFilterConfig struct {
type TreeNode struct {
Title string `yaml:"title"`
SubTitle string `yaml:"subTitle,omitempty"`
Description string `yaml:"description,omitempty"`
NodeType string `yaml:"nodeType"`
LinkID string `yaml:"linkId,omitempty"`
LinkType string `yaml:"linkType,omitempty"`
DisableMultiSelect bool `yaml:"disableMultiSelect,omitempty"`
RedirectPath string `yaml:"redirectPath,omitempty"`
Children map[string]TreeNode `yaml:"children,omitempty"`
}
@@ -259,6 +261,7 @@ func (c *Client) createScopeNode(name string, node TreeNode, parentName string)
spec := v0alpha1.ScopeNodeSpec{
Title: node.Title,
SubTitle: node.SubTitle,
Description: node.Description,
NodeType: nodeType,
DisableMultiSelect: node.DisableMultiSelect,
}
@@ -272,6 +275,10 @@ func (c *Client) createScopeNode(name string, node TreeNode, parentName string)
spec.LinkType = linkType
}
if node.RedirectPath != "" {
spec.RedirectPath = node.RedirectPath
}
resource := v0alpha1.ScopeNode{
TypeMeta: metav1.TypeMeta{
APIVersion: apiVersion,

View File

@@ -7,8 +7,8 @@ MAKEFLAGS += --no-builtin-rule
include docs.mk
.PHONY: sources/panels-visualizations/query-transform-data/transform-data/index.md
sources/panels-visualizations/query-transform-data/transform-data/index.md: ## Generate the Transform Data page source.
.PHONY: sources/visualizations/panels-visualizations/query-transform-data/transform-data/index.md
sources/visualizations/panels-visualizations/query-transform-data/transform-data/index.md: ## Generate the Transform Data page source.
cd $(CURDIR)/.. && \
npx tsx ./scripts/docs/generate-transformations.ts && \
npx prettier -w $(CURDIR)/$@

View File

@@ -54,7 +54,7 @@ For production systems, use the `folderFromFilesStructure` capability instead of
## Before you begin
{{< admonition type="note" >}}
Enable the `provisioning` and `kubernetesDashboards` feature toggles in Grafana to use this feature.
Enable the `provisioning` feature toggle in Grafana to use this feature.
{{< /admonition >}}
To set up file provisioning, you need:
@@ -67,7 +67,7 @@ To set up file provisioning, you need:
## Enable required feature toggles and configure permitted paths
To activate local file provisioning in Grafana, you need to enable the `provisioning` and `kubernetesDashboards` feature toggles.
To activate local file provisioning in Grafana, you need to enable the `provisioning` feature toggle.
For additional information about feature toggles, refer to [Configure feature toggles](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/feature-toggles).
The local path must be a relative path, and it must be listed in the `permitted_provisioned_paths` configuration option.
@@ -82,12 +82,11 @@ Any subdirectories are automatically included.
The values that you enter for `permitted_provisioned_paths` become the base paths for the local paths that you enter in the **Connect to local storage** wizard.
1. Open your Grafana configuration file, either `grafana.ini` or `custom.ini`. For file location based on operating system, refer to [Configuration file location](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/feature-toggles/#experimental-feature-toggles).
1. Locate or add a `[feature_toggles]` section. Add these values:
1. Locate or add a `[feature_toggles]` section. Add this value:
```ini
[feature_toggles]
provisioning = true
kubernetesDashboards = true ; use k8s from browser
```
1. Locate or add a `[paths]` section. To add more than one location, use the pipe character (`|`) to separate the paths. The list should not include empty paths or trailing pipes. Add these values:
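The `[paths]` values themselves are cut off in this hunk; a minimal sketch of what the combined configuration might look like (the directory names are illustrative):

```ini
[feature_toggles]
provisioning = true

[paths]
; Relative base paths that local provisioning may read from.
; Separate multiple paths with a pipe; no empty entries or trailing pipes.
permitted_provisioned_paths = devenv/dashboards|conf/provisioning
```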

View File

@@ -29,76 +29,70 @@ You can sign up to the private preview using the [Git Sync early access form](ht
{{< /admonition >}}
Git Sync lets you manage Grafana dashboards as code by storing dashboard JSON files and folders in a remote GitHub repository.
To set up Git Sync and synchronize with a GitHub repository, follow these steps:
1. [Enable feature toggles in Grafana](#enable-required-feature-toggles) (first time set up).
1. [Create a GitHub access token](#create-a-github-access-token).
1. [Configure a connection to your GitHub repository](#set-up-the-connection-to-github).
1. [Choose what content to sync with Grafana](#choose-what-to-synchronize).
Optionally, you can [extend Git Sync](#configure-webhooks-and-image-rendering) by enabling pull request notifications and image previews of dashboard changes.
| Capability | Benefit | Requires |
| ----------------------------------------------------- | ------------------------------------------------------------------------------- | -------------------------------------- |
| Adds a table summarizing changes to your pull request | Provides a convenient way to save changes back to GitHub. | Webhooks configured |
| Adds a dashboard preview image to a PR | View a snapshot of dashboard changes in a pull request without opening Grafana. | Image renderer and webhooks configured |
{{< admonition type="note" >}}
Alternatively, you can configure a local file system instead of using GitHub. Refer to [Set up file provisioning](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/provision-resources/file-path-setup/) for more information.
{{< /admonition >}}
## Performance impacts of enabling Git Sync
Git Sync is an experimental feature and is under continuous development. Reporting any issues you encounter can help us improve Git Sync.
When Git Sync is enabled, the database load might increase, especially for instances with a lot of folders and nested folders. Evaluate the performance impact, if any, in a non-production environment.
This guide shows you how to set up Git Sync to synchronize your Grafana dashboards and folders with a GitHub repository. You'll set up Git Sync to enable version-controlled dashboard management either [using the UI](#set-up-git-sync-using-grafana-ui) or [as code](#set-up-git-sync-as-code).
## Before you begin
{{< admonition type="caution" >}}
Before you begin, ensure you have the following:
Refer to [Known limitations](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/provision-resources/intro-git-sync#known-limitations) before using Git Sync.
- A Grafana instance (Cloud, OSS, or Enterprise).
- If you're [using webhooks or image rendering](#extend-git-sync-for-real-time-notification-and-image-rendering), a public instance with external access
- Administration rights in your Grafana organization
- A [GitHub personal access token](#create-a-github-access-token)
- A GitHub repository to store your dashboards in
- Optional: The [Image Renderer service](https://github.com/grafana/grafana-image-renderer) to save image previews with your PRs
### Known limitations
Refer to [Known limitations](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/intro-git-sync#known-limitations) before using Git Sync.
Refer to [Supported resources](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/intro-git-sync#supported-resources) for details about which resources you can sync.
### Performance considerations
When Git Sync is enabled, the database load might increase, especially for instances with many folders and nested folders. Evaluate the performance impact, if any, in a non-production environment.
Git Sync is under continuous development. [Report any issues](https://grafana.com/help/) you encounter to help us improve Git Sync.
## Set up Git Sync
To set up Git Sync and synchronize with a GitHub repository, follow these steps:
1. [Enable feature toggles in Grafana](#enable-required-feature-toggles) (first time setup)
1. [Create a GitHub access token](#create-a-github-access-token)
1. Set up Git Sync [using the UI](#set-up-git-sync-using-grafana-ui) or [as code](#set-up-git-sync-as-code)
After setup, you can [verify your dashboards](#verify-your-dashboards-in-grafana).
Optionally, you can also [extend Git Sync with webhooks and image rendering](#extend-git-sync-for-real-time-notification-and-image-rendering).
{{< admonition type="note" >}}
Alternatively, you can configure a local file system instead of using GitHub. Refer to [Set up file provisioning](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/file-path-setup/) for more information.
{{< /admonition >}}
### Requirements
To set up Git Sync, you need:
- Administration rights in your Grafana organization.
- The required feature toggles enabled in your Grafana instance. Refer to [Enable required feature toggles](#enable-required-feature-toggles) for instructions.
- A GitHub repository to store your dashboards in.
- If you want to use a local file path, refer to [the local file path guide](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/provision-resources/file-path-setup/).
- A GitHub access token. The Grafana UI prompts you for it during setup.
- Optional: A public Grafana instance.
- Optional: The [Image Renderer service](https://github.com/grafana/grafana-image-renderer) to save image previews with your PRs.
## Enable required feature toggles
To activate Git Sync in Grafana, you need to enable the `provisioning` and `kubernetesDashboards` feature toggles.
For additional information about feature toggles, refer to [Configure feature toggles](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/feature-toggles).
To activate Git Sync in Grafana, you need to enable the `provisioning` feature toggle. For more information about feature toggles, refer to [Configure feature toggles](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/feature-toggles/#experimental-feature-toggles).
To enable the required feature toggles, add them to your Grafana configuration file:
To enable the required feature toggle:
1. Open your Grafana configuration file, either `grafana.ini` or `custom.ini`. For file location based on operating system, refer to [Configuration file location](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/feature-toggles/#experimental-feature-toggles).
1. Locate or add a `[feature_toggles]` section. Add these values:
1. Locate or add a `[feature_toggles]` section. Add this value:
```ini
[feature_toggles]
provisioning = true
kubernetesDashboards = true ; use k8s from browser
```
1. Save the changes to the file and restart Grafana.
## Create a GitHub access token
Whenever you connect to a GitHub repository, you need to create a GitHub access token with specific repository permissions.
This token needs to be added to your Git Sync configuration to enable read and write permissions between Grafana and the GitHub repository.
Whenever you connect to a GitHub repository, you need to create a GitHub access token with specific repository permissions. This token needs to be added to your Git Sync configuration to enable read and write permissions between Grafana and the GitHub repository.
To create a GitHub access token:
1. Create a new token using [Create new fine-grained personal access token](https://github.com/settings/personal-access-tokens/new). Refer to [Managing your personal access tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) for instructions.
1. Under **Permissions**, expand **Repository permissions**.
@@ -112,19 +106,23 @@ This token needs to be added to your Git Sync configuration to enable read and w
1. Verify the options and select **Generate token**.
1. Copy the access token. Leave the browser window available with the token until you've completed configuration.
GitHub Apps are not currently supported.
GitHub Apps aren't currently supported.
## Set up the connection to GitHub
## Set up Git Sync using Grafana UI
Use **Provisioning** to guide you through setting up Git Sync to use a GitHub repository.
1. [Configure a connection to your GitHub repository](#set-up-the-connection-to-github)
1. [Choose what content to sync with Grafana](#choose-what-to-synchronize)
1. [Choose additional settings](#choose-additional-settings)
### Set up the connection to GitHub
Use **Provisioning** to guide you through setting up Git Sync to use a GitHub repository:
1. Log in to your Grafana server with an account that has the Grafana Admin flag set.
1. Select **Administration** in the left-side menu and then **Provisioning**.
1. Select **Configure Git Sync**.
### Connect to external storage
To connect your GitHub repository, follow these steps:
To connect your GitHub repository:
1. Paste your GitHub personal access token into **Enter your access token**. Refer to [Create a GitHub access token](#create-a-github-access-token) for instructions.
1. Paste the **Repository URL** for your GitHub repository into the text box.
@@ -134,32 +132,12 @@ To connect your GitHub repository, follow these steps:
### Choose what to synchronize
In this step you can decide which elements to synchronize. Keep in mind the available options depend on the status of your Grafana instance.
In this step, you can decide which elements to synchronize. The available options depend on the status of your Grafana instance:
- If the instance contains resources in an incompatible data format, you'll have to migrate all the data using instance sync. Folder sync won't be supported.
- If there is already another connection using folder sync, instance sync won't be offered.
- If there's already another connection using folder sync, instance sync won't be offered.
#### Synchronization limitations
Git Sync only supports dashboards and folders. Alerts, panels, and other resources are not supported yet.
{{< admonition type="caution" >}}
Refer to [Known limitations](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/observability-as-code/provision-resources/intro-git-sync#known-limitations) before using Git Sync. Refer to [Supported resources](/docs/grafana/<GRAFANA_VERSION>/observability-as-code/provision-resources/intro-git-sync#supported-resources) for details about which resources you can sync.
{{< /admonition >}}
Full instance sync is not available in Grafana Cloud.
In Grafana OSS/Enterprise:
- If you try to perform a full instance sync with resources that contain alerts or panels, Git Sync will block the connection.
- You won't be able to create new alerts or library panels after the setup is completed.
- If you opted for full instance sync and want to use alerts and library panels, you'll have to delete the synced repository and connect again with folder sync.
#### Set up synchronization
To set up synchronization, choose to either sync your entire organization's resources with external storage, or to sync certain resources to a new Grafana folder (with up to 10 connections).
To set up synchronization:
- Choose **Sync all resources with external storage** if you want to sync and manage your entire Grafana instance through external storage. With this option, all of your dashboards are synced to that one repository. You can only have one provisioned connection with this selection, and you won't have the option of setting up additional repositories to connect to.
- Choose **Sync external storage to new Grafana folder** to sync external resources into a new folder without affecting the rest of your instance. You can repeat this process for up to 10 connections.
@@ -170,20 +148,183 @@ Next, enter a **Display name** for the repository connection. Resources stored i
Finally, you can set up how often your configured storage is polled for updates.
To configure additional settings:
1. For **Update instance interval (seconds)**, enter how often you want the instance to pull updates from GitHub. The default value is 60 seconds.
1. Optional: Select **Read only** to ensure resources can't be modified in Grafana.
1. Optional: If you have the Grafana Image Renderer plugin configured, you can **Enable dashboards previews in pull requests**. If image rendering is not available, then you can't select this option. For more information, refer to the [Image Renderer service](https://github.com/grafana/grafana-image-renderer).
1. Optional: If you have the Grafana Image Renderer plugin configured, you can **Enable dashboards previews in pull requests**. If image rendering isn't available, then you can't select this option. For more information, refer to the [Image Renderer service](https://github.com/grafana/grafana-image-renderer).
1. Select **Finish** to proceed.
### Modify your configuration after setup is complete
To update your repository configuration after you've completed setup:
1. Log in to your Grafana server with an account that has the Grafana Admin flag set.
1. Select **Administration** in the left-side menu and then **Provisioning**.
1. Select **Settings** for the repository you wish to modify.
1. Use the **Configure repository** screen to update any of the settings.
1. Select **Save** to preserve the updates.
## Set up Git Sync as code
Alternatively, you can configure Git Sync using `grafanactl`. Since Git Sync configuration is managed as code using Custom Resource Definitions (CRDs), you can create a Repository CRD in a YAML file and use `grafanactl` to push it to Grafana. This approach enables automated, GitOps-style workflows for managing Git Sync configuration instead of using the Grafana UI.
To set up Git Sync with `grafanactl`, follow these steps:
1. [Create the repository CRD](#create-the-repository-crd)
1. [Push the repository CRD to Grafana](#push-the-repository-crd-to-grafana)
1. [Manage repository resources](#manage-repository-resources)
1. [Verify setup](#verify-setup)
For more information, refer to the following documents:
- [grafanactl Documentation](https://grafana.github.io/grafanactl/)
- [Repository CRD Reference](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/git-sync-setup/)
- [Dashboard CRD Format](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/export-resources/)
### Create the repository CRD
Create a `repository.yaml` file defining your Git Sync configuration:
```yaml
apiVersion: provisioning.grafana.app/v0alpha1
kind: Repository
metadata:
name: <REPOSITORY_NAME>
spec:
title: <REPOSITORY_TITLE>
type: github
github:
url: <GITHUB_REPO_URL>
branch: <BRANCH>
path: grafana/
generateDashboardPreviews: true
sync:
enabled: true
intervalSeconds: 60
target: folder
workflows:
- write
- branch
secure:
token:
create: <GITHUB_PAT>
```
Replace the placeholders with your values:
- _`<REPOSITORY_NAME>`_: Unique identifier for this repository resource
- _`<REPOSITORY_TITLE>`_: Human-readable name displayed in Grafana UI
- _`<GITHUB_REPO_URL>`_: GitHub repository URL
- _`<BRANCH>`_: Branch to sync
- _`<GITHUB_PAT>`_: GitHub Personal Access Token
{{< admonition type="note" >}}
Only `target: folder` is currently supported for Git Sync.
{{< /admonition >}}
#### Configuration parameters
The following configuration parameters are available:
| Field | Description |
| --------------------------------------- | ----------------------------------------------------------- |
| `metadata.name` | Unique identifier for this repository resource |
| `spec.title` | Human-readable name displayed in Grafana UI |
| `spec.type` | Repository type (`github`) |
| `spec.github.url` | GitHub repository URL |
| `spec.github.branch` | Branch to sync |
| `spec.github.path` | Directory path containing dashboards |
| `spec.github.generateDashboardPreviews` | Generate preview images (true/false) |
| `spec.sync.enabled` | Enable synchronization (true/false) |
| `spec.sync.intervalSeconds` | Sync interval in seconds |
| `spec.sync.target` | Where to place synced dashboards (`folder`) |
| `spec.workflows` | Enabled workflows: `write` (direct commits), `branch` (PRs) |
| `secure.token.create` | GitHub Personal Access Token |
### Push the repository CRD to Grafana
Before pushing any resources, configure `grafanactl` with your Grafana instance details. Refer to the [grafanactl configuration documentation](https://grafana.github.io/grafanactl/) for setup instructions.
Push the repository configuration:
```sh
grafanactl resources push --path <DIRECTORY>
```
The `--path` parameter must point to the directory that contains your `repository.yaml` file.
After pushing, Grafana will:
1. Create the repository resource
1. Connect to your GitHub repository
1. Pull dashboards from the specified path
1. Begin syncing at the configured interval
### Manage repository resources
#### List repositories
To list all repositories:
```sh
grafanactl resources get repositories
```
#### Get repository details
To get details for a specific repository:
```sh
grafanactl resources get repository/<REPOSITORY_NAME>
grafanactl resources get repository/<REPOSITORY_NAME> -o json
grafanactl resources get repository/<REPOSITORY_NAME> -o yaml
```
#### Update the repository
To update a repository:
```sh
grafanactl resources edit repository/<REPOSITORY_NAME>
```
#### Delete the repository
To delete a repository:
```sh
grafanactl resources delete repository/<REPOSITORY_NAME>
```
### Verify setup
Check that Git Sync is working:
```sh
# List repositories
grafanactl resources get repositories
# Check Grafana UI
# Navigate to: Administration → Provisioning → Git Sync
```
## Verify your dashboards in Grafana
To verify that your dashboards are available at the location that you specified, click **Dashboards**. The name of the dashboard is listed in the **Name** column.
Now that your dashboards have been synced from a repository, you can customize the name, change the branch, and create a pull request (PR) for it. Refer to [Manage provisioned repositories with Git Sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/use-git-sync/) for more information.
## Extend Git Sync with real-time notifications and image rendering
Optionally, you can extend Git Sync by enabling pull request notifications and image previews of dashboard changes.
| Capability | Benefit | Requires |
| ----------------------------------------------------- | ------------------------------------------------------------------------------ | -------------------------------------- |
| Add a change summary table to your pull request | Provides a convenient way to save changes back to GitHub | Webhooks configured |
| Add a dashboard preview image to a pull request | View a snapshot of dashboard changes without opening Grafana | Image renderer and webhooks configured |
### Set up webhooks for real-time notifications and pull request integration
When connecting to a GitHub repository, Git Sync uses webhooks to enable real-time updates.
You can set up webhooks with whichever service or tooling you prefer. You can use Cloudflare Tunnels with a Cloudflare-managed domain, port-forwarding and DNS options, or a tool such as `ngrok`.
To set up webhooks, you need to expose your Grafana instance to the public Internet. You can do this via port forwarding and DNS, a tool such as `ngrok`, or any other method you prefer. The permissions set in your GitHub access token provide the authorization for this communication.
After you have the public URL, you can add it to your Grafana configuration file:
```ini
[server]
root_url = https://<PUBLIC_DOMAIN>
```
Replace _`<PUBLIC_DOMAIN>`_ with your public domain.
To check the configured webhooks, go to **Administration** > **Provisioning** and click the **View** link for your GitHub repository.
#### Expose necessary paths only
If your security setup doesn't permit publicly exposing the Grafana instance, you can either choose to allowlist the GitHub IP addresses, or expose only the necessary paths.
The paths that must be exposed, expressed as a regular expression, are:
- `/apis/provisioning\.grafana\.app/v0(alpha1)?/namespaces/[^/]+/repositories/[^/]+/(webhook|render/.*)$`
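You can sanity-check a candidate allowlist against this expression with `grep -E`. The repository name `my-repo` and namespace `default` below are hypothetical examples, not values Grafana requires:

```sh
# Check whether a request path falls under the exposed provisioning endpoints.
PATTERN='/apis/provisioning\.grafana\.app/v0(alpha1)?/namespaces/[^/]+/repositories/[^/]+/(webhook|render/.*)$'

# The webhook endpoint for a repository named "my-repo" matches:
echo '/apis/provisioning.grafana.app/v0alpha1/namespaces/default/repositories/my-repo/webhook' \
  | grep -Eq "$PATTERN" && echo 'allowed'

# An unrelated Grafana path does not:
echo '/api/dashboards/home' \
  | grep -Eq "$PATTERN" || echo 'blocked'
```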
### Set up image rendering for dashboard previews
Set up image rendering to add visual previews of dashboard updates directly in pull requests.
To enable this capability, install the Grafana Image Renderer in your Grafana instance. For more information and installation instructions, refer to the [Image Renderer service](https://github.com/grafana/grafana-image-renderer).
## Next steps
You've successfully set up Git Sync to manage your Grafana dashboards through version control. Your dashboards are now synchronized with a GitHub repository, enabling collaborative development and change tracking.
To learn more about using Git Sync:
- [Work with provisioned dashboards](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/provisioned-dashboards/)
- [Manage provisioned repositories with Git Sync](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/use-git-sync/)
- [Export resources](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/as-code/observability-as-code/provision-resources/export-resources/)
- [grafanactl documentation](https://grafana.github.io/grafanactl/)


The table includes default and other fields:
| targetBlank | bool. If true, the link will be opened in a new tab. Default is `false`. |
| includeVars | bool. If true, includes current template variables values in the link as query params. Default is `false`. |
| keepTime | bool. If true, includes current time range in the link as query params. Default is `false`. |
| placement? | string. Use `placement` to display the link somewhere on the dashboard other than above the visualizations. Use the `inControlsMenu` value to render the link in the dashboard controls drop-down menu. |
<!-- prettier-ignore-end -->
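Put together, a dashboard link using these fields might look like this in the dashboard JSON model, sketched here in YAML. The title and URL are illustrative, and the field set is assumed from the table above rather than taken from the full schema:

```yaml
links:
  - title: Runbook
    type: link
    url: https://example.com/runbook
    targetBlank: true
    includeVars: false
    keepTime: true
    placement: inControlsMenu
```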


menuTitle: Elasticsearch
title: Elasticsearch data source
weight: 325
refs:
explore:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/
Elasticsearch is a search and analytics engine used for a variety of use cases.
You can create many types of queries to visualize logs or metrics stored in Elasticsearch, and annotate graphs with log events stored in Elasticsearch.
The following resources will help you get started with Elasticsearch and Grafana:
- [What is Elasticsearch?](https://www.elastic.co/guide/en/elasticsearch/reference/current/elasticsearch-intro.html)
- [Configure the Elasticsearch data source](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/configure/)
- [Elasticsearch query editor](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/query-editor/)
- [Elasticsearch template variables](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/template-variables/)
- [Elasticsearch annotations](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/annotations/)
- [Elasticsearch alerting](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/alerting/)
- [Troubleshooting issues with the Elasticsearch data source](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/troubleshooting/)
## Key capabilities
The Elasticsearch data source supports:
- **Metrics queries:** Aggregate and visualize numeric data using bucket and metric aggregations.
- **Log queries:** Search, filter, and explore log data with Lucene query syntax.
- **Annotations:** Overlay Elasticsearch events on your dashboard graphs.
- **Alerting:** Create alerts based on Elasticsearch query results.
## Before you begin
Before you configure the Elasticsearch data source, you need:
- An Elasticsearch instance (v7.17+, v8.x, or v9.x)
- Network access from Grafana to your Elasticsearch server
- Appropriate user credentials or API keys with read access
{{< admonition type="note" >}}
If you use Amazon OpenSearch Service (the successor to Amazon Elasticsearch Service), use the [OpenSearch data source](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/opensearch/) instead.
{{< /admonition >}}
## Supported Elasticsearch versions
This data source supports these versions of Elasticsearch:
- v8.x
- v9.x
The Grafana maintenance policy for the Elasticsearch data source aligns with [Elastic Product End of Life Dates](https://www.elastic.co/support/eol). Grafana ensures proper functionality for supported versions only. If you use an EOL version of Elasticsearch, you can still run queries, but the query builder displays a warning. Grafana doesn't guarantee functionality or provide fixes for EOL versions.
## Additional resources
Once you have configured the Elasticsearch data source, you can:
- Use [Explore](ref:explore) to run ad-hoc queries against your Elasticsearch data.
- Configure and use [template variables](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/template-variables/) for dynamic dashboards.
- Add [Transformations](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/transform-data/) to process query results.
- [Build dashboards](ref:build-dashboards) to visualize your Elasticsearch data.
## Related data sources
- [OpenSearch](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/opensearch/) - For Amazon OpenSearch Service.
- [Loki](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/loki/) - Grafana's log aggregation system.


---
aliases:
- ../../data-sources/elasticsearch/alerting/
description: Using Grafana Alerting with the Elasticsearch data source
keywords:
- grafana
- elasticsearch
- alerting
- alerts
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Alerting
title: Elasticsearch alerting
weight: 550
refs:
alerting:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/
create-alert-rule:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-grafana-managed-rule/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-grafana-managed-rule/
---
# Elasticsearch alerting
You can use Grafana Alerting with Elasticsearch to create alerts based on your Elasticsearch data. This allows you to monitor metrics, detect anomalies, and receive notifications when specific conditions are met.
For general information about Grafana Alerting, refer to [Grafana Alerting](ref:alerting).
## Before you begin
Before creating alerts with Elasticsearch, ensure you have:
- An Elasticsearch data source configured in Grafana
- Appropriate permissions to create alert rules
- Understanding of the metrics you want to monitor
## Supported query types
Elasticsearch alerting works best with **metrics queries** that return time series data. To create a valid alert query:
- Use a **Date histogram** as the last bucket aggregation (under **Group by**)
- Select appropriate metric aggregations (Count, Average, Sum, Min, Max, etc.)
Queries that return time series data allow Grafana to evaluate values over time and trigger alerts when thresholds are crossed.
### Query types and alerting compatibility
| Query type | Alerting support | Notes |
| ------------------------------ | ---------------- | ----------------------------------------------------------- |
| Metrics with Date histogram | ✅ Full support | Recommended for alerting |
| Metrics without Date histogram | ⚠️ Limited | May not evaluate correctly over time |
| Logs | ❌ Not supported | Use metrics queries instead |
| Raw data | ❌ Not supported | Use metrics queries instead |
| Raw document (deprecated) | ❌ Not supported | Deprecated since Grafana v10.1. Use metrics queries instead |
## Create an alert rule
To create an alert rule using Elasticsearch:
1. Navigate to **Alerting** > **Alert rules**.
1. Click **New alert rule**.
1. Enter a name for the alert rule.
1. Select your **Elasticsearch** data source.
1. Build your query using the query editor:
- Add metric aggregations (for example, Average, Count, Sum)
- Add a Date histogram under **Group by**
- Optionally add filters using Lucene query syntax
1. Configure the alert condition (for example, when the average is above a threshold).
1. Set the evaluation interval and pending period.
1. Configure notifications and labels.
1. Click **Save rule**.
For detailed instructions, refer to [Create a Grafana-managed alert rule](ref:create-alert-rule).
## Example alert queries
The following examples show common alerting scenarios with Elasticsearch.
### Alert on high error count
Monitor the number of error-level log entries:
1. **Query:** `level:error`
1. **Metric:** Count
1. **Group by:** Date histogram (interval: 1m)
1. **Condition:** When count is above 100
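Conceptually, a metrics query like the error-count example above maps to an Elasticsearch search body along these lines. This is a sketch: the aggregation name `errors_over_time` and the exact interval field are illustrative assumptions, not the precise payload Grafana sends. Piping the body through `python3 -m json.tool` simply confirms it is valid JSON:

```sh
python3 -m json.tool <<'EOF'
{
  "size": 0,
  "query": { "query_string": { "query": "level:error" } },
  "aggs": {
    "errors_over_time": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1m" }
    }
  }
}
EOF
```

With `size: 0`, Elasticsearch returns only the per-minute buckets, which is the time series that the alert condition evaluates.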
### Alert on average response time
Monitor API response times:
1. **Query:** `type:api_request`
1. **Metric:** Average on field `response_time`
1. **Group by:** Date histogram (interval: 5m)
1. **Condition:** When average is above 500 (milliseconds)
### Alert on unique user count drop
Detect drops in active users:
1. **Query:** `*` (all documents)
1. **Metric:** Unique count on field `user_id`
1. **Group by:** Date histogram (interval: 1h)
1. **Condition:** When unique count is below 100
## Limitations
When using Elasticsearch with Grafana Alerting, be aware of the following limitations:
### Template variables not supported
Alert queries cannot contain template variables. Grafana evaluates alert rules on the backend without dashboard context, so variables like `$hostname` or `$environment` won't be resolved.
If your dashboard query uses template variables, create a separate query for alerting with hard-coded values.
### Logs queries not supported
Queries using the **Logs** metric type cannot be used for alerting. Convert your query to use metric aggregations with a Date histogram instead.
### Query complexity
Complex queries with many nested aggregations may time out or fail to evaluate. Simplify queries for alerting by:
- Reducing the number of bucket aggregations
- Using appropriate time intervals
- Adding filters to limit the data scanned
## Best practices
Follow these best practices when creating Elasticsearch alerts:
- **Use specific filters:** Add Lucene query filters to focus on relevant data and improve query performance.
- **Choose appropriate intervals:** Match the Date histogram interval to your evaluation frequency.
- **Test queries first:** Verify your query returns expected results in Explore before creating an alert.
- **Set realistic thresholds:** Base alert thresholds on historical data patterns.
- **Use meaningful names:** Give alert rules descriptive names that indicate what they monitor.

View File

@@ -0,0 +1,124 @@
---
aliases:
- ../../data-sources/elasticsearch/annotations/
description: Using annotations with Elasticsearch in Grafana
keywords:
- grafana
- elasticsearch
- annotations
- events
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Annotations
title: Elasticsearch annotations
weight: 500
refs:
annotate-visualizations:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/annotate-visualizations/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/annotate-visualizations/
---
# Elasticsearch annotations
Annotations overlay event data on your dashboard graphs, helping you correlate log events with metrics.
You can use Elasticsearch as a data source for annotations to display events such as deployments, alerts, or other significant occurrences on your visualizations.
For general information about annotations, refer to [Annotate visualizations](ref:annotate-visualizations).
## Before you begin
Before creating Elasticsearch annotations, ensure you have:
- An Elasticsearch data source configured in Grafana
- Documents in Elasticsearch containing event data with timestamp fields
- Read access to the Elasticsearch index containing your events
## Create an annotation query
To add an Elasticsearch annotation to your dashboard:
1. Navigate to your dashboard and click **Dashboard settings** (gear icon).
1. Select **Annotations** in the left menu.
1. Click **Add annotation query**.
1. Enter a **Name** for the annotation.
1. Select your **Elasticsearch** data source from the **Data source** drop-down.
1. Configure the annotation query and field mappings.
1. Click **Save dashboard**.
## Query
Use the query field to filter which Elasticsearch documents appear as annotations. The query uses [Lucene query syntax](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-string-syntax).
**Examples:**
| Query | Description |
| ---------------------------------------- | ---------------------------------------------------- |
| `*` | Matches all documents. |
| `type:deployment` | Shows only deployment events. |
| `level:error OR level:critical` | Shows error and critical events. |
| `service:api AND environment:production` | Shows events for a specific service and environment. |
| `tags:release` | Shows events tagged as releases. |
You can use template variables in your annotation queries. For example, `service:$service` filters annotations based on the selected service variable.
## Field mappings
Field mappings tell Grafana which Elasticsearch fields contain the annotation data.
### Time
The **Time** field specifies which field contains the annotation timestamp.
- **Default:** `@timestamp`
- **Format:** The field must contain a date value that Elasticsearch recognizes.
### Time End
The **Time End** field specifies a field containing the end time for range annotations. Range annotations display as a shaded region on the graph instead of a single vertical line.
- **Default:** Empty (single-point annotations)
- **Use case:** Display maintenance windows, incidents, or any event with a duration.
### Text
The **Text** field specifies which field contains the annotation description displayed when you hover over the annotation.
- **Default:** `tags`
- **Tip:** Use a descriptive field like `message`, `description`, or `summary`.
### Tags
The **Tags** field specifies which field contains tags for the annotation. Tags help categorize and filter annotations.
- **Default:** Empty
- **Format:** The field can contain either a comma-separated string or an array of strings.
## Example: Deployment annotations
To display deployment events as annotations:
1. Create an annotation query with the following settings:
- **Query:** `type:deployment`
- **Time:** `@timestamp`
- **Text:** `message`
- **Tags:** `environment`
This configuration displays deployment events with their messages as the annotation text and environments as tags.
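A document that this configuration would pick up might look like the following, sketched in YAML. The timestamp and field values are hypothetical and only illustrate the mappings above:

```yaml
# Hypothetical deployment event document indexed in Elasticsearch
'@timestamp': '2024-05-01T12:00:00Z'   # mapped as the annotation Time
type: deployment                       # matched by the query type:deployment
message: Deployed api-service v2.3.1   # shown as the annotation Text
environment: production                # shown as an annotation Tag
```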
## Example: Range annotations for incidents
To display incidents with duration:
1. Create an annotation query with the following settings:
- **Query:** `type:incident`
- **Time:** `start_time`
- **Time End:** `end_time`
- **Text:** `description`
- **Tags:** `severity`
This configuration displays incidents as shaded regions from their start time to end time.


---
aliases:
- ../data-sources/elasticsearch/
- ../features/datasources/elasticsearch/
description: Guide for configuring the Elasticsearch data source in Grafana
keywords:
- grafana
- elasticsearch
- guide
- data source
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Configure Elasticsearch
title: Configure the Elasticsearch data source
weight: 200
refs:
administration-documentation:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/data-source-management/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/administration/data-source-management/
supported-expressions:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/logs-integration/#log-level
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/logs-integration/#log-level
query-and-transform-data:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data/
provisioning-data-source:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/#provision-the-data-source
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/connect-externally-hosted/data-sources/elasticsearch/#provision-the-data-source
---
# Configure the Elasticsearch data source
Grafana ships with built-in support for Elasticsearch.
You can create a variety of queries to visualize logs or metrics stored in Elasticsearch, and annotate graphs with log events stored in Elasticsearch.
For instructions on how to add a data source to Grafana, refer to the [administration documentation](ref:administration-documentation).
Only users with the organization `administrator` role can add data sources.
Administrators can also [configure the data source via YAML](ref:provisioning-data-source) with Grafana's provisioning system.
## Configuring permissions
When Elasticsearch security features are enabled, you must grant the following cluster privileges for the data source to work correctly:
- **monitor** - Necessary to retrieve the version information of the connected Elasticsearch instance.
- **view_index_metadata** - Required for accessing mapping definitions of indices.
- **read** - Grants the ability to perform search and retrieval operations on indices. This is essential for querying and extracting data from the cluster.
## Add the data source
To add the Elasticsearch data source, complete the following steps:
1. Click **Connections** in the left-side menu.
1. Under **Connections**, click **Add new connection**.
1. Enter `Elasticsearch` in the search bar.
1. Click **Elasticsearch** under the **Data source** section.
1. Click **Add new data source** in the upper right.
Grafana takes you to the **Settings** tab, where you set up your Elasticsearch configuration.
## Configuration options
The following is a list of configuration options for Elasticsearch.
The first option to configure is the name of your connection:
- **Name** - The data source name. This is how you refer to the data source in panels and queries. Examples: elastic-1, elasticsearch_metrics.
- **Default** - Toggle to select as the default data source option. When you go to a dashboard panel or Explore, this will be the default selected data source.
## Connection
Connect the Elasticsearch data source by specifying a URL.
- **URL** - The URL of your Elasticsearch server. If your Elasticsearch server is local, use `http://localhost:9200`. If it is on a server within a network, this is the URL with the port where you are running Elasticsearch. Example: `http://elasticsearch.example.orgname:9200`.
## Authentication
There are several authentication methods you can choose in the Authentication section.
Select one of the following authentication methods from the dropdown menu.
- **Basic authentication** - The most common authentication method. Use your data source user name and password to connect.
- **Forward OAuth identity** - Forward the OAuth access token (and the OIDC ID token if available) of the user querying the data source.
- **No authentication** - Make the data source available without authentication. Grafana recommends using some type of authentication method.
<!-- - **With credentials** - Toggle to enable credentials such as cookies or auth headers to be sent with cross-site requests. -->
### TLS settings
{{< admonition type="note" >}}
Use TLS (Transport Layer Security) for an additional layer of security when working with Elasticsearch. For information on setting up TLS encryption with Elasticsearch see [Configure TLS](https://www.elastic.co/guide/en/elasticsearch/reference/8.8/configuring-tls.html#configuring-tls). You must add TLS settings to your Elasticsearch configuration file **prior** to setting these options in Grafana.
{{< /admonition >}}
- **Add self-signed certificate** - Check the box to authenticate with a CA certificate. Follow the instructions of the CA (Certificate Authority) to download the certificate file. Required for verifying self-signed TLS certificates.
- **TLS client authentication** - Check the box to authenticate with the TLS client, where the server authenticates the client. Add the `Server name`, `Client certificate` and `Client key`. The **ServerName** is used to verify the hostname on the returned certificate. The **Client certificate** can be generated from a Certificate Authority (CA) or be self-signed. The **Client key** can also be generated from a Certificate Authority (CA) or be self-signed. The client key encrypts the data between client and server.
- **Skip TLS certificate validation** - Check the box to bypass TLS certificate validation. Skipping TLS certificate validation is not recommended unless absolutely necessary or for testing purposes.
### HTTP headers
Click **+ Add header** to add one or more HTTP headers. HTTP headers pass additional context and metadata about the request/response.
- **Header** - Add a custom header. This allows custom headers to be passed based on the needs of your Elasticsearch instance.
- **Value** - The value of the header.
## Additional settings
Additional settings are optional settings that can be configured for more control over your data source.
### Advanced HTTP settings
- **Allowed cookies** - Specify cookies by name that should be forwarded to the data source. The Grafana proxy deletes all forwarded cookies by default.
- **Timeout** - The HTTP request timeout, in seconds. There is no default value.
### Elasticsearch details
The following settings are specific to the Elasticsearch data source.
- **Index name** - Use the index settings to specify a default for the `time field` and your Elasticsearch index's name. You can use a time pattern, for example `[logstash-]YYYY.MM.DD`, or a wildcard for the index name. When specifying a time pattern, the fixed part(s) of the pattern should be wrapped in square brackets.
- **Pattern** - Select the matching pattern if using one in your index name. Options include:
- no pattern
- hourly
- daily
- weekly
- monthly
- yearly
Only select a pattern option if you have specified a time pattern in the Index name field.
- **Time field name** - Name of the time field. The default value is @timestamp. You can enter a different name.
- **Max concurrent shard requests** - Sets the number of shards being queried at the same time. The default is `5`. For more information on shards see [Elasticsearch's documentation](https://www.elastic.co/guide/en/elasticsearch/reference/8.9/scalability.html#scalability).
- **Min time interval** - Defines a lower limit for the auto group-by time interval. This value **must** be formatted as a number followed by a valid time identifier:
| Identifier | Description |
| ---------- | ----------- |
| `y` | year |
| `M` | month |
| `w` | week |
| `d` | day |
| `h` | hour |
| `m` | minute |
| `s` | second |
| `ms` | millisecond |
We recommend setting this value to match your Elasticsearch write frequency.
For example, set this to `1m` if Elasticsearch writes data every minute.
You can also override this setting in a dashboard panel under its data source options. The default is `10s`.
- **X-Pack enabled** - Toggle to enable `X-Pack`-specific features and options, which provide the [query editor](../query-editor/) with additional aggregations, such as `Rate` and `Top Metrics`.
- **Include frozen indices** - Toggle on when the `X-Pack enabled` setting is active. Includes frozen indices in searches. You can configure Grafana to include [frozen indices](https://www.elastic.co/guide/en/elasticsearch/reference/7.13/frozen-indices.html) when performing search requests.
{{< admonition type="note" >}}
Frozen indices are [deprecated in Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/frozen-indices.html) since v7.14.
{{< /admonition >}}
- **Default query mode** - Specifies which query mode the data source uses by default. Options are `Metrics`, `Logs`, `Raw data`, and `Raw document`. The default is `Metrics`.
### Logs
In this section you can configure which fields the data source uses for log messages and log levels.
- **Message field name:** - Grabs the actual log message from the default source.
- **Level field name:** - Name of the field with log level/severity information. When a level label is specified, the value of this label is used to determine the log level and update the color of each log line accordingly. If the log doesnt have a specified level label, we try to determine if its content matches any of the [supported expressions](ref:supported-expressions). The first match always determines the log level. If Grafana cannot infer a log-level field, it will be visualized with an unknown log level.
### Data links
Data links create a link from a specified field that can be accessed in Explore's logs view. You can add multiple data links by clicking **+ Add**.
Each data link configuration consists of:
- **Field** - Sets the name of the field used by the data link.
- **URL/query** - Sets the full link URL if the link is external. If the link is internal, this input serves as a query for the target data source.<br/>In both cases, you can interpolate the value from the field with the `${__value.raw }` macro.
- **URL Label** (Optional) - Sets a custom display label for the link. The link label defaults to the full external URL or name of the linked internal data source and is overridden by this setting.
- **Internal link** - Toggle on to set an internal link. For an internal link, you can select the target data source with a data source selector. This supports only tracing data sources.
## Private data source connect (PDC) and Elasticsearch
Use private data source connect (PDC) to connect to and query data within a secure network without opening that network to inbound traffic from Grafana Cloud. See [Private data source connect](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/) for more information on how PDC works and [Configure Grafana private data source connect (PDC)](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/configure-pdc/#configure-grafana-private-data-source-connect-pdc) for steps on setting up a PDC connection.
If you use PDC with SIGv4 (AWS Signature Version 4 Authentication), the PDC agent must allow internet egress to`sts.<region>.amazonaws.com:443`.
- **Private data source connect** - Click in the box to set the default PDC connection from the dropdown menu or create a new connection.
Once you have configured your Elasticsearch data source options, click **Save & test** at the bottom to test out your data source connection. You can also remove a connection by clicking **Delete**.

---
aliases:
  - ../configure-elasticsearch-data-source/
description: Guide for configuring the Elasticsearch data source in Grafana
keywords:
  - grafana
  - elasticsearch
  - guide
  - data source
labels:
  products:
    - cloud
    - enterprise
    - oss
menuTitle: Configure
title: Configure the Elasticsearch data source
weight: 200
refs:
  administration-documentation:
    - pattern: /docs/grafana/
      destination: /docs/grafana/<GRAFANA_VERSION>/administration/data-source-management/
    - pattern: /docs/grafana-cloud/
      destination: /docs/grafana/<GRAFANA_VERSION>/administration/data-source-management/
  supported-expressions:
    - pattern: /docs/grafana/
      destination: /docs/grafana/<GRAFANA_VERSION>/explore/logs-integration/#log-level
    - pattern: /docs/grafana-cloud/
      destination: /docs/grafana/<GRAFANA_VERSION>/explore/logs-integration/#log-level
  query-and-transform-data:
    - pattern: /docs/grafana/
      destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/
    - pattern: /docs/grafana-cloud/
      destination: /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data/
  provisioning-data-source:
    - pattern: /docs/grafana/
      destination: /docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/configure/#provision-the-data-source
    - pattern: /docs/grafana-cloud/
      destination: /docs/grafana-cloud/connect-externally-hosted/data-sources/elasticsearch/configure/#provision-the-data-source
  configuration:
    - pattern: /docs/grafana/
      destination: /docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/#sigv4_auth_enabled
    - pattern: /docs/grafana-cloud/
      destination: /docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana/#sigv4_auth_enabled
  provisioning-grafana:
    - pattern: /docs/grafana/
      destination: /docs/grafana/<GRAFANA_VERSION>/administration/provisioning/
    - pattern: /docs/grafana-cloud/
      destination: /docs/grafana/<GRAFANA_VERSION>/administration/provisioning/
---
# Configure the Elasticsearch data source
Grafana ships with built-in support for Elasticsearch.
You can create a variety of queries to visualize logs or metrics stored in Elasticsearch, and annotate graphs with log events stored in Elasticsearch.
For instructions on how to add a data source to Grafana, refer to the [administration documentation](ref:administration-documentation).
Administrators can also [configure the data source via YAML](ref:provisioning-data-source) with Grafana's provisioning system.
## Before you begin
To configure the Elasticsearch data source, you need:
- **Grafana administrator permissions:** Only users with the organization `administrator` role can add data sources.
- **A supported Elasticsearch version:** v7.17 or later, v8.x, or v9.x. Elastic Cloud Serverless isn't supported.
- **Elasticsearch server URL:** The HTTP or HTTPS endpoint for your Elasticsearch instance, including the port (default: `9200`).
- **Authentication credentials:** Depending on your Elasticsearch security configuration, you need one of the following:
- Username and password for basic authentication
- API key
- No credentials (if Elasticsearch security is disabled)
- **Network access:** Grafana must be able to reach your Elasticsearch server. For Grafana Cloud, consider using [Private data source connect (PDC)](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/) if your Elasticsearch instance is in a private network.
## Elasticsearch permissions
When Elasticsearch security features are enabled, you must configure the following cluster privileges for the user or API key that Grafana uses to connect:
- **monitor** - Necessary to retrieve the version information of the connected Elasticsearch instance.
- **view_index_metadata** - Required for accessing mapping definitions of indices.
- **read** - Grants the ability to perform search and retrieval operations on indices. This is essential for querying and extracting data from the cluster.
## Add the data source
To add the Elasticsearch data source, complete the following steps:
1. Click **Connections** in the left-side menu.
1. Under **Connections**, click **Add new connection**.
1. Enter `Elasticsearch` in the search bar.
1. Click **Elasticsearch** under the **Data source** section.
1. Click **Add new data source** in the upper right.
Grafana takes you to the **Settings** tab, where you configure your Elasticsearch data source.
## Configuration options
Configure the following basic settings for the Elasticsearch data source:
- **Name** - The data source name. This is how you refer to the data source in panels and queries. Examples: `elastic-1`, `elasticsearch_metrics`.
- **Default** - Toggle on to make this the default data source. New panels and Explore queries use the default data source.
## Connection
- **URL** - The URL of your Elasticsearch server, including the port. Examples: `http://localhost:9200`, `http://elasticsearch.example.com:9200`.
## Authentication
Select an authentication method from the drop-down menu:
- **Basic authentication** - Enter the username and password for your Elasticsearch user.
- **Forward OAuth identity** - Forward the OAuth access token (and the OIDC ID token if available) of the user querying the data source.
- **No authentication** - Connect without credentials. Only use this option if your Elasticsearch instance doesn't require authentication.
### API key authentication
To authenticate using an Elasticsearch API key, select **No authentication** and configure the API key using HTTP headers:
1. In the **HTTP headers** section, click **+ Add header**.
1. Set **Header** to `Authorization`.
1. Set **Value** to `ApiKey <your-api-key>`, replacing `<your-api-key>` with your base64-encoded Elasticsearch API key.
For information about creating API keys, refer to the [Elasticsearch API keys documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html).
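If you provision the data source with YAML, you can set the same header there. The following is a minimal sketch, assuming the standard `httpHeaderName1`/`httpHeaderValue1` provisioning convention, with the header value kept in `secureJsonData` so it's stored encrypted; the data source name and key value are placeholders:

```yaml
apiVersion: 1

datasources:
  - name: Elastic-APIKey
    type: elasticsearch
    access: proxy
    url: http://localhost:9200
    jsonData:
      index: '[metrics-]YYYY.MM.DD'
      timeField: '@timestamp'
      httpHeaderName1: 'Authorization'
    secureJsonData:
      httpHeaderValue1: 'ApiKey <your-api-key>'
```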
### Amazon Elasticsearch Service
If you use Amazon Elasticsearch Service, you can use Grafana's Elasticsearch data source to visualize data from it.
If you use an AWS Identity and Access Management (IAM) policy to control access to your Amazon Elasticsearch Service domain, you must use AWS Signature Version 4 (AWS SigV4) to sign all requests to that domain.
For details on AWS SigV4, refer to the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html).
To sign requests to your Amazon Elasticsearch Service domain, you can enable SigV4 in Grafana's [configuration](ref:configuration).
Once AWS SigV4 is enabled, you can configure it on the Elasticsearch data source configuration page.
For more information about AWS authentication options, refer to [AWS authentication](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch/aws-authentication/).
{{< figure src="/static/img/docs/v73/elasticsearch-sigv4-config-editor.png" max-width="500px" class="docs-image--no-shadow" caption="SigV4 configuration for AWS Elasticsearch Service" >}}
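SigV4 signing is enabled in Grafana's server configuration, not on the data source itself. A minimal sketch of the relevant `grafana.ini` setting, per the [configuration](ref:configuration) reference linked above:

```ini
[auth]
sigv4_auth_enabled = true
```

Following Grafana's standard override convention, the same setting can be supplied as the `GF_AUTH_SIGV4_AUTH_ENABLED` environment variable, which is convenient for container deployments.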
### TLS settings
{{< admonition type="note" >}}
Use TLS (Transport Layer Security) for an additional layer of security when working with Elasticsearch. For information on setting up TLS encryption with Elasticsearch, refer to [Configure TLS](https://www.elastic.co/guide/en/elasticsearch/reference/8.8/configuring-tls.html#configuring-tls). You must add TLS settings to your Elasticsearch configuration file **prior** to setting these options in Grafana.
{{< /admonition >}}
- **Add self-signed certificate** - Check the box to verify the server using a CA certificate. Follow the instructions of the Certificate Authority (CA) to download the certificate file. Required for verifying self-signed TLS certificates.
- **TLS client authentication** - Check the box to enable TLS client authentication, where the server also authenticates the client. Add the **Server name**, **Client certificate**, and **Client key**. The **Server name** verifies the hostname on the certificate returned by the server. The **Client certificate** and **Client key** can be issued by a Certificate Authority (CA) or be self-signed; the client key encrypts the data between client and server.
- **Skip TLS certificate validation** - Check the box to bypass TLS certificate validation. Skipping TLS certificate validation is not recommended unless absolutely necessary or for testing purposes.
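When provisioning, these checkboxes map to flags in `jsonData`, and the certificate material belongs in `secureJsonData`. The following is a minimal sketch, assuming the standard `tlsAuth`, `tlsAuthWithCACert`, `tlsSkipVerify`, and `serverName` field names; the hostname and PEM contents are placeholders:

```yaml
apiVersion: 1

datasources:
  - name: Elastic-TLS
    type: elasticsearch
    access: proxy
    url: https://elasticsearch.example.com:9200
    jsonData:
      index: '[metrics-]YYYY.MM.DD'
      timeField: '@timestamp'
      tlsAuth: true # TLS client authentication
      tlsAuthWithCACert: true # Add self-signed certificate
      serverName: elasticsearch.example.com
    secureJsonData:
      tlsCACert: $ELASTIC_CA_CERT
      tlsClientCert: $ELASTIC_CLIENT_CERT
      tlsClientKey: $ELASTIC_CLIENT_KEY
```

Passing the PEM blocks through environment variables, as sketched here, keeps key material out of the provisioning file itself.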
### HTTP headers
Click **+ Add header** to add one or more HTTP headers. HTTP headers pass additional context and metadata about the request/response.
- **Header** - The name of the header. Use custom headers to pass whatever additional information your Elasticsearch instance requires.
- **Value** - The value of the header.
## Additional settings
Additional settings are optional settings that can be configured for more control over your data source.
### Advanced HTTP settings
- **Allowed cookies** - Specify cookies by name that should be forwarded to the data source. The Grafana proxy deletes all forwarded cookies by default.
- **Timeout** - The HTTP request timeout, in seconds. There is no default value.
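Both settings can also be provisioned. A minimal sketch, assuming the standard `keepCookies` and `timeout` keys in `jsonData`; the cookie name is a placeholder:

```yaml
apiVersion: 1

datasources:
  - name: Elastic
    type: elasticsearch
    access: proxy
    url: http://localhost:9200
    jsonData:
      index: '[metrics-]YYYY.MM.DD'
      timeField: '@timestamp'
      keepCookies: ['my_session_cookie'] # Allowed cookies, by name
      timeout: 30 # HTTP request timeout, in seconds
```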
### Elasticsearch details
The following settings are specific to the Elasticsearch data source.
- **Index name** - The name of your Elasticsearch index. You can use the following formats:
- **Wildcard patterns** - Use `*` to match multiple indices. Examples: `logs-*`, `metrics-*`, `filebeat-*`.
- **Time patterns** - Use date placeholders for time-based indices. Wrap the fixed portion in square brackets. Examples: `[logstash-]YYYY.MM.DD`, `[metrics-]YYYY.MM`.
- **Specific index** - Enter the exact index name. Example: `application-logs`.
- **Pattern** - Select the matching pattern if you use a time pattern in your index name. Options include:
- no pattern
- hourly
- daily
- weekly
- monthly
- yearly
Only select a pattern option if you have specified a time pattern in the Index name field.
- **Time field name** - Name of the time field. The default value is `@timestamp`. You can enter a different name.
- **Max concurrent shard requests** - Sets the number of shards being queried at the same time. The default is `5`. For more information on shards, refer to the [Elasticsearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/8.9/scalability.html#scalability).
- **Min time interval** - Defines a lower limit for the auto group-by time interval. This value **must** be formatted as a number followed by a valid time identifier:
| Identifier | Description |
| ---------- | ----------- |
| `y` | year |
| `M` | month |
| `w` | week |
| `d` | day |
| `h` | hour |
| `m` | minute |
| `s` | second |
| `ms` | millisecond |
We recommend setting this value to match your Elasticsearch write frequency.
For example, set this to `1m` if Elasticsearch writes data every minute.
You can also override this setting in a dashboard panel under its data source options. The default is `10s`.
- **X-Pack enabled** - Toggle to enable `X-Pack`-specific features and options, which provide the [query editor](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/query-editor/) with additional aggregations, such as `Rate` and `Top Metrics`.
- **Include frozen indices** - Available only when **X-Pack enabled** is toggled on. Toggle on to include [frozen indices](https://www.elastic.co/guide/en/elasticsearch/reference/7.13/frozen-indices.html) when performing search requests.
{{< admonition type="note" >}}
Frozen indices are [deprecated in Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/frozen-indices.html) since v7.14.
{{< /admonition >}}
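In provisioning, these toggles correspond to boolean flags in `jsonData`. A minimal sketch, assuming the `xpack` and `includeFrozen` keys:

```yaml
apiVersion: 1

datasources:
  - name: Elastic
    type: elasticsearch
    access: proxy
    url: http://localhost:9200
    jsonData:
      index: '[metrics-]YYYY.MM.DD'
      timeField: '@timestamp'
      xpack: true # X-Pack enabled
      includeFrozen: true # Include frozen indices (deprecated in Elasticsearch)
```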
### Logs
Configure which fields the data source uses for log messages and log levels.
- **Message field name** - The field that contains the log message content.
- **Level field name** - The field that contains log level or severity information. When specified, Grafana uses this field to determine the log level and color-code each log line. If the log doesn't have a level field, Grafana tries to match the content against [supported expressions](ref:supported-expressions). If Grafana can't determine the log level, it displays as unknown.
### Data links
Data links create a link from a specified field that can be accessed in Explore's logs view. You can add multiple data links by clicking **+ Add**.
Each data link configuration consists of:
- **Field** - Sets the name of the field used by the data link.
- **URL/query** - Sets the full link URL if the link is external. If the link is internal, this input serves as a query for the target data source.<br/>In both cases, you can interpolate the value from the field with the `${__value.raw}` macro.
- **URL Label** (Optional) - Sets a custom display label for the link. The link label defaults to the full external URL or name of the linked internal data source and is overridden by this setting.
- **Internal link** - Toggle on to set an internal link. For an internal link, you can select the target data source with a data source selector. This supports only tracing data sources.
## Private data source connect (PDC) and Elasticsearch
Use private data source connect (PDC) to connect to and query data within a secure network without opening that network to inbound traffic from Grafana Cloud. Refer to [Private data source connect](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/) for more information on how PDC works and [Configure Grafana private data source connect (PDC)](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/configure-pdc/#configure-grafana-private-data-source-connect-pdc) for steps on setting up a PDC connection.
If you use PDC with SigV4 (AWS Signature Version 4 Authentication), the PDC agent must allow internet egress to `sts.<region>.amazonaws.com:443`.
- **Private data source connect** - Click in the box to set the default PDC connection from the drop-down menu or create a new connection.
Once you have configured your Elasticsearch data source options, click **Save & test** to test the connection. A successful connection displays the following message:
`Elasticsearch data source is healthy.`
## Provision the data source
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana](ref:provisioning-grafana).
{{< admonition type="note" >}}
The previously used `database` field has now been [deprecated](https://github.com/grafana/grafana/pull/58647).
Use the `index` field in `jsonData` to store the index name.
Refer to the examples below.
{{< /admonition >}}
### Basic provisioning
```yaml
apiVersion: 1

datasources:
  - name: Elastic
    type: elasticsearch
    access: proxy
    url: http://localhost:9200
    jsonData:
      index: '[metrics-]YYYY.MM.DD'
      interval: Daily
      timeField: '@timestamp'
```
### Provision for logs
```yaml
apiVersion: 1

datasources:
  - name: elasticsearch-v7-filebeat
    type: elasticsearch
    access: proxy
    url: http://localhost:9200
    jsonData:
      index: '[filebeat-]YYYY.MM.DD'
      interval: Daily
      timeField: '@timestamp'
      logMessageField: message
      logLevelField: fields.level
      dataLinks:
        - datasourceUid: my_jaeger_uid # Target UID needs to be known
          field: traceID
          url: '$${__value.raw}' # Careful about the double "$$" because of env var expansion
```
## Provision the data source using Terraform
You can provision the Elasticsearch data source using [Terraform](https://www.terraform.io/) with the [Grafana Terraform provider](https://registry.terraform.io/providers/grafana/grafana/latest/docs).
For more information about provisioning resources with Terraform, refer to the [Grafana as code using Terraform](https://grafana.com/docs/grafana-cloud/developer-resources/infrastructure-as-code/terraform/) documentation.
### Basic Terraform example
The following example creates a basic Elasticsearch data source for metrics:
```hcl
resource "grafana_data_source" "elasticsearch" {
  name = "Elasticsearch"
  type = "elasticsearch"
  url  = "http://localhost:9200"

  json_data_encoded = jsonencode({
    index     = "[metrics-]YYYY.MM.DD"
    interval  = "Daily"
    timeField = "@timestamp"
  })
}
```
### Terraform example for logs
The following example creates an Elasticsearch data source configured for logs with a data link to Jaeger:
```hcl
resource "grafana_data_source" "elasticsearch_logs" {
  name = "Elasticsearch Logs"
  type = "elasticsearch"
  url  = "http://localhost:9200"

  json_data_encoded = jsonencode({
    index           = "[filebeat-]YYYY.MM.DD"
    interval        = "Daily"
    timeField       = "@timestamp"
    logMessageField = "message"
    logLevelField   = "fields.level"
    dataLinks = [
      {
        datasourceUid = grafana_data_source.jaeger.uid
        field         = "traceID"
        url           = "$${__value.raw}"
      }
    ]
  })
}
```
### Terraform example with basic authentication
The following example includes basic authentication:
```hcl
resource "grafana_data_source" "elasticsearch_auth" {
  name = "Elasticsearch"
  type = "elasticsearch"
  url  = "http://localhost:9200"

  basic_auth_enabled  = true
  basic_auth_username = "elastic_user"

  secure_json_data_encoded = jsonencode({
    basicAuthPassword = var.elasticsearch_password
  })

  json_data_encoded = jsonencode({
    index     = "[metrics-]YYYY.MM.DD"
    interval  = "Daily"
    timeField = "@timestamp"
  })
}
```
For all available configuration options, refer to the [Grafana provider data source resource documentation](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/data_source).

# Elasticsearch query editor
Grafana provides a query editor for Elasticsearch. Elasticsearch queries are in Lucene format.
For more information about query syntax, refer to [Lucene query syntax](https://www.elastic.co/guide/en/kibana/current/lucene-query.html) and [Query string syntax](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-string-syntax).
{{< admonition type="note" >}}
When composing Lucene queries, ensure that you use uppercase boolean operators: `AND`, `OR`, and `NOT`. Lowercase versions of these operators are not supported by the Lucene query syntax.
{{< /admonition >}}
{{< figure src="/static/img/docs/elasticsearch/elastic-query-editor-10.1.png" max-width="800px" class="docs-image--no-shadow" caption="Elasticsearch query editor" >}}
For general documentation on querying data sources in Grafana, including options and functions common to all query editors, refer to [Query and transform data](ref:query-and-transform-data).
## Aggregation types
Elasticsearch groups aggregations into three categories:
- **Bucket** - Bucket aggregations don't calculate metrics, they create buckets of documents based on field values, ranges and a variety of other criteria. Refer to [Bucket aggregations](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket.html) for additional information. Use bucket aggregations under `Group by` when creating a metrics query in the query builder.
- **Metrics** - Metrics aggregations perform calculations such as sum, average, min, etc. They can be single-value or multi-value. Refer to [Metrics aggregations](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics.html) for additional information. Use metrics aggregations in the metrics query type in the query builder.
- **Pipeline** - Pipeline aggregations work on the output of other aggregations rather than on documents or fields. Refer to [Pipeline aggregations](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline.html) for additional information.
## Select a query type
There are three types of queries you can create with the Elasticsearch query builder.
### Metrics query type
Metrics queries aggregate data and produce calculations such as count, min, max, and more. Click the metric box to view options in the drop-down menu. The default is `count`.
- **Alias** - Aliasing only applies to **time series queries**, where the last group is `date histogram`. This is ignored for any other type of query.
- **Metric** - Metrics aggregations include:
- count - refer to [Value count aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-valuecount-aggregation.html)
- average - refer to [Avg aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-avg-aggregation.html)
- sum - refer to [Sum aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-sum-aggregation.html)
- max - refer to [Max aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-max-aggregation.html)
- min - refer to [Min aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-min-aggregation.html)
- extended stats - refer to [Extended stats aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-extendedstats-aggregation.html)
- percentiles - refer to [Percentiles aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-percentile-aggregation.html)
- unique count - refer to [Cardinality aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-cardinality-aggregation.html)
- top metrics - refer to [Top metrics aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-top-metrics.html)
- rate - refer to [Rate aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-rate-aggregation.html)
- **Pipeline aggregations** - Pipeline aggregations work on the output of other aggregations rather than on documents. The following pipeline aggregations are available:
- moving function - Calculates a value based on a sliding window of aggregated values. Refer to [Moving function aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-movfn-aggregation.html).
- derivative - Calculates the derivative of a metric. Refer to [Derivative aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-derivative-aggregation.html).
- cumulative sum - Calculates the cumulative sum of a metric. Refer to [Cumulative sum aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-cumulative-sum-aggregation.html).
- serial difference - Calculates the difference between values in a time series. Refer to [Serial differencing aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-serialdiff-aggregation.html).
- bucket script - Executes a script on metric values from other aggregations. Refer to [Bucket script aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-bucket-script-aggregation.html).
You can select multiple metrics and group by multiple terms or filters when using the Elasticsearch query editor.
Use the **+ sign** to the right to add multiple metrics to your query. Click on the **eye icon** next to **Metric** to hide metrics, and the **garbage can icon** to remove metrics.
- **Group by options** - Create multiple group by options when constructing your Elasticsearch query. Date histogram is the default option. The following options are available in the drop-down menu:
- terms - refer to [Terms aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html).
- filter - refer to [Filter aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-filter-aggregation.html).
- geo hash grid - refer to [Geohash grid aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-geohashgrid-aggregation.html).
- date histogram - for time series queries. Refer to [Date histogram aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-datehistogram-aggregation.html).
- histogram - Depicts frequency distributions. Refer to [Histogram aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-histogram-aggregation.html).
- nested (experimental) - Refer to [Nested aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-nested-aggregation.html).
Each group by option has a different subset of options to further narrow your query.
The following options are specific to the **date histogram** bucket aggregation option.
- **Time field** - The field used for time-based queries. The default can be set when configuring the data source in the **Time field name** setting under [Elasticsearch details](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/configure/#elasticsearch-details). The default is `@timestamp`.
- **Interval** - The time interval for grouping data. Select from the drop-down menu or enter a custom interval such as `30d` (30 days). The default is `Auto`.
- **Min doc count** - The minimum number of documents required to include a bucket. The default is `0`.
- **Trim edges** - Removes partial buckets at the edges of the time range. The default is `0`.
- **Offset** - Shifts the start of each bucket by the specified duration. Use positive (`+`) or negative (`-`) values. Examples: `1h`, `5s`, `1d`.
- **Timezone** - The timezone for date calculations. The default is `Coordinated Universal Time`.
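Under the hood, these settings translate into an Elasticsearch `date_histogram` aggregation. The following is a rough sketch of the generated aggregation body, assuming a 1-minute interval with a 1-hour offset (the field and aggregation names are illustrative, not what Grafana literally emits):

```json
{
  "aggs": {
    "by_time": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "1m",
        "offset": "1h",
        "min_doc_count": 0,
        "time_zone": "UTC"
      }
    }
  }
}
```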
Configure the following options for the **terms** bucket aggregation option:
- **Size** - Limits the number of documents, or size of the data set. You can set a custom number or `no limit`.
- **Min doc count** - The minimum amount of data to include in your query. The default is `0`.
- **Order by** - Order terms by `term value`, `doc count` or `count`.
- **Missing** - Defines how documents missing a value should be treated. Missing values are ignored by default, but they can be treated as if they had a value. Refer to [Missing value](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#_missing_value_5) in the Elasticsearch documentation for more information.
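For example, the **Missing** setting corresponds to the `missing` parameter of the terms aggregation. A sketch of the generated aggregation, assuming documents without a `hostname.keyword` value should be bucketed under `unknown` (field and bucket names are illustrative):

```json
{
  "aggs": {
    "by_host": {
      "terms": {
        "field": "hostname.keyword",
        "size": 10,
        "min_doc_count": 0,
        "missing": "unknown"
      }
    }
  }
}
```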
Configure the following options for the **filters** bucket aggregation option:
Configure the following options for the **geo hash grid** bucket aggregation option:
Configure the following options for the **histogram** bucket aggregation option:
- **Interval** - The numeric interval for grouping values into buckets.
- **Min doc count** - The minimum number of documents required to include a bucket. The default is `0`.
The **nested** group by option is currently experimental. After you select a field, you can configure settings specific to that field.
The option to run a **raw document query** is deprecated as of Grafana v10.1.
## Use template variables
You can also augment queries by using [template variables](https://grafana.com/docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch/template-variables/).
Queries of `terms` have a 500-result limit by default.
To set a custom limit, set the `size` property in your query.
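For example, to raise the limit to 1000 results (the field name is illustrative):

```json
{ "find": "terms", "field": "hostname.keyword", "size": 1000 }
```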


refs:
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/
add-template-variables-add-ad-hoc-filters:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/add-template-variables/#add-ad-hoc-filters
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/add-template-variables/#add-ad-hoc-filters
add-template-variables-multi-value-variables:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/add-template-variables/#multi-value-variables
# Elasticsearch template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in drop-down select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For an introduction to templating and template variables, refer to the [Templating](ref:variables) and [Add and manage variables](ref:add-template-variables) documentation.
## Use ad hoc filters
Elasticsearch supports the **Ad hoc filters** variable type.
You can use this variable type to specify any number of key/value filters, and Grafana applies them automatically to all of your Elasticsearch queries.
Ad hoc filters support the following operators:
| Operator | Description |
| -------- | ------------------------------------------------------------- |
| `=` | Equals. Adds `AND field:"value"` to the query. |
| `!=` | Not equals. Adds `AND -field:"value"` to the query. |
| `=~` | Matches regex. Adds `AND field:/value/` to the query. |
| `!~` | Does not match regex. Adds `AND -field:/value/` to the query. |
| `>` | Greater than. Adds `AND field:>value` to the query. |
| `<` | Less than. Adds `AND field:<value` to the query. |
For more information, refer to [Add ad hoc filters](ref:add-template-variables-add-ad-hoc-filters).
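For example, adding the ad hoc filters `level = error` and `hostname != web-01` (names and values are illustrative) to a base query of `message:timeout` produces a Lucene query equivalent to:

```
message:timeout AND level:"error" AND -hostname:"web-01"
```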
## Choose a variable syntax
The Elasticsearch data source supports two variable syntaxes for use in the **Query** field:
- `[[varname]]`, such as `hostname:[[hostname]]`
When the _Multi-value_ or _Include all value_ options are enabled, Grafana converts the labels from plain text to a Lucene-compatible condition.
For details, refer to the [Multi-value variables](ref:add-template-variables-multi-value-variables) documentation.
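For example, with a multi-value `$hostname` variable where `web-01` and `web-02` are selected (hypothetical values), a query such as `hostname:$hostname` is interpolated into a Lucene-compatible condition along these lines:

```
hostname:("web-01" OR "web-02")
```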
## Use variables in queries
You can use variables in the Lucene query field, metric aggregation fields, bucket aggregation fields, and the alias field.

### Variables in Lucene queries

Use variables to filter your Elasticsearch queries dynamically:

```
hostname:$hostname AND level:$level
```

### Chain or nest variables

You can create nested variables, where one variable's values depend on another variable's selection.

This example defines a variable named `$host` that only shows hosts matching the selected `$environment`:

```json
{ "find": "terms", "field": "hostname", "query": "environment:$environment" }
```

Whenever you change the value of the `$environment` variable via the drop-down, Grafana triggers an update of the `$host` variable to contain only hostnames filtered by the selected environment.
### Variables in aggregations
You can use variables in bucket aggregation fields to dynamically change how data is grouped. For example, use a variable in the **Terms** group by field to let users switch between grouping by `hostname`, `service`, or `datacenter`.
## Template variable examples
Write the query using a custom JSON string, with the field mapped as a keyword in your Elasticsearch index mapping.
If the query is [multi-field](https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-fields.html) with both a `text` and `keyword` type, use `"field":"fieldname.keyword"` (sometimes `fieldname.raw`) to specify the keyword field in your query.
| Query | Description |
| ------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------ |
| `{"find": "fields", "type": "keyword"}` | Returns a list of field names with the index type `keyword`. |
| `{"find": "fields", "type": "number"}` | Returns a list of numeric field names (includes `float`, `double`, `integer`, `long`, `scaled_float`). |
| `{"find": "fields", "type": "date"}` | Returns a list of date field names. |
| `{"find": "terms", "field": "hostname.keyword", "size": 1000}` | Returns a list of values for a keyword field. Uses the current dashboard time range. |
| `{"find": "terms", "field": "hostname", "query": "<Lucene query>"}` | Returns a list of values filtered by a Lucene query. Uses the current dashboard time range. |
| `{"find": "terms", "field": "status", "orderBy": "doc_count"}` | Returns values sorted by document count (descending by default). |
| `{"find": "terms", "field": "status", "orderBy": "doc_count", "order": "asc"}` | Returns values sorted by document count in ascending order. |
Queries of `terms` have a 500-result limit by default. To set a custom limit, set the `size` property in your query.
### Sort query results
By default, queries return results in term order (which can then be sorted alphabetically or numerically using the variable's Sort setting).
To produce a list of terms sorted by document count (a top-N values list), add an `orderBy` property of `doc_count`. This automatically selects a descending sort:
```json
{ "find": "terms", "field": "status", "orderBy": "doc_count" }
```
You can also use the `order` property to explicitly set ascending or descending sort:
```json
{ "find": "terms", "field": "hostname", "orderBy": "doc_count", "order": "asc" }
```
{{< admonition type="note" >}}
Elasticsearch [discourages](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#search-aggregations-bucket-terms-aggregation-order) sorting by ascending doc count because it can return inaccurate results.
{{< /admonition >}}
To keep terms in the document count order, set the variable's Sort drop-down to **Disabled**. You can alternatively use other sorting criteria, such as **Alphabetical**, to re-sort them.


---
aliases:
- ../../data-sources/elasticsearch/troubleshooting/
description: Troubleshooting the Elasticsearch data source in Grafana
keywords:
- grafana
- elasticsearch
- troubleshooting
- errors
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Troubleshooting
title: Troubleshoot issues with the Elasticsearch data source
weight: 600
---
# Troubleshoot issues with the Elasticsearch data source
This document provides troubleshooting information for common errors you may encounter when using the Elasticsearch data source in Grafana.
## Connection errors
The following errors occur when Grafana cannot establish or maintain a connection to Elasticsearch.
### Failed to connect to Elasticsearch
**Error message:** "Health check failed: Failed to connect to Elasticsearch"
**Cause:** Grafana cannot establish a network connection to the Elasticsearch server.
**Solution:**
1. Verify that the Elasticsearch URL is correct in the data source configuration.
1. Check that Elasticsearch is running and accessible from the Grafana server.
1. Ensure there are no firewall rules blocking the connection.
1. If using a proxy, verify the proxy settings are correct.
1. For Grafana Cloud, ensure you have configured [Private data source connect](https://grafana.com/docs/grafana-cloud/connect-externally-hosted/private-data-source-connect/) if your Elasticsearch instance is not publicly accessible.
### Request timed out
**Error message:** "Health check failed: Elasticsearch data source is not healthy. Request timed out"
**Cause:** The connection to Elasticsearch timed out before receiving a response.
**Solution:**
1. Check the network latency between Grafana and Elasticsearch.
1. Verify that Elasticsearch is not overloaded or experiencing performance issues.
1. Increase the timeout setting in the data source configuration if needed.
1. Check if any network devices (load balancers, proxies) are timing out the connection.
### Failed to parse data source URL
**Error message:** "Failed to parse data source URL"
**Cause:** The URL entered in the data source configuration is not valid.
**Solution:**
1. Verify the URL format is correct (for example, `http://localhost:9200` or `https://elasticsearch.example.com:9200`).
1. Ensure the URL includes the protocol (`http://` or `https://`).
1. Remove any trailing slashes or invalid characters from the URL.
## Authentication errors
The following errors occur when there are issues with authentication credentials or permissions.
### Unauthorized (401)
**Error message:** "Health check failed: Elasticsearch data source is not healthy. Status: 401 Unauthorized"
**Cause:** The authentication credentials are invalid or missing.
**Solution:**
1. Verify that the username and password are correct.
1. If using an API key, ensure the key is valid and has not expired.
1. Check that the authentication method selected matches your Elasticsearch configuration.
1. Verify the user has the required permissions to access the Elasticsearch cluster.
### Forbidden (403)
**Error message:** "Health check failed: Elasticsearch data source is not healthy. Status: 403 Forbidden"
**Cause:** The authenticated user does not have permission to access the requested resource.
**Solution:**
1. Verify the user has read access to the specified index.
1. Check Elasticsearch security settings and role mappings.
1. Ensure the user has permission to access the `_cluster/health` endpoint.
1. If using AWS Elasticsearch Service with SigV4 authentication, verify the IAM policy grants the required permissions.
## Cluster health errors
The following errors occur when the Elasticsearch cluster is unhealthy or unavailable.
### Cluster status is red
**Error message:** "Health check failed: Elasticsearch data source is not healthy"
**Cause:** The Elasticsearch cluster health status is red, indicating one or more primary shards are not allocated.
**Solution:**
1. Check the Elasticsearch cluster health using `GET /_cluster/health`.
1. Review Elasticsearch logs for errors.
1. Verify all nodes in the cluster are running and connected.
1. Check for unassigned shards using `GET /_cat/shards?v&h=index,shard,prirep,state,unassigned.reason`.
1. Consider increasing the cluster's resources or reducing the number of shards.
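The steps above can be sketched as a small helper that interprets the JSON body returned by `GET /_cluster/health`. This is a hypothetical script for illustration, not part of Grafana or the Elasticsearch client:

```python
# Hypothetical helper: summarize the body returned by Elasticsearch's
# GET /_cluster/health endpoint into a human-readable verdict.
def summarize_health(health: dict) -> str:
    status = health.get("status", "unknown")
    unassigned = health.get("unassigned_shards", 0)
    if status == "green":
        return "healthy"
    if status == "yellow":
        return f"degraded: {unassigned} unassigned replica shard(s)"
    if status == "red":
        return f"unhealthy: {unassigned} unassigned shard(s), primaries missing"
    return "unknown status"

# Example (abbreviated) response body
resp = {"status": "red", "unassigned_shards": 4}
print(summarize_health(resp))  # unhealthy: 4 unassigned shard(s), primaries missing
```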
### Bad Gateway (502)
**Error message:** "Health check failed: Elasticsearch data source is not healthy. Status: 502 Bad Gateway"
**Cause:** A proxy or load balancer between Grafana and Elasticsearch returned an error.
**Solution:**
1. Check the health of any proxies or load balancers in the connection path.
1. Verify Elasticsearch is running and accepting connections.
1. Review proxy/load balancer logs for more details.
1. Ensure the proxy timeout is configured appropriately for Elasticsearch requests.
## Index errors
The following errors occur when there are issues with the configured index or index pattern.
### Index not found
**Error message:** "Error validating index: index_not_found"
**Cause:** The specified index or index pattern does not match any existing indices.
**Solution:**
1. Verify the index name or pattern in the data source configuration.
1. Check that the index exists using `GET /_cat/indices`.
1. If using a time-based index pattern (for example, `[logs-]YYYY.MM.DD`), ensure indices exist for the selected time range.
1. Verify the user has permission to access the index.
### Time field not found
**Error message:** "Could not find time field '@timestamp' with type date in index"
**Cause:** The specified time field does not exist in the index or is not of type `date`.
**Solution:**
1. Verify the time field name in the data source configuration matches the field in your index.
1. Check the field mapping using `GET /<index>/_mapping`.
1. Ensure the time field is mapped as a `date` type, not `text` or `keyword`.
1. If the field name is different (for example, `timestamp` instead of `@timestamp`), update the data source configuration.
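The mapping check in step 2 can be sketched as follows. This is a hypothetical illustration of what the health check requires of the configured time field, not Grafana's actual code:

```python
# Hypothetical check: verify that a field exists in a GET /<index>/_mapping
# response and is of type "date", as Grafana requires for the time field.
def is_valid_time_field(mapping: dict, index: str, field: str) -> bool:
    props = mapping.get(index, {}).get("mappings", {}).get("properties", {})
    return props.get(field, {}).get("type") == "date"

# Abbreviated GET /<index>/_mapping response
mapping = {
    "logs-2025.01": {
        "mappings": {
            "properties": {
                "@timestamp": {"type": "date"},
                "message": {"type": "text"},
            }
        }
    }
}
print(is_valid_time_field(mapping, "logs-2025.01", "@timestamp"))  # True
print(is_valid_time_field(mapping, "logs-2025.01", "message"))     # False
```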
## Query errors
The following errors occur when there are issues with query syntax or configuration.
### Too many buckets
**Error message:** "Trying to create too many buckets. Must be less than or equal to: [65536]."
**Cause:** The query is generating more aggregation buckets than Elasticsearch allows.
**Solution:**
1. Reduce the time range of your query.
1. Increase the date histogram interval (for example, change from `10s` to `1m`).
1. Add filters to reduce the number of documents being aggregated.
1. Increase the `search.max_buckets` setting in Elasticsearch (requires cluster admin access).
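A quick back-of-the-envelope check shows why widening the interval helps. This sketch assumes one date histogram bucket per interval step, which is the worst case before any other group by multiplies the count:

```python
# Estimate how many date histogram buckets a query generates:
# roughly one bucket per interval step across the time range.
def bucket_count(range_seconds: int, interval_seconds: int) -> int:
    return range_seconds // interval_seconds

MAX_BUCKETS = 65536  # Elasticsearch default for search.max_buckets

# 30 days at a 10-second interval blows past the limit...
print(bucket_count(30 * 24 * 3600, 10))  # 259200
# ...while a 1-minute interval stays well under it.
print(bucket_count(30 * 24 * 3600, 60))  # 43200
```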
### Required field missing
**Error message:** "Required one of fields [field, script], but none were specified."
**Cause:** A metric aggregation (such as Average, Sum, or Min) was added without specifying a field.
**Solution:**
1. Select a field for the metric aggregation in the query editor.
1. Ensure the selected field exists in your index and contains numeric data.
### Unsupported interval
**Error message:** "unsupported interval '&lt;interval&gt;'"
**Cause:** The interval specified for the index pattern is not valid.
**Solution:**
1. Use a supported interval: `Hourly`, `Daily`, `Weekly`, `Monthly`, or `Yearly`.
1. If you don't need a time-based index pattern, use `No pattern` and specify the exact index name.
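To see how a time-based pattern resolves, the following sketch expands a `Daily` pattern such as `[logs-]YYYY.MM.DD` into concrete index names for a time range. This is an illustration of the pattern semantics, not Grafana's implementation:

```python
from datetime import date, timedelta

# Expand a Daily index pattern (prefix outside the brackets, date suffix
# formatted as YYYY.MM.DD) into one index name per day in the range.
def expand_daily_pattern(prefix: str, start: date, end: date) -> list[str]:
    days = (end - start).days
    return [
        (start + timedelta(days=i)).strftime(f"{prefix}%Y.%m.%d")
        for i in range(days + 1)
    ]

print(expand_daily_pattern("logs-", date(2025, 1, 30), date(2025, 2, 1)))
# ['logs-2025.01.30', 'logs-2025.01.31', 'logs-2025.02.01']
```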
## Version errors
The following errors occur when there are Elasticsearch version compatibility issues.
### Unsupported Elasticsearch version
**Error message:** "Support for Elasticsearch versions after their end-of-life (currently versions &lt; 7.16) was removed. Using unsupported version of Elasticsearch may lead to unexpected and incorrect results."
**Cause:** The Elasticsearch version is no longer supported by the Grafana data source.
**Solution:**
1. Upgrade Elasticsearch to a supported version (7.17+, 8.x, or 9.x).
1. Refer to [Elastic Product End of Life Dates](https://www.elastic.co/support/eol) for version support information.
1. Note that queries may still work, but Grafana does not guarantee functionality for unsupported versions.
## Other common issues
The following issues don't produce specific error messages but are commonly encountered.
### Empty query results
**Cause:** The query returns no data.
**Solution:**
1. Verify the time range includes data in your index.
1. Check the Lucene query syntax for errors.
1. Test the query directly in Elasticsearch using the `_search` API.
1. Ensure the index contains documents matching your query filters.
### Slow query performance
**Cause:** Queries take a long time to execute.
**Solution:**
1. Reduce the time range of your query.
1. Add more specific filters to limit the data scanned.
1. Increase the date histogram interval.
1. Check Elasticsearch cluster performance and resource utilization.
1. Consider using index aliases or data streams for better query routing.
### CORS errors in browser console
**Cause:** Cross-Origin Resource Sharing (CORS) is blocking requests from the browser to Elasticsearch.
**Solution:**
1. Use Server (proxy) access mode instead of Browser access mode in the data source configuration.
1. If Browser access is required, configure CORS settings in Elasticsearch:
```yaml
http.cors.enabled: true
http.cors.allow-origin: '<your-grafana-url>'
http.cors.allow-headers: 'Authorization, Content-Type'
http.cors.allow-credentials: true
```
{{< admonition type="note" >}}
Server (proxy) access mode is recommended for security and reliability.
{{< /admonition >}}
## Get additional help
If you continue to experience issues after following this troubleshooting guide:
1. Check the [Elasticsearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html) for API-specific guidance.
1. Review the [Grafana community forums](https://community.grafana.com/) for similar issues.
1. Contact Grafana Support if you have an Enterprise license.


---
description: Learn how to troubleshoot common problems with the Grafana MySQL data source plugin
keywords:
- grafana
- mysql
- query
labels:
products:
- cloud
- enterprise
- oss
menuTitle: Troubleshoot
title: Troubleshoot common problems with the Grafana MySQL data source plugin
weight: 40
refs:
variables:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/variables/
variable-syntax-advanced-variable-format-options:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/variables/variable-syntax/#advanced-variable-format-options
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/variables/variable-syntax/#advanced-variable-format-options
annotate-visualizations:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/dashboards/build-dashboards/annotate-visualizations/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/dashboards/build-dashboards/annotate-visualizations/
explore:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana/<GRAFANA_VERSION>/explore/
query-transform-data:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data/
panel-inspector:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/panel-inspector/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/panel-inspector/
query-editor:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/#query-editors
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data/#query-editors
alert-rules:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/
template-annotations-and-labels:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/templates/
- pattern: /docs/grafana-cloud/
destination: /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/templates/
configure-standard-options:
- pattern: /docs/grafana/
destination: /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/configure-standard-options/
---
# Troubleshoot common problems with the Grafana MySQL data source plugin
This page lists common issues you might experience when setting up the Grafana MySQL data source plugin.
### My data source connection fails when using the Grafana MySQL data source plugin
- Check if the MySQL server is up and running.
- Make sure that your firewall is open for MySQL server (default port is `3306`).
- Ensure that you have the correct permissions to access both the MySQL server and the database.
- If the error persists, create a new user for the Grafana MySQL data source plugin with correct permissions and try to connect with it.
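As a sketch, you might create a dedicated read-only user for Grafana like this (the user name, password placeholder, and database name `mydb` are hypothetical):

```sql
-- Create a dedicated user for the Grafana data source
CREATE USER 'grafana'@'%' IDENTIFIED BY '<strong-password>';
-- Grant read-only access to the database Grafana queries
GRANT SELECT ON mydb.* TO 'grafana'@'%';
FLUSH PRIVILEGES;
```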
### What should I do if I see "An unexpected error happened" or "Could not connect to MySQL" after trying all of the above?
- Check the Grafana logs for more details about the error.
- For Grafana Cloud customers, contact support.


This topic lists words and abbreviations that are commonly used in the Grafana documentation.
A commonly-used visualization that displays data as points, lines, or bars.
</td>
</tr>
<tr>
<td style="vertical-align: top"><code>grafanactl</code></td>
<td>
A command-line tool that enables users to authenticate, manage multiple environments, and perform administrative tasks through Grafana's REST API.
</td>
</tr>
<tr>
<td style="vertical-align: top">mixin</td>
<td>


Add links to other dashboards at the top of your current dashboard.
- **Include current time range** Select this option to include the dashboard time range in the link. When the user clicks the link, the linked dashboard opens with the indicated time range already set. **Example:** https://play.grafana.org/d/000000010/annotations?orgId=1&from=now-3h&to=now
- **Include current template variable values** Select this option to include template variables currently used as query parameters in the link. When the user clicks the link, any matching templates in the linked dashboard are set to the values from the link. For more information, see [Dashboard URL variables](ref:dashboard-url-variables).
- **Open link in new tab** Select this option if you want the dashboard link to open in a new tab or window.
- **Show in controls menu** Select this option to display the link in the dashboard controls menu instead of at the top of the dashboard. The dashboard controls menu appears as a button in the dashboard toolbar.
1. Click **Save dashboard** in the top-right corner.
1. Click **Back to dashboard** and then **Exit edit**.
Add a link to a URL at the top of your current dashboard. You can link to any available URL.
- **Include current time range** Select this option to include the dashboard time range in the link. When the user clicks the link, the linked dashboard opens with the indicated time range already set. **Example:** https://play.grafana.org/d/000000010/annotations?orgId=1&from=now-3h&to=now
- **Include current template variable values** Select this option to include template variables currently used as query parameters in the link. When the user clicks the link, any matching templates in the linked dashboard are set to the values from the link.
- **Open link in new tab** Select this option if you want the dashboard link to open in a new tab or window.
- **Show in controls menu** Select this option to display the link in the dashboard controls menu instead of at the top of the dashboard. The dashboard controls menu appears as a button in the dashboard toolbar.
1. Click **Save dashboard** in the top-right corner.
1. Click **Back to dashboard** and then **Exit edit**.

View File

@@ -123,10 +123,11 @@ To create a variable, follow these steps:
If you don't enter a display name, then the drop-down list label is the variable name.
1. Choose a **Show on dashboard** option:
- **Label and value** - The variable drop-down list displays the variable **Name** or **Label** value. This is the default.
- **Value:** The variable drop-down list only displays the selected variable value and a down arrow.
- **Nothing:** No variable drop-down list is displayed on the dashboard.
1. Choose a **Display** option:
- **Above dashboard** - The variable drop-down list displays above the dashboard with the variable **Name** or **Label** value. This is the default.
- **Above dashboard, label hidden** - The variable drop-down list displays above the dashboard, but without showing the name of the variable.
- **Controls menu** - The variable is displayed in the dashboard controls menu instead of above the dashboard. The dashboard controls menu appears as a button in the dashboard toolbar.
- **Hidden** - No variable drop-down list is displayed on the dashboard.
1. Click one of the following links to complete the steps for adding your selected variable type:
- [Query](#add-a-query-variable)

View File

@@ -12,12 +12,13 @@ comments: |
To build this Markdown, do the following:
$ cd /docs (from the root of the repository)
$ make sources/panels-visualizations/query-transform-data/transform-data/index.md
$ make sources/visualizations/panels-visualizations/query-transform-data/transform-data/index.md
$ make docs
Browse to http://localhost:3003/docs/grafana/latest/panels-visualizations/query-transform-data/transform-data/
Refer to ./docs/README.md "Content guidelines" for more information about editing and building these docs.
aliases:
- ../../../panels/transform-data/ # /docs/grafana/next/panels/transform-data/
- ../../../panels/transform-data/about-transformation/ # /docs/grafana/next/panels/transform-data/about-transformation/

View File

@@ -8,6 +8,7 @@ test.use({
scopeFilters: true,
groupByVariable: true,
reloadDashboardsOnParamsChange: true,
useScopesNavigationEndpoint: true,
},
});
@@ -61,31 +62,6 @@ test.describe('Scope Redirect Functionality', () => {
});
});
test('should fall back to scope navigation when no redirectUrl', async ({ page, gotoDashboardPage }) => {
const scopes = testScopesWithRedirect();
await test.step('Navigate to dashboard and open scopes selector', async () => {
await gotoDashboardPage({ uid: 'cuj-dashboard-1' });
await openScopesSelector(page, scopes);
});
await test.step('Select scope without redirectUrl', async () => {
// Select the scope without redirectUrl directly
await selectScope(page, 'sn-redirect-fallback', scopes[1]);
});
await test.step('Apply scopes and verify fallback behavior', async () => {
await applyScopes(page, [scopes[1]]);
// Should stay on current dashboard since no redirectUrl is provided
// The scope navigation fallback should not redirect (as per existing behavior)
await expect(page).toHaveURL(/\/d\/cuj-dashboard-1/);
// Verify the scope was applied
await expect(page).toHaveURL(/scopes=scope-sn-redirect-fallback/);
});
});
test('should not redirect when reloading page on dashboard not in dashboard list', async ({
page,
gotoDashboardPage,
@@ -171,4 +147,47 @@ test.describe('Scope Redirect Functionality', () => {
await expect(page).not.toHaveURL(/scopes=/);
});
});
test('should not redirect to redirectPath when on active scope navigation', async ({ page, gotoDashboardPage }) => {
const scopes = testScopesWithRedirect();
await test.step('Set up scope navigation to dashboard-1', async () => {
// First, apply a scope that creates scope navigation to dashboard-1 (without redirectPath)
await gotoDashboardPage({ uid: 'cuj-dashboard-1' });
await openScopesSelector(page, scopes);
await selectScope(page, 'sn-redirect-setup', scopes[2]);
await applyScopes(page, [scopes[2]]);
// Verify we're on dashboard-1 with the scope applied
await expect(page).toHaveURL(/\/d\/cuj-dashboard-1/);
await expect(page).toHaveURL(/scopes=scope-sn-redirect-setup/);
});
await test.step('Navigate to dashboard-1 to be on active scope navigation', async () => {
// Navigate to dashboard-1 which is now a scope navigation target
await gotoDashboardPage({
uid: 'cuj-dashboard-1',
queryParams: new URLSearchParams({ scopes: 'scope-sn-redirect-setup' }),
});
// Verify we're on dashboard-1
await expect(page).toHaveURL(/\/d\/cuj-dashboard-1/);
});
await test.step('Apply scope with redirectPath and verify no redirect', async () => {
// Now apply a different scope that has redirectPath
// Since we're on an active scope navigation, it should NOT redirect
await openScopesSelector(page, scopes);
await selectScope(page, 'sn-redirect-with-navigation', scopes[3]);
await applyScopes(page, [scopes[3]]);
// Verify the new scope was applied
await expect(page).toHaveURL(/scopes=scope-sn-redirect-with-navigation/);
// Since we're already on the active scope navigation (dashboard-1),
// we should NOT redirect to redirectPath (dashboard-3)
await expect(page).toHaveURL(/\/d\/cuj-dashboard-1/);
await expect(page).not.toHaveURL(/\/d\/cuj-dashboard-3/);
});
});
});

View File

@@ -419,6 +419,9 @@ test.describe(
// Select tabs layout
await page.getByLabel('layout-selection-option-Tabs').click();
// confirm layout change
await dashboardPage.getByGrafanaSelector(selectors.pages.ConfirmModal.delete).click();
await expect(dashboardPage.getByGrafanaSelector(selectors.components.Tab.title('New row'))).toBeVisible();
await expect(dashboardPage.getByGrafanaSelector(selectors.components.Tab.title('New row 1'))).toBeVisible();
await expect(
@@ -757,6 +760,9 @@ test.describe(
// Select rows layout
await page.getByLabel('layout-selection-option-Rows').click();
// confirm layout change
await dashboardPage.getByGrafanaSelector(selectors.pages.ConfirmModal.delete).click();
await dashboardPage
.getByGrafanaSelector(selectors.components.DashboardRow.wrapper('New tab 1'))
.scrollIntoViewIfNeeded();

View File

@@ -4,6 +4,8 @@ import { test, expect, E2ESelectorGroups, DashboardPage } from '@grafana/plugin-
import testV2Dashboard from '../dashboards/TestV2Dashboard.json';
import { switchToAutoGrid } from './utils';
test.use({
featureToggles: {
kubernetesDashboards: true,
@@ -33,7 +35,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
await expect(
dashboardPage.getByGrafanaSelector(selectors.components.Panels.Panel.title('New panel'))
@@ -64,7 +67,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
// Get initial positions - standard width should have panels on different rows
const firstPanelTop = await getPanelTop(dashboardPage, selectors);
@@ -124,7 +128,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
await dashboardPage
.getByGrafanaSelector(selectors.components.PanelEditor.ElementEditPane.AutoGridLayout.minColumnWidth)
@@ -181,7 +186,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
await dashboardPage
.getByGrafanaSelector(selectors.components.PanelEditor.ElementEditPane.AutoGridLayout.maxColumns)
@@ -216,7 +222,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
const regularRowHeight = await getPanelHeight(dashboardPage, selectors);
@@ -271,7 +278,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
const regularRowHeight = await getPanelHeight(dashboardPage, selectors);
@@ -328,7 +336,8 @@ test.describe(
).toHaveCount(3);
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await page.getByLabel('layout-selection-option-Auto grid').click();
await switchToAutoGrid(page, dashboardPage);
// Set narrow column width first to ensure panels fit horizontally
await dashboardPage

View File

@@ -1,6 +1,6 @@
import { Page } from 'playwright-core';
import { test, expect } from '@grafana/plugin-e2e';
import { test, expect, DashboardPage } from '@grafana/plugin-e2e';
import testV2DashWithRepeats from '../dashboards/V2DashWithRepeats.json';
@@ -12,6 +12,7 @@ import {
getPanelPosition,
importTestDashboard,
goToEmbeddedPanel,
switchToAutoGrid,
} from './utils';
const repeatTitleBase = 'repeat - ';
@@ -42,7 +43,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await dashboardPage.getByGrafanaSelector(selectors.components.Panels.Panel.title('New panel')).first().click();
@@ -78,7 +79,8 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await saveDashboard(dashboardPage, page, selectors);
await page.reload();
@@ -117,7 +119,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
// select first/original repeat panel to activate edit pane
await dashboardPage
@@ -148,7 +150,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await saveDashboard(dashboardPage, page, selectors);
await page.reload();
@@ -214,7 +216,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await saveDashboard(dashboardPage, page, selectors);
// loading directly into panel editor
@@ -271,7 +273,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
// this moving repeated panel between two normal panels
await movePanel(dashboardPage, selectors, `${repeatTitleBase}${repeatOptions.at(0)}`, 'New panel');
@@ -319,7 +321,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await saveDashboard(dashboardPage, page, selectors);
await page.reload();
@@ -382,7 +384,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await saveDashboard(dashboardPage, page, selectors);
await page.reload();
@@ -410,7 +412,7 @@ test.describe(
await dashboardPage.getByGrafanaSelector(selectors.components.NavToolbar.editDashboard.editButton).click();
await dashboardPage.getByGrafanaSelector(selectors.pages.Dashboard.Sidebar.optionsButton).click();
await switchToAutoGrid(page);
await switchToAutoGrid(page, dashboardPage);
await saveDashboard(dashboardPage, page, selectors);
await page.reload();
@@ -462,7 +464,3 @@ test.describe(
});
}
);
async function switchToAutoGrid(page: Page) {
await page.getByLabel('layout-selection-option-Auto grid').click();
}

View File

@@ -1,5 +1,6 @@
import { Page } from '@playwright/test';
import { selectors } from '@grafana/e2e-selectors';
import { DashboardPage, E2ESelectorGroups, expect } from '@grafana/plugin-e2e';
import testV2Dashboard from '../dashboards/TestV2Dashboard.json';
@@ -239,3 +240,12 @@ export async function getTabPosition(dashboardPage: DashboardPage, selectors: E2
const boundingBox = await tab.boundingBox();
return boundingBox;
}
export async function switchToAutoGrid(page: Page, dashboardPage: DashboardPage) {
await page.getByLabel('layout-selection-option-Auto grid').click();
// confirm layout change if applicable
const confirmModal = dashboardPage.getByGrafanaSelector(selectors.pages.ConfirmModal.delete);
// a Playwright locator is always truthy, so check visibility instead of truthiness
if (await confirmModal.isVisible()) {
await confirmModal.click();
}
}

View File

@@ -156,13 +156,18 @@ export async function applyScopes(page: Page, scopes?: TestScope[]) {
return;
}
const url: string =
const dashboardBindingsUrl: string =
'**/apis/scope.grafana.app/v0alpha1/namespaces/*/find/scope_dashboard_bindings?' +
scopes.map((scope) => `scope=scope-${scope.name}`).join('&');
const scopeNavigationsUrl: string =
'**/apis/scope.grafana.app/v0alpha1/namespaces/*/find/scope_navigations?' +
scopes.map((scope) => `scope=scope-${scope.name}`).join('&');
const groups: string[] = ['Most relevant', 'Dashboards', 'Something else', ''];
await page.route(url, async (route) => {
// Mock scope_dashboard_bindings endpoint
await page.route(dashboardBindingsUrl, async (route) => {
await route.fulfill({
status: 200,
contentType: 'application/json',
@@ -215,7 +220,52 @@ export async function applyScopes(page: Page, scopes?: TestScope[]) {
});
});
const responsePromise = page.waitForResponse((response) => response.url().includes(`/find/scope_dashboard_bindings`));
// Mock scope_navigations endpoint
await page.route(scopeNavigationsUrl, async (route) => {
await route.fulfill({
status: 200,
contentType: 'application/json',
body: JSON.stringify({
apiVersion: 'scope.grafana.app/v0alpha1',
items: scopes.flatMap((scope) => {
const navigations: Array<{
kind: string;
apiVersion: string;
metadata: { name: string; resourceVersion: string; creationTimestamp: string };
spec: { url: string; scope: string };
status: { title: string };
}> = [];
// Create a scope navigation if dashboardUid is provided
if (scope.dashboardUid && scope.addLinks) {
navigations.push({
kind: 'ScopeNavigation',
apiVersion: 'scope.grafana.app/v0alpha1',
metadata: {
name: `scope-${scope.name}-nav`,
resourceVersion: '1',
creationTimestamp: 'stamp',
},
spec: {
url: `/d/${scope.dashboardUid}`,
scope: `scope-${scope.name}`,
},
status: {
title: scope.dashboardTitle ?? scope.title,
},
});
}
return navigations;
}),
}),
});
});
const responsePromise = page.waitForResponse(
(response) =>
response.url().includes(`/find/scope_dashboard_bindings`) || response.url().includes(`/find/scope_navigations`)
);
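The combined `waitForResponse` predicate above can be factored into a small pure function, which also makes it easy to unit test. This is a sketch; `matchesScopeFindEndpoint` is a hypothetical name, not part of this utility file:

```typescript
// Hypothetical predicate matching either of the two mocked "find" endpoints
// (scope_dashboard_bindings and scope_navigations) by URL substring.
const matchesScopeFindEndpoint = (url: string): boolean =>
  url.includes('/find/scope_dashboard_bindings') || url.includes('/find/scope_navigations');
```

Passing this directly as `page.waitForResponse((response) => matchesScopeFindEndpoint(response.url()))` keeps the test body focused on the assertions.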
const scopeRequestPromises: Array<Promise<Response>> = [];
for (const scope of scopes) {

View File

@@ -124,5 +124,23 @@ export const testScopesWithRedirect = (): TestScope[] => {
dashboardTitle: 'CUJ Dashboard 2',
addLinks: true,
},
{
name: 'sn-redirect-setup',
title: 'Setup Navigation',
// No redirectPath - used to set up scope navigation to dashboard-1
filters: [{ key: 'namespace', operator: 'equals', value: 'setup-nav' }],
dashboardUid: 'cuj-dashboard-1', // Creates scope navigation to this dashboard
dashboardTitle: 'CUJ Dashboard 1',
addLinks: true,
},
{
name: 'sn-redirect-with-navigation',
title: 'Redirect With Navigation',
redirectPath: '/d/cuj-dashboard-3', // Redirect target
filters: [{ key: 'namespace', operator: 'equals', value: 'redirect-with-nav' }],
dashboardUid: 'cuj-dashboard-1', // Creates scope navigation to this dashboard
dashboardTitle: 'CUJ Dashboard 1',
addLinks: true,
},
];
};

View File

@@ -2882,11 +2882,6 @@
"count": 1
}
},
"public/app/features/panel/components/VizTypePicker/PanelTypeCard.tsx": {
"@grafana/no-aria-label-selectors": {
"count": 1
}
},
"public/app/features/panel/panellinks/linkSuppliers.ts": {
"@typescript-eslint/no-explicit-any": {
"count": 1

go.mod (7 changes)
View File

@@ -48,7 +48,7 @@ require (
github.com/blugelabs/bluge_segment_api v0.2.0 // @grafana/grafana-backend-group
github.com/bradfitz/gomemcache v0.0.0-20230905024940-24af94b03874 // @grafana/grafana-backend-group
github.com/bwmarrin/snowflake v0.3.0 // @grafana/grafana-app-platform-squad
github.com/centrifugal/centrifuge v0.37.2 // @grafana/grafana-app-platform-squad
github.com/centrifugal/centrifuge v0.38.0 // @grafana/grafana-app-platform-squad
github.com/crewjam/saml v0.4.14 // @grafana/identity-access-team
github.com/dgraph-io/badger/v4 v4.7.0 // @grafana/grafana-search-and-storage
github.com/dlmiddlecote/sqlstats v1.0.2 // @grafana/grafana-backend-group
@@ -386,7 +386,7 @@ require (
github.com/caio/go-tdigest v3.1.0+incompatible // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // @grafana/alerting-backend
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/centrifugal/protocol v0.16.2 // indirect
github.com/centrifugal/protocol v0.17.0 // indirect
github.com/cespare/xxhash v1.1.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cheekybits/genny v1.0.0 // indirect
@@ -562,7 +562,7 @@ require (
github.com/prometheus/procfs v0.16.1 // indirect
github.com/protocolbuffers/txtpbfmt v0.0.0-20241112170944-20d2c9ebc01d // indirect
github.com/puzpuzpuz/xsync/v2 v2.5.1 // indirect
github.com/redis/rueidis v1.0.64 // indirect
github.com/redis/rueidis v1.0.68 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/rogpeppe/go-internal v1.14.1 // indirect
@@ -687,6 +687,7 @@ require (
github.com/moby/term v0.5.0 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
github.com/quagmt/udecimal v1.9.0 // indirect
github.com/shirou/gopsutil/v4 v4.25.3 // indirect
github.com/tklauser/go-sysconf v0.3.14 // indirect
github.com/tklauser/numcpus v0.8.0 // indirect

go.sum (14 changes)
View File

@@ -1006,10 +1006,10 @@ github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F9
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.4.1/go.mod h1:4T9NM4+4Vw91VeyqjLS6ao50K5bOcLKN6Q42XnYaRYw=
github.com/centrifugal/centrifuge v0.37.2 h1:rerQNvDfYN2FZEkVtb/hvGV7SIrJfEQrKF3MaE8GDlo=
github.com/centrifugal/centrifuge v0.37.2/go.mod h1:aj4iRJGhzi3SlL8iUtVezxway1Xf8g+hmNQkLLO7sS8=
github.com/centrifugal/protocol v0.16.2 h1:KoIHgDeX1fFxyxQoKW+6E8ZTCf5mwGm8JyGoJ5NBMbQ=
github.com/centrifugal/protocol v0.16.2/go.mod h1:Q7OpS/8HMXDnL7f9DpNx24IhG96MP88WPpVTTCdrokI=
github.com/centrifugal/centrifuge v0.38.0 h1:UJTowwc5lSwnpvd3vbrTseODbU7osSggN67RTrJ8EfQ=
github.com/centrifugal/centrifuge v0.38.0/go.mod h1:rcZLARnO5GXOeE9qG7iIPMvERxESespqkSX4cGLCAzo=
github.com/centrifugal/protocol v0.17.0 h1:hD0WczyiG7zrVJcgkQsd5/nhfFXt0Y04SJHV2Z7B1rg=
github.com/centrifugal/protocol v0.17.0/go.mod h1:9MdiYyjw5Bw1+d5Sp4Y0NK+qiuTNyd88nrHJsUUh8k4=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
@@ -2334,11 +2334,13 @@ github.com/puzpuzpuz/xsync/v2 v2.5.1 h1:mVGYAvzDSu52+zaGyNjC+24Xw2bQi3kTr4QJ6N9p
github.com/puzpuzpuz/xsync/v2 v2.5.1/go.mod h1:gD2H2krq/w52MfPLE+Uy64TzJDVY7lP2znR9qmR35kU=
github.com/puzpuzpuz/xsync/v4 v4.2.0 h1:dlxm77dZj2c3rxq0/XNvvUKISAmovoXF4a4qM6Wvkr0=
github.com/puzpuzpuz/xsync/v4 v4.2.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo=
github.com/quagmt/udecimal v1.9.0 h1:TLuZiFeg0HhS6X8VDa78Y6XTaitZZfh+z5q4SXMzpDQ=
github.com/quagmt/udecimal v1.9.0/go.mod h1:ScmJ/xTGZcEoYiyMMzgDLn79PEJHcMBiJ4NNRT3FirA=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/redis/go-redis/v9 v9.14.0 h1:u4tNCjXOyzfgeLN+vAZaW1xUooqWDqVEsZN0U01jfAE=
github.com/redis/go-redis/v9 v9.14.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
github.com/redis/rueidis v1.0.64 h1:XqgbueDuNV3qFdVdQwAHJl1uNt90zUuAJuzqjH4cw6Y=
github.com/redis/rueidis v1.0.64/go.mod h1:Lkhr2QTgcoYBhxARU7kJRO8SyVlgUuEkcJO1Y8MCluA=
github.com/redis/rueidis v1.0.68 h1:gept0E45JGxVigWb3zoWHvxEc4IOC7kc4V/4XvN8eG8=
github.com/redis/rueidis v1.0.68/go.mod h1:Lkhr2QTgcoYBhxARU7kJRO8SyVlgUuEkcJO1Y8MCluA=
github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=

View File

@@ -708,6 +708,8 @@ github.com/envoyproxy/go-control-plane/envoy v1.32.3/go.mod h1:F6hWupPfh75TBXGKA
github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw=
github.com/envoyproxy/protoc-gen-validate v1.0.4/go.mod h1:qys6tmnRsYrQqIhm2bvKZH4Blx/1gTIZ2UKVY1M+Yew=
github.com/envoyproxy/protoc-gen-validate v1.1.0/go.mod h1:sXRDRVmzEbkM7CVcM06s9shE/m23dg3wzjl0UWqJ2q4=
github.com/ericlagergren/decimal v0.0.0-20240411145413-00de7ca16731 h1:R/ZjJpjQKsZ6L/+Gf9WHbt31GG8NMVcpRqUE+1mMIyo=
github.com/ericlagergren/decimal v0.0.0-20240411145413-00de7ca16731/go.mod h1:M9R1FoZ3y//hwwnJtO51ypFGwm8ZfpxPT/ZLtO1mcgQ=
github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.9.11/go.mod h1:3j+LviiESTElxA4p3EMKAB9HXj3/XEtnUf6OZxqIQTM=
github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw=
@@ -1330,6 +1332,7 @@ github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e h1:aoZm08cpOy4WuID//EZDgc
github.com/pkg/sftp v1.13.1 h1:I2qBYMChEhIjOgazfJmV3/mZM256btk6wkCDRmW7JYs=
github.com/pkg/xattr v0.4.10 h1:Qe0mtiNFHQZ296vRgUjRCoPHPqH7VdTOrZx3g0T+pGA=
github.com/pkg/xattr v0.4.10/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
github.com/planetscale/vtprotobuf v0.6.0/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
github.com/posener/complete v1.2.3 h1:NP0eAhjcjImqslEwo/1hq7gpajME0fTLTezBKDqfXqo=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/pquerna/cachecontrol v0.1.0 h1:yJMy84ti9h/+OEWa752kBTKv4XC30OtVVHYv/8cTqKc=
@@ -1397,6 +1400,7 @@ github.com/schollz/closestmatch v2.1.0+incompatible/go.mod h1:RtP1ddjLong6gTkbtm
github.com/schollz/progressbar/v3 v3.14.6 h1:GyjwcWBAf+GFDMLziwerKvpuS7ZF+mNTAXIB2aspiZs=
github.com/schollz/progressbar/v3 v3.14.6/go.mod h1:Nrzpuw3Nl0srLY0VlTvC4V6RL50pcEymjy6qyJAaLa0=
github.com/sclevine/spec v1.4.0/go.mod h1:LvpgJaFyvQzRvc1kaDs0bulYwzC70PbiYjC4QnFHkOM=
github.com/segmentio/asm v1.1.4/go.mod h1:Ld3L4ZXGNcSLRg4JBsZ3//1+f/TjYl0Mzen/DQy1EJg=
github.com/segmentio/fasthash v1.0.3 h1:EI9+KE1EwvMLBWwjpRDc+fEM+prwxDYbslddQGtrmhM=
github.com/segmentio/fasthash v1.0.3/go.mod h1:waKX8l2N8yckOgmSsXJi7x1ZfdKZ4x7KRMzBtS3oedY=
github.com/segmentio/parquet-go v0.0.0-20220811205829-7efc157d28af/go.mod h1:PxYdAI6cGd+s1j4hZDQbz3VFgobF5fDA0weLeNWKTE4=
@@ -1935,6 +1939,7 @@ golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT
golang.org/x/net v0.0.0-20211123203042-d83791d6bcd9/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.3.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/net v0.16.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/net v0.24.0/go.mod h1:2Q7sJY5mzlzWjKtYUEXSlBWCdyaioyXzRB2RtU8KVE8=
@@ -2001,6 +2006,7 @@ golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ=
golang.org/x/term v0.33.0/go.mod h1:s18+ql9tYWp1IfpV9DmCtQDDSRBUjKaw9M1eAv5UeF0=
golang.org/x/term v0.34.0/go.mod h1:5jC53AEywhIVebHgPVeg0mj8OD3VO9OzclacVrqpaAw=
golang.org/x/term v0.35.0/go.mod h1:TPGtkTLesOwf2DE8CgVYiZinHAOuy5AYUYT1lENIZnA=
golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.17.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
@@ -2077,6 +2083,7 @@ google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5/go.
google.golang.org/genproto/googleapis/api v0.0.0-20250929231259-57b25ae835d4/go.mod h1:NnuHhy+bxcg30o7FnVAZbXsPHUDQ9qKWAQKCD7VxFtk=
google.golang.org/genproto/googleapis/bytestream v0.0.0-20250603155806-513f23925822 h1:zWFRixYR5QlotL+Uv3YfsPRENIrQFXiGs+iwqel6fOQ=
google.golang.org/genproto/googleapis/bytestream v0.0.0-20250603155806-513f23925822/go.mod h1:h6yxum/C2qRb4txaZRLDHK8RyS0H/o2oEDeKY4onY/Y=
google.golang.org/genproto/googleapis/rpc v0.0.0-20230822172742-b8732ec3820d/go.mod h1:+Bk1OCOj40wS2hwAMA+aCW9ypzm63QTBBHp6lQ3p+9M=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231002182017-d307bd883b97/go.mod h1:v7nGkzlmW8P3n/bKmWBn2WpBjpOEx8Q6gMueudAmKfY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20231106174013-bbf56f31fb17/go.mod h1:oQ5rr10WTTMvP4A36n8JpR1OrO1BEiV4f78CneXZxkA=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240123012728-ef4313101c80/go.mod h1:PAREbraiVEVGVdTZsVWjSbbTtSyGbAgIIvni8a8CD5s=
@@ -2107,6 +2114,7 @@ google.golang.org/genproto/googleapis/rpc v0.0.0-20251014184007-4626949a642f/go.
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
google.golang.org/grpc v1.58.2/go.mod h1:tgX3ZQDlNJGU96V6yHh1T/JeoBQ2TXdr43YbYSsCJk0=
google.golang.org/grpc v1.59.0/go.mod h1:aUPDwccQo6OTjy7Hct4AfBPD1GptF4fyUjIkQ9YtF98=
google.golang.org/grpc v1.61.0/go.mod h1:VUbo7IFqmF1QtCAstipjG0GIoq49KvMe9+h1jFLBNJs=
google.golang.org/grpc v1.62.1/go.mod h1:IWTG0VlJLCh1SkC58F7np9ka9mx/WNkjl4PGJaiq+QE=

View File

@@ -124,7 +124,6 @@
"@types/eslint": "9.6.1",
"@types/eslint-scope": "^8.0.0",
"@types/file-saver": "2.0.7",
"@types/glob": "^9.0.0",
"@types/google.analytics": "^0.0.46",
"@types/gtag.js": "^0.0.20",
"@types/history": "4.7.11",
@@ -290,7 +289,7 @@
"@grafana/google-sdk": "0.3.5",
"@grafana/i18n": "workspace:*",
"@grafana/lezer-logql": "0.2.9",
"@grafana/llm": "0.22.1",
"@grafana/llm": "1.0.1",
"@grafana/monaco-logql": "^0.0.8",
"@grafana/o11y-ds-frontend": "workspace:*",
"@grafana/plugin-ui": "^0.11.1",
@@ -460,7 +459,8 @@
"gitconfiglocal": "2.1.0",
"tmp@npm:^0.0.33": "~0.2.1",
"js-yaml@npm:4.1.0": "^4.1.0",
"js-yaml@npm:=4.1.0": "^4.1.0"
"js-yaml@npm:=4.1.0": "^4.1.0",
"nodemailer": "7.0.7"
},
"workspaces": {
"packages": [

View File

@@ -165,6 +165,19 @@ const injectedRtkApi = api
}),
providesTags: ['Search'],
}),
getSearchUsers: build.query<GetSearchUsersApiResponse, GetSearchUsersApiArg>({
query: (queryArg) => ({
url: `/searchUsers`,
params: {
query: queryArg.query,
limit: queryArg.limit,
page: queryArg.page,
offset: queryArg.offset,
sort: queryArg.sort,
},
}),
providesTags: ['Search'],
}),
listServiceAccount: build.query<ListServiceAccountApiResponse, ListServiceAccountApiArg>({
query: (queryArg) => ({
url: `/serviceaccounts`,
@@ -896,6 +909,18 @@ export type GetSearchTeamsApiArg = {
/** page number to start from */
page?: number;
};
export type GetSearchUsersApiResponse = unknown;
export type GetSearchUsersApiArg = {
query?: string;
/** number of results to return */
limit?: number;
/** page number (starting from 1) */
page?: number;
/** number of results to skip */
offset?: number;
/** sortable field */
sort?: string;
};
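The fields of `GetSearchUsersApiArg` map one-to-one onto `/searchUsers` query parameters, with `undefined` values omitted. As a rough sketch of the serialization RTK Query performs for the `params` option above (the `buildSearchUsersUrl` helper is hypothetical, not part of the generated client):

```typescript
// Hypothetical helper mirroring RTK Query's params serialization;
// undefined fields are dropped from the query string.
type GetSearchUsersApiArg = {
  query?: string;
  limit?: number;
  page?: number;
  offset?: number;
  sort?: string;
};

function buildSearchUsersUrl(arg: GetSearchUsersApiArg): string {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(arg)) {
    if (value !== undefined) {
      params.set(key, String(value));
    }
  }
  const qs = params.toString();
  return qs ? `/searchUsers?${qs}` : '/searchUsers';
}
```

For example, `buildSearchUsersUrl({ query: 'admin', limit: 10 })` yields `/searchUsers?query=admin&limit=10`.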
export type ListServiceAccountApiResponse = /** status 200 OK */ ServiceAccountList;
export type ListServiceAccountApiArg = {
/** If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget). */
@@ -2067,6 +2092,9 @@ export type UserSpec = {
role: string;
title: string;
};
export type UserStatus = {
lastSeenAt: number;
};
export type User = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */
apiVersion?: string;
@@ -2075,6 +2103,7 @@ export type User = {
metadata: ObjectMeta;
/** Spec is the spec of the User */
spec: UserSpec;
status: UserStatus;
};
export type UserList = {
/** APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources */
@@ -2120,6 +2149,8 @@ export const {
useUpdateExternalGroupMappingMutation,
useGetSearchTeamsQuery,
useLazyGetSearchTeamsQuery,
useGetSearchUsersQuery,
useLazyGetSearchUsersQuery,
useListServiceAccountQuery,
useLazyListServiceAccountQuery,
useCreateServiceAccountMutation,

View File

@@ -1,5 +1,5 @@
/**
* A library containing the different design components of the Grafana ecosystem.
* A library containing e2e selectors for the Grafana ecosystem.
*
* @packageDocumentation
*/

View File

@@ -3,7 +3,7 @@
// (a <button> with clear text, for example, does not need an aria-label as it's already labeled)
// but you still might need to select it for testing,
// in that case please add the attribute data-testid={selector} in the component and
// prefix your selector string with 'data-testid' so that when create the selectors we know to search for it on the right attribute
// prefix your selector string with 'data-testid' so that when we create the selectors we know to search for it on the right attribute
import { VersionedSelectorGroup } from '../types';
@@ -1057,6 +1057,7 @@ export const versionedComponents = {
},
PluginVisualization: {
item: {
'12.4.0': (title: string) => `data-testid Plugin visualization item ${title}`,
[MIN_GRAFANA_VERSION]: (title: string) => `Plugin visualization item ${title}`,
},
current: {

View File

@@ -17,6 +17,10 @@ export interface Options {
* Controls the height of the rows
*/
cellHeight?: ui.TableCellHeight;
/**
* If true, disables all keyboard events in the table. This is used when previewing a table (i.e. suggestions)
*/
disableKeyboardEvents?: boolean;
/**
* Enable pagination on the table
*/

View File

@@ -13,6 +13,7 @@ import * as common from '@grafana/schema';
export const pluginVersion = "12.4.0-pre";
export interface Options extends common.OptionsWithTimezones, common.OptionsWithAnnotations {
disableKeyboardEvents?: boolean;
legend: common.VizLegendOptions;
orientation?: common.VizOrientation;
timeCompare?: common.TimeCompareOptions;

View File

@@ -451,6 +451,19 @@ describe('TableNG', () => {
expect(screen.getByText('A1')).toBeInTheDocument();
expect(screen.getByText('1')).toBeInTheDocument();
});
it('shows full column name in title attribute for truncated headers', () => {
const { container } = render(
<TableNG enableVirtualization={false} data={createBasicDataFrame()} width={800} height={600} />
);
const headers = container.querySelectorAll('[role="columnheader"]');
const firstHeaderSpan = headers[0].querySelector('span');
const secondHeaderSpan = headers[1].querySelector('span');
expect(firstHeaderSpan).toHaveAttribute('title', 'Column A');
expect(secondHeaderSpan).toHaveAttribute('title', 'Column B');
});
});
describe('Footer options', () => {

View File

@@ -105,6 +105,7 @@ export function TableNG(props: TableNGProps) {
const {
cellHeight,
data,
disableKeyboardEvents,
disableSanitizeHtml,
enablePagination = false,
enableSharedCrosshair = false,
@@ -819,9 +820,9 @@ export function TableNG(props: TableNGProps) {
}
}}
onCellKeyDown={
hasNestedFrames
hasNestedFrames || disableKeyboardEvents
? (_, event) => {
if (event.isDefaultPrevented()) {
if (disableKeyboardEvents || event.isDefaultPrevented()) {
// skip parent grid keyboard navigation if nested grid handled it
event.preventGridDefault();
}

View File

@@ -55,7 +55,9 @@ const HeaderCell: React.FC<HeaderCellProps> = ({
{showTypeIcons && (
<Icon className={styles.headerCellIcon} name={getFieldTypeIcon(field)} title={field?.type} size="sm" />
)}
<span className={styles.headerCellLabel}>{getDisplayName(field)}</span>
<span className={styles.headerCellLabel} title={displayName}>
{displayName}
</span>
{direction && (
<Icon
className={cx(styles.headerCellIcon, styles.headerSortIcon)}

View File

@@ -138,6 +138,8 @@ export interface BaseTableProps {
enableVirtualization?: boolean;
// for MarkdownCell, this flag disables sanitization of HTML content. Configured via config.ini.
disableSanitizeHtml?: boolean;
// If true, disables all keyboard events in the table. This is used when previewing a table (i.e. suggestions)
disableKeyboardEvents?: boolean;
}
/* ---------------------------- Table cell props ---------------------------- */

View File

@@ -187,6 +187,15 @@ func (hs *HTTPServer) registerRoutes() {
publicdashboardsapi.CountPublicDashboardRequest(),
hs.Index,
)
r.Get("/bootdata/:accessToken",
reqNoAuth,
hs.PublicDashboardsApi.Middleware.HandleView,
publicdashboardsapi.SetPublicDashboardAccessToken,
publicdashboardsapi.SetPublicDashboardOrgIdOnContext(hs.PublicDashboardsApi.PublicDashboardService),
publicdashboardsapi.CountPublicDashboardRequest(),
hs.GetBootdata,
)
}
r.Get("/explore", authorize(ac.EvalPermission(ac.ActionDatasourcesExplore)), hs.Index)

View File

@@ -111,17 +111,15 @@ func TestGetHomeDashboard(t *testing.T) {
}
}
func newTestLive(t *testing.T, store db.DB) *live.GrafanaLive {
func newTestLive(t *testing.T) *live.GrafanaLive {
features := featuremgmt.WithFeatures()
cfg := setting.NewCfg()
cfg.AppURL = "http://localhost:3000/"
gLive, err := live.ProvideService(nil, cfg,
routing.NewRouteRegister(),
nil, nil, nil, nil,
store,
nil,
&usagestats.UsageStatsMock{T: t},
nil,
features, acimpl.ProvideAccessControl(features),
&dashboards.FakeDashboardService{},
nil, nil)
@@ -751,7 +749,7 @@ func TestIntegrationDashboardAPIEndpoint(t *testing.T) {
hs := HTTPServer{
Cfg: cfg,
ProvisioningService: provisioning.NewProvisioningServiceMock(context.Background()),
Live: newTestLive(t, db.InitTestDB(t)),
Live: newTestLive(t),
QuotaService: quotatest.New(false, nil),
LibraryElementService: &libraryelementsfake.LibraryElementService{},
DashboardService: dashboardService,
@@ -1003,7 +1001,7 @@ func postDashboardScenario(t *testing.T, desc string, url string, routePattern s
hs := HTTPServer{
Cfg: cfg,
ProvisioningService: provisioning.NewProvisioningServiceMock(context.Background()),
Live: newTestLive(t, db.InitTestDB(t)),
Live: newTestLive(t),
QuotaService: quotatest.New(false, nil),
pluginStore: &pluginstore.FakePluginStore{},
LibraryElementService: &libraryelementsfake.LibraryElementService{},
@@ -1043,7 +1041,7 @@ func restoreDashboardVersionScenario(t *testing.T, desc string, url string, rout
hs := HTTPServer{
Cfg: cfg,
ProvisioningService: provisioning.NewProvisioningServiceMock(context.Background()),
Live: newTestLive(t, db.InitTestDB(t)),
Live: newTestLive(t),
QuotaService: quotatest.New(false, nil),
LibraryElementService: &libraryelementsfake.LibraryElementService{},
DashboardService: mock,

View File

@@ -343,7 +343,7 @@ func TestUpdateDataSourceByID_DataSourceNameExists(t *testing.T) {
Cfg: setting.NewCfg(),
AccessControl: acimpl.ProvideAccessControl(featuremgmt.WithFeatures()),
accesscontrolService: actest.FakeService{},
Live: newTestLive(t, nil),
Live: newTestLive(t),
}
sc := setupScenarioContext(t, "/api/datasources/1")
@@ -450,7 +450,7 @@ func TestAPI_datasources_AccessControl(t *testing.T) {
hs.Cfg = setting.NewCfg()
hs.DataSourcesService = &dataSourcesServiceMock{expectedDatasource: &datasources.DataSource{}}
hs.accesscontrolService = actest.FakeService{}
hs.Live = newTestLive(t, hs.SQLStore)
hs.Live = newTestLive(t)
hs.promRegister, hs.dsConfigHandlerRequestsDuration = setupDsConfigHandlerMetrics()
})

View File

@@ -1,11 +0,0 @@
package dtos
import "encoding/json"
type LivePublishCmd struct {
Channel string `json:"channel"`
Data json.RawMessage `json:"data,omitempty"`
}
type LivePublishResponse struct {
}

View File

@@ -11,15 +11,16 @@ import (
"os/signal"
"syscall"
"github.com/prometheus/client_golang/prometheus"
"k8s.io/client-go/rest"
"k8s.io/client-go/transport"
"github.com/grafana/grafana-app-sdk/logging"
"github.com/grafana/grafana-app-sdk/operator"
folder "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
"github.com/grafana/grafana/apps/iam/pkg/app"
"github.com/grafana/grafana/pkg/server"
"github.com/grafana/grafana/pkg/setting"
"github.com/prometheus/client_golang/prometheus"
"k8s.io/client-go/rest"
"k8s.io/client-go/transport"
"github.com/grafana/authlib/authn"
utilnet "k8s.io/apimachinery/pkg/util/net"
@@ -95,7 +96,7 @@ func buildIAMConfigFromSettings(cfg *setting.Cfg, registerer prometheus.Register
if zanzanaURL == "" {
return nil, fmt.Errorf("zanzana_url is required in [operator] section")
}
iamCfg.AppConfig.ZanzanaClientCfg.URL = zanzanaURL
iamCfg.AppConfig.ZanzanaClientCfg.Addr = zanzanaURL
iamCfg.AppConfig.InformerConfig.MaxConcurrentWorkers = operatorSec.Key("max_concurrent_workers").MustUint64(20)

View File

@@ -22,6 +22,7 @@ type iamAuthorizer struct {
func newIAMAuthorizer(accessClient authlib.AccessClient, legacyAccessClient authlib.AccessClient) authorizer.Authorizer {
resourceAuthorizer := make(map[string]authorizer.Authorizer)
serviceAuthorizer := gfauthorizer.NewServiceAuthorizer()
// Authorizer that allows any authenticated user
// To be used when authorization is handled at the storage layer
allowAuthorizer := authorizer.AuthorizerFunc(func(
@@ -50,8 +51,7 @@ func newIAMAuthorizer(accessClient authlib.AccessClient, legacyAccessClient auth
resourceAuthorizer[iamv0.UserResourceInfo.GetName()] = authorizer
resourceAuthorizer[iamv0.ExternalGroupMappingResourceInfo.GetName()] = authorizer
resourceAuthorizer[iamv0.TeamResourceInfo.GetName()] = authorizer
serviceAuthorizer := gfauthorizer.NewServiceAuthorizer()
resourceAuthorizer["searchUsers"] = serviceAuthorizer
resourceAuthorizer["searchTeams"] = serviceAuthorizer
return &iamAuthorizer{resourceAuthorizer: resourceAuthorizer}

View File

@@ -0,0 +1,164 @@
package authorizer
import (
"context"
"errors"
"fmt"
"net/http"
"sync"
"github.com/grafana/authlib/authn"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
dashboardv1 "github.com/grafana/grafana/apps/dashboard/pkg/apis/dashboard/v1beta1"
folderv1 "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
"github.com/grafana/grafana/apps/provisioning/pkg/auth"
"github.com/grafana/grafana/pkg/apimachinery/utils"
)
var (
ErrNoConfigProvider = errors.New("no config provider for group resource")
ErrNoVersionInfo = errors.New("no version info for group resource")
Versions = map[schema.GroupResource]string{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: folderv1.VERSION,
{Group: dashboardv1.GROUP, Resource: dashboardv1.DASHBOARD_RESOURCE}: dashboardv1.VERSION,
}
)
// ConfigProvider is a function that provides a rest.Config for a given context.
type ConfigProvider func(ctx context.Context) (*rest.Config, error)
// DynamicClientFactory is a function that creates a dynamic.Interface from a rest.Config.
// This can be overridden in tests.
type DynamicClientFactory func(config *rest.Config) (dynamic.Interface, error)
// ParentProvider implementation that fetches the parent folder information from remote API servers.
type ParentProviderImpl struct {
configProviders map[schema.GroupResource]ConfigProvider
versions map[schema.GroupResource]string
dynamicClientFactory DynamicClientFactory
// Cache of dynamic clients for each group resource
// This is used to avoid creating a new dynamic client for each request
// and to reuse the same client for the same group resource.
clients map[schema.GroupResource]dynamic.Interface
clientsMu sync.Mutex
}
// DialConfig holds the configuration for dialing a remote API server.
type DialConfig struct {
Host string
Insecure bool
CAFile string
Audience string
}
// NewLocalConfigProvider creates a map of ConfigProviders that return the same given config for local API servers.
func NewLocalConfigProvider(
configProvider ConfigProvider,
) map[schema.GroupResource]ConfigProvider {
return map[schema.GroupResource]ConfigProvider{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: configProvider,
{Group: dashboardv1.GROUP, Resource: dashboardv1.DASHBOARD_RESOURCE}: configProvider,
}
}
// NewRemoteConfigProvider creates a map of ConfigProviders for remote API servers based on the given DialConfig.
func NewRemoteConfigProvider(cfg map[schema.GroupResource]DialConfig, exchangeClient authn.TokenExchanger) map[schema.GroupResource]ConfigProvider {
configProviders := make(map[schema.GroupResource]ConfigProvider, len(cfg))
for gr, dialConfig := range cfg {
configProviders[gr] = func(ctx context.Context) (*rest.Config, error) {
return &rest.Config{
Host: dialConfig.Host,
WrapTransport: func(rt http.RoundTripper) http.RoundTripper {
return auth.NewRoundTripper(exchangeClient, rt, dialConfig.Audience)
},
TLSClientConfig: rest.TLSClientConfig{
Insecure: dialConfig.Insecure,
CAFile: dialConfig.CAFile,
},
QPS: 50,
Burst: 100,
}, nil
}
}
return configProviders
}
// NewApiParentProvider creates a new ParentProviderImpl with the given config providers and version info.
func NewApiParentProvider(
configProviders map[schema.GroupResource]ConfigProvider,
version map[schema.GroupResource]string,
) *ParentProviderImpl {
return &ParentProviderImpl{
configProviders: configProviders,
versions: version,
dynamicClientFactory: func(config *rest.Config) (dynamic.Interface, error) {
return dynamic.NewForConfig(config)
},
clients: make(map[schema.GroupResource]dynamic.Interface),
}
}
func (p *ParentProviderImpl) HasParent(gr schema.GroupResource) bool {
_, ok := p.configProviders[gr]
return ok
}
func (p *ParentProviderImpl) getClient(ctx context.Context, gr schema.GroupResource) (dynamic.Interface, error) {
p.clientsMu.Lock()
client, ok := p.clients[gr]
p.clientsMu.Unlock()
if ok {
return client, nil
}
provider, ok := p.configProviders[gr]
if !ok {
return nil, fmt.Errorf("%w: %s", ErrNoConfigProvider, gr.String())
}
restConfig, err := provider(ctx)
if err != nil {
return nil, err
}
client, err = p.dynamicClientFactory(restConfig)
if err != nil {
return nil, err
}
p.clientsMu.Lock()
p.clients[gr] = client
p.clientsMu.Unlock()
return client, nil
}
func (p *ParentProviderImpl) GetParent(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error) {
client, err := p.getClient(ctx, gr)
if err != nil {
return "", err
}
version, ok := p.versions[gr]
if !ok {
return "", fmt.Errorf("%w: %s", ErrNoVersionInfo, gr.String())
}
resourceClient := client.Resource(schema.GroupVersionResource{
Group: gr.Group,
Resource: gr.Resource,
Version: version,
}).Namespace(namespace)
unstructObj, err := resourceClient.Get(ctx, name, metav1.GetOptions{})
if err != nil {
return "", err
}
return unstructObj.GetAnnotations()[utils.AnnoKeyFolder], nil
}

View File
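`getClient` above lazily builds one dynamic client per GroupResource and caches it behind a mutex; under concurrency the factory may run twice for the same key, but the last write wins and the map stays consistent, which is acceptable when the built clients are interchangeable. A minimal, self-contained sketch of the same lazy-cache pattern (string keys and an int value standing in for the dynamic client):

```go
package main

import (
	"fmt"
	"sync"
)

// cache lazily builds one value per key, guarded by a mutex,
// mirroring the clients map in ParentProviderImpl.
type cache struct {
	mu      sync.Mutex
	entries map[string]int
	factory func(key string) int
	builds  int // how many times the factory ran
}

func (c *cache) get(key string) int {
	c.mu.Lock()
	v, ok := c.entries[key]
	c.mu.Unlock()
	if ok {
		return v
	}
	// Build outside the lock, as getClient does while resolving the
	// rest.Config; two goroutines may race here, which is tolerated.
	v = c.factory(key)
	c.mu.Lock()
	c.entries[key] = v
	c.builds++
	c.mu.Unlock()
	return v
}

func main() {
	c := &cache{
		entries: make(map[string]int),
		factory: func(key string) int { return len(key) },
	}
	fmt.Println(c.get("folders"), c.get("folders"), c.builds) // 7 7 1
}
```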

@@ -0,0 +1,198 @@
package authorizer
import (
"context"
"errors"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
folderv1 "github.com/grafana/grafana/apps/folder/pkg/apis/folder/v1beta1"
"github.com/grafana/grafana/pkg/apimachinery/utils"
)
var configProvider = func(ctx context.Context) (*rest.Config, error) {
return &rest.Config{}, nil
}
func TestParentProviderImpl_GetParent(t *testing.T) {
tests := []struct {
name string
gr schema.GroupResource
namespace string
resourceName string
parentFolder string
setupFake func(*fakeDynamicClient, *fakeResourceInterface)
configProviders map[schema.GroupResource]ConfigProvider
versions map[schema.GroupResource]string
expectedError string
expectedParent string
}{
{
name: "successfully get parent folder",
gr: schema.GroupResource{Group: folderv1.GROUP, Resource: folderv1.RESOURCE},
namespace: "org-1",
resourceName: "dash1",
parentFolder: "fold1",
setupFake: func(fakeClient *fakeDynamicClient, fakeResource *fakeResourceInterface) {
fakeClient.resourceInterface = fakeResource
fakeResource.getFunc = func(ctx context.Context, name string, opts metav1.GetOptions, subresources ...string) (*unstructured.Unstructured, error) {
obj := &unstructured.Unstructured{}
obj.SetAnnotations(map[string]string{utils.AnnoKeyFolder: "fold1"})
return obj, nil
}
},
configProviders: map[schema.GroupResource]ConfigProvider{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: configProvider,
},
versions: Versions,
expectedParent: "fold1",
},
{
name: "resource without parent annotation returns empty",
gr: schema.GroupResource{Group: folderv1.GROUP, Resource: folderv1.RESOURCE},
namespace: "org-1",
resourceName: "dash1",
setupFake: func(fakeClient *fakeDynamicClient, fakeResource *fakeResourceInterface) {
fakeClient.resourceInterface = fakeResource
fakeResource.getFunc = func(ctx context.Context, name string, opts metav1.GetOptions, subresources ...string) (*unstructured.Unstructured, error) {
obj := &unstructured.Unstructured{}
obj.SetAnnotations(map[string]string{})
return obj, nil
}
},
configProviders: map[schema.GroupResource]ConfigProvider{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: configProvider,
},
versions: Versions,
expectedParent: "",
},
{
name: "no config provider returns error",
gr: schema.GroupResource{Group: "unknown.group", Resource: "unknown"},
namespace: "org-1",
resourceName: "resource-1",
configProviders: map[schema.GroupResource]ConfigProvider{},
versions: Versions,
expectedError: ErrNoConfigProvider.Error(),
},
{
name: "config provider returns error",
gr: schema.GroupResource{Group: folderv1.GROUP, Resource: folderv1.RESOURCE},
namespace: "org-1",
resourceName: "resource-1",
configProviders: map[schema.GroupResource]ConfigProvider{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: func(ctx context.Context) (*rest.Config, error) {
return nil, errors.New("config provider error")
},
},
versions: Versions,
expectedError: "config provider error",
},
{
name: "no version info returns error",
gr: schema.GroupResource{Group: folderv1.GROUP, Resource: folderv1.RESOURCE},
namespace: "org-1",
resourceName: "resource-1",
configProviders: map[schema.GroupResource]ConfigProvider{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: func(ctx context.Context) (*rest.Config, error) {
return &rest.Config{}, nil
},
},
versions: map[schema.GroupResource]string{},
expectedError: ErrNoVersionInfo.Error(),
},
{
name: "resource get returns error",
gr: schema.GroupResource{Group: folderv1.GROUP, Resource: folderv1.RESOURCE},
namespace: "org-1",
resourceName: "resource-1",
setupFake: func(fakeClient *fakeDynamicClient, fakeResource *fakeResourceInterface) {
fakeClient.resourceInterface = fakeResource
fakeResource.getFunc = func(ctx context.Context, name string, opts metav1.GetOptions, subresources ...string) (*unstructured.Unstructured, error) {
return nil, errors.New("resource not found")
}
},
configProviders: map[schema.GroupResource]ConfigProvider{
{Group: folderv1.GROUP, Resource: folderv1.RESOURCE}: func(ctx context.Context) (*rest.Config, error) {
return &rest.Config{}, nil
},
},
versions: Versions,
expectedError: "resource not found",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
fakeClient := &fakeDynamicClient{}
fakeResource := &fakeResourceInterface{}
if tt.setupFake != nil {
tt.setupFake(fakeClient, fakeResource)
}
provider := &ParentProviderImpl{
configProviders: tt.configProviders,
versions: tt.versions,
dynamicClientFactory: func(config *rest.Config) (dynamic.Interface, error) {
return fakeClient, nil
},
clients: make(map[schema.GroupResource]dynamic.Interface),
}
parent, err := provider.GetParent(context.Background(), tt.gr, tt.namespace, tt.resourceName)
if tt.expectedError != "" {
require.Error(t, err)
assert.Contains(t, err.Error(), tt.expectedError)
assert.Empty(t, parent)
} else {
require.NoError(t, err)
assert.Equal(t, tt.expectedParent, parent)
}
})
}
}
// fakeDynamicClient is a fake implementation of dynamic.Interface
type fakeDynamicClient struct {
resourceInterface dynamic.ResourceInterface
}
func (f *fakeDynamicClient) Resource(resource schema.GroupVersionResource) dynamic.NamespaceableResourceInterface {
return &fakeNamespaceableResourceInterface{
resourceInterface: f.resourceInterface,
}
}
// fakeNamespaceableResourceInterface is a fake implementation of dynamic.NamespaceableResourceInterface
type fakeNamespaceableResourceInterface struct {
dynamic.NamespaceableResourceInterface
resourceInterface dynamic.ResourceInterface
}
func (f *fakeNamespaceableResourceInterface) Namespace(namespace string) dynamic.ResourceInterface {
if f.resourceInterface != nil {
return f.resourceInterface
}
return &fakeResourceInterface{}
}
// fakeResourceInterface is a fake implementation of dynamic.ResourceInterface
type fakeResourceInterface struct {
dynamic.ResourceInterface
getFunc func(ctx context.Context, name string, opts metav1.GetOptions, subresources ...string) (*unstructured.Unstructured, error)
}
func (f *fakeResourceInterface) Get(ctx context.Context, name string, opts metav1.GetOptions, subresources ...string) (*unstructured.Unstructured, error) {
if f.getFunc != nil {
return f.getFunc(ctx, name, opts, subresources...)
}
return &unstructured.Unstructured{}, nil
}

View File

@@ -10,24 +10,44 @@ import (
iamv0 "github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/services/apiserver/auth/authorizer/storewrapper"
)
// TODO: Logs, Metrics, Traces?
// ParentProvider interface for fetching parent information of resources
type ParentProvider interface {
// HasParent checks if the given GroupResource has a parent folder
HasParent(gr schema.GroupResource) bool
// GetParent fetches the parent folder name for the given resource
GetParent(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error)
}
// ResourcePermissionsAuthorizer
type ResourcePermissionsAuthorizer struct {
accessClient types.AccessClient
accessClient types.AccessClient
parentProvider ParentProvider
logger log.Logger
}
var _ storewrapper.ResourceStorageAuthorizer = (*ResourcePermissionsAuthorizer)(nil)
func NewResourcePermissionsAuthorizer(accessClient types.AccessClient) *ResourcePermissionsAuthorizer {
func NewResourcePermissionsAuthorizer(
accessClient types.AccessClient,
parentProvider ParentProvider,
) *ResourcePermissionsAuthorizer {
return &ResourcePermissionsAuthorizer{
accessClient: accessClient,
accessClient: accessClient,
parentProvider: parentProvider,
logger: log.New("iam.resource-permissions-authorizer"),
}
}
func isAccessPolicy(authInfo types.AuthInfo) bool {
return types.IsIdentityType(authInfo.GetIdentityType(), types.TypeAccessPolicy)
}
// AfterGet implements ResourceStorageAuthorizer.
func (r *ResourcePermissionsAuthorizer) AfterGet(ctx context.Context, obj runtime.Object) error {
authInfo, ok := types.AuthInfoFrom(ctx)
@@ -37,9 +57,24 @@ func (r *ResourcePermissionsAuthorizer) AfterGet(ctx context.Context, obj runtim
switch o := obj.(type) {
case *iamv0.ResourcePermission:
target := o.Spec.Resource
targetGR := schema.GroupResource{Group: target.ApiGroup, Resource: target.Resource}
// TODO: Fetch the resource to retrieve its parent folder.
parent := ""
// Fetch the parent of the resource
// Access Policies have global scope, so no parent check needed
if !isAccessPolicy(authInfo) && r.parentProvider.HasParent(targetGR) {
p, err := r.parentProvider.GetParent(ctx, targetGR, o.Namespace, target.Name)
if err != nil {
r.logger.Error("after get: error fetching parent", "error", err.Error(),
"namespace", o.Namespace,
"group", target.ApiGroup,
"resource", target.Resource,
"name", target.Name,
)
return err
}
parent = p
}
checkReq := types.CheckRequest{
Namespace: o.Namespace,
@@ -72,9 +107,24 @@ func (r *ResourcePermissionsAuthorizer) beforeWrite(ctx context.Context, obj run
switch o := obj.(type) {
case *iamv0.ResourcePermission:
target := o.Spec.Resource
targetGR := schema.GroupResource{Group: target.ApiGroup, Resource: target.Resource}
// TODO: Fetch the resource to retrieve its parent folder.
parent := ""
// Fetch the parent of the resource
// Access Policies have global scope, so no parent check needed
if !isAccessPolicy(authInfo) && r.parentProvider.HasParent(targetGR) {
p, err := r.parentProvider.GetParent(ctx, targetGR, o.Namespace, target.Name)
if err != nil {
r.logger.Error("before write: error fetching parent", "error", err.Error(),
"namespace", o.Namespace,
"group", target.ApiGroup,
"resource", target.Resource,
"name", target.Name,
)
return err
}
parent = p
}
checkReq := types.CheckRequest{
Namespace: o.Namespace,
@@ -153,8 +203,29 @@ func (r *ResourcePermissionsAuthorizer) FilterList(ctx context.Context, list run
canViewFuncs[gr] = canView
}
// TODO : Fetch the resource to retrieve its parent folder.
target := item.Spec.Resource
targetGR := schema.GroupResource{Group: target.ApiGroup, Resource: target.Resource}
parent := ""
// Fetch the parent of the resource
// It's not efficient to do this for every item in the list, but it's a good starting point.
// Access Policies have global scope, so no parent check needed
if !isAccessPolicy(authInfo) && r.parentProvider.HasParent(targetGR) {
p, err := r.parentProvider.GetParent(ctx, targetGR, item.Namespace, target.Name)
if err != nil {
// Skip item on error fetching parent
r.logger.Warn("filter list: error fetching parent, skipping item",
"error", err.Error(),
"namespace", item.Namespace,
"group", target.ApiGroup,
"resource", target.Resource,
"name", target.Name,
)
continue
}
parent = p
}
allowed := canView(item.Spec.Resource.Name, parent)
if allowed {

View File
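The three call sites above (AfterGet, beforeWrite, FilterList) share one guard: skip the parent lookup for access policies, which have global scope, and for resource kinds with no registered provider; otherwise resolve the parent folder before the access check. A stripped-down, hypothetical sketch of that decision (the `parentProvider` interface here is simplified from the diff's `ParentProvider`):

```go
package main

import "fmt"

// parentProvider is a simplified stand-in for the ParentProvider interface.
type parentProvider interface {
	HasParent(kind string) bool
	GetParent(kind, name string) (string, error)
}

// staticProvider maps resource names to parent folders, for illustration.
type staticProvider map[string]string

func (s staticProvider) HasParent(kind string) bool { return kind == "dashboards" }
func (s staticProvider) GetParent(kind, name string) (string, error) {
	return s[name], nil
}

// resolveParent applies the same guard as the authorizer call sites:
// access policies are global, so their checks never need a parent folder.
func resolveParent(p parentProvider, isAccessPolicy bool, kind, name string) (string, error) {
	if isAccessPolicy || !p.HasParent(kind) {
		return "", nil
	}
	return p.GetParent(kind, name)
}

func main() {
	p := staticProvider{"dash1": "fold1"}
	parent, _ := resolveParent(p, false, "dashboards", "dash1")
	global, _ := resolveParent(p, true, "dashboards", "dash1")
	fmt.Printf("%q %q\n", parent, global) // "fold1" ""
}
```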

@@ -5,13 +5,15 @@ import (
"testing"
"github.com/go-jose/go-jose/v4/jwt"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/grafana/authlib/authn"
"github.com/grafana/authlib/types"
iamv0 "github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/apimachinery/utils"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
var (
@@ -63,6 +65,7 @@ func TestResourcePermissions_AfterGet(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
parent := "fold-1"
checkFunc := func(id types.AuthInfo, req *types.CheckRequest, folder string) (types.CheckResponse, error) {
require.NotNil(t, id)
// Check is called with the user's identity
@@ -74,12 +77,18 @@ func TestResourcePermissions_AfterGet(t *testing.T) {
require.Equal(t, fold1.Spec.Resource.Resource, req.Resource)
require.Equal(t, fold1.Spec.Resource.Name, req.Name)
require.Equal(t, utils.VerbGetPermissions, req.Verb)
require.Equal(t, parent, folder)
return types.CheckResponse{Allowed: tt.shouldAllow}, nil
}
getParentFunc := func(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error) {
// For this test, we can return a fixed parent folder ID
return parent, nil
}
accessClient := &fakeAccessClient{checkFunc: checkFunc}
resPermAuthz := NewResourcePermissionsAuthorizer(accessClient)
fakeParentProvider := &fakeParentProvider{hasParent: true, getParentFunc: getParentFunc}
resPermAuthz := NewResourcePermissionsAuthorizer(accessClient, fakeParentProvider)
ctx := types.WithAuthInfo(context.Background(), user)
err := resPermAuthz.AfterGet(ctx, fold1)
@@ -89,6 +98,7 @@ func TestResourcePermissions_AfterGet(t *testing.T) {
require.Error(t, err, "expected error for denied access")
}
require.True(t, accessClient.checkCalled, "accessClient.Check should be called")
require.True(t, fakeParentProvider.getParentCalled, "parentProvider.GetParent should be called")
})
}
}
@@ -121,23 +131,32 @@ func TestResourcePermissions_FilterList(t *testing.T) {
require.Equal(t, "dashboards", req.Resource)
}
// Return a checker that allows only specific resources: fold-1 and dash-2
// Return a checker that allows access to fold-1 and its content
return func(name, folder string) bool {
if name == "fold-1" || name == "dash-2" {
if name == "fold-1" || folder == "fold-1" {
return true
}
return false
}, &types.NoopZookie{}, nil
}
getParentFunc := func(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error) {
if name == "dash-2" {
return "fold-1", nil
}
return "", nil
}
accessClient := &fakeAccessClient{compileFunc: compileFunc}
resPermAuthz := NewResourcePermissionsAuthorizer(accessClient)
fakeParentProvider := &fakeParentProvider{hasParent: true, getParentFunc: getParentFunc}
resPermAuthz := NewResourcePermissionsAuthorizer(accessClient, fakeParentProvider)
ctx := types.WithAuthInfo(context.Background(), user)
obj, err := resPermAuthz.FilterList(ctx, list)
require.NoError(t, err)
require.NotNil(t, list)
require.True(t, accessClient.compileCalled, "accessClient.Compile should be called")
require.True(t, fakeParentProvider.getParentCalled, "parentProvider.GetParent should be called")
filtered, ok := obj.(*iamv0.ResourcePermissionList)
require.True(t, ok, "response should be of type ResourcePermissionList")
@@ -165,6 +184,7 @@ func TestResourcePermissions_beforeWrite(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
parent := "fold-1"
checkFunc := func(id types.AuthInfo, req *types.CheckRequest, folder string) (types.CheckResponse, error) {
require.NotNil(t, id)
// Check is called with the user's identity
@@ -176,12 +196,18 @@ func TestResourcePermissions_beforeWrite(t *testing.T) {
require.Equal(t, fold1.Spec.Resource.Resource, req.Resource)
require.Equal(t, fold1.Spec.Resource.Name, req.Name)
require.Equal(t, utils.VerbSetPermissions, req.Verb)
require.Equal(t, parent, folder)
return types.CheckResponse{Allowed: tt.shouldAllow}, nil
}
getParentFunc := func(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error) {
return parent, nil
}
accessClient := &fakeAccessClient{checkFunc: checkFunc}
resPermAuthz := NewResourcePermissionsAuthorizer(accessClient)
fakeParentProvider := &fakeParentProvider{hasParent: true, getParentFunc: getParentFunc}
resPermAuthz := NewResourcePermissionsAuthorizer(accessClient, fakeParentProvider)
ctx := types.WithAuthInfo(context.Background(), user)
err := resPermAuthz.beforeWrite(ctx, fold1)
@@ -191,6 +217,7 @@ func TestResourcePermissions_beforeWrite(t *testing.T) {
require.Error(t, err, "expected error for denied delete")
}
require.True(t, accessClient.checkCalled, "accessClient.Check should be called")
require.True(t, fakeParentProvider.getParentCalled, "parentProvider.GetParent should be called")
})
}
}
@@ -214,3 +241,18 @@ func (m *fakeAccessClient) Compile(ctx context.Context, id types.AuthInfo, req t
}
var _ types.AccessClient = (*fakeAccessClient)(nil)
type fakeParentProvider struct {
hasParent bool
getParentCalled bool
getParentFunc func(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error)
}
func (f *fakeParentProvider) HasParent(gr schema.GroupResource) bool {
return f.hasParent
}
func (f *fakeParentProvider) GetParent(ctx context.Context, gr schema.GroupResource, namespace, name string) (string, error) {
f.getParentCalled = true
return f.getParentFunc(ctx, gr, namespace, name)
}

View File

@@ -7,6 +7,7 @@ import (
"github.com/grafana/authlib/types"
"github.com/grafana/grafana/pkg/infra/log"
iamauthorizer "github.com/grafana/grafana/pkg/registry/apis/iam/authorizer"
"github.com/grafana/grafana/pkg/registry/apis/iam/externalgroupmapping"
"github.com/grafana/grafana/pkg/registry/apis/iam/legacy"
"github.com/grafana/grafana/pkg/registry/apis/iam/serviceaccount"
@@ -60,6 +61,10 @@ type IdentityAccessManagementAPIBuilder struct {
roleBindingsStorage RoleBindingStorageBackend
externalGroupMappingStorage ExternalGroupMappingStorageBackend
// Required for resource permissions authorization
// fetches resources parent folders
resourceParentProvider iamauthorizer.ParentProvider
// Access Control
authorizer authorizer.Authorizer
// legacyAccessClient is used for the identity apis, we need to migrate to the access client
@@ -77,10 +82,11 @@ type IdentityAccessManagementAPIBuilder struct {
reg prometheus.Registerer
logger log.Logger
dual dualwrite.Service
unified resource.ResourceClient
userSearchClient resourcepb.ResourceIndexClient
teamSearch *TeamSearchHandler
dual dualwrite.Service
unified resource.ResourceClient
userSearchClient resourcepb.ResourceIndexClient
userSearchHandler *user.SearchHandler
teamSearch *TeamSearchHandler
teamGroupsHandler externalgroupmapping.TeamGroupsHandler

View File

@@ -41,11 +41,13 @@ import (
"github.com/grafana/grafana/pkg/registry/apis/iam/teambinding"
"github.com/grafana/grafana/pkg/registry/apis/iam/user"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/apiserver"
gfauthorizer "github.com/grafana/grafana/pkg/services/apiserver/auth/authorizer"
"github.com/grafana/grafana/pkg/services/apiserver/auth/authorizer/storewrapper"
"github.com/grafana/grafana/pkg/services/apiserver/builder"
"github.com/grafana/grafana/pkg/services/authz/zanzana"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/services/ssosettings"
teamservice "github.com/grafana/grafana/pkg/services/team"
legacyuser "github.com/grafana/grafana/pkg/services/user"
@@ -76,8 +78,10 @@ func RegisterAPIService(
teamGroupsHandlerImpl externalgroupmapping.TeamGroupsHandler,
dual dualwrite.Service,
unified resource.ResourceClient,
orgService org.Service,
userService legacyuser.Service,
teamService teamservice.Service,
restConfig apiserver.RestConfigProvider,
) (*IdentityAccessManagementAPIBuilder, error) {
dbProvider := legacysql.NewDatabaseProvider(sql)
store := legacy.NewLegacySQLStores(dbProvider)
@@ -88,6 +92,11 @@ func RegisterAPIService(
//nolint:staticcheck // not yet migrated to OpenFeature
enableAuthnMutation := features.IsEnabledGlobally(featuremgmt.FlagKubernetesAuthnMutation)
resourceParentProvider := iamauthorizer.NewApiParentProvider(
iamauthorizer.NewLocalConfigProvider(restConfig.GetRestConfig),
iamauthorizer.Versions,
)
builder := &IdentityAccessManagementAPIBuilder{
store: store,
userLegacyStore: user.NewLegacyStore(store, accessClient, enableAuthnMutation, tracing),
@@ -102,6 +111,7 @@ func RegisterAPIService(
externalGroupMappingStorage: externalGroupMappingStorageBackend,
teamGroupsHandler: teamGroupsHandlerImpl,
sso: ssoService,
resourceParentProvider: resourceParentProvider,
authorizer: authorizer,
legacyAccessClient: legacyAccessClient,
accessClient: accessClient,
@@ -114,9 +124,11 @@ func RegisterAPIService(
dual: dual,
unified: unified,
userSearchClient: resource.NewSearchClient(dualwrite.NewSearchAdapter(dual), iamv0.UserResourceInfo.GroupResource(),
unified, user.NewUserLegacySearchClient(userService, tracing), features),
unified, user.NewUserLegacySearchClient(orgService, tracing, cfg), features),
teamSearch: NewTeamSearchHandler(tracing, dual, team.NewLegacyTeamSearchClient(teamService), unified, features),
}
builder.userSearchHandler = user.NewSearchHandler(tracing, builder.userSearchClient, features, cfg)
apiregistration.RegisterAPI(builder)
return builder, nil
@@ -138,6 +150,12 @@ func NewAPIService(
resourceAuthorizer := gfauthorizer.NewResourceAuthorizer(accessClient)
coreRoleAuthorizer := iamauthorizer.NewCoreRoleAuthorizer(accessClient)
// TODO: in a follow up PR, make this configurable
resourceParentProvider := iamauthorizer.NewApiParentProvider(
iamauthorizer.NewRemoteConfigProvider(map[schema.GroupResource]iamauthorizer.DialConfig{}, nil),
iamauthorizer.Versions,
)
return &IdentityAccessManagementAPIBuilder{
store: store,
display: user.NewLegacyDisplayREST(store),
@@ -148,6 +166,7 @@ func NewAPIService(
logger: log.New("iam.apis"),
features: features,
accessClient: accessClient,
resourceParentProvider: resourceParentProvider,
zClient: zClient,
zTickets: make(chan bool, MaxConcurrentZanzanaWrites),
reg: reg,
@@ -440,7 +459,7 @@ func (b *IdentityAccessManagementAPIBuilder) UpdateResourcePermissionsAPIGroup(
return fmt.Errorf("expected RegistryStoreDualWrite, got %T", dw)
}
authzWrapper := storewrapper.New(regStoreDW, iamauthorizer.NewResourcePermissionsAuthorizer(b.accessClient))
authzWrapper := storewrapper.New(regStoreDW, iamauthorizer.NewResourcePermissionsAuthorizer(b.accessClient, b.resourceParentProvider))
storage[iamv0.ResourcePermissionInfo.StoragePath()] = authzWrapper
return nil
@@ -510,10 +529,18 @@ func (b *IdentityAccessManagementAPIBuilder) PostProcessOpenAPI(oas *spec3.OpenA
func (b *IdentityAccessManagementAPIBuilder) GetAPIRoutes(gv schema.GroupVersion) *builder.APIRoutes {
defs := b.GetOpenAPIDefinitions()(func(path string) spec.Ref { return spec.Ref{} })
routes := b.teamSearch.GetAPIRoutes(defs)
routes.Namespace = append(routes.Namespace, b.display.GetAPIRoutes(defs).Namespace...)
searchRoutes := make([]*builder.APIRoutes, 0, 2)
if b.userSearchHandler != nil {
searchRoutes = append(searchRoutes, b.userSearchHandler.GetAPIRoutes(defs))
}
return routes
if b.teamSearch != nil {
searchRoutes = append(searchRoutes, b.teamSearch.GetAPIRoutes(defs))
}
routes := []*builder.APIRoutes{b.display.GetAPIRoutes(defs)}
routes = append(routes, searchRoutes...)
return mergeAPIRoutes(routes...)
}
func (b *IdentityAccessManagementAPIBuilder) GetAuthorizer() authorizer.Authorizer {
@@ -621,3 +648,15 @@ func NewLocalStore(resourceInfo utils.ResourceInfo, scheme *runtime.Scheme, defa
store, err := grafanaregistry.NewRegistryStore(scheme, resourceInfo, optsGetter)
return store, err
}
func mergeAPIRoutes(routes ...*builder.APIRoutes) *builder.APIRoutes {
merged := &builder.APIRoutes{}
for _, r := range routes {
if r == nil {
continue
}
merged.Root = append(merged.Root, r.Root...)
merged.Namespace = append(merged.Namespace, r.Namespace...)
}
return merged
}
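The `mergeAPIRoutes` helper above concatenates the `Root` and `Namespace` slices of each non-nil `builder.APIRoutes`, which is what lets `GetAPIRoutes` combine the display routes with the optional user and team search routes. The same nil-skipping merge can be sketched in isolation (with a local stand-in for `builder.APIRoutes`, since the real type lives in Grafana's apiserver package):

```go
package main

import "fmt"

// routes is a stand-in for builder.APIRoutes: two independent route slices.
type routes struct {
	Root      []string
	Namespace []string
}

// mergeRoutes mirrors mergeAPIRoutes: nil entries are skipped,
// everything else is appended in input order.
func mergeRoutes(rs ...*routes) *routes {
	merged := &routes{}
	for _, r := range rs {
		if r == nil {
			continue
		}
		merged.Root = append(merged.Root, r.Root...)
		merged.Namespace = append(merged.Namespace, r.Namespace...)
	}
	return merged
}

func main() {
	m := mergeRoutes(
		&routes{Namespace: []string{"display"}},
		nil, // e.g. userSearchHandler not configured
		&routes{Namespace: []string{"searchUsers"}},
	)
	fmt.Println(m.Namespace)
}
```

Because nil handlers simply contribute nothing, the caller never has to special-case which of `userSearchHandler` or `teamSearch` is present.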


@@ -2,16 +2,22 @@ package user
import (
"context"
"encoding/binary"
"fmt"
"log/slog"
"math"
"regexp"
"sort"
"go.opentelemetry.io/otel/trace"
"google.golang.org/grpc"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/services/user"
res "github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/services/search/model"
"github.com/grafana/grafana/pkg/services/searchusers/sortopts"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
"github.com/grafana/grafana/pkg/storage/unified/search/builders"
)
@@ -21,28 +27,36 @@ const (
UserResourceGroup = "iam.grafana.com"
)
var _ resourcepb.ResourceIndexClient = (*UserLegacySearchClient)(nil)
var (
_ resourcepb.ResourceIndexClient = (*UserLegacySearchClient)(nil)
fieldLogin = fmt.Sprintf("%s%s", resource.SEARCH_FIELD_PREFIX, builders.USER_LOGIN)
fieldEmail = fmt.Sprintf("%s%s", resource.SEARCH_FIELD_PREFIX, builders.USER_EMAIL)
fieldLastSeenAt = fmt.Sprintf("%s%s", resource.SEARCH_FIELD_PREFIX, builders.USER_LAST_SEEN_AT)
fieldRole = fmt.Sprintf("%s%s", resource.SEARCH_FIELD_PREFIX, builders.USER_ROLE)
wildcardsMatcher = regexp.MustCompile(`[\*\?\\]`)
)
// UserLegacySearchClient is a client for searching for users in the legacy search engine.
type UserLegacySearchClient struct {
resourcepb.ResourceIndexClient
userService user.Service
log *slog.Logger
tracer trace.Tracer
orgService org.Service
log *slog.Logger
tracer trace.Tracer
cfg *setting.Cfg
}
// NewUserLegacySearchClient creates a new UserLegacySearchClient.
func NewUserLegacySearchClient(userService user.Service, tracer trace.Tracer) *UserLegacySearchClient {
func NewUserLegacySearchClient(orgService org.Service, tracer trace.Tracer, cfg *setting.Cfg) *UserLegacySearchClient {
return &UserLegacySearchClient{
userService: userService,
log: slog.Default().With("logger", "legacy-user-search-client"),
tracer: tracer,
orgService: orgService,
log: slog.Default().With("logger", "legacy-user-search-client"),
tracer: tracer,
cfg: cfg,
}
}
// Search searches for users in the legacy search engine.
// It only supports exact matching for title, login, or email.
// FIXME: This implementation only supports a single-field query and will be extended in the future.
func (c *UserLegacySearchClient) Search(ctx context.Context, req *resourcepb.ResourceSearchRequest, _ ...grpc.CallOption) (*resourcepb.ResourceSearchResponse, error) {
ctx, span := c.tracer.Start(ctx, "user.Search")
defer span.End()
@@ -52,21 +66,30 @@ func (c *UserLegacySearchClient) Search(ctx context.Context, req *resourcepb.Res
return nil, err
}
if req.Limit > 100 {
req.Limit = 100
if req.Limit > maxLimit {
req.Limit = maxLimit
}
if req.Limit <= 0 {
req.Limit = 1
req.Limit = 30
}
if req.Page > math.MaxInt32 || req.Page < 0 {
return nil, fmt.Errorf("invalid page number: %d", req.Page)
}
query := &user.SearchUsersQuery{
SignedInUser: signedInUser,
Limit: int(req.Limit),
Page: int(req.Page),
if req.Page < 1 {
req.Page = 1
}
legacySortOptions := convertToSortOptions(req.SortBy)
query := &org.SearchOrgUsersQuery{
OrgID: signedInUser.GetOrgID(),
Limit: int(req.Limit),
Page: int(req.Page),
SortOpts: legacySortOptions,
User: signedInUser,
}
var title, login, email string
@@ -76,19 +99,15 @@ func (c *UserLegacySearchClient) Search(ctx context.Context, req *resourcepb.Res
c.log.Warn("only single value fields are supported for legacy search, using first value", "field", field.Key, "values", vals)
}
switch field.Key {
case res.SEARCH_FIELD_TITLE:
case resource.SEARCH_FIELD_TITLE:
title = vals[0]
case "fields.login":
case fieldLogin:
login = vals[0]
case "fields.email":
case fieldEmail:
email = vals[0]
}
}
if title == "" && login == "" && email == "" {
return nil, fmt.Errorf("at least one of title, login, or email must be provided for the query")
}
// The user store's Search method combines these into an OR.
// For legacy search we can only supply one.
if title != "" {
@@ -99,20 +118,35 @@ func (c *UserLegacySearchClient) Search(ctx context.Context, req *resourcepb.Res
query.Query = email
}
columns := getColumns(req.Fields)
// Unified search `query` has wildcards, but legacy search does not support them.
// We have to remove them here to make legacy search work as expected with SQL LIKE queries.
if req.Query != "" {
query.Query = wildcardsMatcher.ReplaceAllString(req.Query, "")
}
fields := req.Fields
if len(fields) == 0 {
fields = []string{resource.SEARCH_FIELD_TITLE, fieldEmail, fieldLogin, fieldLastSeenAt, fieldRole}
}
columns := getColumns(fields)
list := &resourcepb.ResourceSearchResponse{
Results: &resourcepb.ResourceTable{
Columns: columns,
},
}
res, err := c.userService.Search(ctx, query)
res, err := c.orgService.SearchOrgUsers(ctx, query)
if err != nil {
return nil, err
}
for _, u := range res.Users {
cells := createBaseCells(u, req.Fields)
for _, u := range res.OrgUsers {
if c.isHiddenUser(u.Login, signedInUser) {
continue
}
cells := createCells(u, req.Fields)
list.Results.Rows = append(list.Results.Rows, &resourcepb.ResourceTableRow{
Key: getResourceKey(u, req.Options.Key.Namespace),
Cells: cells,
@@ -123,7 +157,19 @@ func (c *UserLegacySearchClient) Search(ctx context.Context, req *resourcepb.Res
return list, nil
}
func getResourceKey(item *user.UserSearchHitDTO, namespace string) *resourcepb.ResourceKey {
func (c *UserLegacySearchClient) isHiddenUser(login string, signedInUser identity.Requester) bool {
if login == "" || signedInUser.GetIsGrafanaAdmin() || login == signedInUser.GetUsername() {
return false
}
if _, hidden := c.cfg.HiddenUsers[login]; hidden {
return true
}
return false
}
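The `isHiddenUser` check above has three early exits: an empty login, a Grafana admin as the requester, or the requester looking at their own row all bypass filtering; only then is the configured hidden-users set consulted. A minimal sketch of the same decision, with the requester fields flattened into plain parameters since `identity.Requester` is a Grafana interface:

```go
package main

import "fmt"

// isHidden mirrors isHiddenUser: Grafana admins and the signed-in user
// themselves are never filtered; anyone else is hidden when their login
// appears in the configured hidden-users set.
func isHidden(login, currentUser string, isAdmin bool, hiddenUsers map[string]struct{}) bool {
	if login == "" || isAdmin || login == currentUser {
		return false
	}
	_, hidden := hiddenUsers[login]
	return hidden
}

func main() {
	hidden := map[string]struct{}{"svc-bot": {}}
	fmt.Println(isHidden("svc-bot", "alice", false, hidden))   // hidden from a regular user
	fmt.Println(isHidden("svc-bot", "svc-bot", false, hidden)) // always visible to itself
	fmt.Println(isHidden("svc-bot", "alice", true, hidden))    // admins see everyone
}
```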
func getResourceKey(item *org.OrgUserDTO, namespace string) *resourcepb.ResourceKey {
return &resourcepb.ResourceKey{
Namespace: namespace,
Group: UserResourceGroup,
@@ -133,42 +179,74 @@ func getResourceKey(item *user.UserSearchHitDTO, namespace string) *resourcepb.R
}
func getColumns(fields []string) []*resourcepb.ResourceTableColumnDefinition {
columns := defaultColumns()
cols := make([]*resourcepb.ResourceTableColumnDefinition, 0, len(fields))
standardSearchFields := resource.StandardSearchFields()
for _, field := range fields {
switch field {
case "email":
columns = append(columns, builders.UserTableColumnDefinitions[builders.USER_EMAIL])
case "login":
columns = append(columns, builders.UserTableColumnDefinitions[builders.USER_LOGIN])
case resource.SEARCH_FIELD_TITLE:
cols = append(cols, standardSearchFields.Field(resource.SEARCH_FIELD_TITLE))
case fieldLastSeenAt:
cols = append(cols, builders.UserTableColumnDefinitions[builders.USER_LAST_SEEN_AT])
case fieldRole:
cols = append(cols, builders.UserTableColumnDefinitions[builders.USER_ROLE])
case fieldEmail:
cols = append(cols, builders.UserTableColumnDefinitions[builders.USER_EMAIL])
case fieldLogin:
cols = append(cols, builders.UserTableColumnDefinitions[builders.USER_LOGIN])
}
}
return columns
return cols
}
func createBaseCells(u *user.UserSearchHitDTO, fields []string) [][]byte {
cells := createDefaultCells(u)
func createCells(u *org.OrgUserDTO, fields []string) [][]byte {
cells := make([][]byte, 0, len(fields))
for _, field := range fields {
switch field {
case "email":
case resource.SEARCH_FIELD_TITLE:
cells = append(cells, []byte(u.Name))
case fieldEmail:
cells = append(cells, []byte(u.Email))
case "login":
case fieldLogin:
cells = append(cells, []byte(u.Login))
case fieldLastSeenAt:
b := make([]byte, 8)
binary.BigEndian.PutUint64(b, uint64(u.LastSeenAt.Unix()))
cells = append(cells, b)
case fieldRole:
cells = append(cells, []byte(u.Role))
}
}
return cells
}
func createDefaultCells(u *user.UserSearchHitDTO) [][]byte {
return [][]byte{
[]byte(u.UID),
[]byte(u.Name),
}
}
func convertToSortOptions(sortBy []*resourcepb.ResourceSearchRequest_Sort) []model.SortOption {
opts := []model.SortOption{}
for _, s := range sortBy {
field := s.Field
// Handle mapping if necessary
switch field {
case fieldLastSeenAt:
field = "lastSeenAtAge"
case resource.SEARCH_FIELD_TITLE:
field = "name"
case fieldLogin:
field = "login"
case fieldEmail:
field = "email"
}
func defaultColumns() []*resourcepb.ResourceTableColumnDefinition {
searchFields := res.StandardSearchFields()
return []*resourcepb.ResourceTableColumnDefinition{
searchFields.Field(res.SEARCH_FIELD_NAME),
searchFields.Field(res.SEARCH_FIELD_TITLE),
suffix := "asc"
if s.Desc {
suffix = "desc"
}
key := fmt.Sprintf("%s-%s", field, suffix)
if opt, ok := sortopts.SortOptionsByQueryParam[key]; ok {
opts = append(opts, opt)
}
}
sort.Slice(opts, func(i, j int) bool {
return opts[i].Index < opts[j].Index || (opts[i].Index == opts[j].Index && opts[i].Name < opts[j].Name)
})
return opts
}
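`convertToSortOptions` bridges two naming schemes: unified-search sort fields (e.g. `fields.lastSeenAt`, `title`) are first mapped to their legacy names, then combined with an `asc`/`desc` suffix to form the query-param key looked up in `sortopts.SortOptionsByQueryParam`. A sketch of just the key construction, assuming the `fields.` prefix implied by the old literals in this diff and using a hypothetical subset of the mapping:

```go
package main

import "fmt"

// sortKey mirrors the key built in convertToSortOptions: map the unified
// field name to its legacy counterpart, then append "-asc" or "-desc".
func sortKey(field string, desc bool) string {
	// hypothetical subset of the field mapping shown above
	mapping := map[string]string{
		"fields.lastSeenAt": "lastSeenAtAge",
		"title":             "name",
		"fields.login":      "login",
		"fields.email":      "email",
	}
	if mapped, ok := mapping[field]; ok {
		field = mapped
	}
	suffix := "asc"
	if desc {
		suffix = "desc"
	}
	return fmt.Sprintf("%s-%s", field, suffix)
}

func main() {
	fmt.Println(sortKey("title", false))
	fmt.Println(sortKey("fields.lastSeenAt", true))
}
```

Keys that miss the lookup table are silently dropped in the real function, so unknown sort fields degrade to the default ordering instead of failing the request.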


@@ -5,7 +5,7 @@ import (
"google.golang.org/grpc"
"github.com/grafana/grafana/pkg/services/user"
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)
@@ -13,7 +13,7 @@ import (
type FakeUserLegacySearchClient struct {
resourcepb.ResourceIndexClient
SearchFunc func(ctx context.Context, req *resourcepb.ResourceSearchRequest, opts ...grpc.CallOption) (*resourcepb.ResourceSearchResponse, error)
Users []*user.UserSearchHitDTO
Users []*org.OrgUserDTO
}
// Search calls the underlying SearchFunc or simulates a search over the Users slice.
@@ -23,7 +23,7 @@ func (c *FakeUserLegacySearchClient) Search(ctx context.Context, req *resourcepb
}
// Basic filtering for testing purposes
var filteredUsers []*user.UserSearchHitDTO
var filteredUsers []*org.OrgUserDTO
var queryValue string
for _, field := range req.Options.Fields {
@@ -43,7 +43,7 @@ func (c *FakeUserLegacySearchClient) Search(ctx context.Context, req *resourcepb
for _, u := range filteredUsers {
rows = append(rows, &resourcepb.ResourceTableRow{
Key: getResourceKey(u, req.Options.Key.Namespace),
Cells: createBaseCells(u, req.Fields),
Cells: createCells(u, req.Fields),
})
}


@@ -9,29 +9,15 @@ import (
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/services/org/orgtest"
"github.com/grafana/grafana/pkg/services/user"
"github.com/grafana/grafana/pkg/services/user/usertest"
"github.com/grafana/grafana/pkg/setting"
res "github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)
func TestUserLegacySearchClient_Search(t *testing.T) {
t.Run("should return error if no query fields are provided", func(t *testing.T) {
mockUserService := usertest.NewMockService(t)
client := NewUserLegacySearchClient(mockUserService, tracing.NewNoopTracerService())
ctx := identity.WithRequester(context.Background(), &user.SignedInUser{OrgID: 1, UserID: 1})
req := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{
Key: &resourcepb.ResourceKey{Namespace: "default"},
},
}
_, err := client.Search(ctx, req)
require.Error(t, err)
require.Equal(t, "at least one of title, login, or email must be provided for the query", err.Error())
})
testCases := []struct {
name string
fieldKey string
@@ -66,8 +52,8 @@ func TestUserLegacySearchClient_Search(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
mockUserService := usertest.NewMockService(t)
client := NewUserLegacySearchClient(mockUserService, tracing.NewNoopTracerService())
mockOrgService := orgtest.NewMockService(t)
client := NewUserLegacySearchClient(mockOrgService, tracing.NewNoopTracerService(), &setting.Cfg{})
ctx := identity.WithRequester(context.Background(), &user.SignedInUser{OrgID: 1, UserID: 1})
req := &resourcepb.ResourceSearchRequest{
Limit: 10,
@@ -81,14 +67,14 @@ func TestUserLegacySearchClient_Search(t *testing.T) {
Fields: []string{"email", "login"},
}
mockUsers := []*user.UserSearchHitDTO{
{ID: 1, UID: "uid1", Name: "Test User 1", Email: "test1@example.com", Login: "testlogin1"},
mockUsers := []*org.OrgUserDTO{
{UID: "uid1", Name: "Test User 1", Email: "test1@example.com", Login: "testlogin1"},
}
mockUserService.On("Search", mock.Anything, mock.MatchedBy(func(q *user.SearchUsersQuery) bool {
mockOrgService.On("SearchOrgUsers", mock.Anything, mock.MatchedBy(func(q *org.SearchOrgUsersQuery) bool {
return q.Query == tc.expectedQuery && q.Limit == 10 && q.Page == 1
})).Return(&user.SearchUserQueryResult{
Users: mockUsers,
})).Return(&org.SearchOrgUsersQueryResult{
OrgUsers: mockUsers,
TotalCount: 1,
}, nil)
@@ -113,7 +99,7 @@ func TestUserLegacySearchClient_Search(t *testing.T) {
require.Equal(t, UserResource, row.Key.Resource)
require.Equal(t, u.UID, row.Key.Name)
expectedCells := createBaseCells(&user.UserSearchHitDTO{
expectedCells := createCells(&org.OrgUserDTO{
UID: u.UID,
Name: u.Name,
Email: u.Email,
@@ -125,8 +111,8 @@ func TestUserLegacySearchClient_Search(t *testing.T) {
}
t.Run("title should have precedence over login and email", func(t *testing.T) {
mockUserService := usertest.NewMockService(t)
client := NewUserLegacySearchClient(mockUserService, tracing.NewNoopTracerService())
mockOrgService := orgtest.NewMockService(t)
client := NewUserLegacySearchClient(mockOrgService, tracing.NewNoopTracerService(), &setting.Cfg{})
ctx := identity.WithRequester(context.Background(), &user.SignedInUser{OrgID: 1, UserID: 1})
req := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{
@@ -139,9 +125,9 @@ func TestUserLegacySearchClient_Search(t *testing.T) {
},
}
mockUserService.On("Search", mock.Anything, mock.MatchedBy(func(q *user.SearchUsersQuery) bool {
mockOrgService.On("SearchOrgUsers", mock.Anything, mock.MatchedBy(func(q *org.SearchOrgUsersQuery) bool {
return q.Query == "title"
})).Return(&user.SearchUserQueryResult{Users: []*user.UserSearchHitDTO{}, TotalCount: 0}, nil)
})).Return(&org.SearchOrgUsersQueryResult{OrgUsers: []*org.OrgUserDTO{}, TotalCount: 0}, nil)
_, err := client.Search(ctx, req)
require.NoError(t, err)


@@ -0,0 +1,399 @@
package user
import (
"encoding/binary"
"encoding/json"
"fmt"
"log/slog"
"net/http"
"net/url"
"regexp"
"slices"
"strconv"
"strings"
"time"
"go.opentelemetry.io/otel/trace"
"k8s.io/apimachinery/pkg/selection"
"k8s.io/kube-openapi/pkg/common"
"k8s.io/kube-openapi/pkg/spec3"
"k8s.io/kube-openapi/pkg/validation/spec"
iamv0 "github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/services/apiserver/builder"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
"github.com/grafana/grafana/pkg/storage/unified/search/builders"
"github.com/grafana/grafana/pkg/util"
"github.com/grafana/grafana/pkg/util/errhttp"
)
const maxLimit = 100
type SearchHandler struct {
log *slog.Logger
client resourcepb.ResourceIndexClient
tracer trace.Tracer
features featuremgmt.FeatureToggles
cfg *setting.Cfg
}
func NewSearchHandler(tracer trace.Tracer, searchClient resourcepb.ResourceIndexClient, features featuremgmt.FeatureToggles, cfg *setting.Cfg) *SearchHandler {
return &SearchHandler{
client: searchClient,
log: slog.Default().With("logger", "grafana-apiserver.user.search"),
tracer: tracer,
features: features,
cfg: cfg,
}
}
func (s *SearchHandler) GetAPIRoutes(defs map[string]common.OpenAPIDefinition) *builder.APIRoutes {
searchResults := defs["github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1.GetSearchUsers"].Schema
return &builder.APIRoutes{
Namespace: []builder.APIRouteHandler{
{
Path: "searchUsers",
Spec: &spec3.PathProps{
Get: &spec3.Operation{
OperationProps: spec3.OperationProps{
Description: "User search",
Tags: []string{"Search"},
OperationId: "getSearchUsers",
Parameters: []*spec3.Parameter{
{
ParameterProps: spec3.ParameterProps{
Name: "namespace",
In: "path",
Required: true,
Example: "default",
Description: "workspace",
Schema: spec.StringProperty(),
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "query",
In: "query",
Required: false,
Schema: spec.StringProperty(),
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "limit",
In: "query",
Description: "number of results to return",
Example: 30,
Required: false,
Schema: spec.Int64Property(),
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "page",
In: "query",
Description: "page number (starting from 1)",
Example: 1,
Required: false,
Schema: spec.Int64Property(),
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "offset",
In: "query",
Description: "number of results to skip",
Example: 0,
Required: false,
Schema: spec.Int64Property(),
},
},
{
ParameterProps: spec3.ParameterProps{
Name: "sort",
In: "query",
Description: "sortable field",
Example: "",
Examples: map[string]*spec3.Example{
"": {
ExampleProps: spec3.ExampleProps{
Summary: "default sorting",
Value: "",
},
},
"title": {
ExampleProps: spec3.ExampleProps{
Summary: "title ascending",
Value: "title",
},
},
"-title": {
ExampleProps: spec3.ExampleProps{
Summary: "title descending",
Value: "-title",
},
},
"lastSeenAt": {
ExampleProps: spec3.ExampleProps{
Summary: "last seen at ascending",
Value: "lastSeenAt",
},
},
"-lastSeenAt": {
ExampleProps: spec3.ExampleProps{
Summary: "last seen at descending",
Value: "-lastSeenAt",
},
},
"email": {
ExampleProps: spec3.ExampleProps{
Summary: "email ascending",
Value: "email",
},
},
"-email": {
ExampleProps: spec3.ExampleProps{
Summary: "email descending",
Value: "-email",
},
},
"login": {
ExampleProps: spec3.ExampleProps{
Summary: "login ascending",
Value: "login",
},
},
"-login": {
ExampleProps: spec3.ExampleProps{
Summary: "login descending",
Value: "-login",
},
},
},
Required: false,
Schema: spec.StringProperty(),
},
},
},
Responses: &spec3.Responses{
ResponsesProps: spec3.ResponsesProps{
Default: &spec3.Response{
ResponseProps: spec3.ResponseProps{
Description: "Default OK response",
Content: map[string]*spec3.MediaType{
"application/json": {
MediaTypeProps: spec3.MediaTypeProps{
Schema: &searchResults,
},
},
},
},
},
},
},
},
},
},
Handler: s.DoSearch,
},
},
}
}
func (s *SearchHandler) DoSearch(w http.ResponseWriter, r *http.Request) {
ctx, span := s.tracer.Start(r.Context(), "user.search")
defer span.End()
queryParams, err := url.ParseQuery(r.URL.RawQuery)
if err != nil {
errhttp.Write(ctx, err, w)
return
}
requester, err := identity.GetRequester(ctx)
if err != nil {
errhttp.Write(ctx, fmt.Errorf("no identity found for request: %w", err), w)
return
}
limit := 30
offset := 0
page := 1
if queryParams.Has("limit") {
limit, _ = strconv.Atoi(queryParams.Get("limit"))
}
if queryParams.Has("offset") {
offset, _ = strconv.Atoi(queryParams.Get("offset"))
if offset > 0 && limit > 0 {
page = (offset / limit) + 1
}
} else if queryParams.Has("page") {
page, _ = strconv.Atoi(queryParams.Get("page"))
offset = (page - 1) * limit
}
// Escape bleve wildcard metacharacters so they are treated as literal strings.
rawQuery := escapeBleveQuery(queryParams.Get("query"))
searchQuery := fmt.Sprintf(`*%s*`, rawQuery)
userGvr := iamv0.UserResourceInfo.GroupResource()
request := &resourcepb.ResourceSearchRequest{
Options: &resourcepb.ListOptions{
Key: &resourcepb.ResourceKey{
Group: userGvr.Group,
Resource: userGvr.Resource,
Namespace: requester.GetNamespace(),
},
},
Query: searchQuery,
Fields: []string{resource.SEARCH_FIELD_TITLE, fieldEmail, fieldLogin, fieldLastSeenAt, fieldRole},
Limit: int64(limit),
Page: int64(page),
Offset: int64(offset),
}
if !requester.GetIsGrafanaAdmin() {
// FIXME: Use the new config service instead of the legacy one
hiddenUsers := []string{}
for user := range s.cfg.HiddenUsers {
if user != requester.GetUsername() {
hiddenUsers = append(hiddenUsers, user)
}
}
if len(hiddenUsers) > 0 {
request.Options.Fields = append(request.Options.Fields, &resourcepb.Requirement{
Key: fieldLogin,
Operator: string(selection.NotIn),
Values: hiddenUsers,
})
}
}
if queryParams.Has("sort") {
for _, sort := range queryParams["sort"] {
currField := sort
desc := false
if strings.HasPrefix(sort, "-") {
currField = sort[1:]
desc = true
}
if slices.Contains(builders.UserSortableExtraFields, currField) {
sort = resource.SEARCH_FIELD_PREFIX + currField
} else {
sort = currField
}
s := &resourcepb.ResourceSearchRequest_Sort{
Field: sort,
Desc: desc,
}
request.SortBy = append(request.SortBy, s)
}
}
resp, err := s.client.Search(ctx, request)
if err != nil {
errhttp.Write(ctx, err, w)
return
}
result, err := ParseResults(resp)
if err != nil {
errhttp.Write(ctx, err, w)
return
}
s.write(w, result)
}
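`DoSearch` accepts both paging styles and keeps them consistent: `limit` defaults to 30, an explicit `offset` wins over `page` and is converted to a 1-based page, and a bare `page` is converted back to an offset. That conversion can be sketched as a small pure function (parameter handling simplified; the real handler reads these from query params):

```go
package main

import "fmt"

// resolvePaging mirrors DoSearch's parameter handling: limit defaults to 30,
// an explicit offset is converted to a 1-based page, otherwise an explicit
// page is converted back to an offset.
func resolvePaging(limit, offset, page int, hasOffset, hasPage bool) (int, int, int) {
	if limit <= 0 {
		limit = 30
	}
	resolvedOffset := 0
	resolvedPage := 1
	switch {
	case hasOffset:
		resolvedOffset = offset
		if offset > 0 {
			resolvedPage = (offset / limit) + 1
		}
	case hasPage:
		resolvedPage = page
		resolvedOffset = (page - 1) * limit
	}
	return limit, resolvedOffset, resolvedPage
}

func main() {
	fmt.Println(resolvePaging(30, 60, 0, true, false)) // offset 60 -> page 3
	fmt.Println(resolvePaging(30, 0, 4, false, true))  // page 4 -> offset 90
}
```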
func (s *SearchHandler) write(w http.ResponseWriter, obj any) {
w.Header().Set("Content-Type", "application/json")
if err := json.NewEncoder(w).Encode(obj); err != nil {
s.log.Error("failed to encode JSON response", "error", err)
}
}
func ParseResults(result *resourcepb.ResourceSearchResponse) (*iamv0.GetSearchUsers, error) {
if result == nil {
return iamv0.NewGetSearchUsers(), nil
} else if result.Error != nil {
return iamv0.NewGetSearchUsers(), fmt.Errorf("%d error searching: %s: %s", result.Error.Code, result.Error.Message, result.Error.Details)
} else if result.Results == nil {
return iamv0.NewGetSearchUsers(), nil
}
titleIDX := -1
emailIDX := -1
loginIDX := -1
lastSeenAtIDX := -1
roleIDX := -1
for i, v := range result.Results.Columns {
switch v.Name {
case resource.SEARCH_FIELD_TITLE:
titleIDX = i
case builders.USER_EMAIL:
emailIDX = i
case builders.USER_LOGIN:
loginIDX = i
case builders.USER_LAST_SEEN_AT:
lastSeenAtIDX = i
case builders.USER_ROLE:
roleIDX = i
}
}
sr := iamv0.NewGetSearchUsers()
sr.TotalHits = result.TotalHits
sr.QueryCost = result.QueryCost
sr.MaxScore = result.MaxScore
sr.Hits = make([]iamv0.UserHit, 0, len(result.Results.Rows))
for _, row := range result.Results.Rows {
if len(row.Cells) != len(result.Results.Columns) {
return iamv0.NewGetSearchUsers(), fmt.Errorf("error parsing user search response: mismatch number of columns and cells")
}
var login string
if loginIDX >= 0 && row.Cells[loginIDX] != nil {
login = string(row.Cells[loginIDX])
}
hit := iamv0.UserHit{
Name: row.Key.Name,
Login: login,
}
if titleIDX >= 0 && row.Cells[titleIDX] != nil {
hit.Title = string(row.Cells[titleIDX])
}
if emailIDX >= 0 && row.Cells[emailIDX] != nil {
hit.Email = string(row.Cells[emailIDX])
}
if roleIDX >= 0 && row.Cells[roleIDX] != nil {
hit.Role = string(row.Cells[roleIDX])
}
if lastSeenAtIDX >= 0 && row.Cells[lastSeenAtIDX] != nil {
if len(row.Cells[lastSeenAtIDX]) == 8 {
hit.LastSeenAt = int64(binary.BigEndian.Uint64(row.Cells[lastSeenAtIDX]))
hit.LastSeenAtAge = util.GetAgeString(time.Unix(hit.LastSeenAt, 0))
}
}
sr.Hits = append(sr.Hits, hit)
}
return sr, nil
}
var bleveEscapeRegex = regexp.MustCompile(`([\\*?])`)
func escapeBleveQuery(query string) string {
return bleveEscapeRegex.ReplaceAllString(query, `\$1`)
}
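The escaping above backslash-quotes the three characters bleve's wildcard syntax treats specially (`*`, `?`, and the escape character `\` itself) before the handler wraps the query in `*…*`. The same one-liner can be exercised standalone; the cases match the `TestEscapeBleveQuery` table later in this diff:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as escapeBleveQuery: capture each bleve wildcard
// metacharacter and prefix it with a backslash.
var bleveEscape = regexp.MustCompile(`([\\*?])`)

func escape(q string) string {
	return bleveEscape.ReplaceAllString(q, `\$1`)
}

func main() {
	fmt.Println(escape("foo*bar")) // foo\*bar
	fmt.Println(escape(`a\b?`))    // a\\b\?
}
```

Without this, a user typing a literal `*` would silently broaden the match instead of searching for the character.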


@@ -0,0 +1,169 @@
package user
import (
"context"
"net/http/httptest"
"testing"
"google.golang.org/grpc"
iamv0 "github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/apiserver/rest"
"github.com/grafana/grafana/pkg/infra/tracing"
"github.com/grafana/grafana/pkg/services/featuremgmt"
legacyuser "github.com/grafana/grafana/pkg/services/user"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/storage/legacysql/dualwrite"
"github.com/grafana/grafana/pkg/storage/unified/resource"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
)
func TestSearchFallback(t *testing.T) {
tests := []struct {
name string
mode rest.DualWriterMode
expectUnified bool
}{
{name: "should hit legacy search handler on mode 0", mode: rest.Mode0, expectUnified: false},
{name: "should hit legacy search handler on mode 1", mode: rest.Mode1, expectUnified: false},
{name: "should hit legacy search handler on mode 2", mode: rest.Mode2, expectUnified: false},
{name: "should hit unified storage search handler on mode 3", mode: rest.Mode3, expectUnified: true},
{name: "should hit unified storage search handler on mode 4", mode: rest.Mode4, expectUnified: true},
{name: "should hit unified storage search handler on mode 5", mode: rest.Mode5, expectUnified: true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
mockClient := &MockClient{}
mockLegacyClient := &MockClient{}
cfg := &setting.Cfg{
UnifiedStorage: map[string]setting.UnifiedStorageConfig{
"users.iam.grafana.app": {DualWriterMode: tt.mode},
},
}
dual := dualwrite.ProvideStaticServiceForTests(cfg)
searchClient := resource.NewSearchClient(dualwrite.NewSearchAdapter(dual), iamv0.UserResourceInfo.GroupResource(), mockClient, mockLegacyClient, featuremgmt.WithFeatures())
searchHandler := NewSearchHandler(tracing.NewNoopTracerService(), searchClient, featuremgmt.WithFeatures(), cfg)
rr := httptest.NewRecorder()
req := httptest.NewRequest("GET", "/searchUsers", nil)
req.Header.Add("content-type", "application/json")
req = req.WithContext(identity.WithRequester(req.Context(), &legacyuser.SignedInUser{Namespace: "test"}))
searchHandler.DoSearch(rr, req)
if tt.expectUnified {
if mockClient.LastSearchRequest == nil {
t.Fatalf("expected Unified Search to be called, but it was not")
}
} else {
if mockLegacyClient.LastSearchRequest == nil {
t.Fatalf("expected Legacy Search to be called, but it was not")
}
}
})
}
}
// MockClient implements the ResourceIndexClient interface for testing
type MockClient struct {
resourcepb.ResourceIndexClient
resource.ResourceIndex
LastSearchRequest *resourcepb.ResourceSearchRequest
MockResponses []*resourcepb.ResourceSearchResponse
MockCalls []*resourcepb.ResourceSearchRequest
CallCount int
}
func (m *MockClient) Search(ctx context.Context, in *resourcepb.ResourceSearchRequest, opts ...grpc.CallOption) (*resourcepb.ResourceSearchResponse, error) {
m.LastSearchRequest = in
m.MockCalls = append(m.MockCalls, in)
var response *resourcepb.ResourceSearchResponse
if m.CallCount < len(m.MockResponses) {
response = m.MockResponses[m.CallCount]
}
m.CallCount = m.CallCount + 1
if response == nil {
response = &resourcepb.ResourceSearchResponse{}
}
return response, nil
}
func (m *MockClient) GetStats(ctx context.Context, in *resourcepb.ResourceStatsRequest, opts ...grpc.CallOption) (*resourcepb.ResourceStatsResponse, error) {
return nil, nil
}
func (m *MockClient) CountManagedObjects(ctx context.Context, in *resourcepb.CountManagedObjectsRequest, opts ...grpc.CallOption) (*resourcepb.CountManagedObjectsResponse, error) {
return nil, nil
}
func (m *MockClient) Watch(ctx context.Context, in *resourcepb.WatchRequest, opts ...grpc.CallOption) (resourcepb.ResourceStore_WatchClient, error) {
return nil, nil
}
func (m *MockClient) Delete(ctx context.Context, in *resourcepb.DeleteRequest, opts ...grpc.CallOption) (*resourcepb.DeleteResponse, error) {
return nil, nil
}
func (m *MockClient) Create(ctx context.Context, in *resourcepb.CreateRequest, opts ...grpc.CallOption) (*resourcepb.CreateResponse, error) {
return nil, nil
}
func (m *MockClient) Update(ctx context.Context, in *resourcepb.UpdateRequest, opts ...grpc.CallOption) (*resourcepb.UpdateResponse, error) {
return nil, nil
}
func (m *MockClient) Read(ctx context.Context, in *resourcepb.ReadRequest, opts ...grpc.CallOption) (*resourcepb.ReadResponse, error) {
return nil, nil
}
func (m *MockClient) GetBlob(ctx context.Context, in *resourcepb.GetBlobRequest, opts ...grpc.CallOption) (*resourcepb.GetBlobResponse, error) {
return nil, nil
}
func (m *MockClient) PutBlob(ctx context.Context, in *resourcepb.PutBlobRequest, opts ...grpc.CallOption) (*resourcepb.PutBlobResponse, error) {
return nil, nil
}
func (m *MockClient) List(ctx context.Context, in *resourcepb.ListRequest, opts ...grpc.CallOption) (*resourcepb.ListResponse, error) {
return nil, nil
}
func (m *MockClient) ListManagedObjects(ctx context.Context, in *resourcepb.ListManagedObjectsRequest, opts ...grpc.CallOption) (*resourcepb.ListManagedObjectsResponse, error) {
return nil, nil
}
func (m *MockClient) IsHealthy(ctx context.Context, in *resourcepb.HealthCheckRequest, opts ...grpc.CallOption) (*resourcepb.HealthCheckResponse, error) {
return nil, nil
}
func (m *MockClient) BulkProcess(ctx context.Context, opts ...grpc.CallOption) (resourcepb.BulkStore_BulkProcessClient, error) {
return nil, nil
}
func (m *MockClient) UpdateIndex(ctx context.Context, reason string) error {
return nil
}
func (m *MockClient) GetQuotaUsage(ctx context.Context, req *resourcepb.QuotaUsageRequest, opts ...grpc.CallOption) (*resourcepb.QuotaUsageResponse, error) {
return nil, nil
}
func TestEscapeBleveQuery(t *testing.T) {
tests := []struct {
input string
expected string
}{
{input: "normal", expected: "normal"},
{input: "*", expected: "\\*"},
{input: "?", expected: "\\?"},
{input: "\\", expected: "\\\\"},
{input: "\\*", expected: "\\\\\\*"},
{input: "*\\?", expected: "\\*\\\\\\?"},
{input: "foo*bar", expected: "foo\\*bar"},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
got := escapeBleveQuery(tt.input)
if got != tt.expected {
t.Errorf("escapeBleveQuery(%q) = %q, want %q", tt.input, got, tt.expected)
}
})
}
}
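Only the test table for `escapeBleveQuery` appears in this diff, not the function itself. A minimal sketch consistent with the table — assuming the function escapes exactly the characters the tests exercise (`*`, `?`, and the escape character `\` itself) and nothing else — could look like:

```go
package main

import "strings"

// escapeBleveQuery backslash-escapes the Bleve wildcard characters
// '*' and '?' plus '\' itself, so user input is matched literally.
// Hypothetical sketch derived from the test table above, not the
// actual Grafana implementation.
func escapeBleveQuery(s string) string {
	// strings.Replacer scans left to right and applies the first
	// matching pair, so '\' is escaped before '*' and '?'.
	r := strings.NewReplacer(`\`, `\\`, `*`, `\*`, `?`, `\?`)
	return r.Replace(s)
}

func main() {
	println(escapeBleveQuery("foo*bar")) // foo\*bar
}
```

Since each input character maps independently to an escaped form, a single `strings.Replacer` pass reproduces every row of the table, including the compound cases like `\*` and `*\?`.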

View File

@@ -3,7 +3,6 @@ package user
import (
"context"
"fmt"
"time"
"go.opentelemetry.io/otel/trace"
"k8s.io/apimachinery/pkg/apis/meta/internalversion"
@@ -35,7 +34,7 @@ var (
_ rest.TableConvertor = (*LegacyStore)(nil)
)
var resource = iamv0alpha1.UserResourceInfo
var userResource = iamv0alpha1.UserResourceInfo
func NewLegacyStore(store legacy.LegacyIdentityStore, ac claims.AccessClient, enableAuthnMutation bool, tracer trace.Tracer) *LegacyStore {
return &LegacyStore{store, ac, enableAuthnMutation, tracer}
@@ -54,7 +53,7 @@ func (s *LegacyStore) Update(ctx context.Context, name string, objInfo rest.Upda
defer span.End()
if !s.enableAuthnMutation {
return nil, false, apierrors.NewMethodNotSupported(resource.GroupResource(), "update")
return nil, false, apierrors.NewMethodNotSupported(userResource.GroupResource(), "update")
}
ns, err := request.NamespaceInfoFrom(ctx, true)
@@ -105,7 +104,7 @@ func (s *LegacyStore) Update(ctx context.Context, name string, objInfo rest.Upda
// DeleteCollection implements rest.CollectionDeleter.
func (s *LegacyStore) DeleteCollection(ctx context.Context, deleteValidation rest.ValidateObjectFunc, options *metav1.DeleteOptions, listOptions *internalversion.ListOptions) (runtime.Object, error) {
return nil, apierrors.NewMethodNotSupported(resource.GroupResource(), "deletecollection")
return nil, apierrors.NewMethodNotSupported(userResource.GroupResource(), "deletecollection")
}
// Delete implements rest.GracefulDeleter.
@@ -114,7 +113,7 @@ func (s *LegacyStore) Delete(ctx context.Context, name string, deleteValidation
defer span.End()
if !s.enableAuthnMutation {
return nil, false, apierrors.NewMethodNotSupported(resource.GroupResource(), "delete")
return nil, false, apierrors.NewMethodNotSupported(userResource.GroupResource(), "delete")
}
ns, err := request.NamespaceInfoFrom(ctx, true)
@@ -131,7 +130,7 @@ func (s *LegacyStore) Delete(ctx context.Context, name string, deleteValidation
return nil, false, err
}
if found == nil || len(found.Items) < 1 {
return nil, false, resource.NewNotFound(name)
return nil, false, userResource.NewNotFound(name)
}
userToDelete := &found.Items[0]
@@ -157,7 +156,7 @@ func (s *LegacyStore) Delete(ctx context.Context, name string, deleteValidation
}
func (s *LegacyStore) New() runtime.Object {
return resource.NewFunc()
return userResource.NewFunc()
}
func (s *LegacyStore) Destroy() {}
@@ -167,15 +166,15 @@ func (s *LegacyStore) NamespaceScoped() bool {
}
func (s *LegacyStore) GetSingularName() string {
return resource.GetSingularName()
return userResource.GetSingularName()
}
func (s *LegacyStore) NewList() runtime.Object {
return resource.NewListFunc()
return userResource.NewListFunc()
}
func (s *LegacyStore) ConvertToTable(ctx context.Context, object runtime.Object, tableOptions runtime.Object) (*metav1.Table, error) {
return resource.TableConverter().ConvertToTable(ctx, object, tableOptions)
return userResource.TableConverter().ConvertToTable(ctx, object, tableOptions)
}
func (s *LegacyStore) List(ctx context.Context, options *internalversion.ListOptions) (runtime.Object, error) {
@@ -183,7 +182,7 @@ func (s *LegacyStore) List(ctx context.Context, options *internalversion.ListOpt
defer span.End()
res, err := common.List(
ctx, resource, s.ac, common.PaginationFromListOptions(options),
ctx, userResource, s.ac, common.PaginationFromListOptions(options),
func(ctx context.Context, ns claims.NamespaceInfo, p common.Pagination) (*common.ListResponse[iamv0alpha1.User], error) {
found, err := s.store.ListUsers(ctx, ns, legacy.ListUserQuery{
Pagination: p,
@@ -231,10 +230,10 @@ func (s *LegacyStore) Get(ctx context.Context, name string, options *metav1.GetO
Pagination: common.Pagination{Limit: 1},
})
if found == nil || err != nil {
return nil, resource.NewNotFound(name)
return nil, userResource.NewNotFound(name)
}
if len(found.Items) < 1 {
return nil, resource.NewNotFound(name)
return nil, userResource.NewNotFound(name)
}
obj := toUserItem(&found.Items[0], ns.Value)
@@ -247,7 +246,7 @@ func (s *LegacyStore) Create(ctx context.Context, obj runtime.Object, createVali
defer span.End()
if !s.enableAuthnMutation {
return nil, apierrors.NewMethodNotSupported(resource.GroupResource(), "create")
return nil, apierrors.NewMethodNotSupported(userResource.GroupResource(), "create")
}
ns, err := request.NamespaceInfoFrom(ctx, true)
@@ -310,18 +309,12 @@ func toUserItem(u *common.UserWithRole, ns string) iamv0alpha1.User {
Provisioned: u.IsProvisioned,
Role: u.Role,
},
Status: iamv0alpha1.UserStatus{
LastSeenAt: u.LastSeenAt.Unix(),
},
}
obj, _ := utils.MetaAccessor(item)
obj.SetUpdatedTimestamp(&u.Updated)
obj.SetAnnotation(AnnoKeyLastSeenAt, formatTime(&u.LastSeenAt))
obj.SetDeprecatedInternalID(u.ID) // nolint:staticcheck
return *item
}
func formatTime(v *time.Time) string {
txt := ""
if v != nil && v.Unix() != 0 {
txt = v.UTC().Format(time.RFC3339)
}
return txt
}

View File

@@ -128,7 +128,7 @@ func validateEmail(ctx context.Context, searchClient resourcepb.ResourceIndexCli
Operator: string(selection.Equals),
Values: []string{email},
},
}, []string{"name", "email", "login"})
}, []string{"fields.email", "fields.login"})
resp, err := searchClient.Search(ctx, req)
if err != nil {
@@ -159,7 +159,7 @@ func validateLogin(ctx context.Context, searchClient resourcepb.ResourceIndexCli
Operator: string(selection.Equals),
Values: []string{login},
},
}, []string{"name", "email", "login"})
}, []string{"fields.email", "fields.login"})
resp, err := searchClient.Search(ctx, req)
if err != nil {
return err

View File

@@ -9,7 +9,7 @@ import (
"github.com/grafana/authlib/types"
iamv0alpha1 "github.com/grafana/grafana/apps/iam/pkg/apis/iam/v0alpha1"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/services/user"
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/storage/unified/resourcepb"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -178,7 +178,7 @@ func TestValidateOnCreate(t *testing.T) {
IsGrafanaAdmin: false,
},
searchClient: &FakeUserLegacySearchClient{
Users: []*user.UserSearchHitDTO{
Users: []*org.OrgUserDTO{
{Email: "existing@example"},
},
},
@@ -202,7 +202,7 @@ func TestValidateOnCreate(t *testing.T) {
IsGrafanaAdmin: false,
},
searchClient: &FakeUserLegacySearchClient{
Users: []*user.UserSearchHitDTO{
Users: []*org.OrgUserDTO{
{Login: "existinguser"},
},
},
@@ -490,7 +490,7 @@ func TestValidateOnUpdate(t *testing.T) {
IsGrafanaAdmin: true,
},
searchClient: &FakeUserLegacySearchClient{
Users: []*user.UserSearchHitDTO{
Users: []*org.OrgUserDTO{
{Email: "two@example"},
},
},
@@ -516,7 +516,7 @@ func TestValidateOnUpdate(t *testing.T) {
IsGrafanaAdmin: true,
},
searchClient: &FakeUserLegacySearchClient{
Users: []*user.UserSearchHitDTO{
Users: []*org.OrgUserDTO{
{Name: "other", UID: "uid456", Login: "two"},
},
},
@@ -536,7 +536,7 @@ func TestValidateOnUpdate(t *testing.T) {
IsGrafanaAdmin: true,
},
searchClient: &FakeUserLegacySearchClient{
Users: []*user.UserSearchHitDTO{
Users: []*org.OrgUserDTO{
{Login: "testuser", Email: "test@example"},
},
},

View File

@@ -26,6 +26,18 @@ type StatusPatcher interface {
Patch(ctx context.Context, repo *provisioning.Repository, patchOperations ...map[string]interface{}) error
}
// HealthCheckerInterface defines the interface for health checking operations
//
//go:generate mockery --name=HealthCheckerInterface --structname=MockHealthChecker
type HealthCheckerInterface interface {
ShouldCheckHealth(repo *provisioning.Repository) bool
RefreshHealth(ctx context.Context, repo repository.Repository) (*provisioning.TestResults, provisioning.HealthStatus, error)
RefreshHealthWithPatchOps(ctx context.Context, repo repository.Repository) (*provisioning.TestResults, provisioning.HealthStatus, []map[string]interface{}, error)
RefreshTimestamp(ctx context.Context, repo *provisioning.Repository) error
RecordFailure(ctx context.Context, failureType provisioning.HealthFailureType, err error, repo *provisioning.Repository) error
HasRecentFailure(healthStatus provisioning.HealthStatus, failureType provisioning.HealthFailureType) bool
}
// HealthChecker provides unified health checking for repositories
type HealthChecker struct {
statusPatcher StatusPatcher
@@ -162,6 +174,33 @@ func (hc *HealthChecker) RefreshHealth(ctx context.Context, repo repository.Repo
return testResults, newHealthStatus, nil
}
// RefreshHealthWithPatchOps performs a health check on an existing repository
// and returns the test results, health status, and patch operations to apply.
// This method does NOT apply the patch itself, allowing the caller to batch
// multiple status updates together to avoid race conditions.
func (hc *HealthChecker) RefreshHealthWithPatchOps(ctx context.Context, repo repository.Repository) (*provisioning.TestResults, provisioning.HealthStatus, []map[string]interface{}, error) {
cfg := repo.Config()
// Use health checker to perform comprehensive health check with existing status
testResults, newHealthStatus, err := hc.refreshHealth(ctx, repo, cfg.Status.Health)
if err != nil {
return nil, provisioning.HealthStatus{}, nil, fmt.Errorf("health check failed: %w", err)
}
var patchOps []map[string]interface{}
// Only return patch operation if health status actually changed
if hc.hasHealthStatusChanged(cfg.Status.Health, newHealthStatus) {
patchOps = append(patchOps, map[string]interface{}{
"op": "replace",
"path": "/status/health",
"value": newHealthStatus,
})
}
return testResults, newHealthStatus, patchOps, nil
}
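The batching idea the comment describes can be sketched independently of the Grafana types: the caller collects JSON Patch operations from several sources and applies them in one request, so a concurrent writer cannot interleave between two separate status updates. The names and types below are illustrative, not the actual provisioning API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// patchOp is one JSON Patch (RFC 6902) operation, matching the
// []map[string]interface{} shape used by RefreshHealthWithPatchOps.
type patchOp = map[string]interface{}

// healthPatchOps returns a replace op only when the health status
// actually changed, mirroring the method above (illustrative fields).
func healthPatchOps(oldHealthy, newHealthy bool) []patchOp {
	if oldHealthy == newHealthy {
		return nil // no-op: avoid patch churn when nothing changed
	}
	return []patchOp{{
		"op":    "replace",
		"path":  "/status/health",
		"value": map[string]interface{}{"healthy": newHealthy},
	}}
}

func main() {
	// Batch the health ops together with a sync-status op and send
	// them as a single PATCH body instead of two separate updates.
	patch := healthPatchOps(true, false)
	patch = append(patch, patchOp{
		"op": "replace", "path": "/status/sync/state", "value": "error",
	})
	body, _ := json.Marshal(patch)
	fmt.Println(string(body))
}
```

This is the pattern the repository controller uses below: it appends `healthPatchOps` to its own `patchOperations` slice and issues one status patch for the whole reconcile pass.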
// RefreshTimestamp updates the health status timestamp without changing other fields
func (hc *HealthChecker) RefreshTimestamp(ctx context.Context, repo *provisioning.Repository) error {
// Update the timestamp on the existing health status

View File

@@ -532,6 +532,136 @@ func TestRefreshHealth(t *testing.T) {
}
}
func TestRefreshHealthWithPatchOps(t *testing.T) {
tests := []struct {
name string
testResult *provisioning.TestResults
testError error
existingStatus provisioning.HealthStatus
expectError bool
expectedHealth bool
expectPatchOps bool
expectedPatchPath string
}{
{
name: "successful health check with status change",
testResult: &provisioning.TestResults{
Success: true,
Code: 200,
},
testError: nil,
existingStatus: provisioning.HealthStatus{
Healthy: false,
Error: provisioning.HealthFailureHealth,
Checked: time.Now().Add(-time.Hour).UnixMilli(),
},
expectError: false,
expectedHealth: true,
expectPatchOps: true,
expectedPatchPath: "/status/health",
},
{
name: "failed health check with status change",
testResult: &provisioning.TestResults{
Success: false,
Code: 500,
Errors: []provisioning.ErrorDetails{
{Detail: "connection failed"},
},
},
testError: nil,
existingStatus: provisioning.HealthStatus{
Healthy: true,
Checked: time.Now().Add(-time.Hour).UnixMilli(),
},
expectError: false,
expectedHealth: false,
expectPatchOps: true,
expectedPatchPath: "/status/health",
},
{
name: "no status change - no patch ops returned",
testResult: &provisioning.TestResults{
Success: true,
Code: 200,
},
testError: nil,
existingStatus: provisioning.HealthStatus{
Healthy: true,
Checked: time.Now().Add(-15 * time.Second).UnixMilli(),
},
expectError: false,
expectedHealth: true,
expectPatchOps: false,
},
{
name: "test repository error",
testResult: nil,
testError: errors.New("repository test failed"),
existingStatus: provisioning.HealthStatus{
Healthy: true,
Checked: time.Now().Add(-time.Hour).UnixMilli(),
},
expectError: true,
expectedHealth: false,
expectPatchOps: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Create mock repository
mockRepo := &mockRepository{
config: &provisioning.Repository{
Spec: provisioning.RepositorySpec{
Title: "Test Repository",
Type: provisioning.LocalRepositoryType,
},
Status: provisioning.RepositoryStatus{
Health: tt.existingStatus,
},
},
testResult: tt.testResult,
testError: tt.testError,
}
// Create health checker with validator and tester
validator := repository.NewValidator(30*time.Second, []provisioning.SyncTargetType{provisioning.SyncTargetTypeFolder, provisioning.SyncTargetTypeInstance}, true)
hc := NewHealthChecker(nil, prometheus.NewPedanticRegistry(), repository.NewSimpleRepositoryTester(validator))
// Call RefreshHealthWithPatchOps
testResults, healthStatus, patchOps, err := hc.RefreshHealthWithPatchOps(context.Background(), mockRepo)
// Verify error
if tt.expectError {
assert.Error(t, err)
assert.Nil(t, testResults)
return
}
assert.NoError(t, err)
// Verify health status
assert.Equal(t, tt.expectedHealth, healthStatus.Healthy)
// Verify patch operations
if tt.expectPatchOps {
assert.NotEmpty(t, patchOps, "expected patch operations to be returned")
assert.Len(t, patchOps, 1)
assert.Equal(t, "replace", patchOps[0]["op"])
assert.Equal(t, tt.expectedPatchPath, patchOps[0]["path"])
assert.Equal(t, healthStatus, patchOps[0]["value"])
} else {
assert.Empty(t, patchOps, "expected no patch operations to be returned")
}
// Verify test results
if tt.testResult != nil {
assert.Equal(t, tt.testResult, testResults)
}
})
}
}
func TestHasHealthStatusChanged(t *testing.T) {
tests := []struct {
name string

View File

@@ -0,0 +1,187 @@
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks
import (
context "context"
mock "github.com/stretchr/testify/mock"
repository "github.com/grafana/grafana/apps/provisioning/pkg/repository"
v0alpha1 "github.com/grafana/grafana/apps/provisioning/pkg/apis/provisioning/v0alpha1"
)
// MockHealthChecker is an autogenerated mock type for the HealthCheckerInterface type
type MockHealthChecker struct {
mock.Mock
}
// HasRecentFailure provides a mock function with given fields: healthStatus, failureType
func (_m *MockHealthChecker) HasRecentFailure(healthStatus v0alpha1.HealthStatus, failureType v0alpha1.HealthFailureType) bool {
ret := _m.Called(healthStatus, failureType)
if len(ret) == 0 {
panic("no return value specified for HasRecentFailure")
}
var r0 bool
if rf, ok := ret.Get(0).(func(v0alpha1.HealthStatus, v0alpha1.HealthFailureType) bool); ok {
r0 = rf(healthStatus, failureType)
} else {
r0 = ret.Get(0).(bool)
}
return r0
}
// RecordFailure provides a mock function with given fields: ctx, failureType, err, repo
func (_m *MockHealthChecker) RecordFailure(ctx context.Context, failureType v0alpha1.HealthFailureType, err error, repo *v0alpha1.Repository) error {
ret := _m.Called(ctx, failureType, err, repo)
if len(ret) == 0 {
panic("no return value specified for RecordFailure")
}
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, v0alpha1.HealthFailureType, error, *v0alpha1.Repository) error); ok {
r0 = rf(ctx, failureType, err, repo)
} else {
r0 = ret.Error(0)
}
return r0
}
// RefreshHealth provides a mock function with given fields: ctx, repo
func (_m *MockHealthChecker) RefreshHealth(ctx context.Context, repo repository.Repository) (*v0alpha1.TestResults, v0alpha1.HealthStatus, error) {
ret := _m.Called(ctx, repo)
if len(ret) == 0 {
panic("no return value specified for RefreshHealth")
}
var r0 *v0alpha1.TestResults
var r1 v0alpha1.HealthStatus
var r2 error
if rf, ok := ret.Get(0).(func(context.Context, repository.Repository) (*v0alpha1.TestResults, v0alpha1.HealthStatus, error)); ok {
return rf(ctx, repo)
}
if rf, ok := ret.Get(0).(func(context.Context, repository.Repository) *v0alpha1.TestResults); ok {
r0 = rf(ctx, repo)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*v0alpha1.TestResults)
}
}
if rf, ok := ret.Get(1).(func(context.Context, repository.Repository) v0alpha1.HealthStatus); ok {
r1 = rf(ctx, repo)
} else {
r1 = ret.Get(1).(v0alpha1.HealthStatus)
}
if rf, ok := ret.Get(2).(func(context.Context, repository.Repository) error); ok {
r2 = rf(ctx, repo)
} else {
r2 = ret.Error(2)
}
return r0, r1, r2
}
// RefreshHealthWithPatchOps provides a mock function with given fields: ctx, repo
func (_m *MockHealthChecker) RefreshHealthWithPatchOps(ctx context.Context, repo repository.Repository) (*v0alpha1.TestResults, v0alpha1.HealthStatus, []map[string]interface{}, error) {
ret := _m.Called(ctx, repo)
if len(ret) == 0 {
panic("no return value specified for RefreshHealthWithPatchOps")
}
var r0 *v0alpha1.TestResults
var r1 v0alpha1.HealthStatus
var r2 []map[string]interface{}
var r3 error
if rf, ok := ret.Get(0).(func(context.Context, repository.Repository) (*v0alpha1.TestResults, v0alpha1.HealthStatus, []map[string]interface{}, error)); ok {
return rf(ctx, repo)
}
if rf, ok := ret.Get(0).(func(context.Context, repository.Repository) *v0alpha1.TestResults); ok {
r0 = rf(ctx, repo)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*v0alpha1.TestResults)
}
}
if rf, ok := ret.Get(1).(func(context.Context, repository.Repository) v0alpha1.HealthStatus); ok {
r1 = rf(ctx, repo)
} else {
r1 = ret.Get(1).(v0alpha1.HealthStatus)
}
if rf, ok := ret.Get(2).(func(context.Context, repository.Repository) []map[string]interface{}); ok {
r2 = rf(ctx, repo)
} else {
if ret.Get(2) != nil {
r2 = ret.Get(2).([]map[string]interface{})
}
}
if rf, ok := ret.Get(3).(func(context.Context, repository.Repository) error); ok {
r3 = rf(ctx, repo)
} else {
r3 = ret.Error(3)
}
return r0, r1, r2, r3
}
// RefreshTimestamp provides a mock function with given fields: ctx, repo
func (_m *MockHealthChecker) RefreshTimestamp(ctx context.Context, repo *v0alpha1.Repository) error {
ret := _m.Called(ctx, repo)
if len(ret) == 0 {
panic("no return value specified for RefreshTimestamp")
}
var r0 error
if rf, ok := ret.Get(0).(func(context.Context, *v0alpha1.Repository) error); ok {
r0 = rf(ctx, repo)
} else {
r0 = ret.Error(0)
}
return r0
}
// ShouldCheckHealth provides a mock function with given fields: repo
func (_m *MockHealthChecker) ShouldCheckHealth(repo *v0alpha1.Repository) bool {
ret := _m.Called(repo)
if len(ret) == 0 {
panic("no return value specified for ShouldCheckHealth")
}
var r0 bool
if rf, ok := ret.Get(0).(func(*v0alpha1.Repository) bool); ok {
r0 = rf(repo)
} else {
r0 = ret.Get(0).(bool)
}
return r0
}
// NewMockHealthChecker creates a new instance of MockHealthChecker. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockHealthChecker(t interface {
mock.TestingT
Cleanup(func())
}) *MockHealthChecker {
mock := &MockHealthChecker{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v2.52.4. DO NOT EDIT.
// Code generated by mockery v2.53.4. DO NOT EDIT.
package mocks

View File

@@ -561,11 +561,16 @@ func (rc *RepositoryController) process(item *queueItem) error {
}
// Handle health checks using the health checker
_, healthStatus, err := rc.healthChecker.RefreshHealth(ctx, repo)
_, healthStatus, healthPatchOps, err := rc.healthChecker.RefreshHealthWithPatchOps(ctx, repo)
if err != nil {
return fmt.Errorf("update health status: %w", err)
}
// Add health patch operations first
if len(healthPatchOps) > 0 {
patchOperations = append(patchOperations, healthPatchOps...)
}
// determine the sync strategy and sync status to apply
syncOptions := rc.determineSyncStrategy(ctx, obj, repo, shouldResync, healthStatus)
patchOperations = append(patchOperations, rc.determineSyncStatusOps(obj, syncOptions, healthStatus)...)

View File

@@ -350,6 +350,161 @@ type mockJobsQueueStore struct {
*jobs.MockStore
}
func TestRepositoryController_process_UnhealthyRepositoryStatusUpdate(t *testing.T) {
testCases := []struct {
name string
repo *provisioning.Repository
healthStatus provisioning.HealthStatus
hasHealthStatusChanged bool
expectedUnhealthyMessage bool
description string
}{
{
name: "unhealthy repository should set unhealthy message in sync status",
repo: &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
Namespace: "default",
Generation: 1,
},
Spec: provisioning.RepositorySpec{
Sync: provisioning.SyncOptions{
Enabled: true,
IntervalSeconds: 300,
},
},
Status: provisioning.RepositoryStatus{
ObservedGeneration: 1,
Health: provisioning.HealthStatus{
Healthy: true,
Checked: time.Now().Add(-10 * time.Minute).UnixMilli(),
},
Sync: provisioning.SyncStatus{
State: provisioning.JobStateSuccess,
Finished: time.Now().Add(-1 * time.Minute).UnixMilli(),
Message: []string{},
},
},
},
healthStatus: provisioning.HealthStatus{
Healthy: false,
Error: provisioning.HealthFailureHealth,
Checked: time.Now().UnixMilli(),
Message: []string{"connection failed"},
},
hasHealthStatusChanged: true,
expectedUnhealthyMessage: true,
description: "should set unhealthy message when repository becomes unhealthy",
},
{
name: "unhealthy repository should not duplicate unhealthy message",
repo: &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
Namespace: "default",
Generation: 1,
},
Spec: provisioning.RepositorySpec{
Sync: provisioning.SyncOptions{
Enabled: true,
IntervalSeconds: 300,
},
},
Status: provisioning.RepositoryStatus{
ObservedGeneration: 1,
Health: provisioning.HealthStatus{
Healthy: false,
Checked: time.Now().Add(-2 * time.Minute).UnixMilli(),
},
Sync: provisioning.SyncStatus{
State: provisioning.JobStateError,
Finished: time.Now().Add(-1 * time.Minute).UnixMilli(),
Message: []string{"Repository is unhealthy"},
},
},
},
healthStatus: provisioning.HealthStatus{
Healthy: false,
Error: provisioning.HealthFailureHealth,
Checked: time.Now().UnixMilli(),
Message: []string{"connection failed"},
},
hasHealthStatusChanged: false,
expectedUnhealthyMessage: false,
description: "should not set unhealthy message when it already exists",
},
{
name: "healthy repository should clear unhealthy message",
repo: &provisioning.Repository{
ObjectMeta: metav1.ObjectMeta{
Name: "test-repo",
Namespace: "default",
Generation: 1,
},
Spec: provisioning.RepositorySpec{
Sync: provisioning.SyncOptions{
Enabled: true,
IntervalSeconds: 300,
},
},
Status: provisioning.RepositoryStatus{
ObservedGeneration: 1,
Health: provisioning.HealthStatus{
Healthy: false,
Checked: time.Now().Add(-2 * time.Minute).UnixMilli(),
},
Sync: provisioning.SyncStatus{
State: provisioning.JobStateError,
Finished: time.Now().Add(-1 * time.Minute).UnixMilli(),
Message: []string{"Repository is unhealthy"},
},
},
},
healthStatus: provisioning.HealthStatus{
Healthy: true,
Checked: time.Now().UnixMilli(),
Message: []string{},
},
hasHealthStatusChanged: true,
expectedUnhealthyMessage: false,
description: "should clear unhealthy message when repository becomes healthy",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create controller
rc := &RepositoryController{}
// Determine sync status ops (this is a pure function, no mocks needed)
syncOps := rc.determineSyncStatusOps(tc.repo, nil, tc.healthStatus)
// Verify expectations
hasUnhealthyOp := false
hasClearUnhealthyOp := false
for _, op := range syncOps {
if path, ok := op["path"].(string); ok {
if path == "/status/sync/message" {
if messages, ok := op["value"].([]string); ok {
if len(messages) > 0 && messages[0] == "Repository is unhealthy" {
hasUnhealthyOp = true
} else if len(messages) == 0 {
hasClearUnhealthyOp = true
}
}
}
}
}
if tc.expectedUnhealthyMessage {
assert.True(t, hasUnhealthyOp, tc.description+": expected unhealthy message operation")
} else if len(tc.repo.Status.Sync.Message) > 0 && tc.healthStatus.Healthy {
assert.True(t, hasClearUnhealthyOp, tc.description+": expected clear unhealthy message operation")
}
})
}
}
func TestRepositoryController_shouldResync_StaleSyncStatus(t *testing.T) {
testCases := []struct {
name string

View File

@@ -1,9 +1,12 @@
package historian
import (
"context"
"github.com/grafana/grafana-app-sdk/app"
appsdkapiserver "github.com/grafana/grafana-app-sdk/k8s/apiserver"
"github.com/grafana/grafana-app-sdk/simple"
"k8s.io/apiserver/pkg/authorization/authorizer"
restclient "k8s.io/client-go/rest"
"github.com/grafana/grafana/apps/alerting/historian/pkg/apis"
@@ -23,6 +26,14 @@ type AlertingHistorianAppInstaller struct {
appsdkapiserver.AppInstaller
}
func (a *AlertingHistorianAppInstaller) GetAuthorizer() authorizer.Authorizer {
return authorizer.AuthorizerFunc(
func(ctx context.Context, a authorizer.Attributes) (authorizer.Decision, string, error) {
return authorizer.DecisionAllow, "", nil
},
)
}
func RegisterAppInstaller(
cfg *setting.Cfg,
ng *ngalert.AlertNG,

pkg/server/wire_gen.go generated
View File

@@ -672,10 +672,7 @@ func Initialize(ctx context.Context, cfg *setting.Cfg, opts Options, apiOpts api
starService := starimpl.ProvideService(sqlStore)
searchSearchService := search2.ProvideService(cfg, sqlStore, starService, dashboardService, folderimplService, featureToggles, sortService)
plugincontextProvider := plugincontext.ProvideService(cfg, cacheService, pluginstoreService, cacheServiceImpl, service15, service13, requestConfigProvider)
qsDatasourceClientBuilder := dsquerierclient.NewNullQSDatasourceClientBuilder()
exprService := expr.ProvideService(cfg, middlewareHandler, plugincontextProvider, featureToggles, registerer, tracingService, qsDatasourceClientBuilder)
queryServiceImpl := query.ProvideService(cfg, cacheServiceImpl, exprService, ossDataSourceRequestValidator, middlewareHandler, plugincontextProvider, qsDatasourceClientBuilder)
grafanaLive, err := live.ProvideService(plugincontextProvider, cfg, routeRegisterImpl, pluginstoreService, middlewareHandler, cacheService, cacheServiceImpl, sqlStore, secretsService, usageStats, queryServiceImpl, featureToggles, accessControl, dashboardService, orgService, eventualRestConfigProvider)
grafanaLive, err := live.ProvideService(plugincontextProvider, cfg, routeRegisterImpl, pluginstoreService, middlewareHandler, cacheService, cacheServiceImpl, secretsService, usageStats, featureToggles, accessControl, dashboardService, orgService, eventualRestConfigProvider)
if err != nil {
return nil, err
}
@@ -684,6 +681,8 @@ func Initialize(ctx context.Context, cfg *setting.Cfg, opts Options, apiOpts api
authnAuthenticator := authnimpl.ProvideAuthnServiceAuthenticateOnly(authnimplService)
contexthandlerContextHandler := contexthandler.ProvideService(cfg, authnAuthenticator, featureToggles)
logger := loggermw.Provide(cfg, featureToggles)
qsDatasourceClientBuilder := dsquerierclient.NewNullQSDatasourceClientBuilder()
exprService := expr.ProvideService(cfg, middlewareHandler, plugincontextProvider, featureToggles, registerer, tracingService, qsDatasourceClientBuilder)
ngAlert := metrics2.ProvideService()
repositoryImpl := annotationsimpl.ProvideService(sqlStore, cfg, featureToggles, tagimplService, tracingService, dBstore, dashboardService, registerer)
alertNG, err := ngalert.ProvideService(cfg, featureToggles, cacheServiceImpl, service15, routeRegisterImpl, sqlStore, kvStore, exprService, dataSourceProxyService, quotaService, secretsService, notificationService, ngAlert, folderimplService, accessControl, dashboardService, renderingService, inProcBus, acimplService, repositoryImpl, pluginstoreService, tracingService, dBstore, httpclientProvider, plugincontextProvider, receiverPermissionsService, userService)
@@ -708,6 +707,7 @@ func Initialize(ctx context.Context, cfg *setting.Cfg, opts Options, apiOpts api
}
ossSearchUserFilter := filters.ProvideOSSSearchUserFilter()
ossService := searchusers.ProvideUsersService(cfg, ossSearchUserFilter, userService)
queryServiceImpl := query.ProvideService(cfg, cacheServiceImpl, exprService, ossDataSourceRequestValidator, middlewareHandler, plugincontextProvider, qsDatasourceClientBuilder)
serviceAccountsProxy, err := proxy.ProvideServiceAccountsProxy(cfg, accessControl, acimplService, featureToggles, serviceAccountPermissionsService, serviceAccountsService, routeRegisterImpl)
if err != nil {
return nil, err
@@ -879,7 +879,7 @@ func Initialize(ctx context.Context, cfg *setting.Cfg, opts Options, apiOpts api
folderAPIBuilder := folders.RegisterAPIService(cfg, featureToggles, apiserverService, folderimplService, folderPermissionsService, accessControl, acimplService, accessClient, registerer, resourceClient, zanzanaClient)
storageBackendImpl := noopstorage.ProvideStorageBackend()
noopTeamGroupsREST := externalgroupmapping.ProvideNoopTeamGroupsREST()
identityAccessManagementAPIBuilder, err := iam.RegisterAPIService(cfg, featureToggles, apiserverService, ssosettingsimplService, sqlStore, accessControl, accessClient, zanzanaClient, registerer, storageBackendImpl, storageBackendImpl, tracingService, storageBackendImpl, storageBackendImpl, noopTeamGroupsREST, dualwriteService, resourceClient, userService, teamService)
identityAccessManagementAPIBuilder, err := iam.RegisterAPIService(cfg, featureToggles, apiserverService, ssosettingsimplService, sqlStore, accessControl, accessClient, zanzanaClient, registerer, storageBackendImpl, storageBackendImpl, tracingService, storageBackendImpl, storageBackendImpl, noopTeamGroupsREST, dualwriteService, resourceClient, orgService, userService, teamService, eventualRestConfigProvider)
if err != nil {
return nil, err
}
@@ -1329,10 +1329,7 @@ func InitializeForTest(ctx context.Context, t sqlutil.ITestDB, testingT interfac
starService := starimpl.ProvideService(sqlStore)
searchSearchService := search2.ProvideService(cfg, sqlStore, starService, dashboardService, folderimplService, featureToggles, sortService)
plugincontextProvider := plugincontext.ProvideService(cfg, cacheService, pluginstoreService, cacheServiceImpl, service15, service13, requestConfigProvider)
qsDatasourceClientBuilder := dsquerierclient.NewNullQSDatasourceClientBuilder()
exprService := expr.ProvideService(cfg, middlewareHandler, plugincontextProvider, featureToggles, registerer, tracingService, qsDatasourceClientBuilder)
queryServiceImpl := query.ProvideService(cfg, cacheServiceImpl, exprService, ossDataSourceRequestValidator, middlewareHandler, plugincontextProvider, qsDatasourceClientBuilder)
grafanaLive, err := live.ProvideService(plugincontextProvider, cfg, routeRegisterImpl, pluginstoreService, middlewareHandler, cacheService, cacheServiceImpl, sqlStore, secretsService, usageStats, queryServiceImpl, featureToggles, accessControl, dashboardService, orgService, eventualRestConfigProvider)
grafanaLive, err := live.ProvideService(plugincontextProvider, cfg, routeRegisterImpl, pluginstoreService, middlewareHandler, cacheService, cacheServiceImpl, secretsService, usageStats, featureToggles, accessControl, dashboardService, orgService, eventualRestConfigProvider)
if err != nil {
return nil, err
}
@@ -1341,6 +1338,8 @@ func InitializeForTest(ctx context.Context, t sqlutil.ITestDB, testingT interfac
authnAuthenticator := authnimpl.ProvideAuthnServiceAuthenticateOnly(authnimplService)
contexthandlerContextHandler := contexthandler.ProvideService(cfg, authnAuthenticator, featureToggles)
logger := loggermw.Provide(cfg, featureToggles)
qsDatasourceClientBuilder := dsquerierclient.NewNullQSDatasourceClientBuilder()
exprService := expr.ProvideService(cfg, middlewareHandler, plugincontextProvider, featureToggles, registerer, tracingService, qsDatasourceClientBuilder)
notificationServiceMock := notifications.MockNotificationService()
ngAlert := metrics2.ProvideServiceForTest()
repositoryImpl := annotationsimpl.ProvideService(sqlStore, cfg, featureToggles, tagimplService, tracingService, dBstore, dashboardService, registerer)
@@ -1366,6 +1365,7 @@ func InitializeForTest(ctx context.Context, t sqlutil.ITestDB, testingT interfac
}
ossSearchUserFilter := filters.ProvideOSSSearchUserFilter()
ossService := searchusers.ProvideUsersService(cfg, ossSearchUserFilter, userService)
queryServiceImpl := query.ProvideService(cfg, cacheServiceImpl, exprService, ossDataSourceRequestValidator, middlewareHandler, plugincontextProvider, qsDatasourceClientBuilder)
serviceAccountsProxy, err := proxy.ProvideServiceAccountsProxy(cfg, accessControl, acimplService, featureToggles, serviceAccountPermissionsService, serviceAccountsService, routeRegisterImpl)
if err != nil {
return nil, err
@@ -1537,7 +1537,7 @@ func InitializeForTest(ctx context.Context, t sqlutil.ITestDB, testingT interfac
folderAPIBuilder := folders.RegisterAPIService(cfg, featureToggles, apiserverService, folderimplService, folderPermissionsService, accessControl, acimplService, accessClient, registerer, resourceClient, zanzanaClient)
storageBackendImpl := noopstorage.ProvideStorageBackend()
noopTeamGroupsREST := externalgroupmapping.ProvideNoopTeamGroupsREST()
identityAccessManagementAPIBuilder, err := iam.RegisterAPIService(cfg, featureToggles, apiserverService, ssosettingsimplService, sqlStore, accessControl, accessClient, zanzanaClient, registerer, storageBackendImpl, storageBackendImpl, tracingService, storageBackendImpl, storageBackendImpl, noopTeamGroupsREST, dualwriteService, resourceClient, userService, teamService)
identityAccessManagementAPIBuilder, err := iam.RegisterAPIService(cfg, featureToggles, apiserverService, ssosettingsimplService, sqlStore, accessControl, accessClient, zanzanaClient, registerer, storageBackendImpl, storageBackendImpl, tracingService, storageBackendImpl, storageBackendImpl, noopTeamGroupsREST, dualwriteService, resourceClient, orgService, userService, teamService, eventualRestConfigProvider)
if err != nil {
return nil, err
}

View File

@@ -152,7 +152,7 @@ func ProvideStandaloneAuthZClient(
//nolint:staticcheck // not yet migrated to OpenFeature
zanzanaEnabled := features.IsEnabledGlobally(featuremgmt.FlagZanzana)
zanzanaClient, err := ProvideStandaloneZanzanaClient(cfg, features)
zanzanaClient, err := ProvideStandaloneZanzanaClient(cfg, features, reg)
if err != nil {
return nil, err
}

View File

@@ -4,16 +4,19 @@ import (
"context"
"errors"
"fmt"
"time"
"github.com/fullstorydev/grpchan/inprocgrpc"
authnlib "github.com/grafana/authlib/authn"
authzv1 "github.com/grafana/authlib/authz/proto/v1"
"github.com/grafana/authlib/grpcutils"
"github.com/grafana/authlib/types"
"github.com/grafana/dskit/middleware"
"github.com/grafana/dskit/services"
grpcAuth "github.com/grpc-ecosystem/go-grpc-middleware/v2/interceptors/auth"
openfgav1 "github.com/openfga/api/proto/openfga/v1"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
@@ -43,14 +46,14 @@ func ProvideZanzanaClient(cfg *setting.Cfg, db db.DB, tracer tracing.Tracer, fea
switch cfg.ZanzanaClient.Mode {
case setting.ZanzanaModeClient:
return NewRemoteZanzanaClient(
fmt.Sprintf("stacks-%s", cfg.StackID),
ZanzanaClientConfig{
URL: cfg.ZanzanaClient.Addr,
Token: cfg.ZanzanaClient.Token,
TokenExchangeURL: cfg.ZanzanaClient.TokenExchangeURL,
ServerCertFile: cfg.ZanzanaClient.ServerCertFile,
})
zanzanaConfig := ZanzanaClientConfig{
Addr: cfg.ZanzanaClient.Addr,
Token: cfg.ZanzanaClient.Token,
TokenExchangeURL: cfg.ZanzanaClient.TokenExchangeURL,
TokenNamespace: cfg.ZanzanaClient.TokenNamespace,
ServerCertFile: cfg.ZanzanaClient.ServerCertFile,
}
return NewRemoteZanzanaClient(zanzanaConfig, reg)
case setting.ZanzanaModeEmbedded:
logger := log.New("zanzana.server")
@@ -97,32 +100,33 @@ func ProvideZanzanaClient(cfg *setting.Cfg, db db.DB, tracer tracing.Tracer, fea
// ProvideStandaloneZanzanaClient provides a standalone Zanzana client, without registering the Zanzana service.
// Client connects to a remote Zanzana server specified in the configuration.
func ProvideStandaloneZanzanaClient(cfg *setting.Cfg, features featuremgmt.FeatureToggles) (zanzana.Client, error) {
func ProvideStandaloneZanzanaClient(cfg *setting.Cfg, features featuremgmt.FeatureToggles, reg prometheus.Registerer) (zanzana.Client, error) {
//nolint:staticcheck // not yet migrated to OpenFeature
if !features.IsEnabledGlobally(featuremgmt.FlagZanzana) {
return zClient.NewNoopClient(), nil
}
zanzanaConfig := ZanzanaClientConfig{
URL: cfg.ZanzanaClient.Addr,
Addr: cfg.ZanzanaClient.Addr,
Token: cfg.ZanzanaClient.Token,
TokenExchangeURL: cfg.ZanzanaClient.TokenExchangeURL,
TokenNamespace: cfg.ZanzanaClient.TokenNamespace,
ServerCertFile: cfg.ZanzanaClient.ServerCertFile,
}
return NewRemoteZanzanaClient(cfg.ZanzanaClient.TokenNamespace, zanzanaConfig)
return NewRemoteZanzanaClient(zanzanaConfig, reg)
}
type ZanzanaClientConfig struct {
URL string
Addr string
Token string
TokenExchangeURL string
ServerCertFile string
TokenNamespace string
ServerCertFile string
}
// NewRemoteZanzanaClient creates a new Zanzana client that connects to remote Zanzana server.
func NewRemoteZanzanaClient(namespace string, cfg ZanzanaClientConfig) (zanzana.Client, error) {
func NewRemoteZanzanaClient(cfg ZanzanaClientConfig, reg prometheus.Registerer) (zanzana.Client, error) {
tokenClient, err := authnlib.NewTokenExchangeClient(authnlib.TokenExchangeConfig{
Token: cfg.Token,
TokenExchangeURL: cfg.TokenExchangeURL,
@@ -139,18 +143,25 @@ func NewRemoteZanzanaClient(namespace string, cfg ZanzanaClientConfig) (zanzana.
}
}
authzRequestDuration := promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
Name: "authz_zanzana_client_request_duration_seconds",
Help: "Time spent executing requests to zanzana server.",
NativeHistogramBucketFactor: 1.1,
NativeHistogramMaxBucketNumber: 160,
NativeHistogramMinResetDuration: time.Hour,
}, []string{"operation", "status_code"})
unaryInterceptors, streamInterceptors := instrument(authzRequestDuration, middleware.ReportGRPCStatusOption)
dialOptions := []grpc.DialOption{
grpc.WithTransportCredentials(transportCredentials),
grpc.WithPerRPCCredentials(
NewGRPCTokenAuth(
AuthzServiceAudience,
namespace,
tokenClient,
),
NewGRPCTokenAuth(AuthzServiceAudience, cfg.TokenNamespace, tokenClient),
),
grpc.WithChainUnaryInterceptor(unaryInterceptors...),
grpc.WithChainStreamInterceptor(streamInterceptors...),
}
conn, err := grpc.NewClient(cfg.URL, dialOptions...)
conn, err := grpc.NewClient(cfg.Addr, dialOptions...)
if err != nil {
return nil, fmt.Errorf("failed to create zanzana client to remote server: %w", err)
}
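The hunk above moves the token namespace out of a positional argument and into the config struct, and renames `URL` to `Addr` (it is a gRPC dial target, not an HTTP URL). A minimal, self-contained sketch of that refactor pattern, with hypothetical names (`ClientConfig`, `NewClient`) standing in for the real Zanzana types:

```go
package main

import "fmt"

// ClientConfig is a hypothetical, simplified mirror of ZanzanaClientConfig:
// the namespace that used to be a positional argument now travels inside the
// config struct, so call sites pass one value and the parameter list stays stable.
type ClientConfig struct {
	Addr           string // renamed from URL: this is a gRPC target, not an HTTP URL
	TokenNamespace string
}

// NewClient stands in for NewRemoteZanzanaClient(cfg, reg): all connection
// details arrive via the struct rather than a mix of args and config.
func NewClient(cfg ClientConfig) string {
	return fmt.Sprintf("dialing %s as %s", cfg.Addr, cfg.TokenNamespace)
}

func main() {
	fmt.Println(NewClient(ClientConfig{Addr: "localhost:10000", TokenNamespace: "stacks-1"}))
}
```

The benefit shown in the diff is that adding `TokenNamespace` (and later the Prometheus registerer) did not force every caller to reorder positional arguments.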

View File

@@ -22,6 +22,7 @@ import (
fswebassets "github.com/grafana/grafana/pkg/services/frontend/webassets"
"github.com/grafana/grafana/pkg/services/hooks"
"github.com/grafana/grafana/pkg/services/licensing"
publicdashboardsapi "github.com/grafana/grafana/pkg/services/publicdashboards/api"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/web"
)
@@ -164,6 +165,11 @@ func (s *frontendService) registerRoutes(m *web.Mux) {
// uses cache busting to ensure requests aren't cached.
s.routeGet(m, "/-/fe-boot-error", s.handleBootError)
s.routeGet(m, "/public-dashboards/:accessToken",
publicdashboardsapi.SetPublicDashboardAccessToken,
s.index.HandleRequest,
)
// All other requests return index.html
s.routeGet(m, "/*", s.index.HandleRequest)
}

View File

@@ -45,6 +45,8 @@ type IndexViewData struct {
// Nonce is a cryptographic identifier for use with Content Security Policy.
Nonce string
PublicDashboardAccessToken string
}
// Templates setup.
@@ -138,9 +140,12 @@ func (p *IndexProvider) HandleRequest(writer http.ResponseWriter, request *http.
return
}
reqCtx := contexthandler.FromContext(ctx)
// TODO -- restructure so the static stuff is under one variable and the rest is dynamic
data := p.data // copy everything
data.Nonce = nonce
data.PublicDashboardAccessToken = reqCtx.PublicDashboardAccessToken
if data.CSPEnabled {
data.CSPContent = middleware.ReplacePolicyVariables(p.data.CSPContent, p.data.AppSubUrl, data.Nonce)
@@ -150,7 +155,6 @@ func (p *IndexProvider) HandleRequest(writer http.ResponseWriter, request *http.
writer.Header().Set("Content-Security-Policy-Report-Only", policy)
}
reqCtx := contexthandler.FromContext(ctx)
p.runIndexDataHooks(reqCtx, &data)
writer.Header().Set("Content-Type", "text/html; charset=UTF-8")
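The copy-then-mutate step above (`data := p.data`) is what lets per-request fields like the nonce and the public dashboard access token be set without racing on the shared template data. A small sketch of that pattern, using hypothetical simplified types (`IndexData`, `Provider`) rather than the real `IndexViewData`:

```go
package main

import "fmt"

// IndexData is a hypothetical stand-in for IndexViewData: mostly static
// fields plus a couple of per-request ones.
type IndexData struct {
	AppSubUrl                  string
	Nonce                      string
	PublicDashboardAccessToken string
}

type Provider struct {
	data IndexData // shared, built once at startup
}

// perRequest copies the struct by value, so mutating the copy never
// touches the shared p.data that concurrent requests also read.
func (p *Provider) perRequest(nonce, token string) IndexData {
	d := p.data // value copy: all static fields come along
	d.Nonce = nonce
	d.PublicDashboardAccessToken = token
	return d
}

func main() {
	p := &Provider{data: IndexData{AppSubUrl: "/grafana"}}
	d := p.perRequest("n1", "tok")
	fmt.Println(d.AppSubUrl, d.Nonce) // shared p.data.Nonce stays empty
}
```

Because `IndexViewData` contains no pointer-typed fields that are mutated per request, a plain value copy is sufficient; a struct holding maps or slices would need a deeper copy.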

View File

@@ -188,6 +188,7 @@
// Wrap in an IIFE to avoid polluting the global scope. Intentionally global-scope properties
// are explicitly assigned to the `window` object.
(() => {
const publicDashboardAccessToken = [[.PublicDashboardAccessToken]]
// Grafana can only fail to load once
// However, it can fail to load in multiple different places
// To avoid double reporting the error, we use this boolean to check if we've already failed
@@ -271,9 +272,15 @@
async function fetchBootData() {
const queryParams = new URLSearchParams(window.location.search);
let path = '/bootdata';
// call a special bootdata url with the public access token
// this is needed to set the access token and correct org for public dashboards on the ST backend
if (publicDashboardAccessToken) {
path += `/${publicDashboardAccessToken}`;
}
// pass the search params through to the bootdata request
// this allows for overriding the theme/language etc
const bootDataUrl = new URL('/bootdata', window.location.origin);
const bootDataUrl = new URL(path, window.location.origin);
for (const [key, value] of queryParams.entries()) {
bootDataUrl.searchParams.append(key, value);
}

View File

@@ -1,48 +0,0 @@
package database
import (
"fmt"
"time"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/infra/localcache"
"github.com/grafana/grafana/pkg/services/live/model"
)
type Storage struct {
store db.DB
cache *localcache.CacheService
}
func NewStorage(store db.DB, cache *localcache.CacheService) *Storage {
return &Storage{store: store, cache: cache}
}
func getLiveMessageCacheKey(orgID int64, channel string) string {
return fmt.Sprintf("live_message_%d_%s", orgID, channel)
}
func (s *Storage) SaveLiveMessage(query *model.SaveLiveMessageQuery) error {
// Come back to saving into database after evaluating database structure.
s.cache.Set(getLiveMessageCacheKey(query.OrgID, query.Channel), model.LiveMessage{
ID: 0, // Not used actually.
OrgID: query.OrgID,
Channel: query.Channel,
Data: query.Data,
Published: time.Now(),
}, 0)
return nil
}
func (s *Storage) GetLiveMessage(query *model.GetLiveMessageQuery) (model.LiveMessage, bool, error) {
// Come back to saving into database after evaluating database structure.
m, ok := s.cache.Get(getLiveMessageCacheKey(query.OrgID, query.Channel))
if !ok {
return model.LiveMessage{}, false, nil
}
msg, ok := m.(model.LiveMessage)
if !ok {
return model.LiveMessage{}, false, fmt.Errorf("unexpected live message type in cache: %T", m)
}
return msg, true, nil
}

View File

@@ -1,18 +0,0 @@
package tests
import (
"testing"
"time"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/infra/localcache"
"github.com/grafana/grafana/pkg/services/live/database"
)
// SetupTestStorage initializes a storage to be used by the integration tests.
// This is required to properly register and execute migrations.
func SetupTestStorage(t *testing.T) *database.Storage {
sqlStore := db.InitTestDB(t)
localCache := localcache.New(time.Hour, time.Hour)
return database.NewStorage(sqlStore, localCache)
}

View File

@@ -1,67 +0,0 @@
package tests
import (
"encoding/json"
"testing"
"github.com/stretchr/testify/require"
"github.com/grafana/grafana/pkg/services/live/model"
"github.com/grafana/grafana/pkg/tests/testsuite"
"github.com/grafana/grafana/pkg/util/testutil"
)
func TestMain(m *testing.M) {
testsuite.Run(m)
}
func TestIntegrationLiveMessage(t *testing.T) {
testutil.SkipIntegrationTestInShortMode(t)
storage := SetupTestStorage(t)
getQuery := &model.GetLiveMessageQuery{
OrgID: 1,
Channel: "test_channel",
}
_, ok, err := storage.GetLiveMessage(getQuery)
require.NoError(t, err)
require.False(t, ok)
saveQuery := &model.SaveLiveMessageQuery{
OrgID: 1,
Channel: "test_channel",
Data: []byte(`{}`),
}
err = storage.SaveLiveMessage(saveQuery)
require.NoError(t, err)
msg, ok, err := storage.GetLiveMessage(getQuery)
require.NoError(t, err)
require.True(t, ok)
require.Equal(t, int64(1), msg.OrgID)
require.Equal(t, "test_channel", msg.Channel)
require.Equal(t, json.RawMessage(`{}`), msg.Data)
require.NotZero(t, msg.Published)
// try saving again, should be replaced.
saveQuery2 := &model.SaveLiveMessageQuery{
OrgID: 1,
Channel: "test_channel",
Data: []byte(`{"input": "hello"}`),
}
err = storage.SaveLiveMessage(saveQuery2)
require.NoError(t, err)
getQuery2 := &model.GetLiveMessageQuery{
OrgID: 1,
Channel: "test_channel",
}
msg2, ok, err := storage.GetLiveMessage(getQuery2)
require.NoError(t, err)
require.True(t, ok)
require.Equal(t, int64(1), msg2.OrgID)
require.Equal(t, "test_channel", msg2.Channel)
require.Equal(t, json.RawMessage(`{"input": "hello"}`), msg2.Data)
require.NotZero(t, msg2.Published)
}

View File

@@ -1,70 +0,0 @@
package features
import (
"context"
"github.com/grafana/grafana-plugin-sdk-go/backend"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/services/live/model"
)
var (
logger = log.New("live.features") // scoped to all features?
)
//go:generate mockgen -destination=broadcast_mock.go -package=features github.com/grafana/grafana/pkg/services/live/features LiveMessageStore
type LiveMessageStore interface {
SaveLiveMessage(query *model.SaveLiveMessageQuery) error
GetLiveMessage(query *model.GetLiveMessageQuery) (model.LiveMessage, bool, error)
}
// BroadcastRunner will simply broadcast all events to `grafana/broadcast/*` channels
// This assumes that data is a JSON object
type BroadcastRunner struct {
liveMessageStore LiveMessageStore
}
func NewBroadcastRunner(liveMessageStore LiveMessageStore) *BroadcastRunner {
return &BroadcastRunner{liveMessageStore: liveMessageStore}
}
// GetHandlerForPath called on init
func (b *BroadcastRunner) GetHandlerForPath(_ string) (model.ChannelHandler, error) {
return b, nil // all dashboards share the same handler
}
// OnSubscribe will let anyone connect to the path
func (b *BroadcastRunner) OnSubscribe(_ context.Context, u identity.Requester, e model.SubscribeEvent) (model.SubscribeReply, backend.SubscribeStreamStatus, error) {
reply := model.SubscribeReply{
Presence: true,
JoinLeave: true,
}
query := &model.GetLiveMessageQuery{
OrgID: u.GetOrgID(),
Channel: e.Channel,
}
msg, ok, err := b.liveMessageStore.GetLiveMessage(query)
if err != nil {
return model.SubscribeReply{}, 0, err
}
if ok {
reply.Data = msg.Data
}
return reply, backend.SubscribeStreamStatusOK, nil
}
// OnPublish is called when a client wants to broadcast on the websocket
func (b *BroadcastRunner) OnPublish(_ context.Context, u identity.Requester, e model.PublishEvent) (model.PublishReply, backend.PublishStreamStatus, error) {
query := &model.SaveLiveMessageQuery{
OrgID: u.GetOrgID(),
Channel: e.Channel,
Data: e.Data,
}
if err := b.liveMessageStore.SaveLiveMessage(query); err != nil {
return model.PublishReply{}, 0, err
}
return model.PublishReply{Data: e.Data}, backend.PublishStreamStatusOK, nil
}

View File

@@ -1,66 +0,0 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: github.com/grafana/grafana/pkg/services/live/features (interfaces: LiveMessageStore)
// Package features is a generated GoMock package.
package features
import (
reflect "reflect"
gomock "github.com/golang/mock/gomock"
model "github.com/grafana/grafana/pkg/services/live/model"
)
// MockLiveMessageStore is a mock of LiveMessageStore interface.
type MockLiveMessageStore struct {
ctrl *gomock.Controller
recorder *MockLiveMessageStoreMockRecorder
}
// MockLiveMessageStoreMockRecorder is the mock recorder for MockLiveMessageStore.
type MockLiveMessageStoreMockRecorder struct {
mock *MockLiveMessageStore
}
// NewMockLiveMessageStore creates a new mock instance.
func NewMockLiveMessageStore(ctrl *gomock.Controller) *MockLiveMessageStore {
mock := &MockLiveMessageStore{ctrl: ctrl}
mock.recorder = &MockLiveMessageStoreMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockLiveMessageStore) EXPECT() *MockLiveMessageStoreMockRecorder {
return m.recorder
}
// GetLiveMessage mocks base method.
func (m *MockLiveMessageStore) GetLiveMessage(arg0 *model.GetLiveMessageQuery) (model.LiveMessage, bool, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetLiveMessage", arg0)
ret0, _ := ret[0].(model.LiveMessage)
ret1, _ := ret[1].(bool)
ret2, _ := ret[2].(error)
return ret0, ret1, ret2
}
// GetLiveMessage indicates an expected call of GetLiveMessage.
func (mr *MockLiveMessageStoreMockRecorder) GetLiveMessage(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetLiveMessage", reflect.TypeOf((*MockLiveMessageStore)(nil).GetLiveMessage), arg0)
}
// SaveLiveMessage mocks base method.
func (m *MockLiveMessageStore) SaveLiveMessage(arg0 *model.SaveLiveMessageQuery) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "SaveLiveMessage", arg0)
ret0, _ := ret[0].(error)
return ret0
}
// SaveLiveMessage indicates an expected call of SaveLiveMessage.
func (mr *MockLiveMessageStoreMockRecorder) SaveLiveMessage(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SaveLiveMessage", reflect.TypeOf((*MockLiveMessageStore)(nil).SaveLiveMessage), arg0)
}

View File

@@ -1,87 +0,0 @@
package features
import (
"context"
"encoding/json"
"testing"
"github.com/golang/mock/gomock"
"github.com/grafana/grafana-plugin-sdk-go/backend"
"github.com/stretchr/testify/require"
"github.com/grafana/grafana/pkg/services/live/model"
"github.com/grafana/grafana/pkg/services/user"
)
func TestNewBroadcastRunner(t *testing.T) {
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
d := NewMockLiveMessageStore(mockCtrl)
br := NewBroadcastRunner(d)
require.NotNil(t, br)
}
func TestBroadcastRunner_OnSubscribe(t *testing.T) {
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
mockDispatcher := NewMockLiveMessageStore(mockCtrl)
channel := "stream/channel/test"
data := json.RawMessage(`{}`)
mockDispatcher.EXPECT().GetLiveMessage(&model.GetLiveMessageQuery{
OrgID: 1,
Channel: channel,
}).DoAndReturn(func(query *model.GetLiveMessageQuery) (model.LiveMessage, bool, error) {
return model.LiveMessage{
Data: data,
}, true, nil
}).Times(1)
br := NewBroadcastRunner(mockDispatcher)
require.NotNil(t, br)
handler, err := br.GetHandlerForPath("test")
require.NoError(t, err)
reply, status, err := handler.OnSubscribe(
context.Background(),
&user.SignedInUser{OrgID: 1, UserID: 2},
model.SubscribeEvent{Channel: channel, Path: "test"},
)
require.NoError(t, err)
require.Equal(t, backend.SubscribeStreamStatusOK, status)
require.Equal(t, data, reply.Data)
require.True(t, reply.Presence)
require.True(t, reply.JoinLeave)
require.False(t, reply.Recover)
}
func TestBroadcastRunner_OnPublish(t *testing.T) {
mockCtrl := gomock.NewController(t)
defer mockCtrl.Finish()
mockDispatcher := NewMockLiveMessageStore(mockCtrl)
channel := "stream/channel/test"
data := json.RawMessage(`{}`)
var orgID int64 = 1
mockDispatcher.EXPECT().SaveLiveMessage(&model.SaveLiveMessageQuery{
OrgID: orgID,
Channel: channel,
Data: data,
}).DoAndReturn(func(query *model.SaveLiveMessageQuery) error {
return nil
}).Times(1)
br := NewBroadcastRunner(mockDispatcher)
require.NotNil(t, br)
handler, err := br.GetHandlerForPath("test")
require.NoError(t, err)
reply, status, err := handler.OnPublish(
context.Background(),
&user.SignedInUser{OrgID: 1, UserID: 2},
model.PublishEvent{Channel: channel, Path: "test", Data: data},
)
require.NoError(t, err)
require.Equal(t, backend.PublishStreamStatusOK, status)
require.Equal(t, data, reply.Data)
}

View File

@@ -7,9 +7,8 @@ import (
"strings"
"github.com/grafana/grafana-plugin-sdk-go/backend"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/cmd/grafana-cli/logger"
"github.com/grafana/grafana/pkg/services/accesscontrol"
"github.com/grafana/grafana/pkg/services/dashboards"
"github.com/grafana/grafana/pkg/services/live/model"
@@ -35,7 +34,6 @@ type dashboardEvent struct {
type DashboardHandler struct {
Publisher model.ChannelPublisher
ClientCount model.ChannelClientCount
Store db.DB
DashboardService dashboards.DashboardService
AccessControl accesscontrol.AccessControl
}

View File

@@ -5,9 +5,11 @@ import (
"errors"
"github.com/centrifugal/centrifuge"
"github.com/grafana/grafana-plugin-sdk-go/backend"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/cmd/grafana-cli/logger"
"github.com/grafana/grafana/pkg/plugins"
"github.com/grafana/grafana/pkg/services/live/model"
"github.com/grafana/grafana/pkg/services/live/orgchannel"

View File

@@ -15,7 +15,6 @@ import (
"github.com/centrifugal/centrifuge"
"github.com/gobwas/glob"
jsoniter "github.com/json-iterator/go"
"github.com/redis/go-redis/v9"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
@@ -25,12 +24,9 @@ import (
"github.com/grafana/grafana-plugin-sdk-go/backend"
"github.com/grafana/grafana-plugin-sdk-go/live"
"github.com/grafana/grafana/pkg/api/dtos"
"github.com/grafana/grafana/pkg/api/response"
"github.com/grafana/grafana/pkg/api/routing"
"github.com/grafana/grafana/pkg/apimachinery/errutil"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/infra/localcache"
"github.com/grafana/grafana/pkg/infra/log"
"github.com/grafana/grafana/pkg/infra/usagestats"
@@ -43,7 +39,6 @@ import (
"github.com/grafana/grafana/pkg/services/dashboards"
"github.com/grafana/grafana/pkg/services/datasources"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/services/live/database"
"github.com/grafana/grafana/pkg/services/live/features"
"github.com/grafana/grafana/pkg/services/live/livecontext"
"github.com/grafana/grafana/pkg/services/live/liveplugin"
@@ -57,7 +52,6 @@ import (
"github.com/grafana/grafana/pkg/services/org"
"github.com/grafana/grafana/pkg/services/pluginsintegration/plugincontext"
"github.com/grafana/grafana/pkg/services/pluginsintegration/pluginstore"
"github.com/grafana/grafana/pkg/services/query"
"github.com/grafana/grafana/pkg/services/secrets"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/util"
@@ -80,8 +74,8 @@ type CoreGrafanaScope struct {
func ProvideService(plugCtxProvider *plugincontext.Provider, cfg *setting.Cfg, routeRegister routing.RouteRegister,
pluginStore pluginstore.Store, pluginClient plugins.Client, cacheService *localcache.CacheService,
dataSourceCache datasources.CacheService, sqlStore db.DB, secretsService secrets.Service,
usageStatsService usagestats.Service, queryDataService query.Service, toggles featuremgmt.FeatureToggles,
dataSourceCache datasources.CacheService, secretsService secrets.Service,
usageStatsService usagestats.Service, toggles featuremgmt.FeatureToggles,
accessControl accesscontrol.AccessControl, dashboardService dashboards.DashboardService,
orgService org.Service, configProvider apiserver.RestConfigProvider) (*GrafanaLive, error) {
g := &GrafanaLive{
@@ -93,9 +87,7 @@ func ProvideService(plugCtxProvider *plugincontext.Provider, cfg *setting.Cfg, r
pluginClient: pluginClient,
CacheService: cacheService,
DataSourceCache: dataSourceCache,
SQLStore: sqlStore,
SecretsService: secretsService,
queryDataService: queryDataService,
channels: make(map[string]model.ChannelHandler),
GrafanaScope: CoreGrafanaScope{
Features: make(map[string]model.ChannelHandlerFactory),
@@ -186,14 +178,11 @@ func ProvideService(plugCtxProvider *plugincontext.Provider, cfg *setting.Cfg, r
dash := &features.DashboardHandler{
Publisher: g.Publish,
ClientCount: g.ClientCount,
Store: sqlStore,
DashboardService: dashboardService,
AccessControl: accessControl,
}
g.storage = database.NewStorage(g.SQLStore, g.CacheService)
g.GrafanaScope.Dashboards = dash
g.GrafanaScope.Features["dashboard"] = dash
g.GrafanaScope.Features["broadcast"] = features.NewBroadcastRunner(g.storage)
// Testing watch with just the provisioning support -- this will be removed when it is well validated
//nolint:staticcheck // not yet migrated to OpenFeature
@@ -388,14 +377,14 @@ func ProvideService(plugCtxProvider *plugincontext.Provider, cfg *setting.Cfg, r
UserID: strconv.FormatInt(id, 10),
}
newCtx := centrifuge.SetCredentials(ctx.Req.Context(), cred)
newCtx = livecontext.SetContextSignedUser(newCtx, user)
newCtx = identity.WithRequester(newCtx, user)
r := ctx.Req.WithContext(newCtx)
wsHandler.ServeHTTP(ctx.Resp, r)
}
g.pushWebsocketHandler = func(ctx *contextmodel.ReqContext) {
user := ctx.SignedInUser
newCtx := livecontext.SetContextSignedUser(ctx.Req.Context(), user)
newCtx := identity.WithRequester(ctx.Req.Context(), user)
newCtx = livecontext.SetContextStreamID(newCtx, web.Params(ctx.Req)[":streamId"])
r := ctx.Req.WithContext(newCtx)
pushWSHandler.ServeHTTP(ctx.Resp, r)
@@ -403,7 +392,7 @@ func ProvideService(plugCtxProvider *plugincontext.Provider, cfg *setting.Cfg, r
g.pushPipelineWebsocketHandler = func(ctx *contextmodel.ReqContext) {
user := ctx.SignedInUser
newCtx := livecontext.SetContextSignedUser(ctx.Req.Context(), user)
newCtx := identity.WithRequester(ctx.Req.Context(), user)
newCtx = livecontext.SetContextChannelID(newCtx, web.Params(ctx.Req)["*"])
r := ctx.Req.WithContext(newCtx)
pushPipelineWSHandler.ServeHTTP(ctx.Resp, r)
@@ -475,14 +464,12 @@ type GrafanaLive struct {
RouteRegister routing.RouteRegister
CacheService *localcache.CacheService
DataSourceCache datasources.CacheService
SQLStore db.DB
SecretsService secrets.Service
pluginStore pluginstore.Store
pluginClient plugins.Client
queryDataService query.Service
orgService org.Service
keyPrefix string
keyPrefix string // HA prefix for grafana cloud (since the org is always 1)
node *centrifuge.Node
surveyCaller *survey.Caller
@@ -505,7 +492,6 @@ type GrafanaLive struct {
contextGetter *liveplugin.ContextGetter
runStreamManager *runstream.Manager
storage *database.Storage
usageStatsService usagestats.Service
usageStats usageStats
@@ -673,18 +659,13 @@ func (g *GrafanaLive) HandleDatasourceUpdate(orgID int64, dsUID string) {
}
}
// Use a configuration that's compatible with the standard library
// to minimize the risk of introducing bugs. This will make sure
// that map keys are ordered.
var jsonStd = jsoniter.ConfigCompatibleWithStandardLibrary
func (g *GrafanaLive) handleOnRPC(clientContextWithSpan context.Context, client *centrifuge.Client, e centrifuge.RPCEvent) (centrifuge.RPCReply, error) {
logger.Debug("Client calls RPC", "user", client.UserID(), "client", client.ID(), "method", e.Method)
if e.Method != "grafana.query" {
return centrifuge.RPCReply{}, centrifuge.ErrorMethodNotFound
}
user, ok := livecontext.GetContextSignedUser(clientContextWithSpan)
if !ok {
user, err := identity.GetRequester(clientContextWithSpan)
if err != nil {
logger.Error("No user found in context", "user", client.UserID(), "client", client.ID(), "method", e.Method)
return centrifuge.RPCReply{}, centrifuge.ErrorInternal
}
@@ -694,38 +675,15 @@ func (g *GrafanaLive) handleOnRPC(clientContextWithSpan context.Context, client
return centrifuge.RPCReply{}, centrifuge.ErrorExpired
}
var req dtos.MetricRequest
err := json.Unmarshal(e.Data, &req)
if err != nil {
return centrifuge.RPCReply{}, centrifuge.ErrorBadRequest
}
resp, err := g.queryDataService.QueryData(clientContextWithSpan, user, false, req)
if err != nil {
logger.Error("Error query data", "user", client.UserID(), "client", client.ID(), "method", e.Method, "error", err)
if errors.Is(err, datasources.ErrDataSourceAccessDenied) {
return centrifuge.RPCReply{}, &centrifuge.Error{Code: uint32(http.StatusForbidden), Message: http.StatusText(http.StatusForbidden)}
}
var gfErr errutil.Error
if errors.As(err, &gfErr) && gfErr.Reason.Status() == errutil.StatusBadRequest {
return centrifuge.RPCReply{}, &centrifuge.Error{Code: uint32(http.StatusBadRequest), Message: http.StatusText(http.StatusBadRequest)}
}
return centrifuge.RPCReply{}, centrifuge.ErrorInternal
}
data, err := jsonStd.Marshal(resp)
if err != nil {
logger.Error("Error marshaling query response", "user", client.UserID(), "client", client.ID(), "method", e.Method, "error", err)
return centrifuge.RPCReply{}, centrifuge.ErrorInternal
}
return centrifuge.RPCReply{
Data: data,
}, nil
// RPC events not available
return centrifuge.RPCReply{}, centrifuge.ErrorNotAvailable
}
func (g *GrafanaLive) handleOnSubscribe(clientContextWithSpan context.Context, client *centrifuge.Client, e centrifuge.SubscribeEvent) (centrifuge.SubscribeReply, error) {
logger.Debug("Client wants to subscribe", "user", client.UserID(), "client", client.ID(), "channel", e.Channel)
user, ok := livecontext.GetContextSignedUser(clientContextWithSpan)
if !ok {
user, err := identity.GetRequester(clientContextWithSpan)
if err != nil {
logger.Error("No user found in context", "user", client.UserID(), "client", client.ID(), "channel", e.Channel)
return centrifuge.SubscribeReply{}, centrifuge.ErrorInternal
}
@@ -830,8 +788,8 @@ func (g *GrafanaLive) handleOnSubscribe(clientContextWithSpan context.Context, c
func (g *GrafanaLive) handleOnPublish(clientCtxWithSpan context.Context, client *centrifuge.Client, e centrifuge.PublishEvent) (centrifuge.PublishReply, error) {
logger.Debug("Client wants to publish", "user", client.UserID(), "client", client.ID(), "channel", e.Channel)
user, ok := livecontext.GetContextSignedUser(clientCtxWithSpan)
if !ok {
user, err := identity.GetRequester(clientCtxWithSpan)
if err != nil {
logger.Error("No user found in context", "user", client.UserID(), "client", client.ID(), "channel", e.Channel)
return centrifuge.PublishReply{}, centrifuge.ErrorInternal
}
@@ -1083,7 +1041,7 @@ func (g *GrafanaLive) ClientCount(orgID int64, channel string) (int, error) {
}
func (g *GrafanaLive) HandleHTTPPublish(ctx *contextmodel.ReqContext) response.Response {
cmd := dtos.LivePublishCmd{}
cmd := model.LivePublishCmd{}
if err := web.Bind(ctx.Req, &cmd); err != nil {
return response.Error(http.StatusBadRequest, "bad request data", err)
}
@@ -1122,7 +1080,7 @@ func (g *GrafanaLive) HandleHTTPPublish(ctx *contextmodel.ReqContext) response.R
logger.Error("Error processing input", "user", user, "channel", channel, "error", err)
return response.Error(http.StatusInternalServerError, http.StatusText(http.StatusInternalServerError), nil)
}
return response.JSON(http.StatusOK, dtos.LivePublishResponse{})
return response.JSON(http.StatusOK, model.LivePublishResponse{})
}
}
@@ -1150,7 +1108,7 @@ func (g *GrafanaLive) HandleHTTPPublish(ctx *contextmodel.ReqContext) response.R
}
}
logger.Debug("Publication successful", "identity", ctx.GetID(), "channel", cmd.Channel)
return response.JSON(http.StatusOK, dtos.LivePublishResponse{})
return response.JSON(http.StatusOK, model.LivePublishResponse{})
}
type streamChannelListResponse struct {

View File

@@ -11,20 +11,17 @@ import (
"testing"
"time"
"github.com/centrifugal/centrifuge"
"github.com/go-jose/go-jose/v4"
"github.com/go-jose/go-jose/v4/jwt"
"github.com/stretchr/testify/require"
"github.com/grafana/grafana/pkg/api/routing"
"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/infra/db"
"github.com/grafana/grafana/pkg/infra/usagestats"
"github.com/grafana/grafana/pkg/services/accesscontrol/acimpl"
"github.com/grafana/grafana/pkg/services/dashboards"
"github.com/grafana/grafana/pkg/services/featuremgmt"
"github.com/grafana/grafana/pkg/services/live/livecontext"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/tests/testsuite"
"github.com/grafana/grafana/pkg/util/testutil"
@@ -245,7 +242,7 @@ func Test_handleOnPublish_IDTokenExpiration(t *testing.T) {
t.Run("expired token", func(t *testing.T) {
expiration := time.Now().Add(-time.Hour)
token := createToken(t, &expiration)
-	ctx := livecontext.SetContextSignedUser(context.Background(), &identity.StaticRequester{IDToken: token})
+	ctx := identity.WithRequester(context.Background(), &identity.StaticRequester{IDToken: token})
reply, err := g.handleOnPublish(ctx, client, centrifuge.PublishEvent{
Channel: "test",
Data: []byte("test"),
@@ -257,7 +254,7 @@ func Test_handleOnPublish_IDTokenExpiration(t *testing.T) {
t.Run("unexpired token", func(t *testing.T) {
expiration := time.Now().Add(time.Hour)
token := createToken(t, &expiration)
-	ctx := livecontext.SetContextSignedUser(context.Background(), &identity.StaticRequester{IDToken: token})
+	ctx := identity.WithRequester(context.Background(), &identity.StaticRequester{IDToken: token})
reply, err := g.handleOnPublish(ctx, client, centrifuge.PublishEvent{
Channel: "test",
Data: []byte("test"),
@@ -280,7 +277,7 @@ func Test_handleOnRPC_IDTokenExpiration(t *testing.T) {
t.Run("expired token", func(t *testing.T) {
expiration := time.Now().Add(-time.Hour)
token := createToken(t, &expiration)
-	ctx := livecontext.SetContextSignedUser(context.Background(), &identity.StaticRequester{IDToken: token})
+	ctx := identity.WithRequester(context.Background(), &identity.StaticRequester{IDToken: token})
reply, err := g.handleOnRPC(ctx, client, centrifuge.RPCEvent{
Method: "grafana.query",
Data: []byte("test"),
@@ -292,7 +289,7 @@ func Test_handleOnRPC_IDTokenExpiration(t *testing.T) {
t.Run("unexpired token", func(t *testing.T) {
expiration := time.Now().Add(time.Hour)
token := createToken(t, &expiration)
-	ctx := livecontext.SetContextSignedUser(context.Background(), &identity.StaticRequester{IDToken: token})
+	ctx := identity.WithRequester(context.Background(), &identity.StaticRequester{IDToken: token})
reply, err := g.handleOnRPC(ctx, client, centrifuge.RPCEvent{
Method: "grafana.query",
Data: []byte("test"),
@@ -315,7 +312,7 @@ func Test_handleOnSubscribe_IDTokenExpiration(t *testing.T) {
t.Run("expired token", func(t *testing.T) {
expiration := time.Now().Add(-time.Hour)
token := createToken(t, &expiration)
-	ctx := livecontext.SetContextSignedUser(context.Background(), &identity.StaticRequester{IDToken: token})
+	ctx := identity.WithRequester(context.Background(), &identity.StaticRequester{IDToken: token})
reply, err := g.handleOnSubscribe(ctx, client, centrifuge.SubscribeEvent{
Channel: "test",
})
@@ -326,7 +323,7 @@ func Test_handleOnSubscribe_IDTokenExpiration(t *testing.T) {
t.Run("unexpired token", func(t *testing.T) {
expiration := time.Now().Add(time.Hour)
token := createToken(t, &expiration)
-	ctx := livecontext.SetContextSignedUser(context.Background(), &identity.StaticRequester{IDToken: token})
+	ctx := identity.WithRequester(context.Background(), &identity.StaticRequester{IDToken: token})
reply, err := g.handleOnSubscribe(ctx, client, centrifuge.SubscribeEvent{
Channel: "test",
})
@@ -347,10 +344,8 @@ func setupLiveService(cfg *setting.Cfg, t *testing.T) (*GrafanaLive, error) {
cfg,
routing.NewRouteRegister(),
nil, nil, nil, nil,
db.InitTestDB(t),
nil,
&usagestats.UsageStatsMock{T: t},
nil,
featuremgmt.WithFeatures(),
acimpl.ProvideAccessControl(featuremgmt.WithFeatures()),
&dashboards.FakeDashboardService{},
@@ -361,7 +356,12 @@ type dummyTransport struct {
name string
}
var (
_ centrifuge.Transport = (*dummyTransport)(nil)
)
func (t *dummyTransport) Name() string { return t.name }
func (t *dummyTransport) AcceptProtocol() string { return "" }
func (t *dummyTransport) Protocol() centrifuge.ProtocolType { return centrifuge.ProtocolTypeJSON }
func (t *dummyTransport) ProtocolVersion() centrifuge.ProtocolVersion {
return centrifuge.ProtocolVersion2

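The `var ( _ centrifuge.Transport = (*dummyTransport)(nil) )` line in the test file above is Go's compile-time interface assertion idiom: assigning a typed nil pointer to a blank interface-typed variable makes the build fail if the type ever stops satisfying the interface. A self-contained sketch, with a one-method local `Transport` interface standing in for centrifuge's (the real one has more methods):

```go
package main

import "fmt"

// Minimal stand-in for the interface the test fake must satisfy.
type Transport interface {
	Name() string
}

type dummyTransport struct{ name string }

func (t *dummyTransport) Name() string { return t.name }

// Compile-time assertion: *dummyTransport implements Transport.
// Costs nothing at runtime; removing Name() above breaks compilation here.
var _ Transport = (*dummyTransport)(nil)

func main() {
	var tr Transport = &dummyTransport{name: "ws"}
	fmt.Println(tr.Name()) // prints "ws"
}
```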

@@ -2,27 +2,8 @@ package livecontext
import (
"context"
"github.com/grafana/grafana/pkg/apimachinery/identity"
)
type signedUserContextKeyType int
var signedUserContextKey signedUserContextKeyType
func SetContextSignedUser(ctx context.Context, user identity.Requester) context.Context {
ctx = context.WithValue(ctx, signedUserContextKey, user)
return ctx
}
func GetContextSignedUser(ctx context.Context) (identity.Requester, bool) {
if val := ctx.Value(signedUserContextKey); val != nil {
user, ok := val.(identity.Requester)
return user, ok
}
return nil, false
}
type streamIDContextKey struct{}
func SetContextStreamID(ctx context.Context, streamID string) context.Context {


@@ -67,21 +67,9 @@ type ChannelHandlerFactory interface {
GetHandlerForPath(path string) (ChannelHandler, error)
}
type LiveMessage struct {
ID int64 `xorm:"pk autoincr 'id'"`
OrgID int64 `xorm:"org_id"`
Channel string
Data json.RawMessage
Published time.Time
type LivePublishCmd struct {
Channel string `json:"channel"`
Data json.RawMessage `json:"data,omitempty"`
}
type SaveLiveMessageQuery struct {
OrgID int64 `xorm:"org_id"`
Channel string
Data json.RawMessage
}
type GetLiveMessageQuery struct {
OrgID int64 `xorm:"org_id"`
Channel string
}
type LivePublishResponse struct{}


@@ -6,7 +6,7 @@ import (
"github.com/grafana/grafana-plugin-sdk-go/backend"
-	"github.com/grafana/grafana/pkg/services/live/livecontext"
+	"github.com/grafana/grafana/pkg/apimachinery/identity"
"github.com/grafana/grafana/pkg/services/live/model"
)
@@ -25,8 +25,8 @@ func (s *BuiltinDataOutput) Type() string {
}
func (s *BuiltinDataOutput) OutputData(ctx context.Context, vars Vars, data []byte) ([]*ChannelData, error) {
-	u, ok := livecontext.GetContextSignedUser(ctx)
-	if !ok {
+	u, err := identity.GetRequester(ctx)
+	if err != nil {
return nil, errors.New("user not found in context")
}
handler, _, err := s.channelHandlerGetter.GetChannelHandler(ctx, u, vars.Channel)

Some files were not shown because too many files have changed in this diff.